Joint Wireless Resource and Computation Offloading Optimization for Energy Efficient Internet of Vehicles

The Internet of Vehicles (IoV) is an emerging paradigm, which is expected to be an integral component of beyond-fifth-generation and sixth-generation mobile networks. However, the processing requirements and strict delay constraints of IoV applications pose a challenge to vehicle processing units. To this end, multi-access edge computing (MEC) can leverage the availability of computing resources at the edge of the network to meet the intensive computation demands. Nevertheless, the optimal allocation of computing resources is challenging due to the various parameters, such as the number of vehicles, the available resources, and the particular requirements of each task. In this work, we consider a network consisting of multiple vehicles connected to MEC-enabled roadside units (RSUs) and propose an approach that minimizes the total energy consumption of the system by jointly optimizing the task offloading decision, the allocation of power and bandwidth, and the assignment of tasks to MEC-enabled RSUs. Due to the complexity of the original problem, we decouple it into subproblems and leverage the block coordinate descent method to iteratively optimize them. Finally, the numerical results demonstrate that the proposed solution can effectively minimize total energy consumption for various numbers of vehicles and MEC nodes while maintaining a low outage probability.



I. INTRODUCTION
THE SIXTH-GENERATION (6G) of mobile networks aims to integrate the advances in wireless communication technologies to deliver enhanced performance compared to fifth-generation (5G) mobile networks and realize new applications and services [1], requiring increased computing capabilities and low computing latency. The Internet of Vehicles (IoV) is an emerging paradigm derived from the concept of the Internet of Things and features great potential in the Beyond 5G/6G era [2]-[5]. The IoV paradigm aims to deliver an intelligent and efficient transportation system able to support applications such as autonomous driving, traffic prediction, and road security and safety [6]. Such applications often have strict delay constraints and require intensive computations [7]. Although the computing capabilities of vehicles are higher than those of conventional mobile devices, the complex processing requirements and strict delay constraints of IoV applications pose a challenge to vehicle processing units. In addition, an individual vehicle's available computing resources may not be able to meet the aforementioned requirements and constraints.
Multi-access edge computing (MEC), formerly known as mobile edge computing, can leverage the availability of computing resources located at the edge of the network to efficiently realize computing resource sharing, in order to meet the intensive computing demands posed by IoV applications. In this direction, a device can offload a task to a MEC-enabled small cell (SC), where sufficient computation resources exist. Nevertheless, the orchestration of resource sharing among various devices and SCs is challenging due to the heterogeneity of the resources and the time-varying topology of vehicular networks. Furthermore, the dense deployment of MEC-enabled SCs will result in higher total energy consumption. Consequently, minimizing the total energy consumption while taking into account the quality of service (QoS) requirements of the application is challenging [8], [9].

A. Related Works
In this direction, research efforts are being focused on exploiting the ample computing resources of the edge nodes by offloading the tasks of mobile devices or vehicles. In more detail, the minimization of the task processing time is the focus of the research works in [10]-[19]. The authors in [10] developed a task offloading optimization approach that aims to minimize task computation delay and energy consumption. Xu et al. [11] investigated an offloading system, where the QoS depends on the task response time, and developed a deep reinforcement learning approach that minimizes the response time. Zhao et al. [12] considered the partial offloading of vehicle tasks to multiple smart devices, such as drones and edge nodes, and minimized the execution time of a task taking into account the energy consumption and the rental rate of the smart device. In [13], the authors combined reinforcement learning and heuristic algorithms to optimize the allocation of user applications to vehicular computation resources. Moreover, Luo et al. [14] designed a dynamic programming-based algorithm that minimizes the processing latency of tasks in heterogeneous MEC environments. In [15], the authors leveraged reinforcement learning to develop a mobile offloading method aiming to minimize the cost of task migration under energy constraints. The authors of [16] proposed a federated learning-based offloading scheme for minimizing the total latency in vehicular environments, where each task can be divided into three parts so it can be respectively processed locally, offloaded to another vehicle, or offloaded to a MEC node. Yadav et al. [17] developed an algorithm to minimize the task latency by optimally selecting the tasks to be offloaded to the MEC nodes.
In [18], the authors formulated the minimization of task latency by jointly optimizing the offloading decision, as well as the wireless and computing resource allocation in satellite-assisted vehicle-to-vehicle communications. The authors of [19] utilized the particle swarm optimization algorithm to minimize the processing time of each task by offloading portions of the task to multiple vehicles.
Alternatively, the maximization of the throughput is the focus of the research works presented in [20]-[22]. The authors in [20] investigated the resource allocation in networks consisting of unmanned aerial vehicles and formulated a mixed-integer non-linear problem, aiming to maximize the average throughput while satisfying energy constraints. In addition, Ning et al. [21] leveraged non-orthogonal multiple access and MEC technologies to develop a method that maximizes the link throughput by optimizing power allocation, subchannel assignment, and task assignment. Furthermore, Lu et al. [22] considered a network where two unmanned aerial vehicles (UAVs) provide wireless power transfer to two ground devices and developed a solution based on successive convex programming in order to maximize the sum average transmission rate.
The approaches presented in [23]-[29] are focused on maximizing the system utility. Specifically, Dai et al. [23] proposed a low-complexity algorithm to jointly optimize the offloading decision and resource allocation toward maximizing the system utility. In [24], the authors proposed a vehicle-assisted offloading scheme that aims to maximize the long-term utility rate of a vehicular network using a reinforcement learning method. The authors in [25] addressed the maximization of the system offloading utility rate taking into account the task execution order and the available computing resources. The authors of [26] designed a collaborative resource allocation and offloading decision optimization scheme for maximizing the utility rate of the system. In [27], Zhang et al. adopted a deep Q-learning method for optimizing the offloading decision and the data uploading method (i.e., vehicle-to-vehicle, vehicle-to-base-station) with the aim of maximizing the system utility rate. The authors in [28] presented a joint resource allocation and task scheduling scheme for maximizing the system's utility by formulating the corresponding optimization problem as a Stackelberg game. Xu et al. [29] leveraged a multi-objective evolutionary algorithm based on decomposition to minimize the task processing latency and maximize the utilization of the system resources.
Finally, the research works in [30]-[40] are focused on minimizing the system energy consumption through the optimal allocation of the available resources. Particularly, the authors in [30] formulated the computation offloading as a mixed-integer non-linear programming problem and proposed a genetic algorithm that minimizes the energy consumption. In [31], the authors leveraged a deep reinforcement learning approach for minimizing the energy consumption through the joint optimization of the offloading decision and the assignment of tasks to the MEC nodes. The authors in [32] investigated the trade-off between the task latency and energy consumption and developed an approach to find the optimal task offloading decision and the allocation of wireless resources. Zhou et al. [33] developed a scheme based on the alternating direction method of multipliers for minimizing the total energy consumption of the system by finding the optimal offloading decision for each task. In [34], the authors presented a method based on the Lagrange dual decomposition method for minimizing the energy consumption through the joint optimization of the offloading decision, the allocation of transmission power, and the scaling of computing resources. The authors in [35] proposed an approach that maximizes the system energy efficiency by optimally allocating the offloading transmission power and time, as well as scaling the device chip computing frequency. Jang et al. [36] investigated the energy consumption assuming partial and complete offloading in vehicular edge computing environments and proposed a solution for optimally assigning the offloading of the task in time-slots. The authors in [37] developed an energy-efficient fog computation offloading scheme in order to meet the stringent requirements of the industrial Internet of Things.
The scheme leverages an accelerated gradient descent algorithm that optimizes the offloading ratio, the transmission power and time, and the local central processing unit (CPU) computation speed. Wang et al. [38] focused on the energy consumption of an edge system and proposed an imitation learning-enabled scheduling algorithm that takes into account the latency constraints of the tasks. In [39], the authors presented a deep reinforcement learning method to minimize the long-term energy consumption and task processing latency through the optimization of the offloading decision and the allocation of computing resources. Lagkas et al. [40] developed a joint allocation scheme, involving three optimization phases for the edge, radio, and optical resources, respectively.

B. Contributions
The aforementioned works presented some interesting results; however, some of them are focused on optimizing only a particular aspect of the offloading process (e.g., the offloading decision), while most of them are focused on jointly optimizing the offloading decision and the allocation of the transmission power. Furthermore, some of the research works are focused on the joint optimization of the wireless and computing resources. Of note, the solutions presented in most of the works are based on deep learning or reinforcement learning algorithms that optimize the long-term system performance. However, these algorithms are considered computationally expensive. Moreover, deep learning algorithms require large volumes of data to achieve high performance.
Motivated by these remarks, we develop a solution that aims to minimize the total energy consumption of the system by optimally offloading the tasks to the MEC-enabled roadside units (RSUs), taking into account the latency requirements and the availability of wireless and computing resources. In particular, the solution jointly optimizes the task offloading decision, the allocation of power and bandwidth resources, the assignment of tasks to MEC-enabled RSUs, and the frequency scaling of MEC-enabled RSUs. In more detail, the contributions of this work are as follows:
• We present a scenario consisting of multiple vehicles that are served by a number of RSUs. In the considered scenario, each vehicle may choose to compute its task locally or offload a portion of it to a MEC-enabled RSU. Additionally, the scenario supports task migration, meaning that a task can be migrated from one RSU to another, based on the computation requirements and the availability of resources.
• We formulate the minimization of the total energy consumption as a joint optimization of the task offloading decision, the allocation of power and bandwidth, the assignment of tasks to MEC-enabled RSUs, and the frequency scaling of MEC-enabled RSUs. We also discuss the convexity of the original optimization problem and transform it into convex equivalents.
• As the joint optimization problem is challenging to solve, we decouple the original optimization problem into three subproblems and solve each one in an iterative way by leveraging the block coordinate descent (BCD) method.
• Particularly, for optimizing the task offloading decision, we derive closed-form expressions taking into account each task's latency constraints. Towards optimizing the power and bandwidth allocation, as well as the task assignment and frequency scaling, the Lagrange multipliers and subgradient methods are employed.
• We evaluate the performance of the proposed approach through system-level Monte Carlo simulations in terms of total energy consumption and outage probability.
• To highlight the impact of the allocation of wireless and computing resources on the total energy consumption, we design three evaluation scenarios. Particularly, in the first scenario, only the offloading decision is optimized, whereas in the second scenario we optimize the offloading decision and the allocation of wireless resources. Finally, in the third scenario, the allocation of both wireless and computing resources is optimized in addition to the offloading decision.
The remainder of the paper is structured as follows: In Section II we develop the system model and the problem formulation, while in Section III we present the proposed solution. We provide the evaluation results in Section IV and conclude the work in Section V. Additionally, all notations used throughout the paper are summarized in Table I.

II. SYSTEM MODEL AND PROBLEM FORMULATION

Fig. 1 depicts the considered system model. In particular, a number of vehicles are served by RSUs equipped with MEC capabilities. Each vehicle is served by its nearest RSU via a wireless link, while RSUs are interconnected using high-capacity optical backhaul links [41]. The wireless communication between the RSUs and the vehicles can be enabled by a mobile network (e.g., B5G or 6G), while the optical backhaul links can be enabled by the latest optical communications standards, such as the 10-Gigabit Symmetrical Passive Optical Network (XGS-PON) or the Next-Generation PON 2 (NG-PON2), which are able to provide data rates up to 10 Gbps [42], [43].
Let N = {1, ..., N} denote the set of vehicles, while S = {1, ..., S} denotes the set of RSUs. To mitigate the energy required for the wireless data transmission, each vehicle is assumed to be connected to the closest RSU in its proximity, and the corresponding distance is denoted by d_{n,s}. We assume that the optimization process takes place in cycles, in which the vehicles may offload a portion of a task. Consequently, the terms vehicle and task can be used interchangeably. In addition, for the duration of the cycle, the vehicle position is assumed to remain steady. The offloaded tasks can be part of various IoV applications, including navigation assistance, image or video recognition, collision or obstacle detection, or autonomous driving [44], [45]. In addition, task profilers can be leveraged to provide valuable insights to operators about the computing and delay requirements of each task [46], [47].

A. Communication Model
The wireless link capacity between vehicle n and its serving RSU s is calculated by

R_n = w_n W log2(1 + γ_{n,s}),    (1)

where w_n denotes the bandwidth portion allocated to vehicle n, while W is the total available bandwidth. The respective signal-to-noise ratio (SNR) is obtained by

γ_{n,s} = p_n d_{n,s}^{-δ} / σ²,    (2)

where p_n is the power of the transmitted signal, d_{n,s}^{-δ} is the distance-based pathloss with exponent δ, and σ² is the noise variance. Orthogonal frequency-division multiple access (OFDMA) is selected for minimizing interference among vehicles.
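For concreteness, the rate expression in (1)-(2) can be evaluated numerically. The sketch below uses arbitrary illustrative values, not the simulation settings of Section IV:

```python
import math

def snr(p_n, d_ns, delta, sigma2):
    """SNR in (2): received power p_n * d_{n,s}^{-delta} over noise variance."""
    return p_n * d_ns ** (-delta) / sigma2

def link_capacity(w_n, W, p_n, d_ns, delta, sigma2):
    """Rate in (1): allocated bandwidth w_n * W times the spectral efficiency."""
    return w_n * W * math.log2(1.0 + snr(p_n, d_ns, delta, sigma2))

# Illustrative values: 25% of 20 MHz, ~4 W (36 dBm), 100 m, delta = 2, sigma^2 = 1e-8
rate = link_capacity(0.25, 20e6, 4.0, 100.0, 2, 1e-8)
```

Note that doubling w_n scales the rate linearly, whereas doubling p_n improves it only logarithmically, which is one reason the joint power-bandwidth optimization pays off.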

B. Computation Model
Each task n is described by the tuple (L_n, C_n, T_n^max), where L_n denotes the data length in bits to be processed and C_n (cycles/bit) denotes the number of cycles required to process a single bit of the task [21], [48]. Consequently, the total number of cycles required for processing the task can be obtained by L_n C_n. Also, T_n^max denotes the maximum tolerable latency for the task.
1) Local Computation: Denoting by x_n ∈ [0, 1] the portion of the task that is offloaded (formally introduced in Section II-C), the total time for the local computation is obtained by

T_n^loc = (1 − x_n) L_n C_n / f_n^loc,    (3)

where f_n^loc (cycles/s) denotes the computing capability of the n-th vehicle. As in [48], [49], and [50], we model the energy consumption of the processor as φ_n^loc (f_n^loc)³ (joules per second), where φ_n^loc stands for the processor's chip energy coefficient [49]. By multiplying this rate with the right-hand side of (3), we obtain the energy consumed for the processing of the n-th task as

E_n^loc = φ_n^loc (f_n^loc)² (1 − x_n) L_n C_n.    (4)

2) Offloaded Computation: The total time for the offloaded computation consists of the time required for the vehicle to upload the data to the nearest MEC-enabled RSU and the time required for the RSU to process the data. Moreover, the nearest RSU may not have enough available computing resources and thus, the task will be migrated to another RSU through the backhaul optical link. Also, since the size of the offloaded task is much smaller than the backhaul link capacity, we can assume that the task migration time is zero in order to simplify the optimization process. To indicate where each task is processed, we use the binary variable a_{n,s} as follows:

a_{n,s} = 1, if the n-th task is processed at the s-th node; 0, otherwise.    (5)

Based on the aforementioned remarks, the upload time is calculated as

T_n^up = x_n L_n / R_n.    (6)

The processing time of the n-th task at the s-th node can be obtained by

T_{n,s}^mec = x_n L_n C_n / (f_{n,s} F_s^max),    (7)

where f_{n,s} is the frequency scaling coefficient that denotes the utilization ratio of the processor. For example, when f_{n,s} = 1, the current processor frequency will be equal to F_s^max, where F_s^max denotes the maximum computing capability of the s-th RSU (in Hz).
Assuming that the downlink transmission delay is negligible, as the result of the computation is very small ([51], [52]), the total time for the offloaded computation of the n-th task is

T_n^off = T_n^up + Σ_s a_{n,s} T_{n,s}^mec.    (8)

The total energy consumed in the offloaded computation includes the energy consumed at the vehicle for the task upload and the energy consumed at the RSU for the processing. In particular, the energy consumed for the task upload is the transmission power of the n-th vehicle multiplied by the time required to upload the task and can be calculated by

E_n^up = p_n T_n^up.    (9)

Using the same energy consumption model as in local computation, the energy consumed for the processing of the n-th task at the s-th node is obtained by

E_{n,s}^mec = φ_s^mec (f_{n,s} F_s^max)² x_n L_n C_n,    (10)

where φ_s^mec is the energy consumption coefficient of the RSU. Accordingly, the total energy consumption of the offloaded computation is

E_n^off = E_n^up + Σ_s a_{n,s} E_{n,s}^mec.    (11)
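Under the reconstructed expressions (3)-(11), the latency and energy of a partially offloaded task can be sketched as follows; the helper names and numeric values are illustrative assumptions, not the paper's notation or settings:

```python
def local_cost(x_n, L_n, C_n, f_loc, phi_loc):
    """Time and energy of the locally computed (1 - x_n) portion, per (3)-(4)."""
    cycles = (1.0 - x_n) * L_n * C_n
    t_loc = cycles / f_loc
    e_loc = phi_loc * f_loc ** 2 * cycles  # phi * f^3 * t  ==  phi * f^2 * cycles
    return t_loc, e_loc

def offload_cost(x_n, L_n, C_n, rate, p_n, f_ns, F_max, phi_mec):
    """Time and energy of the offloaded x_n portion, per (6)-(11)."""
    t_up = x_n * L_n / rate                 # upload over the wireless link
    t_proc = x_n * L_n * C_n / (f_ns * F_max)
    e_up = p_n * t_up
    e_proc = phi_mec * (f_ns * F_max) ** 2 * x_n * L_n * C_n
    return t_up + t_proc, e_up + e_proc

def total_cost(x_n, L_n, C_n, f_loc, phi_loc, rate, p_n, f_ns, F_max, phi_mec):
    """Latency is the max of the two concurrent branches; energy is their sum."""
    t_loc, e_loc = local_cost(x_n, L_n, C_n, f_loc, phi_loc)
    t_off, e_off = offload_cost(x_n, L_n, C_n, rate, p_n, f_ns, F_max, phi_mec)
    return max(t_loc, t_off), e_loc + e_off
```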
For the communication between RSUs, a high-capacity passive optical network is utilized [53], [54]. As a result, in case of task migration, the respective data can be promptly transferred among RSUs, with minimal delay, leading to a small energy overhead.

C. Problem Formulation
We aim to minimize the total energy consumption of the system by jointly optimizing the task offloading decision, the allocation of power and bandwidth, the assignment of tasks to MEC-enabled RSUs, and the frequency scaling of MEC-enabled RSUs. Moreover, we adopt a partial offloading scheme, meaning that a task can be concurrently computed locally and in a MEC-enabled RSU. The portion of local and offloaded computation is denoted by x_n. Specifically, when x_n = 0 the whole task is computed locally at the vehicle, whereas when x_n = 1, the whole task is offloaded to a MEC-enabled RSU. Combining (1)-(11), the total computation time is expressed as

T_n = max{T_n^loc, T_n^off}.    (12)

Similarly, the total energy consumption is formulated as

E_n = E_n^loc + E_n^off.    (13)

Consequently, the optimization problem is expressed as follows:

P0: min_{x, p, w, a, f} Σ_{n=1}^{N} E_n(x_n, p_n, w_n, a_{n,s}, f_{n,s})    (14a)
subject to:
T_n ≤ T_n^max, ∀n    (14b)
0 ≤ x_n ≤ 1, ∀n    (14c)
0 ≤ p_n ≤ P_n^max, ∀n    (14d)
w_n ≥ 0, ∀n    (14e)
Σ_{n=1}^{N} w_n ≤ 1    (14f)
a_{n,s} ∈ {0, 1}, ∀n, s    (14g)
Σ_{n=1}^{N} a_{n,s} ≤ 2, ∀s    (14h)
f_{n,s} ≥ 0, ∀n, s    (14i)
Σ_{n=1}^{N} f_{n,s} ≤ 1, ∀s    (14j)

In P0, x denotes the vector of the task offloading decision, while p and w denote the vectors of the transmission power and bandwidth allocation, respectively. Furthermore, a denotes the task-MEC assignment vector, while f denotes the frequency scaling coefficient vector. Constraint (14b) enforces that the total computation time of the task does not exceed the maximum tolerable delay. Additionally, constraint (14c) keeps the task offloading portion in the range [0, 1], while (14d) limits the transmission power between 0 and P_n^max. Similarly, (14e) and (14f) are employed to limit the bandwidth coefficient up to 1. Furthermore, (14g) enforces binary values for a_{n,s}, while (14h) limits the tasks computed in a single RSU to two. Finally, (14i) and (14j) are imposed to limit the frequency scaling coefficient up to 1.
In P0, the objective function and constraint (14b) are non-linear due to the logarithm in (1). Moreover, there are product relationships between the optimization variables in the objective function. For example, for the offloaded computation case, x_n, p_n, f_{n,s}, and a_{n,s} are multiplied based on (11). Additionally, (14b) and (14g) make the feasible set non-convex. Therefore, P0 is a non-convex mixed-integer non-linear problem.

III. PROPOSED SOLUTION
This section presents the solution to the formulated optimization problem. In this direction, the original problem is decoupled into three problems, which are iteratively optimized through the BCD method. Particularly, closed-form expressions are derived for solving the task offloading decision. Moreover, the Lagrange multipliers and subgradient methods are employed for solving the wireless and computing resource allocation problems.

A. Optimizing Offloading Decision While Fixing the Rest Optimization Variables
In P0, constraint (14b) makes the feasible set non-convex. Therefore, to transform the feasible set into a convex one, we propose Lemma 1.
Lemma 1: The equivalent of (14b) is expressed as

(1 − x_n) L_n C_n / f_n^loc ≤ T_n^max  and  x_n (L_n / R_n + Σ_s a_{n,s} L_n C_n / (f_{n,s} F_s^max)) ≤ T_n^max, ∀n.    (15)

Proof: The proof of Lemma 1 is provided in the Appendix.
Assuming fixed p, w, a, f, P0 can be decoupled into N subproblems that can be independently optimized. By leveraging Lemma 1, the following optimization problem is formulated for each task:

P1: min_{x_n} E_n(x_n)    (16a)
subject to: x_n^LB ≤ x_n ≤ x_n^UB,    (16b)

where x_n^LB and x_n^UB are the lower and upper bounds on x_n imposed by (15). Since E_n is affine in x_n, the first derivative of P1's objective function is the constant

∂E_n/∂x_n = p_n L_n / R_n + Σ_s a_{n,s} φ_s^mec (f_{n,s} F_s^max)² L_n C_n − φ_n^loc (f_n^loc)² L_n C_n.    (17)

According to (17), the objective function is monotonically increasing or decreasing based on the sign of the first derivative. By exploiting this monotonicity, x_n can be set to the lower/upper bound of (16b) when the objective function is increasing/decreasing. Therefore, the following theorem is proposed:

Theorem 1: The optimal offloading decision is obtained as

x_n* = x_n^LB, if ∂E_n/∂x_n > 0; x_n^UB, otherwise.    (18)
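Because the per-task energy is affine in the offloading portion, Theorem 1 reduces to a sign test on a constant derivative. A minimal sketch, assuming the reconstructed form of the derivative (the marginal offloading cost minus the marginal local-processing cost); function and parameter names are illustrative:

```python
def dE_dx(L_n, C_n, f_loc, phi_loc, rate, p_n, mec_terms):
    """Constant derivative of the per-task energy w.r.t. x_n:
    upload energy per offloaded bit plus MEC processing energy,
    minus the local-processing energy it replaces.
    mec_terms = [(a_ns, f_ns, F_max, phi_mec), ...] over the RSUs."""
    upload = p_n * L_n / rate
    mec = sum(a * phi * (f * F) ** 2 * L_n * C_n for a, f, F, phi in mec_terms)
    local = phi_loc * f_loc ** 2 * L_n * C_n
    return upload + mec - local

def optimal_x(deriv, x_lb, x_ub):
    """Theorem 1: the optimum sits at a bound of (16b), chosen by the sign."""
    return x_lb if deriv > 0 else x_ub
```

Intuitively, when offloading costs more energy per bit than local processing, the vehicle offloads as little as the deadline allows, and vice versa.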

B. Optimizing Wireless Resources While Fixing the Rest Optimization Variables
After obtaining the optimal offloading decision for each task, we consider x, a, f to be fixed in order to determine the optimal power and bandwidth allocation. Consequently, P2 is formulated as

P2: min_{p, w} Σ_{n=1}^{N} E_n(p_n, w_n)
subject to: (14b), (14d)-(14f).

To find the optimal power and bandwidth allocation, we employ the Lagrange multiplier and subgradient methods. The respective Lagrangian of P2 is obtained by (22), shown at the bottom of the page. In (22), the set X_{p,w} = {β_n, λ_n, μ, π_n} denotes the non-negative Lagrange multipliers. Therefore, the dual function D_1(X_{p,w}) is written as the minimum of L_{p,w}(p, w, X_{p,w}) over p and w. Consequently, the dual problem is expressed as max_{β_n, λ_n, μ, π_n} D_1(X_{p,w}). In accordance with the Karush-Kuhn-Tucker (KKT) conditions, the derivative of the Lagrangian function with respect to p_n is provided in (23), shown at the bottom of the page. Since it is challenging to obtain a closed-form expression for the root of (23), we utilize the bisection method. The bisection method for finding p_n is presented in Algorithm 1.
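Algorithm 1 is a standard bisection on the stationarity condition. A generic sketch, assuming the derivative changes sign on the search interval [0, P_n^max]:

```python
def bisect_root(g, lo, hi, tol=1e-6, max_iter=200):
    """Find p with g(p) ~ 0 on [lo, hi], where g plays the role of the
    Lagrangian derivative in (23). Assumes g(lo) and g(hi) differ in sign."""
    g_lo = g(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        g_mid = g(mid)
        if abs(g_mid) < tol or (hi - lo) < tol:
            return mid
        if g_lo * g_mid < 0:      # sign change in [lo, mid]: keep that half
            hi = mid
        else:                     # otherwise the root lies in [mid, hi]
            lo, g_lo = mid, g_mid
    return 0.5 * (lo + hi)

# Example: the root of p^2 - 2 on [0, 2] is sqrt(2)
root = bisect_root(lambda p: p * p - 2.0, 0.0, 2.0)
```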
To obtain the optimal bandwidth allocation, we calculate the first derivative of the Lagrangian with respect to w_n. The result is provided by (24), shown at the bottom of the page. Solving for w_n, the root can be obtained in (25). After obtaining the solution to problem D_1 through Algorithm 1 and (25), the Lagrange multipliers are updated as in (26)-(29), where s_1, s_2, s_3, and s_4 are the positive step sizes. The subgradient method for optimizing the wireless resource allocation is presented in Algorithm 2.
∂L_{p,w}(p_n, w_n, X_{p,w})/∂p_n = π_n + (x_n L_n / (w_n W log2²(1 + γ_{n,s}))) · [log2(1 + γ_{n,s}) − ((p_n + λ_n) d_{n,s}^{-δ}) / (ln 2 · (σ² + p_n d_{n,s}^{-δ}))]    (23)

Algorithm 2 Subgradient Method for Optimizing p, w
Input: Maximum transmission power P_n^max, ∀n, and system bandwidth W
Output: Optimal p, w
1: Initialize p_n = P_n^max and w_n = W/|N|, ∀n
2: Initialize the Lagrange multipliers: β_n, λ_n, μ, π_n
3: Set t = 0
4: repeat
5: for n = 1 to N do
6: Calculate p_n using Algorithm 1
7: Calculate w_n according to (25)
8: Update the Lagrange multipliers using (26)-(29)
9: end for
10: Set t = t + 1
11: until convergence or t = t_max
12: Return p, w
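Each multiplier update in (26)-(29) is a projected subgradient step: move along the current constraint violation, then clip at zero to keep the multiplier non-negative. A minimal sketch; the step size and the violation example are illustrative:

```python
def update_multiplier(mult, step, violation):
    """One projected subgradient ascent step on a dual variable:
    mult <- max(0, mult + step * violation)."""
    return max(0.0, mult + step * violation)

# e.g. a multiplier for the bandwidth budget (14f), whose subgradient
# is the violation sum(w) - 1
w = [0.3, 0.5, 0.4]
mu = update_multiplier(0.0, 0.1, sum(w) - 1.0)
```

When a constraint is violated, the corresponding multiplier grows and the price of violating it in the Lagrangian rises; when it is slack, the multiplier shrinks toward zero.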

C. Optimizing Computing Resources While Fixing the Rest Optimization Variables
Having obtained the optimal offloading decision and wireless resource allocation, we will determine the optimal MEC assignment, as well as the optimal MEC frequency allocation to each task. Therefore, in this step, x, p, w are assumed to be fixed. Also, to address the non-convexity introduced by the binary constraint (14g), we relax it to the range [0, 1]. This relaxation can be perceived as dividing the offloaded portion of the task into multiple parts and processing them in different RSUs. Consequently, P3 is expressed as

P3: min_{a, f} Σ_{n=1}^{N} E_n(a_{n,s}, f_{n,s})    (30a)
subject to:
T_n(a_{n,s}, f_{n,s}) ≤ T_n^max, ∀n    (30b)
0 ≤ a_{n,s} ≤ 1, ∀n, s    (30c)
Σ_{n=1}^{N} a_{n,s} ≤ 2, ∀s    (30d)

To solve P3, the Lagrange multiplier and subgradient methods can again be employed. The Lagrangian of P3 is given by (39), shown at the bottom of the next page. In (39), the set X_{a,f} = {κ_n, λ_{n,s}, μ_s, ξ_{n,s}, τ_s} denotes the non-negative Lagrange multipliers, and the dual function D_2(X_{a,f}) is written as the minimum of the Lagrangian over a and f. To obtain the optimal task assignment, we take the first derivative of L_{a,f}(a_{n,s}, f_{n,s}, X_{a,f}) with respect to a_{n,s}, provided in (40), shown at the bottom of the next page. According to (40), the n-th task is assigned to the s-th RSU as follows:

a_{n,s} = 1, if s = arg min_{s'} ∂L_{a,f}(a_{n,s'}, f_{n,s'}, X_{a,f})/∂a_{n,s'}; 0, otherwise.    (33)

Using (33), binary values for a_{n,s} can be obtained without introducing errors due to the relaxation of (14g).
On the other hand, we utilize the bisection method presented in Algorithm 3 to obtain the optimal frequency scaling.

Algorithm 3 Bisection Method for Finding f_{n,s}
Output: Optimal f_{n,s}
1: Initialize f_{n,s}^LB = 0 and f_{n,s}^UB = 1
2: repeat
3: Set X = (f_{n,s}^LB + f_{n,s}^UB)/2
4: if (∂L_{a,f}(f_{n,s}^UB)/∂f_{n,s}) · (∂L_{a,f}(X)/∂f_{n,s}) < 0 then
5: f_{n,s}^LB = X
6: else
7: f_{n,s}^UB = X
8: end if
9: until |f_{n,s}^UB − f_{n,s}^LB| < 0.001
10: Return f_{n,s}
After problem D_2 is solved and the optimal task assignment and frequency vectors are obtained, the Lagrange multipliers are updated through projected subgradient steps (34)-(38), e.g.,

κ_{n,s}^{t+1} = [κ_{n,s}^t + s_2 (a_{n,s} − 1)]^+,    (34)

where s_1, s_2, s_3, s_4, and s_5 are the positive step sizes. The subgradient method for optimizing the computing resource allocation is presented in Algorithm 4.
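The assignment rule (33) simply picks, for each task, the RSU with the smallest partial derivative of the Lagrangian, which restores a binary a_{n,s} despite the relaxation. A sketch with an illustrative gradient row:

```python
def assign_task(dL_da_row):
    """Rule (33): a_{n,s} = 1 for the RSU minimizing dL/da_{n,s}, else 0."""
    s_star = min(range(len(dL_da_row)), key=lambda s: dL_da_row[s])
    return [1 if s == s_star else 0 for s in range(len(dL_da_row))]

# Task n sees three candidate RSUs; the second has the smallest derivative
a_row = assign_task([0.8, 0.3, 1.1])
```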

D. Iterative Optimization Using Block Coordinate Descent
The solution to the joint optimization problem is achieved by iteratively optimizing the subproblems. The employed BCD method is presented in Algorithm 5. During the initialization phase, the initial values for the optimization variables and the Lagrange multipliers are set. In each step, the corresponding optimal value for each optimization variable is calculated, and the algorithm ends after t_max iterations or if the energy consumption improvement is lower than 1%.

Algorithm 4 Subgradient Method for Optimizing a, f
Input: Maximum RSU frequency F_s^max, ∀s
Output: Optimal a, f
1: Initialize a_{n,s} = 1 and f_{n,s} = 1, ∀n, s
2: Initialize the Lagrange multipliers: κ_n, λ_{n,s}, μ_s, ξ_{n,s}, τ_s
3: Set t = 0
4: repeat
5: for n = 1 to N do
6: Calculate a_{n,s} using (33)
7: Calculate f_{n,s} using Algorithm 3
8: Update the Lagrange multipliers using (34)-(38)
9: end for
10: Set t = t + 1
11: until convergence or t = t_max
12: Return a, f

Algorithm 5 BCD Method for the Joint Optimization Problem
Output: Optimal x, p, w, a, f
1: Initialize the optimization variables and the Lagrange multipliers
2: Set t = 0
3: repeat
4: Find x_n using Theorem 1
5: Find p_n and w_n using Algorithm 2
6: Find a_{n,s} and f_{n,s} using Algorithm 4
7: Set E[t] = Σ_{n=1}^{N} E_n(x_n, p_n, w_n, a_{n,s}, f_{n,s})
8: Set t = t + 1
9: until t = t_max or the improvement of E is lower than 1%
10: Return x, p, w, a, f
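The BCD loop can be skeletonized as below; the three block solvers and the energy evaluator are passed in as callables (placeholders for Theorem 1, Algorithm 2, and Algorithm 4), and the loop stops after t_max iterations or when the energy improves by less than 1%:

```python
def bcd(opt_offloading, opt_wireless, opt_computing, total_energy,
        t_max=50, eps=0.01):
    """Block coordinate descent over the three subproblems."""
    e_prev = float("inf")
    for _ in range(t_max):
        opt_offloading()    # Theorem 1: closed-form x
        opt_wireless()      # Algorithm 2: p, w
        opt_computing()     # Algorithm 4: a, f
        e = total_energy()  # evaluate the total energy over all tasks
        if e_prev != float("inf") and (e_prev - e) / e_prev < eps:
            return e        # relative improvement below 1%: converged
        e_prev = e
    return e_prev

# Toy run: a monotonically improving energy trace stands in for real solvers
trace = iter([10.0, 5.0, 4.9, 4.89])
result = bcd(lambda: None, lambda: None, lambda: None, lambda: next(trace))
```

Because each block minimizes the same objective with the others held fixed, the energy sequence is non-increasing, which is what makes the 1% stopping rule safe.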

IV. PERFORMANCE EVALUATION
To evaluate the performance of our proposed solution, we utilize system-level Monte Carlo simulations. Table II summarizes the simulation parameters. The number of vehicles is set to {5, 10, 15, 20, 25}, while the number of RSUs is set to {1, 5, 10, 15, 20}. The maximum available transmission power of each vehicle is 36 dBm, while the available system bandwidth is 20 MHz. Additionally, the path loss exponent is set to 2 or 4, while the noise variance is set to 10^-8. Regarding the computation model, the task size is uniformly distributed in the range [500, 3500] Kbits, while the required cycles to process 1 bit and the maximum latency are respectively set to 297.6 cycles/bit and 0.5 s-3.5 s ([30], [37], [46]). The energy consumption coefficients for the vehicles and RSUs are set to 10^-28. Furthermore, the computing frequency of vehicles ranges from 500 MHz to 800 MHz, while the maximum computing frequency of RSUs is set to 10 GHz.
Three evaluation scenarios are designed in order to highlight the impact of the allocation of wireless and computing resources on the total energy consumption, in addition to the optimization of the offloading decision. In more detail, in Scenario 1 only the offloading decision is optimized, whereas Scenario 2 is focused on optimizing the offloading decision and the allocation of wireless resources. Finally, in Scenario 3, the allocation of both wireless and computing resources is optimized in addition to the offloading decision.
Fig. 2 shows the total energy consumption as a function of the number of vehicles, for various numbers of RSUs. The task size is randomly selected in the range [500, 3500] Kbits, while the maximum delay tolerance is randomly selected in the range [0.5, 3] seconds. In particular, Fig. 2-(a) shows the total energy consumption when the path loss exponent is set to 2, whereas Fig. 2-(b) shows the corresponding energy consumption when the path loss exponent is set to 4. It is apparent that as the number of vehicles increases, the total energy consumption also increases. This is expected because there exist more tasks to be computed, leading to increased energy consumption. Furthermore, for a given number of vehicles, the total energy consumption is slightly increased as the number of RSUs increases. Thus, the number of RSUs has a small effect on the energy consumption for fixed vehicle numbers. Note that the MEC-enabled RSUs have the capability to scale the allocated computing resources, therefore increasing the energy efficiency.
With respect to the path loss exponent, for a given number of vehicles and RSUs, the total energy consumption of the system is increased as the path loss exponent increases. This is expected as additional power will be needed for uploading the respective tasks. Also, higher path loss will lead to lower channel capacity. Therefore, additional processing resources will be employed in order to timely process the task, resulting in higher energy consumption. Fig. 3 presents the outage probability as a function of the number of vehicles, for various numbers of RSUs. The maximum latency is randomly selected in the range of [0.5, 3.5] seconds, while the outage probability is calculated as the number of tasks that have not been computed in the required time to the total number of tasks. According to the results, the number of vehicles does not have a considerable impact on the outage probability. On the other hand, when there exist more RSUs, more tasks can be offloaded, leading to a reduced outage probability. Fig. 4 shows the total energy consumption as a function of the maximum tolerable latency. The numbers of vehicles and RSUs are set to 20 and 10, respectively. Also, the energy consumption is evaluated for two cases of path loss exponents, particularly when δ = 2 and δ = 4. Based on the results, the total energy consumption is decreasing as the maximum tolerable latency is increased. This is due to the fact that lower computing resources are allocated, leading to reduced energy consumption. As far as the task size is concerned, it is expected that when the task size is increased, more computing resources should be allocated, leading to increased energy consumption. Regarding the path loss exponents, the total energy consumption is increased for higher values of δ because of the additional transmission power and computing resources that will be employed. Fig. 
5 depicts a comparison between the three scenarios in terms of the total energy consumption for a varying number of vehicles. The task sizes are randomly selected in the range [500, 3500] Kbits, while the maximum delay tolerance values are randomly selected in the range [0.5, 3] seconds. Also, the number of RSUs is set to 10 and 20. In all cases, when the number of vehicles increases, the total energy consumption also increases, since there are more tasks to be processed, thereby consuming more energy (both for the upload and the processing). In particular, Scenario 1 results in the highest energy consumption, as only the offloading decision is optimized; consequently, the vehicles transmit with the highest power (i.e., 36 dBm), while the RSUs process each task by assigning all available computing resources. On the other hand, Scenario 2 results in lower energy consumption, as the allocation of wireless resources has been optimized and, thus, lower vehicle transmission power levels are required. Finally, Scenario 3 features the lowest total energy consumption since, in addition to the offloading decision, it optimizes the allocation of both wireless and computing resources.
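The gap between Scenario 1 and Scenario 2 can be illustrated with a simple link-budget sketch: transmitting at the fixed maximum power of 36 dBm finishes the upload quickly but spends far more energy than the lowest power that just meets the deadline. All numerical values below (bandwidth, channel gain, noise power, task size) are illustrative assumptions, not the simulation parameters of this work.

```python
import math

def rate_bps(p_w, bandwidth_hz, gain, noise_w):
    """Shannon capacity of the uplink for transmit power p_w."""
    return bandwidth_hz * math.log2(1 + p_w * gain / noise_w)

L, D, B = 2000e3, 1.0, 1e6        # task size (bits), deadline (s), bandwidth (Hz)
g, N0 = 1e-8, 1e-13               # assumed channel gain and noise power

# Scenario 1 style: always transmit at the maximum power of 36 dBm
p_max = 10 ** ((36 - 30) / 10)                    # 36 dBm ~= 3.98 W
e_fixed = p_max * L / rate_bps(p_max, B, g, N0)   # energy = power x upload time

# Scenario 2 style: lowest power whose rate L/D still meets the deadline
p_min = (2 ** ((L / D) / B) - 1) * N0 / g
e_opt = p_min * D
print(e_fixed, e_opt)             # the fixed-power upload costs orders of magnitude more
```

The fixed-power strategy uploads faster, but because the required rate is far below capacity at 36 dBm, almost all of that transmit energy is wasted.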
Finally, Fig. 6 shows a comparison between the scenarios with respect to the total energy consumption as a function of the maximum tolerable latency. The numbers of vehicles and RSUs are set to 25 and 20, respectively, while the task
sizes are randomly selected in the range [500, 3500] Kbits. Similarly to Fig. 4, the total energy consumption decreases as the maximum tolerable latency of each task increases. However, Scenario 1 features the highest overall energy consumption, followed by Scenario 2, while Scenario 3 results in the lowest overall energy consumption. This is expected, as in Scenario 3 all the system variables are optimized, in contrast to Scenarios 1 and 2, where only a subset of the variables is optimized.
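The outage metric used in Fig. 3 can be computed directly from the per-task completion times; a minimal sketch (the random draws below are illustrative placeholders, not the output of this work's simulator):

```python
import random

def outage_probability(completion_times, deadlines):
    """Fraction of tasks that miss their maximum tolerable latency."""
    missed = sum(1 for t, d in zip(completion_times, deadlines) if t > d)
    return missed / len(deadlines)

random.seed(0)
deadlines = [random.uniform(0.5, 3.5) for _ in range(1000)]    # seconds, as in Fig. 3
completions = [random.uniform(0.2, 3.0) for _ in range(1000)]  # hypothetical outcomes
print(outage_probability(completions, deadlines))
```

For example, with completion times [1.0, 2.0] against deadlines [2.0, 1.0], the first task meets its deadline and the second misses it, giving an outage probability of 0.5.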

V. CONCLUSION
In this work, we considered the minimization of the energy consumption of a vehicular network. Specifically, we formulated the problem as a joint optimization of the task offloading decision, the allocation of power and bandwidth, the assignment of tasks to MEC-enabled RSUs, and the frequency scaling of the MEC-enabled RSUs. Since the resulting problem is challenging to solve directly, we decoupled it into three subproblems and leveraged the BCD method to iteratively optimize them. For the performance evaluation, we carried out system-level Monte Carlo simulations and evaluated the total energy consumption and the outage probability. The simulation results show that the proposed BCD-based approach can minimize the system energy consumption while maintaining a low outage probability. Moreover, three evaluation scenarios were designed in order to highlight the impact of optimizing the allocation of both wireless and computing resources in addition to the offloading decision.
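The BCD method can be sketched generically: alternately minimize the objective over one block of variables while holding the other blocks fixed, until the objective stops decreasing. The toy two-block objective and the crude one-dimensional grid solver below are illustrative stand-ins, not the subproblem solvers of this work.

```python
def bcd_minimize(f, x0, y0, block_min, iters=50, tol=1e-9):
    """Block coordinate descent: optimize each block in turn with the
    other block fixed, stopping when the decrease per round stalls."""
    x, y = x0, y0
    prev = f(x, y)
    for _ in range(iters):
        x = block_min(lambda v: f(v, y))   # update block 1 with y fixed
        y = block_min(lambda v: f(x, v))   # update block 2 with x fixed
        cur = f(x, y)
        if prev - cur < tol:
            break
        prev = cur
    return x, y

# toy coupled quadratic with minimum at x = 1, y = -2
f = lambda x, y: (x - 1) ** 2 + (y + 2) ** 2 + 0.5 * (x - 1) * (y + 2)

def grid_min(g, lo=-5.0, hi=5.0, steps=2001):
    # crude 1-D minimizer standing in for each block's solver
    return min((lo + i * (hi - lo) / (steps - 1) for i in range(steps)), key=g)

x, y = bcd_minimize(f, 0.0, 0.0, grid_min)
print(x, y)   # approaches (1, -2)
```

In this work's setting, each block update would correspond to solving one of the three subproblems (offloading decision, wireless resources, task assignment and frequency scaling) in place of `grid_min`.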
In the future, we aim to extend this work towards minimizing the average energy consumption over time, taking into account the mobility of the vehicles as well as the arrival of new tasks. Furthermore, we aim to leverage our previous work in [55] in order to incorporate UAVs that provide on-demand computation offloading. In this direction, the design of the offloading policy and the resource allocation should also consider the limited energy reserves of the UAVs. Finally, in light of the exponential increase in the number of Internet of Things devices, we will also evaluate the impact of novel multiple access methods in a scenario consisting of numerous devices and vehicles sharing the same wireless and computing resources.

APPENDIX

For a given x_n, the required time for the local computation is expressed as (42). On the other hand, the required time for the offloaded computation is expressed as (43). Combining (42) and (43)