Multicast-Oriented Task Offloading for Vehicle Edge Computing

Vehicle edge computing (VEC) is expected to be widely used in future 5G wireless networks owing to its low latency, high reliability, and easy deployment. Unfortunately, the computation resources of roadside units (RSUs) equipped with computing ability are limited, which drives the search for other computing nodes. In this article, we explore an offloading model to meet this challenge. Future vehicles equipped with computing abilities can provide VEC services in vehicular fog networks (VeFN), which greatly reduces task delay and improves the efficiency of the transportation system. We consider that a package consisting of several tasks is offloaded via multicast to vehicles with different computing abilities, and these vehicles process the tasks simultaneously. First, we construct the system utility function mathematically, aiming at low delay and low computing cost, which yields a 0–1 programming problem with inequality constraints. We then solve the slack form of the primal problem based on the interior point method. Finally, a low-complexity algorithm is proposed to optimize the task delay and cost. Numerical results show that the algorithm has fast convergence speed and superior performance.


I. INTRODUCTION
In recent years, with the development of the Internet of Things (IoT) and wireless communication technology, the Internet of Vehicles (IoV) has advanced rapidly, and on-board sensors (e.g., cameras and radars) can play a key role in efficient and safe transportation systems [1], [2]. Smart vehicles accessing popular content and sharing transportation information are evolving as an emerging paradigm to support advanced driver assistance systems and self-driving [3]. However, computationally intensive applications can put pressure on resource-constrained vehicles, causing bottlenecks and making it difficult to ensure the required level of quality of service (QoS). Fortunately, mobile edge computing (MEC) technology can provide computing services at the edge of wireless access networks and near mobile users, thereby alleviating the heavy computing needs of individual users. Inspired by MEC technology, the vehicle edge computing (VEC) network has emerged, which can offload computation tasks to the edge of the network, e.g., roadside computing servers (RCSs). (The associate editor coordinating the review of this manuscript and approving it for publication was Xijun Wang.)
VEC, as an efficient technology, alleviates the computing pressure of vehicle users and improves QoS [4], [5]. Wang et al. [6] proposed a permissioned vehicular blockchain in VEC, called Parking-chain, where parked vehicles (PVs) could share their idle computation resources with service requesters (SRs). Zhang et al. [7] studied parked-vehicle-assisted edge computing with a Stackelberg game, where the overall cost of a task publisher, in terms of monetary cost and the subjective dissatisfaction caused by un-offloaded workloads, was minimized. Toward flexible and fine-grained resource usage, Huang et al. [8] integrated container-based virtualization with parked vehicle edge computing to ensure task execution in parked vehicles with fast response, increased scalability, and high efficiency. Li et al. [9] investigated an energy-efficient parked vehicular computing paradigm and designed a contract-based incentive mechanism to motivate parked vehicles to contribute their idle on-board computation resources. The aforementioned work focused on parked vehicles with idle computational resources providing edge computing services, which is applicable in the VEC network. (VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/)
Wang et al. [10]-[12] proposed a wireless powered MEC system, where a multi-antenna access point (AP), integrated with an MEC server, broadcasts wireless power to charge users, and each user node relies on the harvested energy to compute tasks. Building on this model, they developed an innovative framework to improve the MEC performance by optimizing the energy transmit beamforming at the AP, the central processing unit frequencies, the numbers of offloaded bits at the users, and the time allocation among users. Wang et al. [13] considered the utilization of multi-antenna non-orthogonal multiple access (NOMA) technology for multi-user computation offloading, so that different users could simultaneously offload their tasks to the multi-antenna base station (BS) over the same time/frequency resources. The aforementioned work focused on a multi-antenna BS providing edge computing services with great energy efficiency, which is applicable and innovative in the MEC network. However, the computation resources are limited, which makes the traditional VEC technology not always applicable. In order to meet the emerging needs of autonomous driving, future vehicles will not only be equipped with a wealth of on-board sensors such as cameras and radars, but will also be equipped with powerful computing capability to process sensor data and make driving decisions. Vehicles can contribute their computation resources and act as fog nodes in the fog computing environment, which can be considered a vehicular fog network (VeFN) [14]-[21]. Unlike traditional mobile crowdsensing, fog-based vehicular nodes are introduced specifically to meet the requirements of location-specific applications and location-aware data management in vehicular ad hoc networks, including parking navigation, road surface monitoring, traffic collision reconstruction, and V2V energy swapping.
Although the edge cloud is beneficial in terms of low delay and less backhaul bandwidth consumption, its computing capability is relatively limited during busy periods of computing business. The vehicles with powerful computing capabilities can compose a novel cloud, which is referred to as the vehicular cloud [14]-[17].
Through vehicular cloud computing (VCC), not only can vehicle-related computation-intensive tasks be fulfilled efficiently, but the capacity of the edge cloud can also be enhanced by offloading computing tasks to the vehicular cloud. Sun et al. proposed an efficient task scheduling scheme in the vehicular cloud by jointly considering the instability of resources, the heterogeneity of vehicular computing capabilities, and the inter-dependency of computing tasks [17]. Specifically, the vehicles with idle computation resources provided computing service in this scenario. In [18], the authors explored the possibility of moving vehicles acting as computing nodes. They considered that mobile vehicles, parked vehicles, RSUs, and even pedestrians' smartphones with idle computing resources can provide computing services to other clients if necessary. For example, when the load of the data center or the traditional VEC servers (RSUs, base stations, etc.) is relatively heavy, some tasks can be properly offloaded to mobile vehicles to meet the intensive computing requirements of client customers. The model proposed in [20] differed from the traditional VEC model in that mobile nodes can provide computing services like traditional VEC servers. Hou et al. [19] studied the feasibility of vehicle fog computing and provided a quantitative analysis of capacity, vehicle mobility, and connectivity. It is worth emphasizing that the authors believed that mobile cloudlets could perform remote cloud computing tasks and that moving vehicles, especially slowly moving vehicles, would be an important part of vehicle fog computing infrastructures. The emergence of VeFN enables the IoV to realize computation resource sharing through task offloading, which will support a wide range of fog applications.
The task offloading in VeFN is classified into three major modes: vehicle-vehicle offloading, vehicle-RSU-vehicle offloading, and pedestrian-RSU-vehicle offloading [18]. The authors of [22] proposed an energy-saving task offloading scheme to minimize the energy consumption of the MEC offloading system, which jointly optimized the offloading decision and the wireless resource allocation strategy. A layered VEC offloading framework for vehicular networks based on cloud computing was proposed in the work of Zhang et al. [23]. The authors studied the task offloading mechanism, described it as a Stackelberg game model, and then adopted a distributed algorithm to obtain the optimal strategy of VEC servers. In the work of Dai et al. [24], a joint load balancing and offloading problem was formulated to maximize the system utility in the VEC network, and the authors introduced a low-complexity joint algorithm for selection, computing, and offloading, which jointly optimized the selection decision, offloading ratio, and computation resources.
Sun et al. [25] proposed a new algorithm combining task replication and sequential learning to minimize the average task offloading delay. In the work of Sun et al. [26], a task offloading scheme for a vehicle cloud computing system was designed, and a learning-based adaptive variable upper confidence bound (AVUCB) algorithm was proposed, which minimized the average offloading delay based only on historical delay observations. These two articles investigated vehicle-to-vehicle offloading and optimized the delay performance of offloading. In [27], the authors proposed a hierarchical offloading model for VEC: (i) RSUs offload the tasks to the cluster-head vehicles in proportion, and then (ii) the cluster-head vehicles offload these tasks to the neighbor vehicles in proportion. The authors jointly considered the delay and cost by studying the resource allocation problem of multiple vehicle users.
The aforementioned task offloading schemes, such as the works [22]-[26], focused on scenarios where VEC servers or cloud servers provide edge computing for vehicles, without considering that vehicles with idle computation resources can provide offloading service for busy RCSs. Although the article [27] studied a similar scenario, the authors assumed that the task package could be offloaded in any proportion, and they did not fully consider the mobility of the vehicles. Therefore, in this article we fully consider the mobility of the vehicles and assume that a single task cannot be split into multiple subtasks proportionally. We establish a novel offloading model: RSUs offload a task package to the vehicles simultaneously by multicast, and the vehicles act as fog nodes to provide computing services.
Wu et al. introduced a multi-hop broadcast scheme in the IoV to improve routing performance, since cellular communications are not sufficient to provide a high quality of service (QoS) due to limited resources and radio interference [28]. As the vehicle density increases, it becomes more difficult to provide all the services through infrastructure-based wireless communications (including cellular networks and wireless access point-based communications) due to the cost of deploying and maintaining the infrastructure. V2X communication refers to communications in many vehicular applications, including road safety, traffic efficiency, local services, and others. In general, a V2X application requires a communication system to provide communication between two stations of the system (e.g., two vehicles, or a vehicle and a roadside unit), among stations in a group (e.g., vehicles in a platoon), and dissemination of information (typically a warning) in a geographical area. The capability of the communication system to disseminate information in a geographical area is critical for V2X services. The geographical broadcast of point-to-multipoint services offers exactly this critical functionality for V2X systems [29].
Future traffic systems will play an important role as information providers and usually need to send traffic information to the vehicles on the road. Since public traffic information may be required by every vehicle, the data can be transmitted in the form of broadcast/multicast, which greatly saves increasingly scarce communication resources. The traffic information sometimes needs to be further processed, e.g., coordinate calculation, path planning, and traffic prediction. If the resources of traditional edge/cloud computing servers are in short supply and the data cannot be processed in time, the vehicles that request these traffic data can provide computing services and then return the computing results to the traffic system. In this way, vehicles not only obtain the traffic information, but some vehicles with free CPU resources can also provide computing services. This novel task offloading mode can greatly improve the efficiency of the transportation system and save the scarce computation resources of central traffic servers or traditional edge/cloud servers.
Multicast not only transmits data to multiple users who need it (those who join the group) at the same time, but also ensures that the communications of users who do not join the group are unaffected. Users who need the same data stream join the same group to share one data stream, reducing the load on the server. Since multicast protocols copy and forward the data stream according to the needs of the recipients, the total service bandwidth of the server is not limited by the bandwidth of the user access terminals. The multicast mode greatly reduces the complexity of the process for the offloading users, because the offloading users (e.g., RSUs) do not need to establish communication links multiple times, but offload the package to the vehicles via multicast in one transmission.
The traditional VEC communication mode is always point-to-point communication, which places high demands on communication resources and usually causes unnecessary signal interference and equipment overload. Compared with the traditional VEC communication mode, the multicast mode has the characteristics of low load, small interference, and favorable network resource management, which effectively saves increasingly scarce communication resources. In addition, this offloading mode can avoid the risk of interruption/failure during task computing, since multiple vehicles hold copies of the offloading packages. Once the original serving vehicle interrupts the task computing due to unexpected conditions, other vehicles can take over the leaving vehicle's job and continue the computing service, which greatly improves the system reliability and QoS.
Our model makes full use of the idle computation resources of vehicles, which greatly improves the efficiency of the system, relieves the computing pressure of the RCSs, and processes tasks with lower delay and lower computing cost. The main contributions of this article are as follows:
1) We establish a novel multicast-oriented VEC offloading model. RSUs offload a task package to vehicle users in the form of multicast, and vehicles process these tasks simultaneously.
2) We introduce a discrete-time system, which takes full account of the real-time position of vehicles in each time slot. Hence, we construct an optimization problem which jointly considers the average delay and cost of these tasks.
3) We propose an algorithm based on the interior point method to solve the optimization problem. Numerical results show that the algorithm has superior performance and quickly obtains satisfactory results.
The remainder of this article is organized as follows. Section II presents the system model and problem formulation. Section III presents an algorithm jointly optimizing the delay and cost for multicast-oriented task offloading (JDCM). Section IV provides simulation results to validate the performance of the proposed algorithm compared with other benchmark schemes. Finally, Section V concludes our contribution and discusses future work.

II. SYSTEM MODEL AND PROBLEM FORMULATION
There are K tasks to be offloaded, written as K = {1, ..., K}. These K tasks make up one offloading package which will be offloaded from RSUs to vehicles in a multicast manner. We assume that vehicles begin to process these tasks only after the package has been completely received. Each task can be described by three parameters: (i) the data size of the task, denoted as x_1, ..., x_K (bits); (ii) the data size of the result (ignored in this article); (iii) the computation intensity of the task, denoted as λ_k (CPU cycles per bit). Then, the computation resources (i.e., CPU cycles) required by task k, k ∈ K = {1, ..., K}, can be expressed as λ_k x_k [30].
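The task model described above can be sketched in a few lines of Python. This is a minimal illustration; the names `required_cycles` and `package_size` are ours, not from the paper.

```python
# Task model sketch: task k has data size x_k (bits) and computation
# intensity lam_k (CPU cycles per bit), so it needs lam_k * x_k CPU cycles.
# All K tasks travel together in one multicast package.

def required_cycles(x, lam):
    """CPU cycles required by each task: lam_k * x_k."""
    return [l * s for l, s in zip(lam, x)]

def package_size(x):
    """Total data size of the multicast package: the sum of x_k over k."""
    return sum(x)
```

For instance, two tasks of 100 and 200 bits with intensities 2 and 3 cycles/bit need 200 and 600 cycles respectively, and form a 300-bit package.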
We assume that the N vehicles have mutually different computing abilities, written as F = {f_1, f_2, ..., f_N}, depending on the computing capabilities of the on-board computers and the working status of the vehicles. Assume that the road has three lanes, lane A, lane B, and lane C, and that the speeds of the vehicles differ across lanes. We consider that the speed of the vehicles in each lane is v_laneA, v_laneB, and v_laneC, respectively, and remains constant during a certain period of time. According to common road design and planning, a road usually has multiple lanes in the same direction to meet different requirements of vehicular speeds. This is especially common on urban arterial roads and highways, so in order to make our model more suitable for practical traffic scenarios, we set different speeds in the three lanes.
In our scenario, tasks are generated by many devices, such as pedestrians' smartphones or wearable devices, traffic infrastructure generating computing data (RSUs, nearby traffic data centers, monitoring equipment, etc.), and stationary or mobile vehicles. If these tasks cannot be completed by traditional VEC servers (usually VEC servers are integrated with RSUs), we consider that RSUs offload these tasks to vehicles with idle computation resources. This process can be simplified into three parts: (i) tasks are generated by smart devices; (ii) RSUs collect the tasks; (iii) RSUs offload some tasks to mobile vehicles. We focus on part (iii) in this article.
Since service continuity is important in the IoV, it is worthwhile to explore the problem of handover between adjacent RSUs. The authors of [31] considered that service continuity can be provided by using radio access network (RAN) mechanisms such as multicast flows in handovers, or more advanced techniques like RAN multicast area management. They also proposed an end-to-end architecture for 5G multicast that goes beyond the current considerations for LTE broadcast, which exists solely as an isolated service [32]. Point-to-multipoint (PTM) transmissions could then be implemented in a flexible and dynamic manner as an essential RAN delivery tool, such that PTM transmissions become a built-in RAN functionality without any special considerations in the core network, making it possible to dynamically and seamlessly switch between point-to-point (PTP) and PTM transmissions over the dynamically configurable RAN multicast area (RMA). Through such enhancements, 5G could provide a unified framework for PTM and multicast content delivery for relevant verticals and applications, including automotive, airborne, IoT, media and entertainment, and public warning and safety services. Nevertheless, the handover between adjacent RSUs in our scenario is still an open problem that requires more consideration, and we will look for a more effective approach in future work.

A. TASK OFFLOADING
For convenience, we introduce a discrete-time system and divide the process into multiple identical time slots, with d denoting the length of each time slot; each d is small enough to approximate a continuous-time system. The speed of vehicle n is written as v_n, n ∈ N = {1, 2, ..., N}; therefore, the distance traveled by vehicle n in each time slot is v_n d, and the coordinate of each vehicle can be expressed as:

p_n[t] = p_n[0] + v_n d t, (1)

where p_n[0] is the initial coordinate of vehicle n, and p_n[t] is the coordinate of vehicle n in time slot t.
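Under the discrete-time model, the position update is a one-liner; the sketch below assumes the update rule p_n[t] = p_n[0] + v_n * d * t implied by the text (the function name is ours).

```python
def position(p0, v, d, t):
    """Coordinate of vehicle n in slot t: initial coordinate p_n[0] plus
    v_n * d per elapsed slot."""
    return p0 + v * d * t
```

So a vehicle starting at 0 m with v = 10 m/s and slot length d = 0.1 s is at 5 m after 5 slots.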
We let q_n[t] and q_m^RSU respectively represent the vertical positions of vehicle n and RSU m. Therefore, in slot t, the distance between vehicle n and RSU m can be expressed as:

d_{n,m}[t] = sqrt((p_n[t] − p_m^RSU)^2 + (q_n[t] − q_m^RSU)^2), (2)

where p_m^RSU is the horizontal position of RSU m. In our scenario, these vehicles always communicate with the nearest RSU; that is, vehicle n in cell m communicates with RSU m, and when this vehicle arrives at the next cell (i.e., cell m + 1), vehicle n switches to RSU m + 1. Since the Doppler effect may have a certain influence on the communication quality, the channel quality between a moving vehicle and the RSU ahead may be better than that with the RSU behind, so the channel to the nearest RSU may not always be the best among all RSUs. As shown in several existing works such as [18], the mobility of nodes in an intermittently connected network can be beneficial, since mobility creates more chances of contact between nodes, and thus the probability of communication and task offloading between the nodes also increases. The speeds of vehicles are limited in our scenario, and we simply consider an ideal communication model. Therefore, the actual communication distance D_n^off between vehicle n and the RSUs can be written as:

D_n^off[t] = min_m d_{n,m}[t]. (3)

Let P_RSU denote the transmission power of an RSU, so the transmission rate between RSUs and vehicles can be written as:

r_n[t] = W log2(1 + P_RSU (D_n^off[t])^(−α) / σ^2), (4)

where α is the path loss factor, σ^2 is the white Gaussian noise power, and W is the bandwidth of the RSU-vehicle channels, which is identical for all RSUs. Channel small-scale effects are common in urban environments; however, we only consider the ideal channel model for simplification, and frequency interference is neglected since the OFDM scheme is used in our scenario. Then, we need to obtain the offloading delay T_n^off of each vehicle.
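As a sketch, the transmission rate described above can be evaluated as a standard Shannon rate with distance-based path loss. The exact channel model is idealized in the paper; the function and argument names below are illustrative.

```python
import math

def tx_rate(W, P_rsu, D, alpha, sigma2):
    """Rate between an RSU and a vehicle at distance D:
    r = W * log2(1 + P_rsu * D**(-alpha) / sigma2)."""
    return W * math.log2(1.0 + P_rsu * D ** (-alpha) / sigma2)
```

With unit bandwidth, power, distance, and noise and alpha = 2, the SNR is 1 and the rate is exactly W * log2(2) = W.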
Firstly, according to formula (2), the longest transmission distance is defined as:

D_max = max_{n∈N, t} D_n^off[t]. (5)

Based on formula (5), the lower bound of the transmission rate between RSUs and vehicles can be written as:

r_low = W log2(1 + P_RSU (D_max)^(−α) / σ^2). (6)

These K tasks will be packaged into one multicast package and offloaded to the vehicles with idle computation resources; that is, these vehicles receive all K tasks at one time. The data size of the offloading package is Σ_{k∈K} x_k, so the upper bound of the transmission delay can be written as:

T^off = (Σ_{k∈K} x_k) / r_low. (7)

Therefore, we can express the upper bound of the transmission delay in terms of time slots, which is given by:

T^off / d. (8)

We then define t* as:

t* = ⌈T^off / d⌉. (9)

Finally, the number of time slots used by package offloading can be approximately expressed by:

t^off = t*. (10)
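The worst-case offloading delay and its slot count can be computed as below; a sketch assuming the ceiling-based slot conversion described in the text, with illustrative names.

```python
import math

def offload_slots(x_sizes, r_low, d):
    """Upper bound of the package transmission delay and the number of time
    slots it occupies: T_off = sum(x_k) / r_low, t* = ceil(T_off / d)."""
    t_off = sum(x_sizes) / r_low
    return t_off, math.ceil(t_off / d)
```

For example, a 1000-bit package at a worst-case rate of 100 bit/s takes 10 s, i.e., 4 slots of length 3 s.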

B. TASK COMPUTING
It is assumed that the idle computation resources of the vehicles remain fixed for a period of time; that is, the value of F = {f_1, f_2, ..., f_N} is constant during the task processing. Therefore, the time for computing each task can be written as:

t_{n,k}^comp = λ_k x_k / f_n, (11)

where λ_k is the computation intensity of task k, k ∈ K. Then, the number of time slots required by vehicle n to compute task k can be expressed by:

⌈t_{n,k}^comp / d⌉. (12)

C. TASK DELAY
The task delay consists of three parts: task offloading, task computing, and result feedback. If the RSUs receive the first feedback of task k from a vehicle, the time spent by this vehicle is the delay of task k. In this article, the time of result return is ignored, since the result data of a task is usually very small. In order to reduce unnecessary resource consumption, not all vehicles will provide computing services, since providing computing services consumes resources. We define y_{n,k} ∈ {0, 1}: if y_{n,k} = 0, vehicle n does not participate in computing task k; if y_{n,k} = 1, vehicle n participates in computing task k. Thus, we obtain a decision matrix Y = [y_{n,k}]_{N×K}, where the rows of the matrix represent different vehicles and the columns represent different tasks.
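The per-task computing time, its slot count, and a decision matrix can be sketched as follows (names are illustrative; the example matrix is ours).

```python
import math

def compute_time(lam_k, x_k, f_n):
    """Time for vehicle n (capability f_n) to compute task k: lam_k * x_k / f_n."""
    return lam_k * x_k / f_n

def compute_slots(lam_k, x_k, f_n, d):
    """Number of time slots vehicle n needs for task k."""
    return math.ceil(compute_time(lam_k, x_k, f_n) / d)

# Decision matrix Y: rows are vehicles, columns are tasks; Y[n][k] in {0, 1}.
Y = [[1, 0, 1],
     [0, 1, 0]]
```

So a 100-bit task at 2 cycles/bit on a 50 cycles/s vehicle takes 4 s, i.e., 2 slots of 3 s.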
In our model, we consider that the K tasks are packaged into one multicast package and offloaded to vehicles with idle computing resources; that is, these vehicles receive the K tasks at one time. Only after a vehicle has completely received the offloading package does its CPU start to compute tasks. Therefore, the first part of the delay is the transmission time of the package with K tasks. These K tasks will be selectively computed by vehicle n; that is, vehicle n processes 0, 1, or more tasks in this offloading package. We consider that a vehicle cannot compute multiple tasks simultaneously; if it processes multiple tasks, it processes them sequentially (the order of tasks does not affect the overall computing delay). For vehicle n, the delay of task k includes the transmission time of the multicast package with K tasks and the cumulative working time of the CPU until task k is completed. Therefore, for the whole system, the delay of task k is the time from the beginning of transmission until the earliest accomplishment of task k. However, not every vehicle takes part in computing task k, and if vehicle n does not participate in computing task k, its CPU occupied time is 0 (only for task k). Therefore, in a cumulative time system, the minimum term obtained by the Minimum function is most likely not the delay of task k, since the time of many vehicular CPUs for computing task k is 0. To deal with this problem, we assume that if vehicle n does not compute task k, its CPU occupied time is infinite. For convenience, we introduce a constant Q with a large value to achieve this effect. Therefore, the delay of task k can be obtained by:

t_k = min_{n∈N} { t^off + Σ_{j=1}^{k} y_{n,j} ⌈t_{n,j}^comp / d⌉ + Q(1 − y_{n,k}) }. (13)

In this article, we jointly optimize the overall delay and computing cost of these K tasks; that is, we aim to obtain good delay performance with low computing cost.
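The large-constant trick described above can be sketched as follows. This reflects our reading of the delay rule: a vehicle processes its selected tasks sequentially, and non-participating vehicles receive a large constant Q so the minimum ignores them. The names and the value of Q are illustrative.

```python
Q = 1e9  # large constant standing in for "infinite" CPU occupied time

def task_delay(k, t_off, y, t_comp):
    """Delay of task k over all vehicles: offload time plus the cumulative
    CPU time of a participating vehicle up to task k; y[n][j] in {0, 1}."""
    best = float("inf")
    for n in range(len(y)):
        if y[n][k] == 0:
            cand = Q  # vehicle n skips task k, so it cannot define the delay
        else:
            cumulative = sum(y[n][j] * t_comp[n][j] for j in range(k + 1))
            cand = t_off + cumulative
        best = min(best, cand)
    return best
```

With two vehicles, y = [[1,0],[0,1]] and unit offload time, task 0 is finished first by vehicle 0 and task 1 by vehicle 1.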
First, the overall delay of these K tasks can be written as:

t_sum = Σ_{k∈K} t_k, (14a)
s.t. y_{n,k} ∈ {0, 1}, ∀n ∈ N, ∀k ∈ K, (14b)
Σ_{n∈N} y_{n,k} ≥ 1, ∀k ∈ K, (14c)

where (14c) guarantees that at least one vehicle provides computing service for each task. Moreover, since these tasks occupy the computing resources of the vehicles, the vehicles should be paid some compensation, which can be regarded as the computing cost of the tasks. The computing cost can be defined as a function logarithmically related to the vehicles' computing time, given by:

E = u Σ_{k∈K} Σ_{n∈N} y_{n,k} log(1 + t_{n,k}^comp), (15)

where u is a constant that can be adjusted according to the tasks' tolerance for delay. If the tasks are very sensitive to delay, i.e., the system is willing to pay more for computing service, u can be smaller; if the tasks are not sensitive to delay, i.e., the system is willing to pay less for computing service, u can be larger. Next, the system's consumption can be defined as the weighted sum of t_sum and E, and we formulate the optimization problem as follows:

min_Y t_sum + E
s.t. (14b), (14c). (16)
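The system consumption, i.e., the weighted sum of the overall delay and the logarithmic computing cost, can be evaluated as below. This is a sketch under our reading of the text, with u as the weighting constant and illustrative names.

```python
import math

def system_consumption(delays, y, t_comp, u):
    """t_sum = sum of per-task delays; cost = sum over n, k of
    y[n][k] * log(1 + t_comp[n][k]); return t_sum + u * cost."""
    t_sum = sum(delays)
    cost = sum(y[n][k] * math.log(1.0 + t_comp[n][k])
               for n in range(len(y)) for k in range(len(y[0])))
    return t_sum + u * cost
```

A smaller u makes the cost term matter less, so the optimizer buys more computing service for delay-sensitive tasks, matching the discussion above.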

III. PROPOSED ALGORITHM FOR PROBLEM (16)
A. PROBLEM TRANSFORMATION
The key challenge in solving this problem is the integer constraint y_{n,k} ∈ {0, 1}, which makes problem (16) an integer non-linear programming problem. Additionally, min{x_1, ..., x_n} is a non-convex, non-linear function, so problem (16) is non-convex and NP-hard. In order to solve problem (16), we first transform it into an equivalent form, as shown in Lemma 1.
Note that Σ_{k∈K} Σ_{n∈N} y_{n,k} log(1 + t_{n,k}^comp) is linear in y_{n,k}, and in order to improve the convexity of the objective function, we can square the 0-1 variable y_{n,k}. This transformation does not change the value of the original objective function, since y_{n,k}^2 = y_{n,k}; replacing y_{n,k} with y_{n,k}^2 in (16) yields problem (17). Therefore, problem (17) is equivalent to problem (16).
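The squaring step is valid precisely because the decision variables are binary, which a two-line check confirms (the function name is ours):

```python
# For binary y, y**2 == y, so squaring the variables leaves the objective
# value unchanged while improving convexity after relaxation; the identity
# fails for fractional values, which is why the relaxed problem differs.
def square_preserves_binary(y):
    return y ** 2 == y
```

The identity holds for y in {0, 1} but not for, say, y = 0.5, where y² = 0.25.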
Considering that problem (17) is a 0-1 integer programming problem, the decision variable y_{n,k} ∈ {0, 1} can be relaxed into a slack variable z_{n,k} ∈ [0, 1], with the variable matrix denoted as Z. Thus, the original problem (16) is relaxed into a continuous-variable problem (20). Lemma 2: Problem (20) can be transformed into a convex optimization problem (21) [33]. Proof: Note that the Maximum function is very difficult to handle, so we focus on its analytic approximation [33]. In problem (21), replacing the Maximum function with its analytic approximation yields problem (22). Therefore, problem (22) is equivalent to problem (21).
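One common analytic approximation of the Maximum function is the log-sum-exp surrogate. The paper's exact form is given in [33], so the sketch below is an assumption rather than the paper's formula; `rho` controls the tightness of the approximation.

```python
import math

def smooth_max(xs, rho=50.0):
    """Log-sum-exp approximation of max(xs); larger rho tightens the bound.
    Shifting by max(xs) keeps the exponentials numerically stable."""
    m = max(xs)
    return m + math.log(sum(math.exp(rho * (x - m)) for x in xs)) / rho
```

The surrogate is smooth and differentiable, which is what gradient-based interior-point iterations require, and it always upper-bounds the true maximum.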

B. INTERIOR POINT METHOD
In this article, problem (22) can be solved by using the interior point method. First, the penalty function φ(Z, r) [33] is constructed as in (24). In each iteration, the penalty function φ(Z, r) is obtained according to the current matrix Z and parameter r.

Then, the gradient method, z_{n,k} = z_{n,k} + b ∂φ(Z, r)/∂z_{n,k}, is used to find the extreme point Z.
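The inner gradient step can be sketched as follows. The clipping to [0, 1] is our safeguard to keep iterates in the relaxed feasible interval (it is not stated in the text); b is the step size, and `grad` stands for ∂φ/∂z_{n,k}.

```python
def gradient_step(Z, grad, b=0.01):
    """One gradient step z_{n,k} <- z_{n,k} + b * dphi/dz_{n,k},
    clipped to the relaxed feasible interval [0, 1]."""
    return [[min(1.0, max(0.0, z + b * g)) for z, g in zip(z_row, g_row)]
            for z_row, g_row in zip(Z, grad)]
```

A large gradient saturates the variable at the boundary, while small gradients nudge it proportionally.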

C. ALGORITHM DESIGN
The feasible matrix Z of problem (22) needs to be converted to a matrix of integers, since each decision variable must be binary (0 or 1, i.e., y_{n,k} ∈ {0, 1}). According to the slack variables z_{n,k} obtained by each iteration of the interior point method, we first set a decision threshold β: if z_{n,k} < β, let y_{n,k} = 0; if z_{n,k} ≥ β, let y_{n,k} = 1. We then obtain the matrix Y of the original problem. Finally, the optimal matrix Y can be obtained by iterating the method above until convergence, which is summarized in Algorithm 1. The time complexity of our algorithm can be calculated as T(n) = 1 + n(NK + NK + 1 + NK + 1) = (3NK + 2)n + 1, which can be denoted as O(n) in the number of iterations n. The proposed algorithm thus has low complexity, good convergence, and high efficiency. In the VEC scenario, designing efficient, low-complexity algorithms is essential to ensure low latency and high reliability.
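The threshold rounding step can be sketched directly; β is the decision threshold from the text, and the function name is ours.

```python
def round_decision(Z, beta=0.5):
    """Convert relaxed z_{n,k} in [0, 1] to binary y_{n,k}: 1 iff z >= beta."""
    return [[1 if z >= beta else 0 for z in row] for row in Z]
```

Values at or above β round to 1, all others to 0, recovering a feasible 0-1 decision matrix for the original problem.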

IV. SIMULATION AND DISCUSSION
In this section, we give some simulation results to evaluate the effectiveness of our JDCM algorithm. We consider a one-way road where vehicles move at a constant speed and their positions are approximately uniformly distributed. As shown in Figure 1, RSUs are uniformly distributed along the road, near the side of the slow lane (i.e., lane C). We choose a certain cell as the starting cell and the vehicles in that cell as the target vehicles. The RSUs offload the package containing several tasks to these vehicles via multicast. As the algorithm iterates, the set of vehicles participating in computing changes constantly in order to obtain a low delay with as little computing cost as possible, and the system finally reaches the optimum.

A. SIMULATION PARAMETER
In the simulation scenario, we consider a one-way road without curves and intersections. The road has three lanes with different constant speeds; the speed of the slow lane is 10 m/s. Without loss of generality, we assume that a vehicle will not change lanes during a short period of time; that is, it will not change its driving state during the progress of task offloading. The computation resources of these vehicles are denoted as F = {f_1, f_2, ..., f_N} and are uniformly distributed in the range (50, 100). The main simulation parameters are shown in Table 1.
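The vehicle computing abilities in the simulation setup can be reproduced with a short sketch (the function name, seed, and default bounds are ours):

```python
import random

def sample_compute_resources(N, low=50.0, high=100.0, seed=0):
    """Draw F = {f_1, ..., f_N} uniformly from (low, high), as in the setup."""
    rng = random.Random(seed)  # fixed seed for reproducible experiments
    return [rng.uniform(low, high) for _ in range(N)]
```

Fixing the seed keeps runs comparable when benchmarking JDCM against other schemes.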

B. EVALUATION OF SIMULATION RESULTS
In this section, we verify the performance of the JDCM algorithm proposed in Algorithm 1 and compare it with other methods to evaluate its superiority. Figure 2 compares the performance under different offloading schemes. As shown in this figure, the task consumption is rapidly reduced by the JDCM algorithm proposed in this article. It converges within 6 iterations and eventually reaches the same optimal solution as the exhaustive search algorithm. Compared with the JDCM algorithm, the classic Genetic algorithm shows poor performance: it converges slowly and easily falls into local optima.
We also compare the performance of the JDCM algorithm under different vehicle speeds. We set up several speed sets for the three lanes. As shown in Figure 3, our algorithm performs very well under different vehicle speeds. Moreover, speed affects the transmission delay of tasks; a proper increase in speed may increase transmission efficiency, but an excessive speed is counterproductive [18].

Next, we consider the performance of the JDCM algorithm when the number of vehicles increases, that is, when the vehicle density increases. From Figure 4 we can see that the performance of the JDCM algorithm is still superior. Due to the increasing dimension of the solution space, the convergence speed of our algorithm may decrease to a limited extent. Compared with the JDCM algorithm, the other algorithms perform much worse and easily fall into local optima.
In Figure 5, we verify the effectiveness of our algorithm with different numbers of tasks. As shown in Figure 5, the results obtained by the JDCM algorithm are very close to the optimal solution. In addition, as the number of tasks increases, the performance of the JDCM algorithm remains superior. On the contrary, the performance of the Genetic algorithm becomes worse as the dimension increases. Finally, we compare the performance under different computation intensities of tasks. As shown in Figure 6, the results obtained by the JDCM algorithm are very close to the optimal solution. From this figure, we observe that as the computation intensity of tasks increases, the consumption of tasks increases linearly. Compared with the JDCM algorithm, the classic Genetic algorithm shows poor performance, as it easily falls into local optima.
Without loss of generality, we also consider the scenario in which the instantaneous speed of each vehicle changes over time and follows a Gaussian distribution with a mean of 20 m/s. As shown in Figure 7, the algorithm we proposed still has superior performance when vehicles drive at variable speeds. Based on the results above, JDCM has the characteristics of low complexity, excellent convergence, and high efficiency, and it can effectively reduce the task consumption. Therefore, we believe that the algorithm proposed in this article can solve problem (16) with superior performance.

V. CONCLUSION
In this article, a novel vehicle edge computing model is proposed, and the joint optimization problem of delay and cost for task offloading is formulated in this model. In the VeFN, we use a multicast-oriented method to offload a package with multiple tasks to vehicles on the road. Vehicles can choose to compute one or several tasks in this multicast package, or even none. With the JDCM algorithm proposed in this article, the task consumption is optimized, and some vehicles do not provide computing services due to high task delay or computing cost. Numerical results show that this scheme has superior performance and can significantly reduce task consumption compared with other schemes. In future work, we will consider more general scenarios, such as more complex roads, direction changes of vehicles, and the timeliness of tasks; in this case, the model can be transformed into a stochastic model, which remains to be further explored.