A Scalable VRU Protection System Based on Edge Servers

Various vulnerable road user (VRU) protection systems have been proposed based on the edge server paradigm to take advantage of the reduced latency as well as computational offloading to servers. In most existing studies, the authors presume that each edge server receives data from its associated users and takes care of the collision risks among them. Because of this presumption, the collision risks between users associated with different edge servers can be overlooked until one of the users at risk crosses the boundary of the server. Therefore, users located at or near the boundary of the edge server domain can receive late alerts or, more seriously, miss the alert entirely until a collision occurs. To address this hazardous scenario, we propose a scalable VRU protection system (SVPS) with an edge server cooperation mechanism. SVPS minimizes additional communication and computational overhead while maintaining satisfactory service accuracy even when users are moving. The numerical results demonstrate that SVPS effectively predicts risks between users associated with different edge servers. Furthermore, SVPS is demonstrated to be scalable: The larger the edge server coverage area, the lower the overhead. Therefore, the coverage area should be set as large as possible while still satisfying latency requirements.


I. INTRODUCTION
According to the World Health Organization (WHO), more than half of road traffic deaths occur among vulnerable road users (VRUs): pedestrians, bicyclists, and motorcyclists [1]. To address VRU-vehicle accidents, various sensor- and communication-based collision prediction and warning systems have been actively studied in academia and industry. Recently, advanced artificial intelligence (AI) technologies have enabled real-time, high-accuracy, sensor-based collision prediction and object detection even under dark or poor video conditions [2], [3].
However, even with advanced AI technologies, the miss rate for object detection is still at least 15% in poor video conditions such as fog, dust, or dark-clothed VRUs [3].
The associate editor coordinating the review of this manuscript and approving it for publication was Oussama Habachi .
Furthermore, the inherent limitation is that both sensors and cameras can only detect objects within their line-of-sight (LOS) field of view. In a non-line-of-sight (NLOS) scenario, obstacles such as street trees, buildings, and other vehicles can hinder the ability of sensors and cameras to detect objects [4]. Therefore, relying solely on sensor-based approaches has inherent limitations in accurately predicting collisions between VRUs and vehicles. Communication-based approaches use collision detection algorithms (CDAs) to alert users of potential risks via the periodic transmission of user data, including position, velocity, and angle; this data can be captured regardless of weather conditions, obstacles, or other features of the surrounding environment, leading to enhanced reliability in predicting potential risks. Communication-based approaches can be classified into two types depending on whether the CDA is processed by users or by remote servers.
User-based CDAs require that user data be exchanged among all participants using direct device-to-device (D2D) communication [4], [5], [6], and the latency is generally lower than when remote servers perform the CDA. Despite its effectiveness, there are still some areas that can be improved. First, D2D communication is prone to higher packet loss than infrastructure-based communication [7]. Second, as the number of users increases, the communication and computation load on user devices can increase rapidly, potentially shutting down the service; this is particularly problematic for users with limited battery power and computational resources. It is also necessary to match the frequency of data updates for slow-moving VRUs, such as pedestrians, to that of fast-moving vehicles, even though the rate of change in pedestrian status is considerably slower; otherwise, the algorithm might not recognize a slow-moving pedestrian and alert a vehicle that suddenly appears near the pedestrian [8]. However, these frequent user data updates can increase the energy overhead for user devices. Mobile phones are commonly assumed to be the most frequently used communication devices for VRUs, and because energy is a strict constraint for these devices, it is essential to conserve power whenever possible, particularly for always-on applications [9].
In contrast, if a remote server is deployed to perform the CDA, the user's data has to be transmitted to the server. Infrastructure-based communication is typically assumed in this scenario; specifically, the execution of the CDA is offloaded to the server, which alerts relevant users as needed. With a centralized cloud server, the computation time for risk prediction is virtually negligible, owing to its abundant computing power and storage resources [10]. However, the data and/or alert delivery latency can be inappropriately long, depending on the distance and network status between the user and server. The accuracy of the collision prediction service decreases as the delay in data and alert delivery increases [11].
Recently, extensive research has been conducted on edge servers located close to end users to address this problem [12]. Edge servers are especially receiving attention in time-sensitive applications because they enable offloading computation and reduce latency in data delivery by decreasing the distance between servers and users. With the emergence of the 5G mmWave band, the latency of the Radio Access Network (RAN) can be reduced to 1 msec; this is made possible by the variable frame structure, which leads to a significant decrease in response time [13]. Applying edge-driven approaches to the VRU protection system and utilizing 5G mmWave technology can significantly improve the delay, which is critical for the system's effectiveness.
In edge-server-based VRU protection systems, the service area is divided into multiple domains of edge servers. Thus, user mobility can potentially lead to hazardous situations, as illustrated in Fig. 1. Specifically, users located at or in close proximity to the edge server domain boundary can receive a late alert or, more seriously, not even receive the alert until a collision occurs. Most existing research on VRU protection systems that utilize edge servers assumes that each server is responsible for receiving data from its associated users and managing potential collision risks among them. Hence, the risks among users associated with different edge servers can potentially be overlooked until at least one at-risk user finally moves across the server boundary, resulting in both users being associated with the same server.
Therefore, we propose an edge-server-based scalable VRU protection system (SVPS) to effectively address the problem illustrated in Fig. 1. The edge server cooperation inevitably requires additional communication among the servers as well as an increase in computational load to execute the CDA for users who belong to neighboring edge servers. SVPS aims to minimize the additional communication and CDA computation overhead while ensuring satisfactory service accuracy. To analyze the overhead and effectiveness of SVPS, we conducted a series of simulation experiments using the open-source network simulator ns-3 and the traffic simulator Simulation of Urban MObility (SUMO). We measured the service accuracy, total communication overhead, and CDA computation overhead of SVPS, and the experimental results showed that SVPS provides high service accuracy in various scenarios, even when users are associated with different edge servers. The inter-server communication overhead incurred by SVPS amounts to approximately 15% of the communication overhead of systems in which edge servers operate independently. In addition, the mechanism deployed in SVPS for conserving CDA computation overhead reduces the total computation overhead by up to 80% compared to previous schemes. Furthermore, we found that SVPS was scalable in terms of the size of the entire service area as well as the overhead for an increasing number of users. Specifically, we found that the overhead decreased as we increased the edge server's coverage area. Therefore, it is desirable to maximize the coverage area of the edge servers, as long as the delay constraint can still be satisfied.
The rest of this paper is organized as follows: Section II discusses related works and background. The details of SVPS operation are presented in Section III. Section IV explains the setup of the simulation experiments and analyzes the numerical results. Finally, Section V concludes the paper.

II. RELATED WORKS
In this section, we present an overview of the existing studies on VRU protection systems, which can be classified as sensor-based or communication-based depending on the mechanism utilized for collision prediction. In Section II-A, we provide an overview of sensor-based systems. Communication-based systems are described in Section II-B.

A. SENSOR-BASED SYSTEMS
Rapid advances in AI technology have enabled real-time video processing, even for devices with limited computing resources, such as edge devices. Furthermore, numerous researchers have conducted studies to detect objects under dark or poor video conditions [2], [3]. Applying AI technologies, active research is underway on systems that can predict possible collisions and provide warnings even under conditions unfavorable for detecting VRUs or vehicles on roads. The systems that have been studied are either in-vehicle or infrastructure-based, depending on which devices are equipped with sensors.
The authors of [14] proposed an in-vehicle system for detecting and tracking pedestrians in videos collected using vehicle-mounted cameras; the system estimates the vulnerability of the VRU and provides more sophisticated services than previous sensor-based systems.However, the information used to predict vulnerability is limited; the system considers information such as the distance from the road or vehicle but not more detailed information such as the pedestrians' velocity or angle.A pedestrian who runs quickly on a sidewalk that is not adjacent to the road could be assigned low vulnerability even though this scenario is potentially dangerous.
In [3], the authors also utilized an in-vehicle camera to detect pedestrians, with the aim of improving detection performance in dark environments by using thermal sensing images along with video. Their system significantly reduced the inference time by quantizing the AI model and deploying it on an edge server for real-time analysis. However, owing to the LOS requirement for pedestrian detection, the miss rate remained high at 14%.
Placing cameras on high infrastructure, such as traffic lights, greatly increases the chances of meeting LOS conditions. In [15], cameras at intersections were used to detect pedestrians and vehicles in order to predict risks. However, because of the nature of infrastructure-based systems, no identifiable information is available for each detected user in a video, such as the level of danger, the distance between the target pair, or the time remaining until the collision. Consequently, an alert has to be broadcast to all users at the intersection and cannot include any user-specific information; as a result, it is difficult for users to respond appropriately to the level of risk they are facing.
Because sensors can only work well when the LOS is guaranteed, the service range is limited to the sensor's field of view; therefore, the dense installation of devices is necessary to ensure satisfactory safety-related services. The limitations of LOS can be alleviated by using infrastructure-based systems. However, real-time object detection presumes advanced computing hardware such as graphics and neural processing units, even for lightweight deep learning algorithms, and that hardware is very costly to deploy. Therefore, infrastructure-based systems are more appropriate for intersections and should be complemented with more scalable distributed methods in order to cover large areas, such as an entire city. It should be noted, however, that infrastructure-based approaches can require considerable enhancement to provide user-specific alerts. In summary, current state-of-the-art sensor-based approaches have an intrinsic scalability issue when covering a large area. Therefore, supplementary mechanisms are required to provide accurate service at a reasonable cost.

B. COMMUNICATION-BASED SYSTEMS
Communication-based systems can be classified as either user-based or server-based depending on the entity responsible for performing the CDA. In these systems, users periodically transmit data such as position, velocity, and angle that are necessary inputs for the CDA; either the servers or the users who receive the information can calculate the risk and make a decision for each VRU-vehicle pair. These systems effectively address significant challenges such as NLOS limitations and the inherent scalability issues of sensor-based systems.
In [16], the author proposed a centralized cloud-server-based system whereby all users send updates of their data to the cloud server, and the server sends alerts when necessary. Vehicles, which in contrast with mobile phones have abundant power, update at a fixed interval of 100 msec to keep up with their high velocity. On the other hand, the cloud server adaptively determines the update interval for pedestrians to restrain the quick drain on their mobile phone batteries. There were, however, several limitations to this mechanism.
First, resetting the update interval requires Cloud-to-Pedestrian (C2P) communication because changes in the update interval are determined by the remote cloud server. The server considers the number, speed, and distance of nearby vehicles to determine the update interval of a pedestrian. Therefore, maintaining an optimal update interval can require a significant amount of C2P communication because the set of nearby vehicles is highly dynamic over time. Second, the interval might not be optimal because it is determined based on nearby vehicle information rather than on the pedestrian's own status. This is because the information about the vehicles is more up-to-date (note that only the vehicles use a fixed 100 msec update interval) than the pedestrian information stored on the server, especially with a long pedestrian update interval. However, it is important to note that the update interval for pedestrians was adjusted to provide the cloud server with accurate data while conserving power. It is more straightforward and effective to adjust the update interval for pedestrians based on their own status rather than that of vehicles.
A centralized cloud server has the benefits of abundant computing power and storage resources. However, the long latency in delivering updates and alerts due to the distance between the server and users could be problematic. Recently, researchers have been investigating distributing edge servers near users and have applied such systems in various fields. Compared with the centralized cloud-server-based structure, latency can be reduced due to the shorter distances between the server and users. Edge-server-based systems also enable users to offload computation-intensive tasks, and with 5G mmWave, the RAN delay can be reduced to 1 msec because of the flexible frame structure [13]. Edge server systems and 5G are promising environments for safety systems with stringent delay requirements [17].
The authors in [18] proposed a VRU protection system using an edge server structure to solve the problem of energy constraints on end-user devices. The authors found that the main causes of high energy consumption were data preprocessing and execution of the CDA, and they proposed offloading these two phases to edge servers. Their system placed an edge server at each base station (BS), and because every user was supposed to transmit their data to the BS server, data preprocessing at the edge server could be readily provided to all users; the preprocessed data were then relayed back to the users so that the CDA could be executed on the users' devices.
At this point, a mobile phone could offload its CDA execution to the edge server instead of performing it itself, but to do so, the preprocessed data had to be transmitted back to the edge server. Because of the system's structure, which required data collection at the edge server, the main concern in their solution was the offloading of data preprocessing, and the option to offload the CDA execution was only available for mobile phones. Although locating edge servers at every BS makes preprocessing user data more convenient, the extra communication overhead required for CDA offloading is often overlooked.
The authors of [8] also leveraged an edge server to enhance the energy efficiency of pedestrians' mobile phones in the VRU protection system. In contrast to [16] and [18], the edge server only collected data from pedestrians and broadcast it to vehicles every 100 msec so that the vehicles ran the CDA and could effectively detect potential risks with VRUs in their proximity. Owing to this characteristic, long update intervals for slow-moving pedestrians did not compromise the service accuracy as long as the information stored at the edge server remained unchanged after the last update. Exploiting this feature, the authors proposed an algorithm for dynamically adjusting the update interval of pedestrian mobile phones to conserve energy. The mechanism utilized locally accessible data from a mobile phone, including the current movement status and the condition of the surrounding road, to dynamically adapt the update interval. The study's numerical results demonstrate that the mechanism effectively improved the energy efficiency of mobile phones while maintaining service accuracy; however, with their system, pedestrians could not independently perceive or react to imminent danger because only vehicles had the pedestrian information and could perform the CDA.
The authors of [7] proposed a hybrid approach that integrated user- and server-based systems: Their system used a broadcast channel to transmit the update data to both the edge server and nearby users. Transmitting the update data through a broadcast channel allowed users who were within the broadcast range of the sending user to receive the data and perform the CDA for themselves, thereby avoiding collision risks even if they were associated with different edge servers. Delays were shorter when user devices directly performed the CDA, but there was the disadvantage that the edge server and the user devices performed the CDA redundantly. Furthermore, service accuracy could only be guaranteed among users within broadcast range of each other when they participated in listening to the broadcast channel and executing the CDA. Unlike the mechanism in [8], the edge server in [7] obtained its positioning data from both vehicles and VRUs and hence could generate user-specific alerts for both types of users at risk.

The authors of most existing studies on edge-server-based VRU protection systems have primarily focused on conserving energy while ensuring high service accuracy. Their systems have the advantage of better latency compared with cloud-server-based systems. However, it is important to note that such systems typically consist of multiple edge servers that offer services across large geographic areas such as entire cities. Therefore, addressing the problem illustrated in Fig. 1 is crucial for providing the required service accuracy, especially when considering user mobility.

III. A SCALABLE VRU PROTECTION SYSTEM
In this study, we propose an SVPS based on edge servers that includes a novel mechanism for edge server cooperation to address the problem illustrated in Fig. 1. The purpose of SVPS is to maintain high service accuracy despite user mobility over multiple edge server domains while reducing the additional communication and CDA computation overheads incurred by the mechanism. Fig. 2 shows the architecture of SVPS.
The system comprises three entities: edge servers, VRUs, and vehicles. An edge server is connected to one or more BSs, and users transfer their data to the edge server through the BS they are currently associated with. Our system assumes that the radio access network uses 5G to link user equipment to BSs [17]. In the system, each vehicle is presumed to be equipped with a Global Positioning System receiver for gathering positioning data. VRUs and vehicles transmit the collected position, velocity, and angle to their respective local edge servers via unicast, and edge servers receive data from users within their domain or from a neighboring edge server according to the cooperation mechanism explained in Section III-B. Upon receiving a user update, an edge server updates its database (DB) with the new data. Then, it computes the CDA to predict collisions between the new data and other users' data stored in its DB; if necessary, the system sends an appropriate alert to users. An edge server also sends some of its user data to neighboring edge servers based on the mechanisms explained in Section III-B for determining which user data should be sent to which neighboring servers.
In Section III-A, we first explain the proposed collision prediction mechanism to reduce the CDA computation overhead. Then, in Section III-B, we describe the edge server cooperation mechanism to address the problem caused by having a service area comprising multiple domains of edge servers.

A. COLLISION PREDICTION MECHANISM
In SVPS, users send updates (position, velocity, and angle) to their respective edge servers. Specifically, vehicles update their data based on a dynamic safety message generation algorithm standardized by the European Telecommunications Standards Institute (ETSI) [19]. According to the algorithm, the frequency of updates varies based on the vehicle's velocity, moving angle, and distance traveled, and the vehicle update interval can range from 0.1 to 1 second; VRUs, in contrast, update their data every second. The edge server can receive updates not only from users within its own domain but also from neighboring edge servers, as described in the cooperation mechanism in Section III-B. Whenever the edge server receives an update from a VRU, vehicle, or neighboring server, it first updates its DB with the received data and then computes the CDA to predict potential collisions between the new data and other user data in its DB. To minimize the CDA computational overhead, the users that execute the CDA are selected based on their proximity to the user whose new data has just been received.
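The dynamic triggering described above can be sketched as follows. This is a hedged illustration, not the standardized algorithm itself: the threshold values (4 degrees, 4 m, 0.5 m/s) follow the commonly cited ETSI message generation rules [19], and the function and constant names are our own.

```python
# Illustrative sketch of ETSI-style dynamic update triggering [19].
# A vehicle transmits a new update when its heading, position, or speed
# has changed enough since the last update, but never sooner than 0.1 s
# and never later than 1 s after the previous one.
T_MIN, T_MAX = 0.1, 1.0   # update interval bounds for vehicles (s)
HEADING_TH = 4.0          # heading change threshold (degrees)
DISTANCE_TH = 4.0         # position change threshold (m)
SPEED_TH = 0.5            # speed change threshold (m/s)

def should_send_update(elapsed, d_heading, d_position, d_speed):
    """Decide whether a vehicle should transmit a new update now."""
    if elapsed < T_MIN:
        return False      # never faster than 10 Hz
    if elapsed >= T_MAX:
        return True       # never slower than 1 Hz
    return (abs(d_heading) >= HEADING_TH
            or d_position >= DISTANCE_TH
            or abs(d_speed) >= SPEED_TH)
```

VRUs, by contrast, would simply use a fixed 1 second timer.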
A spatial DB and an R-tree index can be deployed to efficiently store spatial data and provide appropriate replies to queries related to spatial data [20], [21]. The system maintains the user data in a soft state at the edge server; if no update is received during a specific period, the user is assumed to be inactive or to have moved away from the edge server domain and is deleted from the DB.
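The soft-state bookkeeping can be sketched as a small in-memory table. The expiry period and all names below are illustrative assumptions, not values from the paper.

```python
import time

STALE_AFTER = 5.0  # seconds without an update before a user is purged (illustrative)

class SoftStateDB:
    """In-memory user table with soft-state expiry."""
    def __init__(self):
        self.users = {}  # user_id -> (data, last_update_time)

    def update(self, user_id, data, now=None):
        # Refresh the soft state on every received update.
        self.users[user_id] = (data, now if now is not None else time.time())

    def purge_stale(self, now=None):
        # Drop users that have not sent an update within STALE_AFTER seconds:
        # they are assumed inactive or to have left the domain.
        now = now if now is not None else time.time()
        stale = [uid for uid, (_, t) in self.users.items()
                 if now - t > STALE_AFTER]
        for uid in stale:
            del self.users[uid]
        return stale
```

A production server would apply the same rule inside the spatial DB rather than a plain dictionary.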
To configure a subset of users to perform the CDA with newly received data, a spatial query is used to find the set of users located within a fixed radius r from the newly received user. The radius r is determined based on the vehicle speed limit v_VEHmax and the expected highest VRU moving velocity v_VRUmax for each edge server domain, as shown in (1):

r = (v_VEHmax + v_VRUmax) · t_subset    (1)

That is, the fixed radius r is the distance that can be covered at the maximum relative velocity of the vehicle and VRU during the threshold time t_subset, which is the target time from the alert delivery to the moment of a possible collision. As a result, even if both the received user and the user in the DB are moving at the speed limit, the edge server can consider them approximately t_subset seconds before the two entities collide. Fig. 3 shows an example of configuring the subset when the edge server receives an update from VEH_1.
First, the edge server updates its DB and performs a spatial query to find any VRU within r meters of VEH_1. The reply to the query is the subset that includes VRU_2, VRU_3, and VRU_4, for which the CDA is to be executed with VEH_1. We use the CDA proposed in [22] in our study, although our mechanism is not limited to a specific CDA. If the results of the CDA computations predict a risk between certain VRU and vehicle pairs, the edge server accordingly sends alerts to each user.
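The radius of (1) and the subset query can be sketched as follows. A deployed server would issue the range query against a spatial DB with an R-tree index; here a brute-force scan stands in for that query, and all names and values are illustrative.

```python
import math

def subset_radius(v_veh_max, v_vru_max, t_subset):
    # Eq. (1): distance coverable at the maximum relative velocity
    # of a vehicle and a VRU within the threshold time t_subset.
    return (v_veh_max + v_vru_max) * t_subset

def query_subset(new_user, db, r):
    """Stand-in for the spatial DB range query: users within r of new_user."""
    nx, ny = new_user["x"], new_user["y"]
    return [u for u in db
            if u["id"] != new_user["id"]
            and math.hypot(u["x"] - nx, u["y"] - ny) <= r]
```

For example, with a 60 km/h speed limit (about 16.7 m/s), a fast-moving VRU at 2.8 m/s, and t_subset = 3 s, the radius is about 58.5 m.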
The main objective of the proposed collision prediction mechanism is to reduce the excessive execution of the CDA when the edge server receives new data by configuring a subset of users who are in close proximity to the newly received data. However, computing the subset using a spatial DB query incurs a new sort of computational overhead. The overall time complexity of the proposed collision prediction mechanism is therefore determined by the combined overhead of the following two steps: 1) finding a subset using a spatial DB query and 2) executing the CDA.
In step 1), we presume that the edge server uses a spatial DB and an R-tree index. This combination allows for the efficient and rapid retrieval of spatial data, including geographic location [20], [21]. The search time complexity on a spatial DB using the R-tree index is known to be O(log N) when the amount of data in the DB is N [23]. Therefore, the time complexity of step 1) is O(log N).
In step 2), the CDA execution is iterated on the edge server, specifically between the newly received data and the elements of the subset obtained from the outcome of step 1).
Assuming the utilization of the CDA from [22], the execution involves several steps with constant time complexity. These steps include computing the Euclidean distance and relative velocity, estimating the time to collision (TTC), and comparing the TTC with the threshold value. Thus, the time complexity of a single CDA execution is O(1). If the size of the subset obtained from step 1) is M, the time complexity of step 2) is O(M). Thus, the overall time complexity of the proposed mechanism when the edge server receives new data can be expressed as O(log N) + O(M). If the system does not leverage the spatial DB and the query, step 1) is skipped, and step 2) is performed for all users in the DB. In this case, the overall time complexity is O(N).
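The constant-time steps listed above can be sketched as a single CDA check. This is a simplified closing-speed model in the spirit of [22], not the exact algorithm; the threshold value and all names are illustrative.

```python
import math

TTC_THRESHOLD = 3.0  # seconds (illustrative)

def cda_check(vru, veh):
    """One O(1) CDA execution: flag a risk if the estimated TTC is below threshold."""
    dx, dy = veh["x"] - vru["x"], veh["y"] - vru["y"]
    dist = math.hypot(dx, dy)                  # Euclidean distance
    # Relative velocity projected onto the line connecting the pair
    # (positive when the two users are closing on each other).
    rvx = vru["vx"] - veh["vx"]
    rvy = vru["vy"] - veh["vy"]
    closing_speed = (rvx * dx + rvy * dy) / dist if dist > 0 else float("inf")
    if closing_speed <= 0:
        return False                           # moving apart: no predicted collision
    ttc = dist / closing_speed                 # estimated time to collision
    return ttc < TTC_THRESHOLD                 # compare TTC with the threshold
```

Each call performs a fixed number of arithmetic operations, which is why a single CDA execution is O(1) and step 2) costs O(M) for a subset of size M.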
The efficiency gained by the proposed mechanism increases as the ratio of M to N decreases. In practice, the size M of the outcome from step 1) is at most N. Furthermore, as the coverage area of an edge server increases, N grows while M remains bounded by the fixed radius r, so the ratio of M to N decreases. In contrast, in a mechanism that does not leverage a spatial DB, the CDA is performed N times for each update. The proposed mechanism can also be more beneficial when applied to a more complex CDA, as it effectively reduces the number of CDA executions.

B. THE EDGE SERVER COOPERATION MECHANISM
Deploying an edge server structure is a good compromise between the pros and cons of direct user communication and centralized cloud-server-based approaches. However, the intrinsic characteristic of the edge server structure, that is, dividing the entire service area into different edge server domains, can cause performance failures such as that illustrated in Fig. 1. If user data are only sent to the currently associated server and not to neighboring servers until the user actually enters a new edge server domain, there is a possibility of delayed or missing alerts because the CDA might not predict a risk until at least one at-risk user crosses over into the other server's domain. Therefore, it is necessary for the edge servers to share information regarding users near the boundaries of any given server domain. Both the selection of users for data transfer and the determination of the neighboring edge servers to which user data should be transferred have an impact on the service accuracy and overhead of the edge server cooperation mechanism.
In SVPS, when a server receives an update from an associated VRU that is connected to a BS adjacent to another edge server domain, it checks whether the current location of the VRU is close enough to the boundary of its neighboring edge servers. If it is, the edge server delivers the data of that VRU to those edge servers, and the neighboring servers can predict the risks between the VRU and vehicles in their domains. If a risk is predicted, the neighboring edge server forwards the alerts to its associated vehicle and to the edge server relaying the VRU data.
Each edge server has to process two phases: 1) deciding which users' data to export and which neighboring edge servers to deliver the information to and 2) sending alerts to the users. Cooperation between edge servers can significantly improve service accuracy, but it also has the drawback of additional overhead in both communication and computation. SVPS tries to restrain the amount of inter-server communication and additional CDA computation. The details of each phase are described as follows.

1) DECIDING THE BORDER USERS AND EXPORTING THE INFORMATION TO THE TARGET SERVER
As shown in Fig. 4, if neighboring servers exchange information about both the VRU and the vehicle, they will calculate the risk for the same VRU-vehicle pairs, resulting in redundant CDA computations; to avoid this, SVPS should export only VRU or only vehicle data. In SVPS, the servers share VRU data to reduce the CDA computation load and inter-server communication overhead. Note that the CDA must be computed whenever an update arrives in SVPS. Given that fact, sharing VRU rather than vehicle data has a better chance of reducing CDA computations because the number of VRUs on the road tends to be smaller than the number of vehicles. Moreover, the VRU update interval is generally longer than that of vehicles because VRUs use a fixed 1 second interval, whereas vehicles adopt a dynamic interval ranging from 0.1 to 1 second. The amount of inter-server communication can also be reduced compared with the case of sharing vehicle data or data from both types of users.
SVPS defines the domain of an edge server as a set of connected cellular network cells, and Fig. 5 illustrates the structure of edge server domains. BS_ij represents BS j of edge server i, and the BSs adjacent to other edge server domains are defined as border BSs. The target BSs of a border BS are defined as neighboring BSs that are one hop away and belong to different edge servers; for instance, in Fig. 5, BS_25, BS_31, and BS_38 are the target BSs for BS_15. The users associated with the border BSs are located on the outskirts of the edge server domain, and those VRUs can move over to a new domain. The edge servers take into account the location and velocity of the VRUs to determine whether, and to which, neighboring edge servers their information should be sent.
When the edge server receives data from a VRU located in one of the border BS cells, it calculates d_VRUtoTarget, the distance from the VRU to the target BS, as shown in Fig. 6, for each of the target BSs of that specific border BS. Thus, d_VRUtoTarget can be obtained from the current location of the VRU and the location of the target BS, denoted by (x_VRU, y_VRU) and (x_TargetBS, y_TargetBS), respectively, as shown in (2):

d_VRUtoTarget = sqrt((x_VRU − x_TargetBS)^2 + (y_VRU − y_TargetBS)^2)    (2)

Then, the distance d_remain from the VRU to each target BS boundary is calculated for each target BS using (3), where r_BS represents the coverage radius of the BS that the VRU is currently associated with (see Fig. 6):

d_remain = d_VRUtoTarget − r_BS    (3)

The relative velocity v_rel of the VRU to the vehicle is obtained by assuming that the vehicle is approaching from the opposite direction at the speed limit, as defined in (4):

v_rel = v_VRU + v_VEHmax    (4)

Using d_remain and v_rel, the remaining time t_remain for the VRU to enter the cell of the target BS is computed as shown in (5):

t_remain = d_remain / v_rel    (5)

Hereinafter, we refer to a VRU as a 'border VRU' if t_remain to one or more target BSs is below a threshold θ_remain, and to the edge servers in charge of those target BSs as 'target servers'. For border VRUs, the probability of collision with a vehicle in a neighboring edge server domain is not considered negligible; therefore, an update for the border VRU is relayed to the target edge servers. Fig. 7 shows an example of how to decide on a border VRU and select the target edge servers for that VRU.
As shown in Fig. 7, the VRU is currently associated with BS_1, a border BS of Edge Server 1. Among the 1-hop neighboring BSs of BS_1, BS_2, BS_3, BS_4, and BS_5 are the target BSs of BS_1. t_remain for the border VRU is computed for each of these target BSs, and t_remain for BS_2 and BS_3 falls below the threshold θ_remain. As a result, Edge Server 2 and Edge Server 3, to which BS_2 and BS_3 are connected, respectively, are determined to be the target edge servers for the border VRU. Edge Server 1 therefore relays the border VRU's update to Edge Server 2 and Edge Server 3.
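The border-VRU test of Eqs. (2)-(5) can be sketched directly in code. This is a minimal illustration under our own naming; v_limit stands for the speed-limit term (v_VEHmax in the text) and theta_remain for the threshold θ_remain.

```python
import math

def t_remain_to_target(vru_xy, target_bs_xy, r_bs, v_vru, v_limit):
    d_vru_to_target = math.dist(vru_xy, target_bs_xy)  # Eq. (2): Euclidean distance
    d_remain = d_vru_to_target - r_bs                  # Eq. (3): distance to the cell boundary
    v_rel = v_vru + v_limit                            # Eq. (4): worst case, vehicle head-on at the speed limit
    return d_remain / v_rel                            # Eq. (5): time to enter the target cell

def select_target_servers(vru_xy, v_vru, targets, v_limit, theta_remain):
    """targets: iterable of (server_id, target_bs_xy, r_bs) tuples.
    Returns the edge servers whose t_remain is below the threshold."""
    return {sid for sid, bs_xy, r in targets
            if t_remain_to_target(vru_xy, bs_xy, r, v_vru, v_limit) < theta_remain}

# A running VRU (5 m/s) near two target cells; speed limit 60 km/h.
targets = [(2, (500.0, 0.0), 350.0),   # server 2: 150 m of remaining distance
           (3, (0.0, 900.0), 350.0)]   # server 3: 550 m of remaining distance
print(select_target_servers((0.0, 0.0), 5.0, targets, 60 / 3.6, theta_remain=10.0))
# {2}: only server 2's cell is reachable within the 10 s threshold
```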
At the target edge servers, VRU data transferred from a neighboring edge server are treated in the same way as data received from users in their own domain: the received VRU data are stored in the DB, and the CDA is performed with the selected subset of vehicles that are within r of the received VRU position. For border VRU data, the identity of the border VRU's home edge server is also stored in the DB so that alerts can be sent back through that edge server if necessary. Note that the cooperation is carried out entirely on the server side; hence, the users incur no additional overhead.

2) SENDING THE ALERTS

Fig. 8 shows an example of alert delivery to a border VRU. When a target edge server detects a risk involving a border VRU from a neighboring edge server domain and one or more vehicles within its own domain, it sends the alert both to the border VRU's edge server (Edge Server 1 in the figure) and to the vehicles within its own domain. The edge server of the border VRU receives the alert and relays it to the corresponding border VRU within its domain.
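How a target server might handle a relayed border-VRU update can be sketched as follows. This is an assumed in-memory stand-in for the server's spatial DB and radius-r subset query, not the paper's implementation; all identifiers are illustrative.

```python
import math

def handle_relayed_vru(db, vru_id, vru_xy, home_server):
    # Store the update together with the home server id, so any later
    # alert can be relayed back to the VRU through its own edge server.
    db[vru_id] = {"pos": vru_xy, "home_server": home_server}

def cda_subset(vru_xy, vehicles, r):
    # Run the CDA only against vehicles within r of the received VRU
    # position, mirroring the spatial-query subset selection.
    return [v_id for v_id, v_xy in vehicles.items()
            if math.dist(vru_xy, v_xy) <= r]

db = {}
handle_relayed_vru(db, "vru7", (10.0, 0.0), home_server=1)
vehicles = {"veh1": (40.0, 0.0), "veh2": (300.0, 0.0)}
at_risk = cda_subset(db["vru7"]["pos"], vehicles, r=100.0)
print(at_risk)  # ['veh1']: only veh1 lies within the CDA radius
```

If the CDA then flags a risk, the alert would go to the listed vehicles directly and to the border VRU via the stored home server.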

IV. SIMULATION EXPERIMENTS AND NUMERICAL RESULTS
We evaluated the performance of SVPS through simulation experiments using the network simulator ns-3 and the traffic simulator SUMO. We used the 5G mmWave module for communication between the BSs and users and conducted two sets of experiments, which yielded the following results: 1) the proposed edge server cooperation mechanism of SVPS increased service accuracy, and 2) SVPS showed promising overhead and scalability in terms of both the number of users and the size of the edge server coverage area. We compared the performance of SVPS with the following two edge-server-based VRU protection schemes: 1) Unicast No Cooperation (UNO): as in SVPS, all vehicles and VRUs transmit their updates to the edge server via the unicast channel, and the CDA is performed only on the edge servers; unlike SVPS, however, each edge server operates independently. 2) Broadcast (BR) [7]: all vehicles and VRUs transmit their updates to the edge servers; the key difference from UNO is that updates are transmitted via broadcast channels, so the CDA can be executed both on the edge servers and on user devices that listen to the broadcast channel. For the BR scheme, we vary the probability p that a user listens to the broadcast channel and conducts the CDA for itself. The simulation settings, parameters, and numerical results of each set of experiments are presented in Sections IV-A and IV-B.

A. SERVICE ACCURACY
The latency of an alert determines whether the risk can be controlled and reacted to in time. To estimate the latency of an alert for a possible collision, we measured the elapsed time from the alert to the collision that would occur if no alert were provided; this interval is denoted as the TTC hereinafter. Note that a longer TTC implies an earlier alert. Adopting the CDA and alert criteria proposed in [22], the edge servers send an alert to the users when the TTC is less than 10 seconds. The braking time until the vehicle's velocity reaches zero (t_brake) depends on the vehicle's velocity. The driver's reaction time to recognize the alert and brake (t_react) depends on the human reaction time, for which we use the statistical value presented in [18]. Specifically, t_brake is defined by (6), where d_brake is the vehicle's braking distance and v_veh is the vehicle's current velocity:

t_brake = 2 * d_brake / v_veh.    (6)

d_brake is calculated using (7), where μ is the coefficient of friction between the tires and the road, and g is the gravitational acceleration [24]:

d_brake = v_veh^2 / (2 * μ * g).    (7)
Based on t_brake and t_react, we define t_late, as shown in (8), as the criterion for determining whether an alert is late:

t_late = t_brake + t_react.    (8)

Note that if the TTC is smaller than t_late, the driver does not have enough time to recognize the alert and bring the vehicle to a stop before a collision occurs; the vehicle cannot avoid the collision even if the driver fully brakes immediately after receiving the alert.

First, we considered two scenarios, depending on whether it was a VRU or a vehicle that crossed the edge server boundary. We also considered five different collision points for each boundary-crossing user type: collisions occurring at distances of 0, 10, 50, 100, and 200 meters from the edge server boundary. Finally, we tested three different vehicle velocities (45 km/h, 60 km/h, and 90 km/h), as well as the walking (1 m/s) and running (5 m/s) velocities of the VRUs, for each combination of boundary-crossing user type and collision point. Consequently, 54 different scenarios were tested. For BR in particular, we varied p, the probability that a user listens to the broadcast channel and conducts the CDA for itself, over 0, 0.5, and 1. By randomly varying the update instant within the update interval (between 0 and 1 second for a VRU and between 0 and 100 ms for a vehicle), the TTC was measured 100 times for each scenario, and the average and worst-case TTCs were obtained.
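A worked check of the late-alert criterion can make Eqs. (6)-(8) concrete. The constant-deceleration forms below are assumptions consistent with the definitions in the text; μ = 0.7 (roughly dry asphalt) and t_react = 1.5 s are illustrative values, not the paper's parameters.

```python
G = 9.81   # gravitational acceleration, m/s^2
MU = 0.7   # assumed tire-road friction coefficient (illustrative)

def d_brake(v_veh):           # Eq. (7): braking distance
    return v_veh ** 2 / (2 * MU * G)

def t_brake(v_veh):           # Eq. (6): braking time until v = 0
    return 2 * d_brake(v_veh) / v_veh

def t_late(v_veh, t_react):   # Eq. (8): an alert is late if TTC < t_late
    return t_brake(v_veh) + t_react

v = 60 / 3.6                  # 60 km/h in m/s
print(round(t_late(v, t_react=1.5), 2))  # about 3.93 s for these assumed parameters
```

Any alert arriving with a TTC below this value leaves the driver no chance to stop in time, which is why the TTC curves in Figs. 10 and 11 are compared against t_late.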
According to the WHO's national urban vehicle speed limit statistics, approximately 70% of countries have adopted a speed limit of 60 km/h (16.67 m/s) or less in urban areas [25]. Cyclists are assumed to be the fastest among the various types of VRUs, traveling at an average velocity of 16-18 km/h in urban areas [26], [27]. Therefore, in this experiment, we set v_VEHmax and v_VRUmax to 60 km/h and 17 km/h, respectively. Table 1 summarizes all parameter settings for this simulation experiment.

For UNO, the average TTC decreases as the expected collision point gets closer to the edge server boundary in both the VRU and the vehicle crossing cases (see Fig. 10(a), (b), and (c) and Fig. 11(a), (b), and (c), respectively). In particular, for collisions that occur right at the boundary (0 meters) between two edge servers, the alert is not received until the collision occurs in all scenarios; this is because, at best, the first update is received at the moment of collision, given that an edge server can receive an update from a user only after the association with that user is completed. In addition, in UNO, the average TTC is shorter when the boundary-crossing user moves faster: as a boundary-crossing user moves faster, it reaches the collision point earlier, so the time to react to the risk becomes shorter. Finally, the average TTC is shorter in the vehicle crossing cases than in the VRU crossing cases (compare Fig. 10(a) and 11(a), 10(b) and 11(b), and 10(c) and 11(c)). This is because vehicles approach the collision points much faster than VRUs owing to their higher velocities.
These results raise meaningful real-world concerns, because in real-world environments vehicles are expected to pass through domain boundaries more frequently than VRUs owing to their higher velocities. For the VRU crossing cases (see Fig. 10(a), (b), and (c)), when the VRU velocity is 1 m/s, the average TTC is less than t_late only when the expected collision point is right at the edge server boundary. In contrast, when the VRU velocity is 5 m/s, the average TTC is less than t_late for the expected collision point 10 meters away as well as in the 0 meter case. Furthermore, for the vehicle crossing cases (see Fig. 11(a), (b), and (c)), the average TTC is less than t_late for expected collision points 10, 50, and 100 meters away from the edge server boundary as the velocity of the boundary-crossing vehicle increases from 45 km/h to 60 km/h to 90 km/h, respectively.
For BR, the average TTC decreased as p decreased, in both the VRU and the vehicle crossing cases (see Fig. 10(d), (e), and (f) and Fig. 11(d), (e), and (f), respectively). When p is 0, the result is the same as in UNO, because only the edge server detects the risks and issues the alerts. When p is 0.5, about 50% of the users receive nearby users' updates and perform the CDA directly. These users can therefore actively detect and avoid the risks; more importantly, risks at or near the edge server boundary can be avoided because users in the neighboring edge server domain are detected through the broadcast channel as long as they are within the broadcast range. Consequently, the average TTC of BR with p = 0.5 is higher than that of UNO in every scenario.
However, the other 50% of the users face the same risk as in UNO with regard to collisions at or near the edge server boundaries. Appendix A shows the worst-case TTCs among the 100 repeated experiments for the VRU and the vehicle crossing cases. Users could receive a late alert in BR with p = 0.5 even when the average TTC was higher than t_late. When p was equal to 1, both the worst-case and the average TTC were higher than t_late in all scenarios, because all users received nearby users' updates and performed the CDA for themselves. Therefore, to guarantee the desired service accuracy, all users must listen to the broadcast channel and perform the CDA directly.
However, with p less than 1, there remains a risk of late or missed alerts for users who do not perform the CDA calculations themselves. Similar to UNO, the average TTC was shorter in the vehicle crossing cases than in the VRU crossing cases (compare Fig. 10(d) and 11(d), 10(e) and 11(e), and 10(f) and 11(f)), and the average TTC tended to decrease as the velocity of the boundary-crossing user increased (compare the cases of 1 m/s and 5 m/s VRU velocities for each p in Fig. 10(d), (e), and (f) and the vehicle velocities in Fig. 11(d), (e), and (f)). This occurs because BR predicts the risk once a user enters the broadcast range; consequently, as the user's velocity increases, the time required to travel a fixed distance decreases, resulting in a shorter TTC. In addition, similar to UNO, the average TTC decreased as the expected collision point approached the edge server boundary. Note that a decreasing TTC implies that the time to react to the risk becomes shorter even if the TTC does not fall below the critical threshold t_late.

In contrast to UNO and BR, with the proposed SVPS, not only are the worst-case TTC and the average TTC higher than t_late in all scenarios, but they are also maintained at approximately 10 seconds in all scenarios (see Fig. 10(g), (h), and (i), Fig. 11(g), (h), and (i), and Appendix A). This is because SVPS uses the remaining time to the boundary, rather than a fixed distance, as the criterion for exchanging VRU updates with neighboring servers. As a result, SVPS can track risks ahead of a given time threshold regardless of a user's velocity.
In summary, Fig. 10 and Fig. 11 show that an edge-server-based VRU protection system without edge server cooperation can result in hazards such as delayed or even missed alerts for users located near and/or moving toward the edge server boundary, which can in turn result in collisions. With SVPS, these risks can be avoided entirely by ensuring that the edge servers are updated with the information of such users in advance. Furthermore, the TTC can be controlled to specific target levels.

B. OVERHEADS AND SCALABILITY OF THE EDGE SERVER COOPERATION MECHANISM
We measured the communication and computation overhead incurred by SVPS. The computational overhead was measured as the number of CDA executions, and the communication overhead as the total number of data updates in the system, including both border-user data exchanges between the edge servers and data updates from the users to their respective edge servers. The simulation network area for this experiment was an 8.2 km × 8.2 km grid, and we assumed that users moved randomly within the area following the Gauss-Markov mobility model. The number of users ranged from 500 to 5,500, with a VRU-to-vehicle ratio of 2:3. The simulation time was 1,000 seconds, and we report the results as average values per second. We tested three sizes of edge server coverage area based on the number of BSs connected to a single server, specifically 7, 3-4, or 1 BS(s), which we call Types A, B, and C, respectively. Fig. 12 illustrates these three types, and Table 2 summarizes the parameters used in the experiments.
Fig. 13 compares the total communication overhead incurred by the systems. For all the compared schemes, the communication overhead increases linearly with the number of users. Whereas the size of an edge server coverage area is irrelevant to the amount of communication overhead in UNO and BR, the overhead in SVPS does depend on the coverage area size. Specifically, in SVPS, smaller edge server coverage areas incur more communication owing to more frequent edge server boundary crossings. The communication overheads incurred by SVPS were approximately 15%, 28%, and 71% of the total communication overhead incurred by UNO and BR for coverage Types A, B, and C, respectively. Fig. 14 compares the number of CDA computations for SVPS when the spatial DB and query are leveraged (''subset'') versus when the CDA is applied to all users in the edge server domain (''entire'') for the three types of edge server coverage areas.
For all three types of edge server coverage areas, the CDA computation overhead of the entire mechanism increased exponentially as the number of users increased, although the degree varied depending on the size of the edge server coverage area. The overhead was more significant for larger coverage areas because the number of users involved in the CDA computations for each user data update increases as the coverage area of the edge server expands.
In contrast, the subset mechanism used in SVPS not only significantly reduced the CDA computation overhead but also made the difference in CDA computation overhead across the different coverage area sizes almost negligible. This is because the set of users involved in the CDA for a newly received update is limited to those located within a certain proximity of the updated user, regardless of the size of the edge server coverage area. Furthermore, the increase in the number of updates for a larger edge server coverage area is offset by the decrease in the number of updates generated by edge server cooperation (see Fig. 13). This effect also exists in the entire mechanism, but it is hard to observe because the overall overhead is too large for the effect to be visible. Meanwhile, Fig. 15 compares the CDA computation overheads of UNO, BR, and SVPS for the three sizes of edge server coverage area.
For UNO and BR, the larger the coverage area of an edge server, the greater the CDA computation overhead. The scalability of those schemes is therefore limited when covering a larger area, owing to the tradeoff between the CDA computation overhead and the number of edge servers that must be installed to provide the service over the entire service area. In particular, the number of CDA computations for BR grows with p, because more users in the system execute the CDA themselves in addition to the edge server's CDA computations. In contrast, the increase in CDA computation overhead for SVPS is very limited for all three coverage area sizes, amounting to only approximately 20% of that of the other schemes when the edge server coverage area is largest (Fig. 15(a)).
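Why the subset mechanism stays nearly flat across coverage area sizes can be illustrated with a back-of-the-envelope sketch. This is not the paper's ns-3/SUMO simulator; the radius R and user count are illustrative, and the point is only the scaling of pairwise checks per update.

```python
import math
import random

# Number of pairwise CDA checks for one update: 'entire' mechanism
# (compare against every user in the domain) vs. 'subset' mechanism
# (compare only against users within radius R of the update).
random.seed(0)
AREA = 8200.0   # 8.2 km x 8.2 km grid, as in the experiment
R = 300.0       # assumed CDA proximity radius (illustrative)
users = [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(2000)]

entire_checks = len(users) - 1  # one update compared against everyone else
subset_checks = sum(1 for u in users[1:] if math.dist(users[0], u) <= R)

print(entire_checks, subset_checks)
# The subset count scales with the local user density inside pi*R^2,
# not with the domain size, which is consistent with the near-flat
# subset curves in Fig. 14.
```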

V. CONCLUSION
In this study, we proposed SVPS, an edge-server-based VRU protection system with a novel server cooperation mechanism. We designed SVPS to maintain high service accuracy even when users move across multiple edge server domains, while minimizing the additional communication overhead incurred by cooperation among the edge servers and reducing the number of CDA computations. Simulation experiments demonstrated that the proposed server cooperation mechanism can effectively deliver early alerts for anticipated collisions, even when users cross edge server domain boundaries. In the scenarios we tested, a larger edge server coverage area was better for reducing the overhead associated with SVPS, and SVPS was shown to be more scalable than the other schemes.
The additional communication overhead incurred by the proposed edge server cooperation mechanism is limited to 15% of the total overhead when the edge server coverage is sufficiently large. In addition, by restricting the CDA for a newly received update to a subset of users, the CDA computational overhead was reduced to only 20% of that of the other compared schemes. To implement SVPS cost-effectively over a wide service area, the coverage area of an edge server should be set to the maximum value at which the latency requirement can still be met.
In future studies, we will explore a VRU protection system with a hybrid architecture that combines cloud and edge servers to achieve more flexible operation and efficient computation offloading. The coverage areas of the edge servers, as well as where to offload the computations, can then be determined flexibly, taking into account the real-time traffic distribution across the entire service area.

APPENDIX A
THE WORST-CASE TTC
A. THE SCENARIO WHEN A VRU CROSSES THE EDGE SERVER BOUNDARIES
See Table 3.

B. THE SCENARIO WHEN A VEHICLE CROSSES THE EDGE SERVER BOUNDARIES
See Table 4.

VOLUME 11, 2023

FIGURE 1. Hazard scenario in edge-server-based VRU protection system.

FIGURE 3. Configuring a subset of users for CDA computations.

FIGURE 4. Redundant CDA computations when both border VRU and vehicle data are exchanged.

FIGURE 5. Structure of edge server domains.

FIGURE 6. Calculating the distance from VRU to target BS boundary.

FIGURE 7. Deciding the border VRU and selecting target servers.

FIGURE 8. Delivering the alerts to a border VRU.
FIGURE 9. Simulation scenarios and parameters.

FIGURE 10. Comparison of the service accuracy when a VRU crosses the edge server boundary.

Fig. 10 and Fig. 11 show the service accuracy of SVPS and the compared schemes for the scenarios in which a VRU and a vehicle, respectively, cross the edge server boundary. The TTC is measured 100 times for each combination of vehicle and VRU velocities, with varying distances between the collision point and the edge server boundary; the figures depict the average TTC of the experimental results, and the worst-case TTCs for the VRU and vehicle crossing cases are provided in Appendix A. Fig. 10(a), (b), and (c) and Fig. 11(a), (b), and (c) show the TTC for UNO; Fig. 10(d), (e), and (f) and Fig. 11(d), (e), and (f) show the TTC for BR; and Fig. 10(g), (h), and (i) and Fig. 11(g), (h), and (i) show the TTC for SVPS. In each graph, t_late for the given vehicle velocity is marked for comparison.

FIGURE 11. Comparison of the service accuracy when a vehicle crosses the edge server boundary.

FIGURE 12. Three different sizes of an edge server coverage area.

FIGURE 13. Total communication overhead for different sizes of edge server coverage areas.

FIGURE 14. Comparison of CDA overhead with and without the spatial DB and query.

FIGURE 15. Comparison of CDA overhead between SVPS and other schemes.

Simulation parameters for the experiment to show the service accuracy enhancement of SVPS.

Parameters of the simulation for the overhead and scalability analysis of SVPS.

The worst-case TTC when a VRU crosses the edge server boundaries.

The worst-case TTC when a vehicle crosses the edge server boundaries.