An Architecture for Experiments in Connected and Automated Vehicles

Rapid prototyping of Connected and Automated Vehicles (CAV) is challenging because of the physical distribution of vehicles. Furthermore, experiments with CAV may be subject to external influences which prevent reproducibility. This article presents an architecture for the experimental testing of CAV, focusing on decision-making. Our architecture is strictly modular and hierarchical; it therefore supports an easy and rapid exchange of every single controller as well as of optimization libraries. Additionally, the architecture synchronizes the whole network of sensors, computation devices, and actuators. Thus, it achieves deterministic and reproducible results, even for time-variant network topologies. Using this architecture, we can include active and passive vehicles and vehicles with heterogeneous dynamics in the experiments. The architecture also allows for handling communication uncertainties, e.g., data packet drops and time delays. The resulting architecture supports performing different in-the-loop tests and experiments. We demonstrate the architecture in the Cyber-Physical Mobility Lab (CPM Lab) using 20 vehicles at a 1:18 scale. The architecture can be applied to other domains.


I. INTRODUCTION
A. MOTIVATION

TESTING Connected and Automated Vehicles (CAV) in real-world environments is challenging. (The review of this article was arranged by Associate Editor Xudong Jia.) Multiple vehicles perform individual computations and actions to fulfill a common task. A vehicle's computations may require a non-deterministic amount of time; hence, the computation times vary in each test run. Furthermore, in tests that use distributed hardware for distributed computations, the time synchronization may contain inaccuracies and the communication time may be stochastic. This non-determinism affects the reproducibility of the experiments. Fig. 1 illustrates the requirements for reproducible networked trajectory planning, which are as follows.
Best-Effort Computations: Fig. 1(a) sketches an example timing of best-effort computations. Each vehicle starts its trajectory planning as soon as its previous computation is finished. Such best-effort computations are not enough to achieve reproducible experiments, since multiple vehicles apply their trajectories at different and non-deterministic points in time. Moreover, the next trajectory planning step may be triggered without an update on other vehicles' trajectories.
Computations at Constant Sample Time: Fig. 1(b) sketches an example timing of best-effort computations with a constant sample time. Each vehicle starts its trajectory planning at a constant sample time T. This is still not enough to achieve reproducible experiments, since multiple vehicles apply their trajectories at different and non-deterministic points in time. Nevertheless, each vehicle can update other vehicles' trajectories before the next computation step.
Synchronization of Time: Fig. 1(c) sketches an example timing of best-effort computations with a constant sample time T and time synchronization. Each vehicle starts its trajectory planning at the same time. This is still not enough to achieve reproducible experiments, since multiple vehicles apply their trajectories at different and non-deterministic points in time. Nevertheless, each vehicle can update other vehicles' trajectories before the next computation step.
Logical Execution Time: Fig. 1(d) sketches an example timing of best-effort computations with a constant sample time T, time synchronization, and a logical execution time. Each vehicle starts its trajectory planning at the same time. Furthermore, each vehicle applies the planned trajectory at the same time, i.e., after the sample time T. The combination of constant sample time, time synchronization, and logical execution time achieves deterministic timing and reproducible experiments.
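The effect of combining a constant sample time with a logical execution time can be sketched in a few lines. The following Python sketch uses illustrative names and values, not the article's implementation; it shows how the (varying) computation time is decoupled from the (fixed) time at which a decision is applied:

```python
# Sketch of the logical execution time (LET) principle from Fig. 1(d):
# a decision computed anywhere inside a period of length T is applied
# exactly at the next period boundary, regardless of the actual
# computation time. Names and values are illustrative.

def let_apply_time(trigger_time: float, computation_time: float, T: float) -> float:
    """Return the time at which a decision is applied under LET."""
    if computation_time > T:
        raise ValueError("computation exceeded the sample time; deadline miss")
    # The result becomes visible only at the next period boundary.
    return trigger_time + T

# Two vehicles with different computation times still apply synchronously:
T = 0.4  # sample time in seconds
apply_1 = let_apply_time(trigger_time=0.0, computation_time=0.08, T=T)
apply_2 = let_apply_time(trigger_time=0.0, computation_time=0.31, T=T)
assert apply_1 == apply_2 == 0.4
```

Without the LET rule, the two vehicles would apply their trajectories at 0.08 s and 0.31 s, i.e., at non-deterministic points in time.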
An architecture for testing CAV has to enable reproducible experiments and, thus, has to achieve determinism of the timing of all the systems' actions. A modular architecture allows for easy and fast updates and rapid prototyping. Furthermore, we require the architecture to support parallel, sequential, and hybrid computations and to handle time-variant network topologies. Our focus is on measuring the computation times of the decision-making algorithms and the decision quality.

B. RELATED WORK
Several test-beds for CAV exist, e.g., [1], [2], [3], [4]. These test-beds differ, e.g., in the vehicle scale, positioning system, and costs. However, the aforementioned test-beds have architectures that do not explicitly focus on a framework for networked computations and their applicability in other domains. Experiments using networked computations require deterministic timings to achieve reproducible experiments. Various research has addressed experimental architectures and the testing of networked systems in multiple domains. A test-bed for the development, deployment, testing, and analysis of networked systems is introduced in [5], [6]. The test-bed is able to test the fault tolerance and reconfiguration capabilities of the algorithms, as well as the system's stability. The authors of [7] propose an architecture for sensor fault detection and isolation in a chemical plant. An architecture for fault injection in highly automated vehicles is presented in [8]. The authors of [9] propose a test-bed that selects hardware components that fit the hardware requirements of the control algorithms in a rapid prototyping manner. A test-bed that focuses on human supervision of the automation process is presented in [10]. The test-bed architecture presented in [11] enables managing data from multiple users of the test-bed. To this end, it uses a modular and service-oriented architecture for flexibility and adaptability to multiple data sources of different users. An architecture for human-in-the-loop teleoperation of fully autonomous vehicles is presented in [12]. This architecture focuses on flexibility and the ability to reconfigure to different settings of teleoperation. Other test-beds, e.g., the test-bed in [13], focus on model-based systems engineering to find new patterns and trends, investigate the reusability of models and components, and provide a scenario repository. Nevertheless, these test-beds do not support experiments with physical systems.
Example test-beds that include the physical systems are the works in [14], [15]. These test-beds focus on ambient intelligence applications. The work in [16] presents a test-bed for performing cyberattacks and evaluating their effects on networked systems.
In [17], an architecture for resource-aware computing is presented. A client-server architecture with a time-triggered client and an event-triggered server is presented in [18]. This architecture focuses on a specific hardware and software setup to hold a steel ball in position using an electromagnet and an optical sensor. An example test-bed which focuses on the communication network to simulate packet losses and communication delays is the test-bed in [19]. Nevertheless, these test-beds also use architectures that do not explicitly focus on a framework for networked computations and their applicability in other domains.
More flexible architectures make use of middleware. Middlewares introduce an abstraction layer that makes the architecture available to multiple domains, use-cases, or scenarios. An early middleware is the Common Object Request Broker Architecture (CORBA) [20]. This middleware was introduced to make software available to a wide range of applications, such as business, facility, or embedded applications. CORBA is an object-oriented platform that is able to interact with other platforms that are not object-oriented. Moreover, CORBA abstracts from the operating systems and programming languages and enables communication between systems that use diverse platforms. There exist multiple extensions of CORBA, e.g., for mobile systems [21] and real-time systems [22]. Nevertheless, one implicit assumption of real-time CORBA is that communication overheads are tolerable by applications. Since real-time CORBA does not consider communication delays, it is mostly specified for real-time applications that run on a single node. Another middleware that focuses on wireless control networks is Etherware [23]. Etherware makes use of event-based communication over the User Datagram Protocol (UDP). The authors used UDP [24] instead of the Transmission Control Protocol (TCP) [25] to avoid retransmissions, since retransmissions can occupy the network with outdated data. This middleware is flexible due to the ability to change parameters at run-time. This flexibility makes it useful in a wide range of applications. The Cyber-Physical Systems Lab in [26], [27] uses Etherware. Its authors present an architecture for testing cyber-physical systems and demonstrate it using scaled vehicles. The hardware architecture includes multiple scaled vehicles on a driving field, two cameras to sense the positions of the vehicles, and one laptop per vehicle for external computations.
The vehicles themselves do not perform any computations, but only write data to their actuators and read their sensor data. The sensor data are communicated to the laptops, which send the actuator inputs to the vehicles. The laptops compute the actuator inputs for a trajectory which is given by a central trajectory planner. This architecture enables rapid prototyping of centralized trajectory planning algorithms, since software modules can be changed easily and rapidly in one place without adapting the rest of the architecture. However, the architecture is vulnerable to communication delays and packet losses, since the vehicles require frequent updates of their actuator signals. Moreover, it only supports centralized algorithms. It is not possible to test distributed algorithms without adaptations to the architecture.
The UPBOT [28] test-bed provides an architecture for networked algorithms for cyber-physical systems. The architecture is layered into a body, nerves, brain, and supervisor layer. The body layer has no intelligence and only reads sensor data and writes data to actuators, similar to the vehicles in the Cyber-Physical Systems Lab [26]. The body layer provides the sensor values to the nerves layer, which provides the actuator signals to the body layer. The nerves layer translates decisions from the brain layer into commands for the body layer. Furthermore, the nerves layer formats the sensor data and communicates them to the brain layer. The brain layer makes decisions and communicates them to the nerves layer. Optionally, a centralized supervisor can be used to externally make centralized decisions. In this case, the brain layer receives the decisions from the supervisor and forwards them to the nerves layer. The authors demonstrate their architecture in a test-bed with three robots. The body layer consists of sensors, actuators, and a microcontroller provided by the robot itself. The nerves and brain layers share a hardware platform that is placed on the robots. The primary use-case of this architecture is to test security threats and to study points of security attacks on CAV. This architecture is not able to perform distributed computations of decision-making algorithms. More related work is presented in the overview papers [29], [30], [31]. They underline the need for architectures and test-beds for rapid prototyping, networked computations, and reproducible experiments.

C. CONTRIBUTION OF THIS ARTICLE
To the best of our knowledge, there is no flexible architecture for experimental testing of CAV in the literature that addresses networked computations, reproducible experiments, computation times, communication problems, and time-variant network topologies. This article presents an architecture to test CAV and focuses on networked control in a reproducible manner on real-world hardware. Each vehicle in the CAV uses its sensors, actuators, and computation devices and shares information over a communication network. The vehicles compute in a synchronized way following a logical execution time approach [32]. This leads to deterministic and reproducible tests. Our architecture supports sequential, parallel, and hybrid computations, depending on the chosen decision-making algorithm. The architecture adapts the mode of operation to the needs of the decision-making algorithm. Furthermore, the architecture is modular and achieves an experimental environment which is suitable for the rapid prototyping of decision-making algorithms. The modular architecture consists of four layers. Our demonstration setup in [33] inspired this work.
The four layers of the architecture are the High-Level Controller (HLC), Middleware (MW), Mid-Level Controller (MLC), and Low-Level Controller (LLC). The architecture is capable of rapid prototyping of decision-making algorithms. It enables multiple vehicles to make their decisions using sequential, parallel, and hybrid computations. Our MW ensures that the vehicles synchronously compute their decisions and synchronously apply them to achieve determinism and reproducible tests, even with non-deterministic computation and communication times. The MLC implements a decision-following controller and state estimation. The LLC writes data to the actuators and reads the sensor data. This modularity enables the reuse of the architecture and adaptations to specific domains.

D. ORGANIZATION OF THIS ARTICLE
The rest of this article is structured as follows. Section II defines important terms which are used in this article. Section III introduces our architecture for experiments, starting with a basic architecture for a single vehicle and extending it to an architecture for sequential, parallel, and hybrid computations. Section IV presents our evaluation on a demonstration platform. Finally, Section V concludes this article.

II. DEFINITIONS
This section introduces definitions which are used in this article.
Definition 1 (Architecture): According to [34], an architecture divides a system into modules, interactions between modules, and properties of modules and their interactions. We use the term architecture to refer to software and hardware modules, interactions, and properties. We use the term hardware architecture if an architecture contains only hardware modules, and software architecture if it contains only software modules.
An architecture consists of interacting modules. We use the term vehicle for an encapsulation of modules to an autonomous subsystem within the CAV.
Definition 2 (Vehicles): We divide the set of vehicles into active vehicles and passive vehicles. a) Active vehicle: An active vehicle implements networked control. Active vehicles exchange trajectory forecasts with other vehicles and consider the trajectory forecasts of other vehicles in their own trajectory planning. b) Passive vehicle: Passive vehicles do not implement networked control. However, they may be able to communicate data, e.g., current and future states. Otherwise, the states of passive vehicles can be measured by active vehicles.
An example of active vehicles are CAV in mixed autonomous and manual traffic. Manually driven vehicles are passive vehicles. All vehicles have to consider manually driven vehicles to avoid collisions. Nevertheless, manually driven vehicles may not explicitly communicate with the CAV.
When multiple vehicles run their control tasks, the networked control may require communication between the vehicles. Definition 3 introduces different communication schemes for networked control.
A coupling graph is represented by an adjacency matrix A whose element a_ij equals 1 if vehicle i is coupled to vehicle j, i.e., vehicle i considers vehicle j in its decision-making, and 0 otherwise.
There are different definitions for XiL, e.g., in [36], [37], [38], [39]. We define model, software, processor, and hardware in-the-loop as follows.
• In Model-in-the-Loop (MiL) testing, the controller and plant are modeled and executed on a regular desktop computer in a simulation loop. This testing mode is able to test the functionality and logic of the controller.
• In Software-in-the-Loop (SiL) testing, the controller and plant are executed on a regular desktop computer. In contrast to MiL, the control software is not modeled, but consists of the actual code. This testing mode is able to test software and implementation-related functionality.
• In Processor-in-the-Loop (PiL) testing, the control software is executed on the destination hardware. This allows testing the integration of the controller's hardware and software and analyzing the runtime, e.g., the runtime of the decision-making. The plant is simulated on a regular desktop computer.
• In Hardware-in-the-Loop (HiL) testing, the control software is executed on the destination hardware, as in PiL. The plant is simulated to be able to test the real-time capabilities of the controller. To this end, the plant model and the computer running the plant model need to be real-time capable. The plant may also be simulated by a real-time demonstration platform.
XiL testing refers to tests using any of these testing methods; networked XiL refers to the testing of CAV. An important aspect of testing CAV is determinism. If determinism is ensured, simulations and experiments become reproducible.
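The coupling graph introduced above can be illustrated with a short sketch. The helper below, with hypothetical coupling pairs, builds an adjacency matrix A with a_ij = 1 if vehicle i is coupled to vehicle j:

```python
# Illustrative construction of a coupling graph's adjacency matrix A:
# a_ij = 1 if vehicle i is coupled to (considers) vehicle j, else 0.
# The coupling pairs below are hypothetical.

def adjacency_matrix(n: int, couplings: list[tuple[int, int]]) -> list[list[int]]:
    A = [[0] * n for _ in range(n)]
    for i, j in couplings:
        A[i][j] = 1
    return A

# Three vehicles: vehicle 0 considers vehicle 1; vehicle 2 considers both.
A = adjacency_matrix(3, [(0, 1), (2, 0), (2, 1)])
assert A == [[0, 1, 0], [0, 0, 0], [1, 1, 0]]
```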

III. EXPERIMENTAL ARCHITECTURE
This section introduces our architecture for experimental testing of CAV. Section III-A presents the architecture for a single vehicle and Section III-B extends this architecture to a networked architecture for multiple vehicles.

A. VEHICLE ARCHITECTURE
Our experimental architecture consists of multiple vehicles which share the same hierarchical architecture and additional elements for interaction. Each vehicle possesses an HLC which is connected to an MLC via an MW. An LLC implements the hardware abstraction layer and basic functionalities, e.g., reading sensor data and writing data to the actuators. Fig. 4 shows the vehicle's architecture. The following subsections introduce the architecture along its hierarchy.

1) HLC
The HLC runs high-level computations of the decision-making only when it is triggered by the MW. It receives the data required by the decision-making algorithm from the MW, runs the algorithm as fast as possible, and sends the computed decisions back to the MW. It then waits for the next trigger and new data to start its computations. The HLC runs on a development computer with any operating system and does not need to be real-time capable; it only depends on the programming language. Our architecture currently supports MATLAB/Simulink and C++. An advantage of the HLC is that the decision-making algorithm and its dependent libraries can be rapidly replaced or updated. Therefore, the architecture is capable of rapid control prototyping in MiL and SiL testing.
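The HLC's trigger-compute-reply cycle can be sketched as an event loop. The following Python sketch is illustrative only (the actual HLCs are implemented in MATLAB/Simulink or C++); the queues stand in for the MW communication, and the planner function is hypothetical:

```python
# Sketch of the HLC's trigger-driven loop: wait for the MW's trigger and
# input data, compute as fast as possible (best effort), return the
# decision, then wait again. Queues stand in for the MW communication.
import queue
import threading

def hlc_loop(inbox: queue.Queue, outbox: queue.Queue, plan) -> None:
    while True:
        data = inbox.get()        # block until the MW triggers with new data
        if data is None:          # shutdown sentinel
            return
        outbox.put(plan(data))    # best-effort computation, then reply

# Hypothetical planner: doubling stands in for trajectory planning.
inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=hlc_loop, args=(inbox, outbox, lambda s: s * 2))
t.start()
inbox.put(21)
assert outbox.get(timeout=1.0) == 42
inbox.put(None)
t.join()
```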

2) MW
The task of the MW is to synchronize the computations of the decisions and to perform the communication between the HLC and the MLC. It triggers the HLC at each MW period T_MW. The MW runs on a real-time computer, which may be hardware-separated from the HLC's hardware, and achieves a logical execution time [32] to enable deterministic experiments. This determinism makes the results reproducible. For decentralized communication, the MW uses a publish-subscribe mechanism [40], which is commonly used in distributed systems, e.g., in the Robot Operating System (ROS) [41], ROS2, the Message Queuing Telemetry Transport (MQTT) [42], and service-oriented architectures [43]. However, ROS requires a designated entity for service discovery or binding. In contrast, ROS2 and our MW use the Data Distribution Service (DDS), a standardized protocol for publish-subscribe communication. The protocol is widely used in safety-critical systems, e.g., in medical devices and air traffic control [44], and in the AUTOSAR Adaptive platform [45]. DDS offers a variety of configurable Quality-of-Service (QoS) parameters, e.g., the transport protocol.
In contrast to ROS and MQTT, our MW uses UDP instead of TCP, leading to lower communication latencies. This is because UDP does not require an acknowledgment for each data packet. We do not allow retransmissions, because the data are time-critical and become obsolete when a data packet is lost. Furthermore, DDS allows the deployment of a variable number of vehicles in the experiments, without having to adapt the underlying communication topology. Additionally, it is easy to extend, adapt, and change the architecture for experiments due to the dynamic coupling of components in the communication topology.
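The fire-and-forget behavior that motivates the choice of UDP can be demonstrated with plain sockets. This is not the MW's actual DDS configuration, only a minimal sketch of best-effort publishing without acknowledgments or retransmissions; names and payloads are illustrative:

```python
# Minimal sketch of fire-and-forget UDP publishing, mirroring the MW's
# rationale for UDP over TCP: no acknowledgments, no retransmissions,
# so a lost packet never blocks newer (and more relevant) data.
import json
import socket

def publish_state(sock: socket.socket, addr: tuple, state: dict) -> None:
    # sendto() returns immediately; delivery is best-effort by design.
    sock.sendto(json.dumps(state).encode(), addr)

# Local demonstration: a subscriber socket receives the datagram.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # OS picks a free port
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
publish_state(send_sock, addr, {"id": 1, "x": 0.5})

payload, _ = recv_sock.recvfrom(4096)
assert json.loads(payload) == {"id": 1, "x": 0.5}
send_sock.close()
recv_sock.close()
```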

3) MLC
The MLC runs on a real-time capable microcontroller and performs light-weight mid-level computations, e.g., state estimation. The MLC sends the control inputs to the LLC at a predefined MLC period T_MLC. The MLC also receives the sensor signals from the LLC and performs a state estimation. The estimated states are communicated to the MW at each MLC period.

4) LLC
The LLC provides the hardware abstraction layer. It handles the access to the physical system, i.e., it writes data on the actuators and reads the sensor data in real-time. The LLC receives the control inputs from the MLC and sends sensor data to the MLC.
The LLC handles all hardware-dependent implementations. Therefore, exchanging the physical system requires modifying only the LLC software and not that of the MLC and HLC. This makes changes of the physical system fast and easy and allows for HiL testing.
Fig. 5 shows the timing of our layer composition. The MW gathers the estimated states from the MLC and triggers the HLC at a predefined frequency. The HLC then computes decisions in a best-effort strategy. The variable τ_c1 denotes the time from reading the sensor data in the LLC to starting the HLC computation. The variable τ_HLC denotes the computation time of the HLC, which may vary between time steps. After computing the decisions, the HLC communicates the decisions to the MW, which forwards them to the MLC. The MW labels each decision with the time at which the MLC should apply it, overwriting the old decision. This ensures a logical execution time, which makes sure that, even for different and non-deterministic communication and computation times of the HLC at different time steps, the MLC deterministically applies its decision at a predefined frequency. The MLC then sends the computed control inputs to the LLC, which writes data to the actuators and reads the sensor data. The variable τ_c2 denotes the time from the decision being made to its application at the actuators. The sensor data are then used for state estimation in the MLC, which communicates the estimated states to the MW.
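The valid-after labeling that realizes the logical execution time can be sketched as follows; the class and names are illustrative, not the authors' implementation:

```python
# Sketch of the MW's "valid-after" labeling: the MW stamps each decision
# with the next period boundary, and the MLC switches to the new decision
# only once that time is reached. Names are illustrative.

def label_valid_after(trigger_time: float, T_MW: float) -> float:
    """A decision computed in the period starting at trigger_time
    becomes valid at the next MW period boundary."""
    return trigger_time + T_MW

class MLC:
    def __init__(self, initial_decision):
        self.decision = initial_decision
        self.pending = None  # (valid_after, decision)

    def receive(self, valid_after: float, decision) -> None:
        self.pending = (valid_after, decision)

    def step(self, t: float):
        # Overwrite the old decision only once its valid-after time is reached.
        if self.pending is not None and t >= self.pending[0]:
            self.decision = self.pending[1]
            self.pending = None
        return self.decision

mlc = MLC(initial_decision="old")
mlc.receive(label_valid_after(trigger_time=0.0, T_MW=0.4), "new")
assert mlc.step(0.38) == "old"   # before the boundary: keep the old decision
assert mlc.step(0.40) == "new"   # at the boundary: apply synchronously
```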

B. ARCHITECTURES FOR CONNECTED AND AUTOMATED VEHICLES
The architecture for testing CAV consisting of multiple vehicles follows the vehicle architecture of Section III-A. Each vehicle has the same MW period T_MW, MLC period T_MLC, and LLC period T_LLC. There are multiple possibilities for networked architectures, depending on the communication scheme for networked control. The resulting networked architectures are introduced in the following sections. Section III-B1 presents the centralized architecture and Section III-B2 presents distributed architectures for sequential, parallel, and hybrid computations.

1) ARCHITECTURE FOR CENTRALIZED CONTROL
Fig. 6 shows the architecture for testing centralized decision-making algorithms. This architecture consists of one HLC and one MW for all vehicles. Each vehicle only implements its MLC and LLC. The vehicles communicate their estimated states from their MLCs to the central MW. The MW aggregates these states and triggers the HLC. After the HLC computations, the MW sends each computed decision to the corresponding vehicle's MLC. In each vehicle, the MLC and LLC work as described in Section III-A. Fig. 7 illustrates the workflow of the centralized architecture for two vehicles. For each vehicle, only the MLC is shown. The workflow between the MLC and the corresponding LLC is the same as in Fig. 5. The MLCs communicate their estimated states to the MW. The MW triggers the HLC at the predefined frequency 1/T_MW by sending the aggregated states. After its computations, the HLC sends the computed decisions for all vehicles to the MW. The MW then forwards each decision to the corresponding vehicle, labeled with the time at which to apply the decision. All MLCs apply their decisions at the same time after the MW period.
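One MW period of the centralized architecture can be sketched as follows, with a stand-in planner function in place of an actual decision-making algorithm:

```python
# Sketch of one centralized MW period (Fig. 6/7): aggregate all MLC
# states, trigger the single HLC once, then distribute each decision to
# its vehicle. The HLC below is a stand-in, not the authors' planner.

def centralized_mw_step(mlc_states: dict, hlc) -> dict:
    aggregated = dict(mlc_states)          # MW aggregates the estimated states
    decisions = hlc(aggregated)            # one HLC computes for all vehicles
    # MW forwards each decision to the corresponding vehicle's MLC.
    return {vehicle: decisions[vehicle] for vehicle in mlc_states}

# Hypothetical HLC: command every vehicle to hold its current position.
dummy_hlc = lambda states: {v: ("hold", s) for v, s in states.items()}
out = centralized_mw_step({"car1": (0.0, 0.0), "car2": (1.0, 2.0)}, dummy_hlc)
assert out == {"car1": ("hold", (0.0, 0.0)), "car2": ("hold", (1.0, 2.0))}
```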

2) ARCHITECTURE FOR DISTRIBUTED CONTROL
There are different architectures for testing distributed control systems. Compared to the centralized architecture in Section III-B1, all distributed architectures consist of one HLC, MW, MLC, and LLC per vehicle. Each LLC communicates with its corresponding MLC. The MLCs send their states to all MWs. Each MW communicates with its corresponding HLC, and the HLCs share data with each other depending on the chosen decision-making algorithm. After the HLC computations, the MLCs receive the decisions of their corresponding HLCs. We now introduce the architectures for sequential, parallel, and hybrid computations.
a) Architecture for sequential computations: Fig. 8 shows the sequential architecture. In sequential computations, only one HLC computes its decision at a time. To this end, the vehicles are ordered. Each vehicle communicates its computed decisions to all successors and receives all decisions of its predecessors in the coupling graph. The communication of decision-making related data is required only between the HLCs. Fig. 9 illustrates the workflow of the sequential architecture for two vehicles. The MWs of the two vehicles trigger the corresponding HLCs at the start of the MW period. Then, only HLC_1 starts immediately to compute its decisions as fast as possible. Afterwards, HLC_1 communicates the decisions to MW_1 and HLC_2. HLC_2 is triggered by receiving the decisions from HLC_1; it then starts its computations and communicates the resulting decision to MW_2. Both MWs label the decisions with the time at which the MLCs have to apply them. After the MW period ends, both MLCs apply their decisions at the same time.
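The sequential scheme can be sketched as a loop over ordered planners, each receiving the decisions of all its predecessors; the planner functions below are hypothetical:

```python
# Sketch of sequential HLC computations (Fig. 8/9): vehicles are ordered,
# and each HLC plans only after receiving the decisions of all its
# predecessors in the coupling graph. Planner functions are illustrative.

def sequential_round(planners: list) -> list:
    """planners[i] maps the list of predecessor decisions to decision i."""
    decisions = []
    for plan in planners:
        # Each HLC receives every decision computed so far.
        decisions.append(plan(list(decisions)))
    return decisions

# Two hypothetical vehicles: vehicle 2 yields if vehicle 1 claims the lane.
p1 = lambda pred: "keep_lane"
p2 = lambda pred: "yield" if "keep_lane" in pred else "keep_lane"
assert sequential_round([p1, p2]) == ["keep_lane", "yield"]
```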
b) Architecture for parallel computations: Fig. 10 shows the architecture for parallel computations. It differs from the architecture for sequential computations in that the communication between the HLCs is possible in both directions. In parallel computations, all HLCs compute their decisions simultaneously and communicate with one another according to the coupling graph. Depending on the networked control strategy (cooperative or non-cooperative), each HLC may receive the states of multiple MLCs (cooperative) or of only one MLC (non-cooperative). The HLCs communicate their decisions with one another (non-cooperative) and may also share other algorithm data (cooperative).
Fig. 11 illustrates the workflow of the parallel architecture using an example of two vehicles. The MLCs send their estimated states to the MWs, which trigger the HLCs at the same time. The HLCs compute their decisions in parallel. After the computation of the decisions, each HLC sends its decision to its MW. The MW labels the decision with the time at which the MLC should apply it and forwards the decision to the corresponding MLC. At the beginning of the next MW period, the MLCs apply their decisions and the MWs trigger the next HLC computations.
c) Architecture for hybrid computations: The architecture for hybrid computations combines the architectures for sequential and parallel computations. The HLCs are grouped by their computation dependencies. All HLCs in a group compute at the same time, as in the parallel architecture. The groups, nevertheless, compute sequentially, as in the sequential architecture. Fig. 12 illustrates hybrid computations using an example with 3 HLCs which are assigned to 2 groups. The overall computation time is the sum of the maximum computation times of the groups. Fig. 13 shows the workflow of the example in Fig. 12. The MWs trigger HLC_1 and HLC_2 to start their computations.
Since they are both members of group 1, they compute in parallel. When HLC_1 and HLC_2 have finished their computations, they trigger HLC_3, which is in the second group. Since HLC_3 is the only member of group 2, no other HLC computes at the same time. The MWs label the decisions with the time at which the MLCs have to apply them. At that time, all MLCs synchronously apply their decisions.
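The timing of the hybrid schedule can be sketched directly from its definition: groups run one after another, and within a group the slowest HLC dominates. The computation times below are hypothetical:

```python
# Sketch of the hybrid schedule's timing (Fig. 12): HLCs in the same
# group run in parallel, groups run sequentially, so the overall
# computation time is the sum of each group's maximum computation time.

def hybrid_total_time(group_times: list[list[float]]) -> float:
    return sum(max(times) for times in group_times)

# Example with the structure of Fig. 12: group 1 = {HLC_1, HLC_2},
# group 2 = {HLC_3}. Computation times (in ms) are hypothetical.
total = hybrid_total_time([[77.0, 81.0], [64.0]])
assert total == 145.0
```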

IV. EVALUATION
This section evaluates our architecture using our Cyber-Physical Mobility Lab (CPM Lab) [46]. The next subsections introduce our demonstration setup and a timing analysis of our architecture. An external positioning system senses the positions of the vehicles and communicates them to the µCars. Each µCar is equipped with a Raspberry Pi which runs the MLC and an ATmega microcontroller which runs the LLC. Fig. 15 shows the architecture of the µCar. It makes use of the architecture presented in Section III-A. The MLC implements a trajectory-following controller and computes the system inputs for the vehicle. The system inputs, consisting of torque and steering values, are then applied in the LLC, which writes data to the actuators and reads the sensor data. The measured sensor data are communicated to the MLC. The MLC performs a state estimation and communicates the estimated states to the MW, which triggers the HLC at its next period. The HLCs run on the computation devices. Each computation device is assigned to one µCar and represents the HLC and MW for this µCar. Due to space and weight requirements, the µCars implement only the MLC and LLC. However, logically, each µCar is assigned to an HLC for high-level trajectory planning. Such remote decision-making is commonly used in rapid prototyping approaches, e.g., in experimental tests of algorithms for Mars rovers [49], [50]. The HLC plans the trajectory for the vehicle and communicates the trajectory via the MW to the MLC. Multiple HLCs can be used for sequential, parallel, or hybrid computations, as described in Section III-B.

A. DEMONSTRATION SETUP
In order to demonstrate the architecture in the CPM Lab, the µCars drive in an intersection and motorway scenario; see Fig. 16. The map contains a highway, on- and off-ramps, and a four-way intersection. The HLCs are responsible for the trajectory planning and collision avoidance. The HLCs compute the µCars' trajectories in a distributed, hybrid manner, as described in Section III-B. The µCars follow the trajectories planned by the HLCs. Our methods presented in [51], [52], [53], [54], [55] are further examples of distributed applications. We measure the timing of the complete pipeline from initiating the computations of the HLCs to applying the trajectories at the LLCs.

B. TIMING ANALYSIS
This section demonstrates a case study with 10 vehicles using a hybrid of sequential and parallel computing HLCs. Table 1 shows the time-stamps of two consecutive time steps recorded in the case study experiment to show the effectiveness of our architecture. For clarity, we present the timings of only 5 trajectories per time step. The MW period has a length of T_MW = 400 ms and the time t ∈ R starts at t = 0, when the MWs trigger the HLCs. The computation times of HLC_1 to HLC_4 in time step 1 vary by 4 ms. The maximum computation time in time step 1 is 81 ms, required by HLC_5, since it has to wait for HLC_3 before starting its computations. All trajectories have the same valid-after time-stamp of 200 ms. The MWs received all trajectories about 1 ms after the corresponding HLCs finished their computations. The receive times of the MLCs show a higher variation than those of the MWs, due to the non-deterministic communication times of the HLCs. The MLCs have an on-board cycle time of T_MLC = 20 ms. At the next MLC cycle with t ≥ 200 ms, i.e., when the trajectories are assigned to be applied, the MLCs synchronously apply the new trajectories by forwarding them to the LLCs. If the valid-after time were not used, vehicle 5 would apply its new trajectory at t = 100 ms, i.e., two cycle times after vehicle 1, which would apply its trajectory at t = 60 ms, and one cycle time after vehicles 2, 3, and 4, which would apply their trajectories at t = 80 ms. With a higher variance in the computation times of the HLCs and a higher variation of the communication times, the difference in MLC cycles at which the trajectories are applied may grow. As this may lead to unexpected behavior, we achieve a logical execution time through the common valid-after time. Hence, all vehicles synchronously apply the new trajectory at the same point in time.
This leads to deterministic timing of the experiments; hence, the experiments are reproducible. If the valid-after time has already passed when an MLC receives a trajectory, the MLC applies the new trajectory immediately. In the second time step, the MWs trigger the HLCs at t = 200 ms. HLC_5 has to wait for HLC_2 and starts its trajectory planning after receiving the trajectory of HLC_2. All vehicles apply their trajectories at time t = 400 ms. Without the deterministic timings of our architecture, the times at which the vehicles apply their trajectories would range from t = 260 ms for vehicle 1 to t = 300 ms for vehicle 5. This case study shows that our experimental architecture achieves deterministic timings and therefore reproducible experiments. Our architecture guarantees deterministic timings and reproducible experiments, even though the HLCs are implemented in a best-effort manner. This improves the rapid prototyping of networked trajectory planning algorithms.
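The mapping of a valid-after time onto the MLCs' 20 ms cycle grid can be sketched as follows; the receive times used in the examples are illustrative, not measurements from Table 1:

```python
# Sketch of how an MLC maps a valid-after time onto its own cycle grid:
# the new trajectory is applied at the first MLC cycle at or after the
# valid-after time. T_MLC = 20 ms as in the case study; the example
# receive times below are hypothetical.
import math

def mlc_apply_time(valid_after_ms: float, T_MLC_ms: float) -> float:
    return math.ceil(valid_after_ms / T_MLC_ms) * T_MLC_ms

# With the common valid-after time of 200 ms, every vehicle applies at 200 ms.
assert mlc_apply_time(200.0, 20.0) == 200.0
# Without it, a vehicle whose trajectory arrives at, e.g., 85 ms would
# apply at the next cycle, 100 ms, while one arriving at 55 ms would
# apply at 60 ms -- two cycles apart.
assert mlc_apply_time(85.0, 20.0) == 100.0
assert mlc_apply_time(55.0, 20.0) == 60.0
```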

V. CONCLUSION
This article presented an architecture for the experimental testing of CAV. We introduced an architecture for a single vehicle and extended it to centralized, sequential, parallel, and hybrid architectures for CAV. The modular architecture consists of four layers. This layered approach enables rapid prototyping and experimental evaluation of CAV for different networked control schemes and time-variant network topologies. We synchronize networked computations using a logical execution time approach. Our architecture ensures that the vehicles apply their decisions synchronously. Due to the use of a constant sample time, time synchronization, and a logical execution time, the architecture achieves deterministic and reproducible experiments, even when dealing with external influences like non-deterministic computation times and non-deterministic communication delays. Our evaluation on a demonstration platform which follows the proposed architecture underlines these properties.