Self-Calibration of a Network of Radar Sensors for Autonomous Robots

Radar sensor networks are widely used today in the field of autonomous driving and for generating high-precision images of the environment. The accuracy of the environmental representation depends to a large extent on the accurate knowledge of the sensors' mounting orientations. Both the relative orientation of the sensors to each other and the relative sensor orientation with respect to the vehicle coordinate system are determining factors. For the first time, the orientation estimation of the radar sensors of a network is possible exclusively on the basis of radar target lists, without additional localization and orientation devices such as an inertial measurement unit or a global navigation satellite system. In this work, two algorithms for determining the orientation of incoherently networked radar sensors with respect to the vehicle coordinate system and with respect to each other are derived and characterized. With the presented algorithms, orientation accuracies of up to $0.25 \mathrm{^{\circ }}$ are achieved. Furthermore, the algorithms do not impose any requirements on the positioning or the orientation of the radar sensors, such as overlapping fields of view or the detection of identical targets. The presented algorithms are applicable to arbitrary driving trajectories as well as to point targets and extended targets, which enables their use in regular road traffic.


I. INTRODUCTION
MODERN high-level systems for autonomous driving are based on highly precise environmental recognition as well as accurate speed and position information, which can be estimated very robustly with lidar sensors, cameras, or radar sensors [1], [2]. The focus of research in this area has shifted only in recent years from the use of a single sensor system to the use of multiple sensors working cooperatively, especially in the field of radar sensors [3], [4]. In order to ensure high precision, both intrinsic error influences, such as an incorrect array calibration, and extrinsic error influences, such as an incorrect orientation of the sensors, must be minimized. While there are various effective calibration methods for intrinsic error sources [5], [6], [7], methods for extrinsic calibration are often costly or linked to certain prerequisites [8]. Such methods exist in great diversity especially for cameras [9], [10] and lidar sensors [11], [12].
In contrast, the approaches for radar sensors are usually based on many requirements. It has already been shown in [13] that the orientation of radar sensors mounted on a car can be determined with high precision based on ego-motion estimation. Furthermore, the orientation estimation can be extended to radar sensors with a 2-D angle estimation [14]. However, this orientation estimation method requires an additional time-synchronized inertial measurement unit (IMU). A different approach to estimate the orientation of the radar sensors is based on the usage of high-precision environmental maps [15], wherein the current vehicle position and high-precision environmental maps must be available, which are generated by lidar and camera sensors [16], [17], [18] and global navigation satellite systems (GNSSs) [8]. In [19], it was shown that the accuracy of the orientation estimation can be significantly improved using corner reflectors as radar targets. However, the position of the targets must be known. Approaches to estimate the orientation of the radar sensors on the car without using additional systems are based on the detection of identical targets and lead to rather inaccurate orientation estimates. For this purpose, overlapping fields of view (FoVs) of multiple radar sensors are required [20], [21], which limits the positioning flexibility of the individual radar sensors.
Using the algorithms presented in this article, it is possible to estimate the orientation of N radar sensors (N ≥ 2) relative to the vehicle coordinate system and in relation to the other radar sensors mounted on the same car. The algorithms are the first ones achieving this solely based on target lists (TL) and known relative sensor positions. The benefit of this approach is, on the one hand, that no additional systems such as a GNSS or an IMU are required for the estimation. On the other hand, the ego-motion-based approach has no constraints regarding the positioning of the radar sensors; thus, neither overlapping FoVs of the sensors nor multiframe target detection are required.
The rest of this article is organized as follows. The sensor setup and the basic steps of the signal processing are presented in Sections II and III. The fundamentals of ego-motion estimation are presented in Section IV. Based on this, Section V describes two algorithms to determine the orientation of all sensors. The evaluation based on measurements as well as various robustness analyses are described in Section VII. The result of a Monte-Carlo simulation to analyze the sensor-position-dependent estimation accuracy of the sensor orientation is described in Section VIII. Finally, Section IX concludes this article.

II. CONCEPT AND SYSTEM ARCHITECTURE
The general concept consists of distributed chirp-sequence modulated multiple-input multiple-output (MIMO) radar sensors, which are used as an incoherent but time-synchronized network. The chirp-sequence radar sensors used transmit sequences of fast frequency-modulated continuous-wave (FMCW) ramps. Each chirp sequence consists of R FMCW ramps, which have a bandwidth of f_B, a ramp time of T_c, and a ramp repetition time of T_r [22], [23]. The time synchronization of all radar sensors of the network enables a simple interference avoidance as soon as the center frequencies of all radar sensors are shifted by at least the IF bandwidth with respect to each other [24]. This time synchronization also enables a joint evaluation of all TL for the orientation estimation.
The most relevant measurement quantities that can be determined with such radar sensors are the radial velocities v_r,n,m, the angles of arrival φ^s_n,m, and the ranges r_n,m in the sensor coordinate system (superscript s) for the n-th sensor and the m-th target. These quantities can be determined with N radar sensors (n ≤ N) for all M_n targets (m ≤ M_n) detected by each individual sensor n.
The system setup comprises multiple radar sensors, which are installed around a vehicle with unknown orientations ϕ^c_1, ..., ϕ^c_N in the vehicle coordinate system (superscript c). A possible sensor configuration is shown in Fig. 1 for N = 7 radar sensors and one target t_1. A wide spatial distribution of the sensors provides a more accurate ego-motion estimation.
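To make these quantities concrete, the following minimal sketch shows one possible way to represent the per-sensor target lists and mounting poses used in the remainder of this article; the class and field names are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Target:
    """One detection from the TL of a single sensor (illustrative layout)."""
    r: float      # range r_{n,m} in m, sensor coordinate system
    phi_s: float  # angle of arrival phi^s_{n,m} in rad, sensor coordinate system
    v_r: float    # radial velocity v_{r,n,m} in m/s

@dataclass
class SensorPose:
    """Mounting pose of sensor n in the vehicle coordinate system."""
    x_c: float    # known position x^c_n in m
    y_c: float    # known position y^c_n in m
    phi_c: float  # mounting orientation phi^c_n in rad (to be estimated)

TargetList = List[Target]  # TL_n of sensor n; the network provides TL_1, ..., TL_N
```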

III. PROCESSING CHAIN
An overview of the signal processing chain is depicted in Fig. 2. The first steps are performed individually for each sensor and are illustrated as green boxes in Fig. 2. The raw data from the N time-synchronized, incoherently connected radar sensors are stored, processed independently, and transformed into the frequency domain using a fast Fourier transform (FFT). The relevant target information is extracted by a constant false alarm rate (CFAR) algorithm, subsequent peak detection, and angle of arrival (AoA) estimation, and is stored in individual TL. In contrast, the blue elements describe the procedure of the orientation estimation algorithm and are described in more detail in the later part of this work.
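As a rough sketch of the per-sensor steps (green boxes in Fig. 2), the following code computes a range-Doppler map from one raw-data frame and applies a simple 1-D cell-averaging CFAR; windowing, peak interpolation, and the MIMO AoA estimation are omitted, and all parameter values are assumptions for illustration.

```python
import numpy as np

def range_doppler_map(raw: np.ndarray) -> np.ndarray:
    """raw: (R ramps x samples per ramp) time-domain data of one RX channel."""
    rd = np.fft.fft(raw, axis=1)                           # range FFT (fast time)
    rd = np.fft.fftshift(np.fft.fft(rd, axis=0), axes=0)   # Doppler FFT (slow time)
    return np.abs(rd)

def ca_cfar(power: np.ndarray, train: int = 8, guard: int = 2, scale: float = 6.0) -> np.ndarray:
    """Boolean detection map from a simple cell-averaging CFAR along the range axis."""
    det = np.zeros_like(power, dtype=bool)
    for d in range(power.shape[0]):
        for r in range(train + guard, power.shape[1] - train - guard):
            noise = np.r_[power[d, r - train - guard:r - guard],
                          power[d, r + guard + 1:r + guard + 1 + train]].mean()
            det[d, r] = power[d, r] > scale * noise
    return det
```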
Based on the individual target list of each sensor, stationary targets are filtered out and used for a precise estimation of the vehicle's ego-motion, which is depicted in Fig. 2 as a red box. The ego-velocity of a vehicle is generally described by the two velocity components v_x and v_y as well as the yaw rate ω, which are shown in Fig. 1.
Based on the ego-motion estimation and random sample consensus (RANSAC) filtering, a genetic optimization algorithm (GA) is applied afterward to estimate the orientations ϕ^c_1, ..., ϕ^c_N of all N radar sensors, which is depicted as an orange box in Fig. 2. Therefore, the basics of the ego-motion estimation are derived first.

IV. EGO-MOTION ESTIMATION
The velocity components v_x and v_y as well as the yaw rate ω are estimated using a radar sensor network with at least two radar sensors (N ≥ 2) based on the TL [25], [26], [27]. The model in (1) describes the relationship between the measured target information v_r,n,m and φ^s_n,m of one sensor and the ego-velocity. Here, x^c_n and y^c_n denote the sensor positions and φ^c_n,m the AoA relative to the vehicle coordinate system. These relative AoAs are calculated with (2) in order to transform the angles from the sensor coordinate system φ^s_n,m to the vehicle coordinate system, where ϕ^c_n is the z-orientation of the sensor on the vehicle, as shown in Fig. 1.
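Since the display equations (1) and (2) are not reproduced here, the following LaTeX sketch gives one common form of this Doppler-based model; the overall sign of the radial velocity depends on the measurement convention and is an assumption on our part.

```latex
% Velocity of sensor n in vehicle coordinates induced by the ego-motion (v_x, v_y, \omega):
v^{c}_{x,n} = v_x - \omega\, y^{c}_{n}, \qquad v^{c}_{y,n} = v_y + \omega\, x^{c}_{n}
% Radial velocity of a stationary target m seen by sensor n, cf. (1) (sign convention assumed):
v_{r,n,m} = -\bigl( v^{c}_{x,n}\cos\varphi^{c}_{n,m} + v^{c}_{y,n}\sin\varphi^{c}_{n,m} \bigr)
% Transformation of the measured AoA into the vehicle coordinate system, cf. (2):
\varphi^{c}_{n,m} = \varphi^{s}_{n,m} + \phi^{c}_{n}
```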
If more than one radar sensor (N > 1) is used, the equations can be combined into a system of linear equations, and thus, the velocity vector of the vehicle's motion V_p can be unambiguously determined with the help of (3). Equation (3) can be solved in the least-squares sense for the desired velocity vector V_p using the Moore-Penrose pseudoinverse (·)^+, as given in (4). The vehicle velocity is determined based on stationary targets using the dependency described in (3). For estimating the vehicle's velocity, at least two sensors must detect a total of at least three targets to allow a velocity estimation for all three degrees of freedom (DoFs). Since (3) is only valid for stationary targets, nonstationary targets as well as false detections have to be filtered out. These outliers, which do not satisfy the searched motion model, are filtered out in the following using a random sample consensus (RANSAC) algorithm [28].
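A minimal sketch of the least-squares step in (3) and (4), stacking one row per detection and solving with the Moore-Penrose pseudoinverse; it assumes the sign convention of the model sketch above, and all names are hypothetical.

```python
import numpy as np

def estimate_ego_motion(phi_c, x_c, y_c, v_r):
    """Least-squares ego-motion V_p = [omega, v_x, v_y] from stationary detections.

    phi_c    : (M,) AoAs in vehicle coordinates in rad (all sensors stacked)
    x_c, y_c : (M,) position of the detecting sensor for each detection in m
    v_r      : (M,) measured radial velocities in m/s
    """
    # Each row encodes v_r = -((v_x - omega*y) cos(phi) + (v_y + omega*x) sin(phi))
    A = np.column_stack([
        y_c * np.cos(phi_c) - x_c * np.sin(phi_c),  # coefficient of omega
        -np.cos(phi_c),                             # coefficient of v_x
        -np.sin(phi_c),                             # coefficient of v_y
    ])
    return np.linalg.pinv(A) @ v_r                  # Moore-Penrose inverse, cf. (4)
```

Consistent with the text, at least three detections distributed over at least two sensors are needed before the three DoFs can be resolved.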

A. RANSAC-Filtering
To provide a reliable and robust ego-motion estimation, outliers must be filtered out, for example with an iterative RANSAC algorithm with I iteration steps. For this purpose, in each iteration step i ≤ I of the RANSAC algorithm, the velocity model V_p,i is estimated based on three randomly chosen targets from the target lists (TL_1, ..., TL_N), and subsequently the difference D to all measured radial velocities V_r is determined according to (5). The quality of the velocity estimate is then assessed on the basis of the number of inliers K in (6). Inliers are targets whose velocity error with respect to the currently estimated motion model is smaller than a suitable threshold T. The threshold has to be chosen such that the majority of all truly stationary targets lie within it despite measurement inaccuracies and noise. To ensure this, a threshold of T = 0.1 m/s is chosen for the evaluation. This iterative RANSAC process is described by (6), where D_j denotes the j-th element (j ≤ J, with J = O(D) the cardinality of D) of the vector D as defined in (7), and 1 denotes the indicator function in (8). The model that fits the current measurement best, and thus has the most inliers, corresponds to the most probable velocity vector V_p,K of all I iteration steps. Since the RANSAC filtering is based on I iteration steps and three randomly chosen targets, V_p,K only describes the best estimate based on three stationary targets, not on all stationary targets. To improve the velocity estimation, the vehicle velocity is therefore re-estimated based on all targets that satisfy the model V_p,K within the threshold tolerance T.
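A compact sketch of the RANSAC loop described above, reusing the hypothetical estimate_ego_motion helper; the threshold T = 0.1 m/s follows the text, while the iteration count and all names are assumptions.

```python
import numpy as np

def ransac_ego_motion(phi_c, x_c, y_c, v_r, iters=200, T=0.1, seed=None):
    """Return the ego-motion refit on the largest inlier set and the inlier mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(v_r), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(v_r), size=3, replace=False)    # three random targets
        V_p = estimate_ego_motion(phi_c[idx], x_c[idx], y_c[idx], v_r[idx])
        # Difference D between the hypothesized model and all measured radial velocities
        v_pred = -((V_p[1] - V_p[0] * y_c) * np.cos(phi_c)
                   + (V_p[2] + V_p[0] * x_c) * np.sin(phi_c))
        inliers = np.abs(v_r - v_pred) < T                   # indicator function
        if inliers.sum() > best.sum():
            best = inliers
    # Refit using every target that supports the best model (all stationary targets)
    return estimate_ego_motion(phi_c[best], x_c[best], y_c[best], v_r[best]), best
```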

V. ORIENTATION ESTIMATION
According to the velocity model (1) and the subsequent conversion to the vehicle coordinate system (2), it is evident that the velocity estimate depends significantly on the orientation of the sensors. Thus, the quality of the velocity estimation described in (1) is used to determine the most likely velocity for a given combination of sensor orientations ϕ^c_1, ..., ϕ^c_N. Therefore, the algorithm can be extended to estimate not only the velocity vector but also the orientation of all radar sensors installed on the vehicle. The approach is shown schematically in Fig. 2 in the orange-filled area and is explained in more detail in the following.

A. Basic Orientation Estimation (BOE)
For the purpose of orientation estimation, a cost function is initially set up that reflects the quality of the velocity estimation for different sensor orientations ϕ^c_1, ..., ϕ^c_N. This can be achieved by extending (5) to arbitrary sensor orientations, yielding (9), which determines the difference of all targets to a given velocity vector V_p,K and a given set of sensor orientations. Based on (9) and identical to the ego-motion estimation, the most probable vehicle velocity V_p,K for a specific orientation of the radar sensors is estimated by applying the RANSAC algorithm described in (6) another time, as given in (10). However, since the most likely sensor orientation ϕ^c_1, ..., ϕ^c_N is now of interest rather than the most likely vehicle velocity, (10) must be extended by a further minimization over all possible orientation angles. The most probable sensor orientation is obtained once the number of inliers is maximal, which is described by (12). This principle is illustrated in Fig. 3(a) and (b) based on a radar sensor network with N = 2 sensors. The two sensors have given ground-truth orientations and detect arbitrary targets. The x-axis represents the angle of arrival in relation to the vehicle coordinate system, and the y-axis represents the measured radial velocity. Each point represents a target detected by sensor S_1 (black) or sensor S_2 (blue).
A RANSAC-based ego-motion estimation with the ground-truth sensor orientations, as shown in Fig. 3(a), provides a robust velocity estimate based on many inliers (red filled), which lie within an appropriate threshold (turquoise). These inliers represent stationary targets (black for sensor 1, blue for sensor 2) that correspond to the velocity model. Once the estimation is performed based on identical targets but with incorrect sensor orientations, as shown in Fig. 3(b), an ego-motion estimation, depicted as a green line in Fig. 3(b), can still be performed in this scenario, but it differs from the correct velocity estimation, which is depicted as a violet line in Fig. 3(b). The blue targets are the same as the ones depicted in Fig. 3(a), but with a different calculated AoA in relation to the vehicle coordinate system due to the incorrect orientation of the sensor.
In addition, the estimate does not have the same quality because the number of inliers has been significantly reduced, since the inliers now only include targets of the second sensor, which is depicted in blue. As a result of the reduced number of inliers, the value of the error function g in (12) increases from g = 1/57 to g = 1/33, resulting in a less accurate and less plausible ego-motion, and thus, a less probable orientation estimate.
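Under the same assumptions as the sketches above, the inlier-based cost g of the BOE can be written as follows: the measured sensor-frame AoAs are shifted into the vehicle frame with the assumed orientations, the RANSAC ego-motion is run, and the reciprocal of the resulting inlier count is returned (the function name, the exact form of g, and the container layout are hypothetical; the paper defines g in (9)-(12)).

```python
import numpy as np

def boe_cost(phi_assumed, target_lists, sensor_pos, T=0.1):
    """g for one frame: 1 / (number of inliers) under the assumed sensor orientations.

    phi_assumed  : (N,) assumed mounting orientations phi^c_n in rad
    target_lists : list of N arrays, each (M_n, 2) with columns [phi^s, v_r]
    sensor_pos   : (N, 2) known sensor positions [x^c_n, y^c_n]
    """
    phi_c, x_c, y_c, v_r = [], [], [], []
    for n, tl in enumerate(target_lists):
        phi_c.append(tl[:, 0] + phi_assumed[n])              # AoA in vehicle frame, cf. (2)
        v_r.append(tl[:, 1])
        x_c.append(np.full(len(tl), sensor_pos[n, 0]))
        y_c.append(np.full(len(tl), sensor_pos[n, 1]))
    phi_c, x_c, y_c, v_r = map(np.concatenate, (phi_c, x_c, y_c, v_r))
    _, inliers = ransac_ego_motion(phi_c, x_c, y_c, v_r, T=T)
    return 1.0 / max(int(inliers.sum()), 1)                  # e.g., g = 1/57 vs. g = 1/33
```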
Since the orientation does not change during a measurement run over several frames F, multiple frames can be used for improved robustness. Therefore, the optimal velocity is estimated for each frame based on the sensor orientations ϕ^c_1, ..., ϕ^c_N, and the resulting number of inliers is determined. Numerical minimization of the function g with a GA yields the orientation of the sensors. To solve the general mathematical problem unambiguously, the vehicle velocity estimation must be restricted to a maximum of two degrees of freedom, ω and v_x, whereby v_y is not estimated and is assumed to be 0. Therefore, (1) is simplified to (13). This equation of motion can still be used to describe both a straight-line and a curved trajectory. If the trajectory has no curves and is purely linear, the equation of motion (13) can be further simplified to (14), which estimates only the v_x component and reduces the DoFs even further. Since a straight-line trajectory with ω = 0 is difficult to realize in reality, this simplification is used exclusively for the evaluation and comparison of straight-line and curved trajectories in Section VII.
The restriction to a maximum of two DoFs results from the fact that, as soon as there is only a linear motion of the car (only v_x and v_y), an identical sensor orientation offset of h degrees for all N sensors still leads to a linear motion of the car, but the car's motion vector is rotated by h degrees. This results in different velocity estimates for different sensor orientations. The problem in this case is that this incorrect velocity cannot be distinguished from the correct one, since the number of inliers is identical in both cases.
This problem can be clearly seen in Fig. 1. For a vehicle having only a v_x velocity, a local v^s_x velocity in sensor coordinates is estimated for the sensor S_4 according to (1). Once the vehicle velocity is not restricted to v_x and ω only and the orientation of the sensors has to be estimated, the sensor S_4 could also have the orientation of S_6, and thus be rotated by 90°. In this case, a local v^s_x velocity of the sensor is still estimated in sensor coordinates, but this results in a v_y velocity in vehicle coordinates. This ambiguity can thus only be resolved by restricting the vehicle motion to v_x and ω. Since vehicles drift only minimally even while cornering [29], and thus v_y ≈ 0 holds, in the following the velocity vector to be estimated is restricted to v_x and ω analogously to (13) to eliminate the ambiguities of the orientation estimation.
As the orientation estimation is based on the quality of the ego-motion estimation, the solution of the orientation estimation is unambiguous as soon as the local velocity of each sensor (v^s_x and v^s_y) can be determined unambiguously. This is guaranteed as soon as each of the N sensors detects at least M = 2 targets [25], [26], [27].
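The numerical minimization of g over all frames could be sketched as follows; scipy's differential evolution is used here only as a stand-in for the GA mentioned in the text, the multi-frame cost is simply the sum of the per-frame values of g, and, as discussed above, the ego-motion fit inside g would in practice be restricted to v_x and ω. All of this is an assumption, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import differential_evolution

def estimate_orientations_boe(frames, sensor_pos):
    """frames: list of per-frame target lists; returns one orientation per sensor in rad."""
    N = sensor_pos.shape[0]

    def multi_frame_cost(phi_assumed):
        # Sum of the per-frame inlier-based costs g over all evaluated frames
        return sum(boe_cost(phi_assumed, tls, sensor_pos) for tls in frames)

    bounds = [(-np.pi, np.pi)] * N                 # one orientation per sensor
    result = differential_evolution(multi_frame_cost, bounds, seed=0, maxiter=50)
    return result.x
```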

B. Advanced Orientation Estimation (AOE)
The orientation estimation (BOE) described in (12) is limited with respect to its maximum achievable accuracy, as can be seen in Fig. 3(c). If the sensor S_2 has only a small orientation deviation from the ground-truth orientation (for improved illustration ϕ_2 = 10°), all targets are still detected as inliers (for the chosen threshold). Because of that, the function g has an identical quality factor of g = 1/57 for both the correct [Fig. 3(a)] and the incorrect orientation [Fig. 3(c)]. The minimization of the function g, and thus the estimated sensor orientation, is therefore randomly distributed around the ground-truth orientation within a small tolerance given by the chosen threshold for the ego-motion estimation, which is shown in the measurement Section VII. In order to provide a more robust and consistent orientation estimate, an even finer orientation search based on a least-squares optimization of the velocity errors can be performed after applying the basic orientation algorithm described in Section V. Since the maximum number of inliers, which corresponds to the minimum of the function g, is known from the previous estimation calculated with (12), the errors between the estimated radial velocity and the measured radial velocity are minimized for all inliers. The boundary condition is the number of inliers known from (12), which is described by (15). This deviation is exemplarily shown in orange in Fig. 3(c) for one target. This least-squares (LSQ)-based optimization yields a more precise motion estimation and therefore a more precise and reliable orientation estimation. Moreover, compared to the deviations of the numerical minimization, a more robust and unambiguous orientation estimate can be provided, which reliably yields similar results, as shown in Section VII-A.
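For a single frame, the AOE refinement can be sketched as a least-squares problem over the fixed inlier set found by the BOE: the squared radial-velocity residuals are minimized jointly over the sensor orientations and the restricted motion (v_x, ω), starting from the BOE result. This is a hedged surrogate for (15); the names, the single-frame simplification, and the optimizer choice are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def aoe_refine(phi_init, phi_s, v_r, x_c, y_c, sensor_idx):
    """Refine sensor orientations on the fixed inliers of one frame.

    phi_init   : (N,) BOE orientation estimate in rad (initial guess)
    phi_s, v_r : (M,) sensor-frame AoA and radial velocity of the inlier detections
    x_c, y_c   : (M,) position of the detecting sensor for each inlier
    sensor_idx : (M,) integer index of the detecting sensor
    """
    N = len(phi_init)

    def residuals(p):
        phi_n, omega, v_x = p[:N], p[N], p[N + 1]            # v_y = 0, cf. (13)
        phi_c = phi_s + phi_n[sensor_idx]                    # AoA in vehicle frame
        v_pred = -((v_x - omega * y_c) * np.cos(phi_c)
                   + (omega * x_c) * np.sin(phi_c))
        return v_pred - v_r

    p0 = np.concatenate([phi_init, [0.0, 1.0]])              # start: omega = 0, v_x = 1 m/s
    return least_squares(residuals, p0).x[:N]
```

With several frames, the motion parameters would be estimated per frame while the N orientations are shared across all frames.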

VI. MEASUREMENT SETUP
The measurement setup to verify the theoretical derivations is shown in Fig. 4 and consists of seven incoherently connected chirp-sequence radar sensors [24]. The measurement setup corresponds to the sketch in Fig. 1. The sensors are mounted approximately equiangularly around the vehicle in 45° steps, providing a 360° FoV. The ground truth of the sensor positions relative to the vehicle coordinate system was determined using a Trimble tachymeter.
The measurement run was performed on a parking lot with not only stationary targets but also nonstationary targets, such as moving cars or pedestrians, which are filtered out using the RANSAC algorithm described in Section IV. No additional targets, such as corner reflectors, were set up in the parking lot. Thus, the following evaluations are based exclusively on extended targets (vehicles, trees, wooden fences, branches, gravel, and lampposts). For the time synchronization of the radar sensors, an external trigger is used. In combination with a small variation of the sensors' start frequencies, the time synchronization can be used for interference suppression between the sensors [30]. Due to identical radar parameters, apart from the start frequency of each sensor, the transmit ramps of all sensors have a constant frequency offset with respect to each other. Thus, no intersection between the ramps and therefore no interference in the IF band is to be expected. The radar parameters are listed in Table I.
The ground truth orientations of the sensors have been determined using an IMU-based algorithm. It was shown that with the help of an IMU the orientation of radar sensors can be estimated with an accuracy of about 0.05° [13]. Thus, the accuracies of the algorithm described in this work are compared to the orientation estimates from [13]. These are assumed as ground truth values in the following.

VII. MEASUREMENT RESULTS
At first, the basic orientation estimation algorithm described in (12) and the advanced orientation estimation algorithm described in (15) are compared to each other. Afterward, the robustness with respect to the number of sensors, the number of frames, the sensor orientation, and the sensor positioning is investigated and verified based on measurements and simulations.

A. Comparison of Orientation Estimation Algorithms: BOE versus AOE
In Section V, two different methods for the orientation estimation based on TL were presented: first, the BOE algorithm, which maximizes the number of inliers as described in (12), and second, the AOE algorithm, which minimizes the squared error between the found inliers and the estimated velocity according to (15).
The results of a measurement run with a straight-line trajectory, N = 7 sensors and F = 1000 frames are shown in Fig. 5.
In order to show the advantages of the least-squares-based AOE method (red curve) compared to the inlier-number-based BOE method (blue curve) described in Section V, 50 trials of the identical measurement run were performed. In Fig. 5, it can clearly be seen that as soon as only the number of inliers is used for the orientation estimation, the results of the orientation estimation vary, although the same number of inliers was always found using the GA. This has already been visualized in Fig. 3(c). The variance depends on the chosen threshold T, which in turn has to be adapted to the quality of the measurements, and thus, to the standard deviations σ_v_r and σ_φ. Using the BOE algorithm, the orientation of the sensors can be determined with an accuracy of 0.34° on average, with a standard deviation of about 0.1°. In contrast to this, the least-squares-based AOE method (red curve) provides a reliable and constant (within the numerical minimization limits) solution, which is better with only 0.26° deviation, and thus about 33 % more accurate.
In relation to an orientation estimation based on high-precision maps, which achieves an orientation accuracy of about 0.5° for each sensor [15], the error can be reduced by about a half with the algorithm presented in this work. A comparison with the angle estimation accuracy of the radar sensors illustrates the quality of the orientation estimation algorithm.
The standard deviation of the angle estimation of the used radar sensors, which have an aperture size in the azimuth direction of 4λ, corresponds to σ ≈ 1.2° for a target with an SNR of 40 dB and can be calculated with the expression given in [31]. The presented orientation estimation algorithm enables an orientation estimate exclusively based on TL which is significantly more accurate than the angular accuracy of the radar sensor.

B. Comparison of Different Trajectories
According to Section V, the described algorithms are only limited by the condition that the lateral vehicle velocity must be zero (v_y = 0). Since, according to [29], this can also be assumed for curved trajectories, the algorithm is applicable to both straight-line and curved trajectories.
The average orientation errors for different numbers of sensors and different trajectories are shown in Fig. 6. In each case, 50 different trials with a length of F = 1000 frames were evaluated.
For the straight-line trajectory, only the x-component of the velocity vector V_p was estimated according to (14), whereas for the curved trajectory, the components v_x and ω were estimated according to (13).
Analogously to Fig. 5, it is noticeable how the least-squares-based AOE solution method provides accuracy advantages in all categories compared to the inlier-number-based BOE solution method. Moreover, it can be seen in Fig. 6 that for a straight trajectory, as the number of sensors N increases, the estimation error improves only marginally from 0.30° with N = 2 to 0.26° with N = 7. For curved trajectories, the estimate improves significantly from 4.65° with N = 2 to 0.28° with N = 7 sensors.
In particular, the relatively poor estimation for N = 2 sensors with curved trajectories is remarkable, which is investigated in more detail in Fig. 7. It shows the logarithmic error function g for straight-line and curved trajectories and different numbers of sensors.
All plots in Fig. 7 are based on identical measured raw data from a straight-line trajectory with F = 1000 frames. Fig. 7(a) and (b) plot the error function g logarithmically for N = 2 and N = 7 sensors, respectively, with only sensors S_1 (x-axis) and S_2 (y-axis) shown. The minimum of the function can be clearly seen in both cases at approximately ϕ_1 = 234.5° and ϕ_2 = 271.5°. However, it is noticeable that once multiple sensors are used, as in Fig. 7(b), a much sharper and clearer minimum exists, thus resulting in a more precise and robust orientation estimate.
Once the identical straight-line trajectory is evaluated with the two DoFs v_x and ω, and thus for arbitrary trajectories, significant differences can be detected. The result for N = 2 sensors is depicted in Fig. 7(c), whereas the result for N = 7 sensors is shown in Fig. 7(d). As soon as only two sensors are used for the self-calibration, there is no definite global minimum of the function g but several minima, which all have similar values of the function g. All sensor orientations with similar minima of the function g have the property that the relative angle between the sensors is still estimated correctly, and thus, the angular offset is almost identical for both sensors. This is shown accordingly in Fig. 7(c) by mark A for the ground truth orientation and mark B for a possible estimated orientation. Thus, even with two sensors, the relative orientation of the sensors to each other can be estimated very well for curved trajectories, but a precise orientation estimation relative to the vehicle coordinate system is highly error-prone. Since these plots represent the error function of a straight-line trajectory for the solution based on an arbitrary trajectory with the two DoFs v_x and ω according to (13), the problem is of mathematical nature and not trajectory related. However, as soon as more than N = 2 sensors are used, as shown in Fig. 7(d) for N = 7 sensors, a definite minimum, and thus, a definite orientation of the sensors in relation to the vehicle coordinate system can be determined for curved trajectories as well.
It should be noted that this phenomenon is not due to curved trajectories, but due to ambiguities in the ego-motion-based orientation estimation with 2 DoFs. Therefore, as seen in Fig. 7(c), the phenomenon can also be detected for straight-line trajectories with a 2-DoF velocity estimation.
Since the algorithm is based on ego-motion estimation and not based on the detection of identical targets, it is not necessary to evaluate multiple frames, which also allows online calibration.

C. Online Calibration
Since the orientation of all sensors on the vehicle does not change during a measurement of several frames, the orientation estimation can also be performed on the basis of several frames, which leads to an integration gain. The average error for the joint estimation of N = 7 sensors is shown in Fig. 8 for 100 different subsequences. Since it was shown in Section VII-A that the accuracy can be improved using the AOE compared to the BOE algorithm, only the AOE method is shown.
The integration gain from evaluating multiple frames can be seen for both the straight trajectory with a 1-DoF motion estimation and the curved trajectory with a 2-DoF motion estimation. According to the previous findings from Sections VII-A and VII-B, the evaluation of multiple frames provides significantly more advantages for the curved trajectory than for the straight trajectory due to the more ambiguous solution of the problem. However, depending on the required precision, the sensor orientation ϕ^c_n can be estimated to at least 1° independently of the trajectory, even with only one frame. Furthermore, it becomes apparent that the orientation estimation has a bias error of approximately 0.26° regardless of the number of sensors (Fig. 6) and regardless of the number of frames evaluated (Fig. 8). There are three main factors causing this offset, as follows.
Incorrectly measured sensor positions lead to an incorrect conversion from the sensor coordinate system to the vehicle coordinate system, especially in the case of curved trajectories. Furthermore, an incorrect intrinsic calibration leads to a nonlinear error in the AoA estimation. Ultimately, the algorithm is based on the assumption of nonelevated targets (θ = 0°). With the 1-D angle estimation used for the radar sensors employed, incorrectly estimated angles of arrival are the consequence for elevated targets. Once these errors do not occur, a bias-free orientation estimation is possible, as shown by the simulation results in Fig. 9.

VIII. QUALITY CRITERIA FOR OPTIMAL SENSOR PLACEMENT
The accuracy of the orientation estimation does not only depend on the trajectory, the number of sensors, and the algorithm used but also on the sensor constellation. In order to investigate the influence of the sensor constellation, a total of four different sensor configurations with two or three sensors as well as straight and curved trajectories were evaluated according to Table II. Due to the large number of position and orientation possibilities, the evaluation is based on simulation data.
The error of the orientation estimation when applying the AOE algorithm for multiple sensor configurations is illustrated in Fig. 9 for F = 200 evaluated frames and 250 Monte Carlo trials with randomly generated TL. Fig. 9(a)-(d) indicate possible sensor positions and sensor orientations and the corresponding logarithmic average orientation estimation error in degrees of all N sensors of the network. The x- and y-axes correspond to the vehicle coordinate system according to Fig. 1, and thus denote the x_n and y_n position of the respective sensor with respect to the origin of the vehicle coordinate system at the center of the rear axle. Fig. 9(a)-(d) consequently represent a car with dimensions of 5.5 m × 2.5 m, which is illustrated exemplarily with a smaller car in Fig. 9(c).
In order to reduce the complexity of the simulation, the orientation of each sensor is calculated from its position relative to the vehicle rotation point using the function atan2, the four-quadrant, two-argument, ambiguity-free arctangent. This corresponds to an omnidirectional view of the sensor network and is shown in Fig. 9(c) as an example for a sensor S_2,v in white. Thus, each x^c_n and y^c_n pixel corresponds to a sensor position and to a specific sensor orientation.
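Assuming the orientation assignment means that each simulated sensor points radially away from the vehicle rotation point at the origin, this corresponds to the following one-liner (a hedged reading of the text, since the display equation is not reproduced here):

```python
import numpy as np

def simulated_sensor_orientation(x_c: float, y_c: float) -> float:
    """Orientation phi^c_n of a simulated sensor at (x^c_n, y^c_n), pointing away from
    the vehicle rotation point at the origin (four-quadrant, ambiguity-free arctangent)."""
    return np.arctan2(y_c, x_c)
```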
Since the structure of all subfigures is identical, it is described exemplarily for Fig. 9(c). Fig. 9(c) shows the orientation estimation accuracy for a radar sensor network with N = 2 sensors, whereby the N-th sensor is variable with respect to position and orientation, and all other N−1 sensors have fixed positions and orientations. The fixed sensors are always shown as triangles; in Fig. 9(c), this is sensor S_1 at the front of the hood. The N-th sensor S_N [in Fig. 9(c) exemplarily S_2,v] is located at arbitrary positions x^c_n and y^c_n. The z-value at the position of the N-th sensor describes the averaged logarithmic orientation estimation error of all N radar sensors in this network. In this case, the average error for the fixed sensor S_1, which is always located at the front of the vehicle in Fig. 9(c), and the variable sensor S_2, which is located, for example, at the rear left bumper, is 0.1° (a logarithmic value of −1). Fig. 9(a) illustrates the average orientation error for a sensor network consisting of N = 2 sensors and a curved trajectory, where one sensor is located at position [x_1, y_1] = [3 m, 0 m] with orientation ϕ_1 = 0°. The second sensor is arbitrarily positioned and oriented at position x^c_n and y^c_n according to Fig. 9(a). The average orientation estimation error of both sensors over all 250 Monte Carlo trials per combination is plotted logarithmically at the position of the second sensor. The colorbar of the z-axis reflects errors between 0.1° and 3.1°. Fig. 9(a) shows that as soon as sensor S_2 has a y component similar to that of sensor S_1, with y^c_n ≈ 0, the orientation error is highest. According to (1), the motion model of each sensor has only a v^c_x and a v^c_y component, which in combination with N sensors can be transferred into the motion model of the car with V_p = [ω, v_x, v_y]. This transfer only works if at least N = 2 radar sensors are used; otherwise, the system of equations is underdetermined. According to (1), the robustness of the partitioning of the sensor velocities into the vehicle velocity is proportional to the distance between the sensors. Since v_y = 0 holds for the vehicle velocity in the y-direction, only the difference of the y_n-components is significant. Similar y components lead to a broad minimum of the function g according to Fig. 7(c), which leads to less accurate absolute orientation estimates.
As soon as the sensor S_2 is positioned near the rotation point [0, 0] of the vehicle, the orientation of both sensors can be estimated much better, since the yaw rate with respect to the sensor S_2 is decoupled according to (1). Furthermore, it can be seen that the estimation yields the best result as soon as both sensors are oriented in different directions with an orientation difference of ±90°.
The identical phenomenon can also be seen in Fig. 9(b) for a curved trajectory, in which the sensor S_1 is located on the left side of the vehicle at [x_1, y_1] = [0 m, 1 m] and exhibits an orientation of ϕ_1 = +90°. As soon as the two sensors whose orientations are to be estimated have similar y-positions, the accuracy of the orientation estimation decreases.
The orientation estimation of the sensors can be significantly improved as soon as the vehicle has only a v_x velocity, and thus only a straight trajectory exists. As soon as this is ensured, a simplified motion model can be used for the motion estimation according to (13). This is shown in Fig. 9(c) for a fixed sensor S_1 at position [x_1, y_1] = [3 m, 0 m]. Here, almost independently of the position and orientation of the second sensor, a precise orientation estimate with an accuracy smaller than 0.1° can be guaranteed.
Since a yaw rate of ω = 0 cannot be ensured in reality and the algorithm is to be applied in arbitrary scenarios, there is also the possibility to estimate the orientation of all sensors in the network with higher accuracy with the help of a third sensor, which is shown in Fig. 9(d). Here, the two fixed sensors are located at positions [x_1, y_1] = [3 m, 0 m] and [x_2, y_2] = [0 m, 1 m]. Especially compared to Fig. 9(a) and (b), an almost position-independent estimation quality can be obtained even for curved trajectories. The average orientation estimation accuracy for this sensor constellation with N = 3 sensors is approximately 0.25°.

IX. CONCLUSION
Two sequential algorithms for an efficient orientation estimation of distributed radar sensors on a vehicle have been presented in this work. It was shown that for both straight-line and curved trajectories, the orientation of all radar sensors in the network can be estimated solely based on target lists. The optimal sensor positioning to achieve the best orientation estimates was presented and evaluated using a Monte-Carlo simulation for different networks. In addition, it was demonstrated by measurement studies to what extent the estimation improves with an increasing number of sensors or an increasing number of evaluated frames. It was shown that the orientation of the sensors can be determined with an accuracy of up to 0.26° exclusively based on arbitrarily chosen targets for straight and curved trajectories. This is significantly more accurate than the standard deviation of the angle estimate of the radar sensors used. The independence from additional sensors such as an IMU or a GNSS, as well as the arbitrary positioning and orientation of the radar sensors without any restrictions such as overlapping FoVs, allows the utilization in many application areas.

Fig. 1. Radar network with N = 7 radar sensors in violet with orientations ϕ^c_n to be estimated. Vehicle coordinate system [x^c, y^c] in blue (dashed) and corresponding velocity components in red (solid). Sensor coordinate system of S_5 in black [x^s, y^s] and an exemplary detected target t_1 with an AoA of φ^s_5,1 and a range of r^s_5,1.

Fig. 4. Experimental system setup with seven chirp-sequence radar sensors mounted on a car.

Fig. 5. Average orientation estimation accuracy for N = 7 sensors and F = 1000 frames based on the BOE algorithm (blue) and the AOE algorithm (red).

Fig. 6. Average orientation estimation error for different numbers of sensors and trajectories.

Fig. 7. Logarithmic representation of the error function g for straight-line trajectories (top) and curved trajectories (bottom) and N = 2 sensors (left) and N = 7 sensors (right). The x-axis corresponds to the orientation ϕ_1 of sensor S_1 and the y-axis corresponds to the orientation ϕ_2 of sensor S_2 in degrees for the measurement setup from Fig. 4. (a) Straight trajectory, N = 2. (b) Straight trajectory, N = 7. (c) Curved trajectory, N = 2. (d) Curved trajectory, N = 7.

Fig. 8. Average orientation estimation error for different numbers of evaluated frames and trajectories.

Fig. 9. Average logarithmic orientation error for different trajectories, sensor poses S_1, S_2, S_3, and numbers of sensors. The positions and orientations of all N−1 fixed sensors are indicated by arrows. The N-th sensor is variable, is located at the x^c_n and y^c_n position in the vehicle coordinate system, and its pixel value represents the logarithmic average orientation estimation error. (a) Curved trajectory with N = 2 and S_1 = (3|0). (b) Curved trajectory with N = 2 and S_1 = (0|1). (c) Straight trajectory with N = 2 and S_1 = (3|0). (d) Curved trajectory with N = 3 and S_1 = (3|0), S_2 = (0|1).

TABLE I. Used Radar Parameters for Ego-Motion-Based Self-Calibration