A Wireless Local Positioning System Concept and 6D Localization Approach for Cooperative Robot Swarms Based on Distance and Angle Measurements

In wireless sensor networks, spatially distributed nodes provide location-dependent sensor information. Therefore, knowledge about the 3D position of all nodes is crucial for the numerous applications that require autonomous mobility. Furthermore, to acquire the nodes' poses and the complete 6D network constellation, the 3D orientation of each node is also required. While many theoretical localization concepts exist for wireless sensor networks, there is still a lack of reliable system and localization concepts that enable robust real-time tracking in real-world scenarios. Therefore, we present a system approach based on an advanced 24 GHz wireless local positioning system, providing distance and angle measurements between pairs of nodes. Furthermore, an extended Kalman filter-based localization algorithm is proposed, which evaluates these measurements to track the time-varying 6D poses of all nodes in the network. Because only relative measurements are available, one node is chosen to define a joint navigation system. Hence, the proposed system works without any previously installed infrastructure or prior information about the network. The system and localization algorithm are validated by measurements performed in a mobile wireless sensor network comprising six nodes in an indoor scenario with strong multipath propagation. Despite the challenging environment, the system allows for a stable and accurate 6D pose estimation of all robots in the network with 3D positioning root mean square errors of 6 to 15 cm.


I. INTRODUCTION
Wireless sensor networks (WSNs) comprise multiple nodes, each equipped with one or more sensors, with applications that include industrial monitoring, logistics, automotive safety [1], [2], and exploration by mobile robots in hazardous or global positioning system (GPS)-denied areas [3], [4]. Crucial to most of these applications is the knowledge of the sensor nodes' position and orientation in a global coordinate system, making localization in WSNs a fundamental task.
(The associate editor coordinating the review of this manuscript and approving it for publication was Weimin Huang.)
Localization techniques include range-free algorithms, which evaluate the connectivity information in known topologies [5], and range-based algorithms that use distance and/or angle measurements [6]. Each sensor performs these measurements in its own environment and coordinate system. Hence, reference nodes, called anchors, are often defined to combine the distributed information in a common framework. Normally, the anchors' positions are assumed to be known, for example, from GPS measurements [3] or because the anchors are installed at fixed locations [7]. Hence, they can serve as reference points for the localization of nodes with unknown poses. However, their usage requires additional sensors, which may not always be available [4], or the implementation of a measurement infrastructure, which limits the possible applications. Although most algorithms use few static anchors with multiple unknown nodes [3], [8], [9], variations include anchor-free localization [10], a moving anchor with otherwise static nodes [11], and multiple anchors used to localize a single target [4], [12], [13]. Whereas in non-cooperative WSNs the nodes are localized solely using the anchors, cooperative WSNs incorporate node-to-node measurements to further increase the localization accuracy [6]. Localization in many WSNs relies solely on measurements containing distance information [7], [14], [15], thereby enabling only position and no orientation estimation, or solely on angle measurements [8]; recently, however, hybrid approaches as in [3], [4], [9], [13], [16] have received more attention. Combining distance information and angle measurements leads to higher reliability and robustness, and reduced uncertainty, because each node collects more information; therefore, fewer nodes are necessary for unambiguous localization [4], [9].
(VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/)
In some approaches the relative position [17] or complete relative pose [18] between nodes are assumed to be available. Thereby, the localization problem is drastically simplified, while the demand on the measurement system increases.
Measurements in WSNs can be performed by different measurement systems. Camera-based systems can provide high localization accuracy, but are strongly impaired by non-ideal lighting conditions, dust, and rain, and provide a relatively low maximum measurement range [17]. Wave-based measurement systems are usually more robust and can perform different measurement operations. Distance information can be acquired using a variety of techniques [19], including time of arrival (TOA), time difference of arrival (TDOA), and received signal strength (RSS), and by using a variety of technical implementations, including RF systems and ultrasound systems [14], [19]. The angle of arrival (AOA) can be measured using directional antennas or antenna arrays [19]. Because angles are measured in the local frame of each node, the localization algorithm has to incorporate the node's orientation, resulting in a 6D pose estimation [8]. However, only a few localization algorithms estimate both the position and orientation [4], [20]. Instead, most works simply assume the orientation to be identical with the global frame for each node [9], [12], [16], thereby reducing the complexity of the estimation process drastically, or to be known from magnetic sensors [3], which provide a 2D orientation in certain environments. Hence, algorithms estimating only position while using angle measurements cannot be used in common realistic measurement scenarios.
The most commonly used algorithms solve the non-convex one-shot localization problem of a fixed node constellation by solving a relaxed convex optimization problem, as in [14], [16], [21]. For distance-only localization systems, multidimensional scaling (MDS)-based techniques are well established because of their simplicity, efficiency, and broad applicability [22]. Other algorithms use recursive approaches, hence allowing the tracking of moving nodes in mobile WSNs. In these approaches, the state of a nonlinear system is estimated using Bayesian estimation techniques [9], such as the extended Kalman filter (EKF) [4], [12], [13]. Furthermore, for real-time tracking, recursive approaches are more suited because of their low computational effort.

FIGURE 1. Illustration of the mobile WSN at time step k comprising N robots as nodes. The mth node is located at p_m,n,k with its local frame. Node N serves as the anchor; hence, its frame is the navigation frame. d_ij,k denotes the measured distance, φ_ij,k the azimuth angle, and ϑ_ij,k the elevation angle for a measurement from master node i to slave node j.
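To illustrate the MDS-based one-shot localization mentioned above, the classical algorithm can be sketched in a few lines. This is an illustrative stand-in, not part of the system proposed in this paper: given all pairwise distances, double-centering the squared distance matrix yields a Gram matrix whose leading eigenvectors recover the node geometry up to a rigid transformation and reflection.

```python
import numpy as np

def classical_mds(D, dim=3):
    """Recover node coordinates (up to rotation, translation, and
    reflection) from a matrix of pairwise distances D via classical MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]          # keep the `dim` largest
    L = np.sqrt(np.maximum(w[idx], 0.0))
    return V[:, idx] * L                     # n x dim coordinate estimates

# Nodes on a unit square: the recovered geometry preserves all distances.
P = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
X = classical_mds(D)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

Note that the result is anchored to no frame at all, which is exactly why such one-shot methods are usually combined with anchor nodes.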
Different approaches from existing localization concepts presented in the literature are summarized in Table 1. Most algorithms require multiple anchors at known positions or existing infrastructure, while often estimating only the nodes' positions, although the nodes' orientations are required for many possible applications. Furthermore, only very few approaches present full localization systems and validate their algorithms by measurements. In [7], multiple base stations are installed to track a single mobile node using RSS. Measurement results for one-shot localization using only distance information are presented in [14], [15]. The authors of [4] and [13] used an EKF with multiple anchors to track a single node using RTOA and AOA in 6D and in 3D (2D position and heading), respectively. While these works did successfully perform localization tasks, all of them reduced the problem complexity by not estimating the orientation, tracking only a single target, using multiple anchor nodes at known locations, or a combination of these.
In this paper, we present a system concept for full 6D tracking of multiple nodes in a mobile cooperative WSN, as illustrated in Fig. 1, by evaluating the node-to-node distance and angle measurements provided by secondary radar systems in an EKF. Since a single arbitrary node serves as the anchor to define the navigation frame, no further infrastructure is necessary in our approach. The proposed system and localization procedure are validated by measurements with six robots representing the nodes in a challenging indoor localization setting of 50 m², yielding stable and robust 6D pose estimates with 3D localization root mean square errors (RMSE) of 6 to 15 cm, depending on the node constellation. To the best of the authors' knowledge, no wireless local positioning system and localization concept presented in the literature has performed network localization with comparable complexity and measurement results.
The remainder of the present paper is organized as follows: In Section II, the proposed system for a WSN and the distance and angle measurements are described. Then, the localization algorithm is derived in Section III. The measurement results are discussed in detail in Section IV and compared to the existing approaches that provide experimental verification (see Table 1), followed by a conclusion.

II. PROPOSED SYSTEM CONCEPT
In this section, the proposed system concept for a mobile cooperative WSN, comprising N ≥ 3 nodes, will be described. The nodes in the WSN are represented by mobile robots, which may perform a certain task in a possibly unknown environment and may move freely and independently from each other. The simultaneous localization of all robots is based on the evaluation of cooperative distance and angle measurements between pairs of robots. Hence, first, the used measurement principle is presented, followed by a description of the WSN's organizational structure.

A. SECONDARY FMCW RADAR MEASUREMENTS
Every robot in the system setup is equipped with a secondary frequency modulated continuous wave (FMCW) radar, here using the available bandwidth of B = 250 MHz in the 24 GHz Industrial, Scientific and Medical (ISM) band. Although the primary FMCW radar principle [24] is well known for passive measurements of the environment, the secondary FMCW radar [25] allows for the cooperative measurement between two radar units, which will be called master and slave in the following. At time t = 0, the master first creates and transmits a local FMCW signal of the form

s_m(t) = A_m cos(2π f_c t + π (B/T) t² + φ_m),  0 ≤ t ≤ T,  (1)

called up-ramp, with the amplitude A_m, the carrier frequency f_c, the sweep duration T, and a starting phase φ_m, as well as a similarly formed down-ramp. The slave receives this signal with a delay τ = d/c_0, where d is the distance between the units and c_0 is the speed of light. The slave unit then itself creates a local signal, consisting of the up-ramp and an equivalent down-ramp, which is similar to the received signal, but whose frequency and time are slightly offset by Δf and Δt. These offsets can be determined by comparing the two signals' up- and down-ramps as in [26], allowing the slave to transmit a response signal that is synchronized to the master in time and frequency. Then, the master receives the synchronized response signal, which is mixed with the original transmit signal (1). After low-pass filtering, the resulting beat signal is then given by

s_b(t) = A_b cos(2π (f_off + (B/T) τ_rt) t + 2π f_c τ_rt + Δφ),  (3)

where the phase and frequency are proportional to the round-trip time of flight (RTOF) τ_rt, and thereby to the distance between the units, apart from an unknown phase component Δφ and a known frequency offset f_off, which is added at the slave unit to distinguish the desired signal from the primary radar response of the reflecting environment. In the presence of multipath, additional frequency components occur.
However, the line-of-sight (LOS) component can be identified very reliably by evaluating the beat signal component with the smallest frequency above a certain amplitude threshold, which corresponds to the closest distance. The use of the active slave unit leads to a high signal-to-noise ratio (SNR), especially for low distances as in an indoor scenario. This enables a reliable peak detection and leads to a low error variance σ_d of the distance measurement. A more detailed description of the measurement principle and the synchronization process is given in [25] and [26]. Angle measurements can be performed with an FMCW radar by evaluating the beat signal's phases in (3) at multiple, spatially distributed receiver antennas [27]. For this purpose, the radars used in this paper operate with two types of antenna arrays. A planar sparse array comprising eight patch antennas, as shown in Fig. 2a, allows for azimuth angle φ and elevation angle ϑ measurements by using a standard 2D delay-and-sum beamformer [28]. The measurement geometry will be discussed in greater detail in Section III-A. While cost efficient, the planar antenna design limits the field of view of this array to ±80°. Therefore, the second antenna type is a 16-channel ring antenna array, as shown in Fig. 2b, which allows for 360° azimuth measurements but does not provide elevation measurements. For this array, the antennas used for angle estimation are first chosen based on their received signals' amplitudes to ensure LOS connectivity to the slave. Then, the azimuth angle is determined using a 1D delay-and-sum beamformer. All antenna arrays were calibrated using the procedure described in [27], whereas an adapted version of the calibration, which includes the receive channel selection, was used for the ring array.
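The LOS identification described above can be sketched as follows. This is a simplified illustration, not the real signal processing chain: the sampling rate, amplitude threshold, and echo amplitudes are assumptions, and the slave's known frequency offset f_off is omitted so that the beat frequency maps directly to a distance.

```python
import numpy as np

# Illustrative parameters; the real system uses B = 250 MHz in the
# 24 GHz ISM band, but sampling rate and threshold are assumptions.
B, T = 250e6, 1e-3      # sweep bandwidth (Hz) and sweep duration (s)
fs = 2e6                # beat-signal sampling rate (Hz)
c0 = 3e8                # speed of light (m/s)

def los_distance(beat, threshold=0.1):
    """Identify the LOS as the lowest-frequency spectral component above
    an amplitude threshold and convert its beat frequency to a distance."""
    spec = np.abs(np.fft.rfft(beat)) / len(beat)
    freqs = np.fft.rfftfreq(len(beat), 1 / fs)
    f_beat = freqs[np.nonzero(spec > threshold)[0][0]]  # smallest qualifying bin
    tau_rt = f_beat / (B / T)                           # round-trip time of flight
    return tau_rt * c0 / 2

# Synthetic beat signal: LOS at 12 m plus a weaker, longer multipath echo.
t = np.arange(int(fs * T)) / fs
f_los = (B / T) * (2 * 12.0 / c0)
f_mp = (B / T) * (2 * 18.0 / c0)
beat = np.cos(2 * np.pi * f_los * t) + 0.4 * np.cos(2 * np.pi * f_mp * t)
```

Picking the lowest qualifying frequency rather than the strongest peak is what makes the detection robust against multipath echoes, which always travel a longer path than the LOS.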
Because of the high SNR, measurement errors caused by multipath are dominant in indoor scenarios, which will also be discussed in Section IV. Hence, the measurement error variance depends on the current nodes' poses and the possibly changing environment. Furthermore, the error variance for angle measurements depends on the measured angle itself, because the effective antenna array aperture is reduced for waves incident at large angles. Therefore, the resulting overall variances for distance σ_d, azimuth angle σ_φ, and elevation angle σ_ϑ are approximated by evaluating measurements at random positions of both the master and the slave within the area that is subsequently used for system verification. The estimates for the two array types within this scenario are listed in Table 2. Note that σ_ϑ,ring → ∞ holds because no elevation measurement is available at the ring array.

B. ORGANIZATIONAL STRUCTURE
In the proposed system, the measurement coordination and processing are performed by a central computer. This can be an independent unit, or an arbitrary robot can be chosen to serve as the central computer, enabling maximum mobility. Each robot in the WSN can be identified by a unique ID. Measurements are performed in cycles, in which every robot consecutively serves as the master and measures to all available slaves, one at a time. After each master-slave measurement, the results are sent to the central computer to update the real-time localization algorithm. Because the distance and angles are estimated directly on the FMCW radar platform, only the final measurement results need to be transferred to the central computer, leading to a minimal necessary data stream and approximately 12 radar measurements per second.
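The cyclic master-slave schedule can be sketched as a simple generator; the `excluded` set for node pairs known to be unable to measure is a hypothetical convenience, not part of the described system.

```python
def measurement_cycle(ids, excluded=frozenset()):
    """One measurement cycle: every robot consecutively serves as master
    and measures to all available slaves, one pair at a time.  Pairs in
    the (hypothetical) `excluded` set are skipped."""
    for master in ids:
        for slave in ids:
            if master != slave and (master, slave) not in excluded:
                yield master, slave

pairs = list(measurement_cycle(range(1, 7)))
# 6 nodes -> 30 master-slave measurements per cycle; at ~12 measurements
# per second, a full cycle therefore takes about 2.5 s.
```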
Generally, not every node pair can necessarily perform a master-slave measurement, because many possible node implementations will have a limited field of view or the LOS might be (temporarily) interrupted. Fortunately, since both angle and distance measurements are available, a node's position can be estimated by a single other node with a fully estimated pose. To determine its orientation, it must measure to at least two nodes with estimated positions. Hence, the WSN is assumed to consist of at least three nodes, including the anchor node.
If node pairs are known to be unable to perform measurements, they can easily be excluded from the measurement process. However, radar measurements may still be performed but suffer from systematic errors, especially if the LOS is interrupted only temporarily. Therefore, an outlier detection using the Mahalanobis distance

d_M = sqrt(z^T V^{-1} z),  (4)

where z denotes the deviation of a measurement from the expected value of a Gaussian distribution and V its covariance matrix [29], will be used during the localization algorithm in Section III-B3. A large Mahalanobis distance indicates an outlier; if it lies above a threshold, the corresponding measurement is, therefore, discarded. This threshold is typically chosen to be approximately 2-3, i.e., measurements that deviate by 2-3 standard deviations from the expected value are classified as outliers. However, as described before, the measurement variances given in Table 2 are only an approximation of the variances occurring on average in the given setup; the actual measurement variance depends on the nodes' absolute and relative positions. Therefore, we found d_M,thr = 10 to be a threshold for the given system that reliably detects outliers while not discarding measurements performed in settings that naturally lead to a higher variance.
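The outlier test with the threshold d_M,thr = 10 used here can be sketched as follows; the example covariance and deviation values are illustrative, not measured system parameters.

```python
import numpy as np

def is_outlier(z, V, d_M_thr=10.0):
    """Classify a measurement deviation z with covariance V as an outlier
    when its Mahalanobis distance exceeds the threshold d_M_thr."""
    d_M = float(np.sqrt(z @ np.linalg.inv(V) @ z))
    return d_M > d_M_thr

# Illustrative covariance for a (distance, azimuth, elevation) deviation.
V = np.diag([0.05**2, 0.02**2, 0.03**2])
z_ok = np.array([0.04, -0.01, 0.02])     # within a few standard deviations
z_bad = np.array([1.0, 0.5, -0.8])       # gross error, e.g. blocked LOS
```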

III. LOCALIZATION ALGORITHM

A. PROBLEM FORMULATION
We consider a WSN comprising N ≥ 3 nodes, as depicted in Fig. 1. Our aim is to track all nodes over time, that is, to estimate the 6D pose, consisting of position and orientation, of every node in a joint coordinate system, for every time step k. For this purpose, both the angle and distance between pairs of nodes are measured consecutively, as described above. Because these measurements are only relative within each node's local frame, no reference to an absolute coordinate system exists. Therefore, one arbitrary node serves as an anchor, and its local frame is defined as the navigation frame for the localization, denoted by the subscript n. Ideally, the anchor should be static because a fixed reference frame is preferred. Consequently, only the frames of M = N − 1 moving nodes need to be tracked relative to the anchor. The mth node's pose at time k is completely defined by its position p_m,n,k = (x_m,n,k, y_m,n,k, z_m,n,k)^T in navigation coordinates and the rotation matrix C_nm,k ∈ R^{3×3} that describes the rotation from its local frame to the navigation frame [30]. For the anchor, whose local frame is identical to the navigation frame, p_N,n,k = p_N,n = (0, 0, 0)^T and C_nN,k = C_nN = I_3 consequently apply, where I_n denotes the n × n identity matrix. In time step k, the ith node, called the master, measures the relative position of the jth node, called the slave, within its local frame, as shown in Fig. 1 and as described in Section II-A. RTOF measurements can approximately be modeled as

d_ij,k = ‖p_j,n,k − p_i,n,k‖ + ε_d,ij,k,  (5)

with the normally distributed additive measurement error ε_d,ij,k ∼ N(0, σ²_d,ij,k). The distance uncertainty, reflected by the error variance σ²_d,ij,k, might depend on the distance, the transmit power P_T, and the dominant type of measurement errors. Assuming thermal noise and external interferers to be the main error sources, the SNR decreases for larger measurement distances, whereby the variance of d_ij,k increases. In contrast, assuming multipath propagation and phase noise to be the main error sources, both the SNR and the variance of d_ij,k are approximately independent of the distance.
The angles are measured with respect to the coordinate axes of the master's local frame, as shown in Fig. 1. With the relative position

p_j,i,k = C_ni,k^T (p_j,n,k − p_i,n,k) = (x_j,i,k, y_j,i,k, z_j,i,k)^T  (6)

of node j in the local coordinate system of node i at time step k, we define the measured 2π-periodic azimuth and elevation angles as

φ_ij,k = arctan2(y_j,i,k, x_j,i,k) + ε_φ,ij,k,  (7)
ϑ_ij,k = arctan2(z_j,i,k, sqrt(x²_j,i,k + y²_j,i,k)) + ε_ϑ,ij,k.  (8)

The measurement errors are assumed to be normally distributed with zero mean, leading to ε_ϑ,ij,k ∼ N(0, σ²_ϑ,ij,k) and ε_φ,ij,k ∼ N(0, σ²_φ,ij,k). Again, the noise variance might depend on the measurement environment. The complete measurement z_k ∈ R³ at time instance k is denoted by

z_k = (d_ij,k, φ_ij,k, ϑ_ij,k)^T,  (9)

assuming just one node pair performed a measurement during this time step. In a measurement system enabling several node pairs to measure simultaneously, the measurement vector is stacked to include the other node pairs' measurements.
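The noise-free part of this measurement model, i.e. the distance, azimuth, and elevation a master would observe given its pose and a slave position, can be sketched as follows; the arctan2-based angle conventions and the example poses are illustrative assumptions.

```python
import numpy as np

def measurement_model(p_i, C_ni, p_j):
    """Noise-free distance, azimuth, and elevation as measured by master i
    (position p_i, local-to-navigation rotation C_ni) to slave j at p_j."""
    r = C_ni.T @ (p_j - p_i)             # slave position in master's frame
    d = np.linalg.norm(r)
    phi = np.arctan2(r[1], r[0])         # azimuth in the master's frame
    theta = np.arctan2(r[2], np.hypot(r[0], r[1]))  # elevation
    return d, phi, theta

# Master rotated by 90 degrees in yaw, slave 3 m away along the y_n axis.
psi = np.pi / 2
C_ni = np.array([[np.cos(psi), -np.sin(psi), 0],
                 [np.sin(psi), np.cos(psi), 0],
                 [0, 0, 1]])
d, phi, theta = measurement_model(np.zeros(3), C_ni, np.array([0.0, 3.0, 0.0]))
# The slave then lies on the master's local x-axis: d = 3, phi = 0, theta = 0.
```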
All non-anchor nodes m = 1, …, M are moving and rotating freely over time. At time step k, the mth node's movement can be described by its current state x_m,k, which comprises the node's current position and orientation, and all necessary corresponding derivatives of the pose. Then, the state transition model can be approximated by a possibly nonlinear state model [31] of the form

x_m,k = f_m,k(x_m,k−1, u_m,n,k) + g_k(w_m,k),  (10)

with the control input u_m,n,k ∈ R^u, which may include control signals and multiple impact factors, such as ground conditions and tire pressure, and the noise w_m,k ∼ N(0, Q_m,k), which depends on the previous pose and is transformed into the state space by a function g_k. Note that (10) is assumed to be Markov, which is a commonly used approximation, although this might not be correct for most practical cases. Bayes filters like the EKF, which is applied in this work, are known to be robust against violations of the Markov property, as long as the state is defined carefully [31].
Because the localization is a non-convex optimization problem, at time step k = 0 we assume to have a rough preliminary estimate of the initial position p_m,n,0 and/or orientation C_nm,0 of some of the nodes. In practice, this can be achieved, for example, by applying conventional localization approaches. One possible approach will be described in Section III-C.

B. EXTENDED KALMAN FILTER
The problem at hand is to track the pose of multiple nodes in a WSN, given nonlinear state and measurement models with normally distributed noise. However, the models used in our proposed system can be well linearized without causing large model errors, even for large state and measurement deviations. Hence, we propose to estimate the nodes' state over time by using an extended Kalman filter (EKF), whose computational efficiency allows real-time tracking even in large WSNs, yielding a practically oriented approach to 6D localization.
An EKF typically consists of two steps called prediction and update [31], which recursively estimate the system state x k at time step k. In doing so, the EKF estimates the mobile nodes' current position and orientation.

1) STATE VECTOR
The full system state x_k corresponds to the stacked states of all M non-anchor nodes, as described by (10). For each node m, it contains the complete pose, i.e., the position p_m,n,k and the Euler angles roll, pitch, and yaw q_m,n,k = (φ_m,k, θ_m,k, ψ_m,k)^T as a representation of the orientation. By choosing Euler angles instead of, e.g., quaternions, the number of variables is minimized, at the cost of risking gimbal lock [30], which is unrealistic to occur in the given setup. With the abbreviations c_α := cos α_m,k and s_α := sin α_m,k, the Euler angles can be converted into the rotation matrix C_nm,k introduced in Section III-A [32]:

C_nm,k = [ c_ψ c_θ    c_ψ s_θ s_φ − s_ψ c_φ    c_ψ s_θ c_φ + s_ψ s_φ
           s_ψ c_θ    s_ψ s_θ s_φ + c_ψ c_φ    s_ψ s_θ c_φ − c_ψ s_φ
           −s_θ       c_θ s_φ                  c_θ c_φ ].

In addition to the pose, we also estimate the nodes' current relative velocities v_m,n,k = ṗ_m,n,k with respect to the navigation frame. Consequently, for every node 1, …, M, a total of 9 variables is estimated, and the state vector follows as

x_k = (p_1,n,k^T, v_1,n,k^T, q_1,n,k^T, …, p_M,n,k^T, v_M,n,k^T, q_M,n,k^T)^T.

2) PREDICTION STEP
The state transition model, which depends approximately only on the prior state and a known control vector, can be described as

x_k = f_k(x_k−1, u_k) + G_k w_k,  (13)

with the system noise w_k ∼ N(0, Q_k), which is assumed to be linearly transformed into the state space by G_k. The prediction step [31] then follows as

x̂_k|k−1 = f_k(x̂_k−1|k−1, u_k),
Σ_k|k−1 = F_k Σ_k−1|k−1 F_k^T + G_k Q_k G_k^T,

with x̂_k|k−1 as the current prediction of the actual state x_k, its state covariance Σ_k|k−1 ∈ R^{9M×9M} at time step k, and the Jacobian matrix of the state transition model

F_k = ∂f_k/∂x |_{x̂_k−1|k−1}.

For the system at hand, the robots are assumed to move at an almost constant velocity with an almost constant orientation. In this case, the motion model (13) is linear, leading to

x̂_k|k−1 = F_k x̂_k−1|k−1,

with the block diagonal matrix

F_k = diag(F_1,k, …, F_M,k),

where 0_n denotes an n × n zero matrix and

F_m,k = [ I_3   Δt_k I_3   0_3
          0_3   I_3        0_3
          0_3   0_3        I_3 ]

denotes the sub-transition matrix for a single node with the time difference Δt_k = t_k − t_k−1. Changes in velocity and orientation are introduced by the noise vector w_k, whose entries correspond to random accelerations [33] and angular velocities, respectively.
Therefore, the covariance of the normally distributed system noise is described by

Q_k = diag(Q_1,k, …, Q_M,k),  Q_m,k = diag(σ_a² I_3, σ_q² I_3),

with σ_a and σ_q being estimates of the maximum occurring acceleration and angular velocity, respectively, which are both assumed to be the same in all dimensions and constant over time. Furthermore, G_k then follows as

G_k = diag(G_1,k, …, G_M,k),

consisting of M stacked blocks

G_m,k = [ (Δt_k²/2) I_3   0_3
          Δt_k I_3        0_3
          0_3             Δt_k I_3 ].

Note that further information such as the control vector or odometry data is well suited to improve the state transition model. However, we refrained from doing so to be independent of the used robots and the environment, thereby providing a more universally valid model.
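Under the constant-velocity, constant-orientation assumption, with the per-node state ordered as position, velocity, Euler angles, the prediction step can be sketched as follows; the concrete matrix layout is our reading of the model above, not verified against the authors' implementation.

```python
import numpy as np

def predict(x, Sigma, dt, sigma_a, sigma_q, M):
    """EKF prediction for M nodes, 9 states each (position, velocity,
    Euler angles), under a constant-velocity, constant-orientation model
    driven by random accelerations and angular velocities."""
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    F1 = np.block([[I3, dt * I3, Z3],
                   [Z3, I3, Z3],
                   [Z3, Z3, I3]])                # per-node transition
    G1 = np.block([[dt**2 / 2 * I3, Z3],
                   [dt * I3, Z3],
                   [Z3, dt * I3]])               # noise-to-state mapping
    Q1 = np.diag([sigma_a**2] * 3 + [sigma_q**2] * 3)
    F = np.kron(np.eye(M), F1)                   # block diagonal stacking
    G = np.kron(np.eye(M), G1)
    Q = np.kron(np.eye(M), Q1)
    return F @ x, F @ Sigma @ F.T + G @ Q @ G.T

# One node moving at 1 m/s along x_n, predicted 0.5 s ahead.
x0 = np.zeros(9)
x0[3] = 1.0                                      # v_x = 1 m/s
x_pred, Sigma_pred = predict(x0, np.eye(9), 0.5, 1.0, 0.1, M=1)
```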

3) UPDATE STEP
As described above, the robots measure sequentially to each other. Hence, during the update step at time step k, one measurement from master i to slave j is considered, as given by (9). Thus, the measurement model depends on which nodes serve as master i and slave j and can, therefore, be modeled by a nonlinear function h_ij,k as

z_k = h_ij,k(x_k) + v_ij,k,

with the measurement noise v_ij,k ∼ N(0, R_ij,k). Because of moving nodes and a changing indoor environment, we assume the noise to be independent of the system state. Although this is generally not correct, the assumption should be justified enough for the robust EKF [31] to work properly. If this is not the case, the reader is referred to [34]. Furthermore, all nodes use the same secondary radar and only differ in the equipped receive antenna array. Therefore, the error covariance is independent of the slave node and only depends on the master's antenna array. Hence, the covariance matrix can be formed from Table 2 and is given by

R_ij,k = diag(σ_d², σ_φ,planar², σ_ϑ,planar²)

for the planar array or

R_ij,k = diag(σ_d², σ_φ,ring², σ_ϑ,ring²)

for the ring array, depending on which array type is used on node i. The measurement model h_ij,k(x_k) can directly be derived by stacking (5), (7), and (8). The update step for the EKF [31] then follows as

s_k = z_k − h_ij,k(x̂_k|k−1),  (26)
x̂_k|k = x̂_k|k−1 + K_k s_k,
Σ_k|k = (I_9M − K_k H_k) Σ_k|k−1,

with the Jacobian of the measurement model

H_k = ∂h_ij,k/∂x |_{x̂_k|k−1},  (28)

and the Kalman gain

K_k = Σ_k|k−1 H_k^T (H_k Σ_k|k−1 H_k^T + R_ij,k)^{−1}.

Note that the innovation s_k in (26) is assumed to be normally distributed with zero mean [33]. Hence, because the angle measurements are 2π-periodic, the innovation must be mapped to the interval [−π, π) to justify this assumption. Therefore, the nth entry of s_k is given by

s_k,n = mod(z_k,n − h_ij,k,n(x̂_k|k−1) + π, 2π) − π  (30)

if z_k,n is an angle measurement and by

s_k,n = z_k,n − h_ij,k,n(x̂_k|k−1)  (31)

otherwise.
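A minimal sketch of this update step with the periodic innovation mapping is given below; the identity matrices stand in for the actual Jacobians and covariances, so this only illustrates the mechanics, not the system's models.

```python
import numpy as np

def wrap_to_pi(a):
    """Map angle innovations to the interval [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_update(x, Sigma, z, h, H, R, angle_mask):
    """Single EKF update for one (distance, azimuth, elevation) measurement;
    angle entries of the innovation are wrapped to keep them zero-mean."""
    s = z - h                                # innovation
    s[angle_mask] = wrap_to_pi(s[angle_mask])
    S = H @ Sigma @ H.T + R                  # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ s, (np.eye(len(x)) - K @ H) @ Sigma

# Toy example: identity models with equal prior and measurement uncertainty,
# so the update moves the state halfway towards the measurement.
mask = np.array([False, True, True])
x_new, Sigma_new = ekf_update(np.zeros(3), np.eye(3),
                              np.array([1.0, 0.1, -0.1]), np.zeros(3),
                              np.eye(3), np.eye(3), mask)
```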
One master-slave measurement yields information about the positions of both the master and the slave via the distance and angle measurements, and information about the orientation of the master via the angle measurements. Therefore, the Jacobian (28) will have only a few non-zero values at the corresponding entries. However, the complete state may be adjusted during the update step because of the cross-covariances with the other nodes. If further node pairs perform measurements during the same time step, the measurement vector and the measurement model have to be extended accordingly.
During the update step of the EKF, the outlier detection using the Mahalanobis distance is performed, as introduced in Section II-B. Therefore, the current estimate and uncertainty of the state are incorporated in (4), leading to

d_M,k = sqrt(s_k^T S_k^{−1} s_k),

with the innovation s_k from (30) and (31) and the innovation covariance S_k = H_k Σ_k|k−1 H_k^T + R_ij,k. If d_M,k lies above the threshold, indicating an outlier, the update step for this measurement is skipped.

C. INITIALIZATION OF THE STATE VECTOR
The 6D localization is a non-convex optimization problem. To track the global 6D localization solution using the EKF, the state vector must be initialized with a rough preliminary estimate x̂_0|0. For this purpose, any conventional localization approach that determines as many of the variables as possible can be used. Note that not all variables necessarily have to be estimated during this initialization step. However, for uninitialized variables, the corresponding entries on the main diagonal of the covariance matrix Σ_0|0 must be set toward infinity. For the system at hand, this step is solved by simply assuming all available measurements to be correct and, starting at the anchor, initializing all nodes that can be reached by successive measurements. First, the anchor can determine the position of all nodes that perform a measurement with the anchor as the master. Now, if one of these nodes measures to at least two other nodes whose positions have also been initialized, its orientation can be estimated using a least squares approach, as in [35]. Once a node's whole pose is initialized, this node can also be used as a master to calculate the position of its slave nodes.
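The orientation initialization from at least two bearings can be sketched with a standard SVD solution of the Wahba/orthogonal Procrustes problem, used here as a generic stand-in for the least squares approach of [35]; the direction vectors in the example are illustrative.

```python
import numpy as np

def orientation_from_directions(dirs_local, dirs_nav):
    """Least-squares rotation C_nm mapping unit directions measured in a
    node's local frame onto the same directions in the navigation frame
    (SVD solution of the Wahba/orthogonal Procrustes problem)."""
    B = np.asarray(dirs_nav).T @ np.asarray(dirs_local)
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))       # enforce a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Two non-parallel bearings to initialized nodes suffice to recover the
# node's orientation, here a pure 90 degree yaw.
C_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
dirs_local = np.array([[1.0, 0, 0], [0, 0, 1.0]])   # measured in local frame
dirs_nav = dirs_local @ C_true.T                    # same directions, nav frame
C_est = orientation_from_directions(dirs_local, dirs_nav)
```

With only one bearing, the rotation about that direction remains unobservable, which is why at least two initialized neighbors are required.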

IV. MEASUREMENTS
The proposed system and algorithm constitute a practical approach to localization in WSNs. Therefore, we performed measurement runs in a challenging indoor environment to evaluate their performance in real-world scenarios. In this section, the measurement setup will be described, followed by the results and a discussion, which includes comparisons with existing approaches.

A. MEASUREMENT SETUP
The measurement setup is shown in Fig. 3 and comprises N = 6 nodes distributed in an area of approximately 50 m². During the measurement, three nodes (1, 2, and 3) are moving around the area, while the remaining nodes, including the anchor (node 6), remain static. When in motion, the robots moved at speeds between approximately 0.1 and 0.25 m/s. To cope with the limited field of view of the planar antenna arrays, the nodes' orientations have been chosen such that each node can measure to at least two other nodes. Node 1 is equipped with the ring array and moves freely in between the remaining nodes, so its 360° antenna is exploited. For reference, a tachymeter measures one mobile node's trajectory, providing an accuracy of 3 mm. Unfortunately, no reference for the tracked node's orientation is available, so it can only be verified qualitatively. All static nodes are measured by the tachymeter, hence providing a quantitative reference. Furthermore, the mobile nodes without reference move on an approximately piecewise linear trajectory. Their start and end poses are measured by the tachymeter, and thus a qualitative reference for their trajectory and pose is available. Tables and ramps were used to ensure variation of the positions in the z_n-direction.
As described in Section II-B, measurements are performed at a rate of approximately 12 Hz for a single master-to-slave measurement. However, because measurements are performed consecutively, the average measurement update rate is only 0.4 Hz for a specific master-slave combination. While a robot does not currently serve as the master, information about its position is acquired only during every fifth measurement, i.e., with an update rate of 2.4 Hz, while orientation information is acquired only when it serves as the master, at a rate of 0.46 Hz. Assuming a robot moving at a peak velocity of 0.25 m/s, this corresponds to position information every 10 cm and orientation information every 54 cm. However, most of the time the robots move slower than 0.2 m/s, especially when performing turning movements, which result in large orientation changes.
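The quoted position update rates follow directly from the round-robin schedule, as this short bookkeeping check shows (assuming 6 nodes and 12 measurements per second, as above):

```python
# Update-rate bookkeeping for the 6-node round-robin measurement scheme.
N, rate = 6, 12.0                       # nodes, measurements per second
pairs_per_cycle = N * (N - 1)           # 30 master-slave combinations
pair_rate = rate / pairs_per_cycle      # specific pair: ~0.4 Hz
slave_rate = rate / (N - 1)             # measured as a slave in every fifth
                                        # measurement while not master: 2.4 Hz
v_max = 0.25                            # peak robot speed in m/s
dist_between_updates = v_max / slave_rate   # roughly 10 cm of travel
```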

B. RESULTS
We evaluate two different measurement runs, I and II. They were performed as described above and will be evaluated consecutively in this section, followed by a short evaluation of localization using only the available distance information.

1) MEASUREMENT RUN I
The localization results of measurement run I are depicted in Fig. 4a. Nodes 4 and 5 are static; nodes 1 and 2 moved on approximately linear trajectories, starting their movement at time t_k = 10 s. Node 3 was tracked by the tachymeter; it started moving at time t_k = 35 s and stopped at a position far away from the starting position at t_k = 80 s. In this node constellation, nodes 2 and 3, as well as nodes 2 and 5, did not see each other (see Fig. 3) and therefore did not measure to each other. The 3D RMSE of the localization and the RMSE along the three axes of the navigation coordinate system for all nodes with available references are listed in Table 3. Note that the first 3 s are the filter's settling time and are therefore not considered in the error evaluation. The cumulative error distribution function (CDF) for the positioning errors is depicted in Fig. 5, showing a stable and reliable localization with a maximum 3D positioning error of 0.3 m. Fig. 6 depicts the CDF of the Euler angles' absolute errors, exemplarily for the static node 4. With a maximum angle error of 4°, the CDF shows that the orientation is also estimated steadily and correctly.

FIGURE 4. Results for measurement run I: Node 6 serves as the anchor. Node 1 moved linearly, node 2 moved forwards and backwards linearly, both starting their movement at t_k = 10 s. Node 3 moved during the time from t_k = 35 s to t_k = 80 s and was tracked by the tachymeter for reference. The other nodes remained static.

TABLE 3. RMSE of the localization for the 3D error and for the three navigation axes, for the static nodes 4 and 5 and the moving nodes 3 and 1 in runs I and II, respectively.
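The two error metrics used in this evaluation, the 3D RMSE (Table 3) and the empirical CDF of the 3D positioning error (Fig. 5), can be sketched as follows. The trajectories below are synthetic stand-ins; in the paper, the reference comes from the tachymeter:

```python
import numpy as np

def rmse_3d(est: np.ndarray, ref: np.ndarray) -> float:
    """RMSE of the 3D position error; est and ref have shape (T, 3)."""
    err = np.linalg.norm(est - ref, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

def error_cdf(est: np.ndarray, ref: np.ndarray):
    """Sorted 3D errors and their cumulative probabilities (empirical CDF)."""
    err = np.sort(np.linalg.norm(est - ref, axis=1))
    prob = np.arange(1, err.size + 1) / err.size
    return err, prob

# Synthetic example: reference trajectory plus ~5 cm noise per axis.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 5.0, size=(200, 3))
est = ref + rng.normal(0.0, 0.05, size=ref.shape)

err, prob = error_cdf(est, ref)
print(f"3D RMSE: {rmse_3d(est, ref) * 100:.1f} cm")
```

Plotting `err` against `prob` yields a curve like those in Fig. 5; the per-axis RMSE values in Table 3 follow analogously from the individual coordinate errors.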
To further evaluate the results, the positioning errors of the moving node over time are shown in Fig. 4c for the three spatial navigation coordinates. Given normally distributed and uncorrelated measurement and state errors, the residual error of the EKF estimate should also be approximately uncorrelated [36]. Here, however, the errors are strongly correlated over time and are superimposed by small random instantaneous variations. Fig. 4b shows the azimuth and elevation measurement errors from the static master nodes 4, 5, and 6 (anchor) to the moving slave node 3. While node 3 was stationary during the periods at the beginning and the end, the errors show a very small variance caused by noise. However, these uncorrelated errors are dominated by an error that is constant during each static period. Furthermore, the constant errors at the beginning and the end differ strongly, indicating a spatial dependency. From this, we conclude that the main error source in this setup is multipath propagation, which is spatially and environmentally dependent and therefore causes correlated measurement errors between two static nodes in an almost constant environment. Accordingly, while the node moves between the static periods, the multipaths change over time, leading to more randomly distributed measurement errors. However, because the multipaths vary only slowly, measurement errors at adjacent positions remain correlated; see Fig. 4b. Furthermore, although node 3 does not move after t_k = 80 s, the localization errors in Fig. 4c, unlike the measurement errors, are not all constant but drift. Hence, because of the correlated measurements, the EKF slowly converges to a faulty pose estimate in the absence of further robot movement.
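The whiteness assumption violated here can be checked numerically with a sample autocorrelation of the residual sequence. A minimal sketch with synthetic data (the drifting, multipath-like offset is modeled as a scaled random walk; all names and magnitudes are illustrative, not taken from the measurement data):

```python
import numpy as np

def autocorr(residuals: np.ndarray, lag: int) -> float:
    """Sample autocorrelation of a residual sequence at the given lag."""
    r = residuals - residuals.mean()
    return float(np.dot(r[:-lag], r[lag:]) / np.dot(r, r))

rng = np.random.default_rng(1)
white = rng.normal(0.0, 1.0, 500)              # uncorrelated errors (ideal model)
drift = np.cumsum(rng.normal(0.0, 0.1, 500))   # slowly varying multipath-like offset
correlated = white + 5.0 * drift               # residuals dominated by the offset

print(f"lag-1 autocorrelation, white:      {autocorr(white, 1):.3f}")
print(f"lag-1 autocorrelation, correlated: {autocorr(correlated, 1):.3f}")
```

For white residuals the lag-1 autocorrelation stays near zero, while the drift-dominated sequence is close to one, mirroring the behavior visible in Fig. 4b and 4c.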

2) MEASUREMENT RUN II
We now evaluate the second measurement run, II, which is depicted in Fig. 7a. Here, node 1, equipped with the ring array, moved in the center and was tracked by the tachymeter for reference. Nodes 4, 5, and 6 are again static, with slightly different poses than in run I; nodes 2 and 3 moved approximately linearly. All nodes started moving after t_k = 10 s. Again, nodes 2 and 3, as well as nodes 2 and 5, did not see each other. For this run, the CDFs for the position and orientation are also shown in Figs. 5 and 6, and the RMSEs in Table 3. Although the results for the static nodes are similar in both runs, node 1 in run II performed considerably better than node 3 in run I. Fig. 7c shows the positioning errors of node 1 over time. Although the errors are still correlated over time, the effect is smaller than for node 3 in run I (see Fig. 4c), which is consistent with the better 3D localization result. Furthermore, the errors of the angle measurements from the static nodes to the slave node 1 are shown in Fig. 7b. As in run I, a constant error caused by multipath occurs while the node is static. However, comparing the errors of both runs shows that the multipath errors are less spatially correlated for node 1 than for node 3 (see Fig. 4b), leading to better positioning results. This can be explained by examining the nodes' trajectories with respect to the static master nodes (see Figs. 4a and 7a). Node 1 moves closer to nodes 4, 5, and 6, resulting in larger changes in the measured angles and, therefore, in an enhanced multipath randomization. Node 3 moves at the edge of the measurement area. Hence, although the trajectory's start and end positions are sufficiently far apart to cause the strongly differing offsets mentioned above, the larger distance to the static nodes results in a stronger correlation of the errors at adjacent positions.

C. DISCUSSION
So far, the evaluation has shown that multipath errors cause large deviations from the ideal system model, which assumes uncorrelated measurement errors. Depending on the constellation of the robots in the WSN, this leads to correlated measurements and, therefore, to offsets in the localization error, resulting in a degraded performance with a higher RMSE (see node 3 in Table 3). The impact of constant measurement errors caused by static node configurations should therefore be reduced by moving multiple nodes at once and by choosing paths that lead to a high multipath randomization. In the measurement runs presented in this work, these effects could only be reduced partially; however, the benefits of these measures became clear. Overall, the presented system and localization algorithm provide stable and accurate results for the 6D localization even in the presented challenging indoor environment, proving to be robust against deviations from the ideal system model. Given the small instantaneous variance of the localization errors in Figs. 4c and 7c, it is likely that the overall tracking performance could be further improved by choosing a more dynamic robot constellation, by measuring in an environment with fewer multipaths, or by using a measurement system that is less vulnerable to these types of errors.

1) COMPARISON TO DISTANCE ONLY
In the literature, many approaches to WSN localization rely only on distance measurements. To demonstrate the effectiveness of using both angle and distance measurements, a localization based on the same EKF, but exploiting only the available distance information, was performed for the same measurement runs I and II. Without angle information, no orientation can be estimated, reducing the problem to three dimensions per node. Now, a single node is no longer sufficient to provide an unambiguous coordinate frame; hence, the number of necessary anchors increases to three. Therefore, the static nodes 4, 5, and 6 were chosen to serve as anchors with known positions, further decreasing the number of estimation parameters. The localization results of the remaining three moving nodes are shown exemplarily in Fig. 8 for run II, divided into the x_n-y_n perspective and the y_n-z_n perspective for a clearer visualization. Node 1, which was here tracked by the tachymeter for reference, is localized with a 3D RMSE of 18.1 cm, which represents an acceptable deterioration compared to the results above. However, without angle measurements, nodes 2 and 3 perform significantly worse, with most of the error occurring in the z_n-direction.

FIGURE 7. Results for measurement run II: Node 6 serves as the anchor. Nodes 2 and 3 moved linearly. Node 1, equipped with the ring array, moved in the center and was tracked by the tachymeter for reference. All nodes started moving at t_k = 10 s. The other nodes remained static.
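The core of such a distance-only position fix can be illustrated with a Gauss-Newton iteration against three anchors at known positions. This is a simplified, snapshot-based sketch, not the EKF used in the paper, and the anchor geometry and true position below are invented for the example:

```python
import numpy as np

def gauss_newton_fix(anchors: np.ndarray, dists: np.ndarray,
                     x0, iters: int = 20) -> np.ndarray:
    """Estimate a 3D position from distances to anchors at known positions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                    # (N, 3) offsets to each anchor
        pred = np.linalg.norm(diff, axis=1)   # predicted distances
        J = diff / pred[:, None]              # Jacobian: unit line-of-sight vectors
        r = dists - pred                      # distance residuals
        x += np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Invented geometry: three anchors and a noiseless set of true distances.
anchors = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.5], [0.0, 5.0, 1.0]])
truth = np.array([2.0, 3.0, 1.5])
dists = np.linalg.norm(truth - anchors, axis=1)

est = gauss_newton_fix(anchors, dists, x0=[1.5, 2.5, 1.2])
print(np.round(est, 3))
```

With exactly three anchors, the solution is only unique up to a mirror image across the anchor plane, which is one reason the almost planar constellation discussed next degrades the z_n-estimate.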
These results can be explained by taking the constellation of the WSN into account. Despite the efforts to ensure variation of the nodes' positions in the z_n-direction, the constellation is spread mostly in the x_n-y_n plane, leading to a large uncertainty in the z_n-direction and, therefore, to the large localization error. Furthermore, this effect has a stronger impact on the localization of nodes 2 and 3, which move at the edge of the constellation, than on node 1, which moves in the anchors' center.
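This geometric effect can be quantified with a dilution-of-precision style computation: for a distance-only fix, the estimate's covariance is approximately proportional to (JᵀJ)⁻¹, with J stacking the unit line-of-sight vectors to the anchors. A sketch with an invented, almost planar constellation (small z_n spread, as in the measurement setup):

```python
import numpy as np

def axis_dop(anchors: np.ndarray, pos: np.ndarray) -> np.ndarray:
    """Per-axis dilution of precision for a distance-only fix at `pos`."""
    diff = pos - anchors
    J = diff / np.linalg.norm(diff, axis=1)[:, None]  # unit line-of-sight vectors
    cov = np.linalg.inv(J.T @ J)   # error covariance up to the range variance
    return np.sqrt(np.diag(cov))   # DOP along x_n, y_n, z_n

# Four anchors spread in the x_n-y_n plane but with little z_n variation:
anchors = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.2],
                    [0.0, 5.0, 0.1], [5.0, 5.0, 0.3]])
dop = axis_dop(anchors, np.array([2.5, 2.5, 1.0]))
print(np.round(dop, 2))  # z_n-DOP is several times larger than x_n- and y_n-DOP
```

The flat constellation makes all line-of-sight vectors nearly parallel in the z_n-component, inflating the z_n-variance exactly as observed for nodes 2 and 3.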
The same findings can be obtained from the distance-only results of run I, where the localization performed worse than in run II but showed the same characteristics. Here, node 3 was tracked by the tachymeter, showing a 3D RMSE of 84.3 cm with an RMSE in the z_n-direction of 79.7 cm.

2) COMPARISON TO OTHER APPROACHES
To assess the proposed system's performance compared to other approaches, Table 4 summarizes details on the measurement setups and localization results of the works in Table 1 with an experimental verification. The camera system proposed in [17] achieves the best localization results. However, no node orientation was estimated, and no measurements were performed between non-anchor nodes. Hence, the system only enables the precise localization of nodes which are directly seen by the anchor. In [7] and [14],

FIGURE 8. Localization results in the x_n-y_n plane (top) and the y_n-z_n plane (bottom) for measurement run II using only distance information. Nodes 4-6 served as anchors at known positions. The estimated trajectories for nodes 1-3 are plotted as dots. They can be associated with the corresponding node using the coordinate systems, labeled with the node number and with color-coded axes (red = x, green = y, blue = z), marking the starting poses of the nodes, as given by the reference. The reference trajectory of node 3 is plotted as a dashed black line.
the localization was performed using only distance information, enabling a coarse positioning without orientation estimation. Approaches more closely related to this work are presented in [4] and [13]. In both works, multiple reference nodes were used to track the pose of a single unknown node, with localization results comparable to the ones presented here. However, all measurements in these works were performed with respect to an absolute reference, eliminating the challenge of handling relative measurements between multiple nodes with unknown poses. The authors of [13] also identified multipath on the street as the main system limitation. However, because of their 2D scenario, this only led to an offset in the distance measurements, while the azimuth angle measurements remained bias free. Additionally, they supported the localization by incorporating odometry readings. In [4], the same radar system as in this work is used. However, the localization task's complexity is significantly reduced: only one node is estimated, all measurements are performed to an absolute reference, as mentioned above, and the tracked node was equipped with an inclinometer, providing highly accurate orientation information in two orientation dimensions. Hence, the 6D localization task for the RF measurements is actually reduced to a 4D localization, which is already possible using a single anchor-node pair; the remaining anchors then help to further improve this estimate. Nevertheless, the localization presented here leads to smaller positioning errors than in [4], showing the advantage of using multiple moving nodes over implementing static reference nodes.

V. CONCLUSION
In this paper, a system concept and an EKF for 6D localization in WSNs were presented. The localization does not require any prior information on the network and relies only on relative cooperative AOA and RTOF measurements between the nodes, provided by secondary FMCW radars. The approach was shown to provide reliable and accurate localization results and proved to be robust against deviations from the ideal system model. Therefore, the proposed system and algorithm offer a practically oriented solution to the problem of reliable, robust, and computationally efficient 6D localization in WSNs under real-world measurement conditions, which is easily applicable to many different scenarios because of its independence from available infrastructure.
The localized nodes can now be used to provide location- and orientation-dependent information, for example, by imaging the environment via a primary radar, as in [37]. Nevertheless, the localization performance can be further improved by decorrelating the distance and angle measurements. For the system used in this paper, this could be achieved by moving more robots at once, by directly incorporating the beat signals' phases into the EKF as in [38], and by using larger antenna arrays. For this purpose, a theoretical analysis of the proposed scheme, including the impact of correlated measurements, would enable an assessment of the localization performance and help estimate possible improvements. Additionally, the filter's update rate could be improved significantly by adapting the hardware to measure to multiple slaves at once. Furthermore, the localization algorithm is not bound to the proposed measurement system but can be applied to any system that provides angle and distance information between pairs of nodes in a WSN. Also, the EKF can be extended to use additional sensors, such as IMUs, and more advanced motion models. Finally, the initialization step of the localization algorithm could be skipped by using a multi-hypothesis Kalman filter, as in [39].