Direct and Transitive 3D Localization Using a Zone-Based Positioning Service

We propose a novel indoor optical positioning technique called zone-based positioning. The approach enables coordinate prediction for a mobile user device with the assistance of trusted optical anchor points, angle diversity, and optical wireless communications. Then, through a process called transitive positioning, additional targets within the field of view of the device can also be positioned once the user device is localized. Through modeling and analysis, the predicted performance of the approach reaches less than 5cm mean square error for direct 3D positioning, with insignificant degradation for iterative transitive positioning. Experimental validation of the models using a prototype zone positioning unit demonstrates the potential of the approach, including opportunities for additional accuracy refinement.


I. INTRODUCTION
Indoor positioning is expected to be an enabler for many future mobility use cases, similar to what global navigation satellite system (GNSS) services provide outdoors. These use cases include navigation through public spaces, such as malls and hospitals, but also span position-based marketing, object labeling for augmented reality (AR), and physical control of remote objects. The key attribute is knowing position either absolutely within a frame of reference (such as globally, or within some more local regime, such as a building), or relatively with respect to oneself or an anchor point in the environment. In this paper we focus on relative positioning: quantifying as accurately as possible the position of a mobile user device with respect to one or more identifiable anchor points embedded in the environment. We also propose a means for estimating the positions of additional objects within the field of view (FOV) of the user device, after the device has positioned itself relative to an anchor. We call this approach ''Zone-based Positioning'' because it resolves the position of a device relative to an anchor and then estimates the positions of objects within its zone, as illustrated in Fig. 1. Reconciling absolute positions within a larger space, such as an office building, is very important and can be achieved through appropriate map representations as an extension of our technique, but is beyond the scope of this paper.
The associate editor coordinating the review of this manuscript and approving it for publication was Marco Martalo.
Indoor positioning has been achieved with promising but modest success using a wide variety of modalities, including radio frequency (RF), ultra-wideband (UWB), ultrasound, camera images, visible light, and infrared (IR) light [1]. Each medium has pros and cons; we seek a scalable approach using low-cost components with good accuracy, on the order of 10cm or less as measured in 3D mean square error (MSE). UWB methods perform well but are typically more complex and expensive, and they provide system-level knowledge of position (less privacy). RF schemes can piggyback on WiFi signals but have lower accuracy. Camera and image-based solutions can sacrifice user privacy in that they capture frames of other users unbeknownst to them, and they require more computation power. Visible light and IR-based approaches have good potential but can be less accurate due to large-FOV diffuse lighting sources. In our work, we adopt low-cost, low-intensity lasers, visible or IR, which improve the resolution for measuring angles and range.
Once a modality is established, positioning techniques usually require a reference anchor to establish an initial location (''position fixing''). Subsequent locations are estimated blindly (''dead reckoned'') until a new fix is established. Even cameras require fixed anchors to reference from. In practice, these position fixes can be very frequent and thus bound drift error [2], [3]. Accuracy depends on the accuracy of the position fix and the frequency of updates [2], [4], [5]. Our proposed zone-based positioning technique makes use of trusted photovoltaic active anchor points. Coupled with these anchor points and a controllable (user- or mechanically actuated) modulated beam, our technique both improves measurement accuracy and enables positioning of additional objects after the user device obtains its own fix.
A review of related works indicates that systems using angle diversity (angle of arrival, AOA) and time-based schemes (time-of-flight/time-difference-of-arrival, TOF/TDOA) have the best performance independent of medium [1], [6], [7]. However, the tight time synchronization that enables the high accuracies of time-based solutions also makes them hard to implement across large numbers of devices and spaces [8]. Although AOA can be used with beam-formed RF signals, light is inherently directional and works well with AOA approaches. Recent efforts in the light-based positioning literature target new AOA receivers for two-party bistatic systems, that is, infrastructure-based systems with active devices that locate themselves. These receivers can require numerous apertures [9], [10], rotating mechanisms [11], or structured tilts [12], [13], which are not ideal from a cost and implementation perspective for large numbers of devices. There is also increased focus, including from us, on angle-diverse transmitters [14]-[16].
In this paper we propose several improvements over existing approaches. These include (a) high-accuracy 3D positioning through laser targeting and range and angle measurement; (b) the ability to create a mobile positioning zone centered around a user with respect to an anchor; and (c) the use of a transitive technique to estimate the positions of other objects within the FOV of the same targeted positioning unit. The overall approach is called a Zone-Based Positioning Service (ZPS), Fig. 1. Under a ZPS, the Zone Positioning Unit (ZPU) carried by a user or robot locates anchor beacons with a narrow-FOV optical source. The position of the ZPU is determined from the pointing angles received at the beacon through the payload of the modulated optical source. Subsequently, once the ZPU is localized, it can position other objects within its zone by targeting them; we call this second iteration transitive positioning. A ZPS fuses three core technologies: active anchor points, angle diversity, and optical wireless communications (OWC). We first introduced a ZPS, without the transitive aspect, in a conference workshop proceeding [17]. This paper is a comprehensive extension of that work with additional simulated and experimental analysis. New in this paper is the proposition and evaluation of the novel transitive positioning construct.
The remainder of the paper is organized as follows: Section II proposes the Zone-Based Positioning Service in detail; Section III derives the system models; Section IV describes the experimental prototype; Section V analyzes and discusses results; Section VI concludes the paper.

II. ZONE-BASED POSITIONING SERVICE
Although there are many indoor positioning approaches, the concept of a Zone-Based Positioning Service emerges from recognizing that an angle-diverse mobile user device (UD) can enable high positioning accuracy not only for itself but also for other devices within its FOV. The approach works as follows: a UD (attached to a person, robot, etc.) positions itself relative to active anchor points in a space, and then additional devices are positioned relative to that initially positioned device. Key advantages of this approach include (a) the ability to tune positional accuracy commensurate with environmental context, (b) active positioning that limits location privacy exposure, and (c) the use of low-cost, readily available components.
Section II is an overview of the ZPS concept. With respect to the individual elements in a ZPS, the first directly positioned UD is called a Zone Positioning Unit, the active anchor points are called Trust Beacons (TBs), and the secondarily positioned devices in the transitive zone are called transitive devices. Fig. 2 highlights the ZPS as a block diagram. The ZPU achieves high-accuracy positioning through angle diversity: the ZPU continuously measures its orientation angles with an inertial measurement unit (IMU) and modulates them onto a narrow-FOV optical source. The TBs, equipped with photovoltaic elements, in turn decode the optical payload for the angular information. These angles, combined with the anchor coordinates and measured ranges to the TBs, allow determination of the ZPU position. Next, transitive devices can be passively or actively positioned. In the active case, angles are communicated similarly to the ZPU-TB interaction; in the passive case, the user targets and identifies the object with the ZPU to position it. In the following, we consider the role of each component and the two types of positioning realized by a ZPS: direct and transitive.

VOLUME 8, 2020

A. TRUST BEACONS
Trust beacons anchor positions within a physical space. They are fixed beacons placed in the environment with known coordinates. The TBs serve the important role of nulling position error for a UD that moves through the physical space. They are equipped with optical detectors to acquire line-of-sight (LOS) transmissions from the ZPU and with network connectivity to realize backhaul communications with UDs and a connecting network. TBs are placed in an environment with a density commensurate with the importance of positioning. This is an important aspect of the approach. For example, long corridors in a building necessitate neither high-accuracy positioning nor a high density of TBs.
In contrast, an active personal workspace can support multiple positioning use cases and TBs. The design of a TB is intended to be simple and inexpensive, made up of a microcontroller (MCU) and an optical detector (e.g., a photodiode, PD), Fig. 2. Simplicity and low cost will allow liberal deployment of TBs. TBs can be configured as a mesh or a centralized topology similar to other beacon types (e.g., Bluetooth Low Energy, BLE). This configuration simplifies deployment: devices need only be within range of each other or a centralized gateway. Finally, the TB infrastructure would ideally accommodate a wide range of ZPU configurations and upgrades.

B. ZONE-POSITIONING UNIT
The zone positioning unit is the mobile user device that positions itself; it is a feature module designed to measure range and angle with respect to trust beacons or transitive devices. The ZPU is the more complex device within a ZPS and enables different performance levels depending on part specifications; e.g., the positioning zone created can be larger or smaller. Typically, a ZPU includes an IMU to establish angle measurements, a narrow-FOV optical source for LOS OWC, and optionally a range sensor; we use a light detection and ranging (LIDAR) unit. We envision the ZPU integrated into a headset, although combining it with a mobile phone, wearable, or mobile robot are alternative configurations. The ZPU measures orientation with respect to its reference axes and then encodes that information onto the optical payload of its modulated laser in preparation to hit a target and complete a data transfer. The transfer engages when a user directs the laser beam onto a TB or transitive device. The active communication of the ZPU allows users to ''turn off'' positioning when not needed, which prevents unintentional positioning. Finally, for fast targeting of TBs or transitive devices, a feedback loop comprising micro-electromechanical system (MEMS) actuators [18] and gaze tracking [19] can enhance overall system performance.

C. OPTICAL COMMUNICATIONS AND POSITIONING
A distinguishing element of a ZPS is the interaction between a ZPU and a TB or transitive device via optical interconnection and measurement. In this subsection, we address direct positioning of the ZPU specifically; in the following subsection, we expand to include transitive devices. The communication link is based on a fundamental optical wireless communications concept: intensity modulation with direct detection (IM/DD) [20]. The narrow-FOV optical source (i.e., a low-power laser diode) at the ZPU acts as an intensity-modulated transmitter, and a simple PD at each TB serves as a direct-detection receiver. In our system, this optical link is visible, but it can be IR and can also be steered for wider coverage using MEMS steering [18], [21]. The optical communications link is unidirectional from ZPU to TB and continuously encoded with the current orientation angles of the ZPU, enabling AOA measurements at the TBs without the need for an angle-diversity receiver. Fig. 3 shows an example ZPU and TB interaction, where the signal is null at the PD until the laser at the ZPU targets the TB with its pointing angles, φ_tn, at a given time instant t_n. The laser communications payload also carries continuously updated range information, if present, and instructions on how to communicate back to the ZPU via an RF backchannel. Once the TB receives the payload, it relays a message back to the ZPU containing the received orientation angles along with its own coordinates. Finally, the ZPU computes its position using the TB coordinates and the measured angles and ranges.
The laser is continually modulated so that once it hits a target TB receiver, the TB can initiate a return call to the ZPU with updated measurements. The LOS nature of the laser beam and the near-instantaneous speed of light allow the beam to be treated as a ray in models. This implies that, between the TB and ZPU, the angle of departure is the same as the AOA. The latency in measured distances between the ZPU and TB is likewise negligible compared to macro motions. With regard to beam intensity and modulation, the payload is small enough that on-off keying (OOK) at low intensities is sufficient. We have demonstrated this form of laser modulation in our ray-surface positioning (RSP) implementation [22]. Where LIDAR is used for ranging, the laser payload can be designed to incorporate the LIDAR pulses, or a separate LIDAR unit can be colocated with the laser.
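To make the OOK payload concrete, the sketch below packs the ZPU state into a small UART frame and decodes it on the TB side. The frame layout here (a sync byte, angles in centi-degrees, range in millimeters, and a one-byte XOR checksum) is a hypothetical design of ours for illustration, not the prototype's actual framing:

```python
import struct

SYNC = 0xA5  # hypothetical start-of-frame marker

def pack_payload(pitch_deg, yaw_deg, range_mm):
    """Pack pitch/yaw (as centi-degrees) and range (mm) into an 8-byte frame."""
    body = struct.pack('<hhH', round(pitch_deg * 100),
                       round(yaw_deg * 100), range_mm)
    chk = 0
    for b in body:
        chk ^= b                      # simple XOR checksum over the body
    return bytes([SYNC]) + body + bytes([chk])

def unpack_payload(frame):
    """TB-side decode: verify sync and checksum, return (pitch, yaw, range)."""
    assert frame[0] == SYNC and len(frame) == 8, "bad frame"
    body = frame[1:7]
    chk = 0
    for b in body:
        chk ^= b
    assert chk == frame[7], "checksum mismatch"
    p, y, r = struct.unpack('<hhH', body)
    return p / 100.0, y / 100.0, r
```

At 115.2kbps, such an 8-byte frame occupies well under a millisecond of air time, so the angles can be refreshed continuously while the user sweeps the beam.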

D. TRANSITIVE POSITIONING
A key feature of carrying an AOA sensor in the ZPU is that the ZPU can position other devices in its vicinity, its ''zone,'' which we call transitive positioning. Transitive positioning is accomplished by reusing the same laser and range sensor that the ZPU uses to target TBs to also target secondary peripheral devices. We call these secondarily positioned devices transitive devices; they can be active devices with PDs or passive objects with no electronics. In the case of active transitive positioning, the technique is similar to RSP [15] and direct positioning: the angles are decoded by an active receiver. The key difference between RSP and active transitive positioning is that the steerable laser unit is mobile in transitive positioning, whereas it is fixed in RSP. At a lower cost than the ZPU, each active transitive device needs only a photodiode and MCU, the same hardware build as a TB; there is no need for a laser or IMU.
Transitive positioning can also be applied to passive objects (i.e., with no PDs). Passive transitive positioning relies on the user to identify and target the passive object with the ZPU. Once the object is targeted and confirmed by the user (e.g., human-aided, or via gaze tracking and MEMS steering), the same angle and range measurements, without data transfer, are used by the ZPU to position the passive object. The directly positioned UD then knows the positions of these passive objects.
In theory, transitive positioning compounds errors, but certain applications can tolerate the additional error given the low or zero hardware cost. Moreover, we show in our results section that the compounded errors are negligible in 3D when using a consumer LIDAR unit.

III. SYSTEM DESIGN
In this section, we describe the system design and the geometric models developed to evaluate the proposed Zone-Based Positioning Service. We also analyze different configurations under the models when exposed to uniform noise. Fig. 4 illustrates one zone with respect to two TBs. The direct-positioning models for the ZPU recap material in [17]. Later, we show how the ZPS approach is extended to measure the positions of objects within the ZPU FOV.

A. ESTIMATING POSITION
In this section, we derive models for a ZPS considering the noise that exists in a practical system. With zero noise, these models produce exact positions with no error.

1) 2D POSITIONING
As a baseline, we solve the 2D scenario, in which the ZPU height is assumed known. Examples of known ZPU heights are ZPU headsets worn by the same user, fixed-height robots, and industrial machines; we assume a ZPU headset. The measured values at the ZPU headset are the pitch and yaw angles, φ_i and θ_i, where i refers to TB_i. When subjected to uniform noise, pitch and yaw become

φ̂_i = φ_i + φ_n, θ̂_i = θ_i + θ_n,

where φ_n and θ_n are uniformly distributed noise terms. We assume roll is negligible, as the headset sits symmetrically on the head of a person and parallel to the gravitational plane, so there is no change in roll. Because θ_i is measured from a fixed but unknown reference vector, v, there is no prior heading direction; we therefore use two TBs, placed a lateral distance C apart, to calibrate yaw, as shown in Fig. 4. Pitch and roll are measured with respect to the horizon and thus can be pre-calibrated. Using two TBs is not an issue if a high-speed MEMS steerer is incorporated to scan and target the two TBs. From the pitch angles, known ZPU height, H, and beacon coordinates based on the geometry described in Fig. 4, we can calculate the planar distances, A and B, between the user and the TBs:

A = Δ/tan(φ_1), (1)
B = Δ/tan(φ_2), (2)

where Δ = H − Z and Z is the z-coordinate of the TBs. H is known at the ZPU and Z is communicated from the TB to the ZPU on target. We next resolve the yaw orientation of the headset. We designate the angle difference between the beacons as θ_C = θ_2 − θ_1. We then use the Law of Sines to solve for the angle θ_A:

θ_A = arcsin(A sin(θ_C)/C). (3)

Although C is defined and known, confining C may force triangle ABC not to converge, resulting in a scenario where no triangle solution is possible from the measured data. In that case, we can estimate the lateral displacement as Ĉ using the Law of Cosines:

Ĉ = √(A² + B² − 2AB cos(θ_C)). (4)

This allows for a solved triangle every time with the measured noisy data, albeit sometimes with greater or lesser errors. We explore the effects of using C versus Ĉ in the results section, as they affect errors based on TB placement.
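A minimal sketch of the 2D estimator, assuming TB_1 at the origin and TB_2 displaced by C along the y-axis as in our experiments. One implementation choice of ours: the interior angle at TB_1 is recovered with the Law of Cosines rather than an arcsin, which is ambiguous when the triangle is obtuse:

```python
import math

def estimate_2d(phi1, phi2, theta_c, H, Z, C):
    """2D ZPU position from pitch angles to two TBs and the yaw difference
    theta_c, with TB1 at the origin and TB2 at (0, C). Angles in radians,
    signed so that Delta = H - Z and phi share the same sign."""
    delta = H - Z                           # signed vertical offset to TB plane
    A = delta / math.tan(phi1)              # planar ZPU-TB1 distance, Eq. 1
    B = delta / math.tan(phi2)              # planar ZPU-TB2 distance
    # Estimated displacement C-hat (Law of Cosines): always a solvable triangle
    C_hat = math.sqrt(max(0.0, A*A + B*B - 2*A*B*math.cos(theta_c)))
    # Interior angle at TB1 (opposite side B), via the Law of Cosines to
    # sidestep the arcsin ambiguity for obtuse triangles
    cos_b = (A*A + C_hat*C_hat - B*B) / (2*A*C_hat)
    theta_b = math.acos(max(-1.0, min(1.0, cos_b)))
    return A * math.sin(theta_b), A * math.cos(theta_b)
```

With noiseless inputs the estimator recovers the true coordinates exactly; noisy angles perturb A, B, and Ĉ, producing the location-dependent errors studied below.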

2) 3D POSITIONING
For the 3D scenario, when height is unknown, which is common, another measurement in the form of range is required. Although we configure our system for LIDAR, range information can be provided in many ways with different accuracies: light and RF ranging via RSS, RADAR, TOF, etc. We use specifications from a consumer-grade Class 1 LIDAR for our simulations; the Garmin LIDAR-Lite has an accuracy of ε_R = ±2.5cm [23]. With LIDAR, the radial distance between the TB and the user, R_i, is measured, where i refers to TB_i. A and B are now calculated from R_1 and R_2:

A = R_1 cos(φ_1), B = R_2 cos(φ_2). (5)

Now we can solve for height with either TB_1 or TB_2 using Equation 1, since Δ = A tan(φ_i) = R_i sin(φ_i):

Ĥ = Z + R_i sin(φ_i). (6)

We can average the two values for a better height estimate. In fact, if height is static, we can average any number of beacon measurements encountered for height improvements over time:

ẑ = Z + (1/N) Σ_{i=1}^{N} R_i sin(φ_i), (7)

where N is the total number of beacons with measurement data, R_i is the radial distance between TB_i and the ZPU measured with a ranging sensor, and φ_i is the angle related to the radial distance measured by the IMU. In practice, R_i is subject to error, ε_R.
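The height-averaging step can be sketched as follows, assuming all TBs share the common height Z and using the Δ = H − Z sign convention (pitch is negative when the TB sits above the ZPU):

```python
import math

def estimate_height(Z, ranges, pitches):
    """Averaged ZPU height over N beacons: z-hat = Z + (1/N) * sum of
    R_i * sin(phi_i). Assumes every TB sits at the common height Z."""
    n = len(ranges)
    return Z + sum(R * math.sin(p) for R, p in zip(ranges, pitches)) / n
```

Because each term R_i sin(φ_i) is an independent noisy estimate of Δ, averaging over more beacons reduces the variance of the height estimate over time.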

3) TRANSITIVE POSITIONING
Finally, we look at the second-hop positioning, where the ZPU positions an object or device. The yaw angle, θ_D, between a TB and a transitive device is measured at the ZPU similarly to how θ_C was measured in the initial positioning stage. Fig. 5 shows the transitive geometry with respect to the direct positioning zone for the case where yaw is positive, 180° > θ_D > 0°, and for one quadrant of the transitive ZPU zone. When yaw is negative, the geometry is flipped. To avoid repetitiveness, we derive for the shown quadrant; the other three quadrants are obtained through coordinate transformations. First, we solve for the planar distances between the transitive device, TB, and ZPU, which make up triangle TBD. Since we are working in 3D, we assume we have ranging information. T and B are calculated as A and B were in the 3D case, Eq. 5, using radial distances R_T and R_1 respectively. D and θ_T are calculated using the Law of Cosines and Law of Sines, respectively, as we did previously:

D = √(T² + B² − 2TB cos(θ_D)), (8)
θ_T = arcsin(T sin(θ_D)/D). (9)

We also know the estimated angle between the TB anchor and the ZPU based on the estimated ZPU location:

θ_3 = arctan(ŷ/x̂). (10)

Finally, for the 180° > θ_D > 0° case, we can set the constraint θ_4 = 90° − θ_T − θ_3 and estimate the transitive device location, (x̂_t, ŷ_t):

x̂_t = D sin(θ_4), ŷ_t = D cos(θ_4). (11)
The z-coordinate estimate, ẑ_t, is based strictly on the pitch angle, φ_T, whose sign determines whether the device sits above or below the ZPU:

ẑ_t = Ĥ + R_T sin(φ_T). (12)
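A sketch of the transitive estimate for the quadrant shown in Fig. 5, under our assumptions that TB_1 sits at the origin and the device ray lies counterclockwise of the ZPU ray as drawn; as in the 2D sketch, the angle at the TB is recovered with the Law of Cosines to avoid the arcsin ambiguity:

```python
import math

def estimate_transitive(x_hat, y_hat, R1, phi1, RT, phiT, theta_d, h_hat):
    """Transitive-device position for the Fig. 5 quadrant (TB1 at origin,
    0 < theta_d < pi). x_hat, y_hat, h_hat are the previously estimated
    ZPU position and height; angles are in radians."""
    B = R1 * math.cos(phi1)              # planar ZPU-TB1 distance, per Eq. 5
    T = RT * math.cos(phiT)              # planar ZPU-device distance
    # Planar TB1-device distance via the Law of Cosines over triangle TBD
    D = math.sqrt(T*T + B*B - 2*T*B*math.cos(theta_d))
    # Angle at TB1 opposite side T; Law of Cosines form sidesteps the
    # arcsin ambiguity (our implementation choice)
    theta_t = math.acos(max(-1.0, min(1.0, (B*B + D*D - T*T) / (2*B*D))))
    theta_3 = math.atan2(y_hat, x_hat)       # TB1-to-ZPU bearing
    theta_4 = math.pi/2 - theta_t - theta_3  # device bearing from the y-axis
    x_t = D * math.sin(theta_4)
    y_t = D * math.cos(theta_4)
    z_t = h_hat + RT * math.sin(phiT)        # pitch sign sets above/below
    return x_t, y_t, z_t
```

Note that the ZPU estimate (x̂, ŷ, Ĥ) feeds directly into this second hop, which is why transitive errors compound on top of direct-positioning errors.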

B. EXPLORING DIFFERENT CONFIGURATIONS
This subsection investigates the optimal parameters of a ZPS based on the models above. These initial parameters are explored using the 2D model with noise levels based on practical components. Later, in the results section, we show simulated and experimental results for 3D and transitive positioning. For test coordinates, we choose a 1m × 1m plane offset 1m from TB_1 in both the x and y dimensions. This choice reflects the ZPU being unlikely to sit at large angles away from the TBs in normal use; it could also be built in as a requirement for optimal performance. The ZPU height is fixed for all results at 1.654m, an approximate average human height. TBs are placed at a height of Z = 2m unless otherwise specified. Table 1 summarizes these parameters (and experimental parameters used later), where the not-previously-defined parameters X and Y are the test coordinate locations in the general case.

Fig. 6a explores position estimates as a cumulative distribution function (CDF) of the MSE for different TB placement heights and lateral displacements C between the TBs across the entire test space under the baseline parameters. Of note, a small C results in weaker performance regardless of height. The best performance occurs with the TBs placed at 1m, a difference of 0.654m from the user height, which illustrates that placing the TBs in a plane far from human height is ideal: angle measurement noise has a smaller effect on large angle deviations. Fig. 6b shows different values of C and the maximum error in the test space at different TB heights. Along with confirming that a large height difference yields the best performance, Fig. 6b also shows the optimal C displacement for a given TB height.
For displacements of around 0.5m to 1.5m, the results are similar, with no significant improvement; placing TBs 0.5m to 1.5m apart is thus ideal. We carry this conclusion into our experiments, where we use C = 0.5m. Fig. 6c shows the difference when finer-quality steering is introduced via MEMS steering in the yaw axis. MEMS steering does reduce position estimate errors, but not drastically. However, MEMS steering is still desirable for certain use cases, as the actuators provide fast convergence in the form of high speed and large FOV: a Mirrorcle Scan Module has a scan rate of 1 kHz and a FOV of 40° [24]. This is important for mobile devices and use cases requiring fast acquisition of TBs with less effort from the user. Although not studied, MEMS steering can also be used for pitch. We expect that decreasing the noise on both pitch and yaw would yield incremental, but not significant, performance gains. Fig. 6d shows the difference when predicting position using the known TB displacement, C, versus the estimated displacement, Ĉ. Interestingly, either C or Ĉ can perform better than the other depending on TB height placement. There is no hard and fast rule on whether the known or estimated displacement should be used, as it depends on TB location. Fortunately, the TB location is known to the user device when estimating and can be used to optimize performance. In the case of uncertainty, however, Ĉ always ensures that an estimate can be made and is the better default choice for displacement. From this, we decide to use Ĉ for our experiments.
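The parameter sweeps of this subsection can be reproduced in miniature with a Monte Carlo sketch such as the one below, which draws uniform ±0.5° angle noise and reports the mean squared 2D error over the 1m × 1m test plane; the trial count and seed are arbitrary choices of ours:

```python
import math, random

def mse_2d(Z, C, H=1.654, noise_deg=0.5, trials=200, seed=1):
    """Mean squared 2D position error over the 1m x 1m test plane for TB
    height Z and displacement C, with uniform +/-noise_deg noise on pitch
    and yaw (illustrative parameters)."""
    rng = random.Random(seed)
    noise = math.radians(noise_deg)
    d = H - Z                                   # signed Delta = H - Z
    errs = []
    for _ in range(trials):
        x, y = 1 + rng.random(), 1 + rng.random()   # random test point
        A = math.hypot(x, y)                        # true planar distances
        B = math.hypot(x, y - C)
        phi1 = math.atan(d / A) + rng.uniform(-noise, noise)
        phi2 = math.atan(d / B) + rng.uniform(-noise, noise)
        tc = math.acos((A*A + B*B - C*C) / (2*A*B)) + rng.uniform(-noise, noise)
        Ae, Be = d / math.tan(phi1), d / math.tan(phi2)  # noisy planar dists
        Ce = math.sqrt(max(0.0, Ae*Ae + Be*Be - 2*Ae*Be*math.cos(tc)))
        cb = max(-1.0, min(1.0, (Ae*Ae + Ce*Ce - Be*Be) / (2*Ae*Ce)))
        tb = math.acos(cb)                          # angle at TB1
        xe, ye = Ae*math.sin(tb), Ae*math.cos(tb)
        errs.append((xe - x)**2 + (ye - y)**2)
    return sum(errs) / len(errs)
```

With these parameters, a TB plane near the user height (Z = 1.9m) yields a noticeably larger error than one well away from it (Z = 1m), matching the trend discussed above.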

IV. EXPERIMENTAL PROTOTYPE
Now that we have presented models and explored TB configurations, we want to establish that the quality of the available measurement devices supports the exact models. We design an experimental prototype to validate our models under the inaccuracies of a real system. Our prototype ZPS is capable of direct 2D and 3D positioning as well as transitive positioning.

A. HARDWARE AND SETUP
Our ZPS experimental test apparatus is a full build of the ZPS for one potential zone. Fig. 7 shows the entire test apparatus with a prototype ZPU headset, two trust beacons, and a transitive device. We also implement real-time optical wireless communications between the ZPU, TBs, and transitive device. The ZPU consists of a 5mW 650nm red TTL (transistor-transistor logic) laser (Adafruit part no. 1056 [25]) and a laser driver (an Adafruit ESP32-variant microcontroller [26]) fitted to a pair of Bose Frames [27]. The Bose Frames is a commercial-off-the-shelf (COTS) audio AR headset with built-in motion sensors: an accelerometer, gyroscope, and magnetometer. We developed an Apple iOS app to tap into the Euler rotation angles of pitch, roll, and yaw exposed through the Bose Frames API. The inset of Fig. 7 shows the relative axes of pitch, roll, and yaw for the Bose Frames and thus the ZPU. We drive the laser using the UART communication protocol, a simple OOK modulation common with microcontrollers, at a modest baud rate of 115.2kbps, as speed is not our concern in this paper. However, the OWC literature shows Gbps data rates [28], [29]. Each trust beacon is a Thorlabs PIN PD (part no. PDA36A [30]) with another ESP32 MCU. The transitive device is an identical PD and MCU setup.
As for the test harness, we mount the prototype ZPU on an optical breadboard for stability, which in turn is fastened to a sturdy tripod, Fig. 7. The tripod lets us reliably and quickly adjust height, pitch, and yaw. We also mount the transitive device on an optical breadboard fastened to a tripod. The entire setup is placed within the coverage of a motion capture camera system (Optitrack [31]) to measure the coordinates of the ZPU down to millimeters to ensure correct placement and data collection. The IR retro-reflectors can be seen on the test equipment in Fig. 7. The TBs are placed on a cage as shown in Fig. 7.

B. SOURCES OF ERRORS
We characterize the repeatability of the Bose Frames because the Bose Frames API only exposes a rotation vector and does not give performance specifications for the sensors. In terms of repeatability, we find that under no motion, the device keeps the same measurements. We also measure the drift of the reported pitch, roll, and yaw when the device is exposed to motions confined to movements of less than 10m from the starting location and accelerations of less than 9.8m/s² (gravity).
First, we measure the pitch, roll, and yaw of the Bose Frames at a known location. Next, we move the Bose Frames erratically (e.g., motion that is jerky, smooth, quick, elaborate, small, large, etc.), place them back at the known location, and remeasure the pitch, roll, and yaw angles. We repeat this procedure for 20 samples. Fig. 8 shows the raw measured angles. We find that pitch and roll remain consistently within ±0.5° of the mean, whereas yaw drifts upwards. This is because pitch and roll are measured relative to gravity and the horizon, while yaw has no reference point. We can combat the lack of a yaw reference by calibrating between two points. Fig. 8 shows that when we take the difference between two yaw measurements, by rotating the device between two known orientations, the angle differences become consistent with the pitch and roll measurements, within ±0.5° of the mean. This experiment gives us confidence to trust the θ_C measurement mentioned in Section III-A for position estimates. We also use this ±0.5° error as the noise, φ_n and θ_n, in our simulations. We can increase yaw accuracy by using the yaw angle difference of a MEMS steerer, which can bring the noise error below 0.01°. However, as concluded in the results section, the importance of adding MEMS is faster convergence rather than increased accuracy. Finally, it is possible to reference magnetic north using the magnetometer, but that technique is not reliable indoors. There are also higher-end IMUs and different algorithms to explore, beyond the COTS Bose Frames system, for reliably calculating pitch and yaw at the ZPU.
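The two-point yaw calibration reduces to a wrapped angle difference; the helper below (our own convention, wrapping to (−180°, 180°]) cancels any constant or slowly drifting heading offset:

```python
def yaw_difference(yaw_tb1_deg, yaw_tb2_deg):
    """Drift-tolerant yaw difference between two targetings, in degrees.
    Any shared heading offset (the unknown reference v plus drift) cancels
    in the subtraction; the result is wrapped to (-180, 180]."""
    d = (yaw_tb2_deg - yaw_tb1_deg) % 360.0
    return d - 360.0 if d > 180.0 else d
```

This is the quantity used as θ_C: only differences of yaw readings taken close together in time are meaningful, since the absolute yaw reference drifts.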
As for ranging errors, we use ε_R = ±2.5cm as reported in the specifications of the Garmin LIDAR-Lite [23]. The LIDAR-Lite is a unit used in consumer projects; higher-end LIDARs and other types of range sensors are also available.

V. EXPERIMENTAL RESULTS AND DISCUSSION
For our results, we consider both simulations, based on the models, and experiments, from the prototype, to investigate the ZPS. The simulated conditions serve as a benchmark for the experimental results and also allow studying the effects of parameters where an experimental test was not taken. The experimental results show performance using relatively inexpensive COTS components and one TB configuration: TB_1 at (0, 0, 2)m and TB_2 at (0, 0.5, 2)m.

A. BENCHMARKING FOR 2D AND 3D POSITIONING
In this section, we benchmark the reliability of our models against experimental data. We then extrapolate in simulation for an optimized configuration based on the parameters explored in the previous section and predict performance given more precise components. Fig. 9a shows 2D positioning simulation results for 100 data points at four different locations: A (1, 1)m, B (2, 1)m, C (1, 2)m, and D (2, 2)m, at a height of 1.57m; this height was chosen because it is the default height of our tripod. The TBs were placed at (0, 0, 2)m and (0, 0.5, 2)m respectively. In this setup, we modeled noise at the IMU as a uniform distribution between ±0.5°; this value is obtained from the Bose Frames repeatability test, Fig. 8. Results show that accuracies are location dependent, as expected. We also generated a 95% confidence ellipse for each test location, within which 95% of the simulated data points fall. Fig. 9b shows results using data collected from the proof-of-concept prototype configuration at the same four locations, 20 points per location: A (1, 1)m, B (2, 1)m, C (1, 2)m, and D (2, 2)m, and a ZPU height of 1.57m. In addition to the mathematical models described in Section III, we used a small-angle approximation to account for the laser placement adjacent to the Bose Frames reference axes and corrected for the pitch angle due to tilt of the Bose Frames when mounted on the optical breadboard; this value was 4.35°. In Fig. 9b, we overlaid the 95% confidence ellipses generated from the simulated data, Fig. 9a, on the experimental data. We show that the experimental results are within the bounds of the simulated results. Given a laboratory setup where the laser is adjacent to, and not exactly on, the ZPU yaw axis, versus a dedicated industrial build, the experimental results are very promising and give us confidence in our models.
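The 95% confidence ellipses can be generated from the sample covariance of the simulated estimates. The sketch below is our own construction, using the closed-form eigendecomposition of the 2×2 covariance and the two-degree-of-freedom chi-square quantile of 5.991:

```python
import math

def confidence_ellipse(points, chi2_95=5.991):
    """Mean, semi-axis lengths (major, minor), and major-axis angle of the
    95% confidence ellipse for 2D samples, from the closed-form
    eigendecomposition of the 2x2 sample covariance."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx)**2 for p in points) / (n - 1)
    syy = sum((p[1] - my)**2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(0.0, tr * tr / 4 - det))
    l1, l2 = tr / 2 + disc, tr / 2 - disc         # eigenvalues, l1 >= l2
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)  # major-axis orientation
    return ((mx, my), math.sqrt(chi2_95 * l1),
            math.sqrt(chi2_95 * max(0.0, l2)), angle)
```

The returned semi-axes scale the covariance eigenvalues so that, for Gaussian-like scatter, roughly 95% of samples fall inside the ellipse.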
Next, we consider 3D positioning. Factoring in range from LIDAR greatly increases the positioning accuracy even though we are adding an additional cost to the positioning system. This is because LIDAR is a high accuracy methodology and provides better estimates for radial distances than estimating based off the IMU: using Eq. 1 versus Eq. 5. Building from the 2D simulation and experimental verification, we ran the same experiment as we did for 2D for 3D but also adding in measured values from a LIDAR-lite mounted on the same pitch and yaw axis of the Bose Frames (offset from the IMU, which we correct for). This time, we show results as a CDF of MSE). Fig. 10a shows the 2D data shown spatially in Fig. 9 in addition to the new 3D results. The experimental errors for 2D positioning performs slightly better than simulated data due to less sample points collected in the experiment. For 3D positioning however, the simulated and experimental data overlap almost identically, showing for locations A, B, C, and D less than 20cm MSE for 95% of cases, which is 2 times better than the 40cm MSE for 95% of cases for 2D. These results using our prototype further validate our developed models.
Given our confidence in our models, as illustrated in Fig. 9  and 10a, we can now model scenarios with improved hardware (i.e., optimized parameters), different TB configurations, and for a larger test space instead of just locations A, B, C, and D. Fig. 10b is the result of these efforts and show positioning MSE of less than 5cm for the baseline condition for 3D positioning under the test coordinates in Table 1, which is a vast improvement over the 20cm accuracy for 2D positioning, Fig. 6. The less than 5cm MSE is also comparable or better than other angle-diverse approaches. In addition, this accuracy can easily be implemented with the ZPU as specified for our prototype and range sensors just VOLUME 8, 2020  slightly better than our hobbyist sensor. The ZPU is thus a device that can be designed to come at different price points. In fact for coarser resolutions and inexpensive devices, the ranging unit can be lower in quality than our baseline.

B. TRANSITIVE POSITIONING PERFORMANCE
Finally, we discuss transitive positioning performance. Results show that the second iteration of positioning does not demonstrate significant degradation when compared to direct positioning and that transitive positioning is both a viable and high accuracy technique. We use the transitive algorithm described in Sec. III-A.3 In order to quantify our results, we first define zones for a give ZPU location. Zones are described as planes in this scenario and are squares centered at a ZPU test location. Fig. 11 shows four zones; two zones, 1 × 1m and 2 × 2m, per a ZPU test location, (1,1)m and (2,2)m. We also show the test locations of the ZPU following the quadrant model from Section III-A.3. Due to symmetry, the results of the remaining quadrants are expected to be the same. The height of the transitive devices are 1.1m, chosen for the height of our second tripod. The TB and transitive device are greater than 0.5m away from each other due to height difference. For our simulations, the ZPU is first estimated and then once that location is found, the location of the transitive device is pre- dicted using those ZPU estimates. Fig. 11b shows the results of these simulations. The main differentiator between the two clusters of error curves is due to direct positioning errors; the error computed for the ZPU. There is minimal error in the second iteration transitive positioning. The low compounded error is due to the LIDAR unit being a high fidelity sensor that little to no new errors are introduced. Our simulations also show that the size of the zone does not matter significantly. We can conclude that transitive positioning follows the same error bounds as with the ZPU direct positioning.
Finally, we test transitive positioning experimentally for 3 locations; the experiments are similar to the ones devised for direct 2D and 3D positioning. Here, we pick three locations in our test quadrant, A (0.5)m, B (0.5,1.5)m, C (1,1.5)m, and collet 20 data points for each location. We first estimate the ZPU position using experimental data and then use the noisy estimated positions to predict the position of the transitive device. We compare estimated positions with actual position of the ZPU and transitive device at each of the three points in Fig. 11c. We plot the CDF of these experimental results against simulated results in Fig. 11d. Again we found that similar to 2D and 3D positioning, experimental data matched simulated results, which implies that our developed models can accurately predict positions that we verified with experimental data.

C. DISCUSSION
Angle-diverse light-based solutions are favorable in that they enable 2D and 3D positioning. In terms of positioning accuracy, a ZPS performs on the same order of magnitude as other direct light-based angle-diverse schemes; representative approaches shown in Table 2. The distinguishing elements of each techniques is in the hardware used to acquire angle diversity. An advantage of a ZPS is that the hardware that enables angle diversity is already or becoming commonplace. IMUs are omnipresent in consumer devices and more devices are incorporating range sensors for depth perception. Having a network of TBs also ensure LOS is met compared to overhead lighting as the TBs do not necessitate overhead placement as visible light would. The mobile zone is also an advantage of a ZPS. The premium ZPU build using MEMS and gaze tracking would be more expensive but those elements are included for speed gains and passive transitive positioning. In terms of a cost comparison, there is insufficient information from related work and we are reluctant to quantify without complete data.
None of the compared solutions have the ability to position other devices in its vicinity. This attribute differentiates a ZPS from its peers; the laser and range sensor used to position the ZPU are also used to position peripheral devices. Cameras inherently also have this ability but the use of image processing to identify the objects in the image raises computation power and complexity. The active photodiode nature of a ZPS allows for exact identification via a communication handshake. For the other compared approaches, duplex transceivers are required to enable transitive positioning of peripheral devices. In their defense, these peripheral devices can be positioned directly for many of the schemes if every device was equipped with the special hardware introduced in the respective methods. However, the cost of single purpose hardware may not be worthwhile. In addition, a ZPS uniquely allows passive positioning of objects beyond PD enabled transitive devices. This gives our system the ability to position a variety of devices.
Finally, having the ZPU position devices in its vicinity shortens the physical distance between devices (TB, ZPU, TD) and avoids in some cases occlusions over long ranges. These shorter ''hops'' allow for low signal power from the laser-diode similar to repeaters that amplify signal strength and prevent signal degradation.

VI. CONCLUSION
In this paper, we propose a novel Zone-Based Positioning Service (ZPS) to realize indoor positioning of a user device and transitive positioning of additional devices within an envelope (zone) surrounding the user. A ZPS is supported by a device attached to the mobile entity (person, robot, or otherwise) which is able to measure angle and distance from an established anchor point called a Trust Beacon. Through modeling and analysis, the predicted positioning performance of a ZPS can be as low as 5cm given high precision components and 10cm using readily available commodity components. Experiments using a working prototype of a ZPS validates the developed models for 2D, 3D, and transitive positioning. In addition to demonstrating fine-grained indoor positioning, the ZPS concept has the potential to be a low-cost and ubiquitous positioning service.