UAV Positioning Based on Multi-Sensor Fusion

Real-time and stable positioning data are essential for a UAV to perform various tasks. Traditional multi-sensor data-fusion algorithms need to know the measurement noise of the sensor data, and even when adaptive methods exist to estimate this noise, most of them cannot handle time-varying noise. In addition, traditional fusion algorithms are usually complicated and computationally expensive. In this paper, a multi-sliding-window classification adaptive unscented Kalman filter (MWCAUKF) with timestamp sort updating is proposed, which improves the accuracy and stability of positioning. The method consists of three phases. First, according to the timestamps of the sensor data, the multi-sensor data are fused into the filter in chronological order. Then the measurement noise of the multiple sensors is estimated through multiple sliding windows. Finally, a sensor-data classification method is adopted to deal with the filter instability caused by time-varying noise. Both theoretical analysis and experimental results show that this method has a low computational cost, high accuracy, and good stability.


I. INTRODUCTION
Positioning is at the core of UAV operation and plays an important role in UAV navigation [1], autonomous control [2], and the execution of various tasks. With the rapid development of science and technology, ever higher requirements are placed on positioning accuracy and reliability. It is difficult for a single sensor to meet the positioning needs of a UAV, and multi-sensor fusion positioning is the current development trend [3], mainly because multiple information sources can complement one another, forming a positioning system with multiple redundancies, higher positioning accuracy, and better stability [4].
At present, research on UAV positioning algorithms based on multi-sensor data fusion mainly builds on Kalman-filter [5] data fusion. According to the fusion mode, multi-sensor data fusion can be divided into centralized fusion Kalman filtering [7], [9] and distributed fusion Kalman filtering [8], [10]. Although the fusion accuracy of a centralized fusion filter is relatively high, it suffers from a large and complicated computation and places high demands on the central processor. The distributed fusion algorithm differs from the centralized one [11].
Instead of feeding the observations from each sensor directly into the fusion center, the distributed fusion algorithm first uses the local sensor data to predict the motion state of the target and then sends the prediction to the fusion center. It therefore has the advantages of a small amount of transmitted data, high reliability, and high scalability. However, during the one-step prediction of the distributed fusion method, the data dimensions obtained by the sensors differ because of the sensors' different characteristics, so the various data must be adjusted to matching dimensions [12]. Moreover, the data rates of the various sensors differ, so the data must be time-synchronized in the fusion center before the optimal fusion order can be determined [13]. These pre-fusion steps add many complex processing and computing stages to multi-sensor data fusion.
For the conventional Kalman filter algorithm [14], [15], the model structure and noise statistics of the dynamic system must be known; in practical applications, however, the models and noise statistics of many systems are unknown, so the Kalman filter loses its optimality and the estimation accuracy degrades or the filter even diverges. Adaptive Kalman filtering can solve this problem. Common adaptive filtering algorithms include the Sage-Husa adaptive Kalman filtering algorithm [16], the strong tracking filtering algorithm [17], and the fuzzy adaptive algorithm [18]. The Sage-Husa adaptive Kalman filter has high accuracy but poor stability, while the strong tracking filter has low accuracy but an extremely strong ability to track state changes. Literature [19] combines the advantages and disadvantages of the Sage-Husa adaptive filtering algorithm and the strong tracking filtering algorithm. The fuzzy adaptive filtering algorithm faces the problem of fuzzy-parameter selection; similarly, the fading-memory filtering algorithm [28], [29] faces the problem of fading-factor selection. The H∞ filtering [20] algorithm is very complex and computationally intensive, so it places high demands on the system hardware. Aiming at the above problems, this paper proposes a new multi-sensor fusion adaptive Kalman filter framework and applies it to the positioning system of a UAV. The main contributions of this paper are as follows.
• To address the need for time synchronization of the various sensor data in data fusion, this paper proposes a filter-update method based on timestamp sorting, which effectively avoids the multi-sensor time-synchronization problem and, at the same time, greatly reduces the computation of the filtering process. With this method, sensor measurements can easily be fused together regardless of the sensors' update rates.
• The proposed adaptive Kalman filtering method based on multi-sliding-window noise estimation can estimate the measurement noise of multiple sensors simultaneously. Unlike traditional adaptive filtering, which can only estimate the noise of a single sensor, this method estimates the measurement noise of multiple sensors within one filter at the same time and dynamically adjusts the filtering parameters during the filtering process to improve accuracy.
• The adaptive Kalman filter with multi-sliding-window noise estimation cannot effectively deal with the filter divergence caused by time-varying noise. This paper proposes a noise-classification method to improve the stability of filtering.

Section II of this article introduces the differences between the work in this paper and current related work. Section III gives a brief introduction to the entire system. The system is described in detail in Sections IV and V. Section VI analyzes the performance of the system. Section VII gives specific experimental results and analysis. Conclusions and future work are discussed in Section VIII.

II. RELATED WORK
There is a large body of work related to multi-sensor-fusion UAV positioning. Some of the related work referred to in this paper is compared below.
For multi-sensor data fusion, the time synchronization of the data directly affects the accuracy of the fusion. A method of sequential measurement updating is proposed in literature [22]. The update time nodes in that method are the same as the IMU measurement time nodes; the other sensors serve as auxiliary sensors to correct the IMU data, and their update nodes must fall on the IMU update nodes. That method requires time synchronization for all sensors; a specific synchronization method is given in literature [23]. This paper instead focuses on time sorting: all sensor data are sorted by timestamp, and data fusion is performed in the chronological order of data acquisition, without data synchronization. The method in this paper therefore transforms multi-sensor fusion into the fusion of a single datum with the system state.
A complicated filtering process leads to a huge increase in the amount of calculation. In literature [24], an improved Kalman filter is proposed that reduces the computation of the filtering process. The subband Kalman filtering method proposed in literature [25] likewise effectively reduces the computation of filtering. In addition, literature [26] proposed a weighted-fusion recombination-innovation strategy, which reduces the computation while suppressing the influence of noise on the fusion result. Compared with the centralized fusion method, the distributed fusion method requires much less computation; a method for comparing computation is given in [27]. This article is based on the timestamp-sorting update method, in which a measurement update is itself a filtered fusion update. Only one measurement needs to be processed per filtering update, which avoids processing all sensor data at once and effectively reduces the amount of calculation.
Adaptive filtering is very effective for sensors with unknown measurement noise. The fading-memory filtering method [28], [29] reduces the influence of outdated data on the current state by introducing a fading-memory factor into the filtering, bringing the system model close to the current real model. However, choosing an appropriate fading-memory factor is a difficult problem for this method. This paper uses a sliding-window-based noise-statistics method to avoid the selection of fading factors, removing outdated data in time and using up-to-date data throughout the filtering process. The data-fusion method [30], [31] based on fuzzy-logic adaptive filtering avoids the selection of fading-memory factors, but it suffers from low accuracy, a tendency to diverge, and difficulty in handling strongly time-varying noise. Similar to the algorithm in this paper, a randomly weighted sliding-window adaptive filtering method is proposed in literature [6]; because its weight selection is relatively random, that algorithm is not suitable for strongly varying noise. The strong tracking filtering algorithm proposed in [39] effectively addresses the filtering divergence caused by time-varying noise. Based on the characteristics of the sensor data, this paper instead uses a classification method that adjusts the measurement covariance and process covariance accordingly to handle strongly time-varying noise.

III. OVERVIEW OF MULTI-SENSOR FUSION POSITIONING SYSTEM
At the core of this paper, an adaptive filtering framework is constructed and applied to the positioning system of a UAV. The framework of the multi-sensor fusion positioning system is shown in Fig. 1.
The multi-sensor fusion positioning system mainly comprises data preprocessing, data timestamp sorting, multi-sensor filtering fusion, multi-sliding-window noise statistics, and filtering-anomaly capture. Data preprocessing mainly filters out invalid data to improve the stability of the filter, followed by coordinate transformation and state prediction. Data timestamp sorting is the basis of the filtering algorithm in this paper; multi-sensor fusion filtering based on this sorting changes the traditional multi-sensor filtering form, making the filtering process clearer and simpler. In addition, in multi-sensor data fusion it is unavoidable that some sensor measurement noise is unknown; multi-sliding-window noise statistics solve exactly this problem. However, when a statistics window spans too long a time, the estimate of time-varying noise becomes biased. In this paper, a filtering-exception capture module is added to the filtering process, and a processing method based on sensor classification is proposed to effectively avoid this situation.

IV. DATA PREPROCESSING
In this positioning system, data preprocessing includes validity identification of the data, coordinate conversion, and system-state pre-estimation. These processes transform the sensor data into a pre-estimate of the system state and facilitate fusion in the filtering system.
In order to verify the proposed algorithm, the sensors used in this paper mainly include a 9-axis IMU, GPS, and lidar. These sensors allow the UAV to obtain position data both indoors and outdoors. Before the measurement data enter the filtering-fusion system, validity identification filters out invalid data in advance, which greatly improves the fault tolerance of the filter-fusion system. For the sensors used in this paper, the validity analysis is as follows. Whether GPS data are valid can be determined from the number of satellites received by the GPS receiver: when this number is greater than or equal to 4, the GPS data are considered reliable; otherwise, they are unreliable. Similarly, whether the lidar data are valid is determined by the number of scanned data points. One frame of lidar data comprises 180 points; when fewer than 30 points are scanned, the lidar data are considered unusable by the scan-matching algorithm [37]. In order to output positioning data continuously, the validity of the IMU data is not verified in this paper. Other sensor data can be verified separately according to each sensor's features.
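The validity rules above can be expressed as a small pre-filter in front of the fusion system. The following is an illustrative sketch, not the authors' implementation; the `Measurement` container and its field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    sensor: str    # "gps", "lidar", or "imu" (hypothetical labels)
    stamp: float   # timestamp in seconds
    payload: dict  # raw sensor fields

def is_valid(m: Measurement) -> bool:
    """Validity rules from the text: GPS needs >= 4 satellites,
    lidar needs >= 30 of its 180 scan points, IMU is always accepted."""
    if m.sensor == "gps":
        return m.payload.get("num_satellites", 0) >= 4
    if m.sensor == "lidar":
        return len(m.payload.get("points", [])) >= 30
    return True  # IMU data are not validity-checked in this paper
```

Invalid measurements are simply dropped before they reach the filter, which is what makes the downstream fusion fault-tolerant.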
In this paper, the coordinate system of each sensor needs to be converted to a unified coordinate system [32]. This paper uses east-north-up (ENU) coordinates as the global coordinate system; longitude, latitude, and altitude are given in the geodetic (LLA) coordinate system. The formula for converting the LLA coordinate system to the ENU coordinate system is given in literature [40]. This article assumes that the attitude angles obtained by the IMU are expressed in the ENU coordinate system. The relationship between the global acceleration and the acceleration obtained by the IMU is

$$a_{enu} = R_I^B\, a_{IMU} \tag{1}$$

where $a_{enu}$ is the acceleration of the UAV in the ENU coordinate system, $a_{IMU}$ is the acceleration of the UAV in the IMU coordinate system, and $R_I^B$ is the rotation matrix from the IMU coordinate system to the ENU coordinate system.
where ψ, θ, and φ are the UAV's ENU-frame yaw, pitch, and roll, respectively, and c and s denote the cosine and sine functions:

$$R_I^B = \begin{bmatrix} c\psi\, c\theta & c\psi\, s\theta\, s\phi - s\psi\, c\phi & c\psi\, s\theta\, c\phi + s\psi\, s\phi \\ s\psi\, c\theta & s\psi\, s\theta\, s\phi + c\psi\, c\phi & s\psi\, s\theta\, c\phi - c\psi\, s\phi \\ -s\theta & c\theta\, s\phi & c\theta\, c\phi \end{bmatrix} \tag{2}$$

In this paper, the lidar coordinate axes are by default parallel to the ENU coordinate axes, so the polar coordinates $(r_i, \xi_i)$ of the lidar scanning data are transferred to the ENU coordinate system as

$$Point_i = P_0 + \begin{bmatrix} r_i\, c\,\xi_i \\ r_i\, s\,\xi_i \end{bmatrix} \tag{3}$$

where $Point_i$ represents the coordinates of the i-th scanning point of the lidar in the ENU coordinate system and $P_0$ represents the current position of the lidar in the ENU coordinate system. The position coordinates obtained by the scan-matching algorithm are then in the ENU coordinate system.

From the above, GPS and lidar can provide the UAV position, while the IMU provides the attitude angle, acceleration, and angular velocity of the UAV. To represent the state of the UAV, the state vector is defined as

$$x = \left[P_x, P_y, P_z, V_x, V_y, V_z, \omega_x, \omega_y, \omega_z, a_x, a_y, a_z\right]^T \tag{4}$$

where $(P_x, P_y, P_z)$ are the positions of the UAV in the ENU coordinate system, $(V_x, V_y, V_z)$ are the velocities of the UAV in the ENU coordinate system, and $(\omega_x, \omega_y, \omega_z)$ and $(a_x, a_y, a_z)$ are the angular velocity and acceleration of the IMU. The conversion relationship is

$$\omega_{enu} = T_I^B\, \omega_{IMU} \tag{5}$$

where $T_I^B$ is the transformation matrix from the angular velocity in the body coordinate system to the angular velocity in the geographical coordinate system. With $P_k$ and $V_k$ the position and velocity at time k, the position and velocity at time k + 1 are

$$P_{k+1} = P_k + V_k\, t + \tfrac{1}{2}\, a_{enu}\, t^2 \tag{6}$$
$$V_{k+1} = V_k + a_{enu}\, t \tag{7}$$

where t represents the amount of time that elapses from time k to time k + 1. Therefore, the state of the system can be pre-estimated with equations (5), (6), and (7).
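The state pre-estimation above (rotate the IMU acceleration into the ENU frame, then apply the constant-acceleration recursions (6) and (7)) can be sketched as follows. This is an illustrative sketch assuming the standard ZYX (yaw-pitch-roll) Euler convention; the function names are hypothetical.

```python
import numpy as np

def rotation_enu_from_body(yaw, pitch, roll):
    """ZYX Euler rotation matrix from the body (IMU) frame to ENU."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def predict_state(P_k, V_k, a_imu, yaw, pitch, roll, dt):
    """Constant-acceleration prediction of ENU position and velocity."""
    a_enu = rotation_enu_from_body(yaw, pitch, roll) @ a_imu
    P_next = P_k + V_k * dt + 0.5 * a_enu * dt**2   # equation (6)
    V_next = V_k + a_enu * dt                       # equation (7)
    return P_next, V_next
```

For zero attitude angles the rotation is the identity, so a pure body-frame acceleration passes straight through to the ENU prediction.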

V. MULTI-SLIDING WINDOW CLASSIFICATION ADAPTIVE UNSCENTED KALMAN FILTER MODEL
Compared with the extended Kalman filter (EKF), the unscented Kalman filter (UKF) removes the computation of cumbersome Jacobian matrices and retains at least third-order accuracy in approximating nonlinear functions [15]. This article uses the UKF as the basis for constructing a multi-sensor fusion UKF filter.

A. MULTI-SENSOR FUSION UKF MODEL BASED ON TIMESTAMP SORTING UPDATE (TSU)
Assume there are N sensors and that each datum obtained from a sensor carries a timestamp; the data are sorted by these timestamps. Fig. 2 shows the measurement-update sequence of the GPS, IMU, and lidar data. In the traditional multi-sensor data-fusion process, data fusion is performed at fixed time nodes. As the figure shows, here the data are not updated at fixed time nodes but according to the timestamps of the measurements. The fusion process can be understood as obtaining one datum, immediately fusing it into the filter, and then updating the entire system state, so the filter update frequency is the sum of all the sensors' update frequencies.
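The timestamp-sort update can be pictured as merging the per-sensor measurement streams into one time-ordered stream and feeding it to the filter one measurement at a time. A minimal sketch, assuming each stream is already ordered by its own timestamps (as sensor streams naturally are); `tsu_fusion` and the tuple layout are hypothetical names, not the paper's code.

```python
import heapq

def tsu_fusion(streams, filter_update):
    """Merge per-sensor streams of (stamp, sensor, z) tuples, each already
    sorted by timestamp, and apply one filter update per measurement in
    global timestamp order -- no cross-sensor time synchronization needed."""
    merged = heapq.merge(*streams, key=lambda m: m[0])
    for stamp, sensor, z in merged:
        filter_update(stamp, sensor, z)
```

Because `heapq.merge` is lazy, measurements can be consumed as they arrive, and the filter update rate is simply the sum of the sensor rates, as described above.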
For multi-sensor fusion, consider the nonlinear process and measurement models

$$x_k = f(x_{k-1}) + w_{k-1} \tag{8}$$
$$z_k^s = h^s(x_k) + v_k^s \tag{9}$$

where $x_k \in \mathbb{R}^n$ is the state vector of the system at time step k, $z_k^s \in \mathbb{R}^{m(s)}$ is the measurement vector of the s-th sensor at time step k, and m(s) is the dimension of the s-th sensor. $w_k \in \mathbb{R}^n$ and $v_k^s \in \mathbb{R}^{m(s)}$ represent the additive process noise and the measurement noise of the s-th sensor, respectively. f(·) is the process model and $h^s(\cdot)$ is the nonlinear measurement model of the s-th sensor. The process noise $w_k$ and the measurement noise $v_k^s$ are uncorrelated time-varying white Gaussian noises, and the measurement noises of different sensors are mutually uncorrelated. Their statistics satisfy

$$E[w_k] = q_k,\quad \mathrm{Cov}[w_k, w_j] = Q_k \delta_{kj},\quad E[v_k^s] = r_k^s,\quad \mathrm{Cov}[v_k^s, v_j^s] = R_k^s \delta_{kj},\quad \mathrm{Cov}[w_k, v_j^s] = 0 \tag{10}$$

where $Q_k$ is a non-negative definite matrix and $R_k^s$ is a positive definite matrix. For the nonlinear system described by equation (9), the multi-sensor UKF filtering process when the noise of each sensor is known is as follows.

1) TIME UPDATE (PREDICTION STEP)
For each time step k = 1, 2, …, we first calculate the sigma points and then apply the time-update equations. The sigma points are calculated as

$$\xi_{0,k-1} = \hat{x}_{k-1},\quad \xi_{i,k-1} = \hat{x}_{k-1} + \left(\sqrt{(n+\lambda)P_{k-1}}\right)_i,\quad \xi_{i+n,k-1} = \hat{x}_{k-1} - \left(\sqrt{(n+\lambda)P_{k-1}}\right)_i,\quad i = 1, \ldots, n \tag{11}$$

where ξ is a sigma point and λ is the adjustable parameter that controls the distribution of the sigma points around $\hat{x}_{k-1}$. $\left(\sqrt{(n+\lambda)P_{k-1}}\right)_i$ is the i-th column of the matrix square root of $(n+\lambda)P_{k-1}$, x is the system state, and P is the error covariance matrix. The mean and covariance weights are

$$W_0^m = \frac{\lambda}{n+\lambda},\qquad W_0^c = \frac{\lambda}{n+\lambda} + \left(1 - \alpha^2 + \beta\right),\qquad W_i^m = W_i^c = \frac{1}{2(n+\lambda)},\quad i = 1, \ldots, 2n \tag{12}$$

The sigma points are transformed by the process model:

$$\xi_{i,k|k-1} = f(\xi_{i,k-1}) \tag{13}$$
$$\hat{x}_{k|k-1} = \sum_{i=0}^{2n} W_i^m\, \xi_{i,k|k-1} + q_k \tag{14}$$
$$P_{k|k-1} = \sum_{i=0}^{2n} W_i^c \left(\xi_{i,k|k-1} - \hat{x}_{k|k-1}\right)\left(\xi_{i,k|k-1} - \hat{x}_{k|k-1}\right)^T + Q_k \tag{15}$$

where $q_k$ and $Q_k$ are the process noise mean and the process covariance, respectively.
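The sigma-point generation and time update can be sketched as follows. This is a simplified illustration that uses a single weight set for both the mean and the covariance (the full unscented transform distinguishes $W_i^m$ from $W_i^c$ via α and β); function names are hypothetical.

```python
import numpy as np

def sigma_points(x, P, lam):
    """Generate the 2n+1 sigma points around state x with covariance P."""
    n = x.size
    S = np.linalg.cholesky((n + lam) * P)  # matrix square root of (n+lam)P
    pts = [x]
    for i in range(n):
        pts.append(x + S[:, i])
        pts.append(x - S[:, i])
    return np.array(pts)

def time_update(x, P, f, q, Q, lam):
    """Unscented time update: propagate sigma points through the process
    model f, then re-form the predicted mean and covariance."""
    n = x.size
    pts = np.array([f(p) for p in sigma_points(x, P, lam)])
    w = np.full(2 * n + 1, 0.5 / (n + lam))
    w[0] = lam / (n + lam)
    x_pred = w @ pts + q                       # predicted mean plus noise mean
    diff = pts - x_pred
    P_pred = (w[:, None] * diff).T @ diff + Q  # predicted covariance plus Q
    return x_pred, P_pred
```

For a linear process model the unscented transform is exact, so with f equal to the identity the prediction reproduces the prior mean and returns P + Q, which is a convenient sanity check.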

2) TIMESTAMP SORTING UPDATE (CORRECTION STEP)
For each available measurement $z_k^s = z^s(t_k)$ (s = 1, 2, …, N) at time instant $t = t_k$, we can update the state estimate $\hat{x}_k$ and the error covariance matrix $P_k$. There may be multiple measurements at the same time, and the filter fuses them in the order in which they are stored. The sigma points transformed by the measurement model $h^s(\cdot)$ are

$$Z_{i,k|k-1}^s = h^s(\xi_{i,k|k-1}) \tag{16}$$

Calculate the measurement prediction and its covariance:

$$\hat{z}_{k|k-1}^s = \sum_{i=0}^{2n} W_i^m\, Z_{i,k|k-1}^s + r_k^s \tag{17}$$
$$P_{\hat{z}_{k|k-1}^s} = \sum_{i=0}^{2n} W_i^c \left(Z_{i,k|k-1}^s - \hat{z}_{k|k-1}^s\right)\left(Z_{i,k|k-1}^s - \hat{z}_{k|k-1}^s\right)^T + R_k^s \tag{18}$$

Calculate the covariance between the predicted state and the predicted measurement:

$$P_{\hat{x}_{k|k-1}\hat{z}_{k|k-1}^s} = \sum_{i=0}^{2n} W_i^c \left(\xi_{i,k|k-1} - \hat{x}_{k|k-1}\right)\left(Z_{i,k|k-1}^s - \hat{z}_{k|k-1}^s\right)^T \tag{19}$$

Determine the Kalman gain matrix:

$$K_k^s = P_{\hat{x}_{k|k-1}\hat{z}_{k|k-1}^s}\, P_{\hat{z}_{k|k-1}^s}^{-1} \tag{20}$$

Update the state estimate $\hat{x}_k$ and its covariance $P_k$:

$$\hat{x}_k = \hat{x}_{k|k-1} + K_k^s \left(z_k^s - \hat{z}_{k|k-1}^s\right) \tag{21}$$
$$P_k = P_{k|k-1} - K_k^s\, P_{\hat{z}_{k|k-1}^s}\, (K_k^s)^T \tag{22}$$

After the last measurement-update loop, the state and covariance estimates at time $t = t_k$ are given by $\hat{x}_k$ and $P_k$. These results become the initial condition for the next time-update step.
In the above steps, $r_k^s$ and $R_k^s$ respectively represent the measurement noise and measurement covariance of the s-th sensor at time step k. $\hat{z}_{k|k-1}^s$ and $P_{\hat{z}_{k|k-1}^s}$ respectively represent the measurement prediction and its covariance for the s-th sensor at time step k. $P_{\hat{x}_{k|k-1}\hat{z}_{k|k-1}^s}$ and $K_k^s$ respectively represent the cross covariance and Kalman gain of the s-th sensor at time step k. In equation (21), $z_k^s$ is the measurement value of the s-th sensor. According to the specific type of measured data, the corresponding measurement-noise and measurement-covariance terms used in the filtering process can be selected.
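The correction step for a single sensor's measurement can be sketched as follows, using the same simplified single weight set as the time-update sketch; `measurement_update` and its argument layout are illustrative assumptions, not the paper's code.

```python
import numpy as np

def measurement_update(x_pred, P_pred, pts, z, h, r, R, w):
    """Unscented correction with one sensor's measurement z.
    pts are the sigma points already propagated through the process model;
    h, r, R are that sensor's measurement model, noise mean, and covariance."""
    Z = np.array([h(p) for p in pts])    # sigma points through h^s
    z_pred = w @ Z + r                   # predicted measurement
    dz = Z - z_pred
    dx = pts - x_pred
    P_zz = (w[:, None] * dz).T @ dz + R  # innovation covariance
    P_xz = (w[:, None] * dx).T @ dz      # state-measurement cross covariance
    K = P_xz @ np.linalg.inv(P_zz)       # Kalman gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ P_zz @ K.T
    return x_new, P_new
```

Calling this once per measurement, in timestamp order, is exactly the TSU loop: each sensor datum triggers one lightweight correction instead of a joint update over all sensors.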

B. MULTI-WINDOW ADAPTIVE UKF MODEL
In many cases, the system noise and measurement noise are unknown and must therefore be estimated. According to the nonlinear system described by equation (9), it is easy to see that

$$w_{k-1} = x_k - f(x_{k-1}), \qquad v_k^s = z_k^s - h^s(x_k) \tag{23}$$

However, since the true state $x_k$ cannot be observed during the filtering process, equation (23) cannot be applied directly to estimate the system noise statistics. Consider a time window of width W, and assume that the system noise statistics are constant or vary very little within this window.
In equation (23), substituting the estimated value $\hat{x}_k$ or the predicted value $\hat{x}_{k|k-1}$ for $x_k$ and calculating the expected value with the sliding-window approximation yields the following suboptimal estimator:

$$\hat{q}_k = \frac{1}{W}\sum_{j=0}^{W-1}\left[\hat{x}_{k-j} - f(\hat{x}_{k-1-j})\right], \qquad \hat{r}_k^s = \frac{1}{W}\sum_{j=0}^{W-1}\left[z_{k-j}^s - h^s(\hat{x}_{k-j|k-j-1})\right] \tag{24}$$

The size of W should be moderate; W is set to 10 in this paper. An IMU window then spans 100 ms, a lidar window 1 s, and a GPS window 10 s. For the IMU and lidar, the noise can be considered constant within a window. Because the GPS window is too long, it is discussed separately in Section V-C.
Here $f(\hat{x}_{k-1-j})$ represents the posterior mean obtained by transforming the state estimate $\hat{x}_{k-1-j}$ through the nonlinear function f(·). For the KF, the exact value of $f(\hat{x}_{k-1-j})$ can be obtained by the state-transition matrix; for the nonlinear UKF, however, it can only be obtained by the unscented transform with third-order Taylor accuracy [14], that is,

$$f(\hat{x}_{k-1-j}) \approx \sum_{i=0}^{2n} W_i^m\, f(\xi_{i,k-1-j}) \tag{25}$$

Similarly, $h^s(\hat{x}_{k-j|k-j-1})$ can only be obtained by a third-order Taylor approximation, i.e.,

$$h^s(\hat{x}_{k-j|k-j-1}) \approx \sum_{i=0}^{2n} W_i^m\, h^s(\xi_{i,k-j|k-j-1}) \tag{26}$$

In literature [6], the noise statistical estimator (24) has been proved to be an unbiased estimate of $q_k$ and $r_k^s$, but the corresponding estimates of $Q_k$ and $R_k^s$ are biased. Unbiased estimation formulas can, however, be obtained by correcting the bias:

$$\hat{R}_k^s = \frac{1}{W}\sum_{j=0}^{W-1}\tilde{z}_{k-j}\tilde{z}_{k-j}^T - \sum_{i=0}^{2n} W_i^c\left(Z_{i,k|k-1}^s - \hat{z}_{k|k-1}^s\right)\left(Z_{i,k|k-1}^s - \hat{z}_{k|k-1}^s\right)^T \tag{27}$$

$$\hat{Q}_k = \frac{1}{W}\sum_{j=0}^{W-1}\tilde{x}_{k-j}\tilde{x}_{k-j}^T + P_k - \sum_{i=0}^{2n} W_i^c\left(\xi_{i,k|k-1} - \hat{x}_{k|k-1}\right)\left(\xi_{i,k|k-1} - \hat{x}_{k|k-1}\right)^T \tag{28}$$

Here $\tilde{z}_{k-j} = z_{k-j}^s - \hat{z}_{k-j|k-j-1}^s$ is the innovation and $\tilde{x}_{k-j} = \hat{x}_{k-j} - \hat{x}_{k-j|k-j-1}$ is the state correction. For a window width of W, the W measurements are made in the short interval $(t_{k-s(W)}, t_k)$. Because the system noise is constant or changes little within the statistics window W, there is

$$q_k \approx q_{k-1} \approx \cdots \approx q_{k-W+1}, \qquad r_k^s \approx r_{k-1}^s \approx \cdots \approx r_{k-W+1}^s \tag{29}$$

Therefore, N sliding windows are needed for the N sensors, and their measurement noise is estimated through these N sliding windows.
$\hat{r}_k^s$ and $\hat{R}_k^s$ are the estimates of the measurement noise mean and the measurement covariance of the s-th sensor's data.
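The per-sensor sliding windows can be sketched as follows. For brevity only the windowed mean of the innovations is shown, i.e. the estimate of $r_k^s$ in equation (24); the class name and interface are illustrative assumptions.

```python
import numpy as np
from collections import deque

class WindowNoiseEstimator:
    """One sliding window per sensor; the windowed innovation mean
    estimates that sensor's measurement-noise mean r^s_k (W = 10
    in the paper)."""
    def __init__(self, W=10):
        self.W = W
        self.windows = {}  # sensor name -> deque of recent innovations

    def update(self, sensor, innovation):
        win = self.windows.setdefault(sensor, deque(maxlen=self.W))
        win.append(np.asarray(innovation, dtype=float))
        return np.mean(win, axis=0)  # current estimate of r^s_k
```

Because each sensor owns its own deque, the filter can estimate the noise of all N sensors simultaneously, which is the point of the multi-window design.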

C. TIME-VARYING NOISE PROCESSING
In the multi-sliding-window noise-estimation UKF, the true value is replaced by the optimal estimate when estimating the measurement noise and process noise. The premise of this method is that, within the set window, the elapsed time is short enough for equation (29) to hold. In this paper, multiple windows are set for multi-sensor noise estimation; each window has the same width but spans a different amount of time, so some windows fail to satisfy equation (29). This leads to inaccurate estimates of the process noise and measurement noise, which eventually causes the filter to diverge. In this paper, sensors are therefore classified according to whether their measurement errors accumulate over time. For example, the displacement predicted from the IMU accumulates error over time, whereas GPS measurement error does not grow over time. When GPS and IMU data are fused, the sliding-window estimate of the IMU noise is comparatively accurate, but because a GPS window of width W spans W seconds, the GPS noise estimate is not. As shown in Fig. 3, the solid line represents the filtered trajectory and the dotted line the real trajectory. The green circle represents the magnitude of the process covariance $\hat{Q}_k$; the process noise $\hat{q}_k$ becomes very small. The yellow circle indicates the actual measurement covariance of the GPS sensor; $\hat{R}_k^{GPS}$ is the statistical measurement covariance of the GPS sensor and does not change much with time. As time increases, the filtered trajectory (solid arrow) drifts more and more, but the statistical process noise and process covariance change little, while the statistical GPS measurement error $\hat{r}_k^{GPS}$ grows larger and larger. Data-anomaly capture is therefore performed on the noise statistic $\hat{r}_k^{GPS}$. Based on experience, the upper limit of the GPS positioning noise is set to $p_r^{GPS}$.
When $\hat{r}_k^{GPS} > p_r^{GPS}$, the data are considered biased. When this occurs, the corresponding process noise and process covariance need to be adjusted.
In this adjustment, $t_r^{GPS}$ represents the time interval between two GPS measurements and $t_q$ represents the average time interval between filter updates. Because the process noise is closely related to the process covariance and the state quantities, the above method allows the filter to converge quickly to the global true value.
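The anomaly-capture logic can be sketched as follows. The exact adjustment formulas are given by the paper's equations (not reproduced here), so the proportional scaling by the update-interval ratio $t_r^{GPS}/t_q$ is an assumption made purely for illustration; the function name is hypothetical.

```python
def adjust_on_gps_anomaly(r_gps_hat, p_r_gps, q, Q, t_r_gps, t_q):
    """Illustrative anomaly capture: when the windowed GPS noise estimate
    exceeds the empirical bound p_r_gps, inflate the process noise mean q
    and process covariance Q in proportion to the ratio between the GPS
    measurement interval and the average filter-update interval.
    NOTE: this scaling rule is an assumption, not the paper's formula."""
    if r_gps_hat > p_r_gps:
        ratio = t_r_gps / t_q
        return q * ratio, Q * ratio
    return q, Q
```

Inflating Q when the GPS statistic is flagged lets the filter weight fresh measurements more heavily and pull the drifting state back toward the true trajectory.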

VI. PERFORMANCE ANALYSIS OF FILTERING METHOD

A. FILTER METHOD ACCURACY ANALYSIS
For the filtering method proposed in this paper, the covariance analysis method is used for accuracy analysis [36]. Assume that the state dimension of all sensors' data is n, the observation dimension is m, and the total number of sensors is N; let the time elapsed during the filter-fusion process be T, and assume the filter has converged within this time T.

Theorem 1: Under the above conditions, distributed filter fusion and TSU filter fusion have the same fusion accuracy.
Proof: According to the simplified distributed filtering process and TSU filtering process shown in the Appendix, let the process covariance of the TSU filtering process be $P_T^t$ and the process covariance of the distributed filtering process be $P_T^d$.
When N ≥ 2, the distributed filtering process has a fixed update time, and there are C update processes within time T. The covariance after time T is given by equation (37). Within time T, the number of updates of the s-th sensor in the TSU filter is C(s); with N sensors, the covariance after time T is given by equation (38), where T/C(s) represents the sampling step. For the TSU filtering process, this step size can change at any time; the subscript T/C(s) − 1 denotes only the last update, not the value T/C(s) minus 1.
In the case of filter convergence [41], the final process covariance stabilizes as long as the number of iterations is sufficient, that is,

$$\lim_{C\to\infty} P_T^t = \lim_{C\to\infty} P_T^d = P_\infty \tag{39}$$

so that

$$P_T^t = P_T^d \tag{40}$$

Therefore, when the system converges, distributed filter fusion and TSU filter fusion have the same filtering accuracy. From the above, Theorem 1 is proved.

B. ANALYSIS OF CALCULATION OF FILTERING FRAMEWORK
The computational analysis here concerns the cost introduced by the filtering-fusion method. The operation counts are derived from the filtering processes shown in the Appendix according to the following guidelines: 1) operations are grouped into four types: addition, assignment, multiplication, and division; 2) one assignment operation is counted as one addition operation; 3) matrix operations are counted as operations between matrix elements. Table 1 gives the operation-count formulas for a single filter-fusion update. It can be seen from the table that when N = 1, the computational costs of the three filtering frameworks are equal. When N ≥ 2, the cost of the TSU method is less than that of the distributed method and does not grow with the number of sensors. To quantify the advantage of this algorithm, two different sets of values of m, n, and N are considered here; the results are shown in Table 2 and Table 3.
From the operation counts of the filtering frameworks given in Table 2 and Table 3, compared with the distributed filtering method, the filtering method in this paper reduces the total computation by 62.7% and 82.3%, respectively. Table 1 further shows that as the number of sensors, the state dimension, and the observation dimension increase, the computational advantage of the proposed algorithm becomes more and more obvious.

VII. EXPERIMENT AND ANALYSIS

A. SIMULATION
The simulation experiment is carried out on the Gazebo [33] simulation platform under Linux. The UAV is simulated with the open-source package Hector_quadrotor [34]. The noise and drift of the simulated sensors are set as shown in Table 4.
We build the simulation scenario shown in Fig. 4 in the Gazebo environment and place cuboid obstacles to support the simulated 2D lidar positioning [37]. The red track is the preset track of the UAV. The UAV takes off at the upper right corner of the red rectangular trajectory and flies counterclockwise.
In order to verify the performance of the adaptive filtering proposed in this paper, experiments were performed with the GPS and IMU sensors. The proposed MWCAUKF, a multi-sliding-window classification adaptive unscented Kalman filter based on the timestamp sort update, is compared with the standard EKF, the standard UKF, and an improved Sage-Husa adaptive unscented Kalman filter (SHAUKF). The simulated trajectories are compared in Fig. 5. From the positioning trajectories in Fig. 5, MWCAUKF is obviously closer to the true trajectory than the standard UKF and EKF. Compared with the improved SHAUKF, however, the difference is hard to judge from the trajectory alone; Fig. 6 and Fig. 7 therefore further compare the position error and the velocity error.
From Fig. 6 and Fig. 7, it can be seen that MWCAUKF has smaller position and velocity errors than the standard UKF and EKF. In addition, the MWCAUKF algorithm shows smaller trajectory-error fluctuations than SHAUKF, EKF, and UKF, indicating better performance in tracking the real data. Therefore, compared with the standard EKF, UKF, and SHAUKF, our method has higher fusion accuracy, smaller volatility, and converges quickly to the true value. Specific experimental data are shown in Table 5. From Table 5, when the noise is unknown, the average position and velocity errors of the proposed algorithm are better than those of the SHAUKF adaptive filtering algorithm, while the standard UKF and EKF show larger errors than SHAUKF. Therefore, compared with EKF, UKF, and SHAUKF, the proposed algorithm has the smallest average position and velocity errors and the highest fusion accuracy.
The algorithm design in this paper is aimed at multi-sensor data fusion. In order to reflect the applicability and effectiveness of the algorithm for multiple sensors, lidar data fusion is added. The positioning trajectory is shown in Fig. 8.
It can be seen from the experimental trajectory of Fig. 8 that with the addition of a sensor, the coincidence between the positioning trajectory and the real trajectory increases. The average error of the X-axis velocity is 0.0715 m/s, the average error of the Y-axis velocity is 0.1173 m/s, the average position error of the X-axis is 0.0713 m, and that of the Y-axis is 0.0653 m. Thus, when a lidar sensor is added, the positioning accuracy improves significantly, indicating that the algorithm in this paper is effective for multi-sensor fusion.

B. REAL EXPERIMENT
In order to verify the system model, the adaptive Kalman filter based on multi-sensor fusion is applied to a real UAV positioning system, which requires related hardware and software environments. In the hardware environment, the flight platform is DJI's M100 quad-rotor drone, whose onboard IMU module provides the IMU data; the lidar is an RPLIDAR-A2 from Silan Technology; a Raspberry Pi serves as the data-fusion processor; and the GPS receiver is a Ublox8030. The software environment is consistent with the simulation, and the program is debugged under ROS (Robot Operating System).
The algorithm proposed in this paper is applied to the positioning system of a real UAV, and experiments are carried out in relatively open outdoor scenes. Since DJI's fused positioning data are relatively accurate [35], the GPS data of the M100 drone are used as the reference for comparing the positioning results. The experimental trajectories are compared in Fig. 10.
It can be seen from Fig. 10 that the UAV flight distance is about 100 m. Compared with the GPS positioning data of the M100, all experimental trajectories exhibit the same offset. This is a coordinate offset caused by different positioning satellite systems [38]: DJI's receiver uses the Global Positioning System (GPS), while the Ublox 8030 uses the BeiDou Navigation Satellite System (BDS). Since the experiment only compares the proposed algorithm against UKF, EKF, and SHAUKF, this offset can be ignored. As Fig. 10 shows, among the four filtering algorithms, the trajectory obtained by the proposed algorithm is the closest to the M100 data trajectory. Fig. 11 and Fig. 12 also illustrate the excellent performance of the MWCAUKF algorithm: its position and velocity are closer to DJI's own GPS data, and compared with the other three filtering algorithms it has less volatility and better tracking of the real data. Table 6 shows the actual experimental data, given as average errors relative to DJI's GPS data. These data show that the MWCAUKF algorithm is still applicable in real scenarios: compared with DJI's GPS positioning data, the average positioning error is 0.4407 m on the X-axis and 1.0433 m on the Y-axis, and MWCAUKF achieves higher positioning accuracy than SHAUKF. Overall, the MWCAUKF algorithm has an accuracy advantage in both the position and the velocity of the UAV. Combining Fig. 11, Fig. 12, and Table 6, MWCAUKF achieves higher fusion accuracy than UKF, EKF, and SHAUKF, tracks the true data values more closely, shows lower volatility in position and velocity, and is the more stable fusion algorithm. When the UAV is performing a task, however, some sensor data may become invalid.
GPS reception may be weak or the satellite signal may be lost entirely, and lidar can only work in the presence of obstacles. To examine the fault-tolerance performance of the filter, experiments were conducted on the roof of the school's experimental building. The continuity of the experimental trajectory verifies the fault tolerance and stability of the proposed filtering framework for fusing multiple sensor data. The experimental trajectory is overlaid on the actual satellite image in Fig. 13.
The roof of the building in the yellow box in Fig. 13(a) is shown in Fig. 13(b). The red trace in Fig. 13(a) is the actual trajectory; as can be seen from the figure, the positioning trajectory coincides with the actual roof corridor. During the experiment, the UAV flew around the corridor on the roof of the building; the specific trajectory is shown in Fig. 14. As the figure shows, in areas A and B the GPS positioning is degraded by the adjacent buildings and is therefore inaccurate. This paper solves the problem by fusing data from the lidar. Similarly, when the lidar scan data are insufficient to complete the positioning of the UAV, GPS positioning can serve as compensation to keep the positioning stable. The experiments prove that the proposed filtering framework can tolerate the failure of an individual sensor and maintain stable positioning.
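The fault-tolerance behavior described above, in which the filter keeps operating on whichever sensors still deliver valid data, can be sketched as follows (a minimal illustration with hypothetical names, not the paper's implementation):

```python
import math

def is_valid(measurement):
    """Reject missing or non-finite sensor readings (e.g. GPS dropout,
    lidar scans with no obstacles in range)."""
    return measurement is not None and all(math.isfinite(v) for v in measurement)

def fuse_step(filter_update, measurements):
    """Apply the measurement update only for sensors that delivered valid data.

    filter_update: callable(sensor_id, z) performing one per-sensor update.
    measurements:  dict mapping sensor_id -> reading, or None if dropped out.
    Returns the list of sensors actually fused this step; if a sensor fails,
    the remaining ones keep the positioning continuous.
    """
    fused = []
    for sensor_id, z in measurements.items():
        if is_valid(z):
            filter_update(sensor_id, z)
            fused.append(sensor_id)
    return fused
```

For example, a step where GPS has dropped out and the IMU reading is corrupted fuses only the lidar measurement, mirroring the compensation behavior observed in areas A and B of Fig. 14.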

VIII. CONCLUSION
It can be seen from the experimental data that the filtering method proposed in this paper has a lower computational cost than the distributed filtering algorithm. The method requires no strict time synchronization of the sensor data, and the sensors' measurements can be fused easily regardless of their update rates. Compared with the standard UKF, EKF, and the Sage-Husa adaptive unscented Kalman filter, the proposed algorithm has higher accuracy and better stability: it tracks real data well and keeps the filter from diverging easily. Based on the above analysis and experiments, the filtering framework proposed in this paper can effectively fuse a variety of sensor data and provides a strong guarantee for the autonomous positioning of the UAV.
The algorithm proposed in this paper can effectively fuse multi-sensor data, but as the number of available sensors decreases, the positioning accuracy also decreases. When only the IMU remains available, the cumulative error of the inexpensive IMU used here grows over time. How to effectively fuse such high-error data is the focus of future work.

MULTI-SENSOR FILTER FUSION METHOD
Assuming there are N sensors, the discrete forms of the target state equation and the sensor measurement equations are
$$x_{k+1} = F_k x_k + w_k,$$
$$z_k^i = H_k^i x_k + v_k^i, \quad i = 1, \dots, N,$$
where $F_k$ is the state transition matrix, $w_k$ is the process noise, $H_k^i$ is the measurement matrix of the $i$-th sensor, $z_k^i$ is the observation value of the $i$-th sensor, and $v_k^i$ is the measurement error of the $i$-th sensor.
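As a minimal sketch of this discrete model, the following example instantiates $F_k$ and two sensor measurement matrices for a one-dimensional constant-velocity state (the matrix values and the two-sensor split are illustrative choices, not the paper's):

```python
import numpy as np

dt = 0.1
# State x = [position, velocity]; constant-velocity transition F_k.
F = np.array([[1.0, dt],
              [0.0, 1.0]])
# Two sensors with different measurement matrices H_k^i:
H = [np.array([[1.0, 0.0]]),   # sensor 1 observes position only
     np.array([[0.0, 1.0]])]   # sensor 2 observes velocity only

x = np.array([0.0, 1.0])       # initial state: at origin, moving at 1 m/s
x_next = F @ x                 # x_{k+1} = F_k x_k  (noise terms omitted)
z = [Hi @ x_next for Hi in H]  # z^i = H^i x at the new step (noise-free)
```

Each sensor's observation is a low-dimensional projection of the same state, which is what the per-sensor fusion below exploits.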

A. DISTRIBUTED FILTERING PROCESS
From [27], the distributed filtering process can be obtained as follows.
State prediction:
$$\hat{x}_{k|k-1} = F_{k-1}\,\hat{x}_{k-1|k-1}, \qquad P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^T + Q_{k-1}.$$
Measurement vector update process, carried out for each sensor $i = 1, \dots, N$:
$$K_k^i = P_{k|k-1} (H_k^i)^T \left( H_k^i P_{k|k-1} (H_k^i)^T + R_k^i \right)^{-1},$$
$$\hat{x}_{k|k}^i = \hat{x}_{k|k-1} + K_k^i \left( z_k^i - H_k^i \hat{x}_{k|k-1} \right),$$
$$P_{k|k}^i = \left( I - K_k^i H_k^i \right) P_{k|k-1}.$$
Obviously, the matrix operation dimension of each update does not increase with the number of sensors.
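The key property, that each sensor's measurement is absorbed with its own small gain computation so no matrix inverse grows with the number of sensors, can be sketched as a sequential per-sensor update (an illustrative form, not the paper's exact distributed equations):

```python
import numpy as np

def sequential_update(x, P, sensors):
    """Fold each sensor's measurement into the state estimate in turn.

    x, P:    predicted state mean and covariance.
    sensors: iterable of (H_i, R_i, z_i) per sensor, where H_i is the
             measurement matrix, R_i the measurement-noise covariance,
             and z_i the observation.
    Every inverse is taken at the dimension of one sensor's measurement,
    so adding sensors never enlarges any single matrix operation.
    """
    for H, R, z in sensors:
        S = H @ P @ H.T + R                # innovation covariance, per sensor
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain for this sensor alone
        x = x + K @ (z - H @ x)            # state update with sensor i
        P = (np.eye(len(x)) - K @ H) @ P   # covariance update
    return x, P
```

For a single position sensor with unit noise, a measurement of 2 m from a prior at the origin with unit covariance moves the estimate halfway toward the measurement, the familiar scalar Kalman gain of 0.5.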

B. SIMPLIFIED TSU FILTERING PROCESS
State prediction: