I. Introduction
Recent advances in autonomous systems show great potential for smart mobility. Simultaneous localization and mapping (SLAM) [1] is fundamental for most autonomous systems. Three-dimensional (3D) light detection and ranging (LiDAR) provides dense 3D point clouds of the surrounding environment, which are widely utilized to provide positioning and mapping solutions [2] for autonomous systems. However, the performance of LiDAR-based odometry can be degraded by numerous dynamic objects [3] and structureless environments [4]. Visual odometry [5] is a popular technique for state estimation via feature matching, but its performance is sensitive to illumination conditions and the availability of features [6]. The global navigation satellite system (GNSS) provides absolute positioning services; unfortunately, its performance can degrade due to non-line-of-sight (NLOS) reception and multipath effects [7]. A single sensor can hardly meet the reliability requirements of navigation for autonomous vehicles (AVs); thus, multi-sensor integration has received significant attention because of its complementarity and redundancy.
Top: Illustration of the error map broadcast through the roadside unit (RSU). AV1 is equipped with a rich sensor suite (e.g., LiDAR, camera, GNSS, and high-end devices that provide ground-truth positioning) to evaluate the sensor errors periodically, while AV2 and AV3 are autonomous vehicles that receive the error map to aid their navigation with their available sensors. Bottom: RGB images collected during the day and at night.