I. Introduction
Depth sensors such as depth cameras and LiDAR are crucial for autonomous driving. They provide depth information to form 3D point clouds and serve as a key perceptual module for scene understanding. However, due to the sensing mechanism and the limited number of scan lines, LiDAR point clouds are usually sparse and density-imbalanced, which leads to a loss of geometric structure. In particular, as shown in Fig. 1(a), the farther an object is from the sensor, the sparser its point cloud becomes. This sparsity may also degrade performance in downstream tasks such as LiDAR-based 3D object detection, LiDAR mapping [1], [2], and LiDAR segmentation [3]. To alleviate these problems, point cloud upsampling has become a critical and pressing research topic.
Fig. 1. (a) A LiDAR-based scene point cloud [4]. (b) A single-object point cloud [5]. (c) An RGB scene image and (d) its sparse LiDAR point cloud. (e) and (f) are the upsampled results of (d) generated by PU-GCN [6] and the proposed LiUpNet, respectively.