I. Introduction
Light detection and ranging (LiDAR) is an optical sensing technology that measures distances by scanning objects with laser pulses. Raw LiDAR data are a set of points on objects’ surfaces, commonly called a 3-D point cloud. From such point clouds, various information about an object, including its position, size, and orientation, can be obtained with high accuracy. Hence, LiDARs are increasingly being deployed on the roadside to monitor traffic in detail and facilitate the development of connected and automated transportation systems (CATSs). Specifically, they can continuously collect detailed vehicle trajectories in a timely manner, forming a vital foundation for real-time traffic control in CATSs, such as vehicle collision warnings, lane-dependent variable speed limits, and trajectory control of connected and automated vehicles (CAVs). However, 3-D point clouds contain a large number of background points beyond road users (i.e., vehicles and pedestrians), such as the ground, buildings, and trees [1], [2]. The computational load of object detection and tracking can be hefty if these background points are processed repeatedly in every frame [3]. Therefore, background filtering is essential for real-time trajectory collection by roadside LiDARs [1], [4].
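As a rough illustration of the idea (not the method proposed in this paper, and with hypothetical parameter values), background filtering for a stationary roadside LiDAR can be sketched as a voxel-occupancy test: voxels occupied in most training frames are assumed to be static background, and points falling into them are discarded at run time.

```python
from typing import Dict, List, Set, Tuple

Point = Tuple[float, float, float]
Voxel = Tuple[int, int, int]

def voxel_key(p: Point, size: float) -> Voxel:
    """Map a 3-D point to its voxel index at the given cell size (meters)."""
    return (int(p[0] // size), int(p[1] // size), int(p[2] // size))

def build_background(frames: List[List[Point]], size: float = 0.5,
                     min_hits: int = 8) -> Set[Voxel]:
    """Voxels occupied in at least `min_hits` training frames are treated as
    static background (ground, buildings, trees). The cell size and hit
    threshold are illustrative assumptions, not values from the paper."""
    counts: Dict[Voxel, int] = {}
    for frame in frames:
        for v in {voxel_key(p, size) for p in frame}:
            counts[v] = counts.get(v, 0) + 1
    return {v for v, c in counts.items() if c >= min_hits}

def filter_background(frame: List[Point], background: Set[Voxel],
                      size: float = 0.5) -> List[Point]:
    """Keep only points whose voxel is not flagged as static background,
    so that detection and tracking process far fewer points per frame."""
    return [p for p in frame if voxel_key(p, size) not in background]
```

In this sketch, a point returned by a fixed wall in every training frame is learned as background, while a point from a passing vehicle survives filtering; real systems must additionally handle sensor noise, occlusion, and slowly changing scenes.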