I. Introduction
LiDAR sensors, which capture high-resolution 3D point clouds (PCs) that provide shape and depth information for object detection, are crucial to the performance and safety of autonomous vehicles (AVs). LiDAR-based 3D object detection has achieved superior performance owing to advances in deep learning. However, deep neural networks are vulnerable to adversarial examples [1]-[8]. Prior studies have explored the security of LiDAR perception and introduced disappearing attacks [9]-[12] to investigate vulnerabilities in 3D object detection [13]-[16]. In such attacks, an adversary typically places manufactured adversarial objects around a target object to make it invisible to detectors, potentially causing severe safety incidents, e.g., collisions and casualties, as illustrated in Fig. 1.