Abstract:
In autonomous driving, the perception module typically combines millimeter-wave radar and LiDAR. In challenging environmental conditions, however, this sensor combination fails to effectively acquire geometric shape information about the surrounding environment. We therefore propose an alternative perception approach that employs Synthetic Aperture Radar (SAR). Existing SAR classification algorithms, however, rely heavily on large-scale datasets. In light of this, we propose a meta-learning framework, named Sample and Embedding Adaptive Network (Sea-Net), for few-shot SAR image object classification. Furthermore, because the semantics of SAR images differ from those of conventional optical images, data augmentation methods that are effective on optical images are less so on SAR images. Based on this observation, we introduce a self-adaptive augmentation algorithm centered on the target domain, which augments samples according to the semantic features of SAR images; the entire augmentation stage can be parallelized to speed up computation. Moreover, the SAR imaging principle produces coherent speckle noise with interlacing bright and dark patterns, so different classes of SAR images map to nearby regions of the embedding space. To address this issue, we propose an edge-ambiguous embedding correction based on the self-attention mechanism, which effectively increases the distance between different classes. Experimental results on the MSTAR dataset demonstrate that the proposed model outperforms existing methods.
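The abstract does not give the details of Sea-Net's embedding correction, but the idea of refining embeddings with self-attention and then classifying queries against class prototypes can be sketched minimally. The function names and the single-head attention with Q = K = V below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_refine(emb):
    """Refine embeddings with single-head self-attention (Q = K = V = emb).

    Each embedding becomes an attention-weighted mixture of all embeddings,
    which can pull together same-class samples blurred by speckle noise.
    """
    d = emb.shape[-1]
    scores = emb @ emb.T / np.sqrt(d)          # (n, n) similarity scores
    return softmax(scores, axis=-1) @ emb      # (n, d) refined embeddings

def prototype_classify(support, support_labels, query):
    """Nearest-prototype classification in the embedding space."""
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in classes])      # one prototype per class
    dists = np.linalg.norm(query[:, None, :] - protos[None], axis=-1)
    return classes[dists.argmin(axis=1)]
```

In a few-shot episode, the support-set embeddings would be refined first and the query compared against the resulting prototypes; the actual method additionally targets ambiguous samples near class boundaries.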
Published in: IEEE Transactions on Intelligent Vehicles (Early Access)