Abstract:
Semantic scene understanding, including the perception and classification of moving agents, is essential to enabling safe and robust driving behaviours of autonomous vehicles. Cameras and LiDARs are commonly used for semantic scene understanding. However, both sensor modalities face limitations in adverse weather and usually do not provide motion information. Radar sensors overcome these limitations and directly offer information about moving agents by measuring the Doppler velocity, but the measurements are comparatively sparse and noisy. In this letter, we address the problem of panoptic segmentation in sparse radar point clouds to enhance scene understanding. Our approach, called SemRaFiner, accounts for changing density in sparse radar point clouds and optimizes the feature extraction to improve accuracy. Furthermore, we propose an optimized training procedure to refine instance assignments by incorporating a dedicated data augmentation. Our experiments suggest that our approach outperforms state-of-the-art methods for radar-based panoptic segmentation.
Published in: IEEE Robotics and Automation Letters ( Volume: 10, Issue: 2, February 2025)
CARIAD SE, Wolfsburg, Germany
Center for Robotics, University of Bonn, Bonn, Germany
Center for Robotics, University of Bonn, Bonn, Germany
CARIAD SE, Wolfsburg, Germany
Center for Robotics, University of Bonn, Bonn, Germany
Lamarr Institute for Machine Learning and Artificial Intelligence, Germany