Abstract:
Object detection and the subsequent perception of the environment surrounding a vehicle play a vital role in autonomous driving applications. Existing perception algorithms do not generalize well, since most are trained on datasets collected in well-structured driving environments. Before self-driving cars can be deployed on public roads, they must have a reliable and robust perception system that handles corner cases. This paper introduces a multimodal dataset, TIAND (TiHAN-IITH Autonomous Navigation Dataset), collected from structured and unstructured environments in and around the city of Hyderabad, India, as an aid to further research on the generalization of object detection algorithms. The sensor suite contains four cameras, six radars, one Lidar, a GPS, and an IMU. TIAND comprises 150 scenes, each spanning 2 to 4 minutes. We then present the performance of object detection models on the camera, radar, and Lidar data. Additionally, we offer insights into projecting data from Lidar to camera and from radar to camera.
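The Lidar-to-camera projection mentioned in the abstract is typically done with a rigid extrinsic transform followed by a pinhole intrinsic model. The sketch below illustrates that standard recipe only; the function name, matrix shapes, and calibration values are illustrative assumptions, not the paper's actual pipeline or TIAND's calibration.

```python
import numpy as np

def project_lidar_to_camera(points_lidar, T_cam_lidar, K):
    """Project Nx3 Lidar points into pixel coordinates.

    points_lidar: (N, 3) points in the Lidar frame.
    T_cam_lidar:  (4, 4) extrinsic transform, Lidar frame -> camera frame.
    K:            (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates for the M points in front of the
    camera, plus the boolean mask selecting those points.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])  # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]          # into camera frame
    in_front = pts_cam[:, 2] > 0                        # drop points behind camera
    uvw = (K @ pts_cam[in_front].T).T                   # onto image plane
    uv = uvw[:, :2] / uvw[:, 2:3]                       # perspective divide
    return uv, in_front
```

With an identity extrinsic and a toy intrinsic matrix, a point at (0, 0, 10) in front of the camera lands at the principal point, while points with negative depth are masked out. Radar-to-camera projection follows the same pattern with the radar's extrinsic transform, though radar returns are usually sparser and may lack reliable height.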
Published in: 2024 IEEE Intelligent Vehicles Symposium (IV)
Date of Conference: 02-05 June 2024
Date Added to IEEE Xplore: 15 July 2024