Sensor modality fusion with CNNs for UGV autonomous driving in indoor environments


Abstract:

We present a novel end-to-end learning framework that enables ground vehicles to autonomously navigate unknown environments by fusing raw pixels from a camera and depth measurements from a LiDAR. A deep neural network architecture is introduced to effectively perform modality fusion and to reliably predict steering commands even in the presence of sensor failures. The proposed network is trained on our own dataset, collected with a LiDAR and a camera mounted on a UGV in an indoor corridor environment. A comprehensive experimental evaluation demonstrates that the proposed network is able to autonomously navigate the corridor environment robustly. Furthermore, we demonstrate that fusing the camera and LiDAR modalities provides benefits beyond robustness to sensor failures. Specifically, the multimodal fused system shows the potential to navigate around static and dynamic obstacles and to handle changes in environment geometry without being trained for these tasks.
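The abstract gives only a high-level view of the fusion architecture. As a rough illustration of the general idea (not the authors' actual network), below is a minimal PyTorch sketch of a two-branch fusion CNN: one convolutional branch over raw camera pixels, one over a planar LiDAR range scan, with the two embeddings concatenated and regressed to a single steering command. All layer sizes, the 120x160 image resolution, the 1080-beam scan width, and the name CameraLidarFusionNet are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CameraLidarFusionNet(nn.Module):
    """Illustrative two-branch CNN that fuses an RGB image and a planar
    LiDAR scan into a single steering-command prediction."""

    def __init__(self):
        super().__init__()
        # Camera branch: small convolutional stack over raw pixels.
        self.cam_branch = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),                      # -> 48 * 4 * 4 = 768 features
        )
        # LiDAR branch: 1-D convolutions over a planar range scan.
        self.lidar_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
            nn.Flatten(),                      # -> 32 * 8 = 256 features
        )
        # Fusion head: concatenate both embeddings, regress one steering value.
        self.head = nn.Sequential(
            nn.Linear(768 + 256, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, image, scan):
        fused = torch.cat([self.cam_branch(image), self.lidar_branch(scan)], dim=1)
        return self.head(fused)

# Example: a batch of 8 RGB frames (120x160) and 1080-beam range scans.
net = CameraLidarFusionNet()
steering = net(torch.randn(8, 3, 120, 160), torch.randn(8, 1, 1080))
print(steering.shape)  # torch.Size([8, 1])
```

One common way to obtain the robustness to sensor failures described above is to randomly blank one modality during training so the fusion head cannot rely on either branch alone; whether the authors use such a scheme is not stated in the abstract.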
Date of Conference: 24-28 September 2017
Date Added to IEEE Xplore: 14 December 2017
Electronic ISSN: 2153-0866
Conference Location: Vancouver, BC, Canada

