
A RUGD Dataset for Autonomous Navigation and Visual Perception in Unstructured Outdoor Environments


Abstract:

Research in autonomous driving has benefited from a number of visual datasets collected from mobile platforms, leading to improved visual perception, greater scene understanding, and ultimately higher intelligence. However, this set of existing data collectively represents only highly structured, urban environments. Operation in unstructured environments, e.g., humanitarian assistance and disaster relief or off-road navigation, bears little resemblance to these existing data. To address this gap, we introduce the Robot Unstructured Ground Driving (RUGD) dataset with video sequences captured from a small, unmanned mobile robot traversing unstructured environments. Most notably, this data differs from existing autonomous driving benchmark data in that it contains significantly more terrain types, irregular class boundaries, minimal structured markings, and presents challenging visual properties often experienced in off-road navigation, e.g., blurred frames. Over 7,000 frames of pixel-wise annotation are included with this dataset, and we perform an initial benchmark using state-of-the-art semantic segmentation architectures to demonstrate the unique challenges this data introduces as it relates to navigation tasks.
Date of Conference: 03-08 November 2019
Date Added to IEEE Xplore: 28 January 2020
Conference Location: Macau, China
