Fast road classification and orientation estimation using omni-view images and neural networks

5 Author(s)
Zhigang Zhu ; Dept. of Comput. Sci., Tsinghua Univ., Beijing, China ; Shiqiang Yang ; Guangyou Xu ; Xueyin Lin
This paper presents the results of integrating omnidirectional view image analysis and a set of adaptive backpropagation networks to understand outdoor road scenes for a mobile robot. Both the road orientations used for robot heading and the road categories used for robot localization are determined by the integrated system, the road understanding neural networks (RUNN). Classification is performed before orientation estimation so that the system can handle road images of different types effectively and efficiently. An omni-view image (OVI) sensor captures images with a 360-degree view around the robot in real time. Rotation-invariant image features are extracted by a series of image transformations and serve as the inputs of a road classification network (RCN). Each road category has its own road orientation network (RON), and the classification result (the road category) activates the corresponding RON to estimate the road orientation of the input image. Several design issues, including the network model, the selection of input data, the number of hidden units, and learning problems, are studied. The internal representations of the networks are carefully analyzed. Experimental results with real scene images show that the method is fast and robust.
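The classify-then-estimate structure of RUNN can be sketched as follows. This is a minimal illustrative mock-up, not the paper's implementation: the feature count, number of road categories, network sizes, and random weights are all assumptions, and each network is reduced to a single-hidden-layer forward pass standing in for a trained backpropagation network.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer network forward pass: tanh hidden layer, linear output."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def make_net(n_in, n_hidden, n_out):
    """Build placeholder (untrained) weights for one network."""
    return (rng.normal(0, 0.1, (n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(0, 0.1, (n_hidden, n_out)), np.zeros(n_out))

N_FEATURES = 32    # assumed size of the rotation-invariant feature vector
N_CATEGORIES = 4   # assumed number of road categories

rcn = make_net(N_FEATURES, 16, N_CATEGORIES)            # road classification network
rons = [make_net(N_FEATURES, 16, 1) for _ in range(N_CATEGORIES)]  # one RON per category

def runn(features):
    """Classify the road type first, then run only the matching RON."""
    scores = mlp_forward(features, *rcn)
    category = int(np.argmax(scores))                    # RCN output selects a category
    orientation = float(mlp_forward(features, *rons[category])[0])
    return category, orientation

category, orientation = runn(rng.normal(size=N_FEATURES))
```

Running only the selected RON, rather than all orientation networks, is what makes the classify-first design efficient: orientation estimation cost does not grow with the number of road categories.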

Published in:

IEEE Transactions on Image Processing (Volume: 7, Issue: 8)