An autonomous visual perception model for robots using object-based attention mechanism

3 Author(s)

Object-based attention theory holds that perception selects only one object of interest from the scene at a time, which is then represented for action. This paper therefore presents an autonomous visual perception model for robots that simulates the object-based bottom-up attention mechanism. With this model, a robot's visual perception begins with attentional selection over the scene, followed by high-level analysis of only the attended object. The proposed model comprises three components: pre-attentive segmentation, bottom-up attentional selection, and post-attentive recognition and learning of the attended object. The model first pre-attentively segments the visual field into discrete proto-objects. Automatic bottom-up competition is then performed to yield a location-based saliency map. By combining location-based salience within each proto-object, proto-object-based salience is evaluated, and the most salient proto-object is selected for recognition and learning. The model has been applied to the robotic task of automatic object detection, and experimental results on natural and cluttered scenes validate it.
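The core selection step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a pre-attentive segmentation given as an integer label map (0 = background) and uses the mean as the rule for combining location-based salience within each proto-object, since the abstract does not specify the combination rule.

```python
import numpy as np

def select_most_salient_proto_object(saliency_map, labels):
    """Combine location-based salience within each proto-object
    (mean over the object's pixels, an assumed rule) and return
    the label of the most salient proto-object plus all scores."""
    object_ids = [lbl for lbl in np.unique(labels) if lbl != 0]  # skip background
    scores = {lbl: float(saliency_map[labels == lbl].mean()) for lbl in object_ids}
    best = max(scores, key=scores.get)  # winner of the proto-object competition
    return best, scores

# Toy 4x4 scene: three proto-objects over a location-based saliency map.
saliency = np.array([[0.1, 0.1, 0.9, 0.9],
                     [0.1, 0.1, 0.9, 0.9],
                     [0.2, 0.2, 0.0, 0.0],
                     [0.2, 0.2, 0.0, 0.0]])
labels = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [3, 3, 0, 0],
                   [3, 3, 0, 0]])
best, scores = select_most_salient_proto_object(saliency, labels)
```

In this toy scene, proto-object 2 wins the competition (mean salience 0.9) and would be passed on for post-attentive recognition and learning.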

Published in:

2009 IEEE International Conference on Robotics and Biomimetics (ROBIO)

Date of Conference:

19-23 Dec. 2009