Learning visual object definitions by observing human activities

3 Author(s)
Veloso, M. (Sch. of Comput. Sci., Carnegie Mellon Univ., Pittsburgh, PA); von Hundelshausen, F.; Rybski, P.E.

Abstract:

Humanoid robots, while moving in our everyday environments, need to recognize objects. Providing robust object definitions for every single object in our environments is challenging and, in practice, impossible. In this work, we build upon the fact that objects have different uses, and that humanoid robots, while co-existing with humans, should be able to observe humans using those objects and learn the corresponding object definitions. We present an object recognition algorithm, FOCUS, for finding object classifications through use and structure. FOCUS learns structural properties (visual features) of objects by first knowing the object's affordance properties and then observing humans interacting with that object through known activities. FOCUS combines an activity recognizer, flexible and robust across environments, which captures how an object is used, with a low-level visual feature processor. The relevant features are then associated with an object definition, which is used for object recognition. The strength of the method lies in the fact that we can define multiple aspects of an object model, i.e., structure and use, that are individually robust but insufficient to define the object on their own, yet can do so jointly. We present the FOCUS approach in detail and demonstrate it on a variety of activities, objects, and environments. We show empirical evidence illustrating the efficacy of the method.
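
As a rough illustration of the structure-plus-use idea described in the abstract, the Python sketch below pairs a known activity label with visual features observed while that activity occurs, and later recognizes the object from features alone. Everything in it is an assumption for illustration: the toy hue/size features, the tolerance-based matcher, and the ObjectModel, learn_object, and recognize names have no counterpart in the paper, whose actual activity recognizer and feature processor are not described in this abstract.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ObjectModel:
    """Object definition: a known use (activity) plus visual features learned by observation."""
    name: str
    activity: str                                   # activity that signals the object's use (assumed label)
    features: List[Dict[str, float]] = field(default_factory=list)

def extract_visual_features(region: Dict[str, float]) -> Dict[str, float]:
    """Hypothetical low-level feature processor; stand-in for the paper's visual features."""
    return {"hue": region["hue"], "size": region["size"]}

def learn_object(model: ObjectModel, observations: List[dict]) -> None:
    """Associate features seen during the known activity with the object definition."""
    for obs in observations:
        if obs["activity"] == model.activity:       # the activity recognizer says the object is in use
            model.features.append(extract_visual_features(obs["region"]))

def recognize(model: ObjectModel, region: Dict[str, float], tol: float = 0.15) -> bool:
    """Recognize by structure alone once use has grounded the definition."""
    cand = extract_visual_features(region)
    return any(
        all(abs(cand[k] - f[k]) <= tol * max(abs(f[k]), 1e-6) for k in f)
        for f in model.features
    )

if __name__ == "__main__":
    chair = ObjectModel(name="chair", activity="sitting")
    # Toy observations: a recognized activity plus the image region involved in it.
    observations = [
        {"activity": "sitting", "region": {"hue": 0.60, "size": 120.0}},
        {"activity": "walking", "region": {"hue": 0.10, "size": 40.0}},
    ]
    learn_object(chair, observations)
    print(recognize(chair, {"hue": 0.62, "size": 118.0}))   # True: similar to features seen during "sitting"
    print(recognize(chair, {"hue": 0.10, "size": 40.0}))    # False: never associated with "sitting"

The point of the toy example is the division of labor the abstract describes: use (the activity) tells the learner which image regions matter, and structure (the visual features) is what is stored and matched afterwards; neither source of evidence would define the object on its own.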

Published in:

2005 5th IEEE-RAS International Conference on Humanoid Robots

Date of Conference:

5 Dec. 2005