Object modeling for environment perception through human-robot interaction

3 Author(s)
Soohwan Kim; Dong Hwan Kim; Sung-Kee Park — Cognitive Robotics Center, Korea Institute of Science and Technology, Seoul, South Korea

Abstract:

In this paper, we propose a new method of object modeling for environment perception through human-robot interaction. In particular, within a multi-modal object modeling architecture, we address the gestural-language component using a stereo camera. To that end, we define three human gestures based on the size of the target object: holding small objects, pointing at medium-sized ones, and touching two corner points of large ones. When a user indicates where a target object is located in the environment, the robot interprets the gesture and captures one or more images containing the object. The region of interest where the target object is likely to be located in the captured image is estimated from the environmental context and the user's gesture. Finally, given an image with a region of interest, the robot performs foreground/background segmentation automatically. Here, we propose a marker-based watershed segmentation method for object segmentation. Experimental results show that the segmentation quality of our method is comparable to that of the GrabCut algorithm, while its computation time is much shorter, making it suitable for on-line interactive object modeling.
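The abstract does not spell out the details of the marker-based watershed step, but the general technique it names is well known: seed the image with labeled foreground/background markers and flood regions outward in order of increasing intensity, so that region boundaries settle on intensity ridges. The sketch below is an illustrative, pure-Python priority-flood watershed under that standard formulation; the function name, grid representation, and 4-connectivity are assumptions for illustration, not the authors' implementation.

```python
import heapq

def marker_watershed(image, markers):
    """Illustrative marker-based watershed (priority flood).

    image   : 2D list of intensities (e.g. gradient magnitudes).
    markers : 2D list of the same shape; 0 = unlabeled, >0 = seed label
              (e.g. 1 = foreground/object, 2 = background).
    Returns a 2D list where every pixel carries one of the seed labels.
    """
    h, w = len(image), len(image[0])
    labels = [[markers[y][x] for x in range(w)] for y in range(h)]

    # Seed the priority queue with all marker pixels, ordered by intensity.
    heap = []
    for y in range(h):
        for x in range(w):
            if markers[y][x] != 0:
                heapq.heappush(heap, (image[y][x], y, x))

    # Flood: always grow the lowest-intensity frontier pixel first, so
    # competing regions meet on high-intensity ridges (the "watershed lines").
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == 0:
                labels[ny][nx] = labels[y][x]   # inherit the flooding label
                heapq.heappush(heap, (image[ny][nx], ny, nx))
    return labels
```

In the interactive setting the abstract describes, the markers would come for free: pixels well inside the gesture-derived region of interest seed the foreground label, and pixels outside it seed the background label, which is what makes the segmentation automatic once the ROI is known.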

Published in:

2010 International Conference on Control, Automation and Systems (ICCAS)

Date of Conference:

27-30 Oct. 2010