Modelling human assembly actions from observation

Authors: Paul, G.V.; Jiar, Y.; Wheeler, M.D.; Ikeuchi, K. (Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA)

This paper describes a system that can model an assembly task performed by a human. The actions are recorded in real time using a stereo system, and the assembled objects and the fingers of the hand are tracked through the image sequence. We use the spatial relations between the fingers and the objects to temporally segment the task into approach, pre-manipulate, manipulate, and depart phases, and we broadly interpret the actions in each segment as grasp, push, fine motion, etc. We then analyze the contact relations between objects during the manipulate phase to reconstruct the fine-motion path of the manipulated object. The fine motion in configuration space is a series of connected path segments lying on the features (c-surfaces) of the configuration-space obstacle. We project the observed configurations onto these c-surfaces and reconstruct the path segments; the connected path segments form the fine-motion path. We demonstrate the system using the peg-in-hole task.
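The projection step described above can be sketched in a minimal form. Here a c-surface is approximated locally as a hyperplane (a simplifying assumption; the paper's c-surfaces are features of the actual configuration-space obstacle), and each noisy observed configuration is snapped onto it. The function name and the example data are illustrative, not taken from the paper.

```python
import numpy as np

def project_onto_c_surface(q, normal, point):
    """Project an observed configuration q onto a c-surface,
    approximated locally as a hyperplane through `point` with
    the given normal vector."""
    n = normal / np.linalg.norm(normal)
    # Subtract the component of (q - point) along the surface normal.
    return q - np.dot(q - point, n) * n

# Hypothetical noisy configurations observed near the plane z = 0,
# standing in for one c-surface of a peg-in-hole obstacle.
observed = np.array([[0.0, 0.0,  0.02],
                     [0.1, 0.0, -0.01],
                     [0.2, 0.1,  0.03]])
normal = np.array([0.0, 0.0, 1.0])
origin = np.zeros(3)

# The projected points form one reconstructed path segment.
path_segment = np.array([project_onto_c_surface(q, normal, origin)
                         for q in observed])
```

Chaining such segments, one per c-surface the object moves along, yields the connected fine-motion path described in the abstract.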

Published in:

1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems

Date of Conference:

8-11 December 1996