This paper realizes a humanoid robot (HR) system that executes target grasping (TG) in 3-D world coordinates. At the outset, the HR scans the field to locate specific targets, which are randomly distributed in the 3-D workspace in front of the HR. Guided by an active stereo vision system (ASVS), the HR is navigated to the planned posture, and the TG task is then executed. The first feature of this paper is that the transform between the target's positions in the left and right image-plane coordinates of the ASVS and its position in 3-D world coordinates is approximated off-line by a multilayer neural network (MLNN) trained with the Levenberg-Marquardt backpropagation (LMBP) law. Because computing the inverse kinematics (IK) of the two arms is time consuming, a second off-line MLNN model is employed to approximate the transform between the estimated ground-truth target position and the joint coordinates of the two arms; this is the second feature of this paper. Finally, the grasping of three targets with different colors and different 3-D world coordinates by our HR is demonstrated to verify the effectiveness and efficiency of the proposed method.
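The core idea of the first feature (an off-line MLNN that maps stereo image-plane coordinates to a 3-D world coordinate, fitted with Levenberg-Marquardt optimization) can be sketched as follows. This is a minimal illustration only: the network size, the synthetic stand-in data, and all variable names are assumptions, not the paper's actual calibration set or architecture. SciPy's `least_squares` with `method='lm'` plays the role of the LMBP training law here.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 4 inputs (uL, vL, uR, vR) -> 3 outputs (X, Y, Z).
# In the paper these pairs would come from ASVS calibration measurements.
X_in = rng.uniform(-1.0, 1.0, size=(200, 4))
Y_out = np.stack([X_in[:, 0] * X_in[:, 2],          # arbitrary smooth map used
                  X_in[:, 1] + X_in[:, 3],          # only to make the sketch
                  np.tanh(X_in.sum(axis=1))], axis=1)  # self-contained

H = 10  # hidden units (assumed size)
shapes = [(4, H), (H,), (H, 3), (3,)]  # W1, b1, W2, b2 of a one-hidden-layer MLNN

def unpack(p):
    """Split the flat parameter vector into weight/bias arrays."""
    arrays, i = [], 0
    for s in shapes:
        n = int(np.prod(s))
        arrays.append(p[i:i + n].reshape(s))
        i += n
    return arrays

def forward(p, x):
    """One hidden tanh layer followed by a linear output layer."""
    W1, b1, W2, b2 = unpack(p)
    return np.tanh(x @ W1 + b1) @ W2 + b2

def residuals(p):
    """Flattened prediction errors over the whole training set."""
    return (forward(p, X_in) - Y_out).ravel()

n_params = sum(int(np.prod(s)) for s in shapes)
p0 = rng.normal(scale=0.1, size=n_params)

# Levenberg-Marquardt fit of all network parameters at once.
fit = least_squares(residuals, p0, method='lm')

rmse = np.sqrt(np.mean((forward(fit.x, X_in) - Y_out) ** 2))
```

Once trained off-line, a single `forward` call replaces the stereo triangulation at run time; the second feature (the IK approximation) would follow the same pattern with the estimated 3-D target position as input and the arm joint coordinates as output.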