We propose 3D linear visual servoing for a humanoid robot. Linear visual servoing is based on a linear approximation between the binocular visual space and the joint space of the humanoid robot. It is very robust to calibration error, especially to camera turning, because it uses neither camera angles nor joint angles to calculate the feedback command. Although the method is effective for 3D positioning control, its work space is limited to the space in front of the robot. In this paper, we expand the work space of linear visual servoing so that the robot can manipulate a target object over a wide space. We obtain the linear approximation matrix in other regions of the work space and express the matrix as a function of the neck angle by using a neural network. Experimental results are presented to demonstrate the effectiveness of the proposed method.
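The feedback law described above can be sketched as follows: a constant matrix maps the binocular image-feature error directly to a joint-space command, so no camera angles or joint angles appear in the loop. This is a minimal illustrative sketch, not the authors' implementation; the matrix `A`, the 3-joint arm, the 4-D stereo feature vector, and the gain value are all hypothetical.

```python
import numpy as np

def linear_visual_servo_step(q, f, f_des, A, gain=0.1):
    """One servo update: drive joints along A @ (feature error).

    q      -- current joint configuration (hypothetical 3-joint arm)
    f      -- current binocular image features (e.g. stereo pixel coords)
    f_des  -- desired binocular image features
    A      -- assumed linear approximation matrix from feature space
              to joint space (stands in for the image Jacobian)
    """
    error = f_des - f          # feature error measured in the images only
    dq = gain * A @ error      # linear map: no camera or joint angles used
    return q + dq

# Toy example with assumed dimensions and values.
A = np.array([[0.5, 0.0, 0.2, 0.0],
              [0.0, 0.5, 0.0, 0.2],
              [0.1, 0.1, 0.1, 0.1]])
q = np.zeros(3)
f = np.array([10.0, 12.0, 9.0, 11.0])       # current stereo features
f_des = np.array([15.0, 15.0, 15.0, 15.0])  # desired stereo features
q_next = linear_visual_servo_step(q, f, f_des, A)
```

In the paper's extension, a single constant `A` no longer suffices once the neck turns; a small neural network would instead output the approximation matrix as a function of the neck angle, and that matrix would be used in place of the fixed `A` above.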