The purpose of this study is to propose a new tool for defining the posture of a complete anthropomorphic arm model during grasping, taking into account task and environment constraints. The developed model is based on a neural network architecture that mixes supervised and reinforcement learning. Task constraints are materialized as target points on the surface of the object to be grasped that the fingertips must reach, while environment constraints are represented by obstacles. With no prior information on the shape, position, or number of obstacles, the model finds a suitable solution according to specified criteria. Simulation results are presented and discussed.
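To make the problem setting concrete, the following is a minimal toy sketch, not the paper's model: a two-link planar arm must place its fingertip on a target point while a penalty keeps the arm clear of a circular obstacle, and a simple random-restart hill climb stands in for the hybrid supervised/reinforcement-learning architecture. All link lengths, target and obstacle coordinates, and function names here are illustrative assumptions.

```python
import math
import random

# Assumed toy parameters (not from the paper): a 2-link planar arm,
# one fingertip target on the object surface, one circular obstacle.
L1, L2 = 1.0, 0.8              # link lengths
TARGET = (1.2, 0.9)            # fingertip target point
OBSTACLE = (0.5, 0.9, 0.25)    # obstacle: centre x, centre y, radius

def fk(q1, q2):
    """Forward kinematics: return elbow and fingertip positions."""
    ex, ey = L1 * math.cos(q1), L1 * math.sin(q1)
    fx = ex + L2 * math.cos(q1 + q2)
    fy = ey + L2 * math.sin(q1 + q2)
    return (ex, ey), (fx, fy)

def cost(q1, q2):
    """Reaching error plus a penalty when a joint point enters the obstacle."""
    (ex, ey), (fx, fy) = fk(q1, q2)
    reach = math.hypot(fx - TARGET[0], fy - TARGET[1])
    ox, oy, r = OBSTACLE
    penalty = 0.0
    for px, py in [(ex, ey), (fx, fy)]:
        d = math.hypot(px - ox, py - oy)
        if d < r:
            penalty += (r - d) * 10.0  # strong penalty inside the obstacle
    return reach + penalty

def solve(restarts=20, iters=3000, seed=0):
    """Random-restart hill climbing over joint angles: a crude stand-in
    for the learned posture-selection policy described in the abstract."""
    rng = random.Random(seed)
    best, best_c = None, float("inf")
    for _ in range(restarts):
        q = (rng.uniform(-math.pi, math.pi), rng.uniform(-math.pi, math.pi))
        c = cost(*q)
        for _ in range(iters):
            cand = (q[0] + rng.gauss(0, 0.1), q[1] + rng.gauss(0, 0.1))
            cc = cost(*cand)
            if cc < c:
                q, c = cand, cc
        if c < best_c:
            best, best_c = q, c
    return best, best_c

if __name__ == "__main__":
    angles, final_cost = solve()
    print("joint angles:", angles, "cost:", final_cost)
```

The obstacle penalty illustrates how environment constraints can steer posture selection without any prior model of the obstacle layout: here the elbow-up inverse-kinematics solution passes near the obstacle and is penalized, so the optimizer settles on the elbow-down posture instead.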