This work describes a data fusion architecture for robotic assembly tasks based on human sensory-motor skills. These skills are transferred to the robot through geometric and dynamic perception signals, and artificial neural networks are used in the learning process. The data fusion paradigm consists of two independent modules for optimal fusion and filtering; the fusion algorithm employs Kalman techniques linked to stochastic models of the signal evolution. Compliant-motion signals obtained from vision and from pose sensing are fused, enhancing task performance. Simulation results and peg-in-hole experiments are reported.
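To make the fusion-plus-filtering idea concrete, the following is a minimal sketch, not the paper's actual algorithm: it assumes a scalar compliant-motion signal with a random-walk (stochastic) evolution, two independent sensors (vision and pose) with known noise variances, inverse-variance weighting for the optimal fusion module, and a scalar Kalman filter for the filtering module. All function names, parameter values, and noise levels are illustrative assumptions.

```python
import numpy as np

def fuse_measurements(z_vision, z_pose, r_vision, r_pose):
    """Fusion module: optimally combine two independent measurements
    of the same quantity by inverse-variance weighting."""
    w_vision = r_pose / (r_vision + r_pose)
    w_pose = r_vision / (r_vision + r_pose)
    z_fused = w_vision * z_vision + w_pose * z_pose
    r_fused = (r_vision * r_pose) / (r_vision + r_pose)  # fused noise variance
    return z_fused, r_fused

def kalman_filter(z_seq, r_seq, q=1e-3, x0=0.0, p0=1.0):
    """Filtering module: scalar Kalman filter under a random-walk
    model of the signal evolution (assumed, not from the paper)."""
    x, p = x0, p0
    estimates = []
    for z, r in zip(z_seq, r_seq):
        p = p + q            # predict: variance grows by process noise q
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # update with the fused measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Illustrative use: noisy vision and pose readings of one motion trace.
rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 2 * np.pi, 100))
vision = truth + rng.normal(0, 0.3, truth.shape)  # vision: noisier sensor
pose = truth + rng.normal(0, 0.1, truth.shape)    # pose sense: less noisy

fused, r_fused = fuse_measurements(vision, pose, 0.3**2, 0.1**2)
filtered = kalman_filter(fused, np.full(truth.shape, r_fused))
```

Keeping fusion and filtering as two independent functions mirrors the two-module structure described in the abstract: the fused estimate and its variance feed directly into the Kalman update.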