This paper presents a neural-network-based approach to learning visual/motor behaviors. A behavior description is introduced as the basis for behavior learning. The task used to develop our behavior-based system is a mobile robot that detects a target and drives toward it. First, the mapping relations between the image feature domain of the object and the robot action domain are derived. Second, a multilayer neural network is trained off-line to learn these mapping relations. Through the training process, this learning structure connects visual perceptions to the motor action sequence needed to grip a target. Finally, using behavior learning over an observed action chain, the mobile robot's behavior can be predicted for a variety of similar tasks in similar environments. The prediction results suggest that the methodology is adequate and could serve as a basis for designing mobile robot assistants for guiding blind people or for performing manipulation tasks as a helping hand for disabled people.
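The core idea of the abstract, a multilayer network trained off-line to map image features of a target onto robot actions, can be illustrated with a minimal sketch. This is not the authors' implementation; the feature choice (horizontal image offset and apparent target size), the action choice (turn rate and forward speed), and the synthetic training data are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training pairs: image features [x_offset, size] of the
# detected target, mapped to motor commands [turn, forward]. The robot
# should steer toward the target and slow down as the target grows.
X = rng.uniform(-1, 1, size=(200, 2))           # [x_offset, size]
Y = np.column_stack([-X[:, 0], 1.0 - X[:, 1]])  # [turn, forward]

# One hidden layer with tanh activation, linear output layer.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)      # hidden activations
    return h, h @ W2 + b2         # network output (motor commands)

_, y0 = forward(X)
loss_before = np.mean((y0 - Y) ** 2)

# Off-line training: full-batch gradient descent on mean squared error.
lr = 0.05
for _ in range(500):
    h, y = forward(X)
    g = 2 * (y - Y) / len(X)          # dLoss/dOutput
    dW2 = h.T @ g; db2 = g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)    # backprop through tanh
    dW1 = X.T @ gh; db1 = gh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, y1 = forward(X)
loss_after = np.mean((y1 - Y) ** 2)
print(f"MSE before: {loss_before:.3f}  after: {loss_after:.4f}")
```

Once trained, evaluating the network on the features extracted from each new camera frame yields the next motor command, so a chain of such evaluations produces the action sequence that drives the robot toward the target.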