In this paper, it was confirmed that a real mobile robot with a simple visual sensor can learn appropriate actions to reach a target through Direct-Vision-Based reinforcement learning (RL). In Direct-Vision-Based RL, raw visual sensory signals are fed directly into a layered neural network, and the network is trained by backpropagation using a training signal generated by reinforcement learning. To account for the time delay in acquiring the visual sensory signals, it is proposed that the actor outputs be trained using the critic output two time steps ahead. It was shown that a robot with a monochrome visual sensor could acquire reaching actions toward a target object through learning from scratch, without any prior knowledge or human assistance.
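As a rough illustration of the scheme described above, the following is a minimal sketch (not the authors' code) of an actor-critic network trained by backpropagation on raw sensory input, where the critic target uses the value two time steps ahead to absorb the sensing delay. All layer sizes, the learning rate, the discount factor, and the update rules are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PIX, N_HID, N_ACT = 64, 16, 2   # assumed: pixels, hidden units, actions
GAMMA, LR = 0.9, 0.05             # assumed discount factor and learning rate
W1 = rng.normal(0.0, 0.1, (N_HID, N_PIX))
W2 = rng.normal(0.0, 0.1, (1 + N_ACT, N_HID))  # row 0: critic, rest: actor

def forward(x):
    """Raw pixel vector x goes straight into the layered network."""
    h = np.tanh(W1 @ x)
    y = np.tanh(W2 @ h)
    return h, y                   # y[0] = critic value, y[1:] = actor outputs

def train_step(x_t, action_noise, reward, x_t2):
    """One backpropagation step. Because of the sensing delay, the critic
    value computed from the observation two steps ahead (x_t2) is used
    to form the TD-style training target."""
    global W1, W2
    h, y = forward(x_t)
    _, y2 = forward(x_t2)
    td = reward + GAMMA * y2[0] - y[0]         # TD error with 2-step-ahead value
    target = y.copy()
    target[0] = y[0] + td                      # critic training signal
    target[1:] = y[1:] + td * action_noise     # actor: reinforce perturbation
    d_out = (target - y) * (1.0 - y ** 2)      # tanh derivative at output
    d_hid = (W2.T @ d_out) * (1.0 - h ** 2)    # backpropagated hidden error
    W2 += LR * np.outer(d_out, h)
    W1 += LR * np.outer(d_hid, x_t)
    return td
```

Here the actor target adds the TD-error-weighted exploration noise to the executed output, a common actor-critic training signal assumed for the sketch; the actual training-signal generation in the paper may differ.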