In this paper, we present a new robotic platform for human-robot interaction. The robot sends information to the operator and receives high-level commands. In addition, the robot uses visual information to navigate in rescue and other unknown environments. In our method, the captured image is converted into a binary image, which, after partitioning, serves as the input to the neural controller. The neural control system, which maps the visual information to motor commands, is evolved online on real robots. We show that the evolved neural networks perform well in unknown and unstructured environments. Furthermore, we compare the performance of the neural controllers with that of an algorithmic vision-based method.
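The pipeline described above, binarizing the captured image, partitioning it into cells, and mapping the cell activations to motor commands through a neural controller, can be sketched as follows. This is only an illustrative sketch under stated assumptions: the threshold, the grid size, and the single-layer tanh network are placeholders, not the paper's actual architecture or parameters.

```python
import numpy as np

def binarize(image, threshold=128):
    # Convert a grayscale image to a binary map (1 = feature, 0 = background).
    # The threshold value is an illustrative assumption.
    return (image >= threshold).astype(np.float64)

def partition(binary, rows=2, cols=2):
    # Split the binary image into a rows x cols grid and return the
    # fraction of "on" pixels in each cell as the network input vector.
    h, w = binary.shape
    cells = []
    for r in range(rows):
        for c in range(cols):
            block = binary[r * h // rows:(r + 1) * h // rows,
                           c * w // cols:(c + 1) * w // cols]
            cells.append(block.mean())
    return np.array(cells)

def neural_controller(inputs, weights, biases):
    # Hypothetical single-layer network mapping visual inputs to two
    # motor commands (e.g. left/right wheel speeds), squashed into [-1, 1].
    return np.tanh(weights @ inputs + biases)

# Demo on a synthetic 8x8 image whose right half is bright.
rng = np.random.default_rng(0)
img = np.zeros((8, 8))
img[:, 4:] = 255
x = partition(binarize(img))          # cell activations: [0, 1, 0, 1]
weights = rng.normal(size=(2, x.size))  # in the paper these are evolved, not random
motors = neural_controller(x, weights, np.zeros(2))
```

In the actual method the network weights would be optimized by the online evolutionary process rather than drawn at random; the sketch only shows the forward path from image to motor commands.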