This paper presents a mobile robotic system for human assistance in navigation: the robot receives visual instructions from a human operator and is then able to replicate the taught route autonomously. We describe three generic components, defined as the HOST, VISION, and CONTROL components, as well as their integration in our teachable mobile robot. These components are connected to one another via transputer serial links; that is, they are loosely coupled, run in parallel, and operate asynchronously with respect to each other. Each component is designed with extensibility in mind. The VISION component in particular offers two major features. The first is a correlator on each vision board, which performs block matching between a template and the grabbed image in real time. The second is the PIM library, which manages visual tasks over the mobile robot's limited parallel vision resources. These design features keep the system real-time and allow for efficient, extensible software development. To show the feasibility of our system design, we present a preliminary route-teaching experiment on our mobile robot.
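To make the correlator's role concrete, the sketch below shows the kind of computation it performs: exhaustive block matching, locating a template within a grabbed image by minimizing the sum of absolute differences (SAD). This is an illustrative software sketch only; the paper's correlator performs the equivalent matching in dedicated hardware on each vision board, and the function and variable names here are our own.

```python
import numpy as np

def block_match(template, image):
    """Slide the template over the image and return the (row, col)
    offset with the smallest sum of absolute differences (SAD).
    Illustrative sketch of block matching; the vision boards in the
    paper do this in real time in hardware."""
    th, tw = template.shape
    ih, iw = image.shape
    best_score = np.inf
    best_pos = (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            score = np.abs(patch.astype(int) - template.astype(int)).sum()
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Example: cut a template out of a known location and recover it.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
template = image[10:18, 5:13].copy()
print(block_match(template, image))
```

In a route-teaching setting, templates extracted during the taught run would be matched against live images during autonomous replay; a hardware correlator makes this search fast enough to run at frame rate.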