Autonomous, intelligent exploration of unknown environments by a mobile robot remains an open problem. Biological systems suggest that the solution is not only a matter of computational power but also of choosing the right sensors and implementation methods. In this work we observe that the cellular nonlinear network (CNN), a powerful and practically realizable computing paradigm, fully realizes the concept of real-time processing of time signals coming from spatially distributed sources. In addition, an inertial gyroscope can increase the robustness and computational efficiency of a vision system, which is otherwise subject to signal degradation and high computational cost, by providing frame-to-frame predictions of camera orientation and position. Whereas most earlier and recent studies have approached artificial vision using only two or three views, here we present a direct trigonometric recovery of 3D scene geometry from a stream of images. The images are vertically rectified using the gyroscopes' output to minimize relative feature displacements between frames. To match this vertically rectified image stream in real time, we exploit the strength of the CNN, and the matching results are processed trigonometrically for 3D range estimation.
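The final step, trigonometric range recovery from matched features, can be sketched as a standard triangulation. This is a minimal illustration only, not the paper's actual method; the focal length, baseline, and disparity values below are hypothetical:

```python
def estimate_range(disparity_px: float, baseline_m: float, focal_px: float) -> float:
    """Triangulate depth from the pixel disparity of a matched feature.

    disparity_px: horizontal shift of the feature between two frames (pixels)
    baseline_m:   camera displacement between the two frames (metres)
    focal_px:     camera focal length expressed in pixels

    Standard pinhole-camera relation: depth Z = f * B / d.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# Hypothetical example: 10 px disparity, 0.2 m baseline, 500 px focal length
depth_m = estimate_range(10.0, 0.2, 500.0)  # -> 10.0 m
```

In practice the gyro-based vertical rectification described above is what makes a simple one-dimensional disparity search like this feasible, since it removes most of the vertical component of feature motion between frames.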