Abstract:
Generally, any mobile robot must first understand its surroundings before it can traverse an unknown environment and carry out its intended tasks. To do this, a host of sensors is often used to detect the terrain around ground robots, and this data is used to model the same environment virtually. Both lidar and vision-based sensors, which are common across most industries, typically return points of interest in the form of a point cloud map. Although point cloud data can be an invaluable input for control applications such as path planning and trajectory tracking, it can be ambiguous or unhelpful to humans, and costly to compute. To address this issue, this paper presents a stitching approach that foregoes the motion model and creates large mosaics of perspective-transformed images from a camera on a mobile robot, using only refined image registration and rejection criteria. To run and validate the proposed stitching algorithm, a hardware test platform was set up, consisting of a battery-powered mobile robot, a mounted stereo vision sensor, and a base PC communicating over ROS.
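The core loop the abstract describes, registering each incoming perspective-transformed frame against the growing mosaic and rejecting weak alignments rather than relying on a motion model, might look roughly like the following OpenCV sketch. This is a minimal illustration, not the paper's actual pipeline: the ORB features, Lowe ratio test, RANSAC threshold, and the MIN_MATCHES / MIN_INLIER_RATIO rejection criteria are all illustrative assumptions.

```python
import cv2
import numpy as np

# Illustrative rejection thresholds -- assumptions, not values from the paper.
MIN_MATCHES = 20
MIN_INLIER_RATIO = 0.4

def register_frame(mosaic, frame):
    """Estimate a homography mapping `frame` into `mosaic` coordinates,
    returning None when the rejection criteria fail."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(mosaic, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
    if d1 is None or d2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(d2, d1, k=2)
    # Lowe ratio test discards ambiguous correspondences.
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < MIN_MATCHES:
        return None  # rejected: too few reliable correspondences
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or inliers.sum() / len(good) < MIN_INLIER_RATIO:
        return None  # rejected: RANSAC consensus too weak to trust
    return H

def stitch(mosaic, frame):
    """Warp `frame` into a pre-allocated, oversized mosaic canvas and
    composite it in, or leave the mosaic unchanged if the frame is rejected."""
    H = register_frame(mosaic, frame)
    if H is None:
        return mosaic
    h, w = mosaic.shape[:2]
    warped = cv2.warpPerspective(frame, H, (w, h))
    filled = warped.sum(axis=2) > 0  # pixels the new frame actually covers
    mosaic[filled] = warped[filled]
    return mosaic
```

Because rejected frames simply leave the mosaic untouched, registration quality alone gates what enters the map, which is the key property of a stitching approach that forgoes a motion model.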
Date of Conference: 12-15 October 2021
Date Added to IEEE Xplore: 28 December 2021