In visual navigation for aerial robots, detecting the attitude and position of the robot in real time is essential yet very difficult. By introducing a new parametric model, the problem, while not fully solved, becomes far more tractable. In this parametric approach, a multi-scale least squares method is formulated first. By propagating and refining the parameters from layer to layer down the image pyramid, a global feature line can be detected that parameterizes the attitude of the robot. Furthermore, this approach paves the way for segmenting the image into distinct regions, which can be realized by applying a Bayesian classifier at the pixel level. A comparison with a Hough-transform-based method in terms of robustness and precision shows that the multi-scale least squares algorithm is considerably more robust to noise. Some discussion is also provided.
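The abstract does not give the concrete formulation, but the coarse-to-fine idea (fit a line by least squares on a sparse level, then propagate and refine the parameters on denser levels while gating out points far from the current estimate) can be sketched roughly as follows. This is a minimal illustration under assumed details: the pyramid is emulated by subsampling candidate edge points, the line model `y = a*x + b`, the `gate` threshold, and the synthetic horizon data are all hypothetical, not the authors' actual method.

```python
import numpy as np

def fit_line_ls(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    A = np.column_stack([xs, np.ones_like(xs)])
    (a, b), *_ = np.linalg.lstsq(A, ys, rcond=None)
    return a, b

def multiscale_line_fit(xs, ys, levels=3, gate=5.0):
    """Coarse-to-fine least squares (illustrative sketch).

    Fit on a sparse subsample first, then refine on progressively
    denser samples, keeping only points within `gate` of the line
    estimate propagated from the coarser level.
    """
    a = b = None
    for lvl in range(levels - 1, -1, -1):
        step = 2 ** lvl                    # coarser level -> sparser sample
        sx, sy = xs[::step], ys[::step]
        if a is not None:
            # propagate parameters: keep points near the coarse estimate
            resid = np.abs(sy - (a * sx + b))
            sx, sy = sx[resid < gate], sy[resid < gate]
        a, b = fit_line_ls(sx, sy)
    return a, b

# Synthetic "horizon" y = 0.3x + 10 with Gaussian noise and gross outliers
rng = np.random.default_rng(0)
xs = np.arange(0, 256, dtype=float)
ys = 0.3 * xs + 10 + rng.normal(0.0, 1.0, xs.size)
ys[::17] += rng.uniform(30, 60, ys[::17].size)   # simulated clutter
a, b = multiscale_line_fit(xs, ys)
```

The gating step at each finer level is what lends the scheme its robustness relative to a single plain least squares fit, which the outliers would pull off the true line.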