Mesh-based global motion compensation for robust mosaicking and detection of moving objects in aerial surveillance

Authors:
Munderloh, M.; Meuel, H.; Ostermann, J. — Inst. für Informationsverarbeitung (TNT), Leibniz Univ. Hannover, Hannover, Germany

Global motion compensation is a key technology for aerial image processing, e.g., to detect moving objects on the ground or to generate a mosaic image of the observed area. For this task, the motion of the pixels between recorded frames, evoked by the movement of the camera, must be estimated and compensated. As the camera is rigidly attached to a flying device such as a quadrocopter (also called Micro Air Vehicle, MAV) or a helicopter, the motion of the camera directly corresponds to the movement of the aircraft. For simplification, only a planar landscape model is commonly used to describe the global motion of the scene. However, if objects like buildings or mountains are close to the camera, i.e., the MAV is at a low altitude, this simplification is not valid. Therefore, we propose a more complex model by introducing a 2D mesh-based motion compensation technique, also known as image warping, to compensate the global motion. We show the benefits for mosaic creation: fewer artifacts due to perspective distortions and reduced drift. We also improve a moving-object detection system so that moving objects are identified more reliably. Moreover, the proposed method is more robust in the presence of lens distortions.
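The contrast the abstract draws can be sketched in code: a planar model maps every pixel with one global transform, whereas a 2D mesh model assigns a displacement vector to each mesh node and interpolates a dense per-pixel motion field between the nodes. Below is a minimal NumPy sketch of that interpolation step, assuming a regular rectangular mesh spanning the image; the function name and grid layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dense_motion_from_mesh(node_dy, node_dx, height, width):
    """Bilinearly interpolate per-node mesh displacements to a dense field.

    node_dy, node_dx: (rows, cols) arrays holding the vertical/horizontal
    displacement at each mesh node; the nodes are assumed to lie on a
    regular grid spanning the full image (rows, cols >= 2).
    Returns (dy_map, dx_map), each of shape (height, width).
    """
    rows, cols = node_dy.shape
    # Map pixel coordinates into mesh-cell coordinates.
    ys = np.linspace(0.0, rows - 1, height)
    xs = np.linspace(0.0, cols - 1, width)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    # Index of the top-left node of the surrounding mesh cell.
    y0 = np.clip(np.floor(gy).astype(int), 0, rows - 2)
    x0 = np.clip(np.floor(gx).astype(int), 0, cols - 2)
    ty = gy - y0  # fractional position inside the cell, in [0, 1]
    tx = gx - x0

    def interp(f):
        # Standard bilinear blend of the four surrounding node values.
        return ((1 - ty) * (1 - tx) * f[y0, x0]
                + (1 - ty) * tx * f[y0, x0 + 1]
                + ty * (1 - tx) * f[y0 + 1, x0]
                + ty * tx * f[y0 + 1, x0 + 1])

    return interp(node_dy), interp(node_dx)
```

Under this view, a planar (homography) model is the special case where all node displacements are consistent with a single global transform; the mesh model additionally allows local deviations, which is what lets it absorb parallax from nearby buildings and residual lens distortion.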

Published in:

2011 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

Date of Conference:

20-25 June 2011