Approaches, Challenges, and Applications for Deep Visual Odometry: Toward Complicated and Emerging Areas


Abstract:

Visual odometry (VO) is a prevalent way to deal with the relative localization problem. It is becoming increasingly mature and accurate, but it tends to be fragile in challenging environments. Compared with classical geometry-based methods, deep-learning-based methods can automatically learn effective and robust representations, such as depth, optical flow, features, and ego-motion, from data without explicit computation. Nevertheless, a thorough review of the recent advances in deep-learning-based VO (Deep VO) is still lacking. Therefore, this article aims to gain deep insight into how deep learning can profit and optimize VO systems. We first screen out a number of qualifications, including accuracy, efficiency, scalability, dynamicity, practicability, and extensibility, and employ them as the criteria. Then, using these criteria as uniform measurements, we evaluate and discuss in detail how deep learning improves the performance of VO in terms of depth estimation, feature extraction and matching, and pose estimation. We also summarize the complicated and emerging application areas of Deep VO, such as mobile robots, medical robots, and augmented and virtual reality. Through literature decomposition, analysis, and comparison, we finally put forward a number of open issues and raise some future research directions in this field.
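
As a purely illustrative companion to the learning-based ego-motion estimation the abstract refers to, the sketch below shows a minimal convolutional pose regressor that maps a pair of consecutive frames to a 6-DoF relative pose. The architecture, layer sizes, and names (e.g., PoseRegressor) are assumptions made for illustration and do not reproduce any specific method reviewed in the article.

    # Hypothetical sketch of an end-to-end Deep VO pose regressor (PyTorch).
    # Layer sizes and the overall design are illustrative assumptions only.
    import torch
    import torch.nn as nn

    class PoseRegressor(nn.Module):
        """Predicts a 6-DoF relative pose (3 translation + 3 rotation
        parameters) from two consecutive RGB frames stacked channel-wise."""

        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(6, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # global pooling -> (B, 128, 1, 1)
            )
            self.head = nn.Linear(128, 6)  # [tx, ty, tz, rx, ry, rz]

        def forward(self, frame_t, frame_t1):
            x = torch.cat([frame_t, frame_t1], dim=1)  # (B, 6, H, W)
            features = self.encoder(x).flatten(1)      # (B, 128)
            return self.head(features)                 # (B, 6) relative pose

    if __name__ == "__main__":
        model = PoseRegressor()
        f_t = torch.randn(1, 3, 192, 640)   # previous frame
        f_t1 = torch.randn(1, 3, 192, 640)  # current frame
        print(model(f_t, f_t1).shape)       # torch.Size([1, 6])

In practice, such a regressor would be trained either with supervised pose labels or, as many of the surveyed works do, jointly with a depth network under a self-supervised photometric loss; the sketch above only conveys the input/output structure.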
Published in: IEEE Transactions on Cognitive and Developmental Systems ( Volume: 14, Issue: 1, March 2022)
Page(s): 35 - 49
Date of Publication: 18 November 2020
