Model-Agnostic Metalearning-Based Text-Driven Visual Navigation Model for Unfamiliar Tasks

Abstract:
As vision and language processing techniques have made great progress, mapless visual navigation has become a central topic in the domestic-robot field. However, most current end-to-end navigation models are trained and tested strictly on identical datasets with a stationary structure, which leads to severe performance degradation on unseen targets and environments. Since targets of the same category can have quite diverse appearances, the generalization ability of these models is further limited by their purely visual task descriptions. In this article, we propose a model-agnostic metalearning-based text-driven visual navigation model that generalizes to untrained tasks. Built on a meta-reinforcement-learning approach, the agent accumulates navigation experience from existing targets and environments. When asked to find a new object or to explore a new scene, the agent quickly learns to fulfill the unfamiliar task through relatively few recursive trials. To improve learning efficiency and accuracy, we introduce fully convolutional instance-aware semantic segmentation (FCIS) and Word2vec into our deep-reinforcement-learning (DRL) network to extract visual and semantic features, respectively, according to object class, creating a more direct and concise linkage between targets and their surroundings. Several experiments were conducted on the realistic Matterport3D dataset to evaluate the model's target-driven navigation performance and generalization ability. The results demonstrate that our adaptive navigation model can navigate to text-defined targets and adapt quickly to untrained tasks, outperforming other state-of-the-art navigation approaches.
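The adaptation scheme the abstract describes (meta-training over a set of tasks so that a few gradient steps suffice on a new one) can be sketched with a first-order MAML loop. This is a minimal, illustrative sketch on a toy supervised objective, not the paper's actual meta-RL setup: the true model optimizes a DRL navigation reward over FCIS/Word2vec features, and all names below (`task_batch`, `maml_step`, the inner/outer learning rates) are hypothetical.

```python
import numpy as np

# Toy task family: 1-D linear regression y = slope * x, one "task" per slope.
rng = np.random.default_rng(0)

def task_batch(slope, n=20):
    """Sample data for one task: y = slope * x plus small noise."""
    x = rng.uniform(-1.0, 1.0, n)
    y = slope * x + 0.01 * rng.standard_normal(n)
    return x, y

def grad(theta, x, y):
    """Gradient of mean squared error for the model y_hat = theta * x."""
    return 2.0 * np.mean((theta * x - y) * x)

def maml_step(theta, slopes, inner_lr=0.1, outer_lr=0.05):
    """One meta-update: adapt to each task with a single inner gradient
    step, then move theta toward parameters that adapt well on held-out
    data (first-order MAML: the post-adaptation gradient is used directly)."""
    meta_grad = 0.0
    for s in slopes:
        x_tr, y_tr = task_batch(s)
        theta_adapted = theta - inner_lr * grad(theta, x_tr, y_tr)  # inner step
        x_val, y_val = task_batch(s)
        meta_grad += grad(theta_adapted, x_val, y_val)
    return theta - outer_lr * meta_grad / len(slopes)

theta = 0.0
train_slopes = [1.0, 2.0, 3.0]      # the "existing targets and environments"
for _ in range(200):
    theta = maml_step(theta, train_slopes)

# After meta-training, a single inner step adapts toward an unseen task,
# mirroring the "few recursive trials" adaptation described in the abstract.
x_new, y_new = task_batch(2.5)
theta_new = theta - 0.1 * grad(theta, x_new, y_new)
```

The meta-trained `theta` lands near the center of the training-task family, so one gradient step on a handful of samples from a new task already moves it in the right direction; the paper applies the same principle with policy gradients instead of this toy regression loss.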
Published in: IEEE Access ( Volume: 8)
Page(s): 166742 - 166752
Date of Publication: 09 September 2020
Electronic ISSN: 2169-3536
