I. Introduction
Interest in improving real-time remote human-robot interaction is growing rapidly. First-person view (FPV) teleoperation of unmanned ground vehicles (UGVs) through VR, under remote or semi-automatic control, is increasingly used for search and rescue, disaster recovery, and terrain and object surveillance, especially in unsafe environments [1], [2]. Immersive VR displays for UGV teleoperation can improve the user's concentration and performance in obstacle avoidance tasks compared to a conventional display such as a desktop monitor [3]. UGVs can explore the surrounding environment and send information captured by onboard sensors or cameras to remotely located users in real time. However, the cameras attached to these robots typically have limitations, such as few degrees of freedom, a narrow field of view, and poor photosensitivity, especially in dark, complex environments where surrounding objects and obstacles cause interference.