Abstract:
Emerging mobile platforms, such as autonomous robots and AR devices, require RGB-D data and 3D bounding-box (BB) information for accurate navigation and seamless interaction with the surrounding environment. Specifically, the extraction of RGB-D data and 3D BBs needs to be performed in real time (>30fps) while consuming low power (<1W) due to limited battery capacity. In addition, a conventional depth-processing system consumes high power due to a high-performance (HP) time-of-flight (ToF) sensor with an illuminator (>3W) [1]. However, even the HP ToF fails to extract depth in areas of extreme reflectance, leading to failure in navigation or AR interaction. In addition, a software implementation on an application processor suffers from high latency (~0.1s) to preprocess the depth data and process the 3D point-cloud-based neural network (PNN) [2]. Therefore, this paper proposes an SoC for low-power and low-latency depth estimation and 3D object detection with high accuracy, as shown in Fig. 33.4.1. The system implements depth fusion [3], [4] to allow accurate RGB-D extraction without hollows, while using a low-power (LP) ToF sensor (<0.4W). The SoC can fully accelerate the depth-processing pipeline, achieving a maximum of 45.6fps.
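The abstract mentions depth fusion [3], [4] only at a high level. As an illustration of the general idea, the sketch below shows one simple way RGB guidance can fill hollows in a low-power ToF depth map (a joint-bilateral-style fill). This is not the SoC's actual pipeline; the function name `fuse_depth` and the parameters `win` and `sigma_c` are hypothetical.

```python
import numpy as np

def fuse_depth(rgb, lp_tof_depth, win=5, sigma_c=10.0):
    """Fill hollow (zero) depth pixels of a low-power ToF map using
    RGB-guided neighbor weighting. Illustrative sketch only.

    rgb          : HxWx3 uint8 color image
    lp_tof_depth : HxW float depth map; 0 marks a hollow (no return)
    """
    h, w = lp_tof_depth.shape
    fused = lp_tof_depth.copy()
    half = win // 2
    for y in range(h):
        for x in range(w):
            if lp_tof_depth[y, x] > 0:
                continue  # keep valid ToF measurements as-is
            y0, y1 = max(0, y - half), min(h, y + half + 1)
            x0, x1 = max(0, x - half), min(w, x + half + 1)
            patch_d = lp_tof_depth[y0:y1, x0:x1]
            patch_c = rgb[y0:y1, x0:x1].astype(np.float32)
            valid = patch_d > 0
            if not valid.any():
                continue  # nothing nearby to fuse; leave the hollow
            # weight valid neighbors by color similarity to the center pixel
            diff = patch_c - rgb[y, x].astype(np.float32)
            wgt = np.exp(-np.sum(diff * diff, axis=-1) / (2 * sigma_c ** 2))
            wgt = wgt * valid
            fused[y, x] = np.sum(wgt * patch_d) / np.sum(wgt)
    return fused
```

In practice such a fill would be one stage of a hardware-accelerated pipeline; the software loop here only conveys how color similarity can decide which neighboring depth samples are trustworthy.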
Date of Conference: 20-26 February 2022
Date Added to IEEE Xplore: 17 March 2022