
Cross-Spatial Fusion and Dynamic-Range Particle Filter-Based FPGA-GPU Architecture for 1-ms RGB-Based Object Pose Tracking


Abstract:

Synchronization with real-time data is critical for vision-driven measurement techniques, particularly in factory automation (FA) object pose tracking. Increasing the image sampling frequency and reducing processing delays can enhance production efficiency, yet most existing work prioritizes accuracy over speed because of the computational complexity involved. This article develops a high-frame-rate, ultralow-delay object pose tracking architecture built on complementary field-programmable gate array (FPGA) and graphics processing unit (GPU) heterogeneous processing. It proposes two methods: a cross-spatial fusion module (CSFM), which enhances feature extraction and depth accuracy by using 3-D models and temporal information, and a dynamic-range particle filter (DRPF), a hardware-oriented design that improves pose calculation accuracy. Furthermore, a high-frame-rate dataset for FA pose tracking was developed to evaluate the system. The results indicate that, compared to the state of the art, the proposed method exhibits only a 0.04% reduction in average 3-D distance (ADD)-0.1d accuracy in frame-by-frame operation but a 22.25% improvement in real-time operation. For an input image sequence of 640 × 360 pixels at 1000 frames per second (FPS), the proposed method processes each frame, from camera capture to output, in 0.927 ms.
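
For context, ADD-0.1d refers to the widely used average 3-D distance criterion of Hinterstoisser et al.: with the ground-truth pose (R, t) and the estimated pose (R̂, t̂) applied to the m points of the object's 3-D model, a pose estimate is counted as correct when the mean point distance stays below 10% of the object diameter d. In LaTeX form:

\mathrm{ADD} = \frac{1}{m} \sum_{x \in \mathcal{M}} \left\| (Rx + t) - (\hat{R}x + \hat{t}) \right\|_2, \qquad \text{correct} \iff \mathrm{ADD} < 0.1\,d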
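
The abstract does not detail how the DRPF adapts its search range, so the Python sketch below is only a generic illustration of the dynamic-range idea, not the authors' hardware-oriented design: a particle filter diffuses pose hypotheses, weights them with an image-based likelihood, and resamples, while the diffusion range sigma is narrowed when the weight distribution is sharply peaked (confident track) and widened when it is flat. The 6-DoF state layout, the observe() likelihood, the adaptation rule, and all parameter values are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

def systematic_resample(weights):
    # Low-variance (systematic) resampling: one random offset, n strata.
    n = len(weights)
    positions = (np.arange(n) + rng.random()) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumsum, positions)

def drpf_step(particles, sigma, observe, sigma_min=1e-3, sigma_max=0.1):
    # One tracking step over (n, 6) pose hypotheses [x, y, z, roll, pitch, yaw].
    # 1. Diffuse hypotheses within the current search range.
    proposals = particles + rng.normal(0.0, sigma, particles.shape)
    # 2. Weight each hypothesis by an image-based likelihood (placeholder).
    w = observe(proposals)
    w = w / w.sum()
    # 3. Assumed dynamic-range rule: a sharply peaked weight distribution
    #    (low effective sample size) indicates a confident track, so shrink
    #    the range; a flat distribution indicates uncertainty, so widen it.
    ess = 1.0 / np.sum(w ** 2)          # effective sample size, in [1, n]
    concentration = 1.0 - ess / len(w)  # ~1 when one hypothesis dominates
    sigma = float(np.clip(sigma * (1.5 - concentration), sigma_min, sigma_max))
    # 4. Resample to concentrate particles on high-likelihood poses.
    return proposals[systematic_resample(w)], sigma

# Toy run against a dummy likelihood peaked at a fixed target pose.
target = np.zeros(6)
observe = lambda p: np.exp(-np.sum((p - target) ** 2, axis=1))
particles, sigma = rng.normal(0.0, 0.05, (256, 6)), 0.05
for _ in range(20):
    particles, sigma = drpf_step(particles, sigma, observe)

In the real system such an update would have to run per frame on the FPGA-GPU pipeline within the 1-ms budget implied by the 1000-FPS input; the sketch makes no attempt to model that.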
Article Sequence Number: 2517220
Date of Publication: 17 March 2025

