In this paper we present a new architecture for implementing real-time stereo vision on FPGA chips. The proposed architecture reduces computational load by restricting processing to specific image features rather than every image pixel. Two classes of features are considered: low-level features, such as edges, and high-level features, such as complete patterns or regions. The paper discusses how both types of features can be integrated with depth calculation to reduce the required FPGA resources while maintaining real-time performance, enabling implementation on relatively small FPGA chips or under tight resource budgets. The proposed architecture was successfully implemented on a Virtex-4 FPGA and tested on several sample data sets. The results show that the proposed architecture achieves excellent accuracy together with a significant reduction in required resources.
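To illustrate the core idea of restricting depth computation to feature pixels, the following is a minimal software sketch, not the paper's FPGA design: SAD block-matching disparity is evaluated only at edge pixels found by a simple gradient test. All function names, thresholds, and window sizes here are illustrative assumptions.

```python
import numpy as np

def edge_mask(img, thresh=30):
    """Mark pixels with a strong horizontal intensity gradient as 'edge' features."""
    gx = np.abs(np.diff(img.astype(np.int32), axis=1))
    mask = np.zeros(img.shape, dtype=bool)
    mask[:, 1:] = gx > thresh
    return mask

def sparse_disparity(left, right, mask, max_disp=8, win=2):
    """SAD block-matching disparity, computed only where mask is True.

    Returns -1 at pixels that were skipped; a feature at (y, x) in the
    left image is assumed to appear at (y, x - d) in the right image.
    """
    h, w = left.shape
    L = left.astype(np.int32)
    R = right.astype(np.int32)
    disp = np.full((h, w), -1, dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            if not mask[y, x]:
                continue  # skip non-feature pixels: the resource saving
            patch = L[y - win:y + win + 1, x - win:x + win + 1]
            best_cost, best_d = None, 0
            for d in range(max_disp + 1):
                cand = R[y - win:y + win + 1, x - d - win:x - d + win + 1]
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Because the matching loop only runs at masked pixels, the work scales with the number of features rather than the image area, which is the same trade-off the proposed hardware architecture exploits to fit on smaller FPGAs.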