Abstract:
Local fusion of disparity maps allows fast parallel 3D modeling of large scenes that do not fit into main memory. While existing methods assume a constant disparity uncertainty, disparity errors typically vary spatially from tenths of a pixel to several pixels. In this paper we propose a method that employs a set of Gaussians for different disparity classes, instead of a single error model with only one variance. The set of Gaussians is learned from the difference between generated disparity maps and ground-truth disparities. Pixels are assigned to particular disparity classes based on a Total Variation (TV) feature measuring the local oscillation behavior of the 2D disparity map. This feature captures uncertainty caused, for instance, by lack of texture or by the fronto-parallel bias of the stereo method. Experimental results on several datasets in varying configurations demonstrate that our method yields improved performance both qualitatively and quantitatively.
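The abstract describes the method only at a high level. As a non-authoritative illustration of the idea, the Python sketch below computes a simple per-pixel TV feature (sum of absolute disparity differences to the four neighbors), bins pixels into disparity classes by TV quantiles, and fits one zero-mean Gaussian variance per class from the disparity error against ground truth. The function names, the quantile binning, and the zero-mean assumption are all assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np

def tv_feature(disparity):
    # Per-pixel TV feature: sum of absolute disparity differences to the
    # four neighbors, measuring how strongly the disparity map oscillates
    # locally (high in weakly textured or staircased regions).
    dx = np.abs(np.diff(disparity, axis=1))
    dy = np.abs(np.diff(disparity, axis=0))
    tv = np.zeros_like(disparity)
    tv[:, :-1] += dx   # difference to the right neighbor
    tv[:, 1:]  += dx   # difference to the left neighbor
    tv[:-1, :] += dy   # difference to the lower neighbor
    tv[1:, :]  += dy   # difference to the upper neighbor
    return tv

def fit_class_gaussians(disparity, ground_truth, n_classes=4):
    # Assign each pixel a disparity class via TV quantiles (an illustrative
    # choice of class boundaries) and fit a zero-mean Gaussian, i.e. one
    # error variance per class, from the ground-truth disparity error.
    tv = tv_feature(disparity)
    error = disparity - ground_truth
    valid = np.isfinite(error)
    edges = np.quantile(tv[valid], np.linspace(0.0, 1.0, n_classes + 1))
    labels = np.clip(np.searchsorted(edges, tv, side="right") - 1,
                     0, n_classes - 1)
    sigmas = []
    for c in range(n_classes):
        mask = valid & (labels == c)
        sigmas.append(error[mask].std() if mask.any() else np.nan)
    return edges, np.array(sigmas)

# Example with synthetic data standing in for a disparity map and ground truth.
rng = np.random.default_rng(0)
gt = rng.uniform(10.0, 60.0, size=(48, 64))
disp = gt + rng.normal(0.0, 0.5, size=gt.shape)
edges, sigmas = fit_class_gaussians(disp, gt)
```

Quantile binning is used here only because it keeps the classes roughly balanced in pixel count; the paper's actual class definition may differ.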
Published in: 2014 2nd International Conference on 3D Vision
Date of Conference: 08-11 December 2014
Date Added to IEEE Xplore: 09 February 2015
Electronic ISBN: 978-1-4799-7000-1
Print ISSN: 1550-6185