We present a method to generate stylized stereo imagery that effectively communicates the shape and distance of the depicted scene objects. We use computer vision techniques to analyze real stereo image pairs. In particular, a region-based stereo matching algorithm with symmetrical treatment of occlusions is used to extract a disparity map and, subsequently, the depth information of the scene. The reference image is color-segmented for the purpose of color stylization, and an algorithm combining intensity image edges and depth discontinuities is applied to depict dominant object contours in the image. We use the disparity information to propagate the stylized color segments, together with the object-outlining contours, to the second view. The resulting stylized image pairs are consistent across the two views and can easily be fused for stereoscopic viewing. Stereoscopic fusion provides an extra dimension of depth that is absent from the individual images.
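The core consistency step described above, propagating the stylized reference view to the second view via the disparity map, can be sketched as a simple forward warp. This is a minimal illustration in NumPy, not the paper's actual implementation: the function name and the nearest-pixel rounding are assumptions, and pixels left unfilled correspond to occluded regions that a full method would treat explicitly.

```python
import numpy as np

def warp_to_second_view(stylized, disparity):
    """Hypothetical sketch: forward-warp a stylized reference image to
    the second view by shifting each pixel horizontally by its disparity.
    Returns the warped image and a mask of filled pixels; unfilled
    pixels indicate occlusions visible only in one view."""
    h, w = disparity.shape
    out = np.zeros_like(stylized)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # Shift by the (rounded) disparity at this pixel.
            xs = x - int(round(disparity[y, x]))
            if 0 <= xs < w:
                out[y, xs] = stylized[y, x]
                filled[y, xs] = True
    return out, filled
```

Because both views receive the same stylized segments and contours, merely shifted by disparity, the two images remain consistent and can be fused stereoscopically without rivalry between mismatched strokes.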