Image based rendering (IBR) consists of several steps: (i) calibration (or ego-motion computation) of all input images, (ii) determination of the regions in the input images used to synthesize the new view, and (iii) interpolation of the new view from the selected areas of the input images. We propose a unified representation for all these aspects of IBR using the space-time (x-y-t) volume. The presented approach is very robust and allows IBR to be used under general conditions, even with a hand-held camera. To address (i), the space-time volume is constructed by placing frames at locations along the time axis such that image features create straight lines in the EPIs (epipolar plane images). Different slices of the space-time volume are used to produce new views, addressing (ii). Step (iii) is performed by interpolating between image samples along the feature lines in the EPI images. IBR examples are shown for various cases: sequences taken from a driving car, from a hand-held camera, and when using a tripod.
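The space-time volume idea above can be illustrated with a minimal NumPy sketch (a hypothetical toy example, not the paper's implementation): frames from a laterally translating camera are stacked into an x-y-t volume, an EPI is cut at a fixed image row, and a feature visible in every frame traces a straight line through that EPI. A new in-between view can then be synthesized by linear interpolation between adjacent time slices.

```python
import numpy as np

# Toy setup (assumed): a camera translating 1 pixel per frame,
# observing a single vertical feature line.
H, W, T = 8, 16, 5
base = np.zeros((H, W))
base[:, 3] = 1.0  # vertical feature at column x = 3

# Step (i): stack frames along the time axis to build the x-y-t volume.
frames = [np.roll(base, t, axis=1) for t in range(T)]  # feature moves 1 px/frame
volume = np.stack(frames, axis=0)  # shape (t, y, x)

# Step (ii): slice the volume. Fixing y and taking all (t, x) gives an EPI;
# for uniform camera motion the feature forms a straight line in it.
epi = volume[:, H // 2, :]
feature_x = [int(np.argmax(epi[t])) for t in range(T)]
print(feature_x)  # feature position advances linearly with t

# Step (iii): interpolate a new view at a fractional time, e.g. t = 1.5,
# by blending the two neighboring time slices.
new_view = 0.5 * volume[1] + 0.5 * volume[2]
```

Because the camera translation is constant, the feature column advances by a fixed amount per frame, which is exactly the straight-line EPI structure the method exploits for calibration and interpolation.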