We propose an analytical model to estimate the rendering quality in 3D video. The model relates errors in the depth images to the rendering quality, taking into account texture image characteristics, texture image quality, the camera configuration, and the rendering process. Specifically, we derive position (disparity) errors from the depth errors, and the probability distribution of the position errors is used to calculate the power spectral density of the rendering errors. Experimental results with video sequences and coding/rendering tools used in MPEG 3DV activities show that the model can accurately estimate the synthesis noise up to a constant offset. Thus, the model can be used to estimate the change in rendering quality for different system designs.
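The two steps described above can be illustrated with a minimal numerical sketch. The assumptions here are mine, not taken from the paper: a rectified 1D camera setup in which a small depth error Δz maps to a disparity (position) error of roughly f·b·Δz/z², and a shift-error model in which the rendering error is e(x) = t(x + Δ) − t(x) for a texture t and random position error Δ, giving the error spectrum S_e(ω) = 2·S_t(ω)·(1 − Re E[e^{jωΔ}]). The function names are hypothetical.

```python
import numpy as np

def disparity_error(depth_err, focal, baseline, z):
    """Map a small depth error to a disparity (position) error.

    Assumes a rectified stereo geometry where disparity = focal*baseline/z,
    so to first order |d(disparity)/dz| = focal*baseline/z**2.
    """
    return focal * baseline * depth_err / z**2

def rendering_error_psd(texture_psd, freqs, pos_errors):
    """PSD of the rendering error under a random-shift model.

    With e(x) = t(x + delta) - t(x) and delta drawn from the empirical
    distribution `pos_errors`, S_e(w) = 2*S_t(w)*(1 - Re E[exp(j*w*delta)]).
    The expectation (characteristic function) is estimated by averaging
    over the position-error samples.
    """
    char_fn = np.mean(np.exp(1j * np.outer(freqs, pos_errors)), axis=1)
    return 2.0 * texture_psd * (1.0 - char_fn.real)

# Example: Gaussian position errors against a low-pass texture spectrum.
freqs = np.linspace(0.0, np.pi, 64)
texture_psd = 1.0 / (1.0 + freqs**2)           # hypothetical texture PSD
deltas = np.random.default_rng(0).normal(0.0, 0.5, 2000)
error_psd = rendering_error_psd(texture_psd, freqs, deltas)
```

Note that when the position errors are all zero, the characteristic function is identically one and the predicted error PSD vanishes, which is a quick sanity check on the model.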