In this paper, we derive and evaluate theoretical rate-distortion performance bounds for scalable video compression algorithms that use a single motion-compensated prediction (MCP) loop. These bounds are derived using rate-distortion theory based on an optimum mean-square error (MSE) quantizer. By assuming translatory motion and using an approximation of the prediction error frame's power spectral density, it is possible to derive parametric versions of the rate-distortion functions that depend only on the input power spectral density and the accuracy of the motion-compensated prediction. The theory is applicable to systems that allow prediction drift, such as SNR scalability in MPEG-2, as well as those with zero prediction drift, such as the MPEG-4 fine-grained scalability standard.
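As background for the parametric rate-distortion functions mentioned above, the classical result for a stationary Gaussian source can be sketched numerically. This is a generic illustration of the textbook "reverse water-filling" formulas, not the paper's derivation; the AR(1) power spectral density and the correlation value used below are hypothetical stand-ins for an input PSD.

```python
import numpy as np

# Classical parametric rate-distortion functions for a stationary Gaussian
# source with power spectral density S(w), parameterized by the "water
# level" theta (reverse water-filling):
#   D(theta) = (1/2pi) * integral of min(theta, S(w)) dw
#   R(theta) = (1/2pi) * integral of max(0, 0.5*log2(S(w)/theta)) dw
# Illustration only; the PSD below is an assumed AR(1) example, not the
# predicted-error-frame PSD approximation used in the paper.

def parametric_rd(psd, w, theta):
    """Return (distortion, rate in bits/sample) for water level theta."""
    d = np.trapz(np.minimum(theta, psd), w) / (2 * np.pi)
    r = np.trapz(np.maximum(0.0, 0.5 * np.log2(psd / theta)), w) / (2 * np.pi)
    return d, r

w = np.linspace(-np.pi, np.pi, 4001)
rho = 0.9  # assumed AR(1) correlation coefficient (hypothetical)
psd = (1 - rho**2) / (1 - 2 * rho * np.cos(w) + rho**2)  # unit-variance AR(1) PSD

for theta in (0.5, 0.1, 0.01):
    d, r = parametric_rd(psd, w, theta)
    print(f"theta={theta:.2f}  D={d:.4f}  R={r:.3f} bits/sample")
```

Sweeping theta traces out the (R, D) curve: lowering the water level reduces distortion and raises rate, and more sharply peaked spectra (stronger prediction correlation) yield lower rates at a given distortion.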