I. Introduction
Available video coding algorithms, notably those adopted by the ITU-T and ISO/IEC MPEG standards, exploit the source signal statistics at the encoder using a block-based prediction and transform coding paradigm, known as hybrid or predictive video coding. The key coding tools in hybrid coding are: 1) temporal prediction, to exploit the temporal redundancy between video frames; 2) transform coding, e.g., the discrete cosine transform (DCT), to exploit the spatial redundancy within each frame; 3) quantization of the transform coefficients, to exploit visual irrelevancy arising from the limitations of the human visual system; and 4) entropy coding, to exploit the statistical redundancy of the resulting coding symbols. Since a hybrid video coding solution exploits the correlation between and within video frames at the encoder, it typically leads to rather complex encoders and much simpler decoders; the only real flexibility in allocating the complexity budget is to simplify the encoder, at the cost of compression efficiency. This conventional approach is especially appropriate for services and systems such as broadcasting and video-on-demand, where the video data is encoded once and decoded by many decoders, which should ideally be as simple as possible.
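As a minimal illustrative sketch (not taken from any standard), the snippet below shows steps 2 and 3 of the hybrid tool chain on a toy 8x8 prediction residual: an orthonormal 2D DCT followed by uniform scalar quantization of the coefficients, plus the corresponding decoder-side reconstruction. The block values, quantization step, and function names are hypothetical and chosen only for illustration.

```python
import numpy as np


def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)


def transform_and_quantize(block, qstep=16.0):
    """Forward 2D DCT of a residual block (step 2) and uniform quantization (step 3)."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T            # spatial decorrelation
    levels = np.round(coeffs / qstep)   # lossy step: discards visually irrelevant detail
    return levels.astype(int)


def dequantize_and_inverse(levels, qstep=16.0):
    """Decoder-side reconstruction: rescale the levels and apply the inverse DCT."""
    C = dct_matrix(levels.shape[0])
    coeffs = levels * qstep
    return C.T @ coeffs @ C


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    residual = rng.normal(0.0, 10.0, size=(8, 8))  # toy prediction residual block
    levels = transform_and_quantize(residual)
    recon = dequantize_and_inverse(levels)
    print("nonzero quantized coefficients:", np.count_nonzero(levels))
    print("reconstruction MSE:", np.mean((residual - recon) ** 2))
```

In a complete hybrid codec, the quantized levels would then be entropy coded (step 4), and the reconstructed block would be added back to the temporal prediction (step 1); the sketch above only isolates the transform and quantization stages.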