The field of video coding has long pursued compact representations of video data, in which perceptual redundancies, in addition to signal redundancies, are removed for higher compression. Many research efforts have been dedicated to modeling the characteristics of the human visual system, and the resulting models have been integrated into video coding frameworks in various ways. Among them, coding enhancements based on the just noticeable distortion (JND) model have drawn much attention in recent years due to their significant gains. A common application of the JND model is to adjust quantization by a multiplying factor corresponding to the JND threshold. In this paper, we propose an alternative perceptual video coding method that improves upon the current H.264/Advanced Video Coding (AVC) framework using an independent JND-directed suppression tool. This new tool finely tunes quantization using a JND-normalized error model. To make full use of this new rate-distortion adjustment component, the Lagrange multiplier for rate-distortion optimization is derived in terms of the equivalent distortion. Because the H.264/AVC integer discrete cosine transform (DCT) differs from the classic DCT, on which state-of-the-art JND models are computed, we analytically derive a JND mapping formula between the integer DCT domain and the classic DCT domain, which allows us to reuse existing JND models in a natural way. In addition, the JND threshold can be refined by adopting a saliency algorithm in the coding framework, and the complexity of the JND computation is reduced by reusing the encoder's motion estimation. Another benefit of the proposed scheme is that it remains fully compliant with the existing H.264/AVC standard. Subjective experimental results show that significant bit savings can be obtained with our method while maintaining visual quality similar to that of traditional H.264/AVC coded video.
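The JND-directed suppression and the JND-normalized error model can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation: the function names, the elementwise thresholding rule, and the simple squared-error normalization are all assumptions made for the sketch.

```python
import numpy as np

def jnd_suppress(coeffs: np.ndarray, jnd: np.ndarray) -> np.ndarray:
    """Zero out transform coefficients whose magnitude falls below the
    per-coefficient JND threshold; removing such coefficients is assumed
    to introduce distortion below the visibility threshold."""
    return np.where(np.abs(coeffs) < jnd, 0.0, coeffs)

def jnd_normalized_distortion(orig: np.ndarray, recon: np.ndarray,
                              jnd: np.ndarray) -> float:
    """Squared error with each coefficient error divided by its JND
    threshold, so only perceptually significant error contributes."""
    return float(np.sum(((orig - recon) / jnd) ** 2))

# Example: coefficients below their JND threshold (2.0) are suppressed.
coeffs = np.array([10.0, 1.0, -0.5, 6.0])
jnd = np.full(4, 2.0)
suppressed = jnd_suppress(coeffs, jnd)        # [10.0, 0.0, 0.0, 6.0]
d = jnd_normalized_distortion(coeffs, suppressed, jnd)
```

In a real encoder such a distortion measure would replace the plain sum of squared errors inside the rate-distortion cost, which is why the abstract notes that the Lagrange multiplier must be re-derived for the equivalent distortion.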