In this paper, we propose an efficient and practical algorithm that dynamically adapts the Lagrange multiplier for each macroblock, based on the context of neighboring or upper-layer blocks, to improve rate-distortion performance. Our method improves the detection of true motion vectors as well as the selection of the most efficient encoding modes for luma, which are used for deriving the motion vectors, and the modes for chroma. Simulation results for H.264/advanced video coding (AVC) demonstrate that our method significantly reduces bit rate and achieves peak signal-to-noise ratio (PSNR) gains over the joint model (JM) reference software for all sequences tested, with negligible extra computational cost. The improvement is particularly significant for high-motion, high-resolution videos. This paper describes the work that led to our contribution adopted by the Joint Video Team (included in the JM software from version 12.0 onward), collectively known as the context adaptive Lagrange multiplier (CALM).
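To make the underlying mechanism concrete, the sketch below illustrates Lagrangian rate-distortion mode decision with a per-macroblock multiplier adjusted from neighboring blocks. This is a minimal illustration of the general idea only; the blending rule, function names, and candidate-mode structure are assumptions for exposition, not the paper's actual CALM adaptation formula.

```python
def context_scaled_lambda(base_lambda, neighbor_lambdas):
    """Blend the base Lagrange multiplier with the multipliers already
    chosen for neighboring macroblocks (hypothetical 50/50 rule)."""
    if not neighbor_lambdas:
        return base_lambda
    avg = sum(neighbor_lambdas) / len(neighbor_lambdas)
    return 0.5 * base_lambda + 0.5 * avg  # assumed blending, for illustration

def best_mode(candidates, lam):
    """Pick the encoding mode minimizing the RD cost J = D + lambda * R."""
    return min(candidates, key=lambda m: m["distortion"] + lam * m["rate"])

# Usage: choose between two candidate modes for one macroblock.
modes = [
    {"name": "intra", "distortion": 120.0, "rate": 40.0},
    {"name": "inter", "distortion": 150.0, "rate": 20.0},
]
lam = context_scaled_lambda(1.0, [0.8, 1.2])
print(best_mode(modes, lam)["name"])  # prints "intra" (J = 160 vs. 170)
```

A context-adapted multiplier shifts the rate-distortion trade-off per block, so regions with different motion characteristics can favor cheaper or higher-fidelity modes accordingly.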