The joint model of scalable video coding (SVC) uses exhaustive mode and motion searches to select the best prediction mode and motion vector for each macroblock (MB), achieving high coding efficiency at the cost of high computational complexity. If major characteristics of a coding MB, such as prediction-mode complexity and motion properties, can be identified and used to adjust motion estimation (ME), an algorithm can be designed that adapts coding parameters to the video content. This way, unnecessary mode and motion searches can be avoided. In this paper, we propose a content-adaptive ME algorithm for SVC, including analyses of mode complexity and motion properties to guide mode and motion searches. An experimental analysis is performed to study interlayer and spatial correlations in the coding information. Based on these correlations, the motion and mode characteristics of the current MB are identified and used to adjust each step of ME at the enhancement layer, including mode decision, search-range selection, and prediction-direction selection. Experimental results show that the proposed algorithm significantly reduces the computational complexity of SVC while maintaining nearly the same rate-distortion performance as the original encoder.
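To illustrate the general idea of content-adaptive search-range selection, the sketch below adapts the motion-search window of an enhancement-layer MB from the motion vectors of its spatial neighbors and the co-located base-layer block. This is a minimal illustration of the technique, not the paper's exact algorithm; the function name, thresholds, and scaling factor are all assumptions.

```python
def adaptive_search_range(neighbor_mvs, base_layer_mv=None,
                          scale=2, min_range=4, max_range=32):
    """Pick a motion-search range from the spread of candidate MVs.

    Illustrative sketch: parameters and thresholds are assumptions.

    neighbor_mvs  : list of (dx, dy) MVs from spatial neighbors
                    (e.g. left, top, top-right MBs)
    base_layer_mv : (dx, dy) MV of the co-located base-layer MB,
                    upscaled to enhancement-layer resolution, or None
    """
    candidates = list(neighbor_mvs)
    if base_layer_mv is not None:
        candidates.append(base_layer_mv)
    if not candidates:
        # No interlayer or spatial context available: fall back to
        # the full search range.
        return max_range
    # Motion activity: largest MV component magnitude among predictors.
    activity = max(max(abs(dx), abs(dy)) for dx, dy in candidates)
    # Homogeneous slow motion -> small window; large motion -> big window.
    return max(min_range, min(max_range, scale * activity))
```

A slowly moving MB whose predictors are all near zero would receive the minimum window (saving most search points), while a fast-moving MB keeps a window close to the full range, preserving rate-distortion performance.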