Due to the popularity of online repositories, an efficient compression algorithm that removes statistical and psychovisual redundancies without perceptual degradation is important for data transmission and storage. Visual attention models and visual sensitivity models have been proposed to remove psychovisual redundancy. However, most visual attention models rely on spatial component analysis, and only a few adopt motion vectors for temporal component analysis, which lack perceptual information. This paper proposes a visual attention model based on scene motion and salient motion that effectively traces the movement of salient regions, and combines the resulting motion saliency map with Just Noticeable Distortion (JND) to determine the quantization parameters. The proposed framework achieves an 8% to 73% bit-rate reduction compared with the H.264 reference software JM14.0, roughly three times the reduction of previous methods. Visual quality assessment experiments indicate that participants cannot distinguish the compressed video streams from the original video streams.
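To make the core idea concrete, the following is a minimal sketch of how a per-block motion-saliency score and a JND threshold might jointly modulate the quantization parameter (QP), coarsening quantization where distortion is least visible. The function name, the modulation rule, and the parameter ranges are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: a hypothetical mapping from a per-macroblock
# motion-saliency score and a JND threshold to an H.264 quantization
# parameter (QP). The modulation rule here is an assumption, not the
# paper's actual method.

def adjust_qp(base_qp: int, saliency: float, jnd: float,
              max_offset: int = 6) -> int:
    """Raise QP (coarser quantization) where saliency is low and the
    JND threshold is high, i.e. where distortion is least noticeable."""
    # Distortion tolerance grows with JND and shrinks with saliency
    # (both assumed normalized to [0, 1]).
    tolerance = (1.0 - saliency) * jnd
    offset = round(max_offset * tolerance)
    # Clamp to the valid H.264 QP range [0, 51].
    return max(0, min(51, base_qp + offset))

# A non-salient, high-JND background block gets a larger QP (fewer bits)
# than a salient, low-JND foreground block.
print(adjust_qp(26, saliency=0.1, jnd=0.9))  # prints 31
print(adjust_qp(26, saliency=0.9, jnd=0.2))  # prints 26
```

This captures the general principle of JND-guided perceptual coding: bits are shifted away from regions where the human visual system tolerates more distortion.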