A high-performance and memory-efficient architecture for H.264/AVC motion estimation

Authors: Chao-Yang Kao (Dept. of Comput. Sci., Nat. Tsing Hua Univ., Hsinchu); Youn-Long Lin

Variable-block-size motion estimation (VBSME) is a major contributor to H.264/AVC's excellent coding efficiency. However, its high computational complexity and memory requirements make hardware design difficult. In this paper, we propose a memory-efficient hardware architecture for full-search VBSME (FSVBSME). Our architecture consists of sixteen 2-D arrays, each consisting of 16×16 processing elements (PEs). Four arrays form a group that matches four reference blocks in parallel against one current block, and four groups perform block matching for four current blocks in a consecutive, overlapped fashion. Taking advantage of reference-pixel overlap between the multiple reference blocks of a current block and between the search windows of adjacent current blocks, we propose a novel data reuse scheme to reduce memory access. Compared with the popular Level C data reuse method, our design saves 98% of on-chip memory accesses with only 27% memory overhead. Synthesized with a TSMC 130 nm CMOS cell library, our design takes 453K logic gates and 2.94 KB of on-chip memory. Running at 130 MHz, it can process 1920×1088 30 fps video with a 64×64 search range (SR) and two reference frames (RFs). We also suggest a criterion, called design efficiency, for comparing related designs; by this measure, our design is 27% more efficient than the best design to date.
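To make the operation being accelerated concrete, the following is a minimal software sketch of full-search block matching, the FSVBSME kernel the paper implements in hardware: the sum of absolute differences (SAD) of one current block is computed against every candidate position in a search window, and the offset with the lowest SAD becomes the motion vector. The block size and search range here are illustrative assumptions, not the paper's hardware parameters, and this sketch covers only a single fixed block size, not the full variable-block-size mode set.

```python
def sad(cur, ref, cx, cy, rx, ry, n=16):
    """SAD between the n x n current block at (cx, cy) in `cur`
    and the n x n reference block at (rx, ry) in `ref`."""
    total = 0
    for j in range(n):
        for i in range(n):
            total += abs(cur[cy + j][cx + i] - ref[ry + j][rx + i])
    return total


def full_search(cur, ref, cx, cy, search_range=8, n=16):
    """Exhaustively test every candidate offset within +/- search_range
    pixels and return (best_dx, best_dy, best_sad).

    In the paper's architecture this inner loop is what the PE arrays
    evaluate in parallel; here it runs sequentially for clarity.
    """
    h, w = len(ref), len(ref[0])
    best = (0, 0, float("inf"))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            rx, ry = cx + dx, cy + dy
            # Skip candidates whose reference block falls outside the frame.
            if 0 <= rx and rx + n <= w and 0 <= ry and ry + n <= h:
                s = sad(cur, ref, cx, cy, rx, ry, n)
                if s < best[2]:
                    best = (dx, dy, s)
    return best
```

Note that every candidate block overlaps its neighbors in all but one row or column of pixels; the paper's data reuse scheme exploits exactly this overlap (and the overlap between adjacent current blocks' search windows) to avoid refetching shared reference pixels from on-chip memory.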

Published in:

2008 IEEE International Conference on Multimedia and Expo

Date of Conference:

23-26 June 2008