Memory analysis of VLSI architecture for 5/3 and 1/3 motion-compensated temporal filtering [video coding applications]

Authors: Chao-Tsung Huang, Ching-Yeh Chen, Yi-Hau Chen, Liang-Gee Chen (Dept. of Electr. Eng., Nat. Taiwan Univ., Taipei, Taiwan)

To the best of the authors' knowledge, this paper presents the first work on memory analysis of VLSI architectures for motion-compensated temporal filtering (MCTF). The open-loop MCTF prediction scheme has driven a shift away from hybrid video coding methods, which are mainly based on the closed-loop motion-compensated prediction (MCP) scheme, and it has also become the core technology of the forthcoming video coding standard, MPEG-21 Part 13: Scalable Video Coding (SVC). In this paper, macroblock (MB)-level and frame-level data reuse schemes are analyzed for MCTF. The MB-level analysis targets motion estimation (ME), for which the Level C+ scheme is proposed; it further reduces the memory bandwidth of the conventional Level C scheme. Frame-level data reuse schemes for MCTF are proposed that exploit its open-loop prediction nature.
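The 5/3 filtering named in the title can be illustrated with its two lifting steps, a predict step producing high-pass frames and an update step producing low-pass frames. The sketch below is a minimal illustration on scalar "pixel traces" and ignores motion compensation entirely (the paper's MCTF first aligns pixels along motion trajectories); the function name and the symmetric boundary extension are assumptions, not the paper's implementation.

```python
def mctf_53(frames):
    """Hypothetical sketch: split an even-length frame sequence into
    high-pass (H) and low-pass (L) temporal subbands via 5/3 lifting.
    Motion compensation is omitted; boundaries use symmetric extension."""
    n = len(frames)
    assert n % 2 == 0 and n >= 2
    # Predict step: H[k] = odd frame minus the average of its even neighbors.
    H = []
    for k in range(n // 2):
        left = frames[2 * k]
        # Symmetric extension at the right boundary (an assumption here).
        right = frames[2 * k + 2] if 2 * k + 2 < n else frames[2 * k]
        H.append(frames[2 * k + 1] - (left + right) / 2.0)
    # Update step: L[k] = even frame plus a quarter of its high-pass neighbors.
    L = []
    for k in range(n // 2):
        prev = H[k - 1] if k >= 1 else H[k]  # symmetric extension on the left
        L.append(frames[2 * k] + (prev + H[k]) / 4.0)
    return L, H
```

Because the update step reads high-pass frames from both temporal directions, a hardware implementation must buffer frames across lifting steps, which is exactly the memory-bandwidth question the frame-level data reuse analysis addresses.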

Published in:

Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), 2005, Volume 5

Date of Conference:

18-23 March 2005