Scalable and area efficient concurrent interleaver for high throughput turbo-decoders

2 Author(s)
F. Speziali and J. Zory, STMicroelectronics, Geneva, Switzerland

Parallel turbo decoder architectures have recently been proposed to reach high-throughput channel decoding capacity. However, the implementation of the underlying parallel interleaving subsystem suffers from memory access conflicts; these translate into logic overhead and critical-path issues that are blocking factors for handheld system-on-chip solutions. In this paper, we explore several architecture and VLSI design strategies that drastically reduce the logic overhead and data-path delays of concurrent interleaving architectures. A stalling mechanism is introduced that reduces the interleaving subsystem die area and improves the architecture's scalability with respect to the number of MAP producers. ASIC synthesis results in 0.18μm and 0.13μm CMOS STMicroelectronics technologies demonstrate the efficiency of the proposed VLSI concurrent interleaving architecture.
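To illustrate the problem the abstract describes, the sketch below simulates several MAP producers writing interleaved symbols into single-ported memory banks and resolves bank conflicts by stalling the losing producers, so they retry on the next cycle. This is only an illustrative model of the general stall-on-conflict idea, not the paper's actual architecture; the function name, the random permutation standing in for the turbo interleaver law, and the `address % num_banks` bank mapping are all assumptions made for the example.

```python
import random

def simulate_interleaver(num_producers=4, num_banks=4, block_len=64, seed=0):
    """Model concurrent interleaved writes with a stall-on-conflict policy.

    Each cycle, every non-stalled MAP producer tries to write one symbol;
    the symbol's interleaved address selects a memory bank (address mod
    num_banks, an assumption for this sketch). A single-ported bank accepts
    one write per cycle, so producers that lose a bank are stalled and
    retry the same write on the next cycle.
    """
    rng = random.Random(seed)
    permutation = list(range(block_len))
    rng.shuffle(permutation)  # stand-in for the real interleaver law

    # Split the code block evenly among producers; each consumes its slice.
    chunk = block_len // num_producers
    queues = [list(range(p * chunk, (p + 1) * chunk))
              for p in range(num_producers)]

    cycles = 0
    writes = 0
    while any(queues):
        cycles += 1
        claimed = {}  # bank index -> producer that wins the bank this cycle
        for p, q in enumerate(queues):
            if q:
                bank = permutation[q[0]] % num_banks
                claimed.setdefault(bank, p)  # first claimant wins
        for bank, p in claimed.items():
            queues[p].pop(0)  # winning write commits; losers are stalled
            writes += 1
    return cycles, writes
```

With a conflict-free interleaver the block would drain in `block_len / num_producers` cycles; the gap between that lower bound and the simulated cycle count is the throughput cost that stall cycles impose, which is the trade-off the paper's stalling mechanism exploits to save die area.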

Published in:

Euromicro Symposium on Digital System Design (DSD 2004)

Date of Conference:

31 Aug.-3 Sept. 2004