
Efficient per-flow queueing in DRAM at OC-192 line rate using out-of-order execution techniques


2 Author(s):
Nikologiannis, A.; Katevenis, M. — Institute of Computer Science, Foundation for Research & Technology, Crete, Greece

Modern switches and routers often use dynamic RAM (DRAM) in order to provide large buffer space. For advanced quality of service (QoS), per-flow queueing is desirable. We study the architecture of a queue manager for many thousands of queues at OC-192 (10 Gbps) line rate. It forms the core of the "datapath" chip in an efficient chip partitioning that we propose for the line cards of switches and routers. To effectively deal with bank conflicts in the DRAM buffer, we use pipelining and out-of-order execution techniques, like those originating in the supercomputers of the 1960s. To avoid off-chip SRAM, we maintain the queue pointers in the DRAM itself, using free buffer preallocation and free list bypassing. We have described our architecture in behavioral Verilog (a hardware description language) at clock-cycle accuracy, assuming Rambus DRAM. We estimate the complexity of the queue manager at roughly 60 thousand gates, 80 thousand flip-flops, and 4180 Kbits of on-chip SRAM, for 64 K flows.
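The free list bypassing idea mentioned in the abstract can be illustrated with a toy model (a hypothetical sketch, not the paper's actual design; all names here are ours): when a packet departure frees a buffer in the same cycle that an arrival needs one, the freed buffer is handed directly to the arrival, skipping the free-list push and pop and thus saving pointer-memory accesses.

```python
class BufferPool:
    """Toy per-buffer next-pointer store with a free list.

    Illustrative sketch of free list bypassing: a buffer freed by a
    departure can be recycled directly into a concurrent arrival,
    avoiding two free-list (pointer-memory) accesses.
    """

    def __init__(self, nbuffers):
        # next[i] links buffer i either into a flow queue or the free list.
        self.next = [i + 1 for i in range(nbuffers)]
        self.next[-1] = -1          # end of free list
        self.free_head = 0
        self.free_ops = 0           # counts free-list accesses, for illustration

    def alloc(self, recycled=None):
        """Get a buffer; if `recycled` is given, bypass the free list."""
        if recycled is not None:
            return recycled         # bypass: zero free-list accesses
        self.free_ops += 1
        b = self.free_head
        self.free_head = self.next[b]
        return b

    def free(self, b):
        """Return buffer b to the free list (one pointer-memory access)."""
        self.free_ops += 1
        self.next[b] = self.free_head
        self.free_head = b


pool = BufferPool(8)
departed = 5                        # buffer freed by a departing packet
arrival_buf = pool.alloc(recycled=departed)   # bypass path
print(arrival_buf, pool.free_ops)   # buffer 5 reused, 0 free-list accesses

pool.free(departed)                 # non-bypass path for comparison
pool.alloc()
print(pool.free_ops)                # 2 free-list accesses without bypassing
```

The same structure also hints at why preallocation helps: if each queue tail already owns a spare buffer, an arriving packet's data and next-pointer can be written without first consulting the free list at all.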

Published in:

IEEE International Conference on Communications (ICC 2001), Volume 7

Date of Conference: