
System-level performance optimization of the data queueing memory management in high-speed network processors

6 Author(s)

Abstract:

In high-speed network processors, data queueing has to support real-time memory (de)allocation, buffering, retrieval, and forwarding of incoming data packets. Its implementation must be highly optimized to combine high speed, low power, large data storage, and high memory bandwidth. In this paper, such data queueing is used as a case study to demonstrate the effectiveness of a new system-level exploration method for optimizing memory performance in dynamic memory management. Assuming that a multi-bank memory architecture is used for data storage, the method trades off bank conflicts against memory accesses during real-time memory (de)allocation. It has been applied to the data queueing module of the PRO3 system. Compared with the conventional memory management technique for embedded systems, our exploration method saves up to 90% of the bank conflicts, which in turn improves the worst-case memory performance of data queueing operations by 50%.
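The paper itself does not include source code, but the trade-off it explores can be illustrated with a small sketch: a packet-buffer pool split across memory banks, where each allocation prefers a bank different from the one touched by the previous operation. Avoiding back-to-back accesses to the same bank removes a bank conflict, at the cost of the extra accesses spent scanning other banks' free lists. All names and parameters below (NUM_BANKS, buf_alloc, and so on) are hypothetical and are not taken from the PRO3 design; this is a minimal C illustration of the idea, not the paper's exploration method.

/*
 * Hypothetical sketch of a multi-bank packet-buffer allocator.
 * Buffers live in per-bank free lists; an allocation prefers any bank
 * other than the one touched by the previous operation, so that
 * consecutive queueing operations hit different banks and avoid a
 * bank conflict, at the cost of extra accesses while scanning banks.
 */
#include <stddef.h>
#include <stdio.h>

#define NUM_BANKS     4
#define BUFS_PER_BANK 64

typedef struct buffer {
    struct buffer *next;   /* next free buffer in the same bank */
    int bank;              /* bank this buffer physically lives in */
    /* packet payload would follow in a real implementation */
} buffer_t;

static buffer_t pool[NUM_BANKS][BUFS_PER_BANK];
static buffer_t *free_list[NUM_BANKS];
static int last_bank = -1;          /* bank accessed by the previous operation */

void pool_init(void)
{
    for (int b = 0; b < NUM_BANKS; b++) {
        free_list[b] = NULL;
        for (int i = 0; i < BUFS_PER_BANK; i++) {
            pool[b][i].bank = b;
            pool[b][i].next = free_list[b];
            free_list[b] = &pool[b][i];
        }
    }
}

/* Allocate a buffer, preferring a bank different from the last one used. */
buffer_t *buf_alloc(void)
{
    for (int i = 1; i <= NUM_BANKS; i++) {
        int b = (last_bank + i) % NUM_BANKS;   /* start after the last bank */
        if (free_list[b] != NULL) {
            buffer_t *buf = free_list[b];
            free_list[b] = buf->next;          /* one extra access per bank tried */
            last_bank = b;
            return buf;
        }
    }
    return NULL;    /* pool exhausted */
}

void buf_free(buffer_t *buf)
{
    buf->next = free_list[buf->bank];
    free_list[buf->bank] = buf;
    last_bank = buf->bank;
}

int main(void)
{
    pool_init();
    buffer_t *a = buf_alloc();
    buffer_t *b = buf_alloc();
    printf("buffers from banks %d and %d\n", a->bank, b->bank); /* different banks */
    buf_free(a);
    buf_free(b);
    return 0;
}

In a hardware implementation the per-bank free lists would typically be kept in dedicated on-chip memory, so that scanning for a conflict-free bank does not itself consume packet-memory bandwidth.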

Published in:

Proceedings of the 39th Design Automation Conference (DAC 2002)

Date of Conference:

2002