Improving the throughput and delay performance of network processors by applying push model

6 Author(s): Bin Liu; Bo Yuan; Huichen Dai; Hongbo Zhao; and 2 more authors — Dept. of Comput. Sci. & Technol., Tsinghua Univ., Beijing, China

Traditional network processors (NPs) adopt a pull model, in which NP cores pull packet data from external memory into local memory, triggered by cache misses or fetch instructions. Because data fetching incurs long latency, hardware multithreading is typically used to hide the waiting time. However, multithreading introduces context-switch overhead, making it inefficient for payload-processing applications. We propose a push model for the architectural design of future NPs to increase throughput and decrease processing delay. A hardware push unit moves the segments of a packet into a core's local memory, reducing hardware thread switching. We give theoretical analyses comparing the performance of the pull and push models, and verify the design on our FPGA-based THNPU NP platform. Experimental results indicate that the push model not only improves system throughput but also reduces delay, with only a marginal increase in logic gates.
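The contrast between the two models can be sketched with a toy latency calculation. This is a simplified illustration, not the paper's analysis: the parameters (segment count, memory latency, per-segment compute time) and the assumption that the push unit fully overlaps transfer with computation are hypothetical.

```python
# Toy latency model contrasting the pull and push NP designs described above.
# All parameters are hypothetical and not taken from the paper.

def pull_delay(segments: int, mem_latency: int, compute: int) -> int:
    """Pull model: the core stalls on a memory fetch before processing
    each segment, so fetch and compute times add up serially."""
    return segments * (mem_latency + compute)

def push_delay(segments: int, mem_latency: int, compute: int) -> int:
    """Push model: a hardware push unit moves the next segment into local
    memory while the core processes the current one, overlapping transfer
    with computation."""
    # The first segment must still be fetched; afterwards each step takes
    # the longer of the overlapped transfer and compute, plus the final
    # segment's compute time.
    return mem_latency + (segments - 1) * max(mem_latency, compute) + compute

if __name__ == "__main__":
    # Example: 4 segments, 100-cycle memory latency, 20-cycle compute.
    print(pull_delay(4, 100, 20))  # 480 cycles
    print(push_delay(4, 100, 20))  # 420 cycles
```

In this sketch the push model wins whenever a packet has more than one segment, because every fetch after the first is hidden behind compute; with a single segment the two models coincide, which matches the intuition that overlap only pays off on multi-segment packets.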

Published in:

2012 IEEE 20th International Workshop on Quality of Service (IWQoS)

Date of Conference:

4-5 June 2012