Providing guaranteed rate services in the load balanced Birkhoff-von Neumann switches

Authors: Cheng-Shang Chang, Duan-Shin Lee, and Chi-Yao Yue (Institute of Communications Engineering, National Tsing Hua University, Hsinchu)

In this paper, we propose two schemes for the load balanced Birkhoff-von Neumann switches to provide guaranteed rate services. The first scheme is based on an earliest eligible time first (EETF) policy. In this scheme, we assign every packet of a guaranteed rate flow a targeted departure time, namely its departure time from the corresponding work-conserving link with capacity equal to the guaranteed rate. By implementing the EETF policy with jitter control mechanisms and first-come-first-served (FCFS) queues, we show that the end-to-end delay for every packet of a guaranteed rate flow is bounded by the sum of its targeted departure time and a constant that depends only on the number of flows and the size of the switch. Our second scheme is a frame-based scheme as in Keslassy and McKeown, 2002. There, time slots are grouped into fixed size frames. Packets are placed in appropriate bins (buffers) according to their arrival times and their flows. We show that if the incoming traffic satisfies certain rate assumptions, then the end-to-end delay for every packet and the size of the central buffers are both bounded by constants that depend only on the size of the switches and the frame size. The second scheme is much simpler than the first one in many aspects: 1) the on-line complexity is O(1) as there is no need for complicated scheduling; 2) central buffers are finite and thus can be built into a single chip; 3) connection patterns of the two switch fabrics are changed less frequently; 4) there is no need for a resequencing-and-output buffer after the second stage; and 5) variable length packets may be handled without segmentation and reassembly.
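
To illustrate the first scheme, the short Python sketch below computes, for a single guaranteed rate flow, the targeted departure time of each packet, i.e., the time it would leave a work-conserving link running at the flow's guaranteed rate; the EETF policy then schedules packets against these targets. This is an illustrative sketch only, not code from the paper: it assumes fixed-size cells, slot-based time, and a rate given in cells per slot, and the function name is invented.

def targeted_departure_times(arrival_slots, rate):
    """Targeted departure times for the cells of one guaranteed rate flow.

    Each target is the cell's departure time from a hypothetical
    work-conserving link whose capacity equals the guaranteed rate.
    Assumes fixed-size cells, time in slots, and 'rate' in cells per slot,
    so one cell takes 1/rate slots to transmit (illustrative assumptions).
    """
    targets = []
    prev_departure = 0.0
    for a in arrival_slots:
        # The virtual link starts serving a cell when it arrives or when the
        # previous cell of the same flow finishes, whichever is later.
        start = max(a, prev_departure)
        departure = start + 1.0 / rate
        targets.append(departure)
        prev_departure = departure
    return targets


# Example: a flow guaranteed 0.25 cells/slot. A burst of three cells arriving
# at slot 0 gets targets 4, 8, 12; a later cell at slot 20 gets target 24.
print(targeted_departure_times([0, 0, 0, 20], rate=0.25))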

Published in:

IEEE/ACM Transactions on Networking (Volume: 14, Issue: 3)