How to waste 2/3 of the throughput of a switch: a tight characterization of load balancing algorithms that do not split

Sign In

Cookies must be enabled to login.After enabling cookies , please use refresh or reload or ctrl+f5 on the browser for the login options.

The purchase and pricing options are temporarily unavailable. Please try again later.
1 Author(s)
Mneimneh, S. ; CUNY, New York

Load balancing has recently acquired increased interest among researchers in the switching community. The premise is to replace the single switch, running at a speed proportional to the number n of input and output ports (and to the line rate), by a load balanced switch consisting of k switches, each running at 1/k the speed. Indeed, with such an architecture, and for some classes of traffic patterns, simply spreading the traffic among the k switches achieves 100% throughput (thus the term load balancing). However, reordering among packets of the same flow may occur, which becomes a major concern. One way of avoiding this problem is to use a load balancing algorithm that does not split, i.e. one that routes all packets of a given flow through exactly one of the k switches, without re-routing existing flows. We shall call such a load balancing algorithm a non-splitting algorithm. While non-splitting algorithms definitely preserve the order of packets, it is intuitively well understood that these algorithms waste throughput. The focus of this paper is to exactly characterize this waste. We prove an asymptotically tight upper bound of 1/3 on the throughput of any non-splitting algorithm when n >> k.
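To make the non-splitting idea concrete, the following is a minimal sketch (not from the paper) of the simplest non-splitting policy: hash each flow identifier to exactly one of the k switches, so every packet of that flow traverses the same switch and intra-flow order is preserved. The function name and flow-identifier format are illustrative assumptions.

```python
import hashlib

def assign_switch(flow_id: str, k: int) -> int:
    """Non-splitting policy sketch: map a flow to exactly one of k switches.

    Because the mapping depends only on the flow identifier, every packet
    of a flow takes the same path, so no reordering can occur within a
    flow. The cost, as the paper shows, is throughput: for n >> k no
    non-splitting algorithm can exceed 1/3 of the switch capacity.
    """
    digest = hashlib.sha256(flow_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % k

# Every packet of the same flow is routed to the same switch:
k = 4
flow = "10.0.0.1->10.0.0.2:80"  # hypothetical flow identifier
first = assign_switch(flow, k)
assert all(assign_switch(flow, k) == first for _ in range(100))
```

A splitting algorithm would instead spread packets of one flow across several of the k switches (e.g. round-robin), recovering throughput at the price of possible reordering.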

Published in:

High Performance Switching and Routing, 2007. HPSR '07. Workshop on

Date of Conference:

May 30 - June 1, 2007