
IEEE/ACM Transactions on Networking

Issue 3 • June 2003


  • A spectrum of TCP-friendly window-based congestion control algorithms

    Page(s): 341 - 355

    The increasing diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The control rules are parameterized so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. This paper presents a comprehensive study of a new spectrum of window-based congestion controls, which are TCP-friendly as well as TCP-compatible under RED. Our controls utilize history information in their control rules, and by doing so, they improve the transient behavior. We demonstrate analytically, and through extensive ns simulations, the steady-state and transient behavior of several instances of this new spectrum.

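    A minimal sketch of a parameterized window-based increase/decrease control, in the spirit of the abstract's "increase rules" and "decrease rules". It is not the paper's history-based family: it is the generic binomial-style rule (grow by alpha/w^k per round, shrink by beta*w^l per loss), and the parameter and loss-rate values are made up.

        import random

        def run(alpha=1.0, beta=0.5, k=0.0, l=1.0, loss_rate=0.01, rounds=10000):
            """Average window (roughly throughput in packets per RTT) of a toy (k, l) control."""
            w, sent = 1.0, 0.0
            for _ in range(rounds):
                sent += w
                if random.random() < min(1.0, loss_rate * w):   # a round of w packets sees a loss w.p. ~ p*w
                    w = max(1.0, w - beta * w ** l)              # decrease rule
                else:
                    w += alpha / w ** k                          # increase rule, applied once per round (RTT)
            return sent / rounds

        # k = 0, l = 1 recovers AIMD (TCP-like); other choices with k + l = 1 remain TCP-friendly.
        print("AIMD-like:", round(run(k=0.0, l=1.0), 1))
        print("IIAD-like:", round(run(k=1.0, l=0.0), 1))
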
  • Delay-based congestion avoidance for TCP

    Page(s): 356 - 369

    The set of TCP congestion control algorithms associated with TCP-Reno (e.g., slow-start and congestion avoidance) has been crucial to ensuring the stability of the Internet. Algorithms such as TCP-NewReno (which has been deployed) and TCP-Vegas (which has not been deployed) represent incrementally deployable enhancements to TCP, as they have been shown to improve a TCP connection's throughput without degrading the performance of competing flows. Our research focuses on delay-based congestion avoidance (DCA) algorithms, like TCP-Vegas, which attempt to utilize the congestion information contained in packet round-trip time (RTT) samples. Through measurement and simulation, we show evidence suggesting that a single deployment of DCA (i.e., a TCP connection enhanced with a DCA algorithm) is not a viable enhancement to TCP over high-speed paths. We define several performance metrics that quantify the level of correlation between packet loss and RTT. Based on our measurement analysis, we find that, although there is useful congestion information contained within RTT samples, the correlation between an increase in RTT and packet loss is not strong enough to allow a TCP sender to improve throughput reliably. While DCA is able to reduce the packet loss rate experienced by a connection, in its attempts to avoid packet loss the algorithm reacts unnecessarily to RTT variation that is not associated with packet loss. The result is degraded throughput compared to a similar flow that does not use DCA.

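    A minimal sketch of a Vegas-style delay-based congestion-avoidance step, the kind of DCA rule the abstract refers to. The function name and the alpha/beta values (in packets) are illustrative and not taken from the paper.

        def dca_update(cwnd, base_rtt, rtt, alpha=2.0, beta=4.0):
            """Return the new congestion window after one RTT worth of samples."""
            expected = cwnd / base_rtt                  # rate if the path were uncongested
            actual = cwnd / rtt                         # rate actually achieved this RTT
            backlog = (expected - actual) * base_rtt    # estimated packets queued in the network
            if backlog > beta:                          # rising RTT read as congestion: back off
                return cwnd - 1
            if backlog < alpha:                         # little queueing observed: keep probing
                return cwnd + 1
            return cwnd

        # base RTT 100 ms, current RTT 140 ms, window 20 packets: backlog of about 5.7 > beta, so shrink.
        print(dca_update(20, 0.100, 0.140))             # -> 19
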
  • Comparative study of various TCP versions over a wireless link with correlated losses

    Page(s): 370 - 383

    We investigate the behavior of various transmission control protocol (TCP) versions over wireless links with correlated packet losses. For such a scenario, we show that the performance of NewReno is worse than that of Tahoe in many situations, and even than that of OldTahoe in a few, because of NewReno's inefficient fast-recovery method. We also show that random loss leads to significant throughput deterioration when either the product of the square of the bandwidth-delay ratio and the loss probability in the good state exceeds one, or the product of the bandwidth-delay ratio and the packet success probability in the bad state is less than two. The performance of Sack is consistently the best and the most robust, arguing for the implementation of TCP-Sack over the wireless channel. We also show that, under certain conditions, the performance depends not only on the bandwidth-delay product but also on the nature of the timeout, coarse or fine. We have also investigated the effects of reducing the fast-retransmit threshold.

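    The abstract's good/bad channel states suggest a two-state (Gilbert-Elliott-style) loss model; the sketch below pairs a toy trace generator for such a channel with a literal transcription of the two degradation conditions quoted above. All parameter values are made up, and the precise definitions of the quantities are in the paper.

        import random

        def loss_trace(n, p_gb=0.01, p_bg=0.3, p_loss_good=0.001, p_loss_bad=0.5):
            """True = packet lost; losses are correlated through the good/bad channel state."""
            state, trace = "good", []
            for _ in range(n):
                trace.append(random.random() < (p_loss_good if state == "good" else p_loss_bad))
                if state == "good" and random.random() < p_gb:
                    state = "bad"
                elif state == "bad" and random.random() < p_bg:
                    state = "good"
            return trace

        def significant_degradation(bw_delay_ratio, p_loss_good, p_success_bad):
            # The two conditions quoted in the abstract, transcribed literally.
            return (bw_delay_ratio ** 2) * p_loss_good > 1 or bw_delay_ratio * p_success_bad < 2

        print(sum(loss_trace(10000)), "losses in 10000 packets")
        print(significant_degradation(10.0, 0.02, 0.1))   # True: 10**2 * 0.02 = 2 > 1
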
  • Bandwidth-delay constrained path selection under inaccurate state information

    Page(s): 384 - 398

    A key issue in any QoS routing framework is how to compute a path that satisfies given QoS constraints. We focus on the path computation problem subject to bandwidth and delay constraints. This problem is easy to solve if exact state information is available to the node computing the path. In practice, however, nodes have only imprecise knowledge of the network state, and relying on outdated information and treating it as exact can significantly degrade the effectiveness of the path selection. We adopt a probabilistic approach in which the state parameters (available bandwidth and delay) are characterized by random variables. The goal is then to find the most-probable bandwidth-delay-constrained path (MP-BDCP). We provide efficient solutions for the MP-BDCP problem by decomposing it into the most-probable delay-constrained path (MP-DCP) problem and the most-probable bandwidth-constrained path (MP-BCP) problem. MP-DCP by itself is known to be NP-hard, necessitating the use of approximate solutions. We use the central limit theorem and Lagrange relaxation techniques to provide two complementary solutions for MP-DCP; these solutions are highly efficient, requiring on average only a few iterations of Dijkstra's shortest path algorithm. MP-BCP, in turn, can easily be transformed into a variant of the shortest path problem. Our MP-DCP and MP-BCP solutions are then combined to obtain a set of near-nondominated paths for the MP-BDCP problem, from which decision makers can select one or more paths based on a specific utility function. Extensive simulations demonstrate the efficiency of the proposed algorithmic solutions and, more generally, contrast the probabilistic path-selection approach with the standard threshold-based triggered approach.

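    A sketch of the MP-BCP piece only: with independent links, the path that maximizes the probability of satisfying the bandwidth constraint is the shortest path under weights -log P(link can supply the requested bandwidth), which is the "variant of the shortest path problem" the abstract mentions. The MP-DCP part (central limit theorem plus Lagrange relaxation) is not sketched, and the graph and probabilities below are made up.

        import heapq, math

        def most_probable_bw_path(graph, src, dst):
            """graph: {node: [(neighbor, P(link has enough bandwidth)), ...]}"""
            dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    break
                if d > dist.get(u, math.inf):
                    continue
                for v, p in graph[u]:
                    nd = d - math.log(p) if p > 0 else math.inf   # -log turns probability products into sums
                    if nd < dist.get(v, math.inf):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return path[::-1], math.exp(-dist[dst])               # path and its success probability

        g = {"a": [("b", 0.9), ("c", 0.6)], "b": [("d", 0.8)], "c": [("d", 0.99)], "d": []}
        print(most_probable_bw_path(g, "a", "d"))                  # (['a', 'b', 'd'], ~0.72)
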
  • Performance analysis of the multiple input-queued packet switch with the restricted rule

    Page(s): 478 - 487

    The multiple input-queued (MIQ) switch maintains multiple (m) queues at each input port, each dedicated to a group of output ports. Since each input port can switch up to m cells in a time slot, one from each queue, it largely avoids the head-of-line (HOL) blocking that is known to be the decisive factor limiting the throughput of the single input-queued (SIQ) switch. As a result, the MIQ switch offers improved performance as the number of queues m per input increases. However, serving multiple cells from one input could require internal speedup or expansion of the switch fabric, diluting the merit of high-speed operation in the conventional SIQ scheme. The restricted rule circumvents this side effect by regulating the number of cells switched from an input port. We analyze the performance of the MIQ switch employing the restricted rule: closed-form expressions for the throughput bound, the mean cell delay and average queue length, and the cell loss bound of the switch are derived as functions of m by generalizing the SIQ analysis of J.Y. Hui and E. Arthurs (IEEE J. Select. Areas Commun., vol. SAC-5, pp. 1262-1273, 1987).

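    A toy slot-level simulation of an N x N MIQ switch with m queues per input, each queue dedicated to a group of N/m outputs. The "restricted rule" is modelled here as allowing at most one cell to be switched from each input per slot, which is one reading of the abstract's description; the exact rule and the closed-form results are in the paper, and all parameter values are made up.

        import random
        from collections import deque

        def simulate(N=8, m=4, load=0.9, slots=20000):
            group = lambda out: out * m // N                        # queue that output `out` maps to
            queues = [[deque() for _ in range(m)] for _ in range(N)]
            delivered = 0
            for _ in range(slots):
                for i in range(N):                                  # Bernoulli arrivals, uniform outputs
                    if random.random() < load:
                        out = random.randrange(N)
                        queues[i][group(out)].append(out)
                used_inputs = set()                                 # restricted rule: one cell per input per slot
                for out in random.sample(range(N), N):              # outputs arbitrate in random order
                    contenders = [i for i in range(N)
                                  if i not in used_inputs
                                  and queues[i][group(out)]
                                  and queues[i][group(out)][0] == out]
                    if contenders:
                        i = random.choice(contenders)
                        queues[i][group(out)].popleft()
                        used_inputs.add(i)
                        delivered += 1
            return delivered / (slots * N)                          # per-port throughput

        print(round(simulate(), 3))   # typically well above the 0.586 HOL saturation limit of the SIQ switch
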
  • Multicast traffic in input-queued switches: optimal scheduling and maximum throughput

    Page(s): 465 - 477

    The paper studies input-queued packet switches loaded with both unicast and multicast traffic. The packet switch architecture is assumed to comprise a switching fabric with multicast (and broadcast) capabilities, operating in a synchronous slotted fashion. Fixed-size data units, called cells, are transferred from each switch input to any set of outputs in one time slot, according to the decisions of the switch scheduler, which identifies at each time slot a set of nonconflicting cells, i.e., cells that neither come from the same input nor are directed to the same output. First, multicast traffic admissibility conditions are discussed, and a simple counterexample is presented, showing intrinsic performance losses of input-queued with respect to output-queued switch architectures. Second, the optimal scheduling discipline to transfer multicast packets from inputs to outputs is defined. This discipline is rather complex, requires a queueing architecture that probably is not implementable, and does not guarantee in-sequence delivery of data. However, from the definition of the optimal multicast scheduling discipline, the formal characterization of the sustainable multicast traffic region naturally follows. Then, several theorems showing intrinsic performance losses of input-queued with respect to output-queued switch architectures are proved. In particular, we prove that, when per-multicast-flow FIFO queueing architectures are used, the internal speedup that guarantees 100% throughput under admissible traffic grows with the number of switch ports.

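    A greedy selection of nonconflicting multicast cells in the sense used above: no two selected cells share an input, and no two are directed to the same output. This only illustrates the constraint, not the optimal discipline defined in the paper, and the input data are invented.

        def greedy_schedule(cells):
            """cells: list of (input_port, frozenset_of_output_ports)."""
            chosen, used_inputs, used_outputs = [], set(), set()
            for inp, outs in sorted(cells, key=lambda c: -len(c[1])):   # larger fanouts first (arbitrary tie-break)
                if inp not in used_inputs and not (outs & used_outputs):
                    chosen.append((inp, outs))
                    used_inputs.add(inp)
                    used_outputs |= outs
            return chosen

        cells = [(0, frozenset({0, 1, 2})), (1, frozenset({2, 3})), (2, frozenset({3})), (1, frozenset({0}))]
        print(greedy_schedule(cells))   # the cell from input 0 blocks the one from input 1 at output 2
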
  • Delay analysis of feedback-synchronization signaling for multicast flow control

    Page(s): 436 - 450

    Feedback signaling plays a key role in flow control because the traffic source relies on the signaling information to make correct and timely flow-control decisions. Designing an efficient signaling algorithm is challenging since the signaling messages can tolerate neither error nor latency. Multicast flow-control signaling imposes two additional challenges: scalability and feedback synchronization. Previous research on multicast feedback-synchronization signaling has mainly focused on algorithm design and implementation; the delay properties of these algorithms, despite their vital importance, are neither well understood nor thoroughly studied. We develop both deterministic and statistical binary-tree models to study the delay performance of multicast signaling algorithms. The deterministic model is used to derive expressions for each path's feedback round-trip time in a multicast tree, while the statistical model is employed to derive the general probability distributions of each path becoming the multicast-tree bottleneck. Using these models, we analyze and contrast the signaling-delay scalability of two representative multicast signaling protocols - the soft-synchronization protocol (SSP) and the hop-by-hop (HBH) scheme - by deriving the first and second moments of their multicast signaling delays. Also derived is the optimal flow-control update interval for SSP that minimizes the multicast signaling delay.

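    A Monte Carlo stand-in for the statistical binary-tree model: draw random link delays on a complete binary tree, take each root-to-leaf round-trip time, and record which path is the bottleneck (the slowest, which gates the consolidated feedback) along with the first two moments of that delay. The depth, delay distribution, and symmetry assumption are all invented for illustration.

        import random
        from collections import Counter

        def path_rtts(depth, mean_delay=0.010):
            rtts = [0.0]
            for _ in range(depth):                # extend every path by one independently delayed hop
                rtts = [r + 2 * random.expovariate(1 / mean_delay)   # x2: feedback direction assumed symmetric
                        for r in rtts for _ in (0, 1)]
            return rtts

        def bottleneck_stats(depth=4, runs=5000):
            counts, m1, m2 = Counter(), 0.0, 0.0
            for _ in range(runs):
                rtts = path_rtts(depth)
                d = max(rtts)                     # feedback-synchronization delay of this round
                counts[rtts.index(d)] += 1
                m1 += d
                m2 += d * d
            return {leaf: c / runs for leaf, c in counts.items()}, m1 / runs, m2 / runs

        probs, mean, second = bottleneck_stats()
        print("P(leaf 0 is bottleneck) %.3f, mean delay %.3f s, second moment %.5f"
              % (probs.get(0, 0.0), mean, second))
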
  • A simple model of real-time flow aggregation

    Page(s): 422 - 435

    The IETF's integrated services (IntServ) architecture, together with reservation aggregation, provides a mechanism to support the quality-of-service demands of real-time flows in a scalable way, i.e., without requiring that each router be signaled with the arrival or departure of each new flow for which it forwards data. However, reserving resources in "bulk" implies that the reservation does not precisely match the true demand. Consequently, if the flows' demanded bandwidth varies rapidly and dramatically, aggregation can incur significant performance penalties of under-utilization and unnecessarily rejected flows. On the other hand, if demand varies moderately and at slower time scales, aggregation can provide an accurate and scalable approximation to IntServ. We develop a simple analytical model and perform extensive trace-driven simulations to explore the effectiveness of aggregation under a broad class of factors. Example findings include: 1) a simple single-time-scale model with random noise can capture the essential behavior of surprisingly complex scenarios; 2) with a two-order-of-magnitude separation between the dominant time scale of demand and the time scale of signaling and moderate levels of secondary noise, aggregation achieves a performance that closely approximates that of IntServ.

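    A toy version of the single-time-scale-plus-noise demand model: demand follows a slow sinusoid with additive noise, while the aggregate reservation is only re-signalled every `update_every` steps, in bulk and with some headroom. It reports the two penalties the abstract names, reserved-but-unused bandwidth and periods of shortfall. All numbers are made up.

        import math, random

        def simulate(steps=2000, update_every=50, headroom=1.10):
            reservation, waste, shortfall = 0.0, 0.0, 0
            for t in range(steps):
                demand = 100 + 40 * math.sin(2 * math.pi * t / 1000) + random.gauss(0, 5)
                if t % update_every == 0:
                    reservation = headroom * demand        # bulk update, sized off current demand
                if demand > reservation:
                    shortfall += 1                         # under-reserved: new flows would be rejected
                else:
                    waste += reservation - demand          # reserved but unused bandwidth
            return waste / steps, shortfall / steps

        waste, shortfall = simulate()
        print("avg over-reservation %.1f units, shortfall fraction %.3f" % (waste, shortfall))
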
  • Optimal design of survivable mesh networks based on line switched WDM self-healing rings

    Page(s): 501 - 512

    Network survivability provided at the optical layer is a desirable feature in modern high-speed networks. For example, the wavelength division multiplexed (WDM) self-healing ring (SHR/WDM) provides a simple and fast optically transparent protection mechanism against any single fault in the ring. Multiple self-healing rings may be deployed to design a survivable optical mesh network by superposing a set of rings on the arbitrary topology. However, the optimum design of such a network requires the joint solution of three subproblems: the ring cover of the arbitrary topology (the RC subproblem); the routing of the working lightpaths between end-node pairs to carry the offered traffic demands (the WL subproblem); and the provisioning of the SHR/WDM spare wavelengths to protect every line that carries working lightpaths (the SW subproblem). The complexity of the problem is exacerbated when software and hardware requirements pose additional design constraints on the optimization process. The paper presents an approach to optimizing the design of a network with arbitrary topology protected by multiple SHRs/WDM. Three design constraints are taken into account, namely, the maximum number of rings acceptable on the same line, the maximum number of rings acceptable at the same node, and the maximum ring size. The first objective is to minimize the total wavelength mileage (working and protection) required in the given topology to carry a set of traffic demands. The exact definition of the problem is given as an integer linear programming (ILP) formulation that takes into account the design subproblems and constraints and assumes ubiquitous wavelength-conversion availability. To circumvent the computational complexity of the exact formulation, a suboptimal solution is proposed based on an efficient pruning of the solution space. By jointly solving the three design subproblems, it is numerically demonstrated that the proposed optimization technique yields up to a 12% reduction in total wavelength mileage compared with solutions obtained by solving the subproblems sequentially and independently. The second objective is to reduce the number of wavelength converters required in the solution produced by the ILP formulation. Two approaches are proposed in this case that trade the required wavelength mileage for the number of wavelength converters.

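    The design objective above is total wavelength mileage, working plus protection. The helper below only evaluates that objective for a candidate solution (routed working lightpaths, a ring cover, and spare wavelengths per ring); the actual optimization in the paper is an ILP plus a pruning heuristic, and the toy topology here is invented.

        def wavelength_mileage(line_miles, working_routes, rings, spares_per_ring):
            """line_miles: {(u, v): miles}; routes and rings are lists of lines (u, v)."""
            edge = lambda u, v: tuple(sorted((u, v)))
            working = sum(line_miles[edge(*l)] for route in working_routes for l in route)
            protection = sum(spares_per_ring[i] * line_miles[edge(*l)]
                             for i, ring in enumerate(rings) for l in ring)
            return working + protection

        line_miles = {("A", "B"): 10, ("B", "C"): 12, ("A", "C"): 15}
        routes = [[("A", "B"), ("B", "C")]]               # one working lightpath A -> C routed via B
        rings = [[("A", "B"), ("B", "C"), ("A", "C")]]    # one SHR/WDM covering the triangle
        print(wavelength_mileage(line_miles, routes, rings, {0: 1}))   # 22 working + 37 protection = 59
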
  • A comparison of ring and tree embedding for real-time group multicast

    Page(s): 451 - 464

    In general-topology networks, routing from one node to another over a tree embedded in the network is intuitively a good strategy, since it typically results in a route length of O(log n) links, n being the number of nodes in the network, whereas routing from one node to another over a ring embedded in the network results in a route length of O(n) links. However, in group (many-to-many) multicast, the overall number of links traversed by each packet, i.e., the network elements on which resources must possibly be reserved, is typically O(N) for both tree and ring embedding, where N is the size of the group. The paper focuses on tree versus ring embedding for real-time group multicast in which all packets should reach all the nodes in the group with a bounded end-to-end delay. Real-time properties are guaranteed by the deployment of time-driven priority in network nodes. In order to better understand the nontrivial problem of ring versus tree embedding, we consider static, dynamic, and adaptive group multicast scenarios. Tree and ring embedding are compared using different metrics. The results are interesting and counterintuitive, showing that embedding a tree is not always the best strategy. In particular, dynamic and adaptive multicast on a tree require a protocol for updating state information during operation of the group; such a protocol is not required on the ring, where the circular topology and an implicit token-passing mechanism are sufficient. Moreover, the bandwidth allocation on the ring is O(N) for all three multicast scenarios, while on a general tree it is O(N) for the static multicast scenario and O(N^2) for the dynamic and adaptive multicast scenarios.

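    Worked numbers behind the opening observation: one-to-one routes over an embedded balanced tree take O(log n) hops versus O(n) over an embedded ring, yet a group-multicast packet crosses O(N) links in either embedding (each spanning-tree edge once, or the whole ring once). The counts below are rough worst-case figures for illustration only.

        import math

        def costs(N):
            one_to_one_tree = 2 * math.ceil(math.log2(N))   # up to the root and back down, worst case
            one_to_one_ring = N - 1                         # all the way around, worst case
            multicast_tree = N - 1                          # every spanning-tree edge carries the packet once
            multicast_ring = N                              # the packet circulates the full ring
            return one_to_one_tree, one_to_one_ring, multicast_tree, multicast_ring

        for N in (8, 64, 512):
            print(N, costs(N))
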
  • Application of network calculus to general topologies using turn-prohibition

    Page(s): 411 - 421

    Network calculus is known to apply in general only to feedforward routing networks, i.e., networks where routes do not create cycles of interdependent packet flows. We address the problem of using network calculus in networks of arbitrary topology. For this purpose, we introduce a novel graph-theoretic algorithm, called turn-prohibition (TP), that breaks all the cycles in a network and, thus, prevents any interdependence between flows. We prove that the TP-algorithm prohibits the use of at most 1/3 of the total number of turns in a network, for any network topology. Using analysis and simulation, we show that the TP-algorithm significantly outperforms other approaches for breaking cycles, such as the spanning-tree and up/down routing algorithms, in terms of network utilization and delay bounds. Our simulation results also show that the network utilization achieved with the TP-algorithm is within a factor of two of the maximum theoretical network utilization, for networks of up to 50 nodes of degree four. Thus, in many practical cases, the restriction of network calculus to feedforward routing networks may not be a significant limitation.

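    The TP-algorithm itself is not reproduced here; this sketch only checks the property it establishes. Nodes of the auxiliary graph are directed links (u, v), with an edge (u, v) -> (v, w) whenever the turn (u, v, w) is permitted (u-turns excluded); acyclicity of this graph is the feedforward condition that network calculus needs. The triangle example and the chosen prohibited turns are invented.

        def is_feedforward(links, prohibited_turns):
            turn_graph = {l: [] for l in links}
            for (u, v) in links:
                for (v2, w) in links:
                    if v2 == v and w != u and (u, v, w) not in prohibited_turns:
                        turn_graph[(u, v)].append((v, w))
            state = {l: 0 for l in links}                    # 0 unvisited, 1 on DFS stack, 2 done
            def has_cycle(start):
                stack = [(start, iter(turn_graph[start]))]
                state[start] = 1
                while stack:
                    node, it = stack[-1]
                    for nxt in it:
                        if state[nxt] == 1:                  # back edge: a cycle of permitted turns
                            return True
                        if state[nxt] == 0:
                            state[nxt] = 1
                            stack.append((nxt, iter(turn_graph[nxt])))
                            break
                    else:
                        state[node] = 2
                        stack.pop()
                return False
            return not any(state[l] == 0 and has_cycle(l) for l in links)

        # Bidirectional triangle A-B-C: prohibiting 2 of its 6 turns (1/3) breaks every cycle.
        links = [(a, b) for a in "ABC" for b in "ABC" if a != b]
        print(is_feedforward(links, set()))                               # False: cycles remain
        print(is_feedforward(links, {("A", "C", "B"), ("B", "C", "A")}))  # True
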
  • Optimal energy allocation and admission control for communications satellites

    Page(s): 488 - 500

    We address the issue of optimal energy allocation and admission control for communications satellites in Earth orbit. Such satellites receive requests for transmission as they orbit the Earth, but may not be able to serve them all, due to energy limitations. The objective is to choose which requests to serve so that the expected total reward is maximized. The special case of a single energy-constrained satellite is considered. Rewards and demands from users for transmission (energy) are random and known only at request time. Using a dynamic programming approach, an optimal policy is derived and is characterized in terms of thresholds. Furthermore, in the special case where demand for energy is unlimited, an optimal policy is obtained in closed form. Although motivated by satellite communications, our approach is general and can be used to solve a variety of resource allocation problems in wireless communications.

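    A toy discretized dynamic program in the spirit of the abstract: one request per stage with a random (reward, energy demand) pair revealed on arrival, accepted iff doing so does not lower the expected reward-to-go. The request distribution, horizon, and energy budget are all invented.

        # (reward, energy demand, probability); the zero entry models "no request this stage".
        REQUESTS = [(4.0, 1, 0.5), (9.0, 2, 0.3), (0.0, 0, 0.2)]

        def solve(horizon=20, energy=10):
            """V[t][e] = expected total reward from stage t on, with e energy units left."""
            V = [[0.0] * (energy + 1) for _ in range(horizon + 1)]
            for t in range(horizon - 1, -1, -1):
                for e in range(energy + 1):
                    v = 0.0
                    for reward, demand, prob in REQUESTS:
                        reject = V[t + 1][e]
                        accept = reward + V[t + 1][e - demand] if demand <= e else float("-inf")
                        v += prob * max(accept, reject)
                    V[t][e] = v
            return V

        V = solve()
        # Accept iff reward >= V[t+1][e] - V[t+1][e-demand], the marginal value of the energy spent:
        # this is the threshold structure of the optimal policy mentioned in the abstract.
        print(round(V[0][10], 2))
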
  • Dynamic routing of restorable bandwidth-guaranteed tunnels using aggregated network resource usage information

    Page(s): 399 - 410

    The paper presents new algorithms for dynamic routing of restorable bandwidth-guaranteed paths. We assume that connections are requested one-by-one and there is no prior knowledge of future arrivals. To guarantee restorability, an alternate link- (or node-) disjoint backup (restoration) path has to be determined, along with the active path, when the connection is initiated. This joint online routing problem is particularly important in optical networks and in MPLS networks for dynamic provisioning of bandwidth-guaranteed or wavelength paths. A simple solution is to find two disjoint paths, but this results in excessive resource usage. Backup-path bandwidth usage can be reduced by judicious sharing of backup paths among certain active paths while still maintaining restorability. The best sharing performance is achieved if the routing of every path in progress in the network is known to the routing algorithm at the time of a new path setup; we give a new integer programming formulation for this problem. Complete path-routing knowledge is a reasonable assumption for a centralized routing algorithm, but is often not desirable, particularly when distributed routing is preferred. We show that a suitably developed algorithm that uses only aggregated information, rather than per-path information, performs almost as well as one using complete information. Disseminating this aggregate information is feasible using proposed traffic-engineering extensions to routing protocols. We formulate the dynamic restorable-bandwidth routing problem in this aggregate-information scenario and develop efficient routing algorithms. The performance of our algorithm is close to the complete-information bound.

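    The "simple solution" mentioned above, sketched directly: route the active path with Dijkstra, then route a link-disjoint backup on the graph with the active path's links removed. The paper's actual contribution, sharing backup bandwidth using aggregated per-link information, is not reproduced, and the example graph is invented.

        import heapq

        def dijkstra(adj, src, dst, banned=frozenset()):
            dist, prev, heap = {src: 0}, {}, [(0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    path = [dst]
                    while path[-1] != src:
                        path.append(prev[path[-1]])
                    return path[::-1]
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj.get(u, []):
                    if (u, v) in banned or (v, u) in banned:   # skip links used by the active path
                        continue
                    if d + w < dist.get(v, float("inf")):
                        dist[v], prev[v] = d + w, u
                        heapq.heappush(heap, (d + w, v))
            return None

        def active_and_backup(adj, src, dst):
            active = dijkstra(adj, src, dst)
            used = {(a, b) for a, b in zip(active, active[1:])} if active else set()
            return active, dijkstra(adj, src, dst, banned=used)

        adj = {"s": [("a", 1), ("b", 3)], "a": [("t", 1), ("b", 1)], "b": [("t", 1)], "t": []}
        print(active_and_backup(adj, "s", "t"))    # (['s', 'a', 't'], ['s', 'b', 't'])
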

Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.


Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign