
IEEE/ACM Transactions on Networking

Issue 6 • December 2003

  • A novel scheduling scheme to share dropping ratio while guaranteeing a delay bound in a MultiCode-CDMA network

    Page(s): 994 - 1006

    A MultiCode-CDMA network that is capable of providing quality-of-service guarantees will find widespread application in future wireless multimedia networks. However, providing delay guarantees to time-sensitive traffic in such a network is challenging because its transmission capacity is variable even in the absence of any channel impairment. We propose and evaluate the performance of a novel transmission scheduling scheme that is capable of providing such a delay guarantee in a MultiCode-CDMA network. The proposed scheme drops packets to ensure that delays for all transmitted packets are within the guaranteed target bounds, but packets are dropped in a controlled manner such that the average dropping ratios of a set of time-sensitive flows can be proportionally differentiated according to the assigned weighting factors or shares. We provide extensive simulation results to show the effectiveness of the proposed scheme as well as to study the effects of various parameters on its performance. In particular, we show that it can simultaneously guarantee a delay upper bound and a proportionally differentiated dropping ratio in a fading wireless channel for different traffic loads, peak transmission rates, and weighting factors of individual flows.

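    For intuition, the proportional dropping-ratio idea can be sketched as a simple victim-selection rule (a generic illustration in Python, not the paper's scheduler; the deficit rule and class names below are assumptions): whenever a packet must be dropped to keep queued packets within their delay bound, drop from the flow that is furthest behind its weighted share of drops, so that each flow's long-run dropping ratio stays proportional to its assigned weight.

        # Illustrative sketch of proportional dropping-ratio differentiation
        # (assumed convention: larger weight = proportionally larger share of drops).
        class Flow:
            def __init__(self, weight):
                self.weight = weight      # assigned weighting factor (share)
                self.arrivals = 0         # packets arrived so far
                self.drops = 0            # packets dropped so far

            def dropping_ratio(self):
                return self.drops / self.arrivals if self.arrivals else 0.0

        def choose_victim(backlogged_flows):
            """Drop from the flow lagging most behind its weighted share of drops."""
            return min(backlogged_flows, key=lambda f: f.dropping_ratio() / f.weight)

    Repeatedly applying this rule pushes the ratios dropping_ratio_i / weight_i toward a common value, which is the proportional differentiation property the abstract describes.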
  • Service overlay networks: SLAs, QoS, and bandwidth provisioning

    Page(s): 870 - 883

    We advocate the notion of a service overlay network (SON) as an effective means to address some of the issues plaguing the current Internet, in particular end-to-end quality of service (QoS), and to facilitate the creation and deployment of value-added Internet services such as VoIP, Video-on-Demand, and other emerging QoS-sensitive services. The SON purchases bandwidth with certain QoS guarantees from the individual network domains via bilateral service level agreements (SLAs) to build a logical end-to-end service delivery infrastructure on top of the existing data transport networks. Via a service contract, users pay the SON directly for the value-added services it provides. A key problem in SON deployment is bandwidth provisioning, which is critical to recovering the cost of deploying and operating the value-added services over the SON; this paper is devoted to the study of that problem. We formulate the bandwidth provisioning problem mathematically, taking into account factors such as the SLAs, service QoS, traffic demand distributions, and bandwidth costs. Analytical models and approximate solutions are developed for both static and dynamic bandwidth provisioning. Numerical studies are also performed to illustrate the properties of the proposed solutions and demonstrate the effect of traffic demand distributions and bandwidth costs on SON bandwidth provisioning.

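    The static provisioning question has a newsvendor flavor that can be illustrated with a back-of-envelope calculation (an assumed model with illustrative per-unit costs and demand distribution, not the paper's exact formulation): the SON buys capacity c at unit cost phi, demand exceeding c incurs a per-unit penalty pi, and the expected cost phi*c + pi*E[(D - c)+] is minimized at the (1 - phi/pi) quantile of the demand distribution.

        # Newsvendor-style static bandwidth provisioning (illustrative model).
        import numpy as np

        rng = np.random.default_rng(0)
        demand = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)   # assumed traffic demand distribution

        phi = 1.0   # assumed unit cost of purchased bandwidth
        pi_ = 4.0   # assumed unit penalty for demand exceeding provisioned capacity

        c_star = np.quantile(demand, 1.0 - phi / pi_)               # cost-minimizing capacity
        cost = phi * c_star + pi_ * np.mean(np.clip(demand - c_star, 0.0, None))
        print(f"provision c* = {c_star:.1f}, expected cost = {cost:.1f}")

    Dynamic provisioning then amounts to re-solving this kind of trade-off as the demand distribution shifts over time.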
  • Prevention of deadlocks and livelocks in lossless backpressured packet networks

    Page(s): 923 - 934

    No packets will be dropped inside a packet network, even when congestion builds up, if congested nodes send backpressure feedback to neighboring nodes, informing them that buffering capacity is unavailable and stopping them from forwarding more packets until enough buffer space becomes available. While backpressured networks that do not allow packet dropping have potential advantages, such networks are susceptible to a condition known as deadlock, in which the throughput of the network, or part of it, drops to zero (i.e., no packets are transmitted). In this paper, we describe a simple, lossless method of preventing deadlocks and livelocks in backpressured packet networks. In contrast with prior approaches, our proposed technique does not introduce any packet losses, does not corrupt packet sequence, and does not require any changes to packet headers. It represents a new networking paradigm in which internal network losses are avoided (thereby simplifying the design of other network protocols) and internal network delays are bounded.

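    The basic backpressure mechanism the abstract builds on can be sketched in a few lines (a generic hop-by-hop rule with illustrative names; the paper's contribution is the deadlock/livelock prevention layered on top of it):

        # Generic hop-by-hop backpressure: hold packets instead of dropping them.
        class Node:
            def __init__(self, buffer_slots):
                self.buffer_slots = buffer_slots   # total buffer capacity in packets
                self.queue = []                    # packets currently buffered

            def can_accept(self, in_flight=0):
                """Accept only if free space also covers packets already in flight."""
                return self.buffer_slots - len(self.queue) > in_flight

        def forward(downstream, packet, in_flight=0):
            """Send only when the downstream node has not asserted backpressure."""
            if downstream.can_accept(in_flight):
                downstream.queue.append(packet)
                return True
            return False   # no loss: the upstream node simply holds the packet and waits

    With every node applying this rule nothing is ever dropped, but a cycle of nodes whose buffers are all full can stall forever; that is precisely the deadlock condition the proposed method prevents.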
  • Proxy-assisted techniques for delivering continuous multimedia streams

    Page(s): 884 - 894

    We present a proxy-assisted video delivery architecture that can simultaneously reduce the resource requirements at the central server and the service latency experienced by clients (i.e., end users). Under the proposed video delivery architecture, we develop and analyze two novel proxy-assisted video streaming techniques for on-demand delivery of video objects to a large number of clients. By taking advantage of the resources available at the proxy servers, these techniques not only significantly reduce the central server and network resource requirements, but are also capable of providing near-instantaneous service to a large number of clients. We optimize the performance of our video streaming architecture by carefully selecting video delivery techniques for videos of varying popularity and intelligently allocating resources between proxy servers and the central server. Through empirical studies, we demonstrate the efficacy of the proposed proxy-assisted video streaming techniques.

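    A rough calculation illustrates why proxy assistance reduces central-server load (the batching scheme and numbers below are an assumed illustration, not the paper's specific techniques): if each proxy holds the first p seconds of a video of length L and the central server starts a fresh multicast of the remaining suffix every p seconds, clients start playback immediately from the proxy while the server sustains only about (L - p) / p concurrent suffix streams per video.

        # Back-of-envelope central-server load under proxy prefix + batching (assumed scheme).
        import math

        L = 5400                          # video length in seconds (90 min), illustrative
        for p in (60, 300, 900):          # proxy prefix length in seconds
            streams = math.ceil((L - p) / p)
            print(f"prefix {p:3d}s -> about {streams} concurrent server streams per video")

    Without a proxy-stored prefix, truly instantaneous service would require a separate server stream per client; with it, the server cost per video becomes independent of the client arrival rate.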
  • Modeling multiple IP traffic streams with rate limits

    Page(s): 948 - 958

    We start from the premise, for which we provide supporting evidence, that a Markov-modulated Poisson process (MMPP) is a good model for Internet traffic at the packet/byte level. We present an algorithm to estimate the parameters and size of a discrete MMPP (D-MMPP) from a data trace. This algorithm requires only two passes through the data. In tandem-network queueing models, the input to a downstream queue is the output from an upstream queue, so the arrival rate is limited by the rate of the upstream queue. We show how to modify the MMPP describing the arrivals to the upstream queue to approximate this effect. To extend this idea to networks that are not tandem, we show how to approximate the superposition of MMPPs without encountering the state-space explosion that occurs in exact computations. Numerical examples that demonstrate the accuracy of these methods are given. We also present a method to convert our estimated D-MMPP to a continuous-time MMPP, which is used as the arrival process in a matrix-analytic queueing model.

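    The state-space explosion mentioned above is visible directly in the exact superposition formula: the superposition of an m-state and an n-state MMPP is an (m*n)-state MMPP whose generator and rate matrix are Kronecker sums of the originals. A small sketch with illustrative parameter values:

        # Exact superposition of two MMPPs via Kronecker sums (illustrative values).
        # An MMPP is (Q, L): Q is the generator of the modulating chain and L the
        # diagonal matrix of Poisson rates per state.  The superposition of
        # (Q1, L1) and (Q2, L2) is (Q1 (+) Q2, L1 (+) L2), where the Kronecker sum
        # is A (+) B = kron(A, I) + kron(I, B).
        import numpy as np

        def kron_sum(A, B):
            return np.kron(A, np.eye(B.shape[0])) + np.kron(np.eye(A.shape[0]), B)

        Q1 = np.array([[-0.1, 0.1], [0.2, -0.2]])   # 2-state modulating chain
        L1 = np.diag([5.0, 20.0])                   # packet rate in each state
        Q2 = np.array([[-1.0, 1.0], [0.5, -0.5]])
        L2 = np.diag([1.0, 10.0])

        Q = kron_sum(Q1, Q2)   # 4x4 generator of the superposed MMPP
        L = kron_sum(L1, L2)   # 4x4 diagonal rate matrix (rates add per joint state)
        print(Q.shape, L.diagonal())

    Superposing k two-state sources this way yields 2^k states, which is why an approximation to the superposition, as developed in the paper, is needed for network-wide models.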
  • Protocol design for scalable and reliable group rekeying

    Page(s): 908 - 922

    We present the design and specification of a protocol for scalable and reliable group rekeying together with performance evaluation results. The protocol is based upon the use of key trees for secure groups and periodic batch rekeying. At the beginning of each rekey interval, the key server sends a rekey message to all users consisting of encrypted new keys (encryptions, in short) carried in a sequence of packets. We present a scheme for identifying keys, encryptions, and users, and a key assignment algorithm that ensures that the encryptions needed by a user are in the same packet. Our protocol provides reliable delivery of new keys to all users eventually. It also attempts to deliver new keys to all users with a high probability by the end of the rekey interval. For each rekey message, the protocol runs in two steps: a multicast step followed by a unicast step. Proactive forward error correction (FEC) multicast is used to reduce delivery latency. Our experiments show that a small FEC block size can be used to reduce encoding time at the server without increasing server bandwidth overhead. Early transition to unicast, after at most two multicast rounds, further reduces the worst-case delivery latency as well as user bandwidth requirement. The key server adaptively adjusts the proactivity factor based upon past feedback information; our experiments show that the number of NACKs after a multicast round can be effectively controlled around a target number. Throughout the protocol design, we strive to minimize processing and bandwidth requirements for both the key server and users.

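    For a sense of the scaling that key trees provide, a minimal binary logical-key-hierarchy sketch is shown below (the heap-style key numbering is an assumption for illustration, not the paper's key-identification scheme): each of N users holds the keys on the path from its leaf to the root, so a single departure requires replacing only O(log N) keys rather than rekeying all N users individually.

        # Minimal binary key-tree sketch (illustrative; the paper's identification
        # and packet-packing scheme are more elaborate).  Keys are numbered as in
        # a heap: root = 1, children of key k are 2k and 2k + 1; users sit at leaves.

        def path_to_root(leaf):
            """All keys a user holds: its leaf key plus every ancestor up to the root."""
            path = []
            while leaf >= 1:
                path.append(leaf)
                leaf //= 2
            return path

        def keys_to_replace_on_leave(leaving_leaf):
            """Keys that must be changed when a user leaves: the ancestors of its leaf.
            Each new key is encrypted under the keys of the remaining subtrees along
            the path, giving O(log N) new keys and O(log N) encryptions per leave."""
            return [k for k in path_to_root(leaving_leaf) if k != leaving_leaf]

        # Example: 8 users at leaves 8..15; the user at leaf 11 leaves.
        print(keys_to_replace_on_leave(11))   # -> [5, 2, 1]: parent, grandparent, group key

    Batch rekeying, as in the paper, amortizes this further by processing all joins and leaves of a rekey interval in one pass over the tree.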
  • Efficient rate-controlled bulk data transfer using multiple multicast groups

    Page(s): 895 - 907

    Controlling the rate of bulk data multicast to a large number of receivers is difficult, due to the heterogeneity among the end systems' capabilities and their available network bandwidth. If the data transfer rate is too high, some receivers will lose data, and retransmissions will be required. If the data transfer rate is too slow, an inordinate amount of time will be required to transfer the data. In this paper, we examine an approach toward rate-controlled multicast of bulk data in which the sender uses multiple multicast groups to transmit data at different rates to different subgroups of receivers. We present simple algorithms for determining the transmission rate associated with each multicast channel, based on static resource constraints, e.g., network bandwidth bottlenecks. Transmission rates are chosen so as to minimize the average time needed to transfer data to all receivers. Analysis and simulation are used to show that our policies for rate selection perform well for large and diverse receiver groups and make efficient use of network bandwidth. Moreover, we find that only a small number of multicast groups are needed to reap most of the possible performance benefits.

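    The rate-selection problem can be made concrete with a small brute-force search (a generic formulation under assumed receiver capacities; not necessarily the algorithm in the paper): with k multicast channels, each receiver joins the fastest channel whose rate it can sustain, candidate channel rates are taken from the receivers' bottleneck rates, and the rates are chosen to minimize the mean time to transfer an object of size S.

        # Brute-force rate selection for k multicast channels (illustrative model).
        from itertools import combinations

        def mean_transfer_time(rates, capacities, size):
            """Each receiver downloads at the fastest channel rate not exceeding its capacity."""
            total = 0.0
            for cap in capacities:
                usable = [r for r in rates if r <= cap]
                if not usable:
                    return float("inf")        # some receiver cannot be served at all
                total += size / max(usable)
            return total / len(capacities)

        def best_rates(capacities, k, size=1.0):
            best = None
            for rates in combinations(sorted(set(capacities)), k):
                t = mean_transfer_time(rates, capacities, size)
                if best is None or t < best[0]:
                    best = (t, rates)
            return best

        caps = [0.1, 0.1, 0.5, 1.0, 1.0, 2.0, 10.0]   # assumed receiver bottleneck rates
        print(best_rates(caps, k=2))                  # -> mean time and the two chosen rates

    Even this toy version shows the diminishing returns noted in the abstract: going from one channel to two or three recovers most of the achievable reduction in mean transfer time.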
  • Dimensioning optical networks under traffic growth models

    Page(s): 935 - 947

    In this paper, we consider the problem of dimensioning a large optical wavelength-division multiplexing (WDM) network, assuming that traffic grows over time. Traffic between pairs of nodes is carried through lightpaths, which are high-bandwidth end-to-end circuits occupying a wavelength on each link of the path between two nodes. We are interested in dimensioning the WDM links so that the first lightpath request rejection occurs, with high probability, only after a specified period of time T. Here we introduce the concept of the capacity exhaustion probability: the probability that at least one lightpath request will be rejected in the time period (0,T) due to lack of bandwidth/capacity on some link. We propose a network dimensioning method based on a traffic growth model, which results in a nonlinear optimization problem with cost minimization as the objective and route capacity exhaustion probabilities as the constraints. Computation of exact capacity exhaustion probabilities requires large computing resources and is thus feasible only for small networks. We consider a reduced load approximation for estimating capacity exhaustion probabilities of a wavelength-routed network with arbitrary topology and traffic patterns. We show that the estimates are quite accurate and converge to the correct values under a limiting regime in the desired range of low capacity exhaustion probabilities.

  • Analytic models for the latency and steady-state throughput of TCP Tahoe, Reno, and SACK

    Page(s): 959 - 971

    Continuing the process of improvements made to TCP through the addition of new algorithms in Tahoe and Reno, TCP SACK aims to make TCP robust in the presence of multiple losses from the same window. In this paper we present analytic models to estimate the latency and steady-state throughput of TCP Tahoe, Reno, and SACK, and validate our models using both simulations and TCP traces collected from the Internet. In addition to being the first models for the latency of finite Tahoe and SACK flows, our model for the latency of TCP Reno gives a more accurate estimate of transfer times than existing models. The improved accuracy is partly due to more accurate modeling of timeouts, of the evolution of cwnd during slow start, and of the delayed ACK timer. Our models also show that, under the losses introduced by the drop-tail queues that dominate routers in the Internet today, current implementations of SACK can fail to provide adequate protection against timeouts: the loss of more than roughly half the packets in a round leads to a timeout. We also show that with independent losses SACK performs better than Tahoe and Reno, and that as losses become correlated, Tahoe can outperform both Reno and SACK.

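    As a point of reference for such models, the widely used simplified steady-state result for Reno-style congestion avoidance estimates the throughput as (MSS/RTT) * sqrt(3/(2p)) for a packet loss probability p; the models in the paper refine this picture with timeouts, slow start, delayed ACKs, and finite transfer sizes. A quick sketch of the simplified formula (not the paper's complete latency model):

        # Classic "square-root" approximation of steady-state TCP throughput.
        import math

        def reno_throughput(mss_bytes, rtt_seconds, loss_prob):
            """Approximate steady-state throughput in bytes per second."""
            return (mss_bytes / rtt_seconds) * math.sqrt(3.0 / (2.0 * loss_prob))

        # Example: 1460-byte segments, 100 ms round-trip time, 1% loss.
        print(f"{reno_throughput(1460, 0.100, 0.01) / 1e3:.0f} kB/s")   # about 179 kB/s

    For short transfers the latency is dominated by slow start and timeout behavior rather than this steady-state rate, which is why finite-flow models such as those in the paper are needed.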
  • Blocking behaviors of crosstalk-free optical banyan networks on vertical stacking

    Page(s): 982 - 993

    Banyan networks are attractive for constructing directional coupler (DC)-based optical switching networks because of their small depth and self-routing capability. Crosstalk between optical signals passing through the same DC is an intrinsic drawback of DC-based optical networks. Vertical stacking of multiple copies of an optical banyan network is a novel scheme for building nonblocking (crosstalk-free) optical switching networks. The resulting network, namely the vertically stacked optical banyan (VSOB) network, preserves all the properties of the banyan network but increases the hardware cost significantly. Although much work has been done on determining the minimum number of stacked copies (planes) required for a nonblocking VSOB network, little is known about the blocking probabilities of VSOB networks that do not meet the nonblocking condition (i.e., that have fewer stacked copies than the nonblocking condition requires). In this paper, we analyze the blocking probabilities of VSOB networks and develop upper and lower bounds with respect to the number of planes in the network. These bounds accurately depict the overall blocking behavior of VSOB networks and agree with the conditions for strictly nonblocking and rearrangeably nonblocking VSOB networks, respectively. Extensive simulation on a network simulator with both random routing and a packing strategy shows that the blocking probabilities of both strategies fall within our bounds, and that the blocking probability of the packing strategy actually matches the lower bound. The proposed bounds are significant because they reveal the inherent relationship between blocking probability and network hardware cost in terms of the number of planes, and provide network developers with quantitative guidance for trading blocking probability against hardware cost. In particular, our bounds give network designers an effective tool to estimate the minimum and maximum blocking probabilities of VSOB networks in which different routing strategies may be applied. An interesting conclusion with practical applications is that the hardware cost of a VSOB network can be reduced dramatically if a predictable and almost negligible nonzero blocking probability is allowed.

  • Bounds on the throughput of congestion controllers in the presence of feedback delay

    Page(s): 972 - 981

    We consider decentralized congestion control algorithms for low-loss operation of the Internet using the ECN bit. There has been much analysis of such algorithms, but with a few exceptions, these analyses typically ignore the effect of feedback delays in the network on stability. We study a single node with many flows passing through it, with each flow (possibly) having a different round-trip delay. Using a fluid model for the flows, we show that even with delays, the total data rate at the router is bounded, and the bound shows that the (peak) total rate grows linearly with the system size, i.e., the fraction of overprovisioning required is constant with respect to N, the number of flows in the system. Further, for typical user data rates and delays seen in the Internet today, the bound is very close to the data rate at the router without delays. Earlier results by Johari and Tan have given conditions for a linearized model of the network to be (locally) stable. We show that even when the linearized model is not stable, the nonlinear model is upper bounded, i.e., the total rate at the bottleneck link is upper bounded, and the upper bound is close to the equilibrium rate for TCP.

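    The flavor of the fluid-model argument can be reproduced with a small discretized simulation (a generic primal rate-control model with heterogeneous round-trip delays; the marking function, gains, and parameter values are illustrative assumptions, not the paper's exact system): each flow adjusts its rate using ECN-style feedback that is one round-trip time old, and the quantity of interest is how far the peak aggregate rate overshoots the delay-free equilibrium.

        # Discretized primal congestion-control model with feedback delay (illustrative).
        import numpy as np

        N, C   = 50, 100.0                 # number of flows, link capacity
        dt, T  = 0.001, 20.0               # time step and horizon (seconds)
        kappa  = 0.5                       # adaptation gain (assumed)
        w      = 1.0                       # per-flow willingness-to-pay (assumed)
        rtts   = np.linspace(0.05, 0.3, N) # heterogeneous round-trip delays (seconds)
        lag    = (rtts / dt).astype(int)   # feedback delay in simulation steps

        def mark_prob(total_rate):
            """Assumed ECN marking probability at the bottleneck."""
            return min(1.0, (total_rate / C) ** 4)

        steps = int(T / dt)
        x = np.full((steps, N), 1.0)       # per-flow rates over time
        for t in range(1, steps):
            x[t] = x[t - 1]
            for i in range(N):
                td = t - 1 - lag[i]        # each flow reacts to delayed feedback
                if td < 0:
                    continue
                # primal dynamics: dx/dt = kappa * (w - x(t - tau) * p(total delayed rate))
                dx = kappa * (w - x[td, i] * mark_prob(x[td].sum()))
                x[t, i] = max(0.0, x[t - 1, i] + dt * dx)

        total = x.sum(axis=1)
        print(f"peak aggregate rate {total.max():.1f}, final {total[-1]:.1f}, capacity {C}")

    Even when the delays make the linearized dynamics oscillatory, the aggregate rate in such a model stays bounded, which is the kind of behavior the paper's bounds quantify.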
  • Editorial

    Page(s): 869
  • Author Index

    Page(s): 1007 - 1009
  • Subject Index

    Page(s): 1010 - 1017

Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.


Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign