IEEE/ACM Transactions on Networking

Issue 2 • April 2005

  • Table of contents

    Publication Year: 2005, Page(s): c1
    PDF (452 KB)
    Freely Available from IEEE
  • IEEE/ACM Transactions on Networking publication information

    Publication Year: 2005, Page(s): c2
    PDF (35 KB)
    Freely Available from IEEE
  • Stochastic traffic engineering for demand uncertainty and risk-aware network revenue management

    Publication Year: 2005, Page(s): 221 - 233
    Cited by: Papers (25)
    PDF (464 KB) | HTML

    We present a stochastic traffic engineering framework for optimizing bandwidth provisioning and route selection in networks. The objective is to maximize revenue from serving demands, which are uncertain and specified by probability distributions. We consider heterogeneous demands with different unit revenues and uncertainties. Based on mean-risk analysis, the optimization model enables a carrier to maximize mean revenue and contain the risk that the revenue falls below an acceptable level. Our framework is intended for off-line traffic engineering design, which takes a centralized view of network topology, link capacity, and demand. We obtain conditions under which the optimization problem is an instance of convex programming and therefore efficiently solvable. We also study the properties of the solution and show that it asymptotically meets the stochastic efficiency criterion. We derive properties of the optimal solution for the special case of Gaussian demand distributions. We focus on the impact of demand uncertainty on various aspects of traffic engineering, such as link utilization, bandwidth provisioning and total revenue. The carrier's tolerance to risk is shown to have a strong influence on traffic engineering and revenue management decisions. We develop the efficient frontier, which is the entire set of Pareto optimal pairs of mean revenue and revenue risk, to aid the carrier in selecting an appropriate operating point.
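    The mean-risk tradeoff is easy to sketch numerically. Below is a minimal Monte Carlo illustration in Python, assuming the Gaussian-demand special case and made-up capacities, revenues, and risk aversion (none of the numbers come from the paper): one link's capacity is split between two uncertain demands to maximize mean revenue minus a risk penalty.

      # Mean-risk provisioning sketch: split one link's capacity between two
      # Gaussian demands to maximize  E[revenue] - lam * stddev(revenue).
      # All parameters are illustrative.
      import random, statistics

      random.seed(1)
      CAP = 100.0                                  # link capacity
      price = (1.0, 3.0)                           # unit revenue per demand
      mu, sigma = (60.0, 50.0), (5.0, 25.0)        # Gaussian demand parameters
      lam = 0.5                                    # carrier's risk aversion

      scenarios = [(random.gauss(mu[0], sigma[0]), random.gauss(mu[1], sigma[1]))
                   for _ in range(5000)]

      def mean_risk(b0):
          """Objective when b0 is provisioned to demand 0 and CAP - b0 to demand 1."""
          b1 = CAP - b0
          revenues = [price[0] * min(max(d0, 0.0), b0) +
                      price[1] * min(max(d1, 0.0), b1)
                      for d0, d1 in scenarios]
          return statistics.mean(revenues) - lam * statistics.pstdev(revenues)

      best = max((mean_risk(b), b) for b in range(0, 101))
      print("provision %d units to demand 0; objective %.1f" % (best[1], best[0]))

    Sweeping lam traces out exactly the efficient frontier of mean revenue versus revenue risk that the abstract describes.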

  • Achieving near-optimal traffic engineering solutions for current OSPF/IS-IS networks

    Publication Year: 2005, Page(s): 234 - 247
    Cited by: Papers (64)
    PDF (1120 KB) | HTML

    Traffic engineering aims to distribute traffic so as to "optimize" some performance criterion. This optimal distribution of traffic depends on both the routing protocol and the forwarding mechanisms in use in the network. In IP networks running the OSPF or IS-IS protocols, routing is over shortest paths, and forwarding mechanisms distribute traffic "uniformly" over equal cost shortest paths. These constraints often make achieving an optimal distribution of traffic impossible. In this paper, we propose and evaluate an approach that can realize near optimal traffic distribution without changes to routing protocols and forwarding mechanisms. In addition, we explore the tradeoff that exists between performance and the configuration overhead that our solution requires. The paper's contributions are in formulating and evaluating an approach to traffic engineering in IP networks that achieves near-optimal performance while preserving the existing infrastructure.
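    The forwarding constraint the paper works within can be made concrete: an OSPF/IS-IS router splits traffic evenly over all equal-cost shortest-path next hops. A minimal sketch on a made-up five-node topology (the paper's actual contribution, choosing the routing so that this rigid splitting becomes near-optimal, is not reproduced here):

      # ECMP forwarding sketch: every router splits traffic evenly over its
      # equal-cost shortest-path next hops. Topology and weights are made up.
      import heapq
      from collections import defaultdict

      graph = {                 # node -> {neighbor: link weight}
          's': {'a': 1, 'b': 1},
          'a': {'t': 2},
          'b': {'c': 1, 't': 2},
          'c': {'t': 1},
          't': {},
      }

      def dist_to(dest):
          """Shortest distance from every node to dest (Dijkstra on reversed edges)."""
          rev = defaultdict(dict)
          for u, nbrs in graph.items():
              for v, w in nbrs.items():
                  rev[v][u] = w
          dist, heap = {dest: 0}, [(0, dest)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float('inf')):
                  continue
              for v, w in rev[u].items():
                  if d + w < dist.get(v, float('inf')):
                      dist[v] = d + w
                      heapq.heappush(heap, (d + w, v))
          return dist

      def ecmp_loads(src, dest, demand):
          """Per-link load when `demand` flows src->dest under even ECMP splits."""
          dist = dist_to(dest)
          inflow, load = defaultdict(float), defaultdict(float)
          inflow[src] = demand
          for u in sorted(dist, key=dist.get, reverse=True):   # farthest first
              next_hops = [v for v, w in graph[u].items()
                           if dist.get(v, float('inf')) + w == dist[u]]
              for v in next_hops:
                  load[(u, v)] += inflow[u] / len(next_hops)
                  inflow[v] += inflow[u] / len(next_hops)
          return dict(load)

      print(ecmp_loads('s', 't', 10.0))   # s splits 5/5; b splits 2.5/2.5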

  • Design of capacitated survivable networks with a single facility

    Publication Year: 2005, Page(s): 248 - 261
    Cited by: Papers (6)
    PDF (536 KB) | HTML

    In this paper we focus on the single-facility capacitated survivable network design problem. We simultaneously optimize the network topology and the link dimensioning in order to route all traffic commodities according to survivability requirements. The latter are expressed in terms of the spare capacity required to handle link failures under different rerouting strategies. We present a mixed-integer linear programming model solved by combining several approaches. To tackle the high dimensionality and to separate the continuous and integer variables, we use Benders' decomposition and a cutting-plane approach. Going beyond the proposed method itself, we examine and compare two well-known restoration techniques: local and end-to-end rerouting. Numerous computational results for realistic network instances compare these rerouting mechanisms in terms of installed capacities and network density, as well as overall cost and CPU time.
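    The mechanism at the core of Benders' decomposition is cut generation: a master problem is driven by linear cuts returned from subproblems. The sketch below shows just that loop (Kelley's cutting-plane method) on a toy convex cost, with scipy's LP solver standing in for the master; it is not the paper's survivable-network MILP.

      # Cutting-plane sketch: minimize q(y) by alternating a master LP over
      # accumulated cuts with a "subproblem" evaluation that yields a new cut.
      # q(y) is a toy stand-in for a recourse/value function.
      from scipy.optimize import linprog

      def q(y):                    # toy convex subproblem value
          return (y - 3.0) ** 2 + 1.0

      def q_slope(y):              # its (sub)gradient, used to build a cut
          return 2.0 * (y - 3.0)

      cut_pts = [-10.0, 10.0]      # start with cuts at the variable's bounds
      for it in range(50):
          # master LP over (y, t): minimize t s.t. t >= q(yk) + q'(yk)(y - yk)
          A = [[q_slope(yk), -1.0] for yk in cut_pts]
          b = [q_slope(yk) * yk - q(yk) for yk in cut_pts]
          res = linprog(c=[0.0, 1.0], A_ub=A, b_ub=b,
                        bounds=[(-10.0, 10.0), (None, None)])
          y_star, lower = res.x[0], res.fun
          upper = q(y_star)        # subproblem evaluation at the master's answer
          if upper - lower < 1e-6:
              break                # lower and upper bounds have met
          cut_pts.append(y_star)   # add the optimality cut and iterate

      print("y* = %.4f, cost = %.4f, cuts used: %d" % (y_star, upper, len(cut_pts)))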

  • An evolutionary management scheme in high-performance packet switches

    Publication Year: 2005, Page(s): 262 - 275
    Cited by: Papers (2)
    PDF (608 KB) | HTML

    This paper deals with a novel buffer management scheme based on the combination of evolutionary computing and fuzzy logic for shared-memory packet switches. The philosophy behind it is to adapt the threshold for each logical output queue to the actual traffic conditions by means of a system of fuzzy inferences. The optimal fuzzy system is achieved using a systematic methodology based on genetic algorithms for selecting and tuning the membership functions. This methodology allows the fuzzy system parameters to be derived automatically when the switch parameters vary, offering a high degree of scalability to the fuzzy control system. Its performance is close to that of the push-out mechanism, which can be considered ideal from a performance viewpoint, and at any rate much better than that of threshold schemes based on conventional logic. In addition, the fuzzy threshold scheme is simple and inexpensive to implement with current standard technology, unlike the push-out mechanism, which is not practically feasible in high-speed switches due to the amount of computation time it requires.
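    For contrast, the conventional-logic baseline the fuzzy controller is compared against can be sketched in a few lines: a dynamic threshold lets each output queue grow only to a multiple of the currently free shared-buffer space. Numbers are illustrative.

      # Dynamic-threshold admission sketch for a shared-memory switch: accept a
      # cell for a port only while its queue is below alpha * free buffer space.
      B = 1000                      # shared buffer size (cells)
      alpha = 1.0                   # threshold multiplier
      queues = {0: 0, 1: 0, 2: 0}   # per-output-port queue lengths

      def admit(port):
          """Accept an arriving cell for `port` if its queue is under threshold."""
          used = sum(queues.values())
          threshold = alpha * (B - used)
          if used < B and queues[port] < threshold:
              queues[port] += 1
              return True
          return False              # cell dropped

      # with no departures, one hogging port is throttled as free space shrinks
      for _ in range(2000):
          admit(0)
      print(queues[0])              # stops at B * alpha / (1 + alpha) = 500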

  • Schedulability criterion and performance analysis of coordinated schedulers

    Publication Year: 2005, Page(s): 276 - 287
    Cited by: Papers (7)
    PDF (520 KB) | HTML

    Inter-server coordinated scheduling is a mechanism for downstream nodes to increase or decrease a packet's priority according to the congestion incurred at upstream nodes. In this paper, we derive an end-to-end schedulability condition for a broad class of coordinated schedulers that includes Core-stateless Jitter Virtual Clock (CJVC) and Coordinated Earliest Deadline First (CEDF). In contrast to previous approaches, our technique purposely allows flows to violate their local priority indexes while still providing an end-to-end delay bound. We show that under a simple priority assignment scheme, coordinated schedulers can outperform WFQ schedulers, while replacing per-flow scheduling operations with a simple coordination rule. Finally, we illustrate the performance advantages of coordination through numerical examples and simulation experiments.
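    The coordination idea can be pictured as deadline propagation: each packet carries its upstream deadline in a header, and a downstream node derives its local priority from that field without keeping per-flow state, in the spirit of CJVC. The field names and slack term below are illustrative, not the paper's exact rules.

      # Deadline-propagation sketch for coordinated scheduling.
      def ingress_deadline(arrival, prev_deadline, length, rate):
          """Edge node: classic virtual-clock deadline for a flow reserved at `rate`."""
          return max(arrival, prev_deadline) + length / rate

      def core_deadline(upstream_deadline, length, rate, prop_delay, slack=0.0):
          """Core node: deadline derived purely from the packet header. A packet
          that fell behind upstream (large deadline) keeps lower priority here,
          which is the inter-server coordination the delay bound exploits."""
          return upstream_deadline + prop_delay + length / rate + slack

      d1 = ingress_deadline(arrival=0.0, prev_deadline=0.0, length=1500.0, rate=1e6)
      d2 = core_deadline(d1, length=1500.0, rate=1e6, prop_delay=0.001)
      print(d1, d2)   # per-hop deadlines carried with the packet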

  • FlowMate: scalable on-line flow clustering

    Publication Year: 2005, Page(s): 288 - 301
    Cited by: Papers (13)
    PDF (744 KB) | HTML

    We design and implement an efficient on-line approach, FlowMate, for clustering flows (connections) emanating from a busy server, according to shared bottlenecks. Clusters can be periodically input to load balancing, congestion coordination, aggregation, admission control, or pricing modules. FlowMate uses in-band (passive) end-to-end delay measurements to infer shared bottlenecks. Delay information is piggybacked on feedback from the receivers, or, if that is not possible, TCP or application round-trip time estimates are used. We simulate FlowMate and examine the effects of network load, traffic burstiness, network buffer sizes, and packet drop policies on clustering correctness, evaluated via a novel accuracy metric. We find that coordinated congestion management techniques are more fair when integrated with FlowMate. We also implement FlowMate in the Linux kernel v2.4.17 and evaluate its performance on the Emulab testbed, using both synthetic and tcplib-generated traffic. Our results demonstrate that clustering of medium to long-lived flows is accurate, even with bursty background traffic. Finally, we validate our results on the Internet Planetlab testbed.
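    A minimal offline sketch of the underlying idea, with synthetic data: flows whose end-to-end delay samples are strongly correlated are assumed to share a bottleneck and are grouped together (FlowMate's actual on-line algorithm and accuracy metric are more elaborate).

      # Shared-bottleneck clustering sketch via delay correlation.
      import random

      def pearson(x, y):
          n = len(x)
          mx, my = sum(x) / n, sum(y) / n
          cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
          vx = sum((a - mx) ** 2 for a in x) ** 0.5
          vy = sum((b - my) ** 2 for b in y) ** 0.5
          return cov / (vx * vy)

      random.seed(7)
      bottleneck = [random.uniform(10, 50) for _ in range(200)]  # shared queueing delay
      flows = {
          'f1': [d + random.gauss(0, 2) for d in bottleneck],    # behind the bottleneck
          'f2': [d + random.gauss(0, 2) for d in bottleneck],
          'f3': [random.uniform(10, 50) for _ in range(200)],    # independent path
      }

      clusters = []
      for name, samples in flows.items():
          for cluster in clusters:
              if pearson(samples, flows[cluster[0]]) > 0.8:      # correlation threshold
                  cluster.append(name)
                  break
          else:
              clusters.append([name])
      print(clusters)   # expect [['f1', 'f2'], ['f3']]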

  • Resource allocation between persistent and transient flows

    Publication Year: 2005, Page(s): 302 - 315
    Cited by: Papers (3)
    PDF (488 KB) | HTML

    The flow control algorithms currently used in the Internet have been tailored to share available capacity between users on the basis of the physical characteristics of the network links they use rather than the characteristics of their applications. However, real-time applications typically have very different requirements from file transfer or Web browsing, and treating them identically can result in a perception of poor quality of service even when adequate bandwidth is available. This is the motivation for differentiated services. In this paper, we explore service differentiation between persistent (fixed duration) and transient (fixed volume) flows, and also between transient flows of markedly different sizes; the latter is stimulated by current discussion on Web mice and elephants. We propose decentralized bandwidth allocation algorithms that can be implemented by end-systems without requiring the support of a complex network architecture, and show that they achieve performance very close to what is achievable by the optimal centralized scheme.
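    One way to picture such end-system allocation, with the caveat that this is a generic primal rate-control sketch and not the paper's algorithms: each flow adjusts its rate from a shared congestion signal alone, class weights differentiate flows, and a transient flow departs once its fixed volume is sent. All parameters are illustrative.

      # Decentralized weighted sharing sketch: rates follow a primal update
      # x += k*(w - x*price); the persistent flow absorbs freed bandwidth
      # after the transient flow finishes its volume.
      C, k, dt = 10.0, 0.5, 0.01
      flows = {'persistent': {'w': 1.0, 'x': 1.0, 'left': float('inf')},
               'transient':  {'w': 3.0, 'x': 1.0, 'left': 500.0}}  # 500 units to send

      for step in range(20000):
          total = sum(f['x'] for f in flows.values())
          price = max(0.0, total - C) + (total / C) ** 8   # congestion signal
          for f in flows.values():
              f['x'] += k * (f['w'] - f['x'] * price) * dt # primal rate update
              f['left'] -= f['x'] * dt
          flows = {n: f for n, f in flows.items() if f['left'] > 0}
          if step in (4999, 19999):
              # roughly {persistent: 2.3, transient: 6.8}, then {persistent: 7.7}
              print(step + 1, {n: round(f['x'], 2) for n, f in flows.items()})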

  • TCP smart framing: a segmentation algorithm to reduce TCP latency

    Publication Year: 2005, Page(s): 316 - 329
    Cited by: Papers (5)
    PDF (608 KB) | HTML

    TCP Smart Framing, or TCP-SF for short, enables the Fast Retransmit/Recovery algorithms even when the congestion window is small. Without modifying the TCP congestion control based on the additive-increase/multiplicative-decrease paradigm, TCP-SF adopts a novel segmentation algorithm: while classic TCP always tries to send full-sized segments, a TCP-SF source adopts a more flexible segmentation algorithm that tries to keep the number of in-flight segments larger than three, so as to enable Fast Recovery. We motivate this choice with real traffic measurements, which indicate that today's traffic is populated by short-lived flows whose only means of recovering from a packet loss is a Retransmission Timeout. The key idea of TCP-SF can be implemented on top of any TCP flavor, from Tahoe to SACK; it requires modifications to the server TCP stack only and can easily be coupled with recent TCP enhancements. The performance of the proposed TCP modification was studied by means of simulations, live measurements and an analytical model. In addition, the analytical model we have devised has a general scope, making it a valid tool for TCP performance evaluation in the small-window region. Improvements are remarkable under several buffer management schemes, and are maximized by byte-oriented schemes.
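    The segmentation rule at the heart of TCP-SF is compact enough to sketch directly (constants below are illustrative, not the paper's):

      # TCP-SF-style segmentation sketch: pick a segment size that keeps at
      # least four segments in flight, so three duplicate ACKs (and hence fast
      # retransmit) remain possible even for small congestion windows.
      MSS = 1460               # bytes
      MIN_SEG = 256            # floor so segments do not become degenerately small
      TARGET_IN_FLIGHT = 4     # > 3 dupACKs needed to trigger fast retransmit

      def segment_size(cwnd_bytes):
          """Segment size a TCP-SF-style sender would use for this window."""
          return max(MIN_SEG, min(MSS, cwnd_bytes // TARGET_IN_FLIGHT))

      for cwnd in (1460, 2920, 5840, 11680):
          print(cwnd, segment_size(cwnd))   # small windows -> smaller segments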

  • CYRF: a theory of window-based unicast congestion control

    Publication Year: 2005, Page(s): 330 - 342
    Cited by: Papers (10)
    PDF (624 KB) | HTML

    This work presents a comprehensive theoretical framework for memoryless window-based congestion control protocols that are designed to converge to fairness and efficiency. We first derive a necessary and sufficient condition for stepwise convergence to fairness. Using this, we show how fair window increase/decrease policies can be constructed from suitable pairs of monotonically nondecreasing functions. We generalize this to smooth protocols that converge over each congestion epoch. The framework also includes a simple method for incorporating TCP-friendliness. Well-studied congestion control protocols such as TCP, GAIMD, and Binomial congestion control can be constructed using this method. Thus, we provide a common framework for the analysis of such window-based protocols. We also present two new congestion control protocols for streaming media-like applications as examples of protocol design in this framework: the first protocol, LOG, has the objective of reconciling the smoothness requirement of an application with the need for a fast dynamic response to congestion. The second protocol, SIGMOID, guarantees a minimum bandwidth for an application but behaves exactly like TCP for large windows.
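    A sketch of the window-update template, instantiated with the standard GAIMD choices (alpha = 1, beta = 1/2 recovers TCP); the paper's general construction from pairs of monotone functions is only paraphrased here.

      # Memoryless window-based update sketch: an increase rule per ACK and a
      # decrease rule per loss, built from simple functions of the window.
      alpha, beta = 1.0, 0.5

      def on_ack(w):
          return w + alpha / w            # +alpha/w per ACK => +alpha per RTT

      def on_loss(w):
          return max(1.0, w - beta * w)   # multiplicative decrease by beta

      w = 10.0
      for event in ['ack'] * 10 + ['loss'] + ['ack'] * 5:
          w = on_ack(w) if event == 'ack' else on_loss(w)
      print(round(w, 2))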

  • Performance analysis of exponential backoff

    Publication Year: 2005, Page(s): 343 - 355
    Cited by: Papers (109)
    PDF (528 KB) | HTML

    New analytical results are given for the performance of the exponential backoff (EB) algorithm. Most available studies on EB focus on the stability of the algorithm and little attention has been paid to the performance analysis of EB. In this paper, we analyze EB and obtain saturation throughput and medium access delay of a packet for a given number of nodes N. The analysis considers the general case of EB with backoff factor r; binary exponential backoff (BEB) algorithm is the special case with r=2. We also derive the analytical performance of EB with maximum retry limit M (EB-M), a practical version of EB. The accuracy of the analysis is checked against simulation results.
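    A simulation sketch of the analyzed mechanism, using a fixed per-attempt collision probability as a crude stand-in for saturation (the paper derives throughput and delay analytically):

      # EB-M sketch: after the k-th consecutive collision a node waits a uniform
      # number of slots in [0, W0 * r^k - 1]; after M+1 failed attempts the
      # packet is dropped. BEB is the r=2 special case.
      import random

      W0, r, M = 16, 2, 6     # initial window, backoff factor, retry limit

      def attempt_transmission(collides):
          """`collides(k)` says whether attempt k collides; returns slots waited,
          or None if the retry limit is exceeded and the packet is dropped."""
          waited = 0
          for k in range(M + 1):
              waited += random.randrange(int(W0 * r ** k))
              if not collides(k):
                  return waited
              # collision: continue with the window multiplied by r
          return None

      random.seed(0)
      p = 0.3                 # fixed collision probability (illustrative)
      results = [attempt_transmission(lambda k: random.random() < p)
                 for _ in range(10000)]
      delays = [d for d in results if d is not None]
      print("drop rate %.4f, mean access delay %.1f slots"
            % (results.count(None) / len(results), sum(delays) / len(delays)))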

  • A stochastic model of TCP/IP with stationary random losses

    Publication Year: 2005, Page(s): 356 - 369
    Cited by: Papers (59) | Patents (1)
    PDF (864 KB) | HTML

    In this paper, we present a model for the TCP/IP congestion control mechanism. The rate at which data is transmitted increases linearly in time until a packet loss is detected. At this point, the transmission rate is divided by a constant factor. Losses are generated by some exogenous random process which is assumed to be stationary ergodic. This allows us to account for any correlation and any distribution of inter-loss times. We obtain an explicit expression for the throughput of a TCP connection and bounds on the throughput when there is a limit on the window size. In addition, we study the effect of the Timeout mechanism on the throughput. A set of experiments is conducted over the real Internet and a comparison is provided with other models that make simple assumptions on the inter-loss time process. The comparison shows that our model approximates the throughput of TCP well for many distributions of inter-loss times.
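    The modeled rate process is simple to simulate. The sketch below uses i.i.d. exponential inter-loss times, one admissible stationary ergodic process (the model also covers correlated losses); the simulated throughput matches the closed-form value for this special case.

      # AIMD-rate sketch: the send rate grows linearly at slope a until an
      # exogenous loss arrives, then is divided by `factor`.
      import random

      random.seed(2)
      a, factor = 1.0, 2.0          # linear growth rate, division factor on loss
      rate, t, sent = 0.0, 0.0, 0.0
      mean_interloss = 5.0

      for _ in range(200000):
          gap = random.expovariate(1.0 / mean_interloss)   # time to next loss
          sent += rate * gap + 0.5 * a * gap * gap         # area under the ramp
          rate = (rate + a * gap) / factor                 # loss: divide the rate
          t += gap

      print("simulated throughput %.3f" % (sent / t))
      # stationary throughput = a*E[T]/(factor-1) + a*E[T^2]/(2*E[T]);
      # for exponential T with mean 5 this is 5 + 5 = 10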

  • Individual QoS versus aggregate QoS: a loss performance study

    Publication Year: 2005, Page(s): 370 - 383
    Cited by: Papers (8)
    PDF (480 KB) | HTML

    This paper explores the differences that can exist between individual and aggregate loss guarantees in an environment where guarantees are only provided at the aggregate level. The focus is on understanding which traffic parameters are responsible for inducing possible deviations and to what extent. In addition, we seek to evaluate the level of additional resources, e.g., bandwidth or buffer, required to ensure that all individual loss measures remain below their desired target. This paper's contributions are in developing analytical models that enable the evaluation of individual loss probabilities in settings where only aggregate losses are controlled, and in identifying traffic parameters that have a major influence on the differences between individual and aggregate losses. The latter allows us to further construct practical tools and guidelines for rapidly assessing if specific traffic sources can be safely multiplexed into a common service class.
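    A toy simulation shows the effect the paper quantifies analytically: two sources with the same mean rate but different burstiness share one aggregate-managed buffer, and the burstier source's individual loss ratio exceeds the aggregate figure. All parameters are synthetic.

      # Individual vs aggregate loss sketch: shared finite buffer, fixed service.
      import random

      random.seed(3)
      B, service = 20, 2            # buffer size (pkts), departures per slot
      q = 0
      arrived, lost = [0, 0], [0, 0]

      def on_off(p_on, burst):
          """Arrivals this slot: `burst` packets with probability p_on, else none."""
          return burst if random.random() < p_on else 0

      for _ in range(200000):
          for src, (p_on, burst) in enumerate([(0.5, 2), (0.1, 10)]):  # same mean
              pkts = on_off(p_on, burst)
              arrived[src] += pkts
              accepted = min(pkts, B - q)       # aggregate-level buffer control
              q += accepted
              lost[src] += pkts - accepted
          q = max(0, q - service)

      for src in (0, 1):
          print("source %d loss ratio %.4f" % (src, lost[src] / arrived[src]))
      print("aggregate loss ratio %.4f" % (sum(lost) / sum(arrived)))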

  • Time-diffusion synchronization protocol for wireless sensor networks

    Publication Year: 2005, Page(s): 384 - 397
    Cited by: Papers (94)
    PDF (1136 KB)

    In the near future, small intelligent devices will be deployed in homes, plantations, oceans, rivers, streets, and highways to monitor the environment. These devices require time synchronization, so voice and video data from different sensor nodes can be fused and displayed in a meaningful way at the sink. Instead of time synchronization between just the sender and receiver or within a local group of sensor nodes, some applications require the sensor nodes to maintain a similar time within a certain tolerance throughout the lifetime of the network. The Time-Diffusion Synchronization Protocol (TDP) is proposed as a network-wide time synchronization protocol. It allows the sensor network to reach an equilibrium time and maintains a small time deviation tolerance from the equilibrium time. In addition, it is analytically shown that the TDP enables time in the network to converge. Also, simulations are performed to validate the effectiveness of TDP in synchronizing the time throughout the network and balancing the energy consumed by the sensor nodes.
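    The diffusion intuition can be sketched as iterative neighbor averaging; TDP's full protocol, with elected masters and error-based weights, is richer than this.

      # Time-diffusion sketch: each round, every node nudges its clock toward
      # the average of its neighbors', converging to one equilibrium time.
      topology = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # 4-node line network
      clocks = {0: 0.0, 1: 40.0, 2: 80.0, 3: 120.0}       # initial offsets (ms)

      for rnd in range(60):
          updates = {}
          for node, nbrs in topology.items():
              avg_nbr = sum(clocks[n] for n in nbrs) / len(nbrs)
              updates[node] = clocks[node] + 0.5 * (avg_nbr - clocks[node])
          clocks = updates
          if rnd % 20 == 19:
              print(rnd + 1, {n: round(t, 1) for n, t in clocks.items()})
      # all clocks approach the same equilibrium value (60.0 here)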

  • Optimal tradeoffs for location-based routing in large-scale ad hoc networks

    Publication Year: 2005, Page(s): 398 - 410
    Cited by: Papers (15)
    PDF (560 KB) | HTML

    Existing location-based routing protocols are not versatile enough for a large-scale ad hoc environment to simultaneously meet all of the requirements of scalability, bandwidth efficiency, energy efficiency, and quality-of-service routing. To remedy this deficiency, we propose an optimal tradeoff approach that: 1) constructs a hybrid routing protocol by combining well-known location-update schemes (i.e., proactive location updates within nodes' local regions and a distributed location service), and 2) derives its optimal configuration, in terms of location-update thresholds (both distance and time-based), to minimize the overall routing overhead. We also build a route-discovery scheme based on an Internet-like architecture: first query the location of a destination, then apply a series of local-region routing steps until a complete route is found by aggregating the partial routes. To find the optimal thresholds for the hybrid protocol, we derive the costs associated with both location updates and route discovery as a function of location-update thresholds, assuming a random mobility model and a general distribution for route request arrivals. The problem of minimizing the total cost is then cast into a distributed optimization problem. We first prove that the total cost is a convex function of the thresholds, and then derive the optimal thresholds. Finally, we show, via simulation, that our analysis results indeed capture the real behavior.
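    The threshold-tuning step can be illustrated with a toy convex cost. The cost forms below are hypothetical stand-ins, not the paper's derived expressions; they only show the update/discovery tradeoff being minimized over the distance threshold D and time threshold T.

      # Threshold tuning sketch: total overhead = update cost + discovery cost.
      v = 10.0       # node speed (m/s), illustrative
      lam = 0.01     # route-request arrival rate (1/s), illustrative

      def total_cost(D, T):
          updates = v / D + 1.0 / T          # update frequency from either trigger
          discovery = lam * (D + v * T)      # staler locations cost more to resolve
          return updates + discovery

      best = min(((total_cost(D, T), D, T)
                  for D in range(10, 500, 10)
                  for T in range(1, 200)), key=lambda x: x[0])
      print("cost %.3f at D=%dm, T=%ds" % best)   # convex in each threshold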

  • Stable scheduling policies for fading wireless channels

    Publication Year: 2005, Page(s): 411 - 424
    Cited by: Papers (147) | Patents (1)
    PDF (552 KB) | HTML

    We study the problem of stable scheduling for a class of wireless networks. The goal is to stabilize the queues holding information to be transmitted over a fading channel. Few assumptions are made on the arrival process statistics other than the assumption that their mean values lie within the capacity region and that they satisfy a version of the law of large numbers. We prove that, for any mean arrival rate that lies in the capacity region, the queues will be stable under our policy. Moreover, we show that it is easy to incorporate imperfect queue length information and other approximations that can simplify the implementation of our policy.
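    The policies analyzed in this setting are of the max-weight family: each slot, serve the user with the largest product of queue backlog and current channel rate. A two-user sketch with illustrative arrival rates and fading states (not the paper's exact model):

      # Max-weight scheduling sketch over a fading channel.
      import random

      random.seed(4)
      queues = [0.0, 0.0]
      arrival_rates = [0.4, 0.8]                            # packets/slot (mean)
      channel_rates = [(0.0, 1.0, 2.0), (1.0, 2.0, 3.0)]    # fading states

      for slot in range(100000):
          rates = [random.choice(channel_rates[i]) for i in range(2)]
          served = max(range(2), key=lambda i: queues[i] * rates[i])  # max-weight
          queues[served] = max(0.0, queues[served] - rates[served])
          for i in range(2):
              if random.random() < arrival_rates[i]:        # Bernoulli arrivals
                  queues[i] += 1.0

      print("queue lengths after 100000 slots:", queues)    # stay bounded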

  • Impact of interferences on connectivity in ad hoc networks

    Publication Year: 2005, Page(s): 425 - 436
    Cited by: Papers (100)
    PDF (688 KB) | HTML

    We study the impact of interferences on the connectivity of large-scale ad hoc networks, using percolation theory. We assume that a bi-directional connection can be set up between two nodes if the signal to noise ratio at the receiver is larger than some threshold. The noise is the sum of the contribution of interferences from all other nodes, weighted by a coefficient γ, and of a background noise. We find that there is a critical value of γ above which the network is made of disconnected clusters of nodes. We also prove that if γ is nonzero but small enough, there exist node spatial densities for which the network contains a large (theoretically infinite) cluster of nodes, enabling distant nodes to communicate in multiple hops. Since small values of γ cannot be achieved without efficient CDMA codes, we investigate the use of a very simple TDMA scheme, where nodes can emit only every nth time slot. We show that it achieves connectivity similar to the previous system with a parameter γ/n.
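    The connection rule itself is easy to write down: nodes are linked only if the signal to noise-plus-interference ratio clears the threshold in both directions, with interference from all other nodes scaled by γ. A sketch with illustrative power, path-loss, and placement parameters (the paper's percolation analysis of such graphs is not reproduced here):

      # SINR connectivity sketch: larger gamma removes edges from the graph.
      import random

      random.seed(5)
      P, N0, beta, gamma, alpha = 1.0, 1e-4, 1.0, 0.02, 3.0
      nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]

      def received(i, j):
          """Received power at j from i, with power-law path loss."""
          d = ((nodes[i][0] - nodes[j][0]) ** 2 +
               (nodes[i][1] - nodes[j][1]) ** 2) ** 0.5
          return P * max(d, 1.0) ** -alpha     # clamp to avoid the singularity at 0

      def connected(i, j):
          """Bidirectional SINR test with gamma-weighted interference."""
          for a, b in ((i, j), (j, i)):
              interference = sum(received(k, b) for k in range(len(nodes))
                                 if k not in (a, b))
              if received(a, b) / (N0 + gamma * interference) < beta:
                  return False
          return True

      edges = [(i, j) for i in range(50) for j in range(i + 1, 50) if connected(i, j)]
      print("edges with gamma=%.2f: %d" % (gamma, len(edges)))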

  • Lightpath re-optimization in mesh optical networks

    Publication Year: 2005, Page(s): 437 - 447
    Cited by: Papers (37) | Patents (1)
    PDF (848 KB) | HTML

    Intelligent mesh optical networks deployed today offer unparalleled capacity, flexibility, availability, and, inevitably, new challenges to master all these qualities in the most efficient and practical manner. More specifically, demands are routed according to the state of the network available at the moment. As the network and the traffic evolve, the lightpaths of the existing demands become sub-optimal. In this paper we study two algorithms for re-optimizing lightpaths in resilient mesh optical networks: a complete re-optimization algorithm that re-routes both primary and backup paths, and a partial re-optimization algorithm that re-routes the backup paths only. We show that, on average, these algorithms allow bandwidth savings of 3% to 5% of the total capacity when only the backup path is re-routed, and substantially larger savings when both the working and backup paths are re-routed. We also prove that trying all possible demand permutations with an online algorithm does not guarantee optimality, and in certain cases does not achieve it, while re-optimization does achieve it for the same scenario. This observation motivates the need for a re-optimization approach that does more than simply examine different sequences, and we propose and experiment with such an approach. Re-optimization has actually been performed in a nationwide live optical mesh network, and the resulting savings, reported in this paper, validate the practicality and usefulness of re-optimization in real networks.

  • Optimization of optical cross-connects with wave-mixing conversion

    Publication Year: 2005, Page(s): 448 - 458
    Cited by: Papers (6)
    PDF (624 KB) | HTML

    This paper presents new constructions of multistage wave-mixing networks with arbitrary b×b space-switching elements, where b ≥ 2. In these networks, for a size of F fiber links and W wavelengths per link, converter requirements are O(F log_b W) or O(FW/b) for rearrangeable nodes, and O(F log_b W log_b(FW)) or O(FW log_b(FW)/b) for different types of strictly nonblocking nodes inspired by the Cantor topology. In all cases, the worst-case number of cascaded conversions is O(log_b W). When b = W ≤ F, the required number of converters and the worst-case number of cascaded conversions are O(F) and O(1), respectively, both optimal up to a constant. The new networks generalize and improve upon previous wave-mixing networks based on 2×2 space switches.

  • Quality without compromise [advertisement]

    Publication Year: 2005, Page(s): 459
    PDF (319 KB)
    Freely Available from IEEE
  • IEEE order form for reprints

    Publication Year: 2005, Page(s): 460
    PDF (378 KB)
    Freely Available from IEEE
  • IEEE/ACM Transactions on Networking society information

    Publication Year: 2005, Page(s): c3
    PDF (38 KB)
    Freely Available from IEEE
  • IEEE/ACM Transactions on Networking Information for authors

    Publication Year: 2005, Page(s): c3
    PDF (30 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.


Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign