IEEE/ACM Transactions on Networking

Issue 4 • August 2002

  • Algorithms for provisioning virtual private networks in the hose model

    Page(s): 565 - 578

    Virtual private networks (VPNs) provide customers with predictable and secure network connections over a shared network. The recently proposed hose model for VPNs allows for greater flexibility since it permits traffic to and from a hose endpoint to be arbitrarily distributed to other endpoints. We develop novel algorithms for provisioning VPNs in the hose model. We connect VPN endpoints using a tree structure and our algorithms attempt to optimize the total bandwidth reserved on edges of the VPN tree. We show that even for the simple scenario in which network links are assumed to have infinite capacity, the general problem of computing the optimal VPN tree is NP-hard. Fortunately, for the special case when the ingress and egress bandwidths for each VPN endpoint are equal, we can devise an algorithm for computing the optimal tree whose time complexity is O(mn), where m and n are the number of links and nodes in the network, respectively. We present a novel integer programming formulation for the general VPN tree computation problem (that is, when ingress and egress bandwidths of VPN endpoints are arbitrary) and develop an algorithm that is based on the primal-dual method. Our experimental results with synthetic network graphs indicate that the VPN trees constructed by our proposed algorithms dramatically reduce bandwidth requirements (in many instances, by more than a factor of 2) compared to scenarios in which Steiner trees are employed to connect VPN endpoints.
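
    Once a tree is fixed, the symmetric hose model determines the reservation on each tree edge: it is the smaller of the total endpoint bandwidth on either side of that edge. A minimal Python sketch of this cost computation (the adjacency-list representation and all names are our illustration, not the paper's code):

    ```python
    def tree_reservation_cost(tree, b, root):
        """Total bandwidth reserved on a VPN tree, symmetric hose model.

        tree: adjacency dict {node: [neighbors, ...]} describing the tree
        b:    hose bandwidth per node; b[v] == 0 for non-endpoints
        Removing a tree edge splits the endpoints into two sides; the
        reservation on that edge is min(bandwidth of one side, other side).
        """
        total = sum(b.values())
        # Iterative DFS to get a child-before-parent processing order.
        parent, order, stack = {root: None}, [], [root]
        while stack:
            v = stack.pop()
            order.append(v)
            for w in tree[v]:
                if w != parent[v]:
                    parent[w] = v
                    stack.append(w)
        subtree, cost = {}, 0
        for v in reversed(order):                  # leaves first
            s = b.get(v, 0) + sum(subtree[w] for w in tree[v]
                                  if parent.get(w) == v)
            subtree[v] = s
            if parent[v] is not None:              # edge (parent[v], v)
                cost += min(s, total - s)
        return cost
    ```

    Roughly speaking, the O(mn) algorithm for the equal-bandwidth case can then be viewed as evaluating one candidate shortest-path tree rooted at each of the n nodes and keeping the cheapest under this cost.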

  • Algebra and algorithms for QoS path computation and hop-by-hop routing in the Internet

    Page(s): 541 - 550

    Prompted by the advent of quality-of-service routing in the Internet, we investigate the properties that path weight functions must have so that hop-by-hop routing is possible and optimal paths can be computed with a generalization of E.W. Dijkstra's algorithm (see Numer. Math., vol.1, p.269-71, 1959). We define an algebra of weights which contains a binary operation, for the composition of link weights into path weights, and an order relation. Isotonicity is the key property of the algebra. It states that the order relation between the weights of any two paths is preserved if both of them are either prefixed or appended by a common, third, path. We show that isotonicity is both necessary and sufficient for a generalized Dijkstra's algorithm to yield optimal paths. Likewise, isotonicity is also both necessary and sufficient for hop-by-hop routing. However, without strict isotonicity, hop-by-hop routing based on optimal paths may produce routing loops. They are prevented if every node computes what we call lexicographic-optimal paths. These paths can be computed with an enhanced Dijkstra's algorithm that has the same complexity as the standard one. Our findings are extended to multipath routing as well. As special cases of the general approach, we conclude that shortest-widest paths can neither be computed with a generalized Dijkstra's algorithm nor can packets be routed hop-by-hop over those paths. In addition, loop-free hop-by-hop routing over widest and widest-shortest paths requires each node to compute lexicographic-optimal paths, in general.
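
    The algebra can be made concrete with a Dijkstra variant parameterized by the composition operation and the order relation. A hedged Python sketch (the function names and the widest-shortest encoding are ours); as the abstract states, it returns optimal paths only when the supplied algebra is isotone:

    ```python
    import heapq

    def generalized_dijkstra(graph, source, combine, identity):
        """Dijkstra parameterized by a weight algebra.

        graph:    {u: [(v, link_weight), ...]}
        combine:  composes a path weight with a link weight
        identity: weight of the empty path
        Path weights are compared with '<'; correctness requires the
        isotonicity property discussed in the abstract.
        """
        best = {source: identity}
        pq = [(identity, source)]
        while pq:
            w, u = heapq.heappop(pq)
            if w != best.get(u):
                continue                          # stale queue entry
            for v, lw in graph.get(u, ()):
                cand = combine(w, lw)
                if v not in best or cand < best[v]:
                    best[v] = cand
                    heapq.heappush(pq, (cand, v))
        return best

    # Widest-shortest paths: fewest hops, ties broken by larger bottleneck
    # width. Encode a path weight as (hops, -width) so that lexicographic
    # '<' prefers fewer hops and then wider paths; this algebra is isotone.
    ws_combine = lambda pw, link_width: (pw[0] + 1, max(pw[1], -link_width))
    # usage: generalized_dijkstra(g, s, ws_combine, identity=(0, float("-inf")))
    ```

    Shortest-widest paths, by contrast, cannot be encoded this way; their algebra is not isotone, which is exactly the abstract's negative result.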

  • Mobility increases the capacity of ad hoc wireless networks

    Page(s): 477 - 486

    The capacity of ad hoc wireless networks is constrained by the mutual interference of concurrent transmissions between nodes. We study a model of an ad hoc network where n nodes communicate in random source-destination pairs. These nodes are assumed to be mobile. We examine the per-session throughput for applications with loose delay constraints, such that the topology changes over the time-scale of packet delivery. Under this assumption, the per-user throughput can increase dramatically when nodes are mobile rather than fixed. This improvement can be achieved by exploiting a form of multiuser diversity via packet relaying.
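
    The mechanism behind the result can be caricatured in a few lines: instead of long multihop routes, a source hands each packet to a random nearby relay, and the relay delivers it only when mobility brings it next to the destination, so every packet takes at most two wireless hops. A toy Python sketch of this two-hop relaying (the grid size, walk model, and all parameters are illustrative assumptions, not the paper's model):

    ```python
    import random

    def two_hop_relay(steps=10_000, n=50, grid=10):
        """Toy two-hop relay on a torus: source -> random relay -> destination.

        Nodes perform independent random walks; packets are exchanged only
        between co-located nodes, mimicking nearest-neighbor transmission.
        """
        pos = [(random.randrange(grid), random.randrange(grid)) for _ in range(n)]
        src, dst = 0, 1
        carrying = set()                      # relays currently holding a packet
        delivered = 0
        for _ in range(steps):
            pos = [((x + random.choice((-1, 0, 1))) % grid,
                    (y + random.choice((-1, 0, 1))) % grid) for x, y in pos]
            for r in range(2, n):
                if pos[r] == pos[src]:
                    carrying.add(r)           # source hands over a fresh packet
                elif r in carrying and pos[r] == pos[dst]:
                    carrying.discard(r)       # relay meets destination: deliver
                    delivered += 1
        return delivered / steps              # crude per-step delivery rate

    print(two_hop_relay())
    ```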

  • Efficient randomized Web-cache replacement schemes using samples from past eviction times

    Page(s): 441 - 454

    The problem of document replacement in Web caches has received much attention, and it has been shown that the eviction rule "replace the least recently used document" performs poorly in Web caches. Instead, using a combination of several criteria, such as the recency and frequency of use, the size, and the cost of fetching a document, leads to a sizable improvement in hit rate and latency reduction. However, implementing these novel schemes requires maintaining complicated data structures. We propose randomized algorithms for approximating any existing Web-cache replacement scheme, thereby avoiding the need for such data structures. At document-replacement times, the randomized algorithm samples N documents from the cache and replaces the least useful document from the sample, where usefulness is determined according to the criteria mentioned above.
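
    The sampling rule is short enough to state in code. A minimal sketch (the utility function is a stand-in for whatever combination of recency, frequency, size, and fetch cost is being approximated; the reuse of past samples referred to in the title is our guess at the mechanism, controlled here by `m_keep`):

    ```python
    import random

    def evict(cache, utility, n_samples=8, m_keep=3, retained=()):
        """Evict the least useful of N documents sampled from the cache.

        cache:    dict mapping document id -> metadata consumed by `utility`
        utility:  callable scoring how valuable a document is to keep
        retained: samples carried over from the previous eviction; reusing a
                  few of the least useful leftovers (likely future eviction
                  candidates) avoids drawing all N samples afresh.
        """
        pool = [d for d in retained if d in cache]
        pool += random.sample([d for d in cache if d not in pool],
                              n_samples - len(pool))
        pool.sort(key=lambda doc: utility(cache[doc]))
        victim = pool.pop(0)              # least useful sampled document
        del cache[victim]
        return victim, pool[:m_keep]      # leftovers kept for the next eviction
    ```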

  • Impact of TCP-like congestion control on the throughput of multicast groups

    Page(s): 500 - 512

    We study the impact of random queueing delays stemming from traffic variability on the performance of a multicast session. With a simple analytical model, we analyze the throughput degradation within a multicast (one-to-many) tree under TCP-like congestion and flow control. We use the (max,plus) formalism together with methods based on stochastic comparison (association and convex ordering) and on the theory of extremes to prove various properties of the throughput. We first prove that the throughput predicted by a deterministic model is systematically optimistic. In the presence of light-tailed random delays, we show that the throughput decreases according to the inverse of the logarithm of the number of receivers. We find analytically an upper and a lower bound for the throughput degradation. Within these bounds, we characterize the degradation which is obtained for various tree topologies. In particular, we observe that a class of trees commonly found in IP multicast sessions is significantly more sensitive to traffic variability than other topologies.
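
    The logarithmic degradation admits a compact statement. A hedged LaTeX sketch of the scaling result (the notation is ours: $R$ is the number of receivers, $\bar{T}(R)$ the group throughput, and the constants depend on the delay distribution and the tree topology):

    ```latex
    % Throughput scaling under light-tailed random delays (notation ours):
    \bar{T}(R) = \Theta\!\left(\frac{1}{\log R}\right),
    \qquad
    \underline{c} \;\le\; \liminf_{R\to\infty} \bar{T}(R)\,\log R
    \;\le\; \limsup_{R\to\infty} \bar{T}(R)\,\log R \;\le\; \overline{c}.
    ```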

  • Optimal retrial and timeout strategies for accessing network resources

    Page(s): 551 - 564

    The notion of a timeout (i.e., the maximal time to wait before retrying an action) occurs in many networking contexts. Timeouts are used especially in large-scale networks, where negative acknowledgments (NACKs) on failures have significantly higher delays than positive acknowledgments (ACKs) and frequently are not employed at all. Selecting a proper timeout involves a tradeoff between waiting too long and loading the network needlessly by waiting too little. The common approach is to set the timeout to a large value, such that, unless the action fails, it is acknowledged within the timeout duration with high probability. This approach leads to overly long, far from optimal, timeouts. We take a quantitative approach to computing and studying the optimal timeout strategy. The tradeoff is modeled by introducing a "cost" per unit time (until success) and a "cost" per repeated attempt. The optimal strategy is then defined as the one that a selfish user would follow to minimize its expected cost. We discuss various practical interpretations of these costs. We then derive formulas for the optimal timeout values and study some of their fundamental properties. We identify the conditions under which making parallel attempts from the outset is worthwhile. We also demonstrate a striking property of positive feedback and study the interaction that results when many users selfishly apply the optimal timeout strategy; using a noncooperative game model, we show that it suffers from an inherent instability problem. Some implications of these results for network design are discussed.
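
    Under a simplified renewal model (each attempt is abandoned and retried after T time units, attempts are independent, waiting costs c_t per unit time, and each attempt costs c_a), the expected total cost is J(T) = (c_a + c_t E[min(D, T)]) / P(D <= T), which can be minimized numerically. A Python sketch under those assumptions; this is our simplification, not the paper's exact formulation:

    ```python
    import math

    def expected_cost(T, c_time, c_attempt, cdf, steps=2_000):
        """J(T) = (c_attempt + c_time * E[min(D, T)]) / P(D <= T),
        using E[min(D, T)] = integral over [0, T] of (1 - F(t)) dt."""
        dt = T / steps
        e_wait = sum((1.0 - cdf(i * dt)) * dt for i in range(steps))
        p_success = cdf(T)
        if p_success == 0.0:
            return math.inf
        return (c_attempt + c_time * e_wait) / p_success

    # Example: exponential response delay with mean 1; grid-search the optimum.
    cdf = lambda t: 1.0 - math.exp(-t)
    best_T = min((0.1 * k for k in range(1, 200)),
                 key=lambda T: expected_cost(T, c_time=1.0, c_attempt=0.5,
                                             cdf=cdf))
    ```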

  • Energy-efficient packet transmission over a wireless link

    Page(s): 487 - 499

    The paper considers the problem of minimizing the energy used to transmit packets over a wireless link via lazy schedules that judiciously vary packet transmission times. The problem is motivated by the following observation. With many channel coding schemes, the energy required to transmit a packet can be significantly reduced by lowering the transmission power and code rate, and therefore transmitting the packet over a longer period of time. However, information is often time-critical or delay-sensitive, and transmission times cannot be made arbitrarily long. We therefore consider packet transmission schedules that minimize energy subject to a deadline or a delay constraint. Specifically, we obtain an optimal offline schedule for a node operating under a deadline constraint. An inspection of the form of this schedule naturally leads us to an online schedule which is shown, through simulations, to perform close to the optimal offline schedule. Taking the deadline to infinity, we provide an exact probabilistic analysis of our offline scheduling algorithm. The results of this analysis enable us to devise a lazy online algorithm that varies transmission times according to backlog. We show that this lazy schedule is significantly more energy-efficient than a deterministic (fixed transmission time) schedule that guarantees queue stability for the same range of arrival rates.
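
    The "lazy" intuition rests on the energy per packet being a convex, decreasing function of its transmission duration, so equalizing durations beats rushing some packets and idling afterwards. A small numeric sketch with a Shannon-style energy function (the function and all numbers are illustrative assumptions):

    ```python
    def energy(tau, bits=10.0, bandwidth=1.0, noise=1.0):
        """Energy to send `bits` in time tau at the minimal feasible power:
        bits/tau = W log2(1 + P/N)  =>  P = N * (2**(bits/(W*tau)) - 1)."""
        power = noise * (2.0 ** (bits / (bandwidth * tau)) - 1.0)
        return power * tau

    # Two packets, both available now, total budget of 20 time units:
    print(energy(10) + energy(10))   # equal split: 20.0
    print(energy(5) + energy(15))    # skewed split: about 23.8, i.e., worse
    ```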

  • Delay jitter bounds and packet scale rate guarantee for expedited forwarding

    Page(s): 529 - 540

    We consider the definition of the expedited forwarding per-hop behavior (EF PHB) as given in RFC 2598 and its impact on worst-case end-to-end delay jitter. On the one hand, the definition in RFC 2598 can be used to predict extremely low end-to-end delay jitter, independent of the network scale. On the other hand, if networks are allowed to become arbitrarily large, the worst-case delay jitter can be made arbitrarily large even while each flow traverses at most a specified number of hops; this contradicts the previous statement. We analyze where the contradiction originates and find the explanation: the definition in RFC 2598 is not easily implementable in known schedulers, mainly because it is not formal enough, and also because it does not contain an error term. We propose a new definition for the EF PHB, called "packet scale rate guarantee" (PSRG), that preserves the spirit of RFC 2598 while allowing a number of reasonable implementations, and that has very useful properties for per-node and end-to-end network engineering. We show that this definition implies a rate-latency service curve property. We also show that it is equivalent, in some sense, to the stronger concept of "adaptive service guarantee". We then propose proven bounds on delay jitter for networks implementing this new definition, in cases both without and with loss.
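
    PSRG admits a compact recursive statement. A LaTeX sketch of the form in which it is usually given (notation: $a_n$, $d_n$ are the arrival and departure times of the $n$-th packet, $l_n$ its length, $r$ the rate, $e$ the error term); we reproduce it from memory, so treat the exact form as indicative:

    ```latex
    % Packet scale rate guarantee with rate r and error term e:
    d_n \;\le\; f_n + e,
    \qquad
    f_n = \max\bigl\{ a_n,\; \min(d_{n-1},\, f_{n-1}) \bigr\} + \frac{l_n}{r},
    \qquad f_0 = d_0 = 0 .
    ```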

  • The BLUE active queue management algorithms

    Page(s): 513 - 528

    In order to stem the increasing packet loss rates caused by an exponential increase in network traffic, the IETF has been considering the deployment of active queue management techniques such as RED (random early detection) (see Floyd, S. and Jacobson, V., IEEE/ACM Trans. Networking, vol.1, p.397-413, 1993). While active queue management can potentially reduce packet loss rates in the Internet, we show that current techniques are ineffective in preventing high loss rates. The inherent problem with these algorithms is that they use queue lengths as the indicator of the severity of congestion. In light of this observation, a fundamentally different active queue management algorithm, called BLUE, is proposed, implemented and evaluated. BLUE uses packet loss and link idle events to manage congestion. Using both simulation and controlled experiments, BLUE is shown to perform significantly better than RED, both in terms of packet loss rates and buffer size requirements in the network. As an extension to BLUE, a novel technique based on Bloom filters (see Bloom, B., Commun. ACM, vol.13, no.7, p.422-6, 1970) is described for enforcing fairness among a large number of flows. In particular, we propose and evaluate stochastic fair BLUE (SFB), a queue management algorithm which can identify and rate-limit nonresponsive flows using a very small amount of state information.
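
    BLUE's control law needs only one marking probability and two event hooks, which is what makes it so cheap. A Python sketch of the update rule as commonly described (parameter values are illustrative; the loss increment d1 is typically set much larger than the idle decrement d2):

    ```python
    class Blue:
        """BLUE AQM: adapt the mark/drop probability from packet-loss and
        link-idle events instead of from the instantaneous queue length."""

        def __init__(self, d1=0.02, d2=0.002, freeze_time=0.1):
            self.p = 0.0                  # current mark/drop probability
            self.d1, self.d2 = d1, d2     # increment on loss, decrement on idle
            self.freeze_time = freeze_time
            self.last_update = 0.0

        def _frozen(self, now):
            # freeze_time limits how fast p may change, letting the effect
            # of one update be felt before the next one is applied.
            return now - self.last_update < self.freeze_time

        def on_packet_loss(self, now):    # buffer overflowed: mark harder
            if not self._frozen(now):
                self.p = min(1.0, self.p + self.d1)
                self.last_update = now

        def on_link_idle(self, now):      # link went idle: mark less
            if not self._frozen(now):
                self.p = max(0.0, self.p - self.d2)
                self.last_update = now
    ```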

  • Managing capacity for telecommunications networks under uncertainty

    Page(s): 579 - 588

    The existing telecommunications infrastructure in most of the world is adequate to deliver voice and text applications, but demand for broadband services such as streaming video and large file transfer (e.g., movies) is accelerating. The explosion in Internet use has created a huge demand for telecommunications capacity. However, this demand is extremely volatile, making network planning difficult. Modern financial option pricing methods are applied to the problem of network investment decision timing. In particular, we study the optimal decision problem of building new network capacity in the presence of stochastic demand for services. Adding new capacity requires a capital investment, which must be balanced by uncertain future revenues. We study the underlying risk factor in the bandwidth market and then apply real options theory to the upgrade decision problem. We notice that sometimes it is optimal to wait until the maximum capacity of a line is nearly reached before upgrading directly to the line with the highest known transmission rate (skipping the intermediate lines). It appears that past upgrade practice underestimates the conflicting effects of growth and volatility. This explains the current overcapacity in available bandwidth. To the best of our knowledge, this real options approach has not been used previously in the area of network capacity planning. Consequently, we believe that this methodology can offer insights for network management.
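
    A standard way to formalize "volatile demand plus an irreversible upgrade" is to model demand as geometric Brownian motion and treat the upgrade as an option exercised at an optimal threshold. A LaTeX sketch of that textbook setup (our illustration of the methodology, not the paper's exact model):

    ```latex
    % Demand for bandwidth as geometric Brownian motion:
    dD_t = \mu D_t \, dt + \sigma D_t \, dW_t ,
    % with the upgrade (capital cost I) exercised the first time demand
    % reaches an optimal threshold D^{*}:
    \tau^{*} = \inf\{\, t \ge 0 : D_t \ge D^{*} \,\} .
    ```

    In this setup, higher volatility $\sigma$ raises $D^{*}$, i.e., waiting longer pays, which is consistent with the abstract's observation about volatility being underestimated in past upgrade practice.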

  • Structuring Internet media streams with cueing protocols

    Page(s): 466 - 476

    We propose a new, media-independent protocol for including program timing, structure, and identity information in Internet media streams. The protocol uses signaling messages called cues to indicate events whose timing is significant to receivers, such as the start or stop time of a media program. We describe the implementation and operation of a prototype Internet radio station which transmits program cues in audio broadcasts using the Real-time Transport Protocol (RTP). A collection of simple yet powerful stream-processing applications we implemented demonstrates how application creation is greatly eased when media streams are enriched with program cues.
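
    A cue is just a structured, timestamped signaling message carried alongside the media. A hypothetical Python sketch of what one might carry (the field names are ours; the paper's actual wire format over RTP will differ):

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class CueEvent(Enum):
        PROGRAM_START = 1
        PROGRAM_STOP = 2

    @dataclass
    class Cue:
        """A cue: an event whose timing is significant to receivers."""
        event: CueEvent
        program_id: str        # identity of the program the cue refers to
        rtp_timestamp: int     # media time of the event, in RTP units
        wallclock: float       # sender wall clock, for cross-stream alignment

    # e.g., marking the start of a program in an Internet radio stream:
    cue = Cue(CueEvent.PROGRAM_START, "morning-news",
              rtp_timestamp=48_000 * 60, wallclock=1_028_246_400.0)
    ```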

  • Dynamic parallel access to replicated content in the Internet

    Page(s): 455 - 465

    Popular content is frequently replicated in multiple servers or caches in the Internet to offload origin servers and improve end-user experience. However, choosing the best server is a nontrivial task, and a bad choice may provide a poor end-user experience. In contrast to retrieving a file from a single server, we propose a parallel-access scheme in which end users access multiple servers at the same time, fetching different portions of that file from different servers and reassembling them locally. The amount of data retrieved from a particular server depends on the resources available at that server or along the path from the user to the server. Faster servers deliver bigger portions of a file, while slower servers deliver smaller portions. If the available resources at a server or along the path change during the download of a file, a dynamic parallel access automatically shifts the load from congested locations to less loaded parts (servers and links) of the Internet. The end result is that users experience significant speedups and very consistent response times. Moreover, there is no need for complicated server-selection algorithms, and load is dynamically shared among all servers. The dynamic parallel-access scheme presented does not require any modifications to servers or content and can easily be included in browsers, peer-to-peer applications, or content distribution networks to speed up delivery of popular content.
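
    A static version of the idea maps directly onto HTTP range requests. A minimal sketch using only the Python standard library (the paper's dynamic scheme additionally reassigns unfetched blocks to whichever server is currently fastest; the URLs are placeholders):

    ```python
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import Request, urlopen

    def fetch_range(url, start, end):
        """Fetch bytes [start, end] of a document via an HTTP Range request."""
        req = Request(url, headers={"Range": f"bytes={start}-{end}"})
        with urlopen(req) as resp:
            return resp.read()

    def parallel_fetch(urls, size, block=64 * 1024):
        """Fetch one document replicated at several `urls` block by block,
        spreading blocks round-robin across servers, reassembling in order."""
        ranges = [(i, min(i + block, size) - 1) for i in range(0, size, block)]
        with ThreadPoolExecutor(max_workers=len(urls)) as pool:
            parts = pool.map(
                lambda job: fetch_range(urls[job[0] % len(urls)], *job[1]),
                enumerate(ranges))
        return b"".join(parts)
    ```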


Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.

Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign