
IEEE/ACM Transactions on Networking

Issue 2 • April 2009

  • [Front cover]

    Page(s): C1 - C4
  • IEEE/ACM Transactions on Networking publication information

    Page(s): C2
  • The Design Trade-Offs of BitTorrent-Like File Sharing Protocols

    Page(s): 365 - 376

    The BitTorrent (BT) file sharing protocol is very popular due to its scalability and its built-in incentive mechanism to reduce free-riding. However, in designing such P2P file sharing protocols, there is a fundamental trade-off between maintaining fairness and providing good performance. In particular, the system can either keep peers (especially resourceful ones) in the system for as long as possible so as to achieve better overall performance, or allow resourceful peers to finish their downloads as quickly as possible so as to achieve fairness. The current BT protocol represents only one possible implementation in this design space. The objective of this paper is to characterize the design space of BT-like protocols. The rationale for considering fairness in the P2P file sharing context is to use it as a measure of willingness to provide service. We show that there is a wide range of design choices, ranging from optimizing file download time to optimizing the overall fairness measure. More importantly, we show that there is a simple and easily implementable design knob that lets the system operate at a particular point in the design space. We also discuss different algorithms, ranging from centralized to distributed, for realizing the design knob. Performance evaluations are carried out, both via simulation and network measurement, to quantify the merits and properties of BT-like file sharing protocols.

  • On Unbiased Sampling for Unstructured Peer-to-Peer Networks

    Page(s): 377 - 390

    This paper presents a detailed examination of how the dynamic and heterogeneous nature of real-world peer-to-peer systems can introduce bias into the selection of representative samples of peer properties (e.g., degree, link bandwidth, number of files shared). We propose the metropolized random walk with backtracking (MRWB) as a viable and promising technique for collecting nearly unbiased samples and conduct an extensive simulation study to demonstrate that our technique works well for a wide variety of commonly encountered peer-to-peer network conditions. We have implemented the MRWB algorithm for selecting peer addresses uniformly at random in a tool called ion-sampler. Using the Gnutella network, we empirically show that ion-sampler yields more accurate samples than tools that rely on commonly used sampling techniques, and that it offers dramatic improvements in efficiency and scalability compared to performing a full crawl.
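
    A minimal sketch of the Metropolis-Hastings acceptance rule at the core of such walks (the backtracking that MRWB adds to cope with peer churn is omitted, and the graph representation and names are mine):

        import random

        def metropolis_sample(graph, start, walk_len):
            """One Metropolis-Hastings random walk on an undirected
            graph given as {node: [neighbors]}. The degree-based
            acceptance rule makes the stationary distribution uniform
            over nodes, removing the bias toward high-degree peers
            that a plain random walk would have."""
            u = start
            for _ in range(walk_len):
                v = random.choice(graph[u])
                # Accept the move with probability min(1, deg(u)/deg(v)).
                if random.random() <= len(graph[u]) / len(graph[v]):
                    u = v
                # Otherwise stay at u for this step.
            return u

    A plain random walk visits a peer in proportion to its degree; the min(1, deg(u)/deg(v)) acceptance probability is exactly the correction that flattens the stationary distribution to uniform.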

  • Lord of the Links: A Framework for Discovering Missing Links in the Internet Topology

    Page(s): 391 - 404

    The topology of the Internet at the autonomous system (AS) level is not yet fully discovered despite significant research activity. The community still does not know how many links are missing, where these links are, and whether the missing links will change our conceptual model of the Internet topology. An accurate and complete model of the topology is important for protocol design, performance evaluation, and analysis. The goal of our work is to develop methodologies and tools to identify and validate such missing links between ASes. In this work, we develop several methods and identify a significant number of missing links, particularly of the peer-to-peer type. Interestingly, most of the missing AS links that we find exist as peer-to-peer links at Internet exchange points (IXPs). First, we provide a large-scale comprehensive synthesis of the available sources of information. We cross-validate and compare BGP routing tables, Internet routing registries, and traceroute data, and we extract significant new information from the less-studied IXPs. We identify 40% more edges and approximately 300% more peer-to-peer edges compared to commonly used data sets. All of these edges have been verified by either BGP tables or traceroute. Second, we identify properties of the new edges and quantify their effects on important topological properties. Given the new peer-to-peer edges, we find that for some ASes more than 50% of their paths stop going through their ISPs, assuming policy-aware routing. A surprising observation is that the degree of an AS may be a poor indicator of which ASes it will peer with.

  • Web User-Session Inference by Means of Clustering Techniques

    Page(s): 405 - 416

    This paper focuses on the definition and identification of “Web user-sessions”, aggregations of several TCP connections generated by the same source host. The identification of a user-session is nontrivial. Traditional approaches rely on threshold-based mechanisms. However, these techniques are very sensitive to the value chosen for the threshold, which may be difficult to set correctly. By applying clustering techniques, we define a novel methodology to identify Web user-sessions without requiring an a priori definition of threshold values. We define a clustering-based approach, discuss its pros and cons, and apply it to real traffic traces. The proposed methodology is applied to artificially generated traces to evaluate its benefits against traditional threshold-based approaches. We also analyze the characteristics of user-sessions extracted by the clustering methodology from real traces and study their statistical properties. Web user-sessions tend to be Poisson, but correlation may arise during periods of anomalous network/host behavior.

  • Robust Synchronization of Absolute and Difference Clocks Over Networks

    Page(s): 417 - 430

    We present a detailed re-examination of the problem of inexpensive yet accurate clock synchronization for networked devices. Based on an empirically validated, parsimonious abstraction of the CPU oscillator as a timing source, accessible via the TSC register in popular PC architectures, we build on the key observation that the measurement of time differences, and of absolute time, requires separate clocks, both conceptually and practically, with distinct algorithmic, robustness, and accuracy characteristics. Combined with round-trip-time-based filtering of network delays between the host and the remote time server, we define robust algorithms for the synchronization of the absolute and difference TSC clocks over a network. We demonstrate the effectiveness of the principles and algorithms using months of real data collected using multiple servers. We give detailed performance results for a full implementation running live and unsupervised under numerous scenarios, which show very high reliability and accuracy approaching the fundamental limits imposed by host system noise. Our synchronization algorithms are inherently robust to many factors, including packet loss, server outages, route changes, and network congestion.
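
    For intuition, here is a hedged sketch of generic minimum-RTT filtering over NTP-style timestamp exchanges; the paper's actual algorithms operate on the TSC and are considerably more elaborate (function and variable names are mine):

        def estimate_offset(samples):
            """Estimate clock offset from (t0, t1, t2, t3) exchanges,
            where t0/t3 are client send/receive times and t1/t2 are
            server receive/send times. Exchanges with the smallest
            round-trip time suffered the least queueing delay, so
            they give the most trustworthy offset estimate."""
            def rtt(s):
                t0, t1, t2, t3 = s
                return (t3 - t0) - (t2 - t1)

            def offset(s):
                t0, t1, t2, t3 = s
                return ((t1 - t0) + (t2 - t3)) / 2.0

            best = min(samples, key=rtt)  # minimum-RTT filtering
            return offset(best)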

  • On the Time Synchronization of Distributed Log Files in Networks With Local Broadcast Media

    Page(s): 431 - 444

    Real-world experiments in computer networks typically result in a set of log files, one for each system involved in the experiment. Each log file contains event timestamps based on the local clock of the respective system. These clocks are not perfectly accurate and deviate from each other. For a thorough analysis, however, a common time base is necessary. In this paper, we tackle the fundamental problem of creating such a common time base for experiments in networks with local broadcast media, where transmissions can be received by more than one node. We show how clock deviations and event times can be estimated with very high accuracy, without introducing any additional traffic into the network. The proposed method is applied after the experiment is completed, using just the set of local log files as its input. It leads to a large linear program with a very specific structure. We exploit this structure to solve the synchronization problem quickly and efficiently, and present an implementation of a specialized solver. Furthermore, we give analytical and numerical evaluation results and present real-world experiments, all underlining the performance and accuracy of the method.
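
    As a rough illustration of the kind of linear program involved (the notation here is mine, not the paper's): model each local clock as affine in true time, h_i(t) = a_i t + b_i; every broadcast event e heard by receivers i and j then constrains the reconstructed reception times to agree up to a small slack:

        \min \sum_{e} \delta_e
        \quad \text{s.t.} \quad
        \left| h_i^{-1}(T_i^{e}) - h_j^{-1}(T_j^{e}) \right| \le \delta_e
        \quad \forall e,\ \forall i, j \in \mathrm{rcv}(e)

    where T_i^e is node i's local timestamp for event e. One pairwise constraint per receiver pair per broadcast is what makes the program large, and the per-event block structure is the kind of regularity a specialized solver can exploit.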

  • Quantifying Path Exploration in the Internet

    Page(s): 445 - 458

    Previous measurement studies have shown the existence of path exploration and slow convergence in the global Internet routing system, and a number of protocol enhancements have been proposed to remedy the problem. However, existing measurements were conducted only over a small number of testing prefixes. There has been no systematic study to quantify the pervasiveness of Border Gateway Protocol (BGP) slow convergence in the operational Internet, nor any known effort to deploy any of the proposed solutions. In this paper, we present measurement results that identify BGP slow convergence events across the entire global routing table. Our data show that the severity of path exploration and slow convergence varies depending on where prefixes are originated and where the observations are made in the Internet routing hierarchy. In general, routers in tier-1 Internet service providers (ISPs) observe less path exploration, and hence experience shorter convergence delays, than routers in edge ASs; prefixes originated from tier-1 ISPs also experience less path exploration than those originated from edge ASs. Furthermore, our data show that the convergence time of route fail-over events is similar to that of new route announcements and is significantly shorter than that of route failures. This observation is contrary to the widely held view from previous experiments but confirms our earlier analytical results. Our effort also led to the development of a path-preference inference method based on path usage time, which can be used by future studies of BGP dynamics.

  • Oblivious Routing of Highly Variable Traffic in Service Overlays and IP Backbones

    Page(s): 459 - 472

    The emergence of new applications on the Internet like voice-over-IP, peer-to-peer, and video-on-demand has created highly dynamic and changing traffic patterns. In order to route such traffic with quality-of-service (QoS) guarantees, without requiring detection of traffic changes in real time or reconfiguring the network in response to them, a routing and bandwidth allocation scheme has recently been proposed that allows preconfiguration of the network such that all traffic patterns permissible within the network's natural ingress-egress capacity constraints can be handled in a capacity-efficient manner. The scheme routes traffic in two phases. In the first phase, incoming traffic is sent from the source to a set of intermediate nodes; in the second phase, it is sent from the intermediate nodes to the final destination. The traffic in the first phase is distributed to the intermediate nodes in predetermined proportions that depend on the intermediate nodes. In this paper, we develop linear programming formulations and a fast combinatorial algorithm for routing under the scheme so as to maximize throughput (or, minimize maximum link utilization). We compare the throughput performance of the scheme with that of the optimal scheme among the class of all schemes that are allowed to make the routing dependent on the traffic matrix. For our evaluations, we use actual Internet Service Provider topologies collected for the Rocketfuel project. We also bring out the versatility of the scheme in not only handling widely fluctuating traffic but also accommodating several widely differing networking scenarios, including i) economical Virtual Private Networks (VPNs); ii) supporting indirection in specialized service overlay models like Internet Indirection Infrastructure (i3); iii) adding QoS guarantees to services that require routing through a network-based middlebox; and iv) reducing IP-layer transit traffic and handling extreme traffic variability in IP-over-optical networks without dynamic reconfiguration of the optical layer. The two desirable properties of supporting indirection in specialized service overlay models and static optical-layer provisioning in IP-over-optical networks are not present in other approaches for routing variable traffic, such as direct source-destination routing along fixed paths.
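
    A hedged sketch of the two-phase load calculation under the usual ingress-egress bounds, with R_i limiting what node i can originate and C_j what node j can receive (names and data layout are mine): the demand each ordered node pair must carry is fixed by the split ratios alone, independent of the actual traffic matrix, which is what makes preconfiguration possible.

        def two_phase_demands(out_cap, in_cap, alpha):
            """Worst-case traffic on each ordered node pair under
            two-phase routing. Node i sends the fraction alpha[k] of
            everything it originates to intermediate k (phase 1);
            as an intermediate, i forwards traffic on to its real
            destination (phase 2). out_cap[i] and in_cap[j] are the
            ingress/egress bounds R_i and C_j."""
            n = len(out_cap)
            d = [[0.0] * n for _ in range(n)]
            for i in range(n):
                for k in range(n):
                    # phase-1 share from i to k, plus phase-2 share
                    # from i (as intermediate) to destination k:
                    d[i][k] = alpha[k] * out_cap[i] + alpha[i] * in_cap[k]
            return d

    Sizing links to carry these pair demands guarantees that every traffic matrix consistent with the bounds can be routed; choosing the split ratios alpha is the kind of decision the paper's linear programs optimize.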

  • Multiple Routing Configurations for Fast IP Network Recovery

    Page(s): 473 - 486

    As the Internet takes an increasingly central role in our communications infrastructure, the slow convergence of routing protocols after a network failure becomes a growing problem. To assure fast recovery from link and node failures in IP networks, we present a new recovery scheme called Multiple Routing Configurations (MRC). Our proposed scheme guarantees recovery in all single-failure scenarios, using a single mechanism to handle both link and node failures, and without knowing the root cause of the failure. MRC is strictly connectionless and assumes only destination-based hop-by-hop forwarding. MRC is based on keeping additional routing information in the routers, and it allows packet forwarding to continue on an alternative output link immediately after the detection of a failure. It can be implemented with only minor changes to existing solutions. In this paper, we present MRC and analyze its performance with respect to scalability, backup path lengths, and load distribution after a failure. We also show how an estimate of the traffic demands in the network can be used to improve the distribution of the recovered traffic, and thus reduce the chances of congestion when MRC is used.

  • A Traffic Engineering Approach for Placement and Selection of Network Services

    Page(s): 487 - 500

    Network services are provided by means of dedicated service gateways, through which traffic flows are directed. Existing work on service gateway placement has been primarily focused on minimizing the length of the routes through these gateways. Only limited attention has been paid to the effect these routes have on overall network performance. We propose a novel approach for the service placement problem, which takes into account traffic engineering considerations. Rather than trying to minimize the length of the traffic flow routes, we take advantage of these routes in order to enhance the overall network performance. We divide the problem into two subproblems: finding the best location for each service gateway, and selecting the best service gateway for each flow. We propose efficient algorithms for both problems and study their performance. Our main contribution is showing that placement and selection of network services can be used as effective tools for traffic engineering.

  • A Directory Service for Perspective Access Networks

    Page(s): 501 - 514

    Network fragmentation occurs when the accessibility of a network-based resource to an observer is a function of how the observer is connected to the network. In the context of the Internet, network fragmentation is well known and occurs in many situations, including an increasing preponderance of network address translation, firewalls, and virtual private networks. Recently, however, new threats to Internet consistency have received media attention. Alternative namespaces have emerged as the result of formal objections to the process by which Internet names and addresses are provisioned. In addition, various governments and service providers around the world have deployed network technology that (accidentally or intentionally) restricts access to certain Internet content. Combined with the aforementioned sources of fragmentation, these new concerns provide ample motivation for a network that allows users to specify not only the network location of Internet resources they want to view but also the perspectives from which they want to view them. Our vision of a perspective access network (PAN) is a peer-to-peer overlay network that incorporates routing and directory services that allow network perspective-sharing and nonhierarchical organization of the Internet. In this paper, we present the design, implementation, and evaluation of a directory service for such networks. We demonstrate its feasibility and efficacy using measurements from a test deployment on PlanetLab.

  • Capacity of Multichannel Wireless Networks Under the Protocol Model

    Page(s): 515 - 527

    This paper studies the capacity of an n-node static wireless network with c channels and m radio interfaces per node under the protocol model of interference. In their seminal work, Gupta and Kumar determined the capacity of a single-channel network (c = 1, m = 1). Their results are also applicable to multichannel networks provided each node has one interface per channel (m = c). However, in practice, it is often infeasible to equip each node with one interface per channel. Motivated by this observation, we establish the capacity of general multichannel networks (m ≤ c). Equipping each node with fewer interfaces than channels in general reduces network capacity. However, we show that one important exception is a random network with up to O(log n) channels, where there is no capacity degradation even if each node has only one interface. Our initial analysis assumes that the interfaces are capable of switching channels instantaneously, but we later extend the analysis to account for the interface switching delays seen in practice. Furthermore, some multichannel protocols proposed so far rarely require interfaces to switch; therefore, we briefly study the capacity with fixed interfaces as well.
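
    For context, the classic single-channel baseline referenced above (stated here from general knowledge of Gupta and Kumar's result, not from this paper): in a random network under the protocol model, each node can achieve a throughput of

        \lambda(n) = \Theta\!\left( \frac{W}{\sqrt{n \log n}} \right)

    for a shared channel of W bits per second. The result above says this scaling is preserved with a single interface per node as long as the channel count is O(log n).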

  • On the Feasibility of the Link Abstraction in Wireless Mesh Networks

    Page(s): 528 - 541

    Outdoor community mesh networks based on IEEE 802.11 have seen tremendous growth in the recent past. The current understanding is that wireless link performance in these settings is inherently unpredictable due to multipath delay spread. Consequently, researchers have focused on developing intelligent routing techniques to achieve the best possible performance. In this paper, we are specifically interested in mesh networks in rural locations. We first present detailed measurements to show that the PHY layer in these settings is in fact stable and predictable: there is a strong correlation between the error rate and the received signal strength. We show that interference, and not multipath fading, is the primary cause of unpredictable performance. This is in sharp contrast with current widespread knowledge from prior studies. Furthermore, we corroborate our view with a fresh analysis of the data presented in those prior studies. While our initial measurements focus on 802.11b, we then examine two other PHY technologies operating in the 2.4-GHz ISM band, 802.11g and 802.15.4, which show similar results. Based on our results, we argue that outdoor rural mesh networks can indeed be built with the link abstraction being valid. This has several design implications, including at the MAC and routing layers, and opens up a fresh perspective on a wide range of technical issues in this domain.

  • Orthogonal Rendezvous Routing Protocol for Wireless Mesh Networks

    Page(s): 542 - 555

    Routing in multi-hop wireless networks involves the indirection from a persistent name (or ID) to a locator. Concepts such as coordinate space embedding help reduce the number and dynamism of the bindings and state needed for this indirection. Routing protocols that do not use such concepts often tend to flood packets during route discovery or dissemination, and hence have limited scalability. In this paper, we introduce the Orthogonal Rendezvous Routing Protocol (ORRP) for meshed wireless networks. ORRP is a lightweight but scalable routing protocol that utilizes directional communications (such as directional antennas or free-space-optical transceivers) to relax information requirements such as coordinate space embedding and node localization. The ORRP source and ORRP destination send route discovery and route dissemination packets, respectively, in locally chosen orthogonal directions. Connectivity happens when these paths intersect (i.e., rendezvous). We show that ORRP achieves connectivity with high probability even in sparse networks with voids. ORRP scales well without imposing DHT-like graph structures (e.g., trees, rings, tori). The total state information required is O(N^(3/2)) for N-node networks, and the state is uniformly distributed. ORRP does not resort to flooding in either route discovery or dissemination. The price paid by ORRP is suboptimality in terms of path stretch compared to the shortest path; however, we characterize the average penalty and find that it is not severe.

  • Pareto-Efficient and Goal-Driven Power Control in Wireless Networks: A Game-Theoretic Approach With a Novel Pricing Scheme

    Page(s): 556 - 569

    A Pareto-efficient, goal-driven, and distributed power control scheme for wireless networks is presented. We use a noncooperative game-theoretic approach to propose a novel pricing scheme that is linearly proportional to the signal-to-interference ratio (SIR), and we analytically show that with a proper choice of prices (proportionality constants), the outcome of the noncooperative power control game is a unique and Pareto-efficient Nash equilibrium (NE). This can be utilized for constrained power control to satisfy specific goals (such as fairness, aggregate throughput optimization, or trading off between these two goals). For each of the above goals, the dynamic price for each user is also obtained analytically. In a centralized (base station) price setting, users should inform the base station of their path gains and their maximum transmit powers. In a distributed price setting, for each goal, an algorithm for users to update their transmit powers is also presented that converges to a unique fixed point in which the corresponding goal is satisfied. Simulation results confirm our analytical developments.

  • An Optimal Wake-Up Scheduling Algorithm for Minimizing Energy Consumption While Limiting Maximum Delay in a Mesh Sensor Network

    Page(s): 570 - 581

    This paper presents an algorithm for maximizing the lifetime of a sensor network while guaranteeing an upper bound on the end-to-end delay. We prove that the proposed algorithm is optimal and requires only simple computing operations that can be implemented by simple devices. To the best of our knowledge, this is the first paper to propose a sensor wake-up frequency that depends on the sensor's location in the routing paths. Using simulations, we show that the proposed algorithm significantly increases the lifetime of the network while guaranteeing the bound on the end-to-end delay.

  • Low-Energy Fault-Tolerant Bounded-Hop Broadcast in Wireless Networks

    Page(s): 582 - 590

    This paper studies asymmetric power assignments in wireless ad hoc networks. The temporary, unfixed physical topology of a wireless ad hoc network is determined by the distribution of the wireless nodes as well as the transmission power (range) assignment of each node. We consider the problem of bounded-hop broadcast under a k-fault resilience criterion for linear and planar layouts of nodes. The topology that results from our power assignment allows a broadcast operation from a wireless node r to any other node in at most h hops and is k-fault resistant. We develop simple approximation algorithms for the two cases and obtain the following approximation ratios: O(k) for the linear case; for the planar case, we first prove a factor of O(k^3), which a finer analysis then decreases to O(k^2). Finally, we show a trivial power assignment with a cost O(h) times the optimum. To the best of our knowledge, these are the first nontrivial results for this problem.

  • Aggregation With Fragment Retransmission for Very High-Speed WLANs

    Page(s): 591 - 604

    In upcoming very high-speed wireless LANs (WLANs), the physical (PHY) layer rate may reach 600 Mbps. To achieve high efficiency at the medium access control (MAC) layer, we identify fundamental properties that must be satisfied by any CSMA/CA-based MAC layer and develop a novel scheme called aggregation with fragment retransmission (AFR) that exhibits these properties. In the AFR scheme, multiple packets are aggregated into and transmitted in a single large frame. If errors happen during the transmission, only the corrupted fragments of the large frame are retransmitted. An analytic model is developed to evaluate the throughput and delay performance of AFR over noisy channels and to compare AFR with similar schemes in the literature. Optimal frame and fragment sizes are calculated using this model. Transmission delays are minimized by using a zero-waiting mechanism in which frames are transmitted immediately once the MAC wins a transmission opportunity. We prove that zero-waiting can achieve maximum throughput. As a complement to the theoretical analysis, we investigate by simulation the impact of AFR on the performance of realistic application traffic with diverse requirements. We have implemented the AFR scheme in the NS-2 simulator and present detailed results for TCP, VoIP, and HDTV traffic. The AFR scheme described here was developed as part of the IEEE 802.11n working group's efforts. The analysis presented is general enough to extend to the schemes proposed in the upcoming 802.11n standard, and the trends indicated in this paper should extend to any well-designed aggregation scheme.
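
    To see why aggregation is fundamental rather than a mere optimization, consider a back-of-the-envelope efficiency calculation (the overhead figure below is illustrative, not taken from the 802.11n drafts):

        def mac_efficiency(frame_bits, phy_rate_bps, overhead_s):
            """Fraction of airtime carrying payload when every
            transmission pays a fixed overhead (DIFS, backoff,
            PHY preamble, SIFS, ACK) that does not shrink as the
            PHY rate grows."""
            t_data = frame_bits / phy_rate_bps
            return t_data / (t_data + overhead_s)

        # A 1500-byte frame with ~100 us of fixed per-frame overhead:
        for rate in (54e6, 600e6):
            print("%.0f Mb/s: %.2f" % (rate / 1e6,
                  mac_efficiency(1500 * 8, rate, 100e-6)))

    At 600 Mb/s the frame itself occupies only 20 µs of airtime, so the fixed overhead caps MAC efficiency near 17%; aggregating many packets into one large frame amortizes that cost, and retransmitting only the corrupted fragments keeps the error penalty of large frames small.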

  • Evaluation of Detection Algorithms for MAC Layer Misbehavior: Theory and Experiments

    Page(s): 605 - 617

    We revisit the problem of detecting greedy behavior in the IEEE 802.11 MAC protocol by evaluating the performance of two previously proposed schemes: DOMINO and the sequential probability ratio test (SPRT). Our evaluation is carried out in four steps. We first derive a new analytical formulation of the SPRT that considers access to the wireless medium in discrete time slots. Then, we introduce an analytical model for DOMINO. As a third step, we evaluate the theoretical performance of SPRT and DOMINO with newly introduced metrics that take into account the repeated nature of the tests. This theoretical comparison provides two major insights into the problem: it confirms the optimality of SPRT, and it motivates us to define yet another test, a nonparametric CUSUM statistic that shares the same intuition as DOMINO but gives better performance. We conclude with experimental results, confirming the correctness of our theoretical analysis and validating the introduction of the new nonparametric CUSUM statistic.
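
    For reference, a minimal generic Wald SPRT over Bernoulli observations (the paper's discrete-slot formulation differs in its details; the hypotheses and names here are mine):

        import math

        def sprt(observations, p0, p1, alpha=0.01, beta=0.01):
            """Sequential probability ratio test: H0 (compliant,
            success probability p0) versus H1 (greedy, success
            probability p1). Thresholds follow from the target
            false-alarm rate alpha and missed-detection rate beta.
            Returns 'H0', 'H1', or 'continue' if undecided."""
            upper = math.log((1 - beta) / alpha)   # decide H1 above
            lower = math.log(beta / (1 - alpha))   # decide H0 below
            llr = 0.0
            for x in observations:                 # each x is 0 or 1
                llr += math.log((p1 if x else 1 - p1) /
                                (p0 if x else 1 - p0))
                if llr >= upper:
                    return "H1"
                if llr <= lower:
                    return "H0"
            return "continue"

    The optimality the comparison above confirms is Wald's classic result: among all tests achieving the same error rates, the SPRT minimizes the expected number of observations.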

  • Normalized Queueing Delay: Congestion Control Jointly Utilizing Delay and Marking

    Page(s): 618 - 631

    Depending upon the type of feedback that is primarily used as a congestion measure, congestion control methods can be generally classified into two categories: marking/loss-based or delay-based. While both marking and queueing delay provide information about the congestion state of a network, they have been largely treated with separate control strategies. In this paper, we propose the notion of the normalized queueing delay, which serves as a congestion measure by combining both delay and marking information. Utilizing normalized queueing delay (NQD), we propose an approach to congestion control that allows a source to scale its rate dynamically to prevailing network conditions through the use of a time-variant set-point. In ns-2 simulation studies, an NQD-enabled FAST TCP demonstrates a significant link utilization improvement over FAST TCP under certain conditions. In addition, we propose another NQD-based controller D + M TCP (Delay+Marking TCP) that achieves quick convergence to fair and stable rates with nearly full link utilization. Therefore, NQD is a suitable candidate as a congestion measure for practical congestion control.

  • Minimizing Internal Speedup for Performance Guaranteed Switches With Optical Fabrics

    Page(s): 632 - 645

    We consider traffic scheduling in an N × N packet switch with an optical switch fabric, where the fabric requires a reconfiguration overhead to change its switch configurations. To provide 100% throughput with bounded packet delay, a speedup in the switch fabric is necessary to compensate for both the reconfiguration overhead and the inefficiency of the scheduling algorithm. In order to reduce the implementation cost of the switch, we aim at minimizing the required speedup for a given packet delay bound. Conventional Birkhoff-von Neumann traffic matrix decomposition requires N^2 - 2N + 2 configurations in the schedule, which leads to a very large packet delay bound. The existing DOUBLE algorithm requires a fixed number of only 2N configurations, but it cannot adjust its schedule to different switch parameters. In this paper, we first design a generic approach to decompose a traffic matrix into an arbitrary number N_S (N^2 - 2N + 2 > N_S > N) of configurations. Then, by taking the reconfiguration overhead into account, we formulate a speedup function. Minimizing the speedup function results in an efficient scheduling algorithm, ADAPT. We further observe that the algorithmic efficiency of ADAPT can be improved by better utilizing the switch bandwidth, which leads to a more efficient algorithm, SRF (scheduling residue first). ADAPT and SRF can automatically adjust the number of configurations in a schedule according to different switch parameters. We show that both algorithms outperform the existing DOUBLE algorithm.
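
    A hedged sketch of the greedy Birkhoff-von Neumann-style decomposition discussed above, for a doubly stochastic rate matrix (helper names are mine; the point of ADAPT and SRF is precisely to get away with far fewer configurations than this produces):

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def bvn_schedule(D, tol=1e-9):
            """Greedily peel weighted permutation matrices (switch
            configurations) off a doubly stochastic matrix D.
            Minimizing -log of the entries keeps the matching on
            D's positive support, which Birkhoff's theorem
            guarantees exists; up to N^2 - 2N + 2 terms result."""
            D = np.array(D, dtype=float)
            schedule = []
            while D.max() > tol:
                cost = np.where(D > tol,
                                -np.log(np.maximum(D, tol)), 1e9)
                rows, cols = linear_sum_assignment(cost)
                phi = D[rows, cols].min()   # holding fraction
                P = np.zeros_like(D)
                P[rows, cols] = 1.0
                schedule.append((phi, P))
                D -= phi * P                # remove served traffic
            return schedule

    Each configuration must be held long enough to pay the reconfiguration overhead, so a schedule with close to N^2 configurations forces either a large delay bound or a large speedup; this is what motivates reducing the number of configurations.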

  • Complexity of Wavelength Assignment in Optical Network Optimization

    Page(s): 646 - 657

    We study the complexity of a set of design problems for optical networks. Under wavelength division multiplexing (WDM) technology, demands sharing a common fiber are transported on distinct wavelengths. Multiple fibers may be deployed on a physical link. Our basic goal is to design networks of minimum cost, minimum congestion, and maximum throughput. This translates to three variants in the design objectives: 1) MIN-SUMFIBER: minimizing the total cost of fibers deployed to carry all demands; 2) MIN-MAXFIBER: minimizing the maximum number of fibers per link to carry all demands; and 3) MAX-THROUGHPUT: maximizing the carried demands using a given set of fibers. We also have two variants in the design constraints: 1) CHOOSEROUTE, where we need to specify both a routing path and a wavelength for each demand; and 2) FIXEDROUTE, where we are given demand routes and need to specify wavelengths only. The FIXEDROUTE variant allows us to study wavelength assignment in isolation. Combining these variants, we have six design problems. Previously, we have shown that general instances of the problems MIN-SUMFIBER-CHOOSEROUTE and MIN-MAXFIBER-FIXEDROUTE have no constant-approximation algorithms. In this paper, we prove that a similar statement holds for all four other problems. Our main result shows that MIN-SUMFIBER-FIXEDROUTE cannot be approximated within any constant factor unless NP-hard problems have efficient algorithms. This, together with the previous hardness result for MIN-MAXFIBER-FIXEDROUTE, shows that the problem of wavelength assignment is inherently hard by itself. We also study the complexity of problems that arise when multiple demands can be time-multiplexed onto a single wavelength (as in time-domain wavelength interleaved networking (TWIN) networks) and when wavelength converters can be placed along the path of a demand.

  • An Analytic Approach to Efficiently Computing Call Blocking Probabilities for Multiclass WDM Networks

    Page(s): 658 - 670

    For all-optical WDM networks that provide multiple classes of service, we present a methodology for computing approximate blocking probabilities under dynamic routing and wavelength assignment policies. Each service class is characterized by its resource requirements (number of wavelengths needed for a call) and expected call holding time (or subscription period). Under the wavelength continuity constraint on lightpaths and a loss network formulation, we develop fixed-point approximation algorithms that compute approximate blocking probabilities for all classes. We then apply them to the random wavelength assignment policy under the following wavelength routing policies: fixed routing (FR), least loaded routing (LLR), and fixed alternate routing (FAR). Simulations on different network topologies and routing policies demonstrate that the simulated blocking probabilities closely match those computed by our methods under a range of multiclass call traffic loads.
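
    A minimal single-class sketch of the reduced-load (Erlang fixed-point) idea that underlies such methods; the paper builds multiclass traffic, wavelength continuity, and the FR/LLR/FAR policies on top of it (data layout and names are mine):

        def erlang_b(load, c):
            """Erlang B blocking probability for an offered load
            (in Erlangs) on c circuits, via the stable recursion."""
            b = 1.0
            for m in range(1, c + 1):
                b = load * b / (m + load * b)
            return b

        def fixed_point_blocking(routes, link_caps, iters=100):
            """Treat each link as an independent Erlang B system.
            Each route's offered load is thinned by the blocking it
            sees on its other links; iterating to a fixed point
            yields approximate per-link blocking probabilities.
            routes: list of (offered_load, [link_ids])."""
            B = {l: 0.0 for l in link_caps}
            for _ in range(iters):
                for l, cap in link_caps.items():
                    load = 0.0
                    for a, path in routes:
                        if l in path:
                            thin = 1.0
                            for m in path:
                                if m != l:
                                    thin *= 1.0 - B[m]
                            load += a * thin
                    B[l] = erlang_b(load, cap)
            return B

    A call is accepted only if every link on its route has a free circuit, so under the standard link-independence assumption its approximate blocking probability is 1 minus the product of (1 - B[l]) over the route's links.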


Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.


Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign