Date: 19-25 April 2009

Results 1 - 25 of 389
  • [Title page]

    Page(s): i
    PDF (90 KB) | Freely Available from IEEE
  • [Copyright notice]

    Page(s): ii
    PDF (92 KB) | Freely Available from IEEE
  • General Chairs message

    Page(s): iii - v
    PDF (1960 KB) | Freely Available from IEEE
  • Message from Technical Co-Chairs

    Page(s): vi - viii
    PDF (613 KB) | Freely Available from IEEE
  • Message from the Miniconference Chairs

    Page(s): ix - x
    PDF (1093 KB) | Freely Available from IEEE
  • Organizing Committee

    Page(s): xi
    PDF (122 KB) | Freely Available from IEEE
  • Technical Program Committee

    Page(s): xii - xx
    PDF (126 KB) | Freely Available from IEEE
  • TPC Committee

    Page(s): xxi
    PDF (92 KB) | Freely Available from IEEE
  • External reviewers

    Page(s): xxii - xxix
    PDF (180 KB) | Freely Available from IEEE
  • Sponsors

    Page(s): xxx - xxxiv
    PDF (1113 KB) | Freely Available from IEEE
  • Table of contents

    Page(s): xxxv - lii
    PDF (198 KB) | Freely Available from IEEE
  • RAPID: Shrinking the Congestion-Control Timescale

    Page(s): 1 - 9
    PDF (577 KB) | HTML

    TCP congestion control is fairly inefficient in achieving high throughput in high-speed and dynamic-bandwidth environments. The main culprit is the slow bandwidth-search process used by TCP, which may take up to several thousand round-trip times (RTTs) to search for and acquire the end-to-end spare bandwidth. Even the recently proposed "high-speed" transport protocols may take hundreds of RTTs for this. In this paper, we design a new approach for congestion control that allows TCP connections to boldly search for, and adapt to, the available bandwidth within a single RTT. Our approach relies on carefully orchestrated packet sending times, and estimates the available bandwidth based on the delays experienced by these packets. We instantiate our new protocol, referred to as RAPID, using mechanisms that promote efficiency, queue-friendliness, and fairness. Our experimental evaluations on gigabit networks indicate that RAPID: (i) converges to an updated value of bandwidth within 1-4 RTTs; (ii) helps maintain fairly small queues; (iii) has negligible impact on regular TCP traffic; and (iv) exhibits excellent intra-protocol fairness among co-existing RAPID transfers. The rate-based design allows RAPID to be truly RTT-fair.

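    The one-RTT probing idea in the entry above can be illustrated with a minimal sketch (ours, not the authors' implementation): probe packets are sent at a sweep of increasing rates, and the available bandwidth is taken as the highest probed rate that did not cause one-way delays to rise. The exponential rate sweep and the simple delay-trend test below are simplifying assumptions.

```python
def estimate_avail_bw(probe_rates, one_way_delays):
    """Return the highest probed rate that did not yet cause one-way
    delays to rise (a rising delay means the probe exceeded the spare
    capacity and queued behind itself)."""
    avail = probe_rates[0]
    for i in range(1, len(probe_rates)):
        if one_way_delays[i] > one_way_delays[i - 1]:
            break
        avail = probe_rates[i]
    return avail

# One probe stream sweeps rates exponentially within a single RTT;
# here the spare bandwidth runs out between 400 and 800 Mbps.
rates = [100, 200, 400, 800, 1600]       # Mbps
delays = [5.0, 5.0, 5.0, 6.3, 9.8]       # ms, one per probe packet
print(estimate_avail_bw(rates, delays))  # -> 400
```
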
  • Congestion Control using Efficient Explicit Feedback

    Page(s): 10 - 18
    PDF (344 KB) | HTML

    This paper proposes a framework for congestion control, called binary marking congestion control (BMCC), for high bandwidth-delay product networks. The basic components of BMCC are (i) a packet marking scheme for obtaining high-resolution congestion estimates using the bits already available in the IP header for explicit congestion notification (ECN), and (ii) a set of load-dependent control laws that use these congestion estimates to achieve efficient and fair bandwidth allocations on high bandwidth-delay product networks, while maintaining a low persistent queue length and a negligible packet loss rate. We present analytical models that predict and provide insights into the convergence properties of the protocol. Using extensive packet-level simulations, we assess the efficacy of BMCC and perform comparisons with several proposed schemes. BMCC outperforms VCP, MLCP, XCP, and SACK+RED/ECN, and in some cases RCP, in terms of average flow completion times for typical Internet flow sizes.

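    How a low-bit marking scheme can convey a high-resolution congestion estimate is sketched below. The probabilistic ECT(0)/ECT(1) encoding and the averaging window are simplifications of ours; BMCC's exact encoding and its load-dependent control laws are specified in the paper.

```python
import random

def mark_packet(load):
    """Router: re-mark the two-bit ECN field so that the frequency of
    ECT(1) codepoints encodes the measured load (a simplification)."""
    if load >= 1.0:
        return "CE"  # overload: hard congestion signal
    return "ECT(1)" if random.random() < load else "ECT(0)"

def estimate_load(marks):
    """Receiver: average marks over a window to recover a fine-grained
    load estimate from coarse per-packet observations."""
    if "CE" in marks:
        return 1.0
    return sum(m == "ECT(1)" for m in marks) / len(marks)

random.seed(0)
window = [mark_packet(0.6) for _ in range(1000)]
print(round(estimate_load(window), 2))  # ~0.6
```
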
  • Stochastic Analysis of Scalable TCP

    Page(s): 19 - 27
    PDF (257 KB) | HTML

    The unsatisfactory performance of TCP in high-speed wide area networks has led to several TCP variants, such as H-TCP, FAST TCP, Scalable TCP, Compound TCP, and CUBIC, all aimed at speeding up the window update algorithm. In this paper we focus on Scalable TCP (STCP), a TCP version which belongs to the class of Multiplicative Increase Multiplicative Decrease (MIMD) congestion protocols. We present a new stochastic model for the evolution of the instantaneous throughput of a single STCP flow in the Congestion Avoidance phase, under the assumption of a constant per-packet loss probability. This model allows one to derive closed-form expressions for the key stationary distributions associated with this protocol: we characterize the throughput obtained by the flow, the time separating Multiplicative Decrease events, the number of bits transmitted over certain time intervals, and the size of the rate decrease. Several applications leveraging these closed-form expressions are considered, with particular emphasis on QoS guarantees in the context of dimensioning. A set of ns-2 simulations highlights the model's accuracy.

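    The MIMD rule that defines STCP is compact enough to state directly; the sketch below uses the constants from the original Scalable TCP proposal (a = 0.01 per ACK, b = 1/8 on loss):

```python
def stcp_on_ack(cwnd, a=0.01):
    """Multiplicative increase: each ACK grows the window by a fixed
    fraction, i.e. roughly cwnd -> cwnd * (1 + a) per RTT."""
    return cwnd + a

def stcp_on_loss(cwnd, b=0.125):
    """Multiplicative decrease on a congestion event."""
    return cwnd * (1 - b)

# Toy trace: the recovery time after a loss is independent of the
# window size -- the property that makes STCP "scalable".
cwnd = stcp_on_loss(1000.0)        # -> 875.0
rtts = 0
while cwnd < 1000.0:
    for _ in range(int(cwnd)):     # roughly one ACK per packet per RTT
        cwnd = stcp_on_ack(cwnd)
    rtts += 1
print(rtts)                        # ~14 RTTs, at any window scale
```
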
  • Is the "Law of the Jungle" Sustainable for the Internet?

    Page(s): 28 - 36
    PDF (210 KB) | HTML

    In this paper we seek to characterize the behavior of the Internet in the absence of congestion control. More specifically, we assume all sources transmit at their maximum rate and recover from packet loss by the use of some ideal erasure coding scheme. We estimate the efficiency of resource utilization in terms of the maximum load the network can sustain, accounting for the random nature of traffic. Contrary to common belief, there is generally no congestion collapse. Efficiency remains higher than 90% for most network topologies as long as maximum source rates are one to two orders of magnitude smaller than link capacity. Moreover, a simple fair drop policy enforcing fair sharing at the flow level is sufficient to guarantee 100% efficiency in all cases.

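    The fair drop policy credited above with restoring 100% efficiency enforces per-flow fair sharing at the bottleneck. As a point of reference, the sketch below computes the max-min fair rates such a policy would approximate (a textbook water-filling computation, not the paper's model):

```python
def max_min_fair(demands, capacity):
    """Water-filling: repeatedly offer every unsatisfied flow an equal
    share; flows demanding less than the share keep their demand."""
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))
    cap = capacity
    while active:
        share = cap / len(active)
        done = [i for i in active if demands[i] <= share]
        if not done:                 # everyone is rate-limited equally
            for i in active:
                alloc[i] = share
            break
        for i in done:
            alloc[i] = demands[i]
            cap -= demands[i]
        active = [i for i in active if i not in done]
    return alloc

print(max_min_fair([0.1, 0.4, 2.0], 1.0))  # -> [0.1, 0.4, 0.5]
```
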
  • Multirate Anypath Routing in Wireless Mesh Networks

    Page(s): 37 - 45
    PDF (210 KB) | HTML

    In this paper, we present a new routing paradigm that generalizes opportunistic routing in wireless mesh networks. In multirate anypath routing, each node uses both a set of next hops and a selected transmission rate to reach a destination. Using this rate, a packet is broadcast to the nodes in the set, and one of them forwards the packet on to the destination. To date, there is no theory capable of jointly optimizing both the set of next hops and the transmission rate used by each node. We bridge this gap by introducing a polynomial-time algorithm for this problem and proving its optimality. The proposed algorithm has the same running time as regular shortest-path algorithms and is therefore suitable for deployment in link-state routing protocols. We conducted experiments in an 802.11b testbed network, and our results show that multirate anypath routing performs on average 80% better, and up to 6.4 times better, than anypath routing with a fixed rate of 11 Mbps. If the rate is fixed at 1 Mbps instead, performance improves by up to one order of magnitude.

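    The flavor of such an algorithm can be conveyed with a single-rate sketch in the spirit of Dijkstra: nodes are settled in increasing order of expected cost, and a node's forwarding set is chosen among already-settled neighbors. This is our simplified rendering, not the authors' multirate algorithm, which additionally optimizes over transmission rates (conceptually, run the relaxation once per rate and keep the cheapest result).

```python
import heapq

def anypath_dijkstra(links, dst):
    """Shortest-anypath-first sketch (single rate): settle nodes in
    increasing expected transmission count; a node's forwarding set is
    a cheapest-first prefix of its already-settled neighbors.
    links[u][v] = delivery probability of u's broadcast reaching v."""
    dist = {dst: 0.0}
    heap = [(0.0, dst)]
    settled = set()
    while heap:
        _, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        for v, probs in links.items():
            if v in settled:
                continue
            # Candidate relays for v: settled neighbors, cheapest first.
            cand = sorted((n for n in probs if n in settled), key=dist.get)
            miss, onward, best = 1.0, 0.0, float("inf")
            for n in cand:
                onward += miss * probs[n] * dist[n]  # n is best receiver
                miss *= 1.0 - probs[n]
                # Cost with this prefix as the forwarding set: expected
                # broadcasts 1/(1-miss) plus the expected onward cost.
                best = min(best, (1.0 + onward) / (1.0 - miss))
            if best < dist.get(v, float("inf")):
                dist[v] = best
                heapq.heappush(heap, (best, v))
    return dist

links = {"s": {"a": 0.5, "b": 0.5}, "a": {"d": 0.8}, "b": {"d": 0.8}}
print(anypath_dijkstra(links, "d"))
# s costs ~2.58 expected transmissions using relays {a, b}, vs 3.25
# through a single relay -- the opportunistic gain.
```
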
  • Minimizing End-to-End Delay: A Novel Routing Metric for Multi-Radio Wireless Mesh Networks

    Page(s): 46 - 54
    PDF (394 KB) | HTML

    This paper studies how to select a path with the minimum cost in terms of expected end-to-end delay (EED) in a multi-radio wireless mesh network. Unlike previous efforts, the new EED metric takes the queuing delay into account, since the end-to-end delay consists not only of the transmission delay over the wireless links but also of the queuing delay in the buffer. In addition to minimizing the end-to-end delay, the EED metric embodies the concept of load balancing. We develop EED-based routing protocols for both single-channel and multi-channel wireless mesh networks. In particular, for the multi-radio, multi-channel case, we develop a generic iterative approach to calculate a multi-radio achievable bandwidth (MRAB) for a path, taking the impacts of inter-/intra-flow interference and space/channel diversity into account. The MRAB is then integrated with EED to form the metric of weighted end-to-end delay (WEED). As a byproduct of MRAB, a channel diversity coefficient can be defined to quantitatively represent the channel diversity along a given path. Both numerical analysis and simulation studies are presented to validate the performance of the routing protocol based on the EED/WEED metric, with comparison to some well-known routing metrics.

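    For concreteness, a delay metric of this kind has an additive per-link structure along a path P; the notation below (Q_l for expected queuing delay, ETX_l for expected transmission count, s for packet size, B_l for link rate) is ours, not necessarily the paper's:

```latex
\mathrm{EED}(P) \;=\; \sum_{l \in P} \Big( Q_l \;+\; \mathrm{ETX}_l \cdot \tfrac{s}{B_l} \Big)
```
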
  • On Leveraging Partial Paths in Partially-Connected Networks

    Page(s): 55 - 63
    PDF (383 KB) | HTML

    Mobile wireless network research focuses on scenarios at the extremes of the network connectivity continuum, where the probability of all nodes being connected is either close to unity, assuming connected paths between all nodes (mobile ad hoc networks), or close to zero, assuming no multi-hop paths exist at all (delay-tolerant networks). In this paper, we argue that a sizable fraction of networks lies between these extremes and is characterized by the existence of partial paths, i.e., multi-hop path segments that allow forwarding data closer to the destination even when no end-to-end path is available. A fundamental issue in such networks is dealing with disruptions of end-to-end paths. Under a stochastic model, we compare the performance of established end-to-end retransmission (which ignores partial paths) against a forwarding mechanism that leverages partial paths to move data closer to the destination even during disruption periods. Perhaps surprisingly, the alternative mechanism is not necessarily superior. However, under a stochastic monotonicity condition between current and future path lengths, which we demonstrate to hold in typical network models, we prove the superiority of the alternative mechanism in stochastic dominance terms. We believe that this study could serve as a foundation for designing more efficient data transfer protocols for partially-connected networks, which could help reduce the gap between applications that can be supported over disconnected networks and those requiring full connectivity.

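    A toy model makes the trade-off tangible (ours, far simpler than the paper's stochastic framework, and memoryless, so the monotonicity condition holds by construction): in each slot the next hop is up with probability p; end-to-end retransmission restarts from the source on a disruption, while partial-path forwarding parks the data where it is.

```python
import random

def delivery_time(hops, p_up, partial_paths):
    """Slotted toy model: each slot the next hop is usable w.p. p_up.
    Without partial paths, a disruption forces an end-to-end restart;
    with them, the data simply waits at the intermediate node."""
    pos = t = 0
    while pos < hops:
        t += 1
        if random.random() < p_up:
            pos += 1
        elif not partial_paths:
            pos = 0  # end-to-end retransmission from the source
    return t

random.seed(1)
for mode in (False, True):
    runs = [delivery_time(6, 0.7, mode) for _ in range(20_000)]
    print("partial paths" if mode else "end-to-end  ",
          round(sum(runs) / len(runs), 1))
# The partial-path variant delivers markedly faster in this regime.
```
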
  • Routing Metric Designs for Greedy, Face and Combined-Greedy-Face Routing

    Page(s): 64 - 72
    PDF (227 KB) | HTML

    Different geographic routing protocols have different requirements on routing metric design to ensure proper operation. Combining the wrong type of routing metric with a geographic routing protocol may produce unexpected results, such as geographic routing loops and unreachable nodes. In this paper, we propose a novel routing algebra system to investigate the compatibilities between routing metrics and three geographic routing protocols: greedy, face, and combined-greedy-face routing. Four important algebraic properties, named odd symmetry, transitivity, source independence, and local minimum freeness, are defined in this algebra system. Based on these algebraic properties, the necessary and sufficient conditions for loop-free and delivery-guaranteed routing are derived when greedy, face, and combined-greedy-face routing serve as packet forwarding schemes or as path discovery algorithms, respectively. Our work provides essential criteria for evaluating and designing geographic routing protocols.

  • Queuing Network Models for Multi-Channel P2P Live Streaming Systems

    Page(s): 73 - 81
    PDF (206 KB) | HTML

    In recent years there have been several large-scale deployments of P2P live video systems. Existing and future P2P live video systems will offer a large number of channels, with users switching frequently among the channels. In this paper, we develop infinite-server queueing network models to analytically study the performance of multi-channel P2P streaming systems. Our models capture essential aspects of multi-channel video systems, including peer channel switching, peer churn, peer bandwidth heterogeneity, and Zipf-like channel popularity. We apply the queueing network models to two P2P streaming designs: the isolated channel design (ISO) and the View-Upload Decoupling (VUD) design. For both of these designs, we develop efficient algorithms to calculate critical performance measures, develop an asymptotic theory to provide closed-form results as the number of peers approaches infinity, and derive near-optimal provisioning rules for assigning peers to groups in VUD. We use the analytical results to compare VUD with ISO, and show that the VUD design generally performs significantly better, particularly for systems with heterogeneous channel popularities and streaming rates.

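    One modeled ingredient is easy to make concrete: Zipf-like channel popularity and the resulting skew in channel membership. The sketch below (our illustration; the paper's queueing network models go much further) draws a stationary snapshot of peers across channels:

```python
import collections
import random

def zipf_pmf(n_channels, alpha=1.0):
    """Channel k is watched with probability proportional to 1/k^alpha
    (alpha is an assumed skew parameter)."""
    w = [1.0 / k ** alpha for k in range(1, n_channels + 1)]
    s = sum(w)
    return [x / s for x in w]

# Stationary snapshot: each peer independently watches a channel drawn
# from the popularity distribution -- the "infinite-server" view in
# which per-channel occupancies fluctuate independently.
random.seed(7)
pmf = zipf_pmf(10)
channels = random.choices(range(1, 11), weights=pmf, k=10_000)
print(collections.Counter(channels).most_common(3))
# Channel 1 alone draws ~34% of peers (1/H_10 of the mass).
```
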
  • Distilling Superior Peers in Large-Scale P2P Streaming Systems

    Page(s): 82 - 90
    PDF (786 KB) | HTML

    In large-scale peer-to-peer (P2P) live streaming systems with a limited supply of server bandwidth, increasing the amount of upload bandwidth supplied by peers becomes critically important to the "well-being" of streaming sessions in live channels. Intuitively, two types of peers are preferable to keep in a live session: peers that contribute a higher percentage of their upload capacities, and peers that remain stable for a long period of time. The fundamental challenge is to identify, and satisfy the needs of, these types of "superior" peers in a live session, and to achieve this goal with minimal disruption to the traditional pull-based protocols that real-world live streaming systems use. In this paper, we conduct a comprehensive and in-depth statistical analysis based on more than 130 GB of runtime traces from hundreds of streaming channels in a large-scale real-world live streaming system, UUSee (among the top three commercial systems in popularity in mainland China). Our objective is to discover critical factors that may influence the longevity and bandwidth contribution ratio of peers, using survival analysis techniques such as the Cox proportional hazards model and the Mantel-Haenszel test. Once these influential factors are found, they can be used to form a superiority index that distills superior peers from the general peer population. The index can then be used to favor superior peers; we simulate the use of a simple ranking mechanism in a natural selection algorithm to show the effectiveness of the index, based on a replay of real-world traces from UUSee.

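    How such an index might be applied is sketched below. The factor names and weights are purely illustrative placeholders of ours, not the factors the trace analysis actually identifies:

```python
def superiority_index(peer, weights):
    """Hypothetical index: a weighted sum of influential factors
    (field names and weights are illustrative, not from the paper)."""
    return sum(weights[f] * peer[f] for f in weights)

peers = [
    {"id": "a", "uptime_hours": 4.0, "contrib_ratio": 0.9},
    {"id": "b", "uptime_hours": 9.0, "contrib_ratio": 0.2},
    {"id": "c", "uptime_hours": 1.0, "contrib_ratio": 0.5},
]
w = {"uptime_hours": 0.1, "contrib_ratio": 1.0}

# Rank peers so that neighbor selection can favor superior ones first.
ranked = sorted(peers, key=lambda p: superiority_index(p, w), reverse=True)
print([p["id"] for p in ranked])  # -> ['a', 'b', 'c']
```
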
  • CPM: Adaptive Video-on-Demand with Cooperative Peer Assists and Multicast

    Page(s): 91 - 99
    PDF (187 KB) | HTML

    We present CPM, a unified approach that exploits server multicast, assisted by peer downloads, to provide efficient video-on-demand (VoD) in a service provider environment. We describe our architecture and show how CPM is designed to dynamically adapt to a wide range of situations, including highly varied peer-upload bandwidths, content popularity, user request arrival patterns, video library sizes, and subscriber populations. We demonstrate the effectiveness of CPM using simulations (based on an actual implementation codebase) across the range of situations described above, and show that CPM does significantly better than traditional unicast, different forms of multicast, and peer-to-peer schemes. Along with synthetic parameters, we augment our experiments using data from a deployed VoD service to evaluate the performance of CPM.

  • P2P-TV Systems under Adverse Network Conditions: A Measurement Study

    Page(s): 100 - 108
    PDF (662 KB) | HTML

    In this paper we define a simple experimental setup to analyze the behavior of commercial P2P-TV applications under adverse network conditions. Our goal is to reveal the ability of different P2P-TV applications to adapt to dynamically changing conditions, such as delay, loss, and available capacity, e.g., checking whether such systems implement some form of congestion control. We apply our methodology to four popular commercial P2P-TV applications: PPLive, SOPCast, TVants, and TVUPlayer. Our results show that all the considered applications are in general capable of coping with packet losses and reacting to congestion arising in the network core. Indeed, all applications keep trying to download data by avoiding bad paths and carefully selecting good peers. However, when the bottleneck affects all peers, e.g., when it is at the access link, their behavior turns rather aggressive, potentially harming both other applications and the network.

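    Adverse conditions of this kind are commonly emulated on a gateway with Linux tc/netem; a minimal sketch follows (our illustration, not the authors' testbed: the interface name and impairment values are placeholders, and the netem rate option needs a reasonably recent kernel):

```python
import subprocess

def impose(dev="eth0", delay_ms=100, loss_pct=1.0, rate_kbit=512):
    """Add delay, random loss, and a rate cap on an interface via
    tc/netem (requires root privileges)."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", dev, "root", "netem",
         "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%",
         "rate", f"{rate_kbit}kbit"],
        check=True)

def clear(dev="eth0"):
    """Remove the impairments."""
    subprocess.run(["tc", "qdisc", "del", "dev", dev, "root"],
                   check=True)
```
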
  • Surface Coverage in Wireless Sensor Networks

    Page(s): 109 - 117
    PDF (536 KB) | HTML

    Coverage is a fundamental problem in Wireless Sensor Networks (WSNs). Existing studies on this topic focus on 2D ideal-plane coverage and 3D full-space coverage. In many real-world applications, however, the targeted Field of Interest is a complex surface in 3D space, for which existing coverage studies do not produce practical results. In this paper, we propose a new coverage model called surface coverage. In surface coverage, the targeted Field of Interest is a complex surface in 3D space and sensors can be deployed only on the surface. We show that existing 2D plane coverage is merely a special case of surface coverage. Simulations show that existing sensor deployment schemes for a 2D plane cannot be directly applied to surface coverage cases. We target two problems under the surface coverage model. First, under stochastic deployment, how many sensors are needed to reach a certain expected coverage ratio? Second, if sensor deployment can be planned, what is the optimal deployment strategy that guarantees full coverage with the least number of sensors? We show that the latter problem is NP-complete and propose three approximation algorithms, for which we prove approximation ratios. We also conduct comprehensive simulations to evaluate the performance of the proposed algorithms.

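    For the first question, a textbook back-of-envelope on an idealized flat region (our simplification; the paper's surface-aware analysis is the actual contribution) gives the flavor: with sensors dropped uniformly at random, the expected coverage ratio is 1 - (1 - a/A)^n, which can be inverted for n:

```python
import math

def sensors_needed(target, a, A):
    """Uniform random deployment over area A, each sensor covering
    area a: E[coverage] = 1 - (1 - a/A)**n, solved for the least n."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - a / A))

print(sensors_needed(0.95, a=10.0, A=1000.0))  # -> 299
```
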
  • Double Mobility: Coverage of the Sea Surface with Mobile Sensor Networks

    Page(s): 118 - 126
    PDF (754 KB) | HTML

    We are interested in sensor networks for scientific applications that cover and measure statistics on the sea surface. Due to flows and waves, the sensor nodes may gradually drift from their positions, leaving the points of interest uncovered. Manual readjustment is costly and cannot be performed in time. We argue that a network of mobile sensor nodes capable of self-adjustment is the best candidate to maintain the coverage of the surface area. In our application, we face a unique double-mobility coverage problem. That is, there is an uncontrollable mobility, U-Mobility, caused by the flows, which breaks the coverage of the sensor network; and there is also a controllable mobility, C-Mobility, of the mobile nodes, which we can utilize to restore the coverage. Our objective is to build an energy-efficient scheme for sensor network coverage under this double-mobility behavior. A key observation of our scheme is that the motion of the flow is not only a curse but also a potential blessing: sensor nodes can be pushed, for free, to locations that help improve the overall coverage. With that taken into consideration, more efficient movement decisions can be made. To this end, we present a dominating set maintenance scheme to maximally exploit the U-Mobility and balance the energy consumption among all the sensor nodes. We prove that coverage is guaranteed in our scheme. We further propose a fully distributed protocol that addresses a set of practical issues. Through extensive simulations, we demonstrate that the network lifetime can be significantly extended, compared to a straightforward back-to-original repositioning scheme.

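    The centralized skeleton of a dominating-set computation is easy to state; the sketch below uses the classic greedy approximation (our stand-in: the paper's maintenance scheme is distributed, mobility-aware, and energy-balancing):

```python
def greedy_dominating_set(adj):
    """Greedy minimum-dominating-set approximation: repeatedly pick the
    node that covers the most still-uncovered nodes (itself plus its
    neighbors). adj maps each node to its neighbor set."""
    uncovered = set(adj)
    chosen = []
    while uncovered:
        best = max(adj, key=lambda u: len(({u} | adj[u]) & uncovered))
        chosen.append(best)
        uncovered -= {best} | adj[best]
    return chosen

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
print(greedy_dominating_set(adj))  # -> [3, 4]
```
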