
IEEE/ACM Transactions on Networking

Issue 4 • Aug. 2009


Displaying Results 1 - 25 of 31
  • [Front cover]

    Page(s): C1 - C4
  • IEEE/ACM Transactions on Networking publication information

    Page(s): C2
  • Strong Performance Guarantees for Asynchronous Buffered Crossbar Schedulers

    Page(s): 1017 - 1028

    Crossbar-based switches are commonly used to implement routers with throughputs up to about 1 Tb/s. The advent of crossbar scheduling algorithms that provide strong performance guarantees now makes it possible to engineer systems that perform well, even under extreme traffic conditions. Until recently, such performance guarantees have only been developed for crossbars that switch cells rather than variable-length packets. Cell-based crossbars incur a worst-case bandwidth penalty of up to a factor of two, since they must fragment variable-length packets into fixed-length cells. In addition, schedulers for cell-based crossbars may fail to deliver the expected performance guarantees when used in routers that forward packets. We show how to obtain performance guarantees for asynchronous crossbars that are directly comparable to those previously developed for synchronous, cell-based crossbars. In particular, we define derivatives of the group by virtual output queue (GVOQ) scheduler of Chuang and the least occupied output first scheduler of Krishna and show that both can provide strong performance guarantees in systems with speedup 2. Specifically, we show that these schedulers are work-conserving and that they can emulate an output-queued switch using any queueing discipline in the class of restricted Push-In, First-Out queueing disciplines. We also show that there are schedulers for segment-based crossbars (introduced recently by Katevenis and Passas) that can deliver strong performance guarantees with small buffer requirements and no bandwidth fragmentation.

  • High-Bandwidth Network Memory System Through Virtual Pipelines

    Page(s): 1029 - 1041

    As network bandwidth increases, designing an effective memory system for network processors becomes a significant challenge. The size of the routing tables, the complexity of the packet classification rules, and the amount of packet buffering required all continue to grow at a staggering rate. Simply relying on large, fast SRAMs alone is not likely to be scalable or cost-effective. Instead, trends point to the use of low-cost commodity DRAM devices as a means to deliver the worst-case memory performance that network data-plane algorithms demand. While DRAMs can deliver a great deal of throughput, the problem is that memory banking significantly complicates the worst-case analysis, and specialized algorithms are needed to ensure that specific types of access patterns are conflict-free. We introduce virtually pipelined memory, an architectural technique that efficiently supports high bandwidth, uniform latency memory accesses, and high-confidence throughput even under adversarial conditions. Virtual pipelining provides a simple-to-analyze programming model of a deep pipeline (deterministic latencies) with a completely different physical implementation (a memory system with banks and probabilistic mapping). This allows designers to effectively decouple the analysis of their algorithms and data structures from the analysis of the memory buses and banks. Unlike specialized hardware customized for a specific data-plane algorithm, our system makes no assumption about the memory access patterns. We present a mathematical argument for our system's ability to provably provide bandwidth with high confidence and demonstrate its functionality and area overhead through a synthesizable design. We further show that, even though our scheme is general purpose to support new applications such as packet reassembly, it outperforms the state-of-the-art in specialized packet buffering architectures.

  • Source Models for Speech Traffic Revisited

    Page(s): 1042 - 1051

    In this paper, we analyze packet traces of widely used voice codecs and present analytical source models which describe their output by stochastic processes. Both the G.711 and the G.729.1 codec yield periodic packet streams with a fixed packet size, the G.723.1 as well as the iLBC codec use silence detection leading to an on/off process, and the GSM AMR and the iSAC codec produce periodic packet streams with variable packet sizes. We apply all codecs to a large set of typical speech samples and analyze the output of the codecs statistically. Based on these evaluations we provide quantitative models using standard and modified on/off processes as well as memory Markov chains. Our models are simple and easy to use. They are in good accordance with the original traces as they capture not only the complementary cumulative distribution function (CCDF) of the on/off phase durations and the packet sizes, but also the autocorrelation function (ACF) of consecutive packet sizes as well as the queueing properties of the original traces. In contrast, voice traffic models used in most of today's simulations or analytical studies fail to reproduce the ACF and the queueing properties of original traces. This possibly leads to underestimation of performance measures like the waiting time or loss probabilities. The models proposed in this paper do not suffer from this shortcoming and present an attractive alternative for use in future performance studies.

  • PRIME: Peer-to-Peer Receiver-Driven Mesh-Based Streaming

    Page(s): 1052 - 1065

    The success of file swarming mechanisms such as BitTorrent has motivated a new approach for scalable streaming of live content that we call mesh-based Peer-to-Peer (P2P) streaming. In this approach, participating end-systems (or peers) form a randomly connected mesh and incorporate swarming content delivery to stream live content. Despite the growing popularity of this approach, neither the fundamental design tradeoffs nor the basic performance bottlenecks in mesh-based P2P streaming are well understood. In this paper, we follow a performance-driven approach to design PRIME, a scalable mesh-based P2P streaming mechanism for live content. The main design goal of PRIME is to minimize two performance bottlenecks, namely bandwidth bottleneck and content bottleneck. We show that the global pattern of delivery for each segment of live content should consist of a diffusion phase which is followed by a swarming phase. This leads to effective utilization of available resources to accommodate scalability and also minimizes content bottleneck. Using packet level simulations, we carefully examine the impact of overlay connectivity, the packet scheduling scheme at individual peers, and source behavior on the overall performance of the system. Our results reveal fundamental design tradeoffs of mesh-based P2P streaming for live content.

  • Packet Pacing in Small Buffer Optical Packet Switched Networks

    Page(s): 1066 - 1079

    In the absence of a cost-effective technology for storing optical signals, emerging optical packet switched (OPS) networks are expected to have severely limited buffering capability. To mitigate the performance degradation resulting from small buffers, this paper proposes that optical edge nodes "pace" the injection of traffic into the OPS core. Our contributions relating to pacing in OPS networks are three-fold: first, we develop real-time pacing algorithms of poly-logarithmic complexity that are feasible for practical implementation in emerging high-speed OPS networks. Second, we provide an analytical quantification of the benefits of pacing in reducing traffic burstiness and traffic loss at a link with very small buffers. Third, we show via simulations of realistic network topologies that pacing can significantly reduce network losses at the expense of a small and bounded increase in end-to-end delay for real-time traffic flows. We argue that the loss-delay tradeoff mechanism provided by pacing can be instrumental in overcoming the performance hurdle arising from the scarcity of buffers in OPS networks.

  • Single-Link Failure Detection in All-Optical Networks Using Monitoring Cycles and Paths

    Page(s): 1080 - 1093

    In this paper, we consider the problem of fault localization in all-optical networks. We introduce the concept of monitoring cycles (MCs) and monitoring paths (MPs) for unique identification of single-link failures. MCs and MPs are required to pass through one or more monitoring locations. They are constructed such that any single-link failure results in the failure of a unique combination of MCs and MPs that pass through the monitoring location(s). For a network with only one monitoring location, we prove that three-edge connectivity is a necessary and sufficient condition for constructing MCs that uniquely identify any single-link failure in the network. For this case, we formulate the problem of constructing MCs as an integer linear program (ILP). We also develop heuristic approaches for constructing MCs in the presence of one or more monitoring locations. For an arbitrary network (not necessarily three-edge connected), we describe a fault localization technique that uses both MPs and MCs and that employs multiple monitoring locations. We also provide a linear-time algorithm to compute the minimum number of required monitoring locations. Through extensive simulations, we demonstrate the effectiveness of the proposed monitoring technique.

  • Hybrid Wireless-Optical Broadband Access Network (WOBAN): Network Planning Using Lagrangean Relaxation

    Page(s): 1094 - 1105

    The concept of a hybrid wireless-optical broadband access network (WOBAN) is a very attractive one. This is because it may be costly in several situations to run fiber to every home (or equivalent end-user premises) from the telecom central office (CO); also, providing wireless access from the CO to every end user may not be possible because of limited spectrum. Thus, running fiber as far as possible from the CO toward the end user and then having wireless access technologies take over may be an excellent compromise. How far fiber should penetrate before wireless takes over is an interesting engineering design and optimization problem, which we address in this paper. We propose and investigate the characteristics of an analytical model for network planning, namely optimum placements of base stations (BSs) and optical network units (ONUs) in a WOBAN (called the primal model, or PM). We develop several constraints to be satisfied: BS and ONU installation constraints, user assignment constraints, channel assignment constraints, capacity constraints, and signal-quality and interference constraints. To solve this PM with reasonable accuracy, we use "Lagrangean relaxation" to obtain the corresponding "Lagrangean dual" model. We solve this dual problem to obtain a lower bound (LB) of the primal problem. We also develop an algorithm (called the primal algorithm) to solve the PM to obtain an upper bound (UB). Via simulation, we compare this PM to a placement heuristic (called the cellular heuristic) and verify that the placement problem is quite sensitive to a set of chosen metrics.

  • Asymptotic Connectivity in Wireless Ad Hoc Networks Using Directional Antennas

    Page(s): 1106 - 1117

    Connectivity is a crucial issue in wireless ad hoc networks (WANETs). Gupta and Kumar have shown that in WANETs using omnidirectional antennas, the critical transmission range to achieve asymptotic connectivity is O(√(log n/n)) if n nodes are uniformly and independently distributed in a disk of unit area. In this paper, we investigate the connectivity problem when directional antennas are used. We first assume that each node in the network randomly beamforms in one beam direction. We find that there also exists a critical transmission range for a WANET to achieve asymptotic connectivity, which corresponds to a critical transmission power (CTP). Since CTP is dependent on the directional antenna pattern, the number of beams, and the propagation environment, we then formulate a non-linear programming problem to minimize the CTP. We show that when directional antennas use the optimal antenna pattern, the CTP in a WANET using directional antennas at both transmitter and receiver is smaller than that when either transmitter or receiver uses directional antenna and is further smaller than that when only omnidirectional antennas are used. Moreover, we revisit the connectivity problem assuming that two neighboring nodes using directional antennas can be guaranteed to beamform toward each other to carry out the transmission. A smaller critical transmission range than that in the previous case is found, which implies smaller CTP.

  • The Achievable Rate Region of 802.11-Scheduled Multihop Networks

    Page(s): 1118 - 1131

    In this paper, we characterize the achievable rate region for any IEEE 802.11-scheduled static multihop network. To do so, we first characterize the achievable edge-rate region, that is, the set of edge rates that are achievable on the given topology. This requires a careful consideration of the interdependence among edges since neighboring edges collide with and affect the idle time perceived by the edge under study. We approach this problem in two steps. First, we consider two-edge topologies and study the fundamental ways they interact. Then, we consider arbitrary multihop topologies, compute the effect that each neighboring edge has on the edge under study in isolation, and combine to get the aggregate effect. We then use the characterization of the achievable edge-rate region to characterize the achievable rate region. We verify the accuracy of our analysis by comparing the achievable rate region derived from simulations with the one derived analytically. We make a couple of interesting and somewhat surprising observations while deriving the rate regions. First, the achievable rate region with 802.11 scheduling is not necessarily convex. Second, the performance of 802.11 is surprisingly good. For example, in all the topologies used for model verification, the max-min allocation under 802.11 is at least 64% of the max-min allocation under a perfect scheduler.

  • Understanding the Capacity Region of the Greedy Maximal Scheduling Algorithm in Multihop Wireless Networks

    Page(s): 1132 - 1145

    In this paper, we characterize the performance of an important class of scheduling schemes, called greedy maximal scheduling (GMS), for multihop wireless networks. While a lower bound on the throughput performance of GMS has been well known, empirical observations suggest that it is quite loose and that the performance of GMS is often close to optimal. In this paper, we provide a number of new analytic results characterizing the performance limits of GMS. We first provide an equivalent characterization of the efficiency ratio of GMS through a topological property called the local-pooling factor of the network graph. We then develop an iterative procedure to estimate the local-pooling factor under a large class of network topologies and interference models. We use these results to study the worst-case efficiency ratio of GMS on two classes of network topologies. We show how these results can be applied to tree networks to prove that GMS achieves the full capacity region in tree networks under the K-hop interference model. Then, we show that the worst-case efficiency ratio of GMS in geometric unit-disk graphs is between 1/6 and 1/3.

  • Delay Analysis for Maximal Scheduling With Flow Control in Wireless Networks With Bursty Traffic

    Page(s): 1146 - 1159

    We consider the delay properties of one-hop networks with general interference constraints and multiple traffic streams with time-correlated arrivals. We first treat the case when arrivals are modulated by independent finite state Markov chains. We show that the well known maximal scheduling algorithm achieves average delay that grows at most logarithmically in the largest number of interferers at any link. Further, in the important special case when each Markov process has at most two states (such as bursty ON/OFF sources), we prove that average delay is independent of the number of nodes and links in the network, and hence is order-optimal. We provide tight delay bounds in terms of the individual auto-correlation parameters of the traffic sources. These are perhaps the first order-optimal delay results for controlled queueing networks that explicitly account for such statistical information. Our analysis treats cases both with and without flow control.

  • Opportunistic Use of Client Repeaters to Improve Performance of WLANs

    Page(s): 1160 - 1171

    Currently deployed IEEE 802.11 WLANs (Wi-Fi networks) share access point (AP) bandwidth on a per-packet basis. However, various stations communicating with the AP often have different signal qualities, resulting in different transmission rates. This induces a phenomenon known as the rate anomaly problem, in which stations with lower signal quality transmit at lower rates and consume a significant majority of airtime, thereby dramatically reducing the throughput of stations transmitting at higher rates. We propose SoftRepeater, a practical, deployable system in which stations cooperatively address the rate anomaly problem. Specifically, higher rate Wi-Fi stations opportunistically transform themselves into repeaters for lower rate stations when transmitting data to/from the AP. The key challenge is to determine when it is beneficial to enable the repeater functionality. In view of this, we propose an initiation protocol that ensures that repeater functionality is enabled only when appropriate. Also, our system can run directly on top of today's 802.11 infrastructure networks. In addition, we describe a novel, zero-overhead network coding scheme that further alleviates undesirable symptoms of the rate anomaly problem. Using simulation and testbed implementation, we find that SoftRepeater can improve cumulative throughput by up to 200%.

  • On Accurate and Asymmetry-Aware Measurement of Link Quality in Wireless Mesh Networks

    Page(s): 1172 - 1185

    This paper presents a highly efficient and accurate link-quality measurement framework, called efficient and accurate link-quality monitor (EAR), for multihop wireless mesh networks (WMNs) that has several salient features. First, it exploits three complementary measurement schemes: passive, cooperative, and active monitoring. By adopting one of these schemes dynamically and adaptively, EAR maximizes the measurement accuracy, and its opportunistic use of the unicast application traffic present in the network minimizes the measurement overhead. Second, EAR effectively identifies the existence of wireless link asymmetry by measuring the quality of each link in both directions of the link, thus improving the utilization of network capacity by up to 114%. Finally, its cross-layer architecture across both the network layer and the IEEE 802.11-based device driver makes EAR easily deployable in existing multihop wireless mesh networks without system recompilation or MAC firmware modification. EAR has been evaluated extensively via both ns-2-based simulation and experimentation on our Linux-based implementation in a real-life testbed. Both simulation and experimentation results have shown EAR to provide highly accurate link-quality measurements with minimum overhead.

  • Error Control in Wireless Sensor Networks: A Cross Layer Analysis

    Page(s): 1186 - 1199

    Error control is of significant importance for wireless sensor networks (WSNs) because of their severe energy constraints and low-power communication requirements. In this paper, a cross-layer methodology for the analysis of error control schemes in WSNs is presented such that the effects of multi-hop routing and the broadcast nature of the wireless channel are investigated. More specifically, the cross-layer effects of the routing, medium access, and physical layers are considered. This analysis enables a comprehensive comparison of forward error correction (FEC) codes, automatic repeat request (ARQ), and hybrid ARQ schemes in WSNs. The validation results show that the developed framework closely follows simulation results.

  • An Economic Framework for Dynamic Spectrum Access and Service Pricing

    Page(s): 1200 - 1213

    The concept of dynamic spectrum access will allow the radio spectrum to be traded in a market-like scenario, allowing wireless service providers (WSPs) to lease chunks of spectrum on a short-term basis. Such market mechanisms will lead to competition among WSPs, where they not only compete to acquire spectrum but also to attract and retain users. Currently, there is little understanding of how such a dynamic trading system will operate so as to make the system feasible under economic terms. In this paper, we propose an economic framework that can be used to guide i) the dynamic spectrum allocation process and ii) the service pricing mechanisms that the providers can use. We propose a knapsack-based auction model that dynamically allocates spectrum to the WSPs such that revenue and spectrum usage are maximized. We borrow techniques from game theory to capture the conflict of interest between WSPs and end users. A dynamic pricing strategy for the providers is also proposed. We show that even in a greedy and non-cooperative behavioral game model, it is in the best interest of the WSPs to adhere to a price and channel threshold which is a direct consequence of price equilibrium. Through simulation results, we show that the proposed auction model entices WSPs to participate in the auction, makes optimal use of the spectrum, and avoids collusion among WSPs. We demonstrate how pricing can be used as an effective tool for providing incentives to the WSPs to upgrade their network resources and offer better services.

  • Design and Performance of Wireless Data Gathering Networks Based on Unicast Random Walk Routing

    Page(s): 1214 - 1227

    Wireless environment-monitoring applications with significantly relaxed quality-of-service constraints are emerging. This raises the question of whether rough, low-knowledge routing can be used in sensor networks to reduce hardware resources and software complexity. Moreover, low-knowledge handling offers better genericity, which is of interest, for instance, for the basic operations that enable system set-up. In this framework, this paper revisits stateless unicast random walk routing in wireless sensor networks. Based on random walk theory, original closed-form expressions of the delay, the power consumption, and related spatial behaviors are provided according to the scale of the system. Basic properties of such random routing are discussed. Exploiting these properties, data gathering schemes that fulfill the requirements of the application with rather good energy efficiency are then identified.

  • Virtual-Coordinate-Based Delivery-Guaranteed Routing Protocol in Wireless Sensor Networks

    Page(s): 1228 - 1241

    In this paper, we first propose a method, ABVCap, to construct a virtual coordinate system in a wireless sensor network. ABVCap assigns each node multiple 5-tuple virtual coordinates. Subsequently, we introduce a protocol, ABVCap routing, to route packets based on the ABVCap virtual coordinate system. ABVCap routing guarantees packet delivery without the computation and storage of the global topological features. Finally, we demonstrate an approach, ABVCap maintenance, to reconstruct an ABVCap virtual coordinate system in a network with node failures. Simulations show ABVCap routing ensures moderate routing path length, as compared to virtual-coordinate-based routing, GLIDER, Hop ID, GLDR, and VCap.

  • Scaling Laws for Data-Centric Storage and Querying in Wireless Sensor Networks

    Page(s): 1242 - 1255

    We use a constrained optimization framework to derive scaling laws for data-centric storage and querying in wireless sensor networks. We consider both unstructured sensor networks, which use blind sequential search for querying, and structured sensor networks, which use efficient hash-based querying. We find that the scalability of a sensor network's performance depends upon whether the increase in energy and storage resources with more nodes is outweighed by the concomitant application-specific increase in event and query loads. We derive conditions that determine: 1) whether the energy requirement per node grows without bound with the network size for a fixed-duration deployment, 2) whether there exists a maximum network size that can be operated for a specified duration on a fixed energy budget, and 3) whether the network lifetime increases or decreases with the size of the network for a fixed energy budget. An interesting finding of this work is that three-dimensional (3D) uniform deployments are inherently more scalable than two-dimensional (2D) uniform deployments, which in turn are more scalable than one-dimensional (1D) uniform deployments.

  • PRESTO: Feedback-Driven Data Management in Sensor Networks

    Page(s): 1256 - 1269

    This paper presents PRESTO, a novel two-tier sensor data management architecture comprising proxies and sensors that cooperate with one another for acquiring data and processing queries. PRESTO proxies construct time-series models of observed trends in the sensor data and transmit the parameters of the model to sensors. Sensors check sensed data with model-predicted values and transmit only deviations from the predictions back to the proxy. Such a model-driven push approach is energy-efficient, while ensuring that anomalous data trends are never missed. In addition to supporting queries on current data, PRESTO also supports queries on historical data using interpolation and local archival at sensors. PRESTO can adapt model and system parameters to data and query dynamics to further extract energy savings. We have implemented PRESTO on a sensor testbed comprising Intel Stargates and Telos Motes. Our experiments show that in a temperature monitoring application, PRESTO yields one to two orders of magnitude reduction in energy requirements over on-demand, proactive or model-driven pull approaches. PRESTO also results in an order of magnitude reduction in query latency in a 1% duty-cycled five hop sensor network over a system that forwards all queries to remote sensor nodes.

  • Rethinking Enterprise Network Control

    Page(s): 1270 - 1283

    This paper presents Ethane, a new network architecture for the enterprise. Ethane allows managers to define a single network-wide fine-grain policy and then enforces it directly. Ethane couples extremely simple flow-based Ethernet switches with a centralized controller that manages the admittance and routing of flows. While radical, this design is backwards-compatible with existing hosts and switches. We have implemented Ethane in both hardware and software, supporting both wired and wireless hosts. We also show that it is compatible with existing high-fanout switches by porting it to popular commodity switching chipsets. We have deployed and managed two operational Ethane networks, one in the Stanford University Computer Science Department supporting over 300 hosts, and another within a small business of 30 hosts. Our deployment experiences have significantly affected Ethane's design.

  • Scalable Network-Layer Defense Against Internet Bandwidth-Flooding Attacks

    Page(s): 1284 - 1297

    In a bandwidth-flooding attack, compromised sources send high-volume traffic to the target with the purpose of causing congestion in its tail circuit and disrupting its legitimate communications. In this paper, we present active Internet traffic filtering (AITF), a network-layer defense mechanism against such attacks. AITF enables a receiver to contact misbehaving sources and ask them to stop sending it traffic; each source that has been asked to stop is policed by its own Internet service provider (ISP), which ensures its compliance. An ISP that hosts misbehaving sources either supports AITF (and accepts to police its misbehaving clients), or risks losing all access to the complaining receiver; this is a strong incentive to cooperate, especially when the receiver is a popular public-access site. We show that AITF preserves a significant fraction of a receiver's bandwidth in the face of bandwidth flooding, and does so at a per-client cost that is already affordable for today's ISPs; this per-client cost is not expected to increase, as long as botnet-size growth does not outpace Moore's law. We also show that even the first two networks that deploy AITF can maintain their connectivity to each other in the face of bandwidth flooding. We conclude that the network layer of the Internet can provide an effective, scalable, and incrementally deployable solution against bandwidth-flooding attacks.
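    The cooperate-or-lose-access incentive in this abstract can be captured in a small state machine. This is a hypothetical sketch of the logic only (class and method names are invented, and real AITF operates on packet filters, not Python sets): a cooperating ISP installs a filter for the flagged source, while a non-cooperating ISP forfeits reachability to the complaining receiver.

```python
# Hedged sketch of AITF's incentive structure: police the flagged client,
# or lose all access to the receiver that complained.
class ISP:
    def __init__(self, supports_aitf):
        self.supports_aitf = supports_aitf
        self.filters = set()            # (source, receiver) pairs policed
        self.blocked_receivers = set()  # receivers this ISP can no longer reach

    def handle_filter_request(self, source, receiver):
        if self.supports_aitf:
            self.filters.add((source, receiver))   # police the client
        else:
            self.blocked_receivers.add(receiver)   # lose access instead

    def allows(self, source, receiver):
        if receiver in self.blocked_receivers:
            return False
        return (source, receiver) not in self.filters

good = ISP(supports_aitf=True)
good.handle_filter_request("bot1", "victim")   # only bot1 is cut off
bad = ISP(supports_aitf=False)
bad.handle_filter_request("bot1", "victim")    # every client loses the victim
```

    The asymmetry is the point: filtering one client is cheap, while losing a popular destination for all clients is not.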

  • On the Placement of Infrastructure Overlay Nodes

    Page(s): 1298 - 1311

    Overlay routing has emerged as a promising approach to improving the performance and reliability of Internet paths. To fully realize the potential of overlay routing under the constraints of deployment costs in terms of hardware, network connectivity, and human effort, it is critical to carefully place infrastructure overlay nodes to balance the tradeoff between performance and resource constraints. In this paper, we investigate approaches to perform intelligent placement of overlay nodes to facilitate (i) resilient routing and (ii) TCP performance improvement. We formulate objective functions that capture application behavior (reliability and TCP performance) and develop several placement algorithms, which offer a wide range of tradeoffs in complexity and in the required knowledge of client-server locations and traffic load. Using simulations on synthetic and real Internet topologies, as well as PlanetLab experiments, we demonstrate the effectiveness of the proposed placement algorithms and objective functions. We conclude that a hybrid approach combining greedy and random approaches provides the best tradeoff between computational efficiency and accuracy. We also uncover a fundamental challenge in simultaneously optimizing for reliability and TCP performance, and propose a simple unified algorithm to achieve both.
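    The greedy flavor of placement algorithm compared in this abstract has a standard shape: repeatedly pick the candidate site with the largest marginal gain in the objective. The sketch below uses an invented objective (number of client paths gaining an alternate route) and made-up site data; the paper's actual objectives score reliability and TCP performance.

```python
# Hedged sketch of greedy overlay-node placement over an invented
# coverage objective: at each step, place a node at the candidate site
# that covers the most still-uncovered client paths.
def greedy_place(candidates, covers, budget):
    """candidates: list of sites; covers: site -> set of client paths."""
    placed, covered = [], set()
    for _ in range(budget):
        # pick the site with the largest marginal gain
        best = max(candidates, key=lambda s: len(covers[s] - covered))
        if not covers[best] - covered:
            break  # no site adds anything; stop early
        placed.append(best)
        covered |= covers[best]
    return placed, covered

covers = {
    "site1": {"pathA", "pathB"},
    "site2": {"pathB", "pathC", "pathD"},
    "site3": {"pathA"},
}
placed, covered = greedy_place(list(covers), covers, budget=2)
```

    Greedy is a natural baseline here because marginal-gain selection is fast and, for coverage-style objectives, comes with well-known approximation behavior; the paper's hybrid adds randomization on top of this kind of loop.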

  • Distributed Iterative Optimal Resource Allocation With Concurrent Updates of Routing and Flow Control Variables

    Page(s): 1312 - 1325

    Consider a set of active elastic sessions over a network. Session traffic is routed at each hop (potentially through multiple network paths) based only on its destination. Each session is associated with a concave increasing utility function of its transfer rate. The transfer rates of all sessions and the routing policy define the operating point of the network. We construct a metric f of the goodness of this operating point. f is an increasing function of the session utilities and a decreasing function of the extent of congestion in the network. We define "good" operating points as those that maximize f, subject to the capacity constraints in the network. This paper presents a distributed, iterative algorithm for adapting the session rates and the routing policy across the network so as to converge asymptotically to the set of "good" operating points. The algorithm updates session rates and routing variables concurrently and is, therefore, amenable to distributed online implementation. The convergence of the concurrent update scheme is proved rigorously.
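    A metric of this flavor, and concurrent iterative updates toward its maximizer, can be illustrated on a toy single-link network. Everything concrete below is an assumption for illustration only: the log utility, the quadratic congestion penalty, the step size, and the single shared link are not the paper's model, and real routing variables are omitted.

```python
# Toy sketch: maximize a goodness metric f = sum of concave utilities
# minus a congestion penalty, via concurrent gradient updates of all
# session rates on one shared link of invented capacity.
import math

CAPACITY = 10.0  # hypothetical link capacity
STEP = 0.01      # hypothetical step size

def f(rates):
    # goodness: increasing in utilities, decreasing in congestion
    load = sum(rates)
    return sum(math.log(x) for x in rates) - max(0.0, load - CAPACITY) ** 2

def iterate(rates, steps=5000):
    for _ in range(steps):
        load = sum(rates)
        penalty_grad = 2 * max(0.0, load - CAPACITY)
        # all sessions update concurrently from the same observed load
        rates = [max(1e-6, x + STEP * (1.0 / x - penalty_grad))
                 for x in rates]
    return rates

rates = iterate([1.0, 1.0, 1.0])
# with identical log utilities, the sessions converge to equal rates
```

    The point of the concurrent update is visible in the loop body: every rate is revised from the same snapshot of the load, with no session waiting on another's update, which is what makes a distributed online implementation plausible.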


Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.


Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign