
IEEE/ACM Transactions on Networking

Issue 4 • Date Aug. 2012


  • [Front cover]

    Page(s): C1 - C4
    Freely Available from IEEE
  • IEEE/ACM Transactions on Networking publication information

    Page(s): C2
    Freely Available from IEEE
  • Network-Level Access Control Policy Analysis and Transformation

    Page(s): 985 - 998

    Network-level access control policies are often specified by various people (network, application, and security administrators), and this may result in conflicts or suboptimal policies. We have defined a new formal model for policy representation that is independent of the actual enforcement elements, along with a procedure that allows the easy identification and removal of inconsistencies and anomalies. Additionally, the policy can be translated to the model used by the target access control element to prepare it for actual deployment. In particular, we show that every policy can be translated into one that uses the “First Matching Rule” resolution strategy. Our policy model and optimization procedure have been implemented in a tool that experimentally demonstrates its applicability to real-life cases.

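The “First Matching Rule” strategy named in the abstract can be sketched in a few lines; the rule representation and all names below are illustrative assumptions, not the paper's formal model:

```python
# Illustrative sketch of "First Matching Rule" (FMR) resolution: rules are
# scanned in priority order, and the first rule whose condition matches the
# packet decides the action. Names and the rule encoding are hypothetical.
from typing import Callable, List, Tuple

Rule = Tuple[Callable[[dict], bool], str]  # (condition, action)

def fmr_decide(rules: List[Rule], packet: dict, default: str = "deny") -> str:
    for condition, action in rules:
        if condition(packet):
            return action  # first match wins; later matching rules are shadowed
    return default

rules = [
    (lambda p: p["dst_port"] == 22, "deny"),           # block SSH everywhere
    (lambda p: p["src"].startswith("10."), "allow"),   # allow internal hosts
]
print(fmr_decide(rules, {"src": "10.0.0.5", "dst_port": 22}))  # deny (rule 1 shadows rule 2)
```

Under FMR, rule ordering alone resolves conflicts, which is why the paper's result that any policy can be translated into an FMR policy matters for deployment.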
  • A Joint Approach to Routing Metrics and Rate Adaptation in Wireless Mesh Networks

    Page(s): 999 - 1009

    This paper presents MARA, a joint mechanism for automatic rate selection and route quality evaluation in wireless mesh networks. The mechanism addresses two problems common to existing multihop wireless routing metrics and automatic rate adaptation proposals: the lack of synchronization between metric and rate-selection decisions, and inaccurate link-quality estimates. In MARA, the statistics collected by the routing protocol are used by the rate adaptation algorithm to compute the best rate for each wireless link. This coordinated decision aims at providing better routing and rate choices. In addition to the basic MARA algorithm, two variations are proposed: MARA-P and MARA-RP. The first considers the size of each packet in the transmission-rate decision. The second considers the packet size for the routing choices as well. For evaluation purposes, experiments were conducted in both real and simulated environments, comparing MARA to a number of rate adaptation algorithms and routing metrics. Results from both environments indicate that MARA may lead to an overall network performance improvement.

  • Taming the Mobile Data Deluge With Drop Zones

    Page(s): 1010 - 1023

    Human communication has been changed by the advent of smartphones: using commonplace mobile device features, users now upload large and growing amounts of content. This increase in demand will overwhelm capacity and limit the providers' ability to deliver the quality of service demanded by their users. In the absence of technical solutions, cellular network providers are considering changes to billing plans to address this. Our contributions are twofold. First, by analyzing the mobility and data logs of 2 million users of one of the largest US cellular providers, we find that the user-generated content problem is a user behavioral problem. In particular, we find that: 1) users upload content from a small number of locations; 2) because those locations differ from user to user, the problem appears ubiquitous. However, we also find that: 3) there exists a significant lag between content generation and upload times, and 4) it is consistently the same users who delay their uploads. Second, we propose a cellular network architecture based on capacity upgrades at a select number of locations called Drop Zones. Although not particularly popular for uploads originally, Drop Zones seamlessly fall within the natural movement patterns of a large number of users. They are therefore well suited for uploading larger quantities of content in a postponed manner. We design infrastructure placement algorithms and demonstrate that by upgrading infrastructure at only 963 base stations across the entire US, it is possible to deliver 50% of content via Drop Zones.

  • A Transport Protocol to Exploit Multipath Diversity in Wireless Networks

    Page(s): 1024 - 1039

    Wireless networks (including wireless mesh networks) provide opportunities for using multiple paths. Multihoming of hosts, possibly using different technologies and providers, also makes it attractive for end-to-end transport connections to exploit multiple paths. In this paper, we propose a multipath transport protocol, based on a carefully crafted set of enhancements to TCP, that effectively utilizes the available bandwidth and diversity provided by heterogeneous, lossy wireless paths. Our Multi-Path LOss-Tolerant (MPLOT) transport protocol can be used to obtain significant goodput gains in wireless networks subject to bursty, correlated losses with average loss rates as high as 50%. MPLOT is built around the principle of separability of reliability and congestion control functions in an end-to-end transport protocol. Congestion control is performed separately on individual paths, and the reliability mechanism works over the aggregate set of paths available for an end-to-end session. MPLOT distinguishes between congestion and link losses through Explicit Congestion Notification (ECN), and uses Forward Error Correction (FEC) coding to recover from data losses. MPLOT uses a dynamic packet mapping based on the current path characteristics to choose a path for a packet. Use of erasure codes and block-level recovery ensures that in MPLOT the receiving transport entity can recover all data as long as a sufficient number of packets in the block are received, irrespective of which packets are lost. We present a theoretical analysis of the different design choices of MPLOT and show that MPLOT chooses its policies and parameters such that a desirable tradeoff between goodput and data recovery delay is attained. We evaluate MPLOT, through simulations, under a variety of test scenarios and demonstrate that it effectively exploits path diversity in addition to efficiently aggregating path bandwidths while remaining fair to a conventional TCP flow on each path.

  • DRAM-Based Statistics Counter Array Architecture With Performance Guarantee

    Page(s): 1040 - 1053

    The problem of efficiently maintaining a large number (say millions) of statistics counters that need to be updated at very high speeds (e.g., 40 Gb/s) has received considerable research attention in recent years. This problem arises in a variety of router management and data streaming applications where large arrays of counters are used to track various network statistics and implement various counting sketches. It proves too costly to store such large counter arrays entirely in SRAM, while DRAM is viewed as too slow for providing wirespeed updates at such high line rates. In this paper, we propose a DRAM-based counter architecture that can effectively maintain wirespeed updates to large counter arrays. The proposed approach is based on the observation that modern commodity DRAM architectures, driven by aggressive performance roadmaps for consumer applications such as video games, have advanced architecture features that can be exploited to make a DRAM-based solution practical. In particular, we propose a randomized DRAM architecture that can harness the performance of modern commodity DRAM offerings by interleaving counter updates to multiple memory banks. The proposed architecture makes use of a simple randomization scheme, a small cache, and small request queues to statistically guarantee a near-perfect load-balancing of counter updates to the DRAM banks. The statistical guarantee of the proposed randomized scheme is proven using a novel combination of convex ordering and large deviation theory. Our proposed counter scheme can support arbitrary increments and decrements at wirespeed, and it can support different number representations, including both integer and floating-point.

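The bank-interleaving idea at the core of this abstract can be illustrated with a toy sketch: updates are routed through a pseudorandom mapping to one of several per-bank request queues so that load balances statistically. The mapping, bank count, and data structures below are illustrative assumptions, not the paper's design:

```python
# Toy sketch of randomized bank interleaving for a DRAM counter array:
# each counter update is routed, via a pseudorandom but fixed mapping,
# to one of NUM_BANKS "banks", each with its own small request queue.
import random
from collections import deque

NUM_BANKS = 8
random.seed(1)
salt = random.getrandbits(32)  # hypothetical per-boot salt for the mapping

def bank_of(counter_id: int) -> int:
    # pseudorandom but stable mapping: counter -> bank
    return hash((counter_id, salt)) % NUM_BANKS

queues = [deque() for _ in range(NUM_BANKS)]

def update(counter_id: int, delta: int) -> None:
    queues[bank_of(counter_id)].append((counter_id, delta))

for cid in range(10_000):
    update(cid, 1)

sizes = [len(q) for q in queues]
print(max(sizes) - min(sizes))  # imbalance is small relative to 10_000 / 8
```

The paper's contribution is precisely the statistical guarantee (via convex ordering and large deviations) that such imbalance stays bounded, which this sketch only illustrates empirically.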
  • Some Fundamental Results on Base Station Movement Problem for Wireless Sensor Networks

    Page(s): 1054 - 1067

    The benefits of using a mobile base station to prolong sensor network lifetime have been well recognized. However, due to the complexity of the problem (time-dependent network topology and traffic routing), theoretical performance limits and provably optimal algorithms remain difficult to develop. This paper fills this important gap by contributing some theoretical results regarding the optimal movement of a mobile base station. Our main result hinges upon two key intermediate results. In the first result, we show that a time-dependent joint base station movement and flow routing problem can be transformed into a location-dependent problem. In the second result, we show that, for $(1-\varepsilon)$ optimality, the infinite possible locations for base station movement can be reduced to a finite set of locations via several constructive steps [i.e., discretization of energy cost through a geometric sequence, division of a disk into a finite number of subareas, and representation of each subarea with a fictitious cost point (FCP)]. Subsequently, for each FCP, we can obtain the optimal sojourn time for the base station (as well as the corresponding location-dependent flow routing) via a simple linear program. We prove that the proposed solution can guarantee the achieved network lifetime is at least $(1-\varepsilon)$ of the maximum (unknown) network lifetime, where $\varepsilon$ can be made arbitrarily small depending on the required precision.

  • DSASync: Managing End-to-End Connections in Dynamic Spectrum Access Wireless LANs

    Page(s): 1068 - 1081

    Wireless LANs (WLANs) have been widely deployed as edge access networks that provide the important service of Internet access to wireless devices. Therefore, performance of end-to-end connections to/from such WLANs is of great importance. The advent of Dynamic Spectrum Access (DSA) technology is expected to play a key role in improving wireless communication. With DSA capability, WLANs opportunistically access licensed channels in order to improve spectrum-usage efficiency and provide better network performance. In this paper, we identify the key issues that impact end-to-end connection performance when a DSA-enabled WLAN is integrated with the wired cloud. We propose a new network management framework, called DSASync, to mitigate the identified performance issues. DSASync achieves this objective by managing connections at the transport layer as a third-party supervisor and targets both TCP streams and UDP flows. DSASync requires no modifications to the network infrastructure or the existing network stack and protocols, while ensuring that transport-protocol (TCP or UDP) semantics are obeyed. It mainly consists of a combination of buffering and traffic-shaping algorithms to minimize the adverse side effects of DSA on active connections. DSASync is evaluated using a prototype implementation and deployment in a testbed. The results show significant improvement in end-to-end connection performance, with substantial gains on QoS metrics like goodput, delay, and jitter. Thus, DSASync is a promising step toward applying DSA technology in consumer WLANs.

  • Control-Theoretic Utility Maximization in Multihop Wireless Networks Under Mission Dynamics

    Page(s): 1082 - 1095

    Both bandwidth and energy become important resource constraints when multihop wireless networks are used to transport high-data-rate traffic for a moderately long duration. In such networks, it is important to control the traffic rates to not only conform to the link capacity bounds, but also to ensure that the energy of battery-powered forwarding nodes is utilized judiciously to avoid premature exhaustion (i.e., the network lasts as long as the applications require data from the sources) without being unnecessarily conservative (i.e., ensuring that the applications derive the maximum utility possible). Unlike prior work that focuses on the instantaneous distributed optimization of such networks, we consider the more challenging question of how such optimal usage of both link capacity and node energy may be achieved over a time horizon. Our key contributions are twofold. We first show how the formalism of optimal control may be used to derive optimal resource usage strategies over a time horizon, under a variety of both deterministic and statistically uncertain variations in various parameters, such as the duration for which individual applications are active or the time-varying recharge characteristics of renewable energy sources (e.g., solar cell batteries). In parallel, we also demonstrate that these optimal adaptations can be embedded, with acceptably low signaling overhead, into a distributed, utility-based rate adaptation protocol. Simulation studies, based on a combination of synthetic and real data traces, validate the close-to-optimal performance characteristics of these practically realizable protocols.

  • Queue-Length Asymptotics for Generalized Max-Weight Scheduling in the Presence of Heavy-Tailed Traffic

    Page(s): 1096 - 1111

    We investigate the asymptotic behavior of the steady-state queue-length distribution under generalized max-weight scheduling in the presence of heavy-tailed traffic. We consider a system consisting of two parallel queues, served by a single server. One of the queues receives heavy-tailed traffic, and the other receives light-tailed traffic. We study the class of throughput-optimal max-weight-$\alpha$ scheduling policies and derive an exact asymptotic characterization of the steady-state queue-length distributions. In particular, we show that the tail of the light queue distribution is at least as heavy as a power-law curve, whose tail coefficient we obtain explicitly. Our asymptotic characterization also shows that the celebrated max-weight scheduling policy leads to the worst possible tail coefficient of the light queue distribution, among all nonidling policies. Motivated by the above negative result regarding the max-weight-$\alpha$ policy, we analyze a log-max-weight (LMW) scheduling policy. We show that the LMW policy guarantees an exponentially decaying light queue tail while still being throughput-optimal.

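The max-weight-α service rule described above is simple to state: serve the queue maximizing $q_i^{\alpha_i}$. A minimal sketch for the two-queue system, with illustrative α values not taken from the paper:

```python
# Sketch of max-weight-alpha service for parallel queues sharing one server:
# serve the queue i maximizing q_i ** alpha_i. A smaller alpha on the
# heavy-tailed queue de-emphasizes its (often huge) backlog, shielding the
# light queue. The alpha values below are illustrative, not the paper's.
def pick_queue(qlens, alphas):
    weights = [q ** a for q, a in zip(qlens, alphas)]
    return max(range(len(qlens)), key=lambda i: weights[i])

# heavy queue: 100 packets, light queue: 20 packets
print(pick_queue([100, 20], [0.5, 1.0]))  # serves queue 1, since 20**1 > 100**0.5
print(pick_queue([100, 20], [1.0, 1.0]))  # plain max-weight serves queue 0
```

With α = (1, 1) this reduces to classic max-weight, which is exactly the policy the paper shows yields the worst possible light-queue tail coefficient.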
  • Exploring the Throughput Boundaries of Randomized Schedulers in Wireless Networks

    Page(s): 1112 - 1124

    Randomization is a powerful and pervasive strategy for developing efficient and practical transmission scheduling algorithms in interference-limited wireless networks. Yet, despite a variety of earlier works on the design and analysis of particular randomized schedulers, there has been no extensive study of the limitations of randomization on efficient scheduling in wireless networks. In this paper, we aim to fill this gap by proposing a common modeling framework and three functional forms of randomized schedulers that utilize queue-length information to probabilistically schedule nonconflicting transmissions. This framework not only models many existing schedulers operating under a timescale separation assumption as special cases, but it also contains a much wider class of potential schedulers that have not been analyzed. We identify some sufficient and some necessary conditions on the network topology and on the functional forms used in the randomization for throughput optimality. Our analysis reveals an exponential and a subexponential class of functions that differ in throughput optimality. We also observe the significance for throughput optimality of the network's scheduling diversity, measured by the number of maximal schedules each link belongs to. We further validate our theoretical results through numerical studies.

  • On a Noncooperative Model for Wavelength Assignment in Multifiber Optical Networks

    Page(s): 1125 - 1137

    We propose and investigate Selfish Path MultiColoring games as a natural model for noncooperative wavelength assignment in multifiber optical networks. In this setting, we view the wavelength assignment process as a strategic game in which each communication request selfishly chooses a wavelength in an effort to minimize the maximum congestion that it encounters on the chosen wavelength. We measure the cost of a wavelength assignment as the maximum, over all physical links, of the number of parallel fibers employed by the assignment. We start by settling questions related to the existence of, computation of, and convergence to pure Nash equilibria in these games. Our main contribution is a thorough analysis of the price of anarchy of such games, that is, the worst-case ratio between the cost of a Nash equilibrium and the optimal cost. We first provide upper bounds on the price of anarchy for games defined on general network topologies. Along the way, we obtain an upper bound of 2 for games defined on star networks. We next show that our bounds are tight even in the case of tree networks of maximum degree 3, leading to nonconstant price of anarchy for such topologies. In contrast, for network topologies of maximum degree 2, the quality of the solutions obtained by selfish wavelength assignment is much more satisfactory: We prove that the price of anarchy is bounded by 4 for a large class of practically interesting games defined on ring networks.

  • Timescale Decoupled Routing and Rate Control in Intermittently Connected Networks

    Page(s): 1138 - 1151

    We study an intermittently connected network (ICN) composed of multiple clusters of wireless nodes. Within each cluster, nodes can communicate directly using the wireless links. However, these clusters are far away from each other such that direct communication between the clusters is impossible except through “mobile” contact nodes. These mobile contact nodes are data carriers that shuffle between clusters and transport data from the source to the destination clusters. There are several applications of our network model, such as clusters of mobile soldiers connected via unmanned aerial vehicles. Our work here focuses on a queue-based cross-layer technique known as the back-pressure algorithm. The algorithm is known to be throughput-optimal, as well as resilient to disruptions in the network, making it an ideal candidate communication protocol for our intermittently connected network. In this paper, we design a back-pressure routing/rate control algorithm for ICNs. Though it is throughput-optimal, the back-pressure algorithm has several drawbacks when used in ICNs, including long end-to-end delays, a large number of potential queues needed, and loss in throughput due to intermittency. We present a modified back-pressure algorithm that addresses these issues. We implement our algorithm on a 16-node experimental testbed and present our experimental results in this paper.

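The classic back-pressure rule that this paper builds on can be sketched compactly: on a link from node a to node b, schedule the commodity with the largest positive queue differential. This sketch shows only the textbook rule, not the paper's ICN-specific modifications; all names are illustrative:

```python
# Minimal sketch of the back-pressure weight on a link (a, b): pick the
# commodity c maximizing Q_a[c] - Q_b[c], and transmit only if the
# differential is positive (otherwise the link stays idle).
def backpressure_choice(q_a: dict, q_b: dict):
    best_c = max(q_a, key=lambda c: q_a[c] - q_b.get(c, 0))
    diff = q_a[best_c] - q_b.get(best_c, 0)
    return (best_c, diff) if diff > 0 else (None, 0)

q_a = {"flow1": 7, "flow2": 3}   # backlogs at node a, per commodity
q_b = {"flow1": 2, "flow2": 9}   # backlogs at node b
print(backpressure_choice(q_a, q_b))  # ('flow1', 5): largest positive differential
```

Because data only moves down a backlog gradient, queues must build up along long paths before traffic flows, which is one source of the end-to-end delay the abstract cites as a drawback in ICNs.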
  • The Intrusion Detection in Mobile Sensor Network

    Page(s): 1152 - 1161

    Intrusion detection is an important problem in sensor networks. Prior works in static sensor environments show that constructing sensor barriers with random sensor deployment can be effective for intrusion detection. In response to the recent surge of interest in mobile sensor applications, this paper studies the intrusion detection problem in a mobile sensor network, where it is believed that mobile sensors can improve barrier coverage. Specifically, we focus on providing $k$-barrier coverage against moving intruders. This problem becomes particularly challenging given that the trajectories of both sensors and intruders need to be captured. We first demonstrate that this problem is similar to the classical kinetic theory of gas molecules in physics. We then derive the inherent relationship between barrier coverage performance and a set of crucial system parameters, including sensor density, sensing range, and sensor and intruder mobility. We examine the correlations among, and the sensitivity of coverage to, these system parameters, and we derive the minimum number of mobile sensors that need to be deployed in order to maintain $k$-barrier coverage in a mobile sensor network. Finally, we show that the coverage performance can be improved by an order of magnitude with the same number of sensors when compared to that of the static sensor environment.

  • The Little Engine(s) That Could: Scaling Online Social Networks

    Page(s): 1162 - 1175

    The difficulty of partitioning social graphs has introduced new system design challenges for scaling of online social networks (OSNs). Vertical scaling by resorting to full replication can be a costly proposition. Scaling horizontally by partitioning and distributing data among multiple servers using, e.g., distributed hash tables (DHTs), can suffer from expensive interserver communication. Such challenges have often caused costly rearchitecting efforts for popular OSNs like Twitter and Facebook. We design, implement, and evaluate SPAR, a Social Partitioning and Replication middleware that mediates transparently between the application and the database layer of an OSN. SPAR leverages the underlying social graph structure in order to minimize the replication overhead required to ensure that users have their neighbors' data co-located on the same machine. The gains from this are multifold: Application developers can assume local semantics, i.e., develop as they would for a single machine; scalability is achieved by adding commodity machines with low memory and network I/O requirements; and N+K redundancy is achieved at a fraction of the cost. We provide a complete system design, extensive evaluation based on datasets from Twitter, Orkut, and Facebook, and a working implementation. We show that SPAR incurs minimal overhead, can help a well-known Twitter clone reach Twitter's scale without changing a line of its application logic, and achieves higher throughput than Cassandra, a popular key-value store database.

  • Caching for BitTorrent-Like P2P Systems: A Simple Fluid Model and Its Implications

    Page(s): 1176 - 1189

    Peer-to-peer file-sharing systems are responsible for a significant share of the traffic between Internet service providers (ISPs) in the Internet. In order to decrease their peer-to-peer-related transit traffic costs, many ISPs have deployed caches for peer-to-peer traffic in recent years. We consider how the different types of peer-to-peer caches—caches already available on the market and caches expected to become available in the future—can possibly affect the amount of inter-ISP traffic. We develop a fluid model that captures the effects of the caches on the system dynamics of peer-to-peer networks and show that caches can have adverse effects on the system dynamics depending on the system parameters. We combine the fluid model with a simple model of inter-ISP traffic and show that the impact of caches cannot be accurately assessed without considering the effects of the caches on the system dynamics. We identify scenarios when caching actually leads to increased transit traffic. Motivated by our findings, we propose a proximity-aware peer-selection mechanism that avoids the increase of the transit traffic and improves the cache efficiency. We support the analytical results by extensive simulations and experiments with real BitTorrent clients.

  • Design, Implementation, and Performance of a Load Balancer for SIP Server Clusters

    Page(s): 1190 - 1202

    This paper introduces several novel load-balancing algorithms for distributing Session Initiation Protocol (SIP) requests to a cluster of SIP servers. Our load balancer improves both throughput and response time versus a single node while exposing a single interface to external clients. We present the design, implementation, and evaluation of our system using a cluster of Intel x86 machines running Linux. We compare our algorithms to several well-known approaches and present scalability results for up to 10 nodes. Our best algorithm, Transaction Least-Work-Left (TLWL), achieves its performance by integrating several features: knowledge of the SIP protocol, dynamic estimates of back-end server load, distinguishing transactions from calls, recognizing variability in call length, and exploiting differences in processing costs for different SIP transactions. By combining these features, our algorithm provides finer-grained load balancing than standard approaches, resulting in throughput improvements of up to 24% and response-time improvements of up to two orders of magnitude. We present a detailed analysis of occupancy to show how our algorithms significantly reduce response time.

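A “least work left” dispatch policy in the spirit of TLWL can be sketched briefly: track an estimate of outstanding work per back-end server and send each new transaction to the least-loaded one, with different transaction types weighted by their relative processing cost. The cost values and class names below are illustrative assumptions, not the paper's calibrated figures:

```python
# Sketch of least-work-left dispatch: each server carries an estimated
# outstanding-work value; new transactions go to the server with the least
# work, weighted by a per-transaction-type cost. Costs are hypothetical.
INVITE_COST, BYE_COST = 1.75, 1.0  # illustrative relative transaction costs

class Dispatcher:
    def __init__(self, n_servers: int):
        self.work = [0.0] * n_servers  # estimated work left per server

    def dispatch(self, txn_type: str) -> int:
        cost = INVITE_COST if txn_type == "INVITE" else BYE_COST
        server = min(range(len(self.work)), key=lambda i: self.work[i])
        self.work[server] += cost
        return server

    def complete(self, server: int, txn_type: str) -> None:
        self.work[server] -= INVITE_COST if txn_type == "INVITE" else BYE_COST

d = Dispatcher(3)
print([d.dispatch("INVITE") for _ in range(3)])  # spreads across servers: [0, 1, 2]
```

Weighting by transaction type rather than counting raw requests is what lets this style of balancer account for the cost differences between SIP transactions that the abstract highlights.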
  • Networked Computing in Wireless Sensor Networks for Structural Health Monitoring

    Page(s): 1203 - 1216

    This paper studies the problem of distributed computation over a network of wireless sensors. While this problem applies to many emerging applications, to keep our discussion concrete, we focus on sensor networks used for structural health monitoring. Within this context, the heaviest computation is to determine the singular value decomposition (SVD) to extract mode shapes (eigenvectors) of a structure. Compared to collecting raw vibration data and performing SVD at a central location, computing SVD within the network can result in significantly lower energy consumption and delay. Using recent results on decomposing SVD, a well-known centralized operation, we seek to determine a near-optimal communication structure that enables the distribution of this computation and the reassembly of the final results, with the objective of minimizing energy consumption subject to a computational delay constraint. We show that this reduces to a generalized clustering problem and establish that it is NP-hard. By relaxing the delay constraint, we derive a lower bound. We then propose an integer linear program (ILP) to solve the constrained problem exactly, as well as an approximate algorithm with a proven approximation ratio. We further present a distributed version of the approximate algorithm. We present both simulation and experimental results to demonstrate the effectiveness of these algorithms.

  • The Impact of TLS on SIP Server Performance: Measurement and Modeling

    Page(s): 1217 - 1230

    Securing Voice over IP (VoIP) is a crucial requirement for its successful adoption. A key component of this is securing the signaling path, which is performed by the Session Initiation Protocol (SIP). Securing SIP can be accomplished by using Transport Layer Security (TLS) instead of UDP as the transport protocol. However, using TLS for SIP is not yet widespread, perhaps due to concerns about the performance overhead. This paper studies the performance impact of using TLS as a transport protocol for SIP servers. We evaluate the cost of TLS experimentally using a testbed with OpenSIPS, OpenSSL, and Linux running on an Intel-based server. We analyze TLS costs using application, library, and kernel profiling and use the profiles to illustrate when and how different costs are incurred. We show that using TLS can reduce performance by up to a factor of 17 compared to the typical case of SIP-over-UDP. The primary factor in determining performance is whether and how TLS connection establishment is performed, due to the heavy costs of RSA operations used for session negotiation. This depends both on how the SIP proxy is deployed and what TLS operation modes are used. The cost of symmetric key operations such as AES, in contrast, tends to be small. Network operators deploying SIP-over-TLS should attempt to maximize the persistence of secure connections and will need to assess the server resources required. To aid them, we provide a measurement-driven cost model for use in provisioning SIP servers using TLS. Our cost model predicts performance within 15% on average.

  • A Geographic Routing Strategy for North Atlantic In-Flight Internet Access Via Airborne Mesh Networking

    Page(s): 1231 - 1244

    The Airborne Internet is a vision of a large-scale multihop wireless mesh network consisting of commercial passenger aircraft connected via long-range highly directional air-to-air radio links. We propose a geographic load sharing strategy to fully exploit the total air-to-ground capacity available at any given time. When forwarding packets for a given destination, a node considers not one but a set of next-hop candidates and spreads traffic among them based on queue dynamics. In addition, load balancing is performed among Internet Gateways by using a congestion-aware handover strategy. Our simulations using realistic North Atlantic air traffic demonstrate the ability of such a load sharing mechanism to approach the maximum theoretical throughput in the network.
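    The core forwarding idea, spreading traffic over a candidate set weighted by geographic progress and queue backlog, can be sketched as follows. This is an illustrative rule of our own construction, not the paper's exact strategy; all names and weights are assumptions.

    ```python
    import math, random

    def choose_next_hop(node, dest, candidates, queues, rng=random.Random(1)):
        """Queue-aware geographic load sharing (sketch): among next-hop
        candidates that make geographic progress toward dest, pick one with
        probability proportional to progress and inversely related to its
        current queue backlog."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        progress = {c: dist(node, dest) - dist(c, dest) for c in candidates}
        forward = [c for c in candidates if progress[c] > 0]
        if not forward:
            return None  # no candidate advances toward the destination
        weights = [progress[c] / (1 + queues[c]) for c in forward]
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for c, w in zip(forward, weights):
            acc += w
            if r <= acc:
                return c
        return forward[-1]

    hop = choose_next_hop((0, 0), (10, 0),
                          candidates=[(3, 1), (2, -2), (-1, 0)],
                          queues={(3, 1): 0, (2, -2): 5, (-1, 0): 0})
    ```

    The backward candidate is filtered out, and the heavily queued one is chosen less often, which is the load-sharing effect the abstract describes.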

  • A New Cell-Counting-Based Attack Against Tor

    Page(s): 1245 - 1261

    Various low-latency anonymous communication systems such as Tor and Anonymizer have been designed to provide anonymity service for users. In order to hide the communication of users, most of the anonymity systems pack the application data into equal-sized cells (e.g., 512 B for Tor, a known real-world, circuit-based, low-latency anonymous communication network). Via extensive experiments on Tor, we found that the size of IP packets in the Tor network can be very dynamic because a cell is an application concept and the IP layer may repack cells. Based on this finding, we investigate a new cell-counting-based attack against Tor, which allows the attacker to confirm the anonymous communication relationship among users very quickly. In this attack, by marginally varying the number of cells in the target traffic at the malicious exit onion router, the attacker can embed a secret signal into the variation of the cell count of the target traffic. The embedded signal will be carried along with the target traffic and arrive at the malicious entry onion router. Then, an accomplice of the attacker at the malicious entry onion router will detect the embedded signal based on the received cells and confirm the communication relationship among users. We have implemented this attack against Tor, and our experimental data validate its feasibility and effectiveness. There are several unique features of this attack. First, this attack is highly efficient and can confirm very short communication sessions with only tens of cells. Second, this attack is effective, and its detection rate approaches 100% with a very low false positive rate. Third, it is possible to implement the attack in a way that is very difficult for honest participants to detect (e.g., using our hopping-based signal embedding).
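    The embedding principle can be illustrated with a deliberately stripped-down toy: encode bits by varying how many cells are flushed per batch, and recover them from the observed batch sizes. Real Tor traffic perturbs these counts in transit, which is why the paper needs robust (e.g., hopping-based) embedding; none of the functions below come from the paper.

    ```python
    # Toy cell-counting signal: bit 0 -> flush `base` cells,
    # bit 1 -> flush `base + 2` cells per batch.
    def embed(bits, base=1):
        return [base + 2*b for b in bits]

    def detect(batch_sizes, base=1):
        # Threshold halfway between the two batch sizes.
        return [1 if n > base + 1 else 0 for n in batch_sizes]

    signal = [1, 0, 1, 1, 0]
    batches = embed(signal)
    recovered = detect(batches)
    ```

    In this noiseless toy the signal survives intact; the paper's contribution is making detection reliable when cells are split and merged along the circuit.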

  • FlashTrie: Beyond 100-Gb/s IP Route Lookup Using Hash-Based Prefix-Compressed Trie

    Page(s): 1262 - 1275

    It is becoming apparent that the next-generation IP route lookup architecture needs to achieve speeds of 100 Gb/s and beyond while supporting IPv4 and IPv6 with fast real-time updates to accommodate ever-growing routing tables. Some of the proposed multibit-trie-based schemes, such as TreeBitmap, have been used in today's high-end routers. However, their large data structures often require multiple external memory accesses for each route lookup. A pipelining technique is widely used to achieve high-speed lookup at the cost of using many external memory chips. Pipelining also often leads to poor memory load-balancing. In this paper, we propose a new IP route lookup architecture called FlashTrie that overcomes the shortcomings of the multibit-trie-based approaches. We use a hash-based membership query to limit off-chip memory accesses per lookup and to balance memory utilization among the memory modules. By compacting the data structure size, the lookup depth of each level can be increased. We also develop a new data structure called Prefix-Compressed Trie that reduces the size of a bitmap by more than 80%. Our simulation and implementation results show that FlashTrie can achieve 80-Gb/s worst-case throughput while simultaneously supporting 2 M prefixes for IPv4 and 318 k prefixes for IPv6 with one lookup engine and two Double-Data-Rate (DDR3) SDRAM chips. When implementing five lookup engines on a state-of-the-art field programmable gate array (FPGA) chip and using 10 DDR3 memory chips, we expect FlashTrie to achieve 1-Gpps (packets per second) throughput, equivalent to 400 Gb/s for IPv4 and 600 Gb/s for IPv6. FlashTrie also supports incremental real-time updates.
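    The hash-based membership idea can be sketched minimally: keep one hash table per prefix length and probe from longest to shortest, so each probe is a single membership query with a bounded number of memory accesses. This is a generic sketch of that principle, not FlashTrie's actual multilevel structure or Prefix-Compressed Trie encoding.

    ```python
    # Minimal hash-based longest-prefix match over per-length tables.
    def build_tables(prefixes):
        """prefixes: dict mapping 'bits/len' strings like '1101/4' -> next hop."""
        tables = {}
        for p, nh in prefixes.items():
            bits, length = p.split('/')
            tables.setdefault(int(length), {})[bits] = nh
        return tables

    def lookup(tables, addr_bits):
        # Probe lengths longest-first; each probe is one hash membership query.
        for length in sorted(tables, reverse=True):
            hit = tables[length].get(addr_bits[:length])
            if hit is not None:
                return hit
        return None

    tables = build_tables({'1101/4': 'A', '11/2': 'B', '0/1': 'C'})
    route = lookup(tables, '11011111')
    ```

    A hardware design additionally uses on-chip membership filters to decide which single off-chip table to probe, which is the access-bounding effect the abstract claims.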

  • Applications of Belief Propagation in CSMA Wireless Networks

    Page(s): 1276 - 1289

    “Belief propagation” (BP) is an efficient way to solve “inference” problems in graphical models, such as Bayesian networks and Markov random fields. It has found great success in many application areas due to its simplicity, high accuracy, and distributed nature. This paper is a first attempt to apply BP algorithms in CSMA wireless networks. Compared to prior CSMA optimization algorithms such as ACSMA, which are measurement-based, BP-based algorithms are proactive and computational, without the need for network probing and traffic measurement. Consequently, BP-based algorithms are not affected by the temporal throughput fluctuations and can converge faster. Specifically, this paper explores three applications of BP. 1) We show how BP can be used to compute the throughputs of different links in the network given their access intensities, defined as the mean packet transmission time divided by the mean backoff countdown time. 2) We propose an inverse-BP algorithm to solve the reverse problem of how to set the access intensities of different links to meet their target throughputs. 3) We introduce a BP-adaptive CSMA algorithm to find the link access intensities that can achieve optimal system utility. The first two applications are NP-hard problems, and BP provides good approximations to them. The advantage of BP is that it can converge faster compared to prior algorithms like ACSMA, especially in CSMA networks with temporal throughput fluctuations. Furthermore, this paper goes beyond BP and considers a generalized version of it, GBP, to improve accuracy in networks with a loopy contention graph. The distributed implementation of GBP is nontrivial to construct. A contribution of this paper is to show that a “maximal clique” method of forming regions in GBP: 1) yields accurate results; and 2) is amenable to distributed implementation in CSMA networks, with messages passed between one-hop neighbors only. We show that both BP and GBP algorithms for all three applications can yield solutions within seconds in real operation.
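    Application 1) can be illustrated with generic pairwise BP on the standard hardcore model of a contention graph: link i is “on” with weight equal to its access intensity, and neighboring links may not transmit simultaneously. This is a textbook BP sketch under those modeling assumptions, not the paper's specific algorithms; on a tree contention graph (as below) BP is exact.

    ```python
    def bp_csma_throughput(adj, rho, iters=200):
        """BP marginals for the hardcore (contention-graph) model: b[i]
        approximates link i's airtime share given access intensity rho[i]."""
        msgs = {(i, j): (1.0, 1.0) for i in adj for j in adj[i]}
        for _ in range(iters):
            new = {}
            for (i, j) in msgs:
                p0 = p1 = 1.0
                for k in adj[i]:
                    if k != j:
                        p0 *= msgs[(k, i)][0]
                        p1 *= msgs[(k, i)][1]
                m0 = p0 + rho[i] * p1   # x_j = 0 allows x_i = 0 or 1
                m1 = p0                 # x_j = 1 forces x_i = 0
                z = m0 + m1
                new[(i, j)] = (m0 / z, m1 / z)
            msgs = new
        beliefs = {}
        for i in adj:
            off = on = 1.0
            for k in adj[i]:
                off *= msgs[(k, i)][0]
                on *= msgs[(k, i)][1]
            beliefs[i] = rho[i] * on / (off + rho[i] * on)
        return beliefs

    # Path contention graph a-b-c: middle link contends with both neighbors.
    adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
    thr = bp_csma_throughput(adj, {'a': 1.0, 'b': 1.0, 'c': 1.0})
    ```

    With unit intensities the independent sets of the path are {}, {a}, {b}, {c}, {a,c}, so the exact airtime shares are 2/5 for the outer links and 1/5 for the middle link, which BP recovers here.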

  • Channel-Aware Distributed Medium Access Control

    Page(s): 1290 - 1303

    In this paper, we solve a fundamental problem: how to use distributed random access to achieve the performance of centralized schedulers. We consider wireless networks with arbitrary topologies and spatial traffic distributions, where users can receive traffic from or send traffic to different users and different communication links may interfere with each other. The channels are assumed heterogeneous, and the random channel gains of different links may have different distributions. To resolve the network contention in a distributed way, each frame is divided into contention and transmission periods. The contention period is used to resolve conflicts, while the transmission period is used to send payload in collision-free scenarios. We design a multistage channel-aware Aloha scheme for the contention period to enable users with relatively better channel states to have higher probabilities of contention success while assuring fairness among all users. We show analytically that the proposed scheme completely resolves network contention and achieves throughput close to that of centralized schedulers. Furthermore, the proposed scheme is robust to any uncertainty in channel estimation. Simulation results demonstrate that it significantly improves network performance while maintaining fairness among different users. The proposed random access approach can be applied to different wireless networks, such as cellular, sensor, and mobile ad hoc networks, to improve quality of service. View full abstract»
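    The flavor of a multistage channel-aware contention period can be sketched as a sequence of thresholds: in each stage, surviving users signal only if their channel gain clears that stage's threshold, so users with better channels tend to win. This is an illustrative rule of our own, not the paper's exact scheme; the thresholds and names are assumptions.

    ```python
    import random

    def channel_aware_contention(gains, thresholds, rng=random.Random(7)):
        """Multistage channel-aware contention (sketch): users who stay silent
        in a stage while someone signals drop out; survivors continue."""
        alive = set(gains)
        for th in thresholds:
            signaling = {u for u in alive if gains[u] >= th}
            if signaling:
                alive = signaling
            # If nobody clears the threshold, all survivors continue.
        # Any remaining tie is broken uniformly at random (a plain Aloha slot).
        return rng.choice(sorted(alive))

    winner = channel_aware_contention(
        gains={'u1': 0.9, 'u2': 0.4, 'u3': 0.7},
        thresholds=[0.3, 0.6, 0.8])
    ```

    Here the user with the best channel survives every stage and wins; fairness in the paper's scheme comes from normalizing each user's gain by its own distribution, which this toy omits.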


Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign