IEEE/ACM Transactions on Networking

Issue 3 • June 2014

  • [Front cover]

    Page(s): C1 - C4
  • IEEE/ACM Transactions on Networking publication information

    Page(s): C2
  • Effects of Internet Path Selection on Video-QoE: Analysis and Improvements

    Page(s): 689 - 702

    This paper presents large-scale Internet measurements to understand and improve the effects of Internet path selection on perceived video quality, or quality of experience (QoE). We systematically study a large number of Internet paths between popular video destinations and clients to create an empirical understanding of the location, persistence, and recurrence of failures. These failures are mapped to perceived video quality by reconstructing video clips and conducting surveys. We then investigate ways to recover from QoE degradation by choosing one-hop detour paths that preserve application-specific policies. We seek simple, scalable path selection strategies that do not require background path monitoring. Using five different measurement overlays spread across the globe, we show that a source can recover from over 75% of the degradations by attempting to restore QoE with any k randomly chosen nodes in an overlay, where k is bounded by O(ln N). We argue that our results are robust across datasets. Finally, we design and implement a prototype packet forwarding module called source initiated frame restoration (SIFR). We deployed SIFR on PlanetLab nodes and compared its performance to default Internet routing, showing that SIFR outperforms IP-path selection by providing higher on-screen perceptual quality.
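
    A minimal Python sketch of the random-k detour strategy summarized above: pick k = O(ln N) random overlay nodes and probe one-hop detours until one restores quality. The path_ok oracle, the node set, and the recovery ratio in the toy run are illustrative assumptions, not artifacts from the paper.

    ```python
    import math
    import random

    def recover_via_random_detours(overlay_nodes, path_ok, k=None, seed=None):
        """Probe one-hop detours through k random overlay nodes, as in the
        random-k strategy with k = O(ln N); no background path monitoring.
        path_ok(relay) is an assumed oracle: True if source->relay->dest
        avoids the degraded segment."""
        rng = random.Random(seed)
        if k is None:
            k = max(1, math.ceil(math.log(len(overlay_nodes))))
        for relay in rng.sample(overlay_nodes, k):
            if path_ok(relay):       # probe succeeded: switch to this detour
                return relay
        return None                  # fall back to the default IP path

    # Toy run: 75% of relays happen to avoid the failure (invented numbers).
    nodes = list(range(200))
    good = set(random.Random(0).sample(nodes, 150))
    print(recover_via_random_detours(nodes, lambda r: r in good, seed=1))
    ```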

  • Price Differentiation for Communication Networks

    Page(s): 703 - 716

    We study the optimal usage-based pricing problem in a resource-constrained network with one profit-maximizing service provider and multiple groups of surplus-maximizing users. Assuming the service provider knows the utility function of each user (i.e., complete information), we find that a complete price differentiation scheme can achieve a large revenue gain (e.g., 50%) compared to no price differentiation when the total network resource is comparably limited and the high-willingness-to-pay users are in the minority. However, complete price differentiation may lead to high implementation complexity. To trade off revenue against implementation complexity, we further study a partial price differentiation scheme and design a polynomial-time algorithm that computes the optimal partial differentiation prices. We also consider the incomplete information case, where the service provider does not know to which group each user belongs. We show that it is still possible to realize price differentiation in this scenario and provide the necessary and sufficient condition under which an incentive-compatible differentiation scheme can achieve the same revenue as under complete information.
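
    To make the complete-versus-no-differentiation comparison concrete, here is a hedged toy computation under assumed logarithmic utilities u_i(x) = theta_i * ln(1 + x); the group sizes, theta values, capacity, and resulting revenue gain are invented for illustration and differ from the paper's setting.

    ```python
    import math

    groups = [(10.0, 2), (1.0, 8)]   # (willingness-to-pay theta, #users), assumed
    CAP = 10.0                       # total network resource, assumed

    def demand(theta, p):
        # Surplus-maximizing demand for u(x) = theta*ln(1+x) at linear price p.
        return max(0.0, theta / p - 1.0)

    def revenue_single_price():
        # No differentiation: one price for everybody; search a price grid.
        best = 0.0
        for i in range(1, 2000):
            p = i / 100.0
            q = sum(n * demand(th, p) for th, n in groups)
            if q <= CAP:
                best = max(best, p * q)
        return best

    def revenue_complete_differentiation():
        # Complete information: the provider extracts each user's full surplus,
        # so it maximizes total utility subject to capacity (water-filling),
        # then charges each user's utility as the fee.
        lo, hi = 1e-6, 1e6
        for _ in range(200):         # bisect on the capacity multiplier
            lam = (lo + hi) / 2
            q = sum(n * max(0.0, th / lam - 1.0) for th, n in groups)
            lo, hi = (lam, hi) if q > CAP else (lo, lam)
        return sum(n * th * math.log(1 + max(0.0, th / lam - 1.0))
                   for th, n in groups)

    r1, r2 = revenue_single_price(), revenue_complete_differentiation()
    print(f"single: {r1:.2f}  differentiated: {r2:.2f}  gain: {r2/r1-1:.0%}")
    ```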

  • Optimal Distributed P2P Streaming Under Node Degree Bounds

    Page(s): 717 - 730

    We study the problem of maximizing the broadcast rate in peer-to-peer (P2P) systems under node degree bounds, i.e., where the number of neighbors a node can simultaneously connect to is upper-bounded. The problem is critical for supporting high-quality video streaming in P2P systems and is challenging due to its combinatorial nature. In this paper, we provide the first distributed solution that achieves a near-optimal broadcast rate under arbitrary node degree bounds and over an arbitrary overlay graph. It runs on individual nodes and uses only measurements from their one-hop neighbors, making the solution easy to implement and adaptable to peer churn and network dynamics. Our solution consists of two distributed algorithms that may be of independent interest: a network-coding-based broadcasting algorithm that optimizes the broadcast rate for a given topology, and a Markov-chain-guided topology-hopping algorithm that optimizes the topology itself. Our distributed broadcasting algorithm achieves the optimal broadcast rate over an arbitrary P2P topology, whereas previously proposed distributed algorithms obtain optimality only for complete graphs. We prove the optimality of our solution and its convergence to a neighborhood around the optimal equilibrium under noisy measurements or without time-scale separation assumptions. We demonstrate the effectiveness of our solution in simulations using uplink bandwidth statistics of Internet hosts.
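
    With network coding, the broadcast rate over a fixed topology equals the minimum over receivers of the source-to-receiver max-flow; the sketch below computes that benchmark centrally with Edmonds-Karp (the paper's contribution is achieving it distributedly, which this toy does not attempt). The example graph and capacities are assumptions.

    ```python
    from collections import deque, defaultdict

    def max_flow(cap, s, t):
        # Edmonds-Karp on a capacity dict-of-dicts (residual updated in place).
        flow = 0
        while True:
            parent = {s: None}
            q = deque([s])
            while q and t not in parent:
                u = q.popleft()
                for v, c in cap[u].items():
                    if c > 0 and v not in parent:
                        parent[v] = u
                        q.append(v)
            if t not in parent:
                return flow
            v, path = t, []
            while parent[v] is not None:        # reconstruct augmenting path
                path.append((parent[v], v)); v = parent[v]
            aug = min(cap[u][v] for u, v in path)
            for u, v in path:                   # augment and update residuals
                cap[u][v] -= aug
                cap[v][u] = cap[v].get(u, 0) + aug
            flow += aug

    def broadcast_rate(edges, source, receivers):
        # Network-coding broadcast capacity = min over receivers of max-flow.
        rates = []
        for t in receivers:
            cap = defaultdict(dict)
            for u, v, c in edges:               # fresh residual graph each time
                cap[u][v] = cap[u].get(v, 0) + c
            rates.append(max_flow(cap, source, t))
        return min(rates)

    # Toy uplink-limited overlay: (node, node, capacity) triples.
    E = [("s","a",4), ("s","b",2), ("a","b",2), ("a","t1",2),
         ("b","t1",3), ("b","t2",3), ("a","t2",1)]
    print(broadcast_rate(E, "s", ["t1", "t2"]))
    ```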

  • On the Payoff Mechanisms in Peer-Assisted Services With Multiple Content Providers: Rationality and Fairness

    Page(s): 731 - 744

    This paper studies an incentive structure for cooperation, and its stability, in peer-assisted services with multiple content providers, using a coalition game-theoretic approach. We first consider a generalized coalition structure consisting of multiple providers and many assisting peers, where peers assist providers to reduce the operational cost of content distribution. To distribute the profit from this cost reduction to the players (i.e., providers and peers), we establish a generalized formula for individual payoffs under a “Shapley-like” payoff mechanism. We show that the grand coalition is unstable even when the operational cost functions are concave, in sharp contrast to the recently studied single-provider case, where the grand coalition is stable. We also show that, irrespective of the stability of the grand coalition, there always exist coalition structures that do not converge to the grand coalition under a dynamic process among coalition structures. Our results establish that a provider tends not to cooperate with other providers in peer-assisted services and remains separated from them. Three facets of the noncooperative (selfish) providers are illustrated: 1) underpaid peers; 2) service monopoly; and 3) oscillatory coalition structure. Lastly, we propose a stable payoff mechanism that improves the fairness of profit sharing by regulating the selfishness of the players and grants the content providers a limited right of realistic bargaining. Our study opens many new questions, such as realistic and efficient incentive structures and the tradeoffs between fairness and competition among individual providers in peer-assisted services.
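
    A small illustration of the Shapley-style profit division discussed above: exact Shapley values computed by enumerating join orders, applied to an assumed single-provider worth function (the concave cost-saving function and player names are invented for illustration).

    ```python
    import math
    from itertools import permutations

    def shapley_values(players, v):
        # Average each player's marginal contribution over all join orders.
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            s, prev = set(), 0.0
            for p in order:
                s.add(p)
                cur = v(frozenset(s))
                phi[p] += cur - prev
                prev = cur
        n_fact = math.factorial(len(players))
        return {p: x / n_fact for p, x in phi.items()}

    # Toy single-provider game: peers cut the provider's distribution cost;
    # the concave worth function below is an illustrative assumption.
    def worth(S):
        peers = len(S) - (1 if "provider" in S else 0)
        return 6 * math.sqrt(peers) if "provider" in S else 0.0

    print(shapley_values(["provider", "peer1", "peer2", "peer3"], worth))
    ```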

  • Algorithms for Wireless Capacity

    Page(s): 745 - 755

    In this paper, we address two basic questions in wireless communication. First, how long does it take to schedule an arbitrary set of communication requests? Second, given a set of communication requests, how many of them can be scheduled concurrently? Our results are derived in the signal-to-interference-plus-noise ratio (SINR) interference model with geometric path loss and consist of efficient algorithms that find a constant approximation for the second problem and a logarithmic approximation for the first problem. In addition, we show that the interference model is robust to various factors that can influence the signal attenuation. More specifically, we prove that as long as influences on the signal attenuation are constant, they affect the capacity only by a constant factor.
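
    A rough sketch of the capacity question in the SINR model with geometric path loss g(d) = d^-alpha: a greedy one-shot heuristic that admits links while the set stays SINR-feasible. The constants (alpha, beta, noise, power) and the selection rule are assumptions; the paper's algorithms with proven approximation factors are more careful than this.

    ```python
    ALPHA, BETA, NOISE, P = 3.0, 2.0, 1e-9, 1.0   # illustrative constants

    def gain(a, b):
        # Geometric path loss between points a and b: d**-ALPHA.
        d = ((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5
        return d ** -ALPHA

    def sinr_feasible(links):
        # links: list of (sender_xy, receiver_xy) pairs on one channel.
        for i, (si, ri) in enumerate(links):
            interference = sum(P * gain(sj, ri)
                               for j, (sj, _) in enumerate(links) if j != i)
            if P * gain(si, ri) / (NOISE + interference) < BETA:
                return False
        return True

    def greedy_capacity(requests):
        # Admit links one by one, keeping the set SINR-feasible throughout;
        # strongest (shortest) links are tried first.
        chosen = []
        for link in sorted(requests, key=lambda l: gain(*l), reverse=True):
            if sinr_feasible(chosen + [link]):
                chosen.append(link)
        return chosen

    reqs = [((0,0),(1,0)), ((10,0),(11,0)), ((0.5,0),(1.5,0)), ((20,5),(21,5))]
    print(len(greedy_capacity(reqs)), "of", len(reqs), "links scheduled")
    ```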

  • Cooperation Versus Multiplexing: Multicast Scheduling Algorithms for OFDMA Relay Networks

    Page(s): 756 - 769

    With the next-generation cellular networks making a transition toward smaller cells, two-hop orthogonal frequency-division multiple access (OFDMA) relay networks have become a dominant, mandatory component in the 4G standards (WiMAX 802.16j, 3GPP LTE-Adv). While unicast flows have received reasonable attention in two-hop OFDMA relay networks, not much light has been shed on the design of efficient scheduling algorithms for multicast flows. Given the growing importance of multimedia broadcast and multicast services (MBMS) in 4G networks, the latter forms the focus of this paper. We show that while relay cooperation is critical for improving multicast performance, it must be carefully balanced with the ability to multiplex multicast sessions and hence maximize aggregate multicast flow. To this end, we highlight strategies that carefully group relays for cooperation to achieve this balance. We then solve the multicast scheduling problem under two OFDMA subchannelization models. We establish the NP-hardness of the scheduling problem even for the simpler model and provide efficient algorithms with approximation guarantees under both models. Evaluation of the proposed solutions reveals the efficiency of the scheduling algorithms as well as the significant benefits obtained from the multicasting strategy.

  • Exploiting Multichannel Diversity for Cooperative Multicast in Cognitive Radio Mesh Networks

    Page(s): 770 - 783

    Cognitive radio networks (CRNs) have emerged as a promising, yet challenging, solution for enhancing spectrum utilization. A well-known property of CRNs is the potential heterogeneity in channel availability among secondary users. Multicast throughput in CRNs may therefore suffer significant degradation, since a link-level broadcast of a frame may reach only the small subset of destinations able to receive on the same channel. This may necessitate multiple sequential transmissions of the same frame by the source on different channels to guarantee delivery to all receivers in the destination set. When the data generation rate is high, delivery delay grows due to these repeated transmissions. In this paper, we propose an assistance strategy to reduce the effect of channel heterogeneity on multicast throughput in cognitive radio wireless mesh networks (CR-WMNs). The strategy comprises two main activities: first, allowing multicast receivers to assist the source in delivering the data, and second, allowing the transmission of coded packets so that multicast receivers belonging to different multicast groups can decode and extract their data concurrently. Results show that the proposed assistance paradigm reduces multicast time and increases throughput significantly.

  • Topology Preserving Maps—Extracting Layout Maps of Wireless Sensor Networks From Virtual Coordinates

    Page(s): 784 - 797

    A method for obtaining topology-preserving maps (TPMs) from the virtual coordinates (VCs) of wireless sensor networks is presented. In a virtual coordinate system (VCS), a node is identified by a vector containing its distances, in hops, to a small subset of nodes called anchors. Layout information such as physical voids, shape, and even the relative physical positions of sensor nodes with respect to the x-y directions is absent from a VCS description. The proposed technique uses singular value decomposition (SVD) to isolate dominant radial information and to extract topological information from the VCS for networks deployed on 2-D/3-D surfaces and in 3-D volumes. The transformation required for TPM extraction can be generated using the coordinates of a subset of nodes, resulting in sensor-network-friendly implementation alternatives. TPMs of networks representing a variety of topologies are extracted. A topology preservation error metric E_TP, which accounts for both the number and degree of node flips, is defined and used to evaluate 2-D TPMs. The techniques extract TPMs with E_TP less than 2%. Topology coordinates provide an economical alternative to physical coordinates for many sensor networking algorithms.
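
    A compact sketch of the SVD recipe described above, assuming the virtual-coordinate matrix holds hop distances to anchors; treating the first principal component as the dominant radial information and keeping the next two as 2-D topology coordinates follows the abstract, while the grid, anchors, and Manhattan-distance hop model are illustrative assumptions.

    ```python
    import numpy as np

    def topology_preserving_map(vc, keep=(1, 2)):
        """Extract topology coordinates from a VC matrix (one row per node:
        hop distances to the anchors). The first singular vector mostly
        carries radial information, so components `keep` are returned as
        the topology coordinates (indices assumed for a 2-D map)."""
        u, s, _ = np.linalg.svd(vc, full_matrices=False)
        pcs = u * s                    # principal components of the VC matrix
        return pcs[:, list(keep)]      # drop the dominant radial component

    # Toy 20x20 grid with three corner anchors; hop counts approximated by
    # Manhattan distance for illustration.
    pts = [(x, y) for x in range(20) for y in range(20)]
    anchors = [(0, 0), (19, 0), (0, 19)]
    vc = np.array([[abs(x-a) + abs(y-b) for a, b in anchors] for x, y in pts])
    print(topology_preserving_map(vc).shape)   # (400, 2) topology coordinates
    ```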

  • Newton: Securing Virtual Coordinates by Enforcing Physical Laws

    Page(s): 798 - 811

    Virtual coordinate systems (VCSs) provide accurate estimates of latency between arbitrary hosts on a network while conducting only a small number of actual measurements and relying on node cooperation. While these systems have good accuracy under benign settings, they suffer a severe decrease in effectiveness when under attack by compromised nodes acting as insider attackers. Previous defenses mitigate such attacks by using machine learning techniques to differentiate good behavior (learned over time) from bad behavior. However, these defense schemes have been shown to be vulnerable to advanced attacks that make the schemes learn malicious behavior as good behavior. We present Newton, a decentralized VCS that is robust to a wide class of insider attacks. Newton uses an abstraction of a real-life physical system, similar to that of Vivaldi, but in addition uses safety invariants derived from Newton's laws of motion. As a result, Newton does not need to learn good behavior and can tolerate a significantly higher percentage of malicious nodes. We show through simulations and real-world experiments on the PlanetLab testbed that Newton is able to mitigate all known attacks against VCSs while providing better accuracy than Vivaldi, even in benign settings. Finally, we show how to design a VCS that better matches a real physical system, thus allowing for more intuitive and tighter system parameters that are even more difficult for attackers to exploit.
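
    A minimal sketch of a Vivaldi-style spring update guarded by a physics-flavored check: a per-update displacement cap standing in for the paper's Newtonian safety invariants, which are richer than this. The step size, cap, and the "malicious" RTT sample are assumptions.

    ```python
    import math

    DELTA, MAX_SPEED = 0.25, 50.0    # step size and displacement cap (assumed)

    def vivaldi_update(xi, xj, rtt):
        """One spring update of node i against node j: move along the spring
        axis by DELTA times the prediction error. The speed-limit check is a
        simplified stand-in for invariants from Newton's laws of motion."""
        dist = math.dist(xi, xj) or 1e-9
        err = rtt - dist                       # spring stretched (+) or compressed (-)
        unit = [(a - b) / dist for a, b in zip(xi, xj)]
        move = [DELTA * err * u for u in unit]
        speed = math.hypot(*move)
        if speed > MAX_SPEED:                  # implausible update: clamp it
            move = [m * MAX_SPEED / speed for m in move]
        return [a + m for a, m in zip(xi, move)]

    x = [0.0, 0.0]
    for rtt, peer in [(30, [10.0, 0.0]), (500, [10.0, 0.0])]:  # 2nd looks malicious
        x = vivaldi_update(x, peer, rtt)
        print(x)
    ```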

  • Bounds of Asymptotic Performance Limits of Social-Proximity Vehicular Networks

    Page(s): 812 - 825

    In this paper, we investigate the asymptotic performance limits (throughput capacity and average packet delay) of social-proximity vehicular networks. The considered network involves N vehicles moving and communicating on a scalable grid-like street layout according to the social-proximity model: each vehicle has a restricted mobility region around a specific social spot and transmits via a unicast flow to a destination vehicle associated with the same social spot. Moreover, the spatial distribution of each vehicle decays following a power law from the central social spot toward the border of its mobility region. With vehicles communicating using a variant of the two-hop relay scheme, the asymptotic bounds of throughput capacity and average packet delay are derived in terms of the number of social spots, the size of the mobility region, and the decay factor of the power-law distribution. By identifying these key impact factors mathematically, we find three possible regimes for the performance limits. Our results can be applied to predict the network performance of real-world scenarios and provide insight into the design and deployment of future vehicular networks.

  • Video Telephony for End-Consumers: Measurement Study of Google+, iChat, and Skype

    Page(s): 826 - 839

    Video telephony requires high-bandwidth and low-delay voice and video transmissions between geographically distributed users. It is challenging to deliver high-quality video telephony to end-consumers through the best-effort Internet. In this paper, we present our measurement study of three popular video telephony systems on the Internet: Google+, iChat, and Skype. Through a series of carefully designed active and passive measurements, we uncover important information about their key design choices and performance, including application architecture, video generation and adaptation schemes, loss recovery strategies, end-to-end voice and video delays, and resilience against random and bursty losses. The obtained insights can be used to guide the design of applications that call for high-bandwidth and low-delay data transmissions under a wide range of “best-effort” network conditions.

  • Degraded Service Provisioning in Mixed-Line-Rate WDM Backbone Networks Using Multipath Routing

    Page(s): 840 - 849

    Traffic in optical backbone networks is increasing and becoming more heterogeneous with respect to bandwidth and QoS requirements due to the popularity of high-bandwidth services (such as cloud computing, e-science, and telemedicine), which need to coexist with traditional services (HTTP, etc.). Mixed-line-rate (MLR) networks that support lightpaths of different rates (e.g., 10, 40, and 100 Gb/s) are being studied to better support these heterogeneous traffic demands. Here, we study the important topic of degraded services in MLR networks, where a service can accept some degradation (i.e., reduction) in bandwidth in case of a failure in exchange for a lower cost, a concept called partial protection. Network operators may wish to support degraded services to optimize network resources and reduce cost. We propose using multipath routing to support degraded services in MLR networks, a problem that has not been studied before and that is significantly more challenging than in single-line-rate (SLR) networks. We consider minimum-cost MLR network design (i.e., choosing which transponder rates to use at each node) while exploiting multipath routes to support degraded services. We propose a mixed-integer linear program (MILP) solution and a computationally efficient heuristic, and we consider two partial-protection models. Our illustrative numerical results show that partial protection achieves significant cost savings over full protection, which is highly beneficial for network operators. We also note that multipath routing in MLR networks exploits the volume discount of higher-line-rate transponders by cost-effectively grooming requests onto appropriate line rates to maximize transponder reuse relative to SLR.

  • Energy Efficiency in TDMA-Based Next-Generation Passive Optical Access Networks

    Page(s): 850 - 863

    Next-generation passive optical networks (PONs) have been considered in the past few years as a cost-effective broadband access technology. With ever-increasing concern about power consumption, energy efficiency has become an important issue in their operation. In this paper, we propose a novel sleep-time sizing and scheduling framework for the implementation of green bandwidth allocation (GBA) in TDMA-PONs. The proposed framework leverages the batch-mode transmission feature of GBA to minimize the overhead of frequent ONU on-off transitions. The optimal sleep-time sequence of each ONU is determined in every cycle without violating the maximum delay requirement. With multiple ONUs possibly accessing the shared medium simultaneously, a collision may occur. To address this problem, we propose a new sleep-time sizing mechanism, Sort-And-Shift (SAS), in which the ONUs are sorted according to their expected transmission start times and their sleep times are shifted to resolve any possible collision while ensuring maximum energy saving. Results show the effectiveness of the proposed framework and highlight the merits of our solutions.
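
    A toy version of the Sort-And-Shift idea as stated in the abstract: sort ONUs by expected transmission start time and shift overlapping grants later on the shared medium. Delay-bound enforcement and the actual sleep-time computation are omitted; the times and durations are invented.

    ```python
    def sort_and_shift(onus):
        """Toy SAS sizing step. Input: list of (onu_id, expected_start,
        duration) for one cycle. Output: non-overlapping (id, start, end)
        grants; each shifted start implies a correspondingly longer sleep."""
        schedule = []
        cursor = 0.0
        for onu_id, start, dur in sorted(onus, key=lambda o: o[1]):
            start = max(start, cursor)     # shift to resolve any collision
            schedule.append((onu_id, start, start + dur))
            cursor = start + dur           # shared medium: no overlap allowed
        return schedule

    # Three ONUs whose wake-up estimates overlap (times in ms, illustrative).
    print(sort_and_shift([("onu1", 0.0, 2.0),
                          ("onu2", 1.0, 2.0),
                          ("onu3", 1.5, 1.0)]))
    ```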

  • Content Caching and Scheduling in Wireless Networks With Elastic and Inelastic Traffic

    Page(s): 864 - 874

    The rapid growth of wireless content access implies the need for content placement and scheduling at wireless base stations. We study a system under which users are divided into clusters based on their channel conditions, and their requests are represented by different queues at logical front ends. Requests might be elastic (implying no hard delay constraint) or inelastic (requiring that a delay target be met). Correspondingly, we have request queues that indicate the number of elastic requests, and deficit queues that indicate the deficit in inelastic service. Caches are of finite size and can be refreshed periodically from a media vault. We consider two cost models that correspond to inelastic requests for streaming stored content and real-time streaming of events, respectively. We design provably optimal policies that stabilize the request queues (hence ensuring finite delays) and reduce average deficit to zero [hence ensuring that the quality-of-service (QoS) target is met] at small cost. We illustrate our approach through simulations.
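
    A hedged sketch of one scheduling slot with separate elastic request queues and inelastic deficit queues: a simple max-weight rule over cached content, standing in for the paper's provably optimal policies (cache refresh from the media vault and per-cluster channels are not modeled). All queue contents are illustrative.

    ```python
    def schedule_slot(request_q, deficit_q, cache):
        """Serve the cached content with the largest combined backlog:
        elastic request queue length plus inelastic service deficit."""
        servable = list(cache)
        if not servable:
            return None
        return max(servable,
                   key=lambda c: request_q.get(c, 0) + deficit_q.get(c, 0))

    request_q = {"a": 3, "b": 0, "c": 5}   # elastic backlogs (invented)
    deficit_q = {"a": 0, "b": 4, "c": 1}   # inelastic deficits (invented)
    cache = {"a", "b"}                     # finite cache: "c" is not stored
    served = schedule_slot(request_q, deficit_q, cache)
    if served:                             # serving drains one unit of backlog
        if request_q.get(served, 0) > 0:
            request_q[served] -= 1
        else:
            deficit_q[served] = max(0, deficit_q.get(served, 0) - 1)
    print("served:", served, request_q, deficit_q)
    ```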

  • On the Transient Behavior of CHOKe

    Page(s): 875 - 888

    CHOKe is a simple and stateless active queue management (AQM) scheme. Apart from low operational overhead, a highly attractive property of CHOKe is that it can protect responsive TCP flows from unresponsive UDP flows. In particular, previous work has proven that CHOKe is able to bound both the bandwidth share and the buffer share of (a possibly aggregate) UDP traffic flow on a link. However, these studies consider, and pertain only to, a steady state where the queue reaches equilibrium in the presence of many (long-lived) TCP flows and an unresponsive UDP flow of fixed arrival rate. If the steady-state conditions are perturbed, particularly when the UDP traffic rate changes over time, it is unclear whether the protection property of CHOKe still holds. Indeed, one can show, for example, that when the UDP rate suddenly drops to 0 (i.e., the flow stops), the unresponsive flow may assume close to full utilization on sub-round-trip-time (sub-RTT) scales, potentially starving out the TCP flows. To explain this apparent discrepancy, this paper investigates CHOKe queue properties in a transient regime, the period of transition between two steady states of the queue, initiated when the rate of the unresponsive flow changes. Explicit expressions that characterize flow throughputs in transient regimes are derived. These results provide additional understanding of CHOKe and give some explanation of its intriguing behavior in the transient regime.
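
    The CHOKe mechanism itself is easy to state in code: on arrival during congestion, compare the packet against one randomly drawn queued packet and drop both on a flow match. The sketch below is a toy (a fixed threshold instead of RED's probabilistic drop region, and no dequeue process) meant only to show why UDP's buffer share stays bounded.

    ```python
    import random
    from dataclasses import dataclass

    @dataclass
    class Packet:
        flow: str

    def choke_enqueue(queue, pkt, qmin=50):
        """CHOKe admission sketch: when congested, draw one random queued
        packet; if it belongs to the same flow as the arrival, drop both."""
        if len(queue) > qmin:
            victim = random.randrange(len(queue))
            if queue[victim].flow == pkt.flow:
                del queue[victim]        # drop the matched queued packet...
                return False             # ...and the arriving packet
        queue.append(pkt)
        return True

    q = []
    random.seed(3)
    for i in range(2000):                # 70% UDP arrivals vs 8 TCP flows
        flow = "udp" if random.random() < 0.7 else f"tcp{i % 8}"
        choke_enqueue(q, Packet(flow))
    print(sum(p.flow == "udp" for p in q), "UDP packets of", len(q), "queued")
    ```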

  • Channel-Hopping-Based Communication Rendezvous in Cognitive Radio Networks

    Page(s): 889 - 902

    Cognitive radio (CR) networks have an ample but dynamic amount of spectrum for communications. Communication rendezvous in CR networks is the process of establishing a control channel between radios before they can communicate. Designing a communication rendezvous protocol that can take advantage of all the available spectrum at the same time is of great importance, because doing so alleviates the load on control channels and thus reduces the probability of collisions. In this paper, we present ETCH, a family of efficient channel-hopping-based MAC-layer protocols for communication rendezvous in CR networks. Compared to existing solutions, ETCH fully exploits spectrum diversity in communication rendezvous by allowing all the rendezvous channels to be utilized at the same time. We propose two protocols: SYNC-ETCH, a synchronous protocol that assumes CR nodes can synchronize their channel-hopping processes, and ASYNC-ETCH, an asynchronous protocol that does not rely on global clock synchronization. Our theoretical analysis and ns-2-based evaluation show that ETCH achieves better time-to-rendezvous and throughput than existing work.
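
    A tiny simulator for time-to-rendezvous between two unsynchronized hopping nodes; the random sequences here are stand-ins, not ETCH's constructions, and indeed may fail to rendezvous for some clock offsets, which is precisely why ETCH designs its sequences the way it does.

    ```python
    import random

    def time_to_rendezvous(seq_a, seq_b, offset):
        """Slots until two hopping nodes land on a common channel, with node
        B started `offset` slots later (clocks unsynchronized, as in the
        asynchronous setting). Returns None if they never meet."""
        n = len(seq_a)
        for t in range(10 * n * n):
            if seq_a[t % n] == seq_b[(t + offset) % n]:
                return t
        return None

    channels = list(range(5))
    rng = random.Random(7)
    a = [rng.choice(channels) for _ in range(25)]   # random stand-in sequences
    b = [rng.choice(channels) for _ in range(25)]
    print([time_to_rendezvous(a, b, off) for off in range(5)])
    ```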

  • Exact and Heuristic Algorithms for Data-Gathering Cluster-Based Wireless Sensor Network Design Problem

    Page(s): 903 - 916

    Data-gathering wireless sensor networks (WSNs) are operated unattended over long time horizons to collect data in applications such as climate monitoring and a variety of ecological studies. Typically, sensors have limited energy (e.g., an on-board battery) and are subject to the elements in the terrain. In-network operations, which largely involve periodically changing network flow decisions to prolong the network lifetime, are managed remotely, and the collected data are retrieved by a user via the Internet. In this paper, we study an integrated topology control and routing problem in cluster-based WSNs. To prolong network lifetime via efficient use of the limited energy at the sensors, we adopt a hierarchical network structure with multiple sinks at which the data collected by the sensors are gathered through the clusterheads (CHs). We formulate a mixed-integer linear programming (MILP) model to optimally determine the sink and CH locations as well as the data flow in the network. Our model effectively utilizes both the position and the energy level of the sensors when selecting CHs, and it prevents the highest-energy sensors, or those best positioned with respect to the sinks, from being selected as CHs repeatedly in successive periods. To solve the MILP model, we develop an effective Benders decomposition (BD) approach that incorporates an upper-bound heuristic algorithm, strengthened cuts, and an ε-optimal framework for accelerated convergence. Computational evidence demonstrates the efficiency of the BD approach and the heuristic in terms of solution quality and time.

  • Distributed Server Migration for Scalable Internet Service Deployment

    Page(s): 917 - 930

    The effectiveness of service provisioning in large-scale networks is highly dependent on the number and location of service facilities deployed at various hosts. The classical, centralized approach to determining the latter would amount to formulating and solving the uncapacitated k-median (UKM) problem (if the requested number of facilities is a fixed k) or the uncapacitated facility location (UFL) problem (if the number of facilities is also to be optimized). Clearly, such centralized approaches require knowledge of global topological and demand information, and thus do not scale and are not practical for large networks. The key question posed and answered in this paper is the following: “How can we determine, in a distributed and scalable manner, the number and location of service facilities?” We develop a scalable and distributed approach that answers this question through iterative reoptimization of the location and number of facilities within network neighborhoods. We propose an innovative approach that migrates, adds, or removes servers within limited-scope network neighborhoods by utilizing only local information about the topology and demand. We show that even with limited information about the network topology and demand, within one or two hops, our distributed approach achieves performance comparable to that of optimal, centralized approaches requiring full topology and demand information, under various synthetic and real Internet topologies and workloads. We also show that it is responsive to volatile demand. Our approach leverages recent advances in virtualization technology toward an automated placement of services on the Internet.
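
    A minimal sketch of one neighborhood-limited reoptimization step: migrate a server within its h-hop neighborhood to the node minimizing demand-weighted hop distance (a local 1-median move). Server addition/removal and multi-facility interactions from the paper are omitted; the path graph and demands are assumptions, and demand is summarized by a global dict purely for brevity.

    ```python
    from collections import deque

    def bfs_dist(adj, src):
        # Hop distances from src over an adjacency-list graph.
        d, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d

    def migrate_step(adj, demand, server, hops=1):
        # Candidate locations: nodes within `hops` of the current server.
        near = bfs_dist(adj, server)
        candidates = [v for v, d in near.items() if d <= hops]
        def cost(v):   # demand-weighted hop distance if the server sat at v
            dv = bfs_dist(adj, v)
            return sum(w * dv.get(u, 0) for u, w in demand.items())
        return min(candidates, key=cost)

    # Toy path graph a-b-c-d-e with demand concentrated near 'e'.
    adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
           "d": ["c", "e"], "e": ["d"]}
    demand = {"a": 1, "e": 5}
    s = "a"
    for _ in range(4):                 # server walks toward the heavy demand
        s = migrate_step(adj, demand, s)
        print(s)
    ```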

  • Behavior Analysis of Internet Traffic via Bipartite Graphs and One-Mode Projections

    Page(s): 931 - 942

    As Internet traffic continues to grow in size and complexity, it has become an increasingly challenging task to understand the behavior patterns of end-hosts and network applications. This paper presents a novel approach based on behavioral graph analysis to study the behavior similarity of Internet end-hosts. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of these bipartite graphs for discovering the social-behavior similarity of end-hosts. By applying simple and efficient clustering algorithms to the similarity matrices and clustering coefficients of the one-mode projection graphs, we perform network-aware clustering of end-hosts in the same network prefixes into distinct end-host behavior clusters and discover inherent clustered groups of Internet applications. Our experimental results based on real datasets show that end-host and application behavior clusters exhibit distinct traffic characteristics that provide improved interpretations of Internet traffic. Finally, we demonstrate the practical benefits of exploring behavior similarity in profiling network behaviors, discovering emerging network applications, and detecting anomalous traffic patterns.
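
    A small example of the bipartite-graph/one-mode-projection step: project host-to-service traffic onto the host side with Jaccard similarity of destination sets (one simple similarity choice; the paper also exploits clustering coefficients). The traffic dictionary is invented.

    ```python
    from itertools import combinations
    from collections import defaultdict

    def one_mode_projection(flows):
        """Project a host<->service bipartite graph onto the host side.
        `flows` maps each source host to the set of destinations it talked
        to; edge weight = Jaccard similarity of the destination sets."""
        sim = defaultdict(dict)
        for a, b in combinations(list(flows), 2):
            inter = len(flows[a] & flows[b])
            union = len(flows[a] | flows[b])
            if inter:                       # connect only hosts that overlap
                sim[a][b] = sim[b][a] = inter / union
        return sim

    traffic = {
        "h1": {"dns:53", "web:80", "web:443"},
        "h2": {"web:80", "web:443"},
        "h3": {"smtp:25", "dns:53"},
    }
    print(dict(one_mode_projection(traffic)))
    ```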

  • Characterizing Web Page Complexity and Its Impact

    Page(s): 943 - 956

    Over the years, the Web has evolved from simple text content served from a single server to a complex ecosystem with different types of content from servers spread across several administrative domains. There is anecdotal evidence of users being frustrated with high page load times. Because page load times are known to directly impact user satisfaction, providers would like to understand if and how the complexity of their Web sites affects the user experience. While there is an extensive literature on measuring Web graphs, Web site popularity, and the nature of Web traffic, there has been little work on understanding how complex individual Web sites are and how this complexity impacts the clients' experience. This paper is a first step toward addressing this gap. To this end, we identify a set of metrics to characterize the complexity of Web sites both at the content level (e.g., number and size of images) and at the service level (e.g., number of servers/origins). We find that the distributions of these metrics are largely independent of a Web site's popularity rank. However, some categories (e.g., News) are more complex than others. More than 60% of Web sites have content from at least five non-origin sources, and these contribute more than 35% of the bytes downloaded. In addition, we analyze which metrics are most critical for predicting page render and load times and find that the number of objects requested is the most important factor. With respect to variability in load times, however, the number of servers is the best indicator.
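
    As a sketch of the kind of per-page complexity metrics described above, the function below derives content-level and service-level counts from a list of (url, bytes, content_type) request records, e.g. as exported from a HAR file; the record format and the example page are assumptions, not the paper's dataset.

    ```python
    from urllib.parse import urlparse

    def page_complexity(requests, origin):
        """Content- and service-level complexity metrics for one page load."""
        servers = {urlparse(u).netloc for u, _, _ in requests}
        non_origin = {s for s in servers if not s.endswith(origin)}
        non_origin_bytes = sum(b for u, b, _ in requests
                               if not urlparse(u).netloc.endswith(origin))
        total = sum(b for _, b, _ in requests) or 1
        return {
            "objects": len(requests),
            "servers": len(servers),
            "non_origin_servers": len(non_origin),
            "non_origin_byte_share": non_origin_bytes / total,
            "images": sum(t.startswith("image/") for *_, t in requests),
        }

    reqs = [("http://news.example.com/", 30_000, "text/html"),
            ("http://img.example.com/logo.png", 15_000, "image/png"),
            ("http://cdn.ads.net/ad.js", 40_000, "application/javascript")]
    print(page_complexity(reqs, "example.com"))
    ```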

  • A Hybrid Hardware Architecture for High-Speed IP Lookups and Fast Route Updates

    Page(s): 957 - 969

    As network link rates are being pushed beyond 40 Gb/s, IP lookup in high-speed routers is moving to hardware. The ternary content addressable memory (TCAM)-based IP lookup engine and the static random access memory (SRAM)-based IP lookup pipeline are the two most common ways to achieve high throughput. However, route updates in both engines degrade lookup performance and may lead to packet drops. Moreover, there is growing interest in virtual IP routers, where updates are more frequent. Finding solutions that achieve both fast lookup and low update overhead becomes critical. In this paper, we propose a hybrid IP lookup architecture to address this challenge. The architecture is based on an efficient trie partitioning scheme that divides the forwarding information base (FIB) into two prefix sets: a large disjoint leaf prefix set mapped into an external TCAM-based lookup engine and a small overlapping prefix set mapped into an on-chip SRAM-based lookup pipeline. Critical optimizations are developed on both IP lookup engines to reduce the update overhead. We show how to extend the proposed hybrid architecture to support virtual routers. Our implementation shows a throughput of 250 million lookups per second (equivalent to 128 Gb/s with 64-B packets). The update overhead is significantly lower than that of previous work, the memory consumption is reasonable, and the utilization ratio of most external TCAMs is up to 100%.
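
    The trie-partitioning idea is easy to illustrate: leaf prefixes (those no other prefix in the table extends) form a disjoint set suitable for the TCAM engine, while the remaining overlapping prefixes go to the SRAM pipeline. The quadratic scan below is for clarity only; a real FIB would use a trie walk.

    ```python
    def split_fib(prefixes):
        """Partition a FIB (prefixes as binary strings such as '1101') into
        the two sets used by the hybrid architecture: disjoint leaf prefixes
        vs. overlapping internal prefixes."""
        pset = set(prefixes)
        def has_descendant(p):
            return any(q != p and q.startswith(p) for q in pset)
        leaves = {p for p in pset if not has_descendant(p)}
        return leaves, pset - leaves

    fib = {"0", "01", "0110", "1", "10", "101"}
    leaf, internal = split_fib(fib)
    print("TCAM (disjoint leaves):", sorted(leaf))
    print("SRAM pipeline (overlapping):", sorted(internal))
    ```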

  • Discount Counting for Fast Flow Statistics on Flow Size and Flow Volume

    Page(s): 970 - 981

    A complete flow statistics report should include both flow size (the number of packets in a flow) and flow volume (the number of bytes in a flow). Although previous studies have contributed much to the flow size counting problem, supporting flow volume statistics remains a great challenge due to the demanding requirements on both memory size and memory bandwidth in the monitoring device. In this paper, we propose a DIScount COunting (DISCO) method designed for counting both flow size and flow volume. For each incoming packet of length l, DISCO increases the corresponding counter assigned to the flow by an increment that is less than l. With an elaborate design of the counter update rule and the inverse estimation, DISCO saves memory consumption while providing an accurate unbiased estimator. The method is evaluated thoroughly under theoretical analysis and in simulations with synthetic and real traces. The results demonstrate that DISCO is more accurate than related work given the same counter sizes. DISCO is also implemented on the Intel IXP2850 network processor for a performance test. Using only one microengine (ME) in the IXP2850, the throughput can reach 11.1 Gb/s under a traditional traffic pattern. The throughput increases to 39 Gb/s when four MEs are employed.
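
    A hedged toy in the same spirit as DISCO's discounted updates: a Morris-style approximate counter generalized to byte increments, so each packet of length l bumps a small counter with probability chosen to keep the inverse estimator unbiased. This is NOT the paper's exact update rule; the parameter A and the traffic are invented.

    ```python
    import random

    A = 0.1                               # accuracy/size tradeoff knob (assumed)

    def estimate(c):
        # Inverse estimator: the byte count a small counter value c stands for.
        return ((1 + A) ** c - 1) / A

    def add_bytes(c, l, rng=random):
        """Take whole counter steps while l covers the gap to the next
        representable value, then bump once more with probability l/gap,
        keeping the expected estimate increase equal to l."""
        while True:
            gap = estimate(c + 1) - estimate(c)
            if l >= gap:
                c, l = c + 1, l - gap     # deterministic, value-preserving step
            else:
                return c + (1 if rng.random() < l / gap else 0)

    rng = random.Random(42)
    c = true_bytes = 0
    for _ in range(10000):
        l = rng.randint(64, 1500)         # packet lengths in bytes
        true_bytes += l
        c = add_bytes(c, l, rng)
    print("true:", true_bytes, "estimated:", round(estimate(c)), "counter:", c)
    ```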

  • High-Throughput and Memory-Efficient Multimatch Packet Classification Based on Distributed and Pipelined Hash Tables

    Page(s): 982 - 995

    The emergence of new network applications, such as network intrusion detection systems and packet-level accounting, requires packet classification to report all matched rules instead of only the best matched rule. Although several schemes have been proposed recently to address the multimatch packet classification problem, most of them require either huge memory or expensive ternary content addressable memory (TCAM) to store the intermediate data structure, or they suffer steep performance degradation under certain types of classifiers. In this paper, we decompose the operation of multimatch packet classification from a complicated multidimensional search into several single-dimensional searches, and present an asynchronous pipeline architecture based on a signature tree structure to combine the intermediate results returned from the single-dimensional searches. By spreading the edges of the signature tree across multiple hash tables at different stages, the pipeline can achieve high throughput via interstage parallel access to the hash tables. To further exploit intrastage parallelism, two edge-grouping algorithms are designed to evenly divide the edges associated with each stage into multiple work-conserving hash tables. To avoid collisions in hash table lookups, a hybrid perfect hash table construction scheme is proposed. Extensive simulation using realistic classifiers and traffic traces shows that the proposed pipeline architecture outperforms the HyperCuts and B2PC schemes in classification speed by at least one order of magnitude while having a similar storage requirement. In particular, with different types of classifiers of 4K rules, the proposed pipeline architecture achieves a throughput between 26.8 and 93.1 Gb/s using perfect hash tables.
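
    The decomposition step can be illustrated with a toy: index each field into a hash table mapping values to rule-id sets, then intersect the per-field candidate sets to report all matches. Real classifiers need prefix/range matching, and the paper combines partial results with a pipelined signature tree rather than the plain set intersection used here.

    ```python
    def build_single_field_tables(rules):
        """Index each field value -> set of rule ids (exact-match toy;
        '*' denotes a wildcard field)."""
        n_fields = len(next(iter(rules.values())))
        tables = [dict() for _ in range(n_fields)]
        for rid, fields in rules.items():
            for i, v in enumerate(fields):
                tables[i].setdefault(v, set()).add(rid)
        return tables

    def multimatch(tables, packet):
        # Intersect per-field candidate sets to get ALL matching rules.
        result = None
        for table, v in zip(tables, packet):
            hits = table.get(v, set()) | table.get("*", set())
            result = hits if result is None else result & hits
        return result or set()

    rules = {1: ("tcp", "10.0.0.1", "*"), 2: ("tcp", "*", "80"),
             3: ("udp", "*", "53"), 4: ("tcp", "10.0.0.1", "80")}
    tables = build_single_field_tables(rules)
    print(sorted(multimatch(tables, ("tcp", "10.0.0.1", "80"))))  # [1, 2, 4]
    ```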

Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.

Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign