
IEEE/ACM Transactions on Networking

Volume 16, Issue 6 • December 2008

  • [Front cover]

    Page(s): C1
    Freely Available from IEEE
  • IEEE/ACM Transactions on Networking publication information

    Page(s): C2
    Freely Available from IEEE
  • Internet Traffic Behavior Profiling for Network Security Monitoring

    Page(s): 1241 - 1252

    Recent spates of cyber-attacks and the frequent emergence of applications affecting Internet traffic dynamics have made it imperative to develop effective techniques that can extract, and make sense of, significant communication patterns from Internet traffic data for use in network operations and security management. In this paper, we present a general methodology for building comprehensive behavior profiles of Internet backbone traffic in terms of the communication patterns of end-hosts and services. Relying on data mining and entropy-based techniques, the methodology consists of significant cluster extraction, automatic behavior classification, and structural modeling for in-depth interpretive analyses. We validate the methodology using data sets from the core of the Internet.

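    As a flavor of the entropy-based side of such profiling, the sketch below computes the normalized entropy of each flow feature over a set of hypothetical flow records; a host whose destination-IP entropy is near 1 while its destination-port entropy is near 0 behaves like a scanner or a busy client of one service. The record format and feature set here are illustrative, not the paper's.

```python
import math
from collections import Counter

def normalized_entropy(values):
    """Shannon entropy of the empirical distribution, scaled to [0, 1]."""
    counts = Counter(values)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(len(counts)) if len(counts) > 1 else 0.0

# Hypothetical flow records: (src_ip, dst_ip, src_port, dst_port).
flows = [("10.0.0.1", f"192.168.1.{i}", 40000 + i, 80) for i in range(50)]

for name, idx in [("srcIP", 0), ("dstIP", 1), ("srcPort", 2), ("dstPort", 3)]:
    print(name, round(normalized_entropy([f[idx] for f in flows]), 2))
# srcIP and dstPort come out 0.0 (concentrated); dstIP and srcPort come out
# 1.0 (dispersed) -- the kind of feature pattern a behavior classifier keys on.
```
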
  • Large-Scale IP Traceback in High-Speed Internet: Practical Techniques and Information-Theoretic Foundation

    Page(s): 1253 - 1266

    Tracing attack packets to their sources, known as IP traceback, is an important step in countering distributed denial-of-service (DDoS) attacks. In this paper, we propose a novel packet-logging-based (i.e., hash-based) traceback scheme that requires an order of magnitude less processing and storage cost than the hash-based scheme proposed by Snoeren et al., and therefore scales to much higher link speeds (e.g., OC-768). The baseline idea of our approach is to sample and log a small percentage (e.g., 3.3%) of packets. The challenge of this low sampling rate is that much more sophisticated techniques need to be used for traceback. Our solution is to construct the attack tree using the correlation between the attack packets sampled by neighboring routers. A scheme using naive independent random sampling does not perform well due to the low correlation between the packets sampled by neighboring routers. We devise a sampling scheme that improves this correlation and the overall efficiency significantly. Another major contribution of this work is a novel information-theoretic framework for our traceback scheme that answers important questions on system parameter tuning and the fundamental tradeoff between the resources used for traceback and the traceback accuracy. Simulation results based on real-world network topologies (e.g., Skitter) match the results of the information-theoretic analysis very well. The simulation results also demonstrate that our traceback scheme can achieve high accuracy and scale very well to a large number of attackers (e.g., 5000+).

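    The sketch below contrasts independent random sampling with a consistent, hash-based rule under which neighboring routers log exactly the same packets, which is why correlated sampling makes it possible to match attack packets across hops. It is only a toy illustration of the motivation; the paper's actual scheme is more sophisticated (among other things, it must resist attackers who can predict the sampling rule).

```python
import hashlib
import random

P = 0.033  # sampling rate, matching the paper's 3.3% example
packets = [random.getrandbits(64) for _ in range(100000)]

# Independent random sampling at two neighboring routers: a packet logged
# upstream is logged downstream only with probability P.
a = {pkt for pkt in packets if random.random() < P}
b = {pkt for pkt in packets if random.random() < P}
print("independent overlap:", round(len(a & b) / len(a), 3))  # ~ P

# Consistent hash-based sampling: both routers apply the same deterministic
# rule, so the logged sets coincide.
def sampled(pkt):
    digest = hashlib.sha256(pkt.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") < P * 2**32

c = {pkt for pkt in packets if sampled(pkt)}
d = {pkt for pkt in packets if sampled(pkt)}
print("consistent overlap:", round(len(c & d) / len(c), 3))   # = 1.0
```
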
  • TVA: A DoS-Limiting Network Architecture

    Page(s): 1267 - 1280

    We motivate the capability approach to combating network denial-of-service (DoS) attacks, and evaluate the Traffic Validation Architecture (TVA), which builds on capabilities. With our approach, rather than sending packets to any destination at any time, senders must first obtain "permission to send" from the receiver, which provides the permission in the form of capabilities to those senders whose traffic it agrees to accept. The senders then include these capabilities in packets. This enables verification points distributed around the network to check that traffic has been authorized by the receiver and the path in between, and hence to cleanly discard unauthorized traffic. To evaluate this approach, and to understand the detailed operation of capabilities, we developed a network architecture called TVA. TVA addresses a wide range of possible attacks against communication between pairs of hosts, including spoofed packet floods, network and host bottlenecks, and router state exhaustion. We use simulations to show the effectiveness of TVA at limiting DoS floods, and an implementation on the Click router to evaluate the computational costs of TVA. We also discuss how TVA can be deployed incrementally in practice.

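    A minimal sketch of the capability idea: a verification point stamps a token binding source, destination, and an expiry, and later drops any packet whose token does not verify. TVA's real capabilities are richer (they are chained across the routers on the path and bound to a byte limit); the key, names, and token format below are invented for illustration.

```python
import hashlib
import hmac
import time

ROUTER_KEY = b"per-router secret"  # hypothetical; rotated periodically in practice

def issue_capability(src, dst, expiry):
    """Stamp a short authenticator onto a permitted (src, dst) flow."""
    msg = f"{src}|{dst}|{expiry}".encode()
    return hmac.new(ROUTER_KEY, msg, hashlib.sha256).digest()[:8]

def verify(src, dst, expiry, cap):
    """Routers recompute the stamp; forged or expired tokens fail."""
    if time.time() > expiry:
        return False
    return hmac.compare_digest(cap, issue_capability(src, dst, expiry))

expiry = int(time.time()) + 10
cap = issue_capability("1.2.3.4", "5.6.7.8", expiry)
print(verify("1.2.3.4", "5.6.7.8", expiry, cap))  # True: authorized sender
print(verify("6.6.6.6", "5.6.7.8", expiry, cap))  # False: spoofed source
```
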
  • One More Bit is Enough

    Page(s): 1281 - 1294

    Achieving efficient and fair bandwidth allocation while minimizing packet loss and bottleneck queue in high bandwidth-delay product networks has long been a daunting challenge. Existing end-to-end congestion control (e.g., TCP) and traditional congestion notification schemes (e.g., TCP+AQM/ECN) have significant limitations in achieving this goal. While the XCP protocol addresses this challenge, it requires multiple bits to encode the congestion-related information exchanged between routers and end-hosts. Unfortunately, there is no space in the IP header for these bits, and solving this problem involves a non-trivial and time-consuming standardization process. In this paper, we design and implement a simple, low-complexity protocol, called the variable-structure congestion control protocol (VCP), that leverages only the existing two ECN bits for network congestion feedback, and yet achieves comparable performance to XCP, i.e., high utilization, negligible packet loss rate, low persistent queue length, and reasonable fairness. On the downside, VCP converges significantly more slowly to a fair allocation than XCP. We evaluate the performance of VCP through extensive ns2 simulations over a wide range of network scenarios and find that it significantly outperforms many recently proposed TCP variants, such as HSTCP, FAST, and CUBIC. To gain insight into the behavior of VCP, we analyze a simplified fluid model and prove its global stability for the case of a single bottleneck shared by synchronous flows with identical round-trip times.

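    The sketch below illustrates the two-bit load-factor feedback loop the abstract describes: routers quantize their load factor into three codes, and end-hosts respond with multiplicative increase, additive increase, or multiplicative decrease. The thresholds and gains are representative values, not necessarily the paper's exact parameters.

```python
# Router side: quantize the measured load factor into a 2-bit code.
def encode_load(load_factor):
    if load_factor < 0.8:
        return 0b01          # low load
    if load_factor < 1.0:
        return 0b10          # high load
    return 0b11              # overload

# End-host side: apply MI / AI / MD depending on the echoed code.
def update_cwnd(cwnd, code, xi=0.0625, alpha=1.0, beta=0.875):
    if code == 0b01:
        return cwnd * (1 + xi)   # multiplicative increase
    if code == 0b10:
        return cwnd + alpha      # additive increase
    return cwnd * beta           # multiplicative decrease

cwnd = 10.0
for load in [0.3, 0.5, 0.9, 1.2, 0.9]:
    cwnd = update_cwnd(cwnd, encode_load(load))
    print(round(cwnd, 2))        # ramps up, probes additively, then backs off
```
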
  • Impact of Hot-Potato Routing Changes in IP Networks

    Page(s): 1295 - 1307

    Despite the architectural separation between intradomain and interdomain routing in the Internet, intradomain protocols do influence the path-selection process in the Border Gateway Protocol (BGP). When choosing between multiple equally-good BGP routes, a router selects the one with the closest egress point, based on the intradomain path cost. Under such hot-potato routing, an intradomain event can trigger BGP routing changes. To characterize the influence of hot-potato routing, we propose a technique for associating BGP routing changes with events visible in the intradomain protocol, and apply our algorithm to a tier-1 ISP backbone network. We show that (i) BGP updates can lag 60 seconds or more behind the intradomain event; (ii) the number of BGP path changes triggered by hot-potato routing has a nearly uniform distribution across destination prefixes; and (iii) the fraction of BGP messages triggered by intradomain changes varies significantly across time and router locations. We show that hot-potato routing changes lead to longer delays in forwarding-plane convergence, shifts in the flow of traffic to neighboring domains, extra externally-visible BGP update messages, and inaccuracies in Internet performance measurements.

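    The hot-potato tie-break itself is simple to state; the sketch below shows how an intradomain cost change alone can flip the chosen egress for a prefix, with no BGP event anywhere. The topology and costs are invented.

```python
# Among equally-good BGP routes, pick the egress point with the least
# IGP (intradomain) path cost -- the hot-potato rule.
def best_egress(candidates, igp_cost):
    return min(candidates, key=lambda egress: igp_cost[egress])

igp_cost = {"egress_A": 10, "egress_B": 12}
print(best_egress(["egress_A", "egress_B"], igp_cost))  # egress_A

# An internal link failure raises the cost toward egress_A; the tie-break
# flips, and a BGP routing change is triggered by a purely IGP event.
igp_cost["egress_A"] = 15
print(best_egress(["egress_A", "egress_B"], igp_cost))  # egress_B
```
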
  • Label Space Reduction in MPLS Networks: How Much Can A Single Stacked Label Do?

    Page(s): 1308 - 1320

    Most network operators have considered reducing LSR label spaces (the number of labels used) as a way of simplifying the management of the underlying virtual private networks (VPNs) and therefore reducing operational expenditure (OPEX). The IETF outlined the label merging feature in MPLS, allowing the configuration of multipoint-to-point (MP2P) connections, as a means of reducing the label space in LSRs. We found two main drawbacks in this label space reduction scheme: a) it must be applied separately to each set of LSPs with the same egress LSR, which decreases the options for better reductions, and b) LSRs close to the edge of the network experience a greater label space reduction than those close to the core. The latter implies that MP2P connections reduce the number of labels asymmetrically.

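    To make the merging idea concrete, the toy count below compares per-LSR label usage with and without MP2P merging on a few invented LSPs: after merging, an LSR needs only one label per egress rather than one per LSP, and the savings concentrate where LSPs toward the same egress converge.

```python
from collections import defaultdict

# Invented LSPs as (egress, path); a label is assigned at each hop past the ingress.
lsps = [("e", ["a", "b", "c", "e"]),
        ("e", ["d", "b", "c", "e"]),
        ("e", ["f", "c", "e"])]

unmerged = defaultdict(int)   # one label per LSP per hop
merged = defaultdict(set)     # one label per egress per hop after MP2P merging
for egress, path in lsps:
    for hop in path[1:]:
        unmerged[hop] += 1
        merged[hop].add(egress)

for lsr in sorted(unmerged):
    print(lsr, "unmerged:", unmerged[lsr], "merged:", len(merged[lsr]))
# LSRs b, c, and e carry 2-3 labels unmerged but a single label after merging.
```
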
  • Bandwidth Guaranteed Routing With Fast Restoration Against Link and Node Failures

    Page(s): 1321 - 1330

    An important feature of MPLS networks is local restoration, where detour paths are set up a priori. The detour is such that failed links or nodes can be bypassed locally from the first node that is upstream from the failures. This local bypass activation from the first detection point for failures permits much faster recovery than end-to-end path-based mechanisms, which require failure information to propagate to the network edges. However, local restoration of bandwidth-guaranteed connections can be expensive in the additional network capacity needed. Hence, it is important to minimize and share restoration capacity. The problem of routing with local restoration requirements has been studied previously in a dynamic on-line setting. However, there are no satisfactory algorithms for the problem of pre-provisioning fast-restorable connections when the aggregate traffic demands are known (as would be the case when a set of routers are to be interconnected over an optical network or for pre-provisioned ATM over MPLS overlays). The contribution of this paper is a fast combinatorial approximation algorithm for maximizing throughput when the routed traffic is required to be locally restorable. To the best of our knowledge, this is the first combinatorial algorithm for the problem with a performance guarantee. Our algorithm is a fully polynomial time approximation scheme (FPTAS), i.e., for any given ε > 0, it guarantees (1+ε)-factor closeness to the optimal solution, and runs in time polynomial in the network size and 1/ε. We compare the throughput of locally restorable routing with that of unprotected routing and 1+1-dedicated path protection on actual US/European ISP topologies taken from the Rocketfuel project.

  • Reliable Routings in Networks With Generalized Link Failure Events

    Page(s): 1331 - 1339

    We study routing problems in networks that require guaranteed reliability against multiple correlated link failures. We consider two different routing objectives: the first ensures "local reliability," i.e., the goal is to route so that each connection in the network is as reliable as possible. The second ensures "global reliability," i.e., the goal is to route so that as few connections as possible are affected by any possible failure. We exhibit a trade-off between the two objectives and resolve their complexity and approximability for several classes of networks. Furthermore, we propose approximation algorithms and heuristics. We perform experiments to evaluate the heuristics against optimal solutions that are obtained using an integer linear programming solver. We also investigate up to what degree the routing trade-offs occur in randomly generated instances.

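    Both objectives are easy to evaluate for a fixed routing, which is what the sketch below does on an invented instance: failure events are correlated sets of links that fail together, the "local" view counts the events that can hit each connection's path, and the "global" view looks at the single event that disrupts the most connections.

```python
# Failure events: sets of links that fail together (e.g., a shared conduit).
events = {"event1": {("a", "b"), ("c", "d")},
          "event2": {("b", "c")}}

# A fixed routing: each connection is a list of links.
connections = {"x": [("a", "b"), ("b", "c")],
               "y": [("c", "d")]}

def events_hitting(path):
    return [e for e, links in events.items() if any(l in links for l in path)]

for name, path in connections.items():          # local reliability view
    print(name, "is vulnerable to", events_hitting(path))

worst = max(events, key=lambda e: sum(          # global reliability view
    any(l in events[e] for l in path) for path in connections.values()))
print("worst single event:", worst)             # event1 disrupts both x and y
```
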
  • Buffered Cross-Bar Switches, Revisited: Design Steps, Proofs and Simulations Towards Optimal Rate and Minimum Buffer Memory

    Page(s): 1340 - 1351

    Regarding the packet-switching problem, we prove that the weighted max-min fair service rates comprise the unique Nash equilibrium point of a strategic game, specifically a throughput auction based on a "least-demanding first-served" principle. We prove that a buffered crossbar switch can converge to this equilibrium with no pre-computation or internal acceleration, with either randomized or deterministic schedulers (the latter with a minimum buffering of a single packet per crosspoint). Finally, we present various simulation results that corroborate and extend our analysis.

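    For reference, the weighted max-min fair rates that the switch is shown to converge to can be computed directly by progressive filling; the sketch below does this for a single shared resource with invented demands and weights. The paper's contribution is that a buffered crossbar reaches these rates through a distributed auction, which this sketch does not model.

```python
def weighted_max_min(capacity, demands, weights):
    """Progressive filling: grow all active rates in proportion to their
    weights, freezing a flow once its demand is met."""
    rates = {f: 0.0 for f in demands}
    active, remaining = set(demands), capacity
    while active and remaining > 1e-12:
        w = sum(weights[f] for f in active)
        # Largest uniform fill level before some active flow saturates.
        level = min(remaining / w,
                    min((demands[f] - rates[f]) / weights[f] for f in active))
        for f in list(active):
            rates[f] += level * weights[f]
            remaining -= level * weights[f]
            if rates[f] >= demands[f] - 1e-12:
                active.discard(f)
    return rates

print(weighted_max_min(10.0,
                       demands={"a": 2.0, "b": 8.0, "c": 8.0},
                       weights={"a": 1.0, "b": 1.0, "c": 2.0}))
# {'a': 2.0, 'b': 2.67, 'c': 5.33}: a's small demand is met in full, and the
# remainder is split between b and c in proportion to their weights.
```
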
  • Supporting Multiple Protection Strategies in Optical Networks

    Page(s): 1352 - 1365

    This paper develops a framework to support multiple protection strategies in optical networks, which is in general applicable to any connection-oriented network. The capacity available on a link for routing primary and backup connections is computed depending on the protection strategy. The paper also develops a model for computing service outage and failure recovery times for a connection where notifications of the failure location are broadcast in the network. The effectiveness of employing multiple protection strategies is established by studying the performance of three networks for traffic with four types of protection requirements.

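    One reason the available capacity depends on the protection strategy is backup sharing: backups whose primaries fail under different single failures can reuse the same reserved bandwidth. The toy computation below contrasts dedicated reservation with shared reservation on one link, using invented demands.

```python
from collections import defaultdict

# Backup demands routed over one link, keyed by the single failure that
# activates each of them (invented values).
activated_by = {"conn1": "failure_ab", "conn2": "failure_ab", "conn3": "failure_cd"}
bandwidth = {"conn1": 3, "conn2": 2, "conn3": 4}

# Dedicated (1+1-style) reservation: every backup gets its own bandwidth.
dedicated = sum(bandwidth.values())                      # 9 units

# Shared reservation: only one failure happens at a time, so reserving the
# per-failure maximum suffices.
per_failure = defaultdict(int)
for conn, failure in activated_by.items():
    per_failure[failure] += bandwidth[conn]
shared = max(per_failure.values())                       # 5 units

print("dedicated:", dedicated, "shared:", shared)
```
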
  • A Comprehensive Study on Backup-Bandwidth Reprovisioning After Network-State Updates in Survivable Telecom Mesh Networks

    Page(s): 1366 - 1377

    The capacity of a telecom fiber is very high and continues to increase due to advances in wavelength-division multiplexing (WDM) technology. Thus, a fiber-link failure may cause huge data (and revenue) loss. Reprovisioning (or re-optimization) of backup (or protection) bandwidth is an effective approach to improving network survivability while preventing unnecessary interruption of existing services. Most research to date focuses on applying backup-resource reprovisioning when a network failure occurs, or at particular intervals over a certain time period.

  • Lifetime Maximization for Connected Target Coverage in Wireless Sensor Networks

    Page(s): 1378 - 1391

    In this paper, we consider the connected target coverage (CTC) problem with the objective of maximizing the network lifetime by scheduling sensors into multiple sets, each of which can maintain both target coverage and connectivity among all the active sensors and the sink. We model the CTC problem as a maximum cover tree (MCT) problem and prove that the MCT problem is NP-complete. We determine an upper bound on the network lifetime for the MCT problem and then develop a (1+w)H(M̂) approximation algorithm to solve it, where w is an arbitrarily small number, H(M̂) = sum_{i=1..M̂} (1/i) is the M̂-th harmonic number, and M̂ is the maximum number of targets in the sensing area of any sensor. As the protocol cost of the approximation algorithm may be high in practice, we develop a faster heuristic algorithm based on the approximation algorithm, called the Communication Weighted Greedy Cover (CWGC) algorithm, and present a distributed implementation of it. We study the performance of the approximation algorithm and the CWGC algorithm by comparing them with the lifetime upper bound and with other basic algorithms that consider the coverage and connectivity problems independently. Simulation results show that the approximation algorithm and the CWGC algorithm perform much better than the others in terms of network lifetime, with an improvement of up to 45% over the best-known basic algorithm. The lifetime obtained by our algorithms is close to the upper bound. Compared with the approximation algorithm, the CWGC algorithm achieves a similar network lifetime with a lower protocol cost.

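    The H(M̂) factor in the guarantee is the classic harmonic-number bound of greedy covering. The sketch below shows only that greedy skeleton on an invented instance: repeatedly pick the sensor covering the most still-uncovered targets per unit cost. The actual CWGC algorithm additionally weights the cost by the communication energy needed to keep chosen sensors connected to the sink, which this sketch omits.

```python
def greedy_cover(targets, coverage, cost=lambda s: 1.0):
    """Pick sensors by best (newly covered targets / cost) ratio."""
    uncovered, chosen = set(targets), []
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered) / cost(s))
        if not coverage[best] & uncovered:
            break                      # remaining targets cannot be covered
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

coverage = {"s1": {"t1", "t2"}, "s2": {"t2", "t3"}, "s3": {"t3"}}
print(greedy_cover({"t1", "t2", "t3"}, coverage))   # ['s1', 's2']
```
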
  • Optimal and Distributed Protocols for Cross-Layer Design of Physical and Transport Layers in MANETs

    Page(s): 1392 - 1405

    We seek distributed protocols that attain the globally optimal allocation of link transmitter powers and source rates in a cross-layer design of a mobile ad hoc network. Although the underlying network utility maximization is nonconvex, convexity plays a major role in our development. We provide new convexity results surrounding the Shannon capacity formula, allowing us to abandon the suboptimal high-SIR approximations that have almost become entrenched in the literature. More broadly, these new results can be back-substituted into many existing problems for similar benefit. Three protocols are developed. The first is based on a convexification of the underlying problem, relying heavily on our new convexity results. We provide conditions under which it produces a globally optimal resource allocation, and we show how it may be distributed through message passing for both rate and power allocation. Our second protocol relaxes this requirement and involves a novel sequence of convex approximations, each exploiting existing TCP protocols for source rate allocation; message passing is only used for power control. Our convexity results again provide sufficient conditions for global optimality. Our last protocol, motivated by a desire for power control devoid of message passing, is a near-optimal scheme that makes use of noise measurements and enjoys a convergence rate that is orders of magnitude faster than existing methods.

  • Distributed Throughput Maximization in Wireless Mesh Networks via Pre-Partitioning

    Page(s): 1406 - 1419

    This paper considers the interaction between channel assignment and distributed scheduling in multi-channel multi-radio Wireless Mesh Networks (WMNs). Recently, a number of distributed scheduling algorithms for wireless networks have emerged. Due to their distributed operation, these algorithms can achieve only a fraction of the maximum possible throughput. As an alternative to increasing the throughput fraction by designing new algorithms, we present a novel approach that takes advantage of the inherent multi-radio capability of WMNs. We show that this capability can enable partitioning of the network into subnetworks in which simple distributed scheduling algorithms can achieve 100% throughput. The partitioning is based on the notion of Local Pooling. Using this notion, we characterize topologies in which 100% throughput can be achieved distributedly. These topologies are used to develop a number of centralized channel assignment algorithms based on a matroid intersection algorithm. These algorithms pre-partition a network in a manner that not only expands the capacity regions of the subnetworks but also allows distributed algorithms to achieve these capacity regions. We evaluate the performance of the algorithms via simulation and show that they significantly increase the distributedly achievable capacity region. We note that while the identified topologies are characterized in terms of general interference graphs, the partitioning algorithms are designed for networks with primary interference constraints.

  • Distributed Uplink Power Control for Optimal SIR Assignment in Cellular Data Networks

    Page(s): 1420 - 1433

    This paper solves the joint power control and SIR assignment problem through distributed algorithms in the uplink of multi-cellular wireless networks. The 1993 Foschini-Miljanic distributed power control can attain a given fixed and feasible SIR target. In data networks, however, the SIRs need to be jointly optimized with the transmit powers. In the vast research literature since the mid-1990s, solutions to this joint optimization problem are either distributed but suboptimal, or optimal but centralized. For convex formulations of this problem, we report a distributed and optimal algorithm. The main issue that has been the research bottleneck is the complicated, coupled constraint set, and we resolve it through a re-parametrization via the left Perron-Frobenius eigenvectors, followed by the development of a locally computable ascent direction. A key step is a new characterization of the feasible SIR region in terms of the loads on the base stations, together with an indication of the potential interference from mobile stations, which we term spillage. Based on this load-spillage characterization, we first develop a distributed algorithm that can achieve any Pareto-optimal SIR assignment, and then a distributed algorithm that picks out a particular Pareto-optimal SIR assignment and the associated powers through utility maximization. Extensions to power-constrained and interference-constrained cases are carried out. The algorithms are theoretically sound and practically implementable: we present convergence and optimality proofs as well as simulations using 3GPP network and path loss models.

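    The Foschini-Miljanic building block that the paper generalizes is a one-line update: each transmitter scales its power by (target SIR / measured SIR), and the powers converge whenever the target vector is feasible. A minimal sketch on an invented two-link channel:

```python
# Channel gains G[i][j]: gain from transmitter j to receiver i (invented).
G = [[1.0, 0.1],
     [0.2, 1.0]]
noise = 0.1
target = [2.0, 2.0]          # feasible SIR targets for this gain matrix

def sir(i, p):
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / (interference + noise)

p = [1.0, 1.0]
for _ in range(50):          # synchronous Foschini-Miljanic iteration
    p = [p[i] * target[i] / sir(i, p) for i in range(len(p))]

print([round(x, 4) for x in p])                       # converged powers
print([round(sir(i, p), 4) for i in range(len(p))])   # both hit the 2.0 target
```
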
  • A Framework for Mitigating Attacks Against Measurement-Based Adaptation Mechanisms in Unstructured Multicast Overlay Networks

    Page(s): 1434 - 1446

    Many multicast overlay networks maintain application-specific performance goals by dynamically adapting the overlay structure when the monitored performance becomes inadequate. This adaptation results in an unstructured overlay where no constraints are imposed on neighbor selection. Although such networks provide resilience to benign failures, they are susceptible to attacks conducted by adversaries that compromise overlay nodes. Previous defense solutions proposed to address attacks against overlay networks rely on strong organizational constraints and are not effective for unstructured overlays. In this work, we identify, demonstrate, and mitigate insider attacks against measurement-based adaptation mechanisms in unstructured multicast overlay networks. We propose techniques that decrease the number of incorrect adaptations by using outlier detection and limit the impact of malicious nodes by aggregating local information to derive a global reputation for each node. We demonstrate the attacks and mitigation techniques through real-life deployments of a mature overlay multicast system.

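    As a taste of the outlier-detection ingredient, the sketch below filters probe measurements with a median-absolute-deviation test before they reach the adaptation logic, so a node advertising implausibly good metrics cannot single-handedly trigger an adaptation. The test, threshold, and numbers are illustrative, not the paper's exact mechanism.

```python
import statistics

def filter_outliers(samples, k=3.0):
    """Keep samples within k median-absolute-deviations of the median."""
    med = statistics.median(samples)
    mad = statistics.median([abs(x - med) for x in samples]) or 1e-9
    return [x for x in samples if abs(x - med) / mad <= k]

# A malicious neighbor advertises absurdly good RTTs to attract peers.
rtts_ms = [42, 40, 45, 43, 1, 2, 44, 41]
print(filter_outliers(rtts_ms))   # [42, 40, 45, 43, 44, 41] -- the lies are dropped
```
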
  • Traffic Modeling and Proportional Partial Caching for Peer-to-Peer Systems

    Page(s): 1447 - 1460

    Peer-to-peer (P2P) file sharing systems generate a major portion of the Internet traffic, and this portion is expected to increase in the future. We explore the potential of deploying proxy caches in different autonomous systems (ASes) with the goal of reducing the cost incurred by Internet service providers and alleviating the load on the Internet backbone. We conduct an eight-month measurement study to analyze the P2P traffic characteristics that are relevant to caching, such as object popularity, popularity dynamics, and object size. Our study shows that the popularity of P2P objects can be modeled by a Mandelbrot-Zipf distribution, and that several workloads exist in P2P traffic. Guided by our findings, we develop a novel caching algorithm for P2P traffic that is based on object segmentation and on proportional partial admission and eviction of objects. Our trace-based simulations show that with a relatively small cache size, a byte hit rate of up to 35% can be achieved by our algorithm, which is close to the byte hit rate achieved by an off-line optimal algorithm with complete knowledge of future requests. Our results also show that our algorithm achieves a byte hit rate that is at least 40% more than, and at most triple, the byte hit rate of common Web caching algorithms. Furthermore, our algorithm is robust in the face of aborted downloads, which are a common case in P2P systems.

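    The Mandelbrot-Zipf distribution mentioned in the abstract is p(i) ∝ 1/(i + q)^α for rank i: the plateau factor q flattens the head relative to pure Zipf (q = 0), which is why caching only the few most popular P2P objects captures less traffic than it would on the Web. The parameter values below are invented; the paper fits them per trace.

```python
def mzipf(n, alpha, q):
    """Mandelbrot-Zipf probabilities over ranks 1..n: p(i) ~ 1/(i+q)^alpha."""
    weights = [1.0 / (rank + q) ** alpha for rank in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

zipf = mzipf(10000, alpha=0.8, q=0)     # pure Zipf
mz   = mzipf(10000, alpha=0.8, q=50)    # flattened head

head = lambda p: sum(p[:100])           # traffic share of the top-100 objects
print(f"top-100 share: Zipf {head(zipf):.2f}, Mandelbrot-Zipf {head(mz):.2f}")
```
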
  • A Comparative Analysis of Server Selection in Content Replication Networks

    Page(s): 1461 - 1474

    Server selection plays an essential role in content replication networks, such as peer-to-peer (P2P) networks and content delivery networks (CDNs). In this paper, we perform an analytical investigation of the strengths and weaknesses of existing server selection policies, based initially on an M/G/1 processor-sharing (PS) queueing-theoretic model. We develop a theoretical benchmark to evaluate the performance of two general server selection policies, referred to as EQ_DELAY and EQ_LOAD, which characterize a wide range of existing server selection algorithms. We find that EQ_LOAD achieves an average delay always higher than or equal to that of EQ_DELAY. A key theoretical result of this paper is that in an N-server system, the worst-case ratio between the average delay of EQ_DELAY or EQ_LOAD and the minimal average delay (obtained from the benchmark) is precisely N. We constructively show how this worst case can arise in highly heterogeneous systems. This result, when interpreted in the context of selfish routing, means that the price of anarchy in unbounded-delay networks depends on the topology and can potentially be very large. Our analytical findings are extended in asymptotic regimes to the G/G/1 first-come first-served and multi-class M/G/1-PS models and supported by simulations run for various arrival and service processes, scheduling disciplines, and workloads exhibiting temporal locality. These results indicate that our analysis is applicable to realistic scenarios.

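    A small numeric instance of the model: in M/G/1-PS, a server with capacity mu and arrival rate lam has mean delay 1/(mu - lam). With one fast and one slow server, equalizing delays (which here means sending everything to the fast server, since the slow one can never match its delay) is not the same as minimizing the average delay, which the brute-force sweep below finds. The values are invented.

```python
def avg_delay(lam1, lam, mu1, mu2):
    """Average delay when lam splits into lam1 and lam - lam1 (Little's law)."""
    lam2 = lam - lam1
    if not (0 <= lam1 < mu1 and 0 <= lam2 < mu2):
        return float("inf")
    return (lam1 / (mu1 - lam1) + lam2 / (mu2 - lam2)) / lam

mu1, mu2, lam = 10.0, 1.0, 8.0

# EQ_DELAY-style routing sends all traffic to the fast server here: its
# delay 1/(10-8) = 0.5 is below anything the slow server could offer.
print("EQ_DELAY :", avg_delay(lam, lam, mu1, mu2))            # 0.5

# Benchmark: sweep the split for the minimal achievable average delay.
best = min(avg_delay(lam * i / 10**5, lam, mu1, mu2) for i in range(10**5 + 1))
print("benchmark:", round(best, 4))                           # ~0.47
```
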
  • On Static and Dynamic Partitioning Behavior of Large-Scale P2P Networks

    Page(s): 1475 - 1488

    In this paper, we analyze the problem of network disconnection in the context of large-scale P2P networks and understand how both static and dynamic patterns of node failure affect the resilience of such graphs. We start by applying classical results from random graph theory to show that a large variety of deterministic and random P2P graphs almost surely (i.e., with probability 1-o(1)) remain connected under random failure if and only if they have no isolated nodes. This simple, yet powerful, result subsequently allows us to derive in closed-form the probability that a P2P network develops isolated nodes, and therefore partitions, under both types of node failure. We finish the paper by demonstrating that our models match simulations very well and that dynamic P2P systems are extremely resilient under node churn as long as the neighbor replacement delay is much smaller than the average user lifetime.

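    The "connected iff no isolated node" behavior is easy to see empirically. The Monte Carlo sketch below, on a G(n, p) random graph with random node failures (parameters invented), estimates both the probability of producing an isolated node and the probability of disconnection; the two come out nearly identical, because disconnection essentially always happens by isolating a node.

```python
import random
from collections import deque

def trial(n=150, p_edge=0.06, p_fail=0.3):
    alive = [v for v in range(n) if random.random() > p_fail]
    if not alive:
        return False, False
    adj = {v: set() for v in alive}
    for i, v in enumerate(alive):           # G(n, p) edges among survivors
        for u in alive[i + 1:]:
            if random.random() < p_edge:
                adj[v].add(u)
                adj[u].add(v)
    isolated = any(not adj[v] for v in alive)
    seen, queue = {alive[0]}, deque([alive[0]])   # BFS from one survivor
    while queue:
        for nb in adj[queue.popleft()] - seen:
            seen.add(nb)
            queue.append(nb)
    return isolated, len(seen) < len(alive)

runs = [trial() for _ in range(1000)]
print("P(isolated node):", sum(i for i, _ in runs) / len(runs))
print("P(disconnected) :", sum(d for _, d in runs) / len(runs))
```
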
  • 2008 Index IEEE/ACM Transactions on Networking Vol. 16

    Page(s): 1489 - 1500
    Freely Available from IEEE
  • IEEE/ACM Transactions on Networking society information

    Page(s): C3
    Freely Available from IEEE
  • IEEE/ACM Transactions on Networking Information for authors

    Page(s): C4
    Freely Available from IEEE

Aims & Scope

The high-level objective of the IEEE/ACM Transactions on Networking is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.


Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign