IEEE/ACM Transactions on Networking

Issue 1 • Date Feb. 2010

Displaying Results 1 - 25 of 32
  • [Front cover]

    Publication Year: 2010 , Page(s): C1 - C4
    PDF (427 KB)
    Freely Available from IEEE
  • IEEE/ACM Transactions on Networking publication information

    Publication Year: 2010 , Page(s): C2
    PDF (38 KB)
    Freely Available from IEEE
  • POPI: A User-Level Tool for Inferring Router Packet Forwarding Priority

    Publication Year: 2010 , Page(s): 1 - 14
    Cited by:  Papers (4)
    PDF (759 KB) | HTML

    Packet forwarding prioritization (PFP) in routers is one of the mechanisms commonly available to network operators. PFP can have a significant impact on the accuracy of network measurements, the performance of applications, and the effectiveness of network troubleshooting procedures. Despite these potential impacts, no information on PFP settings is readily available to end users. In this paper, we present an end-to-end approach for PFP inference and its associated tool, POPI. This is the first attempt to infer router packet forwarding priority through end-to-end measurement. POPI enables users to discover such network policies through measurements of the packet losses of different packet types. We evaluated our approach via statistical analysis, simulation, and wide-area experimentation on PlanetLab. We employed POPI to analyze 156 paths among 162 PlanetLab sites. POPI flagged 15 paths with multiple priorities, 13 of which were further validated through hop-by-hop loss rate measurements. In addition, we surveyed all related network operators and received responses for about half of them, all confirming our inferences. We also compared POPI with inference mechanisms based on other metrics, such as packet reordering [called out-of-order (OOO)]. OOO is unable to find many priority paths, such as those implemented via traffic policing. On the other hand, interestingly, we found that it can detect mechanisms that induce delay differences among packet types, such as slow processing paths in the router and port-based load sharing.

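The loss-based inference described above can be sketched with a simple statistical test: given loss counts for two packet types probed over the same path, flag a possible priority difference when the loss rates differ significantly. This is only an illustration of the idea, not POPI's actual method (which is rank-based across many packet types); the function name and threshold below are invented:

```python
import math

def loss_rate_differs(losses_a, sent_a, losses_b, sent_b, z_crit=2.58):
    """Two-proportion z-test: do two packet types see different loss rates?

    A significant difference on the same path is consistent with the
    path applying different forwarding priorities to the two types.
    """
    pa, pb = losses_a / sent_a, losses_b / sent_b
    p = (losses_a + losses_b) / (sent_a + sent_b)            # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / sent_a + 1 / sent_b))  # std. error
    if se == 0:
        return False
    return abs(pa - pb) / se > z_crit

# 2% loss for type A vs. 10% loss for type B over 1000 probes each
print(loss_rate_differs(20, 1000, 100, 1000))   # True: rates differ
```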
  • Computational Analysis and Efficient Algorithms for Micro and Macro OFDMA Downlink Scheduling

    Publication Year: 2010 , Page(s): 15 - 26
    Cited by:  Papers (6)
    PDF (481 KB) | HTML

    Orthogonal frequency-division multiple access (OFDMA) is one of the most important modulation and access methods for future mobile networks. Before transmitting a frame on the downlink, an OFDMA base station has to invoke an algorithm that determines which of the pending packets will be transmitted, what modulation should be used for each of them, and how to construct the complex OFDMA frame matrix as a collection of rectangles that fit into a single matrix with fixed dimensions. We propose efficient algorithms, with performance guarantees, that solve this intricate OFDMA scheduling problem by breaking it down into two subproblems, referred to as macro and micro scheduling. We analyze the computational complexity of these subproblems and develop efficient algorithms for solving them.

  • On Suitability of Euclidean Embedding for Host-Based Network Coordinate Systems

    Publication Year: 2010 , Page(s): 27 - 40
    Cited by:  Papers (17)
    PDF (1563 KB) | HTML

    In this paper, we investigate the suitability of embedding Internet hosts into a Euclidean space given their pairwise distances (as measured by round-trip time). Using the classical scaling and matrix perturbation theories, we first establish the (sum of the) magnitude of negative eigenvalues of the (doubly centered, squared) distance matrix as a measure of suitability of Euclidean embedding. We then show that the distance matrix among Internet hosts contains negative eigenvalues of large magnitude, implying that embedding the Internet hosts in a Euclidean space would incur relatively large errors. Motivated by earlier studies, we demonstrate that the inaccuracy of Euclidean embedding is caused by a large degree of triangle inequality violation (TIV) in the Internet distances, which leads to negative eigenvalues of large magnitude. Moreover, we show that the TIVs are likely to occur locally; hence the distances among these close-by hosts cannot be estimated accurately using a global Euclidean embedding. In addition, increasing the dimension of embedding does not reduce the embedding errors. Based on these insights, we propose a new hybrid model for embedding the network nodes using only a two-dimensional Euclidean coordinate system and small error adjustment terms. We show that the accuracy of the proposed embedding technique is as good as, if not better than, that of a seven-dimensional Euclidean embedding.

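The triangle-inequality violations (TIVs) discussed above are easy to check directly from a measured RTT matrix. A minimal, illustrative counter (pure enumeration over all triples; names and the toy matrix are ours):

```python
from itertools import combinations

def triangle_violations(rtt):
    """Count triangle-inequality violations in a symmetric RTT matrix.

    A triple (i, j, k) violates the triangle inequality when one side
    exceeds the sum of the other two; many such triples make the
    distances hard to embed in any Euclidean space.
    """
    bad = 0
    for i, j, k in combinations(range(len(rtt)), 3):
        a, b, c = rtt[i][j], rtt[i][k], rtt[j][k]
        if a > b + c or b > a + c or c > a + b:
            bad += 1
    return bad

# The direct i-j path (100 ms) is slower than the detour via k (30 + 30 ms)
rtt = [[0, 100, 30],
       [100, 0, 30],
       [30, 30, 0]]
print(triangle_violations(rtt))   # 1
```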
  • Detection of Intelligent Mobile Target in a Mobile Sensor Network

    Publication Year: 2010 , Page(s): 41 - 52
    Cited by:  Papers (7)
    PDF (1537 KB) | HTML

    We study the problem of a mobile target (the mouse) trying to evade detection by one or more mobile sensors (we call such a sensor a cat) in a closed network area. We view our problem as a game between two players: the mouse, and the collection of cats forming a single (meta-)player. The game ends when the mouse falls within the sensing range of one or more cats. A cat tries to determine its optimal strategy to minimize the worst-case expected detection time of the mouse. The mouse tries to determine an optimal counter movement strategy to maximize the expected detection time. We divide the problem into two cases based on the relative sensing capabilities of the cats and the mouse. When the mouse has a sensing range smaller than or equal to the cats', we develop a dynamic programming solution for the mouse's optimal strategy, assuming high-level information about the cats' movement model. We discuss how the cats' chosen movement model will affect its presence matrix in the network, and hence its payoff in the game. When the mouse has a larger sensing range than the cats, we show how the mouse can determine its optimal movement strategy based on local observations of the cats' movements. We further present a coordination protocol for the cats to collaboratively catch the mouse by: 1) forming opportunistically a cohort to limit the mouse's degree of freedom in escaping detection; and 2) minimizing the overlap in the spatial coverage of the cohort's members. Extensive experimental results verify and illustrate the analytical results, and evaluate the game's payoffs as a function of several important system parameters.

  • Thwarting Zero-Day Polymorphic Worms With Network-Level Length-Based Signature Generation

    Publication Year: 2010 , Page(s): 53 - 66
    Cited by:  Papers (3)
    PDF (846 KB) | HTML

    It is crucial to detect zero-day polymorphic worms and to generate signatures at network gateways or honeynets so that we can prevent worms from propagating at their early phase. However, most existing network-based signatures are exploit-specific and can be easily evaded. In this paper, we propose generating vulnerability-driven signatures at the network level without any host-level analysis of worm execution or vulnerable programs. As the first step, we design a network-based length-based signature generator (LESG) for worms exploiting buffer overflow vulnerabilities. The signatures generated are intrinsic to buffer overflows and are very difficult for attackers to evade. We further prove the attack resilience bounds even under worst-case attacks with deliberate noise injection. Moreover, LESG is fast and noise-tolerant and has efficient signature matching. Evaluation based on real-world vulnerabilities of various protocols and real network traffic demonstrates that LESG is promising in achieving these goals.

  • 1+N Network Protection for Mesh Networks: Network Coding-Based Protection Using p-Cycles

    Publication Year: 2010 , Page(s): 67 - 80
    Cited by:  Papers (19)
    PDF (492 KB) | HTML

    p-cycles have been proposed for preprovisioned 1:N protection in optical mesh networks. Although the protection circuits are preconfigured, the detection of failures and the rerouting of traffic can be time-consuming operations. Another survivable mode of operation is the 1+1 protection mode, in which a signal is transmitted to the destination on two link-disjoint circuits; hence, recovery from failures is expeditious. However, this requires a large number of protection circuits. In this paper, we introduce a new concept in protection: 1+N protection, in which a p-cycle, similar to FIPP p-cycles, can be used to protect a number of bidirectional connections that are mutually link disjoint and also link disjoint from all links of the p-cycle. However, data units from different circuits are combined using network coding, which can be implemented in a number of technologies, such as next generation SONET (NGS), MPLS/GMPLS, or IP-over-WDM. The maximum outage time under this protection scheme can be limited to no more than the p-cycle propagation delay. It is also shown how to implement a hybrid 1+N and 1:N protection scheme, in which on-cycle links are protected using 1:N protection, while straddling links, or paths, are protected using 1+N protection. Extensions of this technique to protect multipoint connections are also introduced. A performance study based on optimal formulations of the 1+1, 1+N, and hybrid schemes is introduced. Although the speed of recovery of 1+N is comparable to that of 1+1 protection, numerical results for small networks indicate that 1+N is about 30% more efficient than 1+1 protection in terms of the amount of protection resources, especially as the network graph density increases.

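The combining step described above is plain XOR network coding: one coded copy on the protection cycle can restore whichever single working path fails. A toy sketch with byte-sized data units (the connection count and values are illustrative, not from the paper):

```python
def encode(units):
    """Network-code data units from link-disjoint connections onto the p-cycle."""
    coded = 0
    for u in units:
        coded ^= u
    return coded

def recover(coded, received):
    """Recover the one missing unit by XOR-ing out the units that did
    arrive on their working paths."""
    for u in received:
        coded ^= u
    return coded

# Three connections send one data unit each; connection 1's working path fails.
units = [0x5A, 0x3C, 0xF0]
p_cycle_copy = encode(units)                 # combined copy on the protection cycle
lost = recover(p_cycle_copy, [0x5A, 0xF0])   # XOR out the surviving units
print(hex(lost))                             # 0x3c
```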
  • SUSE: Superior Storage-Efficiency for Routing Tables Through Prefix Transformation and Aggregation

    Publication Year: 2010 , Page(s): 81 - 94
    Cited by:  Papers (1)
    PDF (1179 KB) | HTML

    A novel storage design for IP routing table construction is introduced on the basis of a single set-associative hash table to support fast longest prefix matching (LPM). The proposed design involves two key techniques to drastically lower the table storage required: 1) storing transformed prefix representations; and 2) accommodating multiple prefixes per table entry via prefix aggregation, achieving superior storage-efficiency (SUSE). With each prefix p(x) treated as a polynomial, p(x) = q(x) · g(x) + r(x) for a chosen divisor g(x), SUSE keeps only q(x), rather than the full and long p(x), in an r(x)-indexed table with 2^deg(g(x)) entries, because q(x) and r(x) together uniquely identify p(x). Additionally, using r(x) as the hash index exhibits better distribution than do original prefixes, reducing hash collisions, which are tolerated further by the set-associative design. Given a set of chosen prefix lengths (called "treads"), all prefixes are rounded down to the nearest treads under SUSE before being hashed to the table using their transformed representations, so that prefix aggregation opportunities abound in hash entries. SUSE yields significant table storage reduction and enjoys fast lookups and speedy incremental updates not possible for a typical trie-based design, with the worst-case lookup time shown theoretically to be upper-bounded by the number of treads but found experimentally to be 4 memory accesses with 8 treads. SUSE makes it possible to fit a large routing table with 256 K (or even 1 M) prefixes in on-chip SRAM with today's ASIC technology. It solves both the memory- and the bandwidth-intensive problems faced by IP routing.

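The prefix transformation p(x) = q(x) · g(x) + r(x) above is ordinary polynomial division over GF(2), the same arithmetic used by CRCs. A minimal sketch with bit-encoded polynomials; the 16-bit prefix and degree-8 divisor below are invented for illustration, not taken from the paper:

```python
def gf2_divmod(p, g):
    """Divide polynomial p by g over GF(2). Polynomials are bit-encoded
    ints (bit i = coefficient of x^i); returns (quotient, remainder)
    with p == q*g XOR r and deg(r) < deg(g)."""
    q = 0
    dg = g.bit_length() - 1
    while p and p.bit_length() - 1 >= dg:
        shift = (p.bit_length() - 1) - dg   # align leading terms
        q |= 1 << shift
        p ^= g << shift                     # subtract (XOR) g * x^shift
    return q, p

# A 16-bit prefix treated as a polynomial, divided by an assumed
# degree-8 divisor (both values are illustrative):
prefix = 0b1100101011110001
g = 0b100011011
q, r = gf2_divmod(prefix, g)
# r (< 2**8) indexes the 2^deg(g) = 256-entry hash table; only q is
# stored there, since (q, r) together uniquely reconstruct the prefix.
print(r < 2 ** 8)   # True
```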
  • Message-Efficient Beaconless Georouting With Guaranteed Delivery in Wireless Sensor, Ad Hoc, and Actuator Networks

    Publication Year: 2010 , Page(s): 95 - 108
    Cited by:  Papers (13)
    PDF (1266 KB) | HTML

    Beaconless georouting algorithms are fully reactive and work without prior knowledge of their neighbors. However, existing approaches can either not guarantee delivery, or they require the exchange of complete neighborhood information. We describe two general methods for completely reactive face routing with guaranteed delivery. The beaconless forwarder planarization (BFP) scheme determines correct edges of a local planar subgraph without hearing from all neighbors. Face routing then continues properly. Angular relaying determines directly the next hop of a face traversal. Both schemes are based on the select-and-protest principle. Neighbors respond according to a delay function, but only if they do not violate a planar subgraph condition. Protest messages are used to remove falsely selected neighbors that are not in the planar subgraph. We show that a correct beaconless planar subgraph construction is not possible without protests. We also show the impact of the chosen planar subgraph on the message complexity. With the new circlunar neighborhood graph (CNG), we can bound the worst-case message complexity of BFP, which is not possible when using the Gabriel graph (GG) for planarization. Simulation results show similar message complexities in the average case when using CNG and GG. Angular relaying uses a delay function that is based on the angular distance to the previous hop. We develop a theoretical framework for delay functions and show both theoretically and in simulations that with a function of angle and distance we can reduce the number of protests by a factor of 2 compared to a simple angle-based delay function.

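The planar subgraph conditions above can be tested locally from node coordinates. As an illustration, here is the classical Gabriel graph (GG) rule mentioned in the abstract (the paper's CNG is a different, related construction): an edge (u, v) survives planarization iff no witness node lies strictly inside the circle whose diameter is the segment u-v. All names and coordinates are ours:

```python
def gabriel_edge(u, v, others):
    """(u, v) is a Gabriel-graph edge iff for every witness w,
    |uw|^2 + |wv|^2 >= |uv|^2 (w is outside the diameter circle)."""
    d2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return all(d2(u, w) + d2(w, v) >= d2(u, v) for w in others)

u, v = (0, 0), (4, 0)
print(gabriel_edge(u, v, [(2, 3)]))   # True: witness outside the circle
print(gabriel_edge(u, v, [(2, 1)]))   # False: witness inside, edge removed
```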
  • The (In)Completeness of the Observed Internet AS-level Structure

    Publication Year: 2010 , Page(s): 109 - 122
    Cited by:  Papers (29)
    PDF (1093 KB) | HTML

    Despite significant efforts to obtain an accurate picture of the Internet's connectivity structure at the level of individual autonomous systems (ASes), much has remained unknown in terms of the quality of the inferred AS maps that have been widely used by the research community. In this paper, we assess the quality of inferred Internet maps through case studies of a sample set of ASes. These case studies allow us to establish the ground truth of connectivity between this set of ASes and their directly connected neighbors. A direct comparison between the ground truth and the inferred topology maps yields insights into questions such as which parts of the actual topology are adequately captured by the inferred maps, which parts are missing and why, and what percentage of links is missing in these parts. This information is critical in assessing, for each class of real-world networking problems, whether the use of currently inferred AS maps or proposed AS topology models is, or is not, appropriate. More importantly, our newly gained insights also point to new directions towards building realistic and economically viable Internet topology maps.

  • Efficient and Dynamic Routing Topology Inference From End-to-End Measurements

    Publication Year: 2010 , Page(s): 123 - 135
    Cited by:  Papers (9)
    PDF (933 KB) | HTML

    Inferring the routing topology and link performance from a node to a set of other nodes is an important component in network monitoring and application design. In this paper, we propose a general framework for designing topology inference algorithms based on additive metrics. The framework can flexibly fuse information from multiple measurements to achieve better estimation accuracy. We develop computationally efficient (polynomial-time) topology inference algorithms based on the framework. We prove that the probability of correct topology inference of our algorithms converges to one exponentially fast in the number of probing packets. In particular, for applications where nodes may join or leave frequently, such as overlay network construction, application-layer multicast, and peer-to-peer file sharing/streaming, we propose a novel sequential topology inference algorithm that significantly reduces the probing overhead and can efficiently handle node dynamics. We demonstrate the effectiveness of the proposed inference algorithms via Internet experiments.

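Additive-metric inference rests on a simple tree identity: for a source s and two destinations a and b, the metric accumulated on the portion of the path shared by s→a and s→b equals (d(s,a) + d(s,b) − d(a,b)) / 2. A minimal sketch of that identity (numbers invented; the paper's algorithms build a full topology from many such quantities):

```python
def shared_path_metric(d_sa, d_sb, d_ab):
    """Additive-metric identity on a tree: the metric on the path
    shared by s->a and s->b is (d(s,a) + d(s,b) - d(a,b)) / 2."""
    return (d_sa + d_sb - d_ab) / 2

# s -> branch point costs 5; branch point -> a costs 2; branch point -> b costs 3
print(shared_path_metric(5 + 2, 5 + 3, 2 + 3))   # 5.0
```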
  • Global Access Network Evolution

    Publication Year: 2010 , Page(s): 136 - 149
    Cited by:  Papers (1)
    PDF (785 KB) | HTML

    In this paper, we tackle the problem of updating the access network in order to connect new subscribers and to satisfy new class-of-service requirements for existing subscribers, for instance to offer new services such as high-definition television (HDTV) over the Internet protocol (IPTV). Four important access network architectures/technologies are considered: the digital subscriber line (xDSL) technologies deployed directly from the central office (CO), fiber-to-the-node (FTTN), fiber-to-the-micro-node (FTTn), and fiber-to-the-premises (FTTP). An integer mathematical programming model is proposed for this network planning problem. Next, a heuristic algorithm based on the tabu search principle is proposed to find "good" feasible solutions within a reasonable amount of computational time. Finally, numerical results are presented and analyzed. To assess the quality of the solutions found with the proposed algorithm, they are compared to the optimal solutions found using a commercial implementation of the branch-and-bound algorithm.

  • A Dynamic En-route Filtering Scheme for Data Reporting in Wireless Sensor Networks

    Publication Year: 2010 , Page(s): 150 - 163
    Cited by:  Papers (7)
    PDF (798 KB) | HTML

    In wireless sensor networks, adversaries can inject false data reports via compromised nodes and launch DoS attacks against legitimate reports. Recently, a number of filtering schemes against false reports have been proposed. However, they either lack strong filtering capacity or cannot support highly dynamic sensor networks very well. Moreover, few of them can deal with DoS attacks simultaneously. In this paper, we propose a dynamic en-route filtering scheme that addresses both false report injection and DoS attacks in wireless sensor networks. In our scheme, each node has a hash chain of authentication keys used to endorse reports; meanwhile, a legitimate report should be authenticated by a certain number of nodes. First, each node disseminates its key to forwarding nodes. Then, after sending reports, the sending nodes disclose their keys, allowing the forwarding nodes to verify their reports. We design the hill climbing key dissemination approach that ensures the nodes closer to data sources have stronger filtering capacity. Moreover, we exploit the broadcast property of wireless communication to defeat DoS attacks and adopt multipath routing to deal with the topology changes of sensor networks. Simulation results show that compared to existing solutions, our scheme can drop false reports earlier with a lower memory requirement, especially in highly dynamic sensor networks.

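The disclose-then-verify mechanics of a hash chain of authentication keys, as used above, are easy to sketch: keys are generated by repeated hashing and disclosed in reverse order, so a forwarding node can verify each newly disclosed key by hashing it back to the previously disclosed one. A minimal illustration (seed and chain length are invented; the paper's full scheme adds dissemination and endorsement logic on top):

```python
import hashlib

def make_chain(seed: bytes, n: int):
    """Build a hash chain by hashing `seed` n times; return keys in
    disclosure order, so chain[0] is the anchor k_0 = H^n(seed) and
    chain[i] = H^(n-i)(seed) is disclosed i-th."""
    keys = [seed]
    for _ in range(n):
        keys.append(hashlib.sha256(keys[-1]).digest())
    return keys[::-1]

def verify(new_key: bytes, prev_key: bytes) -> bool:
    """A newly disclosed key is valid iff it hashes to the previous one."""
    return hashlib.sha256(new_key).digest() == prev_key

chain = make_chain(b"node-17-secret", 100)
print(verify(chain[1], chain[0]))   # True: k_1 hashes to the anchor k_0
```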
  • Weighted Spectral Distribution for Internet Topology Analysis: Theory and Applications

    Publication Year: 2010 , Page(s): 164 - 176
    Cited by:  Papers (9)
    PDF (2023 KB)

    Comparing graphs to determine the level of underlying structural similarity between them is a widely encountered problem in computer science. It is particularly relevant to the study of Internet topologies, such as the generation of synthetic topologies to represent the Internet's AS topology. We derive a new metric that enables exactly such a structural comparison: the weighted spectral distribution. We then apply this metric to three aspects of the study of the Internet's AS topology. i) We use it to quantify the effect of changing the mixing properties of a simple synthetic network generator. ii) We use this quantitative understanding to examine the evolution of the Internet's AS topology over approximately seven years, finding that the distinction between the Internet core and periphery has blurred over time. iii) We use the metric to derive optimal parameterizations of several widely used AS topology generators with respect to a large-scale measurement of the real AS topology.

  • Gradually Reconfiguring Virtual Network Topologies Based on Estimated Traffic Matrices

    Publication Year: 2010 , Page(s): 177 - 189
    Cited by:  Papers (11)
    PDF (1271 KB) | HTML

    The traffic matrix is essential to traffic engineering (TE) methods. Because it is difficult to monitor traffic matrices directly, several methods for estimating them from link loads have been proposed. However, estimated traffic matrices include estimation errors, which degrade the performance of TE significantly. In this paper, we propose a method that reduces estimation errors by cooperating with the reconfiguration of the virtual network topology (VNT). In our method, the VNT reconfiguration is divided into multiple stages instead of reconfiguring a suitable VNT at once. By dividing the VNT reconfiguration into multiple stages, our traffic matrix estimation method calibrates and reduces the estimation errors in each stage by using information monitored in prior stages. We also investigate the effectiveness of our proposal using simulations. The results show that our method can improve the accuracy of traffic matrix estimation and achieve a VNT as adequate as one obtained by reconfiguration using the actual traffic matrices.

  • Distributed Resource Sharing in Low-Latency Wireless Ad Hoc Networks

    Publication Year: 2010 , Page(s): 190 - 201
    Cited by:  Papers (1)
    PDF (628 KB) | HTML

    With the growing abundance of portable wireless communication devices, a challenging question that arises is whether one can efficiently harness the collective communication and computation power of these devices. In this paper, we investigate this question by studying a streaming application. Consider a network of N wireless nodes, each of power P, in which one or more nodes are interested in receiving a data stream from a fixed server node S. We ask whether distributed communication mechanisms exist to route media packets from S to the arbitrary but fixed receiver, such that 1) the average communication delay L is short; 2) the load is balanced, i.e., all nodes in the ensemble spend roughly the same amount of average power; and, more importantly, 3) the power resources of all nodes are optimally shared, i.e., the lifetime of the network is comparable to that of an optimally designed network with N nodes whose total power is N · P. We develop a theoretical framework for the incorporation of random long-range routes into wireless ad hoc networking protocols that can achieve such performance. Surprisingly, we show that wireless ad hoc routing algorithms based on this framework exist that can deliver this performance. The proposed solution is a randomized network structuring and packet routing framework whose communication latency is only L = O(log² N) hops on average, compared to O(√N) for nearest-neighbor communications, while distributing the power requirement almost equally over all nodes. Interestingly, all network formation and routing algorithms are completely decentralized, and the packets arriving at a node are routed randomly and independently, based only on the source and destination locations. The distributed nature of the algorithm allows it to be implemented within standard wireless ad hoc communication protocols and makes the proposed framework a compelling candidate for harnessing collective network resources in a truly large-scale wireless ad hoc networking environment.

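The O(log² N) latency above comes from augmenting short local links with a few random long-range contacts, in the spirit of Kleinberg's small-world routing. A toy greedy-routing sketch on a ring, not the paper's actual protocol (node count, contact distribution, and names are all illustrative):

```python
import random

def long_range_contact(u, n):
    # One extra contact per node, at ring distance d with Pr[d] proportional to 1/d
    dists = range(1, n // 2 + 1)
    d = random.choices(dists, weights=[1 / d for d in dists])[0]
    return (u + random.choice([-d, d])) % n

def greedy_route(src, dst, n, contacts):
    """Forward greedily to whichever known node is closest to dst on the ring."""
    ring = lambda a, b: min((a - b) % n, (b - a) % n)
    cur, hops = src, 0
    while cur != dst:
        cur = min([(cur + 1) % n, (cur - 1) % n, contacts[cur]],
                  key=lambda c: ring(c, dst))
        hops += 1
    return hops

random.seed(1)
n = 1024
contacts = [long_range_contact(u, n) for u in range(n)]
hops = greedy_route(0, n // 2, n, contacts)
# A ±1 neighbor always cuts the ring distance by one, so greedy routing
# terminates in at most n/2 hops; long-range contacts make it far faster.
print(hops <= n // 2)   # True
```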
  • Coverage-Time Optimization for Clustered Wireless Sensor Networks: A Power-Balancing Approach

    Publication Year: 2010 , Page(s): 202 - 215
    Cited by:  Papers (10)
    PDF (544 KB) | HTML

    In this paper, we investigate the maximization of the coverage time for a clustered wireless sensor network by optimal balancing of power consumption among cluster heads (CHs). Clustering significantly reduces the energy consumption of individual sensors, but it also increases the communication burden on CHs. To investigate this tradeoff, our analytical model incorporates both intra- and intercluster traffic. Depending on whether location information is available or not, we consider optimization formulations under both deterministic and stochastic setups, using a Rayleigh fading model for intercluster communications. For the deterministic setup, sensor nodes and CHs are arbitrarily placed, but their locations are known. Each CH routes its traffic directly to the sink or relays it through other CHs. We present a coverage-time-optimal joint clustering/routing algorithm, in which the optimal clustering and routing parameters are computed using a linear program. For the stochastic setup, we consider a cone-like sensing region with uniformly distributed sensors and provide optimal power allocation strategies that guarantee (in a probabilistic sense) an upper bound on the end-to-end (inter-CH) path reliability. Two mechanisms are proposed for achieving balanced power consumption in the stochastic case: a routing-aware optimal cluster planning and a clustering-aware optimal random relay. For the first mechanism, the problem is formulated as a signomial optimization, which is efficiently solved using generalized geometric programming. For the second mechanism, we show that the problem is solvable in linear time. Numerical examples and simulations are used to validate our analysis and study the performance of the proposed schemes.

  • Deploying Sensor Networks With Guaranteed Fault Tolerance

    Publication Year: 2010 , Page(s): 216 - 228
    Cited by:  Papers (16)
    PDF (848 KB) | HTML

    We consider the problem of deploying or repairing a sensor network to guarantee a specified level of multipath connectivity (k-connectivity) between all nodes. Such a guarantee simultaneously provides fault tolerance against node failures and high overall network capacity (by the max-flow min-cut theorem). We design and analyze the first algorithms that place an almost-minimum number of additional sensors to augment an existing network into a k-connected network, for any desired parameter k. Our algorithms have provable guarantees on the quality of the solution. Specifically, we prove that the number of additional sensors is within a constant factor of the absolute minimum, for any fixed k. We have implemented greedy and distributed versions of this algorithm, and demonstrate in simulation that they produce high-quality placements for the additional sensors.

  • Measurement-Based Analysis, Modeling, and Synthesis of the Internet Delay Space

    Publication Year: 2010 , Page(s): 229 - 242
    Cited by:  Papers (13)
    PDF (1318 KB) | HTML

    Understanding the characteristics of the Internet delay space (i.e., the all-pairs set of static round-trip propagation delays among edge networks in the Internet) is important for the design of global-scale distributed systems. For instance, algorithms used in overlay networks are often sensitive to violations of the triangle inequality and to the growth properties within the Internet delay space. Since designers of distributed systems often rely on simulation and emulation to study design alternatives, they need a realistic model of the Internet delay space. In this paper, we analyze measured delay spaces among thousands of Internet edge networks and quantify key properties that are important for distributed system design. Our analysis shows that existing delay space models do not adequately capture these important properties of the Internet delay space. Furthermore, we derive a simple model of the Internet delay space based on our analytical findings. This model preserves the relevant metrics far better than existing models, allows for a compact representation, and can be used to synthesize delay data for simulations and emulations at a scale where direct measurement and storage are impractical. We present the design of a publicly available delay space synthesizer tool called DS 2 and demonstrate its effectiveness. View full abstract»

  • Downlink Capacity of Hybrid Cellular Ad Hoc Networks

    Publication Year: 2010 , Page(s): 243 - 256
    Cited by:  Papers (11)

    Augmenting cellular networks with shorter multihop wireless links that carry traffic to/from a base station can be expected to facilitate higher rates and improved spatial reuse, potentially yielding increased wireless capacity. The resulting network is referred to as a hybrid network. However, while this approach can produce shorter-range, higher-rate links and improved spatial reuse, which together favor a capacity increase, it relies on multihop forwarding, which is detrimental to the overall capacity. In this paper, our objective is to evaluate the impact of these conflicting factors on the overall capacity of the hybrid network. We formally define the capacity of the network as the maximum possible downlink throughput under the constraint of max-min fairness. We analytically compute the capacity of both one- and two-dimensional hybrid networks with regular placement of base stations and users. While almost no capacity benefits are possible with linear networks due to poor spatial reuse, significant capacity improvements with two-dimensional networks are possible in certain parametric regimes. Our simulations also demonstrate that in both cases, if the users are placed randomly, the behavioral results are similar to those with regular placement of users.
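The max-min fairness constraint in the capacity definition can be illustrated with the classic progressive-filling algorithm (a generic sketch of max-min fair allocation, not the paper's analytical method):

```python
def max_min_fair(flow_links, capacity):
    """Progressive filling: raise all unfrozen flow rates together until
    some link saturates, freeze the flows crossing that link, repeat.
    flow_links: list of sets, the links each flow traverses.
    capacity:   dict mapping link id -> link capacity."""
    rates = [0.0] * len(flow_links)
    frozen = [False] * len(flow_links)
    cap = dict(capacity)
    while not all(frozen):
        # Per link: remaining capacity split among its unfrozen flows.
        increments = []
        for link, c in cap.items():
            active = [i for i, ls in enumerate(flow_links)
                      if link in ls and not frozen[i]]
            if active:
                increments.append((c / len(active), link))
        if not increments:
            break
        inc, bottleneck = min(increments)
        for i, ls in enumerate(flow_links):
            if not frozen[i]:
                rates[i] += inc
                for link in ls:
                    cap[link] -= inc
            if bottleneck in ls:
                frozen[i] = True
    return rates

# Flow 1 crosses both links and bottlenecks on link "a" with flow 0.
print(max_min_fair([{"a"}, {"a", "b"}, {"b"}], {"a": 1.0, "b": 2.0}))
# [0.5, 0.5, 1.5]
```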

  • Understanding and Modeling the Internet Topology: Economics and Evolution Perspective

    Publication Year: 2010 , Page(s): 257 - 270
    Cited by:  Papers (9)

    In this paper, we seek to understand the intrinsic reasons for the well-known phenomenon of heavy-tailed degree distributions in the Internet AS graph and argue that, in contrast to traditional models based on preferential attachment and centralized optimization, the Pareto degree distribution of the Internet can be explained by the evolution of the wealth associated with each ISP. The proposed topology model utilizes a simple multiplicative stochastic process that determines each ISP's wealth at different points in time and several "maintenance" rules that keep the degree of each node proportional to its wealth. Actual link formation is determined in a decentralized fashion based on random walks, where each ISP individually decides when and how to increase its degree. Simulations show that the proposed model, which we call Wealth-based Internet Topology (WIT), produces scale-free random graphs with a tunable exponent α and high clustering coefficients (between 0.35 and 0.5) that stay invariant as the size of the graph increases. This evolution closely mimics that of the Internet observed since 1997.
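A minimal sketch of a multiplicative wealth process of the kind described (the lognormal multiplier, its sigma, and all parameters are illustrative assumptions, not the paper's calibration):

```python
import random

def wealth_process(n_isps, steps, floor=1.0, seed=7):
    """Each ISP's wealth evolves by an i.i.d. multiplicative factor per
    step; the reflecting lower barrier ('floor') is what turns the
    lognormal spread into a Pareto tail in Kesten-type processes."""
    rng = random.Random(seed)
    wealth = [floor] * n_isps
    for _ in range(steps):
        wealth = [max(floor, w * rng.lognormvariate(0.0, 0.3))
                  for w in wealth]
    return wealth

w = wealth_process(1000, 200)
degrees = [max(1, round(v)) for v in w]  # degree kept proportional to wealth
print(max(w) / min(w))  # large spread: a few ISPs dominate
```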

  • QoS-Based Manycasting Over Optical Burst-Switched (OBS) Networks

    Publication Year: 2010 , Page(s): 271 - 283
    Cited by:  Papers (10)

    Many distributed applications require a group of destinations to be coordinated with a single source. Multicasting is a communication paradigm for implementing such applications. In multicasting, however, if even one member of the group cannot satisfy the service requirement of the application, the multicast request is said to be blocked. In manycasting, by contrast, destinations can join or leave the group depending on whether they satisfy the service requirement. This dynamic group membership decreases request blocking. We study the behavior of manycasting over optical burst-switched (OBS) networks under multiple quality of service (QoS) constraints. These constraints can take the form of physical-layer impairments, transmission delay, and link reliability. Each application specifies its own QoS threshold attributes, and destinations qualify only if they satisfy the required constraints. We have developed a mathematical model based on lattice algebra for this multiconstraint problem. Because of the multiple constraints, burst blocking could be high, so we propose two algorithms to minimize request blocking for the multiconstrained manycast (MCM) problem. Using extensive simulations, we have calculated the average request blocking for the proposed algorithms. Our results show that the MCM-shortest path tree (MCM-SPT) algorithm performs better than MCM-dynamic membership (MCM-DM) for delay-constrained and real-time services, whereas data services can be better provisioned using the MCM-DM algorithm.
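The dynamic-membership idea can be sketched as a simple filter: destinations that fail any QoS threshold drop out instead of blocking the whole request. The field names and threshold values below are hypothetical ("q_factor" stands in for a physical-layer impairment metric):

```python
def qualify_destinations(candidates, thresholds, k):
    """Keep destinations whose attributes meet every QoS threshold
    (max delay, min reliability, min signal quality), then pick the k
    best by delay. A destination that fails any constraint simply drops
    out of the group rather than blocking the request."""
    ok = [d for d in candidates
          if d["delay"] <= thresholds["max_delay"]
          and d["reliability"] >= thresholds["min_reliability"]
          and d["q_factor"] >= thresholds["min_q_factor"]]
    ok.sort(key=lambda d: d["delay"])
    return ok[:k]

dests = [
    {"name": "d1", "delay": 12, "reliability": 0.999, "q_factor": 7.1},
    {"name": "d2", "delay": 30, "reliability": 0.990, "q_factor": 5.8},
    {"name": "d3", "delay": 18, "reliability": 0.999, "q_factor": 6.5},
]
req = {"max_delay": 25, "min_reliability": 0.995, "min_q_factor": 6.0}
group = qualify_destinations(dests, req, k=2)
print([d["name"] for d in group])  # ['d1', 'd3']: d2 fails two thresholds
```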

  • ILP Formulations for p -Cycle Design Without Candidate Cycle Enumeration

    Publication Year: 2010 , Page(s): 284 - 295
    Cited by:  Papers (10)

    The concept of p-cycle (preconfigured protection cycle) allows fast and efficient span protection in wavelength division multiplexing (WDM) mesh networks. To design p-cycles for a given network, conventional algorithms need to enumerate cycles in the network to form a candidate set, and then use an integer linear program (ILP) to find a set of p-cycles from the candidate set. Because the size of the candidate set increases exponentially with the network size, candidate cycle enumeration introduces a huge number of ILP variables and slows down the optimization process. In this paper, we focus on p-cycle design without candidate cycle enumeration. Three ILPs for solving the problem of spare capacity placement (SCP) are first formulated. They are based on recursion, flow conservation, and cycle exclusion, respectively. We show that the number of ILP variables/constraints in our cycle exclusion approach only increases linearly with the network size. Then, based on cycle exclusion, we formulate an ILP for solving the joint capacity placement (JCP) problem. Numerical results show that our ILPs are very efficient in generating p-cycle solutions.
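The motivation for enumeration-free formulations is easy to quantify: the number of distinct simple cycles in the complete graph K_n is the sum over cycle lengths k of C(n, k) * (k-1)!/2, which explodes with n. A quick count:

```python
from math import comb, factorial

def cycle_count_complete(n):
    """Number of distinct simple cycles in the complete graph K_n:
    choose k of the n nodes, then arrange them in a cycle
    ((k-1)!/2 distinct cyclic orderings)."""
    return sum(comb(n, k) * factorial(k - 1) // 2 for k in range(3, n + 1))

for n in range(4, 9):
    print(n, cycle_count_complete(n))
# 7, 37, 197, 1172, 8018 candidate cycles for K_4 .. K_8 -- the candidate
# set an enumeration-based ILP must carry grows exponentially.
```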

  • Approximation Algorithms for Minimum Energy Transmission Scheduling in Rate and Duty-Cycle Constrained Wireless Networks

    Publication Year: 2010 , Page(s): 296 - 306

    We consider a constrained energy optimization called the minimum energy scheduling problem (MESP) for a wireless network of N users transmitting over M time slots, where the constraints arise from interference between wireless nodes, which limits their transmission rates, along with load and duty-cycle (on-off) restrictions. Since traditional optimization methods using Lagrange multipliers do not work well and are computationally expensive given the nonconvex constraints, we consider approximation schemes for finding the optimal (minimum energy) transmission schedule by discretizing power levels over the interference channel. First, we show the hardness of approximating MESP for an arbitrary number of users N, even with a fixed M: for any r > 0, there does not exist any (r, r)-bicriteria approximation for MESP unless P = NP. Conversely, we show that good approximations exist for MESP with N users transmitting over an arbitrary number of time slots M by developing fully polynomial (1, 1+ε)-approximation schemes (FPAS). For any ε > 0, we develop an algorithm for computing the optimal number of discrete power levels per time slot, O(1/ε), and use this to design a (1, 1+ε)-FPAS that consumes no more energy than the optimal schedule while violating each rate constraint by at most a (1+ε) factor. For wireless networks with low-cost transmitters, where nodes are restricted to transmitting at a fixed power over active time slots, we develop a two-factor approximation for finding the optimal fixed transmission power P_opt that results in the minimum energy schedule.
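The discretization idea behind such an FPAS can be sketched with a geometric power grid of ratio (1+ε): picking the smallest feasible grid level overshoots the exact minimum power by at most a (1+ε) factor. The Shannon-rate model and all parameter values below are illustrative assumptions, not the paper's construction:

```python
from math import log2

def power_grid(p_min, p_max, eps):
    """Geometric grid of power levels with ratio (1 + eps); its size is
    O(log(p_max / p_min) / eps), so a tighter eps costs more levels."""
    levels, p = [], p_min
    while p < p_max:
        levels.append(p)
        p *= 1 + eps
    levels.append(p_max)
    return levels

def min_grid_power_for_rate(rate, noise, levels):
    """Smallest grid power whose Shannon rate log2(1 + p / noise) meets
    the demand; it exceeds the exact minimum by at most a (1+eps) factor."""
    for p in levels:
        if log2(1 + p / noise) >= rate:
            return p
    return None  # rate unreachable even at p_max

levels = power_grid(0.01, 10.0, eps=0.1)
p = min_grid_power_for_rate(rate=3.0, noise=0.5, levels=levels)
# exact minimum is noise * (2**3 - 1) = 3.5; the grid answer is within 10%
```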


Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign