IEEE/ACM Transactions on Networking

Issue 5 • Oct. 2009

Displaying Results 1 - 25 of 29
  • [Front cover]

    Publication Year: 2009 , Page(s): C1 - C4
    Freely Available from IEEE
  • IEEE/ACM Transactions on Networking publication information

    Publication Year: 2009 , Page(s): C2
    Freely Available from IEEE
  • Analyzing the Video Popularity Characteristics of Large-Scale User Generated Content Systems

    Publication Year: 2009 , Page(s): 1357 - 1370
    Cited by:  Papers (55)

    User generated content (UGC), now with millions of video producers and consumers, is reshaping the way people watch video and TV. In particular, UGC sites are creating new viewing patterns and social interactions, empowering users to be more creative, and generating new business opportunities. Compared to traditional video-on-demand (VoD) systems, UGC services allow users to request videos from a potentially unlimited selection in an asynchronous fashion. To better understand the impact of UGC services, we have analyzed the world's largest UGC VoD system, YouTube, and a popular similar system in Korea, Daum Videos. In this paper, we first empirically show how UGC services are fundamentally different from traditional VoD services. We then analyze the intrinsic statistical properties of UGC popularity distributions and discuss opportunities to leverage the latent demand for niche videos (the so-called "Long Tail" potential), which is not reached today due to information filtering or other system scarcity distortions. Based on traces collected across multiple days, we study the popularity lifetime of UGC videos and the relationship between requests and video age. Finally, we measure the level of content aliasing and illegal content in the system and show the problems aliasing creates in ranking video popularity accurately. The results presented in this paper are crucial to understanding UGC VoD systems and may have major commercial and technical implications for site administrators and content owners.
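
    The popularity analysis described above is straightforward to reproduce in miniature. The sketch below fits a power-law slope to a rank-ordered popularity curve and measures how much viewing mass sits in the tail; the view counts are synthetic (Zipf-distributed), standing in for the paper's traces.

```python
# Illustrative sketch, not the paper's code: fit a power-law slope to a
# rank-ordered popularity curve and measure the viewing mass in the tail.
# View counts here are synthetic, standing in for crawled traces.
import numpy as np

rng = np.random.default_rng(0)
views = np.sort(rng.zipf(a=2.0, size=10_000))[::-1].astype(float)

ranks = np.arange(1, len(views) + 1)
slope, _ = np.polyfit(np.log(ranks), np.log(views), 1)
print(f"estimated log-log slope of the popularity curve: {slope:.2f}")

cut = len(views) // 10  # everything past the top 10% of videos
print(f"share of views in the 'long tail': {views[cut:].sum() / views.sum():.2%}")
```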

  • Long Term Study of Peer Behavior in the KAD DHT

    Publication Year: 2009 , Page(s): 1371 - 1384
    Cited by:  Papers (44)

    Distributed hash tables (DHTs) have been actively studied in the literature, and many different proposals have been made on how to organize peers in a DHT. However, very few DHTs have been implemented in real systems and deployed on a large scale. One exception is KAD, a DHT based on Kademlia, which is part of eDonkey, a peer-to-peer file sharing system with several million simultaneous users. We have been crawling a representative subset of KAD every five minutes for six months and obtained information about the geographical distribution of peers, session times, daily usage, and peer lifetime. We have found that session times are Weibull distributed, and we show how this information can be exploited to make the publishing mechanism much more efficient. Peers are identified by the so-called KAD ID, which until now was assumed to be persistent. However, we observed that a fraction of peers change their KAD ID as frequently as once per session. This change of KAD IDs makes it difficult to characterize end-user behavior. For this reason, we have been crawling the entire KAD network once a day for more than a year to track end users with static IP addresses, which allows us to estimate end-user lifetime and the fraction of end users changing their KAD ID.
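
    As a toy illustration of why the Weibull finding matters for publishing: with a shape parameter below 1, the expected remaining uptime of a peer grows with its age, so replicas placed on long-lived peers survive longer. The parameters below are assumed, not the paper's measurements.

```python
# Toy illustration with assumed parameters (not the paper's measurements):
# Weibull session times with shape < 1 have increasing mean residual life,
# so peers that have been online longer are likely to stay longer -- the
# property a publishing mechanism can exploit when placing replicas.
import numpy as np

rng = np.random.default_rng(1)
shape, scale = 0.6, 120.0                 # minutes; assumed values
sessions = scale * rng.weibull(shape, size=100_000)

def mean_residual(age):
    """Expected further uptime of peers already online for `age` minutes."""
    alive = sessions[sessions > age]
    return (alive - age).mean()

for age in (0, 30, 120):
    print(f"online {age:>3} min -> expected further uptime {mean_residual(age):7.1f} min")
```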

  • Taking the Skeletons Out of the Closets: A Simple and Efficient Topology Discovery Scheme for Large Ethernet LANs

    Publication Year: 2009 , Page(s): 1385 - 1398
    Cited by:  Papers (3)  |  Patents (1)

    We propose a simple and efficient algorithmic solution for discovering the physical topology of large, heterogeneous Ethernet LANs that may include multiple subnets as well as uncooperative network elements, like hubs. Our scheme utilizes only generic MIB information and does not require any hardware or software modification of the underlying network elements. By rigorous analysis, we prove that our method correctly infers the network topology and has low communication and computational overheads. Our simulation results show that the scheme successfully infers the complete topology in the vast majority of the cases, including many instances that cannot be inferred by other methods. Finally, our proof-of-concept implementation demonstrates the practicality of the proposed scheme for network management.

  • Scalable Key Management Algorithms for Location-Based Services

    Publication Year: 2009 , Page(s): 1399 - 1412
    Cited by:  Papers (6)

    Secure media broadcast over the Internet poses unique security challenges. One important problem for public broadcast location-based services (LBS) is to enforce access control on a large number of subscribers. In such a system, a user typically subscribes to an LBS for a time interval (a, b) and a spatial region (x_bl, y_bl, x_tr, y_tr) according to a three-dimensional spatial-temporal authorization model. In this paper, we argue that current approaches to access control using key management protocols are not scalable. Our proposal, STauth, minimizes the number of keys that need to be distributed and is thus scalable to a large number of subscribers and to the dimensionality of the authorization model. We also demonstrate applications of our algorithm to quantified-temporal access control (using ∀ and ∃ quantifications) and to partial-order tree-based authorization models. We describe two implementations of our key management protocols on two diverse platforms: a broadcast service operating on top of a publish/subscribe infrastructure and an extension to the Google Maps API to support quality (resolution)-based access control. We analytically and experimentally show the performance and scalability benefits of our approach over traditional key management approaches.
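
    The general intuition behind key-management schemes of this kind is that a binary tree over time slots lets the service hand a subscriber O(log T) subtree keys instead of one key per slot. The following is a minimal sketch of that generic tree-derivation idea, not the STauth construction itself; all names are hypothetical.

```python
# Minimal sketch of generic tree-based interval key derivation (NOT the
# STauth construction; all names hypothetical). A subscriber authorized for
# a dyadic range of time slots receives one subtree key and derives the
# per-slot keys beneath it, so O(log T) keys cover any interval.
import hmac, hashlib

def child_key(parent: bytes, bit: str) -> bytes:
    return hmac.new(parent, bit.encode(), hashlib.sha256).digest()

def derive(key: bytes, bits: str) -> bytes:
    for bit in bits:
        key = child_key(key, bit)
    return key

root = b"master-secret"                  # hypothetical service master key
k_sub = derive(root, "01")               # covers all depth-5 slots 01***

# Slot 0b01000 (= 8) lies under prefix '01', so the subscriber's subtree
# key yields the same per-slot key the service derives from the root.
assert derive(root, "01000") == derive(k_sub, "000")
```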

  • A Unified Approach to Congestion Control and Node-Based Multipath Routing

    Publication Year: 2009 , Page(s): 1413 - 1426
    Cited by:  Papers (9)  |  Patents (1)

    The paper considers a TCP/IP-style network with flow control at end-systems based on congestion feedback and routing decisions at network nodes on a per-destination basis. The main generalization with respect to standard IP is to allow routers to split their traffic in a controlled way between the outgoing links. We formulate global optimization criteria, combining those used in congestion control and traffic engineering, and propose decentralized controllers at sources and routers to reach these optimal points, based on congestion price feedback. We first consider adapting the traffic splits at routers to follow the negative price gradient; we prove this is globally stabilizing when combined with primal congestion control, but can exhibit oscillations in the case of dual congestion control. We then propose an alternative anticipatory control of routing, proving its stability for the case of dual congestion control. We present a concrete implementation of such algorithms, based on queueing delay as congestion price. We use TCP-FAST for congestion control and develop a multipath variant of the distance-vector routing protocol RIP. We demonstrate through ns-2 simulations the collective behavior of the system, in particular that it reaches the desired equilibrium points.
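
    The negative-price-gradient adaptation of traffic splits can be sketched in a few lines: a router nudges its split ratios away from next hops whose paths report higher congestion prices, then re-projects onto the probability simplex. The prices and step size below are invented for illustration.

```python
# Sketch of negative-price-gradient adaptation of traffic splits. Prices
# and the step size are made-up numbers; this is not the paper's controller.
import numpy as np

def update_splits(alpha, prices, step=0.05):
    alpha = alpha - step * (prices - prices.mean())  # descend the price gradient
    alpha = np.clip(alpha, 0.0, None)                # stay nonnegative
    return alpha / alpha.sum()                       # renormalize to sum to 1

alpha = np.array([0.5, 0.3, 0.2])     # current splits over three next hops
prices = np.array([2.0, 1.0, 0.5])    # congestion prices, e.g., queueing delays
for _ in range(50):
    alpha = update_splits(alpha, prices)
print(alpha)  # traffic migrates toward the cheapest next hop
```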

  • Guaranteed Performance Routing of Unpredictable Traffic With Fast Path Restoration

    Publication Year: 2009 , Page(s): 1427 - 1438

    Two-phase routing, where traffic is first distributed to intermediate nodes before being routed to the final destination, has recently been proposed for handling widely fluctuating traffic without the need to adapt network routing to changing traffic. Preconfiguring the network in a traffic-independent manner using two-phase routing simplifies network operation considerably. In this paper, we extend this routing scheme by providing resiliency against link failures through fast path restoration along disjoint end-to-end backup paths. We view this as important progress toward adding carrier-class reliability to the robustness of the scheme so as to facilitate its future deployment in Internet service provider (ISP) networks. On the theoretical side, the main contribution of the paper is the development of linear-programming-based and fast combinatorial algorithms for two-phase routing with fast path restoration so as to minimize the maximum utilization of any link in the network, or equivalently, maximize the throughput. The algorithms developed are fully polynomial time approximation schemes (FPTAS): for any given ε > 0, an FPTAS guarantees a solution that is within a (1+ε) factor of the optimum and runs in time polynomial in the input size and 1/ε. To the best of our knowledge, this is the first work in the literature that considers making the scheme resilient to link failures through preprovisioned fast restoration mechanisms. We evaluate the performance of fast path restoration (in terms of throughput) and compare it to that of unprotected routing. For our experiments, we use actual ISP network topologies collected for the Rocketfuel project and three research network topologies.
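
    What makes traffic-independent preconfiguration possible in the standard two-phase routing formulation is that, with split ratios α_j, the traffic a node pair (i, j) must carry over both phases is α_j·R_i + α_i·C_j, a function of the hose rates R (egress) and C (ingress) only, not of the actual traffic matrix. A small numeric sketch, with invented rates:

```python
# Numeric sketch of the standard two-phase routing identity
# t(i, j) = alpha_j * R_i + alpha_i * C_j. Hose rates are invented.
import numpy as np

R = np.array([10.0, 20.0, 30.0])   # maximum traffic each node may send
C = np.array([30.0, 20.0, 10.0])   # maximum traffic each node may receive
alpha = (R + C) / (R + C).sum()    # one natural choice of split ratios

n = len(R)
t = np.array([[alpha[j] * R[i] + alpha[i] * C[j] for j in range(n)]
              for i in range(n)])
print(t.round(2))  # fixed pairwise demands to provision, whatever the matrix
```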

  • Oblivious Routing in Fat-Tree Based System Area Networks With Uncertain Traffic Demands

    Publication Year: 2009 , Page(s): 1439 - 1452
    Cited by:  Papers (10)

    We study oblivious routing in fat-tree-based system area networks with deterministic routing under the assumption that the traffic demand is uncertain. The performance of a routing algorithm under uncertain traffic demands is characterized by the oblivious performance ratio that bounds the relative performance of the routing algorithm with respect to the optimal algorithm for any given traffic demand. We consider both single-path routing, where only one path is used to carry the traffic between each source-destination pair, and multipath routing, where multiple paths are allowed. For single-path routing, we derive lower bounds of the oblivious performance ratio for different fat-trees and develop routing schemes that achieve the optimal oblivious performance ratios for commonly used topologies. Our evaluation results indicate that the proposed oblivious routing schemes not only provide the optimal worst-case performance guarantees but also outperform existing schemes in average cases. For multipath routing, we show that it is possible to obtain an optimal scheme for all traffic demands (an oblivious performance ratio of 1). These results quantitatively demonstrate the performance difference between single-path routing and multipath routing in fat-trees.

  • Nash Bargaining and Proportional Fairness for Wireless Systems

    Publication Year: 2009 , Page(s): 1453 - 1466
    Cited by:  Papers (15)

    Nash bargaining and proportional fairness are popular strategies for distributing resources among competing users. Under the conventional assumption of a convex compact utility set, both techniques yield the same unique solution. In this paper, we show that uniqueness is preserved for a broader class of logarithmically convex sets. Then, we study a scenario where the performance of each user is measured by its signal-to-interference ratio (SIR). The SIR is modeled by an axiomatic framework of log-convex interference functions. No power constraints are assumed. It is shown how existence and uniqueness of a proportionally fair optimizer depends on the interference coupling among the users. Finally, we analyze the feasible SIR set. Conditions are derived under which the Nash bargaining strategy has a single-valued solution.
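
    On a convex compact utility set, both Nash bargaining and proportional fairness amount to maximizing the sum of the logarithms of the utilities. A toy check on an invented linear-constraint utility set, comparing a grid search against the closed-form Lagrangian solution:

```python
# Toy check that Nash bargaining and proportional fairness coincide on a
# convex utility set: both maximize sum(log u_i). The constraint below is
# invented; Lagrangian calculus gives u1 = 6, u2 = 3 on u1 + 2*u2 <= 12.
import numpy as np

best_val, best_u = -np.inf, None
for u1 in np.linspace(0.01, 11.99, 1200):
    u2 = (12 - u1) / 2              # optimum lies on the capacity boundary
    val = np.log(u1) + np.log(u2)
    if val > best_val:
        best_val, best_u = val, (u1, u2)
print(best_u)  # ~(6.0, 3.0), matching the closed-form solution
```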

  • Distributed Link Scheduling With Constant Overhead

    Publication Year: 2009 , Page(s): 1467 - 1480
    Cited by:  Papers (37)

    This paper proposes a new class of simple, distributed algorithms for scheduling in multihop wireless networks under the primary interference model. The class is parameterized by integers k ≥ 1. We show that algorithm k of our class achieves k/(k+2) of the capacity region, for every k ≥ 1. The algorithms have small and constant worst-case overheads. In particular, algorithm k generates a new schedule using a) time less than 4k+2 round-trip times between neighboring nodes in the network and b) at most three control transmissions by any given node for any k. The control signals are explicitly specified and face the same interference effects as normal data transmissions. Our class of distributed wireless scheduling algorithms is the first that is guaranteed to achieve any fixed fraction of the capacity region while using small and constant overheads that do not scale with network size. The parameter k explicitly captures the tradeoff between control overhead and throughput performance and provides a tuning knob that protocol designers can use to harness this tradeoff in practice.
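
    The overhead/throughput tradeoff quoted above is concrete enough to tabulate directly from the abstract's expressions:

```python
# The tradeoff tabulated from the abstract: algorithm k guarantees a
# k/(k+2) fraction of the capacity region and generates each schedule in
# fewer than 4k+2 neighbor round-trip times.
for k in range(1, 9):
    print(f"k={k}: capacity fraction {k / (k + 2):.3f}, round-trip bound {4 * k + 2}")
```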

  • Performance of Random Access Scheduling Schemes in Multi-Hop Wireless Networks

    Publication Year: 2009 , Page(s): 1481 - 1493
    Cited by:  Papers (16)

    The scheduling problem in multi-hop wireless networks has been extensively investigated. Although throughput-optimal scheduling solutions have been developed in the literature, they are unsuitable for multi-hop wireless systems because they are usually centralized and have very high complexity. In this paper, we develop a random-access-based scheduling scheme that utilizes local information. The important features of this scheme include constant-time complexity, distributed operations, and a provable performance guarantee. Analytical results show that it guarantees a larger fraction of the optimal throughput performance than the state of the art. Through simulations with both single-hop and multi-hop traffic, we observe that the scheme provides high throughput, close to that of a well-known, highly efficient centralized greedy solution called the greedy maximal scheduler.

  • Optimal Stochastic Policies for Distributed Data Aggregation in Wireless Sensor Networks

    Publication Year: 2009 , Page(s): 1494 - 1507
    Cited by:  Papers (9)

    The scenario of distributed data aggregation in wireless sensor networks is considered, where sensors can obtain and estimate the information of the whole sensing field through local data exchange and aggregation. An intrinsic tradeoff between energy and aggregation delay is identified, where nodes must decide optimal instants for forwarding samples. The samples could be from a node's own sensor readings or an aggregation with samples forwarded from neighboring nodes. By considering the randomness of the sample arrival instants and the uncertainty of the availability of the multiaccess communication channel, a sequential decision process model is proposed to analyze this problem and determine optimal decision policies with local information. It is shown that, once the statistics of the sample arrival and the availability of the channel satisfy certain conditions, there exist optimal control-limit-type policies that are easy to implement in practice. In the case that the required conditions are not satisfied, the performance loss of using the proposed control-limit-type policies is characterized. In general cases, a finite-state approximation is proposed and two on-line algorithms are provided to solve it. Practical distributed data aggregation simulations demonstrate the effectiveness of the developed policies, which also achieve a desired energy-delay tradeoff.

  • Optimal Sleep/Wake Scheduling for Time-Synchronized Sensor Networks With QoS Guarantees

    Publication Year: 2009 , Page(s): 1508 - 1521
    Cited by:  Papers (3)

    We study sleep/wake scheduling for low-duty-cycle sensor networks. Our work explicitly considers the effect of synchronization error. We focus on a widely used synchronization scheme and show that its synchronization error is nonnegligible and that using a conservative guard time is energy wasteful. We formulate an optimization problem that aims to set the capture probability threshold for messages from each individual node such that the expected energy consumption is minimized and the collective quality of service (QoS) over the nodes is guaranteed. The problem is nonconvex. Nonetheless, we are able to obtain a solution whose energy consumption is provably at most 37% larger than that of the optimal solution. Simulations demonstrate the efficacy of our solution.

  • Capacity Scaling in Ad Hoc Networks With Heterogeneous Mobile Nodes: The Super-Critical Regime

    Publication Year: 2009 , Page(s): 1522 - 1535
    Cited by:  Papers (13)

    We analyze the capacity scaling laws of mobile ad hoc networks comprising heterogeneous nodes and spatial inhomogeneities. Most previous work relies on the assumption that nodes are identical and uniformly visit the entire network space. Experimental data, however, show that the mobility pattern of individual nodes is usually restricted to a portion of the area, while the overall node density is often largely inhomogeneous due to the presence of node concentration points. In this paper, we introduce a general class of mobile networks which incorporates both restricted mobility and inhomogeneous node density, and we describe a methodology to compute the asymptotic throughput achievable in these networks by the store-carry-forward communication paradigm. We show how the analysis can be mapped, under mild assumptions, into a Maximum Concurrent Flow (MCF) problem over an associated Generalized Random Geometric Graph (GRGG). Moreover, we propose an asymptotically optimal scheduling and routing scheme that achieves the maximum network capacity.

  • Scalability and Performance Evaluation of Hierarchical Hybrid Wireless Networks

    Publication Year: 2009 , Page(s): 1536 - 1549
    Cited by:  Papers (8)  |  Patents (1)

    This paper considers the problem of scaling ad hoc wireless networks now being applied to urban mesh and sensor network scenarios. Previous results have shown that the inherent scaling problems of a multihop "flat" ad hoc wireless network can be improved by a "hybrid network" with an appropriate proportion of radio nodes with wired network connections. In this work, we generalize the system model to a hierarchical hybrid wireless network with three tiers of radio nodes: low-power end-user mobile nodes (MNs) at the lowest tier, higher power radio forwarding nodes (FNs) that support multihop routing at the intermediate level, and wired access points (APs) at the highest level. Scalability properties of the proposed three-tier hierarchical hybrid wireless network are analyzed, leading to an identification of the proportion of FNs and APs, as well as the transmission range, required for a linear increase in end-user throughput. In particular, it is shown analytically that in a three-tier hierarchical network with n_A APs, n_F FNs, and n_M MNs, the low-tier capacity increases linearly with n_F, and the high-tier capacity increases linearly with n_A when n_A = Ω(√n_F) and n_A = O(n_F). This analytical result is validated via ns-2 simulations for an example dense network scenario, and the model is used to study scaling behavior and performance as a function of key parameters such as AP and FN node densities for different traffic patterns and bandwidth allocation at each tier of the network.

  • Optimizing 802.11 Wireless Mesh Networks Based on Physical Carrier Sensing

    Publication Year: 2009 , Page(s): 1550 - 1563
    Cited by:  Papers (10)

    Multi-hop ad hoc networks suffer from the "hidden" and "exposed" node problems, which diminish aggregate network throughput. While there are various approaches to mitigating these problems, in this work we focus exclusively on the role of physical carrier sensing (PCS). Specifically, tuning the PCS threshold leads to a trade-off between the hidden and exposed cases; reducing one typically increases the other, implying the existence of an optimal PCS threshold setting that maximizes the aggregate network throughput. The contributions of this work are twofold. First, we develop an analytical model to determine the optimal PCS threshold for a homogeneous network with constant link distances and show that setting the carrier sensing range close to the interference range is a robust, close-to-optimal setting for network optimization in many scenarios. As an extension to more pragmatic network topologies with non-uniform link distances, a rate-to-link allocation scheme is proposed, based on rendering the interference range equal for all links, that allows a single carrier sense range to be used for the whole network. Second, the above suggests the need for on-line adaptation of the tunable PCS threshold in general. The proposed adaptation algorithm is based on the key concept of loss differentiation (LD), which disambiguates whether a packet loss is caused by link-layer interference (hidden terminals) or by collisions. Extensive simulation results show that the proposed PCS adaptations make the PCS threshold converge to its optimal value and thus outperform schemes without PCS adaptation.
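
    As a back-of-the-envelope sketch (with an assumed path-loss exponent and powers, not the paper's model): under a log-distance path-loss law, the PCS threshold maps to a carrier sensing range, and lowering the threshold widens that range, trading hidden terminals for exposed ones.

```python
# Assumed path-loss model, illustrative numbers only: the PCS threshold
# determines the distance at which a transmission is still sensed.
def cs_range(p_tx_mw, threshold_mw, gamma=4.0):
    """Distance at which received power p_tx / d**gamma falls to the threshold."""
    return (p_tx_mw / threshold_mw) ** (1.0 / gamma)

p_tx = 100.0  # transmit power in mW (illustrative)
for thr_dbm in (-95, -90, -85, -80):
    thr_mw = 10 ** (thr_dbm / 10)
    print(f"PCS threshold {thr_dbm} dBm -> carrier sense range {cs_range(p_tx, thr_mw):6.1f} units")
```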

  • Modeling Spatial and Temporal Dependencies of User Mobility in Wireless Mobile Networks

    Publication Year: 2009 , Page(s): 1564 - 1577
    Cited by:  Papers (32)

    Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time-dependent behaviors and periodic reappearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.

  • Crossing Over the Bounded Domain: From Exponential to Power-Law Intermeeting Time in Mobile Ad Hoc Networks

    Publication Year: 2009 , Page(s): 1578 - 1591
    Cited by:  Papers (46)

    Intermeeting time between mobile nodes is one of the key metrics in a mobile ad hoc network (MANET) and central to the end-to-end delay of forwarding algorithms. It is typically assumed to be exponentially distributed in many performance studies of MANETs or numerically shown to be exponentially distributed under most existing mobility models in the literature. However, recent empirical results show otherwise: The intermeeting time distribution, in fact, follows a power law. This outright discrepancy potentially undermines our understanding of the performance tradeoffs in MANETs obtained under the exponential distribution of the intermeeting time and thus calls for further study of the power-law intermeeting time, including its fundamental cause, mobility modeling, and its effect. In this paper, we rigorously prove that a finite domain, on which most current mobility models are defined, plays an important role in creating the exponential tail of the intermeeting time. We also prove that by simply removing the boundary in a simple two-dimensional isotropic random walk model, we are able to obtain the empirically observed power-law decay of the intermeeting time. We then discuss the relationship between the size of the boundary and the relevant timescale of the network scenario under consideration. Our results thus provide guidelines on mobility modeling with power-law intermeeting time distributions, new protocols including packet-forwarding algorithms, and their performance analysis.
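
    The boundary effect is easy to observe in a toy simulation: two random walkers on a small torus meet again quickly, with a light-tailed intermeeting time, while removing the boundary stretches the tail. The sketch below uses deliberately tiny sizes and censors long runs, so its numbers are qualitative only.

```python
# Toy simulation of the boundary effect on intermeeting times; not the
# paper's model, and long runs are censored at max_t.
import numpy as np

rng = np.random.default_rng(2)
STEPS = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def intermeeting(bound=None, max_t=20_000):
    a, b = np.array([0, 0]), np.array([2, 0])   # even separation so meeting is possible
    for t in range(1, max_t):
        a = a + STEPS[rng.integers(4)]
        b = b + STEPS[rng.integers(4)]
        if bound is not None:
            a %= bound                           # wrap on a torus (bounded domain)
            b %= bound
        if (a == b).all():
            return t
    return max_t                                 # censored sample

for label, bound in (("bounded 20x20 torus", 20), ("unbounded plane", None)):
    samples = [intermeeting(bound) for _ in range(100)]
    print(f"{label}: median {int(np.median(samples))}, 90th pct {int(np.percentile(samples, 90))}")
```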

  • Opportunistic Energy-Efficient Contact Probing in Delay-Tolerant Applications

    Publication Year: 2009 , Page(s): 1592 - 1605
    Cited by:  Papers (13)

    In many delay-tolerant applications, information is opportunistically exchanged between mobile devices that encounter each other. In order to effect such information exchange, mobile devices must have knowledge of other devices in their vicinity. We consider scenarios in which there is no infrastructure and devices must probe their environment to discover other devices. This can be an extremely energy-consuming process and highlights the need for energy-conscious contact-probing mechanisms. If devices probe very infrequently, they might miss many of their contacts. On the other hand, frequent contact probing might be energy inefficient. In this paper, we investigate the tradeoff between the probability of missing a contact and the contact-probing frequency. First, via theoretical analysis, we characterize the tradeoff between the probability of a missed contact and the contact-probing interval for stationary processes. Next, for time-varying contact arrival rates, we provide an optimization framework to compute the optimal contact-probing interval as a function of the arrival rate. We characterize real-world contact patterns via Bluetooth phone contact-logging experiments and show that the contact arrival process is self-similar. We design STAR, a contact-probing algorithm that adapts to the contact arrival process. Instead of using constant probing intervals, STAR dynamically chooses the probing interval using both the short-term contact history and the long-term history based on time-of-day information. Via trace-driven simulations on our experimental data, we demonstrate that STAR requires three to five times less energy for device discovery than a constant contact-probing interval scheme.
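
    The probing tradeoff can be illustrated with a small Monte Carlo experiment: contacts arrive as a Poisson process with exponentially distributed durations, the device probes every T seconds, and a contact is missed if no probe lands inside it. All rates below are invented.

```python
# Monte Carlo sketch of the probe-interval/miss-probability tradeoff.
# Contact statistics are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)

def miss_fraction(T, rate=1 / 600, mean_dur=120.0, n_contacts=20_000):
    starts = np.cumsum(rng.exponential(1 / rate, size=n_contacts))
    durs = rng.exponential(mean_dur, size=n_contacts)
    # A probe at a multiple of T falls inside [s, s+d] iff the two floors differ.
    missed = np.floor((starts + durs) / T) == np.floor(starts / T)
    return missed.mean()

for T in (30, 60, 120, 300, 600):
    print(f"probe every {T:>3} s -> missed contacts: {miss_fraction(T):5.1%}")
```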

  • Bounded-Mean-Delay Throughput and Nonstarvation Conditions in Aloha Network

    Publication Year: 2009 , Page(s): 1606 - 1618
    Cited by:  Papers (4)

    Prior investigations on the Aloha network have primarily focused on its system throughput. Good system throughput, however, does not automatically translate to good delay performance for the end users. Neither is fairness guaranteed: Some users may starve, while others hog the system. This paper establishes the conditions for bounded mean queuing delay and nonstarved operation of the slotted Aloha network. We focus on the performance when collisions of packets are resolved using an exponential backoff protocol. For a nonsaturated network, we find that bounded mean delay and nonstarved operation can be guaranteed only if the offered load is limited to below a quantity called the "safe bounded mean-delay (SBMD) throughput." The SBMD throughput can be much lower than the saturation system throughput if the backoff factor r in the exponential backoff algorithm is not properly set. For example, it is well known that the maximum throughput of the Aloha network is e^-1 ≈ 0.3679. However, for r = 2, a value assumed in many prior investigations, the SBMD throughput is only 0.2158, a drastic penalty of 41% relative to 0.3679. Fortunately, using r = 1.3757 allows us to obtain an SBMD throughput of 0.3545, less than 4% away from 0.3679. A general conclusion is that the system parameters can significantly affect the delay and fairness performance of the Aloha network. This paper provides the analytical framework and expressions for tuning r and other system parameters to achieve good delay and nonstarved operation.
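
    A toy slotted-Aloha simulation in this spirit is easy to write. Note that this simplified model is not the paper's analytical framework and will not reproduce the quoted SBMD values, but it shows qualitatively how the backoff factor r affects backlog (and hence delay and starvation) at a fixed offered load.

```python
# Toy slotted Aloha with multiplicative backoff factor r; all parameters
# are illustrative assumptions, not the paper's model.
import random

def run(r, load=0.30, n=50, slots=50_000, w0=2.0):
    random.seed(4)                      # same arrival pattern for every r
    queue, window = [0] * n, [w0] * n
    delivered = 0
    for _ in range(slots):
        for i in range(n):              # Bernoulli arrivals, total offered load `load`
            if random.random() < load / n:
                queue[i] += 1
        txs = [i for i in range(n) if queue[i] and random.random() < 1 / window[i]]
        if len(txs) == 1:               # success: deliver and reset the window
            queue[txs[0]] -= 1
            window[txs[0]] = w0
            delivered += 1
        elif len(txs) > 1:              # collision: back off multiplicatively
            for i in txs:
                window[i] *= r
    return delivered / slots, max(queue)

for r in (2.0, 1.3757):
    thr, backlog = run(r)
    print(f"r = {r}: throughput {thr:.3f}, largest remaining backlog {backlog}")
```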

  • On Conflict-Free All-to-All Broadcast in One-Hop Optical Networks of Arbitrary Topologies

    Publication Year: 2009 , Page(s): 1619 - 1630
    Cited by:  Papers (1)

    In this paper, we investigate the problem of all-to-all broadcast in optical networks, also known as gossiping. This problem is very important in the context of control plane design, as it relates to status information dissemination. We present a routing and wavelength assignment (RWA) method to reduce the number of wavelengths such that the communication is conflict-free in a wavelength division multiplexing (WDM) optical environment without wavelength converters. Our approach utilizes the tap-and-continue capability of the optical nodes. The network topology may be arbitrary as long as it is connected. Both cases of maximally and nonmaximally edge-connected graphs are studied. For the first case, we give a closed-form expression for the lower bound on the number of wavelengths, an elegant extension of earlier results on concurrent broadcast trees in optical networks. Furthermore, we show how to achieve this bound. The second case is more involved and requires a specific procedure to achieve the minimum number of wavelengths. For this case, we provide an attractive method for the RWA algorithm that attempts to minimize the number of wavelengths. Our solution for this case is within a constant factor, strictly less than 2, of the optimal solution. The proposed algorithm uses the concept of a "cactus" representation of all minimum edge-cuts in a graph in a novel recursive approach.

  • Resource Dimensioning Through Buffer Sampling

    Publication Year: 2009 , Page(s): 1631 - 1644
    Cited by:  Papers (2)

    Link dimensioning, i.e., selecting a (minimal) link capacity such that the users' performance requirements are met, is a crucial component of network design. It requires insight into the interrelationship among the traffic offered (in terms of the mean offered load M, but also its fluctuation around the mean, i.e., 'burstiness'), the envisioned performance level, and the capacity needed. We first derive, for different performance criteria, theoretical dimensioning formulas that estimate the required capacity C as a function of the input traffic and the performance target. For the special case of Gaussian input traffic, these formulas reduce to C = M + α√V, where α directly relates to the performance requirement (as agreed upon in a service level agreement) and V reflects the burstiness (at the timescale of interest). We also observe that Gaussianity applies for virtually all realistic scenarios; notably, already at a relatively low aggregation level, the Gaussianity assumption is justified.
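
    The Gaussian dimensioning rule lends itself to a worked example. Treating α as a standard-normal quantile for a target exceedance probability is our illustrative reading of how α encodes the performance requirement; M and V below are made up.

```python
# Worked example of the Gaussian dimensioning rule C = M + alpha * sqrt(V).
# Mapping alpha to a normal quantile is an illustrative assumption.
from math import sqrt
from statistics import NormalDist

M = 800.0        # mean offered load, Mb/s (invented)
V = 40_000.0     # variance of the load at the timescale of interest (invented)
for eps in (1e-2, 1e-3, 1e-4):
    alpha = NormalDist().inv_cdf(1 - eps)
    print(f"P(load > C) <= {eps:g}: C = {M + alpha * sqrt(V):7.1f} Mb/s")
```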

  • Router Buffer Sizing for TCP Traffic and the Role of the Output/Input Capacity Ratio

    Publication Year: 2009 , Page(s): 1645 - 1658
    Cited by:  Papers (17)  |  Patents (1)

    The issue of router buffer sizing is still open and significant. Previous work either considers open-loop traffic or analyzes only persistent TCP flows. This paper differs in two ways. First, it considers the more realistic case of nonpersistent TCP flows with a heavy-tailed size distribution. Second, instead of only looking at link metrics, it focuses on the impact of buffer sizing on TCP performance. Specifically, our goal is to find the buffer size that maximizes the average per-flow TCP throughput. Through a combination of testbed experiments, simulation, and analysis, we reach the following conclusions. The output/input capacity ratio at a network link largely determines the required buffer size. If that ratio is larger than 1, the loss rate drops exponentially with the buffer size and the optimal buffer size is close to 0. Otherwise, if the output/input capacity ratio is lower than 1, the loss rate follows a power-law reduction with the buffer size and significant buffering is needed, especially with TCP flows that are in congestion avoidance. Smaller transfers, which are mostly in slow-start, require significantly smaller buffers. We conclude by revisiting the ongoing debate on "small versus large" buffers from a new perspective.

  • CR Switch: A Load-Balanced Switch With Contention and Reservation

    Publication Year: 2009 , Page(s): 1659 - 1671
    Cited by:  Papers (8)

    Load-balanced switches have received a great deal of attention recently, as they are much more scalable than other existing switch architectures in the literature. However, as there exist multiple paths for flows of packets to traverse through load-balanced switches, packets in such switches may be delivered out of order. In this paper, we propose a new switch architecture, called the contention and reservation (CR) switch, that not only delivers packets in order but also guarantees 100% throughput. The key idea, as in a multiple-access channel, is to operate the CR switch in two modes: 1) the contention mode in light traffic and 2) the reservation mode in heavy traffic. To do this, we invent a new buffer management scheme, called virtual output queue with insertion (I-VOQ). With the I-VOQ scheme, we give rigorous mathematical proofs for 100% throughput and in-order packet delivery of the CR switch. By computer simulations, we also demonstrate that the average packet delay of the CR switch is considerably lower than that of other schemes in the literature, including the uniform frame spreading scheme, the padded frame scheme, and the mailbox switch.


Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.


Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign