IEEE/ACM Transactions on Networking

Issue 1 • Feb. 2013

  • Front Cover

    Page(s): C1 - C4
  • IEEE/ACM Transactions on Networking publication information

    Page(s): C2
  • Answering “What-If” Deployment and Configuration Questions With WISE: Techniques and Deployment Experience

    Page(s): 1 - 13

    Designers of content distribution networks (CDNs) often need to determine how changes to infrastructure deployment and configuration affect service response times when they deploy a new data center, change ISP peering, or change the mapping of clients to servers. Today, designers rely on coarse, back-of-the-envelope calculations or costly field deployments; they need better ways to evaluate the effects of such hypothetical “what-if” questions before the actual deployments. This paper presents the What-If Scenario Evaluator (WISE), a tool that predicts the effects of possible configuration and deployment changes in content distribution networks. WISE makes three contributions: 1) an algorithm that uses traces from existing deployments to learn causality among the factors that affect service response-time distributions; 2) an algorithm that uses the learned causal structure to estimate a dataset representative of the hypothetical scenario that a designer may wish to evaluate, and uses these datasets to predict hypothetical response-time distributions; and 3) a scenario specification language that allows a network designer to express hypothetical deployment scenarios easily, without detailed knowledge of the dependencies between the variables that affect service response times. Our evaluation, both in a controlled setting and in a real-world field deployment on a large, global CDN, shows that WISE can quickly and accurately predict service response-time distributions for many practical what-if scenarios.
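
    The causal-resampling idea can be sketched compactly. Below is a minimal, illustrative sketch, not WISE's actual API: the toy causal chain server -> RTT -> response time, the record fields, and the 10-ms binning are our assumptions. A what-if remapping of clients to servers is evaluated by resampling each dependent variable from its empirical conditional distribution under the new parent value.

    ```python
    import random
    from collections import defaultdict

    def whatif_response_times(trace, remap, n_samples=10000):
        """trace: list of dicts with keys 'server', 'rtt', 'resp' (ms);
        remap: {old_server: new_server}, the hypothetical deployment change.
        Assumes every target server already appears somewhere in the trace."""
        rtt_by_server = defaultdict(list)
        resp_by_rtt = defaultdict(list)
        for r in trace:
            rtt_by_server[r['server']].append(r['rtt'])
            resp_by_rtt[round(r['rtt'], -1)].append(r['resp'])  # 10-ms RTT bins
        samples = []
        for r in random.choices(trace, k=n_samples):
            server = remap.get(r['server'], r['server'])
            rtt = random.choice(rtt_by_server[server])   # resample child given new parent
            nearest = min(resp_by_rtt, key=lambda b: abs(b - rtt))
            samples.append(random.choice(resp_by_rtt[nearest]))
        return samples  # empirical what-if response-time distribution
    ```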

  • Complexity Analysis and Algorithm Design for Advance Bandwidth Scheduling in Dedicated Networks

    Page(s): 14 - 27

    An increasing number of high-performance networks provision dedicated channels through circuit switching or MPLS/GMPLS techniques to support large data transfers. The link bandwidths in such networks are typically shared by multiple users through advance reservation, resulting in time-varying bandwidth availability in the future. Developing efficient scheduling algorithms for advance bandwidth reservation has therefore become a critical task for improving the utilization of network resources and meeting the transport requirements of application users. We consider an exhaustive combination of path and bandwidth constraints and formulate four types of advance bandwidth scheduling problems, all with the same objective of minimizing the data transfer end time for a given transfer request with a prespecified data size: fixed path with fixed bandwidth (FPFB); fixed path with variable bandwidth (FPVB); variable path with fixed bandwidth (VPFB); and variable path with variable bandwidth (VPVB). For VPFB and VPVB, we further consider two subcases in which the path switching delay is negligible or nonnegligible. We propose an optimal algorithm for each of these scheduling problems except FPVB and VPVB with nonnegligible path switching delay, which are proven to be NP-complete and nonapproximable and are then tackled by heuristics. The superior performance of these heuristics is verified by extensive experiments on a large set of simulated networks, in comparison to optimal and greedy strategies.
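
    For intuition, the simplest variant (FPFB) reduces to scanning the slotted residual bandwidths of the fixed path for the earliest feasible window. A minimal sketch under an assumed time-slotted reservation model; the function name and slot model are ours:

    ```python
    import math

    def fpfb_end_time(avail, b, size, slot=1.0):
        """avail[l][t]: residual bandwidth of link l of the fixed path in slot t.
        Returns the earliest finish time for moving `size` units of data at
        fixed bandwidth b, or None if no feasible window exists."""
        need = math.ceil(size / (b * slot))       # consecutive feasible slots required
        run = 0
        for t in range(len(avail[0])):
            if all(link[t] >= b for link in avail):
                run += 1
                if run == need:
                    return (t + 1) * slot         # transfer completes at end of slot t
            else:
                run = 0
        return None
    ```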

  • Diffusion Dynamics of Network Technologies With Bounded Rational Users: Aspiration-Based Learning

    Page(s): 28 - 40

    Motivated by the need for new network architectures to complement the current Internet, economic models have recently been proposed to study the adoption dynamics of entrant and incumbent technologies. We propose new models of these adoption dynamics among boundedly rational users who, following aspiration-based learning, choose a satisfying strategy rather than an optimal one. Two models of adoption dynamics are proposed, according to the characteristics of the aspiration level. The impacts of the switching cost, the benefits from the entrant and incumbent technologies, and the initial aspiration level on the adoption dynamics are investigated.
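
    The flavor of an aspiration-based update is easy to convey. A minimal, illustrative sketch; the parameter names and the EWMA aspiration update are our assumptions, not the paper's exact models:

    ```python
    import random

    def aspiration_step(choice, aspiration, payoff, alpha=0.1, p_switch=0.5):
        """One satisficing update: keep the current technology if its payoff
        meets the aspiration level; otherwise switch with some probability.
        The aspiration level itself tracks realized payoffs (EWMA)."""
        if payoff < aspiration and random.random() < p_switch:
            choice = 'entrant' if choice == 'incumbent' else 'incumbent'
        aspiration = (1 - alpha) * aspiration + alpha * payoff
        return choice, aspiration
    ```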

  • Delay-Based Network Utility Maximization

    Page(s): 41 - 54

    It is well known that max-weight policies based on a queue backlog index can be used to stabilize stochastic networks, and that similar stability results hold if a delay index is used. Using Lyapunov optimization, we extend this analysis to design a utility maximizing algorithm that uses explicit delay information from the head-of-line packet at each user. The resulting policy is shown to ensure deterministic worst-case delay guarantees and to yield a throughput utility that differs from the optimally fair value by an amount that is inversely proportional to the delay guarantee. Our results hold for a general class of 1-hop networks, including packet switches and multiuser wireless systems with time-varying reliability.
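
    The scheduling rule itself is a one-liner. A minimal sketch of a delay-based max-weight decision; the tuple layout is our assumption, and the paper's policy adds utility-maximization machinery on top:

    ```python
    def delay_based_maxweight(users):
        """users: list of (hol_delay, service_rate) pairs, one per user.
        Serve the user maximizing head-of-line delay x current service rate,
        the delay analogue of backlog-based max-weight."""
        return max(range(len(users)), key=lambda i: users[i][0] * users[i][1])

    # Example: delay_based_maxweight([(0.8, 2.0), (2.5, 1.0), (0.3, 5.0)]) == 1
    ```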

  • A Distributed Control Law for Load Balancing in Content Delivery Networks

    Page(s): 55 - 68

    In this paper, we address the challenging problem of defining and implementing an effective law for load balancing in Content Delivery Networks (CDNs). We base our proposal on a formal study of a CDN system, carried out using a fluid flow model of the network of servers. From this characterization, we derive and prove a lemma about the equilibrium of the network queues. This result is then leveraged to devise a novel distributed and time-continuous algorithm for load balancing, which is also reformulated in a time-discrete version. The discrete formulation of the proposed balancing law is then discussed in terms of its actual implementation in a real-world scenario. Finally, the overall approach is validated by means of simulations.

  • Efficient Algorithms for Neighbor Discovery in Wireless Networks

    Page(s): 69 - 83

    Neighbor discovery is an important first step in the initialization of a wireless ad hoc network. In this paper, we design and analyze several algorithms for neighbor discovery in wireless networks. Starting with a single-hop wireless network of n nodes, we propose a Θ(n ln n) ALOHA-like neighbor discovery algorithm for the case when nodes cannot detect collisions, and an order-optimal Θ(n) receiver-feedback-based algorithm for the case when they can. Our algorithms require neither a priori estimates of the number of neighbors nor synchronization between nodes; they allow nodes to begin execution at different time instants and to terminate upon discovering all their neighbors. We also show that receiver feedback can be used to achieve a Θ(n) running time even when nodes cannot detect collisions. We then analyze neighbor discovery in a general multihop setting. We establish an upper bound of O(Δ ln n) on the running time of the ALOHA-like algorithm, where Δ denotes the maximum node degree in the network and n the total number of nodes, and a lower bound of Ω(Δ + ln n) on the running time of any randomized neighbor discovery algorithm. The ALOHA-like algorithm is thus at most a factor min(Δ, ln n) worse than optimal.
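
    To see where Θ(n ln n) comes from: with transmission probability 1/n, a slot contains exactly one transmitter with probability about 1/e, and a coupon-collector argument over the n nodes contributes the ln n factor. A minimal simulation sketch, assuming a synchronized single-hop toy model purely for illustration:

    ```python
    import random

    def aloha_discovery_slots(n, p=None):
        """ALOHA-like discovery without collision detection: every node
        transmits with probability p in each slot; a node is discovered by
        all others in the first slot in which it is the sole transmitter."""
        p = p if p is not None else 1.0 / n
        undiscovered, slots = set(range(n)), 0
        while undiscovered:
            slots += 1
            tx = [i for i in range(n) if random.random() < p]
            if len(tx) == 1:
                undiscovered.discard(tx[0])
        return slots  # for p = 1/n, concentrates around ~ e * n * ln(n)
    ```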

  • Stochastic Game for Wireless Network Virtualization

    Page(s): 84 - 97

    We propose a new framework for wireless network virtualization. In this framework, service providers (SPs) and the network operator (NO) are decoupled from each other: The NO is solely responsible for spectrum management, and SPs are responsible for quality-of-service (QoS) management for their own users. SPs compete for the shared wireless resources to satisfy their distinct service objectives and constraints. We model the dynamic interactions among SPs and the NO as a stochastic game. SPs bid for the resources by dynamically announcing their value functions. The game is regulated by the NO through: 1) sum-utility optimization under rate region constraints; 2) enforcement of the Vickrey-Clarke-Groves (VCG) mechanism for pricing the instantaneous rate consumption; and 3) declaration of conjectured prices for future resource consumption. We prove that there exists a Nash equilibrium in the conjectural prices that is efficient, i.e., sum-utility-maximizing. Thus, the NO has an incentive to compute the equilibrium point and feed it back to the SPs. Given the conjectural prices and the VCG mechanism, we also show that SPs maximize their long-term utilities by truthfully revealing their value functions at each step. As another major contribution, we develop an online learning algorithm that allows the SPs to update the value functions and the NO to update the conjectural prices iteratively. The proposed framework can thus deal with unknown dynamics in traffic characteristics and channel conditions. We present simulation results showing convergence to the Nash equilibrium prices under various dynamic traffic and channel conditions.

  • ABC: Adaptive Binary Cuttings for Multidimensional Packet Classification

    Page(s): 98 - 109

    Decision-tree-based packet classification algorithms are easy to implement and allow a tradeoff between storage and throughput. However, their memory consumption remains quite high when high throughput is required. The Adaptive Binary Cuttings (ABC) algorithm exploits another degree of freedom to make the decision tree adapt to the geometric distribution of the filters. Three variations of the adaptive cutting procedure produce a set of different-sized cuts at each decision step, with the goal of balancing the distribution of filters and reducing the filter-duplication effect. The ABC algorithm uses stronger and more straightforward criteria for decision-tree construction. Coupled with an efficient node-encoding scheme, it enables a smaller, shorter, and well-balanced decision tree. A hardware-oriented implementation of each variation is proposed and evaluated extensively to demonstrate its scalability and sensitivity to different configurations. The results show that ABC significantly outperforms other decision-tree-based algorithms. It can sustain more than 10-Gb/s throughput and is the only one among the existing well-known packet classification algorithms that can compete with TCAMs in terms of storage efficiency.

  • A Utility Maximization Framework for Fair and Efficient Multicasting in Multicarrier Wireless Cellular Networks

    Page(s): 110 - 120

    Multicast/broadcast is regarded as an efficient technique for wireless cellular networks to transmit a large volume of common data to multiple mobile users simultaneously. To guarantee the quality of service for each mobile user in such single-hop multicasting, the base-station transmitter usually adapts its data rate to the worst channel condition among all users in a multicast group. On one hand, increasing the number of users in a multicast group leads to more efficient utilization of spectrum bandwidth, as users in the same group can be served together. On the other hand, too many users in a group may force the base station to transmit at an unacceptably low data rate. Hence, a natural question is how to transmit efficiently and fairly to a large number of users requiring the same message. This paper endeavors to answer this question by studying the problem of multicasting over multicarriers in wireless orthogonal frequency-division multiplexing (OFDM) cellular systems. Using a unified utility maximization framework, we investigate this problem in two typical scenarios: when users experience roughly equal path losses and when they experience different path losses. Through theoretical analysis, we obtain optimal multicast schemes satisfying various throughput-fairness requirements in these two cases. In particular, we show that the conventional multicast scheme is optimal in the equal-path-loss case regardless of the utility function adopted. When users experience different path losses, the group multicast scheme, which divides the users almost equally into many multicast groups and multicasts to different groups of users over nonoverlapping subcarriers, is optimal.
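
    The two schemes compared in the abstract can be sketched directly. A minimal illustration; the equal-split grouping and per-user accounting below are our simplifications:

    ```python
    def conventional_rate(rates):
        """Conventional single-group multicast: send at the worst user's rate."""
        return min(rates)

    def group_multicast(rates, g):
        """Sort users by achievable rate, split them into g nearly equal groups,
        and serve each group on its own 1/g share of the subcarriers at that
        group's worst-user rate. Returns per-user throughputs."""
        users = sorted(rates)
        size = -(-len(users) // g)            # ceil division
        out = []
        for i in range(0, len(users), size):
            group = users[i:i + size]
            out.extend([min(group) / g] * len(group))
        return out
    ```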

  • Achieving Efficient Flooding by Utilizing Link Correlation in Wireless Sensor Networks

    Page(s): 121 - 134

    Although existing flooding protocols can provide efficient and reliable communication in wireless sensor networks to some extent, further performance improvement has been hampered by the assumption of link independence, which requires costly acknowledgments (ACKs) from every receiver. In this paper, we present collective flooding (CF), which exploits link correlation to achieve flooding reliability using the concept of collective ACKs. CF requires only 1-hop information at each node, making the design highly distributed and scalable with low complexity. We evaluate CF extensively in real-world settings, using three different types of testbeds: a single-hop network with 20 MICAz nodes, a multihop network with 37 nodes, and a linear outdoor network with 48 nodes along a 326-m-long bridge. System evaluation and extensive simulation show that CF achieves the same reliability as state-of-the-art solutions while reducing the total number of packet transmissions and the dissemination delay by 30%-50% and 35%-50%, respectively.

  • Random Walks and Green's Function on Digraphs: A Framework for Estimating Wireless Transmission Costs

    Page(s): 135 - 148

    Various applications in wireless networks, such as routing and query processing, can be formulated as random walks on graphs. Many results have been obtained for such applications by utilizing the theory of random walks (or spectral graph theory), which is mostly developed for undirected graphs. However, this formalism neglects the fact that underlying (wireless) networks in practice contain asymmetric links, which are best characterized by directed graphs (digraphs). Random walks on digraphs are therefore a more appropriate model for such networks. In this paper, by generalizing the random walk theory (or spectral graph theory) that has been primarily developed for undirected graphs to digraphs, we show how various transmission costs in wireless networks can be formulated in terms of hitting times and cover times of random walks on digraphs. Using these results, we develop a unified theoretical framework for estimating various transmission costs in wireless networks. Our framework can be applied to the random-walk query processing strategy and to the three routing paradigms (best-path routing, opportunistic routing, and stateless routing) to which nearly all existing routing protocols belong. Extensive simulations demonstrate that the proposed digraph-based analytical model achieves more accurate transmission-cost estimation than existing methods.
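
    Hitting times on a digraph reduce to a small linear system; the directedness enters only through the asymmetric transition matrix. A minimal sketch in our notation:

    ```python
    import numpy as np

    def hitting_times(P, target):
        """Expected hitting times h[u] to `target` for a random walk with
        row-stochastic (possibly asymmetric) transition matrix P:
        h[target] = 0 and h[u] = 1 + sum_v P[u, v] * h[v] otherwise."""
        n = P.shape[0]
        idx = [u for u in range(n) if u != target]
        A = np.eye(n - 1) - P[np.ix_(idx, idx)]
        h = np.zeros(n)
        h[idx] = np.linalg.solve(A, np.ones(n - 1))
        return h
    ```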

  • A Flexible Platform for Hardware-Aware Network Experiments and a Case Study on Wireless Network Coding

    Page(s): 149 - 161

    In this paper, we present the design and implementation of a general, flexible, hardware-aware network platform that takes hardware processing behavior into consideration to accurately evaluate network performance. The platform adopts a network-hardware co-simulation approach in which the NS-2 network simulator supervises the network-wide traffic flow and the SystemC hardware simulator simulates the underlying hardware processing in network nodes. In addition, as a case study, we implemented wireless all-to-all broadcasting with network coding on the platform. We analyze the hardware processing behavior during the algorithm execution and evaluate the overall performance of the algorithm. Our experimental results demonstrate that hardware processing can have a significant impact on the algorithm performance and hence should be taken into consideration in the algorithm design. We expect that this hardware-aware platform will become a very useful tool for more accurate network simulations and more efficient design space exploration of processing-intensive applications.

  • Exploring the Design Space of Multichannel Peer-to-Peer Live Video Streaming Systems

    Page(s): 162 - 175

    Most commercial peer-to-peer (P2P) video streaming deployments support hundreds of channels and are referred to as multichannel systems. Recent research has proposed specific protocols to improve the streaming quality for all channels by enabling cross-channel cooperation among multiple channels. In this paper, we focus on the following fundamental problems in designing cooperative multichannel systems: 1) What are the general characteristics of existing and potential designs? 2) Under what circumstances should a particular design be used to achieve the desired streaming quality with the lowest implementation complexity? To answer the first question, we propose simple models based on linear programming and network-flow graphs for three general designs, namely the Naive Bandwidth allocation Approach (NBA), the Passive Channel-aware bandwidth allocation Approach (PCA), and the Active Channel-aware bandwidth allocation Approach (ACA), which provide insight into the key characteristics of cross-channel resource sharing. For the second question, we first develop closed-form results for two-channel systems. Then, we use extensive numerical simulations to compare the three designs for various peer population distributions, upload bandwidth distributions, and channel structures. Our analytical and simulation results show that: 1) the NBA design can rarely achieve the desired streaming quality in general cases; 2) the PCA design can achieve the same performance as the ACA design in general cases; and 3) the ACA design should be used for special applications.

  • Secondary Spectrum Trading—Auction-Based Framework for Spectrum Allocation and Profit Sharing

    Page(s): 176 - 189

    Recently, dynamic spectrum sharing has been gaining interest as a potential solution to the scarcity of available spectrum. We investigate the problem of designing a secondary spectrum-trading market with multiple sellers and multiple buyers and propose a general framework for the trading market based on an auction mechanism. To this end, we first introduce a new optimal auction mechanism, called the generalized Branco's mechanism (GBM). The GBM, which is both incentive-compatible and individually rational, is used to determine the assigned frequency bands and their prices. Second, we assume that buyers of the spectrum are selfish and model their interaction as a noncooperative game. Using this model, we prove that when the sellers employ the GBM to vend their frequency bands, they guarantee themselves the largest expected profits by selling their frequency bands jointly. Third, based on this finding, we model the interaction among the sellers as a cooperative game and demonstrate that, for any fixed strategies of the buyers, the core of the cooperative game is nonempty. This suggests that there exists a way for the sellers to share the profits from the joint sale of the spectrum so that no subset of sellers will find it beneficial to vend their frequency bands separately without the remaining sellers. Finally, we propose a profit-sharing scheme that can achieve any expected profit vector in the nonempty core of the cooperative game while satisfying two desirable properties.
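
    For readers unfamiliar with VCG-style pricing, the textbook special case of k identical bands and unit-demand bidders fits in a few lines. This is a generic VCG sketch, not the paper's generalized Branco's mechanism:

    ```python
    def vcg_unit_demand(bids, k):
        """Allocate k identical frequency bands among unit-demand bidders:
        the k highest bidders win, and each pays the (k+1)-th highest bid,
        i.e., exactly the externality it imposes on the losing bidders.
        Truthful bidding is a dominant strategy under this rule."""
        order = sorted(range(len(bids)), key=lambda i: -bids[i])
        price = bids[order[k]] if len(bids) > k else 0.0
        return order[:k], price

    # Example: vcg_unit_demand([5, 9, 3, 7], k=2) -> ([1, 3], 5)
    ```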

  • Towards Practical Communication in Byzantine-Resistant DHTs

    Page(s): 190 - 203

    There are several analytical results on distributed hash tables (DHTs) that can tolerate Byzantine faults. Unfortunately, in such systems, operations such as data retrieval and message sending incur significant communication costs. For example, a simple scheme used in many Byzantine fault-tolerant DHT constructions of n nodes requires O(log³ n) messages; this is likely impractical for real-world applications. The previous best-known message complexity is O(log² n) in expectation. However, the corresponding protocol suffers from prohibitive costs owing to hidden constants in the asymptotic notation and setup costs. In this paper, we focus on reducing the communication costs against a computationally bounded adversary. We employ threshold cryptography and distributed key generation to define two protocols, both of which are more efficient than existing solutions. Our first protocol is deterministic with O(log² n) message complexity, and our second protocol is randomized with expected O(log n) message complexity. Furthermore, both the hidden constants and the setup costs of our protocols are small, and no trusted third party is required. Finally, we present results from microbenchmarks conducted over PlanetLab showing that our protocols are practical for deployment under significant levels of churn and adversarial behavior.

  • Semi-Random Backoff: Towards Resource Reservation for Channel Access in Wireless LANs

    Page(s): 204 - 217

    This paper proposes a semi-random backoff (SRB) method that enables resource reservation in contention-based wireless LANs. SRB is fundamentally different from traditional random backoff methods because it provides an easy migration path from random backoffs to deterministic slot assignments. The central idea of SRB is for a wireless station to set its backoff counter to a deterministic value upon a successful packet transmission. This deterministic value allows the station to reuse its time-slot in consecutive backoff cycles. When multiple stations with successful packet transmissions reuse their respective time-slots, the collision probability is reduced, and the channel achieves the equivalent of resource reservation. In case of a failed packet transmission, a station reverts to the standard random backoff method and probes for a new available time-slot. The proposed SRB method can be readily applied to both 802.11 DCF and 802.11e EDCA networks with minimal modification to existing DCF/EDCA implementations. Theoretical analysis and simulation results validate the superior performance of SRB for small-scale and heavily loaded wireless LANs. When combined with an adaptive mechanism and a persistent backoff process, SRB can also be effective for large-scale and lightly loaded wireless networks.
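
    The central idea fits in a small state machine. A minimal sketch; the constants are illustrative, not the 802.11 values:

    ```python
    import random

    class SemiRandomBackoff:
        """After a success, reuse a deterministic counter so the station lands
        in the same relative time-slot in the next cycle; after a failure,
        revert to standard binary-exponential random backoff to probe for a
        new slot."""
        def __init__(self, cw_min=16, cw_max=1024, cycle=64):
            self.cw, self.cw_min, self.cw_max = cw_min, cw_min, cw_max
            self.cycle = cycle

        def next_counter(self, last_tx_succeeded):
            if last_tx_succeeded:
                self.cw = self.cw_min
                return self.cycle                # deterministic slot reuse
            self.cw = min(2 * self.cw, self.cw_max)
            return random.randrange(self.cw)     # random probe for a new slot
    ```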

  • Entry and Spectrum Sharing Scheme Selection in Femtocell Communications Markets

    Page(s): 218 - 232

    Focusing on a femtocell communications market, we study the entrant network service provider's (NSP's) long-term decision: whether to enter the market and which spectrum sharing technology to select to maximize its profit. This long-term decision is closely related to the entrant's pricing strategy and the users' aggregate demand, which we model as medium-term and short-term decisions, respectively. We consider two markets, one with no incumbent and the other with one incumbent. For both markets, we show the existence and uniqueness of an equilibrium point in the user subscription dynamics and provide a sufficient condition for the convergence of the dynamics. For the market with no incumbent, we derive upper and lower bounds on the optimal price and market share that maximize the entrant's revenue, based on which the entrant selects an available technology to maximize its long-term profit. For the market with one incumbent, we model competition between the two NSPs as a noncooperative game, in which the incumbent and the entrant choose their market shares independently, and provide a sufficient condition that guarantees the existence of at least one pure Nash equilibrium. Finally, we formalize the problem of entry and spectrum-sharing scheme selection for the entrant and provide numerical results to complement our analysis.

  • On Replication Algorithm in P2P VoD

    Page(s): 233 - 243

    Traditional video-on-demand (VoD) systems rely purely on servers to stream video content to clients, an approach that does not scale. In recent years, peer-to-peer-assisted VoD (P2P VoD) has proven practical and effective. In P2P VoD, each peer contributes some storage to store videos (or segments of videos) to help the video server. Assuming peers have sufficient bandwidth for the given video playback rate, a fundamental question is: what is the relationship between the storage capacity at each peer, the number of videos, the number of peers, and the resultant offloading of video-server bandwidth? In this paper, we use a simple statistical model to derive this relationship. We propose and analyze a generic replication algorithm, Random with Load Balancing (RLB), that balances the service to all movies for both deterministic and random (but stationary) demand models and for both homogeneous and heterogeneous peers (in upload bandwidth). We use simulation to validate our results, for sensitivity analysis, and for comparisons to other popular replication algorithms. This study leads to several fundamental insights for practical P2P VoD system design.
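
    The load-balancing intuition behind demand-aware replication can be sketched as a simple water-filling loop. This is our illustration of the general idea, not the paper's exact RLB algorithm:

    ```python
    def demand_proportional_replicas(demand, total_slots):
        """Give every movie one copy, then repeatedly add a copy to the movie
        with the highest demand-per-replica ratio, equalizing the expected
        load each replica must serve. demand: {movie: popularity};
        total_slots: aggregate peer storage, in movie-sized units."""
        replicas = {m: 1 for m in demand}
        for _ in range(total_slots - len(demand)):
            m = max(demand, key=lambda v: demand[v] / replicas[v])
            replicas[m] += 1
        return replicas

    # Example: demand_proportional_replicas({'a': 8, 'b': 3, 'c': 1}, 12)
    # -> {'a': 8, 'b': 3, 'c': 1}, i.e., copies track popularity.
    ```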

  • Back-Pressure-Based Packet-by-Packet Adaptive Routing in Communication Networks

    Page(s): 244 - 257

    Back-pressure-based adaptive routing algorithms where each packet is routed along a possibly different path have been extensively studied in the literature. However, such algorithms typically result in poor delay performance and involve high implementation complexity. In this paper, we develop a new adaptive routing algorithm built upon the widely studied back-pressure algorithm. We decouple the routing and scheduling components of the algorithm by designing a probabilistic routing table that is used to route packets to per-destination queues. The scheduling decisions in the case of wireless networks are made using counters called shadow queues. The results are also extended to the case of networks that employ simple forms of network coding. In that case, our algorithm provides a low-complexity solution to optimally exploit the routing-coding tradeoff.
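
    For reference, the classic per-packet back-pressure decision that the paper starts from can be written in a few lines (our notation; the paper's contribution replaces this with probabilistic routing tables plus shadow-queue-based scheduling):

    ```python
    def backpressure_decision(queues, neighbors, node):
        """queues[n][d]: backlog at node n for destination d.
        Serve the (neighbor, destination) pair with the largest positive
        queue differential; return None (stay idle) if none is positive."""
        best, best_w = None, 0
        for nb in neighbors[node]:
            for dest, backlog in queues[node].items():
                w = backlog - queues[nb].get(dest, 0)
                if w > best_w:
                    best, best_w = (nb, dest), w
        return best
    ```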

  • Scheduling in a Random Environment: Stability and Asymptotic Optimality

    Page(s): 258 - 271

    We investigate the scheduling of a common resource among several concurrent users when the feasible transmission rate of each user varies randomly over time. Time is slotted, and users arrive and depart upon service completion. This may model, for example, the flow-level behavior of end-users in a narrowband HDR wireless channel (CDMA 1xEV-DO). As performance criteria, we consider the stability of the system and the mean delay experienced by the users. Given the complexity of the problem, we investigate the fluid-scaled system, which allows us to obtain important results and insights for the original system: 1) We characterize, for a large class of scheduling policies, the stability conditions and identify a set of maximum-stable policies that give preference in each time-slot to users in their best possible channel condition. We find in particular that many opportunistic scheduling policies, such as Score-Based, Proportionally Best, and Potential Improvement, are stable under the maximum stability conditions, whereas the opportunistic Relative-Best scheduler and the cμ-rule are not. 2) We show that choosing the right tie-breaking rule is crucial for the performance (e.g., average delay) perceived by a user. We prove that a policy is asymptotically optimal if it is maximum-stable and its tie-breaking rule gives priority to the user with the highest departure probability; we refer to such a tie-breaking rule as myopic. 3) We derive the growth rates of the number of users in the system in overload settings under various policies, which give additional insights on the performance. 4) We conclude that simple priority-index policies with the myopic tie-breaking rule are stable and asymptotically optimal. All our findings are validated with extensive numerical experiments.
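
    The recommended rule is compact enough to state as code. A minimal sketch; the tuple layout is our assumption:

    ```python
    def pick_user(users):
        """users: list of (current_rate, best_rate, depart_prob) tuples.
        Prefer users currently in their best possible channel condition
        (a maximum-stable choice); break ties myopically in favor of the
        highest departure probability."""
        return max(range(len(users)),
                   key=lambda i: (users[i][0] >= users[i][1], users[i][2]))
    ```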

  • An Empirical Interference Modeling for Link Reliability Assessment in Wireless Networks

    Page(s): 272 - 285

    In recent years, it has been widely believed that link reliability is strongly related to the received signal strength indicator (RSSI) [or the signal-to-interference-plus-noise ratio (SINR)] and that external interference makes it unpredictable. This differs from the earlier understanding that there is no tight relationship between link reliability and RSSI (or SINR) and that multipath fading causes the unpredictability. Neither view fully explains why the unpredictability appears in the link state. In this paper, we address the following questions: 1) What causes the frame losses that are directly related to intermediate link states? 2) Is RSSI or SINR the right criterion to represent link reliability? 3) Is there a better measure for assessing link reliability? We first configured a testbed for a real measurement study to identify the causes of frame losses, and observed that link reliability depends on an intraframe SINR distribution, not on a single value of RSSI (or SINR). We also learned that an RSSI value is not always a good indicator for estimating the link state. We then investigated the intraframe SINR distribution and the relationship between SINR and link reliability further with the ns-2 simulator. Based on these results, we finally propose an interference modeling framework for estimating link states in the presence of wireless interference. We envision that the framework can be used to develop link-aware protocols that achieve optimal performance in a hostile wireless environment.

  • On Downlink Capacity of Cellular Data Networks With WLAN/WPAN Relays

    Page(s): 286 - 296

    We consider the downlink of a cellular network supporting data traffic in which each user is equipped with the same type of IEEE 802.11-like WLAN or WPAN interface, used to relay packets to more distant users. We are interested in design guidelines for such networks and in how much capacity improvement the additional relay layer can bring. A first objective is to provide a scheduling/relay strategy that maximizes the network capacity. Using theoretical analysis, numerical evaluation, and simulations, we find that when the number of active users is large, the capacity-achieving strategy divides the cell into two areas: one closer to the base station, where the relay layer is always saturated and some nodes receive traffic through both direct and relay links, and a farther one, where the relay layer is never saturated and direct traffic is almost nonexistent. We also show that it is approximately optimal to use fixed relay link lengths, and we derive this length. We show that the obtained capacity is independent of the cell size (unlike in traditional cellular networks). Based on our findings, we propose simple decentralized routing and scheduling protocols. We show that in a fully saturated network, our optimized protocol substantially improves performance over protocols that use naive relay-only or direct-only policies.

  • Centralized and Distributed Protocols for Tracker-Based Dynamic Swarm Management

    Page(s): 297 - 310

    With BitTorrent, efficient peer upload utilization is achieved by splitting content into many small pieces, each of which may be downloaded from different peers within the same swarm. Unfortunately, piece and bandwidth availability may cause file-sharing efficiency to degrade in small swarms with few participating peers. Using extensive measurements, we identified hundreds of thousands of torrents with several small swarms for which reallocating peers among swarms and/or modifying the peer behavior could significantly improve system performance. Motivated by this observation, we propose a centralized and a distributed protocol for dynamic swarm management. The centralized protocol (CSM) manages the swarms of peers at minimal tracker overhead. The distributed protocol (DSM) manages the swarms of peers while ensuring load fairness among the trackers. Both protocols achieve their performance improvements by identifying and merging small swarms and by allowing load sharing for large torrents. Our evaluations are based on measurement data collected over eight days from more than 700 trackers worldwide, which collectively maintain state information about 2.8 million unique torrents. We find that CSM and DSM can achieve most of the performance gains of dynamic swarm management, estimated to be up to 40% on average for small torrents.
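
    The merging step at the heart of both protocols can be illustrated with a greedy sketch (our simplification, not the CSM/DSM protocols themselves; the threshold is hypothetical):

    ```python
    SMALL = 20  # illustrative swarm-size threshold

    def merge_plan(swarms):
        """swarms: {torrent_id: {tracker_id: peer_count}}. Within each torrent,
        redirect peers in undersized swarms to the torrent's largest swarm so
        pieces and upload bandwidth are shared in one larger swarm."""
        plan = []
        for torrent, by_tracker in swarms.items():
            if len(by_tracker) < 2:
                continue
            target = max(by_tracker, key=by_tracker.get)
            plan += [(torrent, tracker, target)
                     for tracker, count in by_tracker.items()
                     if tracker != target and count < SMALL]
        return plan
    ```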


Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.


Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign