
IEEE/ACM Transactions on Networking

Issue 2 • April 2010


Displaying Results 1 - 25 of 29
  • [Front cover]

    Publication Year: 2010, Page(s): C1 - C4
    PDF (410 KB)
    Freely Available from IEEE
  • IEEE/ACM Transactions on Networking publication information

    Publication Year: 2010, Page(s): C2
    PDF (38 KB)
    Freely Available from IEEE
  • Cooperative Interdomain Traffic Engineering Using Nash Bargaining and Decomposition

    Publication Year: 2010, Page(s): 341 - 352
    Cited by: Papers (12)
    PDF (778 KB) | HTML

    We present a novel approach to interdomain traffic engineering based on the concepts of Nash bargaining and dual decomposition. Under this scheme, ISPs use an iterative procedure to jointly optimize a social cost function, referred to as the Nash product. We show that the global optimization problem can be separated into subproblems by introducing appropriate shadow prices on the interdomain flows. These subproblems can then be solved independently and in a decentralized manner by the individual ISPs. Our approach does not require the ISPs to share any sensitive internal information, such as network topology or link weights. More importantly, our approach is provably Pareto-efficient and fair. Therefore, we believe that our approach is highly amenable to adoption by ISPs when compared to past approaches. We also conduct simulation studies of our approach over several real ISP topologies. Our evaluation shows that the approach converges quickly, offers equitable performance improvements to ISPs, is significantly better than unilateral approaches (e.g., hot-potato routing) and offers the same performance as a centralized solution with full knowledge.
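
A minimal sketch of the dual-decomposition idea the abstract describes, on a toy problem rather than the paper's interdomain traffic-engineering formulation: two agents maximize a sum of log utilities (the log of a Nash product) coupled by a shared capacity C, and a shadow price on the coupling constraint lets each agent solve its subproblem independently while a master loop updates the price. All names and parameter values are illustrative.

```python
# Toy dual decomposition with a shadow price on a coupling constraint
# (illustrative only; not the paper's interdomain TE formulation):
#   maximize log(x1) + log(x2)  subject to  x1 + x2 <= C.
def solve_subproblem(lam):
    # Each agent privately maximizes log(x) - lam * x; the closed form is x = 1/lam.
    return 1.0 / lam

def dual_decomposition(C=10.0, step=0.01, iters=2000):
    lam = 1.0                                          # initial shadow price
    for _ in range(iters):
        x1 = solve_subproblem(lam)                     # agent 1's subproblem
        x2 = solve_subproblem(lam)                     # agent 2's subproblem
        lam = max(1e-6, lam + step * (x1 + x2 - C))    # raise price if capacity is exceeded
    return x1, x2, lam

print(dual_decomposition())   # converges to roughly (5.0, 5.0, 0.2)
```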

  • Provisioning of Deadline-Driven Requests With Flexible Transmission Rates in WDM Mesh Networks

    Publication Year: 2010, Page(s): 353 - 366
    Cited by: Papers (3)
    PDF (1215 KB) | HTML

    With the increasing diversity of applications supported over optical networks, new service guarantees must be offered to network customers. Among the emerging data-intensive applications are those which require their data to be transferred before a predefined deadline. We call these deadline-driven requests (DDRs). In such applications, data-transfer finish time (which must be accomplished before the deadline) is the key service guarantee that the customer wants. In fact, the amount of bandwidth allocated to transfer a request is not a concern for the customer as long as its service deadline is met. Hence, the service provider can choose the bandwidth (transmission rate) to provision the request. In this case, even though DDRs impose a deadline constraint, they provide scheduling flexibility for the service provider since it can choose the transmission rate while achieving two objectives: 1) satisfying the guaranteed deadline; and 2) optimizing the network's resource utilization. We investigate the problem of provisioning DDRs with flexible transmission rates in wavelength-division multiplexing (WDM) mesh networks, although this approach is generalizable to other networks also. We investigate several (fixed and adaptive to network state) bandwidth-allocation policies and study the benefit of allowing dynamic bandwidth adjustment, which is found to generally improve network performance. We show that the performance of the bandwidth-allocation algorithms depends on the DDR traffic distribution and on the node architecture and its parameters. In addition, we develop a mathematical formulation for our problem as a mixed integer linear program (MILP), which allows choosing flexible transmission rates and provides a lower bound for our provisioning algorithms.
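
As a small illustration of the scheduling flexibility the abstract refers to (not the paper's MILP or provisioning algorithms), the sketch below picks the smallest transmission rate from a hypothetical set of supported rates that still finishes a deadline-driven request on time.

```python
# Pick the smallest supported rate that still meets a DDR's deadline.
# The rate grid below is an assumption for the example, not from the paper.
SUPPORTED_RATES_GBPS = [1, 2, 5, 10, 40]

def min_feasible_rate(data_gbytes, deadline_s, now_s=0.0):
    """Smallest supported rate (Gb/s) finishing the transfer by the deadline, or None."""
    remaining = deadline_s - now_s
    if remaining <= 0:
        return None
    required_gbps = 8.0 * data_gbytes / remaining      # gigabytes -> gigabits, spread over the slack
    for rate in SUPPORTED_RATES_GBPS:                  # rates listed in increasing order
        if rate >= required_gbps:
            return rate
    return None                                        # even the fastest rate misses the deadline

print(min_feasible_rate(data_gbytes=500, deadline_s=600))   # needs ~6.7 Gb/s -> 10
```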

  • Upgrading Mice to Elephants: Effects and End-Point Solutions

    Publication Year: 2010, Page(s): 367 - 378
    Cited by: Papers (1)
    PDF (443 KB) | HTML

    Short TCP flows may suffer significant response-time performance degradations during network congestion. Unfortunately, this creates an incentive for misbehavior by clients of interactive applications (e.g., gaming, telnet, web): to send "dummy" packets into the network at a TCP-fair rate even when they have no data to send, thus improving their performance in moments when they do have data to send. Even though no "law" is violated in this way, a large-scale deployment of such an approach has the potential to seriously jeopardize one of the Internet's core principles: statistical multiplexing. We quantify, by means of analytical modeling and simulation, gains achievable by the above misbehavior. Our research indicates that easy-to-implement application-level techniques are capable of dramatically reducing incentives for conducting the above transgressions, without compromising the idea of statistical multiplexing.

  • Distributed Algorithms for Minimum Cost Multicast With Network Coding

    Publication Year: 2010, Page(s): 379 - 392
    Cited by: Papers (12)
    PDF (355 KB) | HTML

    Network coding techniques are used to find the minimum-cost transmission scheme for multicast sessions with or without elastic rate demand. It is shown that in wireline networks, solving for the optimal coding subgraphs in network coding is equivalent to finding the optimal routing scheme in a multicommodity flow problem. A set of node-based distributed gradient projection algorithms are designed to jointly implement congestion control/routing at the source node and "virtual" routing at intermediate nodes. The analytical framework and distributed algorithms are further extended to interference-limited wireless networks where link capacities are functions of the signal-to-interference-plus-noise ratio (SINR). To achieve minimum-cost multicast in this setting, the transmission powers of links must be jointly optimized with coding subgraphs and multicast input rates. Node-based power allocation and power control algorithms are developed for the power optimization. The power algorithms, when iterated in conjunction with the congestion control and routing algorithms, converge to the jointly optimal multicast configuration. The scaling matrices required in the gradient projection algorithms are explicitly derived and are shown to guarantee fast convergence to the optimum from any initial condition.
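
For context, a generic gradient-projection iteration of the kind such node-based algorithms build on; this is a toy cost with projection onto the nonnegative orthant, not the paper's routing or power-control update.

```python
import numpy as np

def gradient_projection(grad, x0, step=0.1, iters=200):
    """Minimize a convex cost over x >= 0: take a gradient step, then project back."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = np.maximum(x - step * grad(x), 0.0)        # projection onto the nonnegative orthant
    return x

# Toy cost D(x) = ||x - target||^2; one target coordinate is negative,
# so the projection pins that coordinate at zero.
target = np.array([2.0, -1.0, 0.5])
print(gradient_projection(lambda x: 2.0 * (x - target), x0=np.ones(3)))   # ~[2.0, 0.0, 0.5]
```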

  • Delay Analysis for Wireless Networks With Single Hop Traffic and General Interference Constraints

    Publication Year: 2010, Page(s): 393 - 405
    Cited by: Papers (17) | Patents (1)
    PDF (651 KB) | HTML

    We consider a class of wireless networks with general interference constraints on the set of links that can be served simultaneously at any given time. We restrict the traffic to be single-hop, but allow for simultaneous transmissions as long as they satisfy the underlying interference constraints. We begin by proving a lower bound on the delay performance of any scheduling scheme for this system. We then analyze a large class of throughput optimal policies which have been studied extensively in the literature. The delay analysis of these systems has been limited to asymptotic behavior in the heavy traffic regime and order results. We obtain a tighter upper bound on the delay performance for these systems. We use the insights gained by the upper and lower bound analysis to develop an estimate for the expected delay of wireless networks with mutually independent arrival streams operating under the well-known maximum weighted matching (MWM) scheduling policy. We show via simulations that the delay performance of the MWM policy is often close to the lower bound, which means that it is not only throughput optimal, but also provides excellent delay performance.
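
A sketch of one scheduling decision under the maximum weighted matching (MWM) policy discussed in the abstract, assuming the node-exclusive interference model and using queue backlogs as link weights; the networkx call and the toy topology are illustrative.

```python
import networkx as nx

def mwm_schedule(backlogs):
    """backlogs: {(u, v): queue length}. Returns the set of links to activate."""
    G = nx.Graph()
    for (u, v), q in backlogs.items():
        G.add_edge(u, v, weight=q)
    # Maximum weighted matching: no two activated links share a node.
    return nx.max_weight_matching(G)

queues = {("a", "b"): 7, ("b", "c"): 5, ("c", "d"): 6, ("a", "d"): 2}
print(mwm_schedule(queues))   # e.g. {('a', 'b'), ('c', 'd')}, total weight 13
```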

  • The Optimality of Two Prices: Maximizing Revenue in a Stochastic Communication System

    Publication Year: 2010, Page(s): 406 - 419
    Cited by: Papers (15)
    PDF (883 KB) | HTML

    This paper considers the problem of pricing and transmission scheduling for an access point (AP) in a wireless network, where the AP provides service to a set of mobile users. The goal of the AP is to maximize its own time-average profit. We first obtain the optimum time-average profit of the AP and prove the "Optimality of Two Prices" theorem. We then develop an online scheme that jointly solves the pricing and transmission scheduling problem in a dynamic environment. The scheme uses an admission price and a business decision as tools to regulate the incoming traffic and to maximize revenue. We show the scheme can achieve any average profit that is arbitrarily close to the optimum, with a tradeoff in average delay. This holds for general Markovian dynamics for channel and user state variation, and does not require a priori knowledge of the Markov model. The model and methodology developed in this paper are general and apply to other stochastic settings where a single party tries to maximize its time-average profit.

  • Toward Practical Opportunistic Routing With Intra-Session Network Coding for Mesh Networks

    Publication Year: 2010, Page(s): 420 - 433
    Cited by: Papers (12)
    PDF (721 KB) | HTML

    We consider opportunistic routing in wireless mesh networks. We exploit the inherent diversity of the broadcast nature of wireless by making use of multipath routing. We present a novel optimization framework for opportunistic routing based on network utility maximization (NUM) that enables us to derive optimal flow control, routing, scheduling, and rate adaptation schemes, where we use network coding to ease the routing problem. All previous work on NUM assumed unicast transmissions; however, the wireless medium is by its nature broadcast and a transmission will be received by multiple nodes. The structure of our design is fundamentally different; this is due to the fact that our link rate constraints are defined per broadcast region instead of links in isolation. We prove optimality and derive a primal-dual algorithm that lays the basis for a practical protocol. Optimal MAC scheduling is difficult to implement, and we use 802.11-like random scheduling rather than optimal in our comparisons. Under random scheduling, our protocol becomes fully decentralized (we assume ideal signaling). The use of network coding introduces additional constraints on scheduling, and we propose a novel scheme to avoid starvation. We simulate realistic topologies and show that we can achieve 20%-200% throughput improvement compared to single path routing, and several times compared to a recent related opportunistic protocol (MORE).

  • Constrained Relay Node Placement in Wireless Sensor Networks: Formulation and Approximations

    Publication Year: 2010, Page(s): 434 - 447
    Cited by: Papers (34)
    PDF (958 KB) | HTML

    One approach to prolong the lifetime of a wireless sensor network (WSN) is to deploy some relay nodes to communicate with the sensor nodes, other relay nodes, and the base stations. The relay node placement problem for wireless sensor networks is concerned with placing a minimum number of relay nodes into a wireless sensor network to meet certain connectivity or survivability requirements. Previous studies have concentrated on the unconstrained version of the problem in the sense that relay nodes can be placed anywhere. In practice, there may be some physical constraints on the placement of relay nodes. To address this issue, we study constrained versions of the relay node placement problem, where relay nodes can only be placed at a set of candidate locations. In the connected relay node placement problem, we want to place a minimum number of relay nodes to ensure that each sensor node is connected with a base station through a bidirectional path. In the survivable relay node placement problem, we want to place a minimum number of relay nodes to ensure that each sensor node is connected with two base stations (or the only base station in case there is only one base station) through two node-disjoint bidirectional paths. For each of the two problems, we discuss its computational complexity and present a framework of polynomial-time O(1)-approximation algorithms with small approximation ratios. Extensive numerical results show that our approximation algorithms can produce solutions very close to optimal solutions.

  • An Analytic Throughput Model for TCP NewReno

    Publication Year: 2010, Page(s): 448 - 461
    Cited by: Papers (19)
    PDF (1178 KB) | HTML

    This paper develops a simple and accurate stochastic model for the steady-state throughput of a TCP NewReno bulk data transfer as a function of round-trip time and loss behavior. Our model builds upon extensive prior work on TCP Reno throughput models but differs from these prior works in three key aspects. First, our model introduces an analytical characterization of the TCP NewReno fast recovery algorithm. Second, our model incorporates an accurate formulation of NewReno's timeout behavior. Third, our model is formulated using a flexible two-parameter loss model that can better represent the diverse packet loss scenarios encountered by TCP on the Internet. We validated our model by conducting a large number of simulations using the ns-2 simulator and by conducting emulation and Internet experiments using a NewReno implementation in the BSD TCP/IP protocol stack. The main findings from the experiments are: 1) the proposed model accurately predicts the steady-state throughput for TCP NewReno bulk data transfers under a wide range of network conditions; 2) TCP NewReno significantly outperforms TCP Reno in many of the scenarios considered; and 3) using existing TCP Reno models to estimate TCP NewReno throughput may introduce significant errors.
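
For reference, the widely used steady-state TCP Reno throughput approximation (Padhye et al.) that models like this one build upon; this is only the Reno baseline, not the NewReno model derived in the paper, and the parameter values are illustrative.

```python
from math import sqrt

def reno_throughput(p, rtt, rto, b=2, mss=1460):
    """Approximate Reno throughput (bytes/s) for loss rate p; rtt and rto in seconds."""
    if p <= 0:
        return float("inf")
    segments_per_s = 1.0 / (
        rtt * sqrt(2.0 * b * p / 3.0)
        + rto * min(1.0, 3.0 * sqrt(3.0 * b * p / 8.0)) * p * (1.0 + 32.0 * p * p)
    )
    return mss * segments_per_s

print(round(reno_throughput(p=0.01, rtt=0.1, rto=1.0)))   # roughly 100 kB/s for these values
```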

  • Pricing Strategies for Spectrum Lease in Secondary Markets

    Publication Year: 2010, Page(s): 462 - 475
    Cited by: Papers (7)
    PDF (966 KB) | HTML

    We develop analytical models to characterize pricing of spectrum rights in cellular CDMA networks. Specifically, we consider a primary license holder that aims to lease its spectrum within a certain geographic subregion of its network. Such a transaction has two contrasting economic implications: On the one hand, the lessor obtains revenue from the exercised price of the region. On the other hand, it incurs a cost due to: (1) reduced spatial coverage of its network; and (2) possible interference from the leased region into the retained portion of its network, leading to increased call blocking. We formulate this tradeoff as an optimization problem, with the objective of profit maximization. We consider a range of pricing philosophies and derive near-optimal solutions that are based on a reduced load approximation (RLA) for estimating blocking probabilities. The form of these prices suggests charging the lessee in proportion to the fraction of admitted calls. We also exploit the special structure of the solutions to devise an efficient iterative procedure for computing prices. We present numerical results that demonstrate superiority of the proposed strategy over several alternative strategies. The results emphasize the importance of effective pricing strategies in bringing secondary markets to full realization.
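
The reduced load approximation mentioned in the abstract is typically built around the Erlang-B blocking formula; the recursion below computes that building block and is shown only for context, not as the paper's CDMA-specific model.

```python
def erlang_b(offered_load, servers):
    """Blocking probability for `offered_load` Erlangs offered to `servers` circuits."""
    b = 1.0                                             # B(A, 0) = 1
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)   # stable recursion on B(A, k)
    return b

print(round(erlang_b(offered_load=8.0, servers=10), 4))   # ~0.12
```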

  • Demand-Aware Content Distribution on the Internet

    Publication Year: 2010, Page(s): 476 - 489
    Cited by: Papers (10)
    PDF (987 KB) | HTML

    The rapid growth of media content distribution on the Internet in the past few years has brought with it commensurate increases in the costs of distributing that content. Can the content distributor defray these costs through a more innovative approach to distribution? In this paper, we evaluate the benefits of a hybrid system that combines peer-to-peer and a centralized client-server approach against each method acting alone. A key element of our approach is to explicitly model the temporal evolution of demand. In particular, we employ a word-of-mouth demand evolution model due to Bass to represent the evolution of interest in a piece of content. Our analysis is carried out in an order-scaling regime that depends on the total potential mass of customers N in the market. Using this approach, we study the relative performance of peer-to-peer and centralized client-server schemes, as well as a hybrid of the two, both from the point of view of consumers and of the content distributor. We show how awareness of demand can be used to attain a given average delay target with the lowest possible utilization of the central server by using the hybrid scheme. We also show how such awareness can be used to make provisioning decisions. Our insights are obtained in a fluid model and supported by stochastic simulations.
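
A small sketch of the Bass word-of-mouth demand model the abstract refers to: the adopted fraction F(t) evolves as dF/dt = (p + qF)(1 - F), so the request rate seen by the distributor rises to a peak and then decays. The innovation/imitation parameters below are illustrative, not taken from the paper.

```python
def bass_demand(p=0.01, q=0.4, N=100_000, horizon=60.0, dt=0.01):
    """Forward-Euler trace of (time, requests per unit time) under the Bass model."""
    F, t, curve = 0.0, 0.0, []
    while t < horizon:
        rate = (p + q * F) * (1.0 - F)     # dF/dt: innovation plus word-of-mouth imitation
        curve.append((t, N * rate))        # demand intensity seen by the distributor
        F += rate * dt
        t += dt
    return curve

peak_t, peak_rate = max(bass_demand(), key=lambda point: point[1])
print(f"demand peaks near t = {peak_t:.1f} at about {peak_rate:.0f} requests per unit time")
```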

  • TCAM Razor: A Systematic Approach Towards Minimizing Packet Classifiers in TCAMs

    Publication Year: 2010, Page(s): 490 - 500
    Cited by: Papers (25)
    PDF (579 KB) | HTML

    Packet classification is the core mechanism that enables many networking services on the Internet such as firewall packet filtering and traffic accounting. Using ternary content addressable memories (TCAMs) to perform high-speed packet classification has become the de facto standard in industry. TCAMs classify packets in constant time by comparing a packet with all classification rules of ternary encoding in parallel. Despite their high speed, TCAMs suffer from the well-known range expansion problem. As packet classification rules usually have fields specified as ranges, converting such rules to TCAM-compatible rules may result in an explosive increase in the number of rules. This is not a problem if TCAMs have large capacities. Unfortunately, TCAMs have very limited capacity, and more rules mean more power consumption and more heat generation for TCAMs. Even worse, the number of rules in packet classifiers has been increasing rapidly with the growing number of services deployed on the Internet. In this paper, we consider the following problem: given a packet classifier, how can we generate another semantically equivalent packet classifier that requires the least number of TCAM entries? In this paper, we propose a systematic approach, the TCAM Razor, that is effective, efficient, and practical. In terms of effectiveness, TCAM Razor achieves a total compression ratio of 29.0%, which is significantly better than the previously published best result of 54%. In terms of efficiency, our TCAM Razor prototype runs in seconds, even for large packet classifiers. Finally, in terms of practicality, our TCAM Razor approach can be easily deployed as it does not require any modification to existing packet classification systems, unlike many previous range encoding schemes.
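
To see the range expansion problem the abstract describes, the sketch below converts a single range field into the minimal set of ternary prefixes a TCAM can store; it only illustrates why ranges inflate entry counts and is not the TCAM Razor algorithm itself.

```python
def range_to_prefixes(lo, hi, width=16):
    """Minimal list of ternary prefixes (strings of bits and '*') covering [lo, hi]."""
    prefixes = []
    while lo <= hi:
        size = lo & -lo if lo > 0 else 1 << width   # largest block aligned at lo...
        while size > hi - lo + 1:                   # ...that does not overshoot hi
            size //= 2
        bits = size.bit_length() - 1                # number of trailing wildcard bits
        fixed = width - bits
        prefixes.append((format(lo >> bits, "b").zfill(fixed) if fixed else "") + "*" * bits)
        lo += size
    return prefixes

print(range_to_prefixes(1024, 1279))       # ['00000100********']  (an aligned range: 1 entry)
print(len(range_to_prefixes(1, 65534)))    # 30 entries for the classic worst-case port range
```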

  • Low-Complexity and Distributed Energy Minimization in Multihop Wireless Networks

    Publication Year: 2010, Page(s): 501 - 514
    Cited by: Papers (10)
    PDF (1038 KB) | HTML

    In this work, we study the problem of minimizing the total power consumption in a multihop wireless network subject to a given offered load. It is well-known that the total power consumption of multihop wireless networks can be substantially reduced by jointly optimizing power control, link scheduling, and routing. However, the known optimal cross-layer solution to this problem is centralized and with high computational complexity. In this paper, we develop a low-complexity and distributed algorithm that is provably power-efficient. In particular, under the node-exclusive interference model and with suitable assumptions on the power-rate function, we can show that the total power consumption of our algorithm is at most (2+ε) times as large as the power consumption of the optimal (but centralized and complex) algorithm, where ε is an arbitrarily small positive constant. Our algorithm is not only the first such distributed solution with provable performance bound, but its power-efficiency ratio is also tighter than that of another suboptimal centralized algorithm in the literature.

  • Minimizing Delay and Maximizing Lifetime for Wireless Sensor Networks With Anycast

    Publication Year: 2010, Page(s): 515 - 528
    Cited by: Papers (14) | Patents (1)
    PDF (1187 KB) | HTML

    In this paper, we are interested in minimizing the delay and maximizing the lifetime of event-driven wireless sensor networks for which events occur infrequently. In such systems, most of the energy is consumed when the radios are on, waiting for a packet to arrive. Sleep-wake scheduling is an effective mechanism to prolong the lifetime of these energy-constrained wireless sensor networks. However, sleep-wake scheduling could result in substantial delays because a transmitting node needs to wait for its next-hop relay node to wake up. An interesting line of work attempts to reduce these delays by developing "anycast"-based packet forwarding schemes, where each node opportunistically forwards a packet to the first neighboring node that wakes up among multiple candidate nodes. In this paper, we first study how to optimize the anycast forwarding schemes for minimizing the expected packet-delivery delays from the sensor nodes to the sink. Based on this result, we then provide a solution to the joint control problem of how to optimally control the system parameters of the sleep-wake scheduling protocol and the anycast packet-forwarding protocol to maximize the network lifetime, subject to a constraint on the expected end-to-end packet-delivery delay. Our numerical results indicate that the proposed solution can outperform prior heuristic solutions in the literature, especially under practical scenarios where there are obstructions, e.g., a lake or a mountain, in the coverage area of the wireless sensor network.
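
A back-of-the-envelope illustration (not the paper's optimization) of why anycast forwarding reduces sleep-wake delay: if k candidate relays wake up at independent, uniformly distributed times within a period, the sender waits only for the earliest one, so the expected wait shrinks roughly as 1/(k+1).

```python
import random

def expected_first_wakeup(k, period=1.0, trials=100_000):
    """Monte Carlo estimate of the wait for the first of k relays to wake up."""
    return sum(min(random.uniform(0, period) for _ in range(k))
               for _ in range(trials)) / trials

for k in (1, 2, 4, 8):
    print(k, round(expected_first_wakeup(k), 3))   # ~0.500, 0.333, 0.200, 0.111
```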

  • A Parallel Self-Routing Rearrangeable Nonblocking Multi-log2N Photonic Switching Network

    Publication Year: 2010, Page(s): 529 - 539
    PDF (690 KB) | HTML

    A new rearrangeable nonblocking photonic multi-log2N network DM(N) is introduced. It is shown that the DM(N) network possesses many good properties simultaneously. These good properties include all those of existing rearrangeable nonblocking photonic multi-log2N networks and new ones such as O(log N)-time fast parallel self-routing, nonblocking multiple-multicast, and cost-effective crosstalk-free wavelength dilation, which existing rearrangeable nonblocking multi-log2N networks do not have. The advantages of DM(N) over existing multi-log2N networks, especially Log2(N, 0, 2⌈log2N/2⌉), are achieved by employing a two-level load-balancing scheme, a combination of static load balancing and dynamic load balancing. DM(N) and Log2(N, 0, 2⌈log2N/2⌉) are about the same in structure. The additional cost is for the intraplane routing preprocessing circuits. Considering the extended capabilities of DM(N) and current mature and cheap electronic technology, this extra cost is well justified.

  • Rethinking the IEEE 802.11e EDCA Performance Modeling Methodology

    Publication Year: 2010, Page(s): 540 - 553
    Cited by: Papers (19)
    PDF (862 KB) | HTML

    Analytical modeling of the 802.11e enhanced distributed channel access (EDCA) mechanism is today a fairly mature research area, considering the very large number of papers that have appeared in the literature. However, most work in this area models the EDCA operation through per-slot statistics, namely probability of transmission and collisions referred to "slots." In so doing, they still share a methodology originally proposed for the 802.11 Distributed Coordination Function (DCF), although they do extend it by considering differentiated transmission/collision probabilities over different slots. We aim to show that it is possible to devise 802.11e models that do not rely on per-slot statistics. To this purpose, we introduce and describe a novel modeling methodology that does not use per-slot transmission/collision probabilities, but relies on the fixed-point computation of the whole (residual) backoff counter distribution occurring after a generic transmission attempt. The proposed approach achieves high accuracy in describing the channel access operations, not only in terms of throughput and delay performance, but also in terms of low-level performance metrics.
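
For contrast, the conventional per-slot, Bianchi-style DCF fixed point that most EDCA models extend, and that this paper argues should be replaced by a fixed point on the residual backoff distribution; shown only as background, with illustrative contention-window parameters.

```python
def dcf_fixed_point(n, W=32, m=5, iters=2000):
    """Per-slot transmission probability tau and collision probability p for n saturated nodes."""
    tau = 0.05
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new_tau = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        )
        tau = 0.5 * tau + 0.5 * new_tau            # damped update for robust convergence
    return tau, p

tau, p = dcf_fixed_point(n=10)
print(f"tau = {tau:.4f}, p = {p:.4f}")
```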

  • Design and Field Experimentation of an Energy-Efficient Architecture for DTN Throwboxes

    Publication Year: 2010, Page(s): 554 - 567
    Cited by: Papers (25)
    PDF (1123 KB) | HTML

    Disruption-tolerant networks (DTNs) rely on intermittent contacts between mobile nodes to deliver packets using a store-carry-and-forward paradigm. We earlier proposed the use of throwbox nodes, which are stationary, battery-powered nodes with storage and processing, to enhance the capacity of DTNs. However, the use of throwboxes without efficient power management is minimally effective. If the nodes are too liberal with their energy consumption, they will fail prematurely. However, if they are too conservative, they may miss important transfer opportunities, hence increasing lifetime without improving performance. In this paper, we present a hardware and software architecture for energy-efficient throwboxes in DTNs. We propose a hardware platform that uses a multitiered, multiradio, scalable, solar-powered platform. The throwbox employs an approximate heuristic for solving the NP-hard problem of meeting an average power constraint while maximizing the number of bytes forwarded by the throwbox. We built and deployed prototype throwboxes in UMass DieselNet, a bus-based DTN testbed. Through extensive trace-driven simulations and prototype deployment, we show that a single throwbox with a 270-cm² solar panel can run perpetually while improving packet delivery by 37% and reducing message delivery latency by at least 10% in the network.

  • Maximizing Restorable Throughput in MPLS Networks

    Publication Year: 2010, Page(s): 568 - 581
    Cited by: Papers (4)
    PDF (555 KB) | HTML

    MPLS recovery mechanisms are increasing in popularity because they can guarantee fast restoration and high QoS assurance. Their main advantage is that their backup paths are established in advance, before a failure event takes place. Most research on the establishment of primary and backup paths has focused on minimizing the added capacity required by the backup paths in the network. However, this so-called Spare Capacity Allocation (SCA) metric is less practical for network operators who have a fixed capacitated network and want to maximize their revenues. In this paper, we present a comprehensive study on restorable throughput maximization in MPLS networks. We present the first polynomial-time algorithms for the splittable version of the problem. For the unsplittable version, we provide a lower bound for the approximation ratio and propose an approximation algorithm with an almost identical bound. We present an efficient heuristic which is shown to have excellent performance. One of our most important conclusions is that when one seeks to maximize revenue, local recovery should be the recovery scheme of choice.

  • A Collusion-Resistant Routing Scheme for Noncooperative Wireless Ad Hoc Networks

    Publication Year: 2010, Page(s): 582 - 595
    Cited by: Papers (8)
    PDF (479 KB) | HTML

    In wireless ad hoc networks, routing needs cooperation of nodes. Since nodes often belong to different users, it is highly important to provide incentives for them to cooperate. However, most existing studies of the incentive-compatible routing problem focus on individual nodes' incentives, assuming that no subset of them would collude. Clearly, this assumption is not always valid. In this paper, we present a systematic study of collusion-resistant routing in noncooperative wireless ad hoc networks. In particular, we consider two standard solution concepts for collusion resistance in game theory, namely Group Strategyproofness and Strong Nash Equilibrium. We show that achieving Group Strategyproofness is impossible, while achieving Strong Nash Equilibrium is possible. More specifically, we design a scheme that is guaranteed to converge to a Strong Nash Equilibrium and prove that the total payment needed is bounded. In addition, we propose a cryptographic method that prevents profit transfer among colluding nodes, as long as they do not fully trust each other unconditionally. This method makes our scheme widely applicable in practice. Experiments show that our solution is collusion-resistant and has good performance.

  • Replication Routing in DTNs: A Resource Allocation Approach

    Publication Year: 2010, Page(s): 596 - 609
    Cited by: Papers (37)
    PDF (1147 KB) | HTML

    Routing protocols for disruption-tolerant networks (DTNs) use a variety of mechanisms, including discovering the meeting probabilities among nodes, packet replication, and network coding. The primary focus of these mechanisms is to increase the likelihood of finding a path with limited information, and so these approaches have only an incidental effect on such routing metrics as maximum or average delivery delay. In this paper, we present RAPID, an intentional DTN routing protocol that can optimize a specific routing metric such as the worst-case delivery delay or the fraction of packets that are delivered within a deadline. The key insight is to treat DTN routing as a resource allocation problem that translates the routing metric into per-packet utilities that determine how packets should be replicated in the system. We evaluate RAPID rigorously through a prototype deployed over a vehicular DTN testbed of 40 buses and simulations based on real traces. To our knowledge, this is the first paper to report on a routing protocol deployed on a real outdoor DTN. Our results suggest that RAPID significantly outperforms existing routing protocols for several metrics. We also show empirically that for small loads, RAPID is within 10% of the optimal performance.

  • On Burst Transmission Scheduling in Mobile TV Broadcast Networks

    Publication Year: 2010, Page(s): 610 - 623
    Cited by: Papers (12) | Patents (1)
    PDF (1157 KB) | HTML

    In mobile TV broadcast networks, the base station broadcasts TV channels in bursts such that mobile devices can receive a burst of traffic and then turn off their radio frequency circuits until the next burst in order to save energy. To achieve this energy saving without sacrificing streaming quality, the base station must carefully construct the burst schedule for all TV channels. This is called the burst scheduling problem. In this paper, we prove that the burst scheduling problem for TV channels with arbitrary bit rates is NP-complete. We then propose a practical simplification of the general problem, which allows TV channels to be classified into multiple classes, and the bit rates of the classes have power-of-two increments, e.g., 100, 200, and 400 kbps. Using this practical simplification, we propose an optimal and efficient burst scheduling algorithm. We present theoretical analysis, simulation, and actual implementation in a mobile TV testbed to demonstrate the optimality, practicality, and efficiency of the proposed algorithm.
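
A back-of-the-envelope view of why burst transmission saves energy, as the abstract describes: the receiver's radio duty cycle is roughly the burst airtime plus wake-up overhead divided by the time between bursts of its channel. The overhead and rate values below are illustrative, not from the paper.

```python
def radio_duty_cycle(channel_kbps, air_kbps, burst_kbits, wakeup_s=0.1):
    """Fraction of time a receiver's radio stays on to follow one TV channel."""
    cycle_s = burst_kbits / channel_kbps           # time between bursts of this channel
    on_s = burst_kbits / air_kbps + wakeup_s       # burst airtime plus wake-up overhead
    return on_s / cycle_s

# A 400-kbps channel sent in 4000-kbit bursts over a 10-Mbps broadcast channel:
print(round(radio_duty_cycle(400, 10_000, 4000), 3))   # ~0.05, i.e. the radio is off ~95% of the time
```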

  • A Fitting Report Position Scheme for the Gated IPACT Dynamic Bandwidth Algorithm in EPONs

    Publication Year: 2010, Page(s): 624 - 637
    Cited by: Papers (10)
    PDF (1354 KB) | HTML

    In EPONs using the gated Interleaved Polling with Adaptive Cycle Time (IPACT) scheme, the position of the report message within the granted transmission window has a direct effect on the average packet delay within the network. In this paper, this delay is minimized by using a fitting report position (FRP) scheme to adaptively adjust the position of the report message within the transmission window in accordance with the current network load. In the proposed approach, the optimal position of the report message is determined analytically for various system loads. The optical line terminal (OLT) then uses a heuristic algorithm to estimate the load of the optical network units (ONUs) in accordance with their report messages and determines the report message position that minimizes the average packet delay within the network. Finally, the OLT informs the ONUs of the optimal report position through an optional field in the gate message. The performance of the proposed FRP scheme is evaluated for three network models, namely Poisson traffic with a uniform ONU load, Poisson traffic with a nonuniform ONU load, and self-similar traffic. The simulation results show that the FRP scheme achieves a lower average packet delay than fixed-report-position schemes such as fixed-report-front (FRF) or fixed-report-end (FRE) for both Poisson and self-similar traffic. The performance improvement is particularly apparent in networks with a nonuniform ONU load distribution.

  • Distributed Cross-Layer Algorithms for the Optimal Control of Multihop Wireless Networks

    Publication Year: 2010, Page(s): 638 - 651
    Cited by: Papers (16)
    PDF (865 KB) | HTML

    In this paper, we provide and study a general framework that facilitates the development of distributed mechanisms to achieve full utilization of multihop wireless networks. In particular, we describe a generic randomized routing, scheduling, and flow control scheme that allows for a set of imperfections in the operation of the randomized scheduler to account for potential errors in its operation. These imperfections enable the design of a large class of low-complexity and distributed implementations for different interference models. We study the effect of such imperfections on the stability and fairness characteristics of the system and explicitly characterize the degree of fairness achieved as a function of the level of imperfections. Our results reveal the relative importance of different types of errors on the overall system performance and provide valuable insight into the design of distributed controllers with favorable fairness characteristics. In the second part of the paper, we focus on a specific interference model, namely the secondary interference model, and develop distributed algorithms with polynomial communication and computation complexity in the network size. This is an important result given that earlier centralized throughput-optimal algorithms developed for such a model rely on the solution to an NP-hard problem at every decision. This yields a polynomial-complexity cross-layer algorithm that achieves throughput optimality and fair allocation of network resources among the users. We further show that our algorithmic approach enables us to efficiently approximate the capacity region of a multihop wireless network.


Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.


Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign