
IEEE Transactions on Parallel and Distributed Systems

Issue 10 • October 2010

  • [Front cover]

    Page(s): c1
    Freely Available from IEEE
  • [Inside front cover]

    Page(s): c2
    Freely Available from IEEE
  • A Locally-Adjustable Planar Structure for Adaptive Topology Control in Wireless Ad Hoc Networks

    Page(s): 1387 - 1397

    In wireless ad hoc networks, the constructed topology is preferred to be planar since a planar topology enables guaranteed delivery of packets without a routing table. Previous planar structures are statically constructed for the whole network. However, environmental or network dynamics such as channel status, interference, or residual energy will prevent such structures from providing the best service to the network. In this paper, we present a t-adjustable planar structure (TAP) which enables each node to adjust the topology independently via a parameter t and allows nodes to have different path loss exponents. TAP is based on three well-known planar structures: the Gabriel Graph, the Relative Neighborhood Graph, and the Local Minimum Spanning Tree. We show the properties of TAP by proof or simulation: (1) it preserves connectivity; (2) it is planar, sparse, and symmetric; (3) it preserves all minimum-energy paths when t = 1 for all nodes; and (4) the average transmission power, interference, and node degree decrease as t increases, and the maximum node degree is bounded by 6 when t = 3 for all nodes.
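
    The classical structures named in this abstract have simple local edge tests. Below is a minimal sketch of the standard Gabriel Graph and Relative Neighborhood Graph tests only; it does not reproduce TAP's t-adjustable rule, and the node coordinates are made up for illustration.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def gabriel_edge(u, v, nodes):
    """(u, v) is a Gabriel Graph edge iff no other node lies strictly inside
    the circle whose diameter is the segment uv."""
    d_uv2 = dist(u, v) ** 2
    return all(dist(u, w) ** 2 + dist(v, w) ** 2 >= d_uv2
               for w in nodes if w not in (u, v))

def rng_edge(u, v, nodes):
    """(u, v) is a Relative Neighborhood Graph edge iff no other node is
    strictly closer to both u and v than they are to each other."""
    d_uv = dist(u, v)
    return all(max(dist(u, w), dist(v, w)) >= d_uv
               for w in nodes if w not in (u, v))

# Toy example: the node at (2, 0.5) lies inside the circle spanned by
# (0, 0) and (4, 0), so that long edge is removed in both structures.
nodes = [(0.0, 0.0), (4.0, 0.0), (2.0, 0.5)]
print(gabriel_edge(nodes[0], nodes[1], nodes))  # False
print(rng_edge(nodes[0], nodes[1], nodes))      # False
print(gabriel_edge(nodes[0], nodes[2], nodes))  # True
```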

  • A Markovian Approach to Multipath Data Transfer in Overlay Networks

    Page(s): 1398 - 1411

    The use of multipath routing in overlay networks is a promising solution to improve the performance and availability of Internet applications without replacing the existing TCP/IP infrastructure. In this paper, we propose an approach to distribute data over multiple overlay paths that is able to improve Quality of Service (QoS) metrics such as data transfer time, loss, and throughput. By using the imbedded Markov chain technique, we demonstrate that the system under analysis, observed at specific instants, possesses the Markov property. We therefore cast the data distribution problem into the Markov Decision Process (MDP) framework and design a computationally efficient algorithm, named Online Policy Iteration (OPI), to solve the optimization problem on the fly. The proposed approach is applied to the problem of multipath data distribution in various wired/wireless network scenarios, with the objective of minimizing the data transfer time as well as the delay and losses. Through both intensive ns-2 simulations with data collected from real heterogeneous networks and experiments over real networks, we show the superior performance of the proposed traffic control mechanism in comparison with two classical schemes, Weighted Round Robin and Join the Shortest Queue.
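
    As background for the MDP machinery mentioned above, the sketch below runs textbook policy iteration on a tiny, hypothetical path-selection MDP (two congestion states, two overlay paths). All transition probabilities and costs are invented, and this is not the paper's Online Policy Iteration (OPI) algorithm, which operates online rather than on a known model.

```python
import numpy as np

# Hypothetical MDP: states = overlay congestion level (0 = light, 1 = heavy),
# actions = which path carries the next block (0 or 1). Costs stand in for
# per-block transfer time; every number here is illustrative only.
n_states, gamma = 2, 0.9
P = np.array([                      # P[a, s, s'] transition probabilities
    [[0.9, 0.1], [0.5, 0.5]],       # action 0
    [[0.7, 0.3], [0.2, 0.8]],       # action 1
])
C = np.array([                      # C[s, a] expected immediate cost
    [1.0, 1.5],
    [4.0, 2.0],
])

def policy_iteration(P, C, gamma):
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = C_pi exactly.
        P_pi = np.array([P[policy[s], s] for s in range(n_states)])
        C_pi = np.array([C[s, policy[s]] for s in range(n_states)])
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, C_pi)
        # Policy improvement: pick the greedy (minimum-cost) action w.r.t. V.
        Q = C + gamma * np.einsum('ast,t->sa', P, V)
        new_policy = Q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

print(policy_iteration(P, C, gamma))  # optimal path choice per congestion state
```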

  • MIMO Power Control for High-Density Servers in an Enclosure

    Page(s): 1412 - 1426

    Power control is becoming a key challenge for effectively operating a modern data center. In addition to reducing operating costs, precisely controlling power consumption is an essential way to avoid system failures caused by power capacity overload or overheating due to increasingly high server density. Control-theoretic techniques have recently shown a lot of promise for power management because of their better control performance and theoretical guarantees on control accuracy and system stability. However, existing work oversimplifies the problem by controlling a single server independently from others. As a result, at the enclosure level, where multiple high-density servers are correlated by common workloads and share common power supplies, power cannot be shared to improve application performance. In this paper, we propose an enclosure-level power controller that shifts power among servers based on their performance needs, while controlling the total power of the enclosure to stay below a constraint. Our controller features a rigorous design based on optimal Multi-Input-Multi-Output (MIMO) control theory. We present a detailed control problem formulation and its transformation to a standard constrained least-squares problem, as well as a stability analysis in the face of significant workload variations. We then conduct extensive experiments on a physical testbed to compare our controller with three state-of-the-art controllers: a heuristic-based MIMO control solution, a Single-Input-Single-Output (SISO) control solution, and an improved SISO controller with simple power shifting among servers. Our empirical results demonstrate that our controller outperforms all three baselines, with more accurate power control and up to 11.8 percent better benchmark performance.
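
    For readers curious about the "standard constrained least-squares problem" mentioned above, the sketch below shows a generic formulation of that kind in cvxpy: track performance-driven per-server power targets while keeping the enclosure total under a cap. All numbers are hypothetical, and the paper's controller is derived from MIMO control theory rather than from this toy.

```python
import cvxpy as cp
import numpy as np

# Hypothetical numbers: 4 blade servers in one enclosure (all values in watts).
desired = np.array([230.0, 180.0, 260.0, 210.0])   # performance-driven targets
p_min   = np.array([120.0, 120.0, 120.0, 120.0])   # lowest safe per-server budget
p_max   = np.array([300.0, 300.0, 300.0, 300.0])   # per-server supply limit
enclosure_cap = 800.0                               # shared enclosure power budget

p = cp.Variable(4)
objective = cp.Minimize(cp.sum_squares(p - desired))     # least-squares tracking
constraints = [cp.sum(p) <= enclosure_cap, p >= p_min, p <= p_max]
cp.Problem(objective, constraints).solve()

# The targets sum to 880 W, so the 800 W cap binds and power is shifted down.
print(np.round(p.value, 1), "total:", round(float(np.sum(p.value)), 1))
```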

  • Distributed Network Formation for n-Way Broadcast Applications

    Page(s): 1427 - 1441

    In an n-way broadcast application, each of n overlay nodes wants to push its own distinct large data file to all other n-1 destinations as well as download their respective data files. BitTorrent-like swarming protocols are ideal choices for handling such massive data volume transfers. The original BitTorrent targets one-to-many broadcasts of a single file to a very large number of receivers and thus, by necessity, employs a suboptimal overlay topology. n-way broadcast applications, on the other hand, owing to their inherent complexity, are realizable only in small to medium-scale networks. In this paper, we show that we can leverage this scale constraint to construct optimized overlay topologies that take into consideration the end-to-end characteristics of the network and, as a consequence, deliver far superior performance compared to random and myopic (greedy) approaches. We present the Max-Min and Max-Sum peer-selection policies used by individual nodes to select their neighbors. The first strives to maximize the available bandwidth to the slowest destination, while the second maximizes the aggregate output rate. We design a swarming protocol suitable for n-way broadcast and operate it on top of overlay graphs formed by nodes that employ the Max-Min or Max-Sum policies. Using measurements from a PlanetLab prototype implementation and trace-driven simulations, we demonstrate that the performance of swarming protocols on top of our constructed topologies is far superior to the performance of random and myopic overlays.
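
    The two peer-selection objectives are easy to state in code. The sketch below assumes each candidate neighbor set comes with an estimated delivery-rate vector to every destination (obtaining such estimates is part of what the paper's distributed protocol addresses); all numbers are hypothetical.

```python
# Hypothetical estimated delivery rates (Mbit/s) from node A to destinations
# B, C, D under three alternative neighbor sets for A.
rate_vectors = {
    ('B', 'C'): [9.0, 9.0, 2.0],
    ('B', 'D'): [8.0, 3.0, 6.0],
    ('C', 'D'): [4.0, 7.0, 6.0],
}

# Max-Min: maximize the rate to the slowest destination.
max_min = max(rate_vectors, key=lambda s: min(rate_vectors[s]))
# Max-Sum: maximize the aggregate output rate.
max_sum = max(rate_vectors, key=lambda s: sum(rate_vectors[s]))

print(max_min)  # ('C', 'D'): slowest destination gets 4.0, the best minimum
print(max_sum)  # ('B', 'C'): aggregate rate 20.0, but destination D gets only 2.0
```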

  • FS2You: Peer-Assisted Semipersistent Online Hosting at a Large Scale

    Page(s): 1442 - 1457

    It has been widely acknowledged that online file hosting systems within the “cloud” of the Internet have provided valuable services to end users who wish to share files of any size. Such online hosting services are typically provided by dedicated servers, either in content distribution networks (CDNs) or large data centers. Server bandwidth costs, however, are prohibitive in these cases, especially when serving large volumes of files to a large number of users. Though it seems intuitive to take advantage of peer upload bandwidth to mitigate such server bandwidth costs in a complementary fashion, it is not trivial to design and fine-tune important aspects of such peer-assisted online hosting in a real-world large-scale deployment. This paper presents FS2You, a large-scale and real-world online file hosting system with peer assistance and semipersistent file availability. FS2You is designed to dramatically mitigate server bandwidth costs. In this paper, we show a number of key challenges involved in such a design objective, our architectural and protocol design in response to these challenges, as well as an extensive measurement study at a large scale to demonstrate the effectiveness of our design, using real-world traces that we have collected. To our knowledge, this paper represents the first attempt to design, implement, and evaluate a new peer-assisted semipersistent online file hosting system at a realistic scale. Since the launch of FS2You, it has quickly become one of the most popular online file hosting systems in mainland China, and a favorite in many online forums across the country.

  • 2PASS: Bandwidth-Optimized Location Cloaking for Anonymous Location-Based Services

    Page(s): 1458 - 1472

    Protection of users' location privacy is a critical issue for location-based services. Location cloaking has been proposed to blur users' accurate locations with cloaked regions. Although various cloaking algorithms have been studied, none of the prior work has explored the impact of cloaking on the bandwidth usage of the requested services. In this paper, we develop an innovative result-aware location cloaking approach, called 2PASS. Based on the notion of Voronoi cells, 2PASS minimizes the number of objects to request, and hence the bandwidth, while meeting the same privacy requirement. The core component of 2PASS is a lightweight WAG-tree index, based on which efficient and secure client and server procedures are designed. Through threat analysis and experimental results, we argue that 2PASS is robust and outperforms state-of-the-art approaches in terms of various metrics, such as query response time and bandwidth consumption. We also include a case study of 2PASS in a real-life application.
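
    The Voronoi-cell idea can be illustrated with a brute-force sketch: any point inside the cloaked region has its nearest object among the objects whose Voronoi cells intersect the region, so those are the only objects worth requesting. The grid-sampling approximation below is purely illustrative and uses made-up coordinates; 2PASS computes this set exactly and efficiently via its WAG-tree index.

```python
import numpy as np

# Hypothetical points of interest and a cloaked rectangle (x0, y0, x1, y1)
# reported in place of the user's exact position.
objects = np.array([[1.0, 1.0], [2.5, 4.0], [5.0, 2.0], [8.0, 8.0], [6.5, 5.5]])
cloak = (2.0, 1.0, 5.0, 3.5)

def needed_objects(objects, cloak, samples_per_axis=200):
    """Approximate the set of objects whose Voronoi cells intersect the cloaked
    region by nearest-neighbor lookups on a dense grid of sample points. The
    true nearest object of any point in the region is guaranteed to be in
    this set (up to the sampling resolution)."""
    xs = np.linspace(cloak[0], cloak[2], samples_per_axis)
    ys = np.linspace(cloak[1], cloak[3], samples_per_axis)
    grid = np.array([[x, y] for x in xs for y in ys])
    d = np.linalg.norm(grid[:, None, :] - objects[None, :, :], axis=2)
    return sorted(set(d.argmin(axis=1)))

print(needed_objects(objects, cloak))  # indices of the objects the client must fetch
```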

  • Group-Based Negotiations in P2P Systems

    Page(s): 1473 - 1486

    In P2P systems, groups are typically formed to share resources and/or to carry out joint tasks. In distributed environments formed by a large number of peers, conventional authentication techniques are inadequate for the group joining process, and more advanced ones are needed. Complex transactions among peers may require more elaborate interactions based on what peers can do or possess instead of peers' identities. In this work, we propose a novel peer group joining protocol. We introduce a highly expressive resource negotiation language, able to support the specification of a large variety of conditions applying to single peers or groups of peers. Moreover, we define protocols to test such resource availability, customized to the level of assurance required by the peers. Our approach has been tested and evaluated on an extension of the JXTA P2P platform. Our results show the robustness of our approach in detecting malicious peers, both during the negotiation and during the peer group lifetime. Regardless of the peer group cardinality and interaction frequency, the peers always detect possible free riders within a small time frame.

  • On the Performance of Content Delivery under Competition in a Stochastic Unstructured Peer-to-Peer Network

    Page(s): 1487 - 1500

    Peer-to-peer (P2P) networks are widely used for transferring large files nowadays. Measurement results show that most downloading peers are patient, as the average download session is usually very long; it is sometimes even longer than downloading from a dedicated server using a modem. Existing results in the literature indicate that the stochastic fluctuation and the heterogeneity in the service capacity of each peer are two of the major reasons that make the average download time far longer than expected. Those studies, however, often assume that there is only one downloading peer in the network, ignoring the interaction and competition among peers. In this paper, we investigate the impact of the interaction and competition among peers on downloading performance under stochastic, heterogeneous, and unstructured P2P settings, thereby greatly extending the existing results on stochastic P2P networks, which were obtained under a single downloading peer. To analyze the average download time in a P2P network with multiple competing downloading peers, we first introduce a notion of system utilization tailored to a P2P network. We investigate the relationship among the average download time, system utilization, and the level of competition among downloading peers in a stochastic P2P network. We then derive an achievable lower bound on the average download time and propose algorithms that give the peers the minimum average download time. Our result can greatly improve download performance compared to earlier results in the literature. The performance of the different algorithms is compared in NS-2 simulations. Our results also provide a theoretical explanation for the inconsistent performance improvement of parallel connections observed in some measurement studies (parallel connections sometimes do not outperform a single connection).

  • Self-Disciplinary Worms and Countermeasures: Modeling and Analysis

    Page(s): 1501 - 1514

    In this paper, we address issues related to the modeling, analysis, and countermeasures of worm attacks on the Internet. Most previous work assumed that a worm always propagates itself at the highest possible speed. Some newly developed worms (e.g., “Atak” worm) contradict this assumption by deliberately reducing the propagation speed in order to avoid detection. As such, we study a new class of worms, referred to as self-disciplinary worms. These worms adapt their propagation patterns in order to reduce the probability of detection, and eventually, to infect more computers. We demonstrate that existing worm detection schemes based on traffic volume and variance cannot effectively defend against these self-disciplinary worms. To develop proper countermeasures, we introduce a game-theoretic formulation to model the interaction between the worm propagator and the defender. We show that an effective integration of multiple countermeasure schemes (e.g., worm detection and forensics analysis) is critical for defending against self-disciplinary worms. We propose different integrated schemes for fighting different self-disciplinary worms, and evaluate their performance via real-world traffic data.
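
    A toy simulation makes the evasion argument concrete: a worm that throttles its scan rate can stay under a fixed traffic-volume threshold and still infect most vulnerable hosts. The discrete-time random-scanning model and every parameter below are hypothetical; this is not the paper's game-theoretic model.

```python
import numpy as np

def simulate(scan_rate, n_hosts=100_000, vulnerable=5_000, steps=200,
             volume_threshold=50_000, seed=1):
    """Discrete-time random-scanning worm. Every infected host sends
    `scan_rate` probes per step; a volume-based detector fires when the total
    probe volume in a single step exceeds `volume_threshold`."""
    rng = np.random.default_rng(seed)
    infected, detected_at = 1, None
    for t in range(1, steps + 1):
        probes = infected * scan_rate
        if detected_at is None and probes > volume_threshold:
            detected_at = t
        # Each probe independently finds a still-uninfected vulnerable host
        # with probability p_hit (collisions are ignored in this toy model).
        p_hit = (vulnerable - infected) / n_hosts
        infected = min(vulnerable, infected + rng.binomial(probes, p_hit))
    return infected, detected_at

print(simulate(scan_rate=50))  # aggressive worm: saturates quickly, trips the detector
print(simulate(scan_rate=2))   # self-disciplinary worm: slower, stays under the threshold
```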

  • Traffic Management in Sensor Networks with a Mobile Sink

    Page(s): 1515 - 1530

    The imminent growth of user-centric, pervasive sensing environments promotes sink mobility in an increasing number of event-based sensor network applications, including rescue missions, intrusion detection, and smart buildings. In these settings, one of the most critical challenges toward supporting quality of service is effective distributed congestion avoidance. Congestion control techniques have been proposed in sensor networks mostly in the context of a static sink. In our work, we study the problem of traffic management in the context of sensor networks with a mobile sink. Under sink mobility, various new challenges arise that need to be effectively addressed. Adaptation to sink mobility requires agile as well as effective load estimation techniques. In addition, unlike static networks, path reliability often fluctuates due to path reconfigurations. Thus, injecting traffic during transient periods of poor path quality might wastefully detain network resources. In this work, we first study the effect of sink mobility on traffic load in sensor networks. We then propose adaptive routing as well as load estimation techniques that effectively adapt to sink relocations. A novel aspect of our approach is that it jointly considers the network load as well as path quality variations to facilitate intelligent, mobility-adaptive rate regulation at the sources. We provide a thorough study of the trade-offs induced by persistent path quality variations and conduct extensive experiments on a real MICA2-based testbed to study the performance of the sensor network under sink mobility.

  • Maximizing Service Reliability in Distributed Computing Systems with Random Node Failures: Theory and Implementation

    Page(s): 1531 - 1544

    In distributed computing systems (DCSs) where server nodes can fail permanently with nonzero probability, the system performance can be assessed by means of the service reliability, defined as the probability of serving all the tasks queued in the DCS before all the nodes fail. This paper presents a rigorous probabilistic framework to analytically characterize the service reliability of a DCS in the presence of communication uncertainties and stochastic topological changes due to node deletions. The framework considers a system composed of heterogeneous nodes with stochastic service and failure times and a communication network imposing random tangible delays. The framework also permits arbitrarily specified, distributed load-balancing actions to be taken by the individual nodes in order to improve the service reliability. The presented analysis is based upon a novel use of the concept of stochastic regeneration, which is exploited to derive a system of difference-differential equations characterizing the service reliability. The theory is further utilized to optimize certain load-balancing policies for maximal service reliability; the optimization is carried out by means of an algorithm that scales linearly with the number of nodes in the system. The analytical model is validated using both Monte Carlo simulations and experimental data collected from a DCS testbed.
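
    The service-reliability metric defined above lends itself to a simple Monte Carlo estimate. The sketch below assumes exponential service and failure times, a shared task queue, and no communication delays or load balancing, which is a deliberate simplification of the paper's setting; all rates are hypothetical.

```python
import random

def service_reliability(tasks, service_rates, failure_rates, trials=20_000, seed=7):
    """Monte Carlo estimate of the probability that `tasks` queued tasks are all
    served before every node fails. Node i serves tasks at exponential rate
    service_rates[i] and has an exponential lifetime with rate failure_rates[i]."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        lifetimes = [rng.expovariate(f) for f in failure_rates]
        remaining, clock = tasks, 0.0
        alive = list(range(len(service_rates)))
        while remaining > 0 and alive:
            # Competing exponentials: next service completion vs. next failure.
            rate = sum(service_rates[i] for i in alive)
            t_service = clock + rng.expovariate(rate)
            t_fail, who = min((lifetimes[i], i) for i in alive)
            if t_service < t_fail:
                clock = t_service
                remaining -= 1          # some alive node finished a task
            else:
                clock = t_fail
                alive.remove(who)       # that node fails; its task is re-queued
        successes += (remaining == 0)
    return successes / trials

# Hypothetical 3-node DCS with 40 queued tasks.
print(service_reliability(40, [2.0, 1.5, 1.0], [0.02, 0.05, 0.03]))
```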

  • 7 Great Reasons for Joining the IEEE Computer Society [advertisement]

    Page(s): 1545
    Freely Available from IEEE
  • Certified Software Development Associate Certification

    Page(s): 1546
    Freely Available from IEEE
  • TPDS Information for authors

    Page(s): c3
    Freely Available from IEEE
  • [Back cover]

    Page(s): c4
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Parallel and Distributed Systems (TPDS) is published monthly. It publishes a range of papers, comments on previously published papers, and survey articles that deal with the parallel and distributed systems research areas of current importance to our readers.

Meet Our Editors

Editor-in-Chief
David Bader
College of Computing
Georgia Institute of Technology