
IEEE Transactions on Parallel and Distributed Systems

Issue 6 • June 2008


Displaying Results 1 - 14 of 14
  • [Front cover]

    Publication Year: 2008, Page(s): c1
  • [Inside front cover]

    Publication Year: 2008, Page(s): c2
  • Adaptive Data Collection Strategies for Lifetime-Constrained Wireless Sensor Networks

    Publication Year: 2008, Page(s): 721 - 734
    Cited by: Papers (23)

    Communication is a primary source of energy consumption in wireless sensor networks. Due to resource constraints, the sensor nodes may not have enough energy to report every reading to the base station over a required network lifetime. This paper investigates data collection strategies in lifetime-constrained wireless sensor networks. Our objective is to maximize the accuracy of data collected by the base station over the network lifetime. Instead of sending sensor readings periodically, the relative importance of the readings is considered in data collection: the sensor nodes send data updates to the base station when the new readings differ substantially from the previous ones. We analyze the optimal update strategy and develop adaptive update strategies for both individual and aggregate data collections. We also present two methods to cope with message losses in wireless transmission. To make full use of the energy budgets, we design an algorithm to allocate the number of updates each sensor node is allowed to send based on the nodes' topological relations. Experimental results using real data traces show that, compared with the periodic strategy, adaptive strategies significantly improve the accuracy of data collected by the base station.
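    The core decision rule described in the abstract — transmit only when a reading deviates enough from the last reported value — can be sketched as follows; the threshold, trace, and function names are illustrative, not taken from the paper:

```python
def should_report(last_reported, new_reading, threshold):
    """Report only when the new reading deviates enough from the last report."""
    return abs(new_reading - last_reported) > threshold

# Hypothetical temperature trace with a 0.5-degree threshold: the node
# suppresses small fluctuations and spends its energy budget on the
# readings that actually change the base station's view of the data.
readings = [20.0, 20.1, 20.2, 21.0, 21.1, 19.9]
reported = []
last = None
for r in readings:
    if last is None or should_report(last, r, threshold=0.5):
        reported.append(r)
        last = r
# reported is now [20.0, 21.0, 19.9]
```

    An adaptive strategy would additionally tune the threshold against the remaining energy budget; that logic is omitted here.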

  • SSW: A Small-World-Based Overlay for Peer-to-Peer Search

    Publication Year: 2008, Page(s): 735 - 749
    Cited by: Papers (23)

    Peer-to-peer (P2P) systems have become a popular platform for sharing and exchanging voluminous information among thousands or even millions of users. The massive amount of information shared in such systems mandates efficient semantic-based search instead of key-based search. The majority of existing proposals can only support simple key-based search rather than semantic-based search. This paper presents the design of an overlay network, namely, semantic small world (SSW), that facilitates efficient semantic-based search in P2P systems. SSW achieves this efficiency through four ideas: 1) semantic clustering, where peers with similar semantics organize into peer clusters; 2) dimension reduction, where, to address the high maintenance overhead associated with capturing high-dimensional data semantics in the overlay, peer clusters are adaptively mapped to a one-dimensional naming space; 3) small-world networking, where peer clusters form a one-dimensional small-world network, which is search efficient with low maintenance overhead; and 4) efficient search algorithms, where peers perform efficient semantic-based search, including approximate point query and range query, in the proposed overlay. Extensive experiments using both synthetic data and real data demonstrate that SSW is superior to the state of the art on various aspects, including scalability, maintenance overhead, adaptivity to distribution of data and locality of interest, resilience to peer failures, load balancing, and efficiency in support of various types of queries on data objects with high dimensions.
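    Idea 3 — search over a one-dimensional small-world network — can be illustrated with greedy routing on a ring. The peer positions, link pattern, and function names below are hypothetical, not SSW's actual construction:

```python
def ring_dist(a, b):
    """Distance between two positions on the unit ring [0, 1)."""
    d = abs(a - b)
    return min(d, 1 - d)

def greedy_route(positions, links, start, target_pos):
    """Forward the query to whichever neighbor (short- or long-range
    link) lies closest to the target, until no neighbor improves."""
    path, current = [start], start
    while ring_dist(positions[current], target_pos) > 0:
        best = min(links[current], key=lambda n: ring_dist(positions[n], target_pos))
        if ring_dist(positions[best], target_pos) >= ring_dist(positions[current], target_pos):
            break  # local minimum: deliver to the closest peer found
        path.append(best)
        current = best
    return path

# 16 peers on a ring, each with two ring neighbors plus one long-range link.
positions = [i / 16 for i in range(16)]
links = {i: [(i - 1) % 16, (i + 1) % 16, (i + 5) % 16] for i in range(16)}
path = greedy_route(positions, links, 0, 0.5)  # long link 0 -> 5 cuts the hop count
```

    With only short links the route from peer 0 to peer 8 would take 8 hops; the long-range links are what give small-world overlays their short search paths.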

  • HBA: Distributed Metadata Management for Large Cluster-Based Storage Systems

    Publication Year: 2008, Page(s): 750 - 763
    Cited by: Papers (24)

    An efficient and distributed scheme for file mapping or file lookup is critical in decentralizing metadata management within a group of metadata servers. This paper presents a novel technique called Hierarchical Bloom Filter Arrays (HBA) to map filenames to the metadata servers holding their metadata. Two levels of probabilistic arrays, namely, Bloom filter arrays with different levels of accuracy, are used on each metadata server. One array, with lower accuracy and representing the distribution of the entire metadata, trades accuracy for significantly reduced memory overhead, whereas the other array, with higher accuracy, caches partial distribution information and exploits the temporal locality of file access patterns. Both arrays are replicated to all metadata servers to support fast local lookups. We evaluate HBA through extensive trace-driven simulations and implementation in Linux. Simulation results show our HBA design to be highly effective and efficient in improving the performance and scalability of file systems in clusters with 1,000 to 10,000 nodes (or superclusters) and with the amount of data in the petabyte scale or higher. Our implementation indicates that HBA can reduce the metadata operation time of a single-metadata-server architecture by a factor of up to 43.9 when the system is configured with 16 metadata servers.
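    A single-level sketch of the underlying mechanism — one Bloom filter summarizing each server's filenames, probed at lookup time — is shown below. HBA's actual design layers two arrays of differing accuracy; the class, parameters, and paths here are illustrative only:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k positions by salting a cryptographic hash (illustrative).
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        # May report false positives, never false negatives.
        return all(self.bits[p] for p in self._positions(item))

# One filter per metadata server; a lookup probes every filter and then
# contacts only the servers whose filters report a (possible) hit.
filters = {0: BloomFilter(), 1: BloomFilter()}
filters[0].add("/home/alice/paper.tex")
filters[1].add("/var/log/syslog")
candidates = [s for s, f in filters.items() if "/home/alice/paper.tex" in f]
```

    The memory-versus-accuracy trade the abstract describes corresponds to choosing m and k: a smaller array is cheaper to replicate but yields more false-positive probes.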

  • Accelerating Molecular Dynamics Simulations with Reconfigurable Computers

    Publication Year: 2008, Page(s): 764 - 778
    Cited by: Papers (13) | Patents (1)

    With advances in reconfigurable hardware, especially field-programmable gate arrays (FPGAs), it has become possible to use reconfigurable hardware to accelerate complex applications such as those in scientific computing. There has been a resulting development of reconfigurable computers, that is, computers that have both general-purpose processors and reconfigurable hardware, as well as memory and high-performance interconnection networks. In this paper, we describe the acceleration of molecular dynamics simulations with reconfigurable computers. We evaluate several design alternatives for the implementation of the application on a reconfigurable computer. We show that a single node accelerated with reconfigurable hardware, utilizing fine-grained parallelism in the reconfigurable hardware design, is able to achieve a speedup of about two times over the corresponding software-only simulation. We then parallelize the application and study the effect of acceleration on performance and scalability. Specifically, we study strong scaling, in which the problem size is fixed. We find that the unaccelerated version actually scales better, because it spends more time in computation than the accelerated version does. However, we also find that a cluster of P accelerated nodes gives better performance than a cluster of 2P unaccelerated nodes.

  • Control-Based Adaptive Middleware for Real-Time Image Transmission over Bandwidth-Constrained Networks

    Publication Year: 2008, Page(s): 779 - 793
    Cited by: Papers (5)

    Real-time image transmission is crucial to an emerging class of distributed embedded systems operating in open network environments. Examples include avionics mission replanning over Link-16, security systems based on wireless camera networks, and online collaboration using camera phones. Meeting image transmission deadlines is a key challenge in such systems due to unpredictable network conditions. In this paper, we present CAMRIT, a Control-based Adaptive Middleware framework for Real-time Image Transmission in distributed real-time embedded systems. CAMRIT features a distributed feedback control loop that meets image transmission deadlines by dynamically adjusting the quality of image tiles. We derive an analytic model that captures the dynamics of a distributed middleware architecture. A control-theoretic methodology is applied to systematically design a control algorithm with analytic assurance of system stability and performance, despite uncertainties in network bandwidth. Experimental results demonstrate that CAMRIT can provide robust real-time guarantees for a representative application scenario.
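    The feedback idea — measure how long a transmission took and adjust image quality to track the deadline — can be sketched with a simple proportional controller. The linear "plant" model, gain, and names below are assumptions for illustration, not CAMRIT's actual control design:

```python
def control_step(quality, measured_time, deadline, gain=0.5):
    """One proportional-control step: shrink quality when the deadline
    was missed, raise it when there is slack; clamp to a valid range."""
    error = deadline - measured_time          # positive means slack
    quality += gain * error * quality / deadline
    return max(5.0, min(95.0, quality))

# Hypothetical plant: transfer time grows linearly with the quality factor.
def transfer_time(quality, bandwidth):
    return quality / bandwidth                # arbitrary units

quality, deadline, bandwidth = 80.0, 1.0, 50.0
for _ in range(30):
    quality = control_step(quality, transfer_time(quality, bandwidth), deadline)
# quality settles near 50, where transfer_time equals the deadline
```

    The point of the control-theoretic design in the paper is to guarantee that such a loop stays stable even when the bandwidth term is uncertain and time-varying; this sketch assumes it is constant.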

  • Detecting VoIP Floods Using the Hellinger Distance

    Publication Year: 2008, Page(s): 794 - 805
    Cited by: Papers (34)

    Voice over IP (VoIP), also known as Internet telephony, is gaining market share rapidly and now competes favorably as one of the visible applications of the Internet. Nevertheless, being an application running over the TCP/IP suite, it is susceptible to flooding attacks. If flooded, as a time-sensitive service, VoIP may show noticeable service degradation and even encounter sudden service disruptions. Because multiple protocols are involved in a VoIP service and most of them are susceptible to flooding, an effective solution must be able to detect and overcome hybrid floods. As a solution, we offer the VoIP flooding detection system (vFDS), an online statistical anomaly detection framework that generates alerts based on abnormal variations in a selected hybrid collection of traffic flows. It does so by viewing collections of related packet streams as evolving probability distributions and measuring abnormal variations in their relationships based on the Hellinger distance, a measure of variability between two probability distributions. Experimental results show that vFDS is fast and accurate in detecting flooding attacks, without noticeably increasing call setup times or introducing jitter into the voice streams.
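    The Hellinger distance itself is a standard quantity over discrete distributions. The sketch below computes it; the SIP message-type shares used as an example are invented, not from the paper's traces:

```python
from math import sqrt

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions:
    H(P, Q) = (1/sqrt(2)) * sqrt(sum_i (sqrt(p_i) - sqrt(q_i))**2).
    Ranges from 0 (identical) to 1 (disjoint support)."""
    return sqrt(sum((sqrt(pi) - sqrt(qi)) ** 2 for pi, qi in zip(p, q))) / sqrt(2)

# E.g. compare the observed mix of SIP message types in the current
# sampling interval against the distribution from a training interval.
training = [0.50, 0.35, 0.10, 0.05]   # hypothetical INVITE, 200 OK, ACK, BYE shares
observed = [0.90, 0.05, 0.03, 0.02]   # INVITE-heavy interval: possible flood
score = hellinger(training, observed)  # large score would trigger an alert
```

    A detector along these lines would raise an alert when the score exceeds a learned threshold; the thresholding logic is omitted here.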

  • Dynamic Bandwidth Auctions in Multioverlay P2P Streaming with Network Coding

    Publication Year: 2008, Page(s): 806 - 820
    Cited by: Papers (18)

    In peer-to-peer (P2P) live streaming applications such as IPTV, it is natural to accommodate multiple coexisting streaming overlays, corresponding to channels of programming. In the case of multiple overlays, it is a challenging task to design an appropriate bandwidth allocation protocol such that these overlays efficiently share the available upload bandwidth on peers, media content is efficiently distributed to achieve the required streaming rate, and streaming costs are minimized. In this paper, we seek to design simple, effective, and decentralized strategies to resolve conflicts among coexisting streaming overlays in their bandwidth competition and combine such strategies with network-coding-based media distribution to achieve efficient multioverlay streaming. Since such conflict-resolution strategies are game theoretic in nature, we characterize them as a decentralized collection of dynamic auction games, in which downstream peers bid for upload bandwidth at the upstream peers for the delivery of coded media blocks. With extensive theoretical analysis and performance evaluation, we show that these local games converge to an optimal topology for each overlay in realistic asynchronous environments. Together with network-coding-based media dissemination, these streaming overlays adapt to peer dynamics, fairly share peer upload bandwidth to achieve satisfactory streaming rates, and can be prioritized.
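    A single auction round at one upstream peer can be sketched as a first-price allocation. The paper's games are dynamic, with peers adjusting bids across rounds; the peer names, bids, and slot count below are invented:

```python
def run_auction(upload_slots, bids):
    """One round of a first-price auction at an upstream peer: the
    highest-bidding downstream peers win the available upload slots."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    return [peer for peer, _ in ranked[:upload_slots]]

# Downstream peers from two coexisting overlays compete for 2 upload
# slots at the same upstream peer; bids reflect each peer's urgency.
bids = {"peer_a": 3.0, "peer_b": 1.5, "peer_c": 2.2}
winners = run_auction(2, bids)
```

    In a dynamic version, losing peers would raise their bids (up to a budget) in later rounds, which is what drives the convergence behavior the abstract claims.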

  • Enhancing Search Performance in Unstructured P2P Networks Based on Users' Common Interest

    Publication Year: 2008, Page(s): 821 - 836
    Cited by: Papers (12)

    Peer-to-peer (P2P) networks establish loosely coupled application-level overlays on top of the Internet to facilitate efficient sharing of resources. They can be roughly classified as either structured or unstructured networks. Without stringent constraints over the network topology, unstructured P2P networks can be constructed very efficiently and are therefore considered suitable to the Internet environment. However, the random search strategies adopted by these networks usually perform poorly with a large network size. In this paper, we seek to enhance the search performance in unstructured P2P networks by exploiting users' common interest patterns, captured within a probability-theoretic framework termed the user interest model (UIM). A search protocol and a routing table updating protocol are further proposed to expedite the search process by self-organizing the P2P network into a small world. Both theoretical and experimental analyses are conducted, demonstrating the effectiveness and efficiency of our approach.

  • Scalable and Efficient End-to-End Network Topology Inference

    Publication Year: 2008, Page(s): 837 - 850
    Cited by: Papers (3)

    To construct an efficient overlay network, information about the underlay is important. We consider using end-to-end measurement tools such as traceroute to infer the underlay topology among a group of hosts. Previously, Max-Delta has been proposed to infer a highly accurate topology with a low number of traceroutes. However, Max-Delta relies on a central server to collect traceroute results and to select paths for hosts to traceroute, and it is therefore not scalable to large groups. In this paper, we investigate a distributed inference scheme to support scalable inference. In our scheme, each host joins an overlay tree before conducting traceroute. A host then independently selects paths for tracerouting and exchanges traceroute results with others through the overlay tree. As a result, each host can maintain a partially discovered topology. We have studied the key issue in the scheme, that is, how a low-diameter overlay tree can be constructed. Furthermore, we propose several techniques to reduce the measurement cost for topology inference. They include 1) integrating the Doubletree algorithm into our scheme to reduce measurement redundancy, 2) setting up a lookup table for routers to reduce traceroute size, and 3) conducting topology abstraction and reducing the computational frequency to reduce the computational overhead. Compared to the naive Max-Delta, our scheme is fully distributed and scalable. The computational loads for target selection are distributed to all the hosts instead of a single server. In addition, each host only communicates with a few other hosts, so the consumption of edge bandwidth at a host is limited. We have done simulations on Internet-like topologies and conducted measurements on PlanetLab. The results show that the constructed tree has a low diameter and can support quick data exchange between hosts. Furthermore, the proposed improvements can efficiently reduce measurement redundancy, bandwidth consumption, and computational overhead.

  • FlexiTP: A Flexible-Schedule-Based TDMA Protocol for Fault-Tolerant and Energy-Efficient Wireless Sensor Networks

    Publication Year: 2008, Page(s): 851 - 864
    Cited by: Papers (18) | Patents (1)

    FlexiTP is a novel TDMA protocol that offers a synchronized and loose slot structure. Nodes in the network can build, modify, or extend their scheduled number of slots during execution, based on their local information. Nodes wake up for their scheduled slots; otherwise, they switch into power-saving sleep mode. This flexible schedule allows FlexiTP to be strongly fault tolerant and highly energy efficient. FlexiTP is scalable for a large number of nodes because its depth-first-search schedule minimizes buffering, and it allows communication slots to be reused by nodes outside each other's interference range. Hence, the overall scheme of FlexiTP provides end-to-end guarantees on data delivery (throughput, fair access, and robust self-healing) while also respecting the severe energy and memory constraints of wireless sensor networks. Simulations in ns-2 show that FlexiTP ensures energy efficiency and is robust to network dynamics (faults such as dropped packets and nodes joining or leaving the network) under various network configurations (network topology and network density), providing an efficient solution for data-gathering applications. Furthermore, under high contention, FlexiTP outperforms Z-MAC in terms of energy efficiency and network performance.
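    The depth-first-search schedule mentioned above — ordering transmissions so a parent's slot follows its entire subtree, which minimizes buffering — can be sketched as a post-order traversal. The tree and names are illustrative, and FlexiTP's real slot structure (claiming, modification, reuse outside interference range) is richer:

```python
def dfs_slots(tree, root):
    """Assign TDMA slots in post-order: each node transmits only after
    its whole subtree has, so a parent forwards buffered data at once."""
    slots, counter = {}, [0]

    def visit(node):
        for child in tree.get(node, []):
            visit(child)
        slots[node] = counter[0]   # node's slot comes after its subtree
        counter[0] += 1

    visit(root)
    return slots

# Hypothetical data-gathering tree rooted at the sink.
tree = {"sink": ["a", "b"], "a": ["c", "d"]}
slots = dfs_slots(tree, "sink")
```

    Because every node's slot follows those of its descendants, data flows up the tree within a single frame instead of waiting in buffers for the next one.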

  • TPDS Information for authors

    Publication Year: 2008, Page(s): c3
  • [Back cover]

    Publication Year: 2008, Page(s): c4

Aims & Scope

IEEE Transactions on Parallel and Distributed Systems (TPDS) is published monthly. It publishes a range of papers, comments on previously published papers, and survey articles that deal with the parallel and distributed systems research areas of current importance to our readers.


Meet Our Editors

Editor-in-Chief
David Bader
College of Computing
Georgia Institute of Technology