Distributed Computing Systems, 2009. ICDCS '09. 29th IEEE International Conference on

Date: 22-26 June 2009

Displaying Results 1 - 25 of 87
  • [Front cover]

    Page(s): C1 | PDF (45 KB) | Freely Available from IEEE
  • [Title page i]

    Page(s): i
    PDF (81 KB) | Freely Available from IEEE
  • [Title page iii]

    Page(s): iii
    PDF (182 KB) | Freely Available from IEEE
  • [Copyright notice]

    Page(s): iv
    PDF (113 KB) | Freely Available from IEEE
  • Table of contents

    Page(s): v - xi
    PDF (169 KB) | Freely Available from IEEE
  • Message from the General Chair

    Page(s): xi
    PDF (66 KB) | Freely Available from IEEE
  • Message from the Program Chair

    Page(s): xii - xiii
    PDF (96 KB) | Freely Available from IEEE
  • Committee Lists

    Page(s): xiv - xv
    PDF (95 KB) | Freely Available from IEEE
  • Rethinking Multicast for Massive-Scale Platforms

    Page(s): 1
    PDF (185 KB) | HTML

    A dramatic scale-up of distributed computing platforms is underway. Internet routers can contain hundreds or thousands of line cards. Cloud computing platforms may contain tens or even hundreds of thousands of machines. What is gluing all of this together? Multicast, to support data replication, event streams, and coordination. Yet yesterday's multicast protocols are poorly matched to this new generation of uses; so much so that many cloud platforms refuse to deploy multicast as such, and have instead resorted to clumsy alternatives, mapping multicast to TCP or even web-service method invocations. This talk will explore the inadequacies of existing protocols, early progress towards better ones, and the longer-term research agenda.

  • A Reinforcement Learning Approach to Online Web Systems Auto-configuration

    Page(s): 2 - 11
    PDF (271 KB) | HTML

    In a web system, configuration is crucial to performance and service availability. Configuring such a system is challenging, not only because of the dynamics of Internet traffic but also because of the dynamic virtual-machine environment in which the system tends to run. In this paper, we propose a reinforcement learning approach for autonomic configuration and reconfiguration of multi-tier web systems. It is able to adapt performance parameter settings not only to changes in workload but also to changes in virtual-machine configuration. The RL approach is enhanced with an efficient initialization policy to reduce the learning time for online decisions. The approach is evaluated using the TPC-W benchmark on a three-tier website hosted in a Xen-based virtual-machine environment. Experimental results demonstrate that the approach can auto-configure the web system dynamically in response to changes in both workload and VM resources, and can drive the system into a near-optimal configuration in fewer than 25 trial-and-error iterations.

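The paper's actual state and reward design (TPC-W on Xen) is not detailed in the abstract; as a minimal sketch of the general idea, a tabular, bandit-style Q-learning loop that tunes a single hypothetical configuration knob against a synthetic reward might look like this (the setting values, reward shape, and hyperparameters are all illustrative assumptions, not the paper's):

```python
import random

# Toy environment: a hypothetical "MaxClients"-style knob whose
# (synthetic) reward peaks at 300. Stand-in for measured throughput.
SETTINGS = [100, 200, 300, 400, 500]

def reward(setting):
    return -abs(setting - 300) / 100.0  # best performance at 300

def q_learn(episodes=500, alpha=0.3, epsilon=0.2, seed=1):
    rng = random.Random(seed)
    q = {s: 0.0 for s in SETTINGS}
    for _ in range(episodes):
        # epsilon-greedy choice over candidate configuration settings
        if rng.random() < epsilon:
            s = rng.choice(SETTINGS)
        else:
            s = max(q, key=q.get)
        # one-step Q update from the observed reward
        q[s] += alpha * (reward(s) - q[s])
    return max(q, key=q.get)

best = q_learn()  # converges to the best setting in this toy model
```

The paper's contribution includes an initialization policy to shorten exactly this kind of trial-and-error phase, which a naive loop like the above does not have.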
  • Graduated QoS by Decomposing Bursts: Don't Let the Tail Wag Your Server

    Page(s): 12 - 21
    PDF (663 KB) | HTML

    The growing popularity of hosted storage services and shared storage infrastructure in data centers is driving the recent interest in resource management and QoS in storage systems. The bursty nature of storage workloads raises significant performance and provisioning challenges, leading to increased infrastructure, management, and energy costs. We present a novel dynamic workload shaping framework to handle bursty workloads, where the arrival stream is dynamically decomposed to isolate its bursts, and then rescheduled to exploit available slack. We show how decomposition reduces the server capacity requirements dramatically while affecting QoS guarantees minimally. We present an optimal decomposition algorithm RTT and a recombination algorithm Miser, and show the benefits of the approach by performance evaluation using several storage traces.

  • Reducing Disk I/O Performance Sensitivity for Large Numbers of Sequential Streams

    Page(s): 22 - 31
    PDF (209 KB) | HTML

    Retrieving sequential rich media content from modern commodity disks is a challenging task. As disk capacity increases, there is a need to increase the number of streams that are allocated to each disk. However, when multiple streams access a single disk, throughput is dramatically reduced because of disk head seek overhead, resulting in requirements for more disks. Thus, there is a tradeoff between how many streams should be allowed to access a disk and the total throughput that can be achieved. In this work we examine this tradeoff and provide an understanding of the issues along with a practical solution. We use DiskSim, a detailed architectural simulator, to examine several aspects of a modern I/O subsystem, and we show the effect of various disk parameters on system performance under multiple sequential streams. We then propose a solution that dynamically adjusts I/O request streams based on host and I/O subsystem parameters. We implement our approach in a real system and perform experiments with a small and a large disk configuration. Our approach improves disk throughput by up to a factor of 4 with a workload of 100 sequential streams, without requiring large amounts of memory on the storage node. Moreover, it is able to adjust (statically) to different storage node configurations, essentially making the I/O subsystem insensitive to the number of I/O streams used.

  • Fragmentation Design for Efficient Query Execution over Sensitive Distributed Databases

    Page(s): 32 - 39
    PDF (231 KB) | HTML

    The balance between privacy and utility is a classical problem with an increasing impact on the design of modern information systems. On one side it is crucial to ensure that sensitive information is properly protected; on the other, the impact of protection on the workload must be limited, as query efficiency and system performance remain primary requirements. We address this privacy/efficiency balance by proposing an approach that, starting from a flexible definition of confidentiality constraints on a relational schema, applies encryption parsimoniously and relies mostly on fragmentation to protect sensitive associations among attributes. Fragmentation is guided by workload considerations so as to minimize the cost of executing queries over fragments. We discuss the minimization problem that arises when fragmenting data and provide a heuristic approach to its solution.

  • Lightweight Secure Search Protocols for Low-cost RFID Systems

    Page(s): 40 - 48
    PDF (216 KB) | HTML

    RFID technology can potentially be used in many applications. A typical RFID system involves a reader and a number of tags, which may range from battery-powered tags with Wi-Fi capabilities to low-cost tags that are constrained in computational capacity and hardware resources. Keeping RFID systems secure is crucial, since they are vulnerable to a number of malicious attacks. For low-cost RFID systems, security problems become much more challenging, because traditional security mechanisms are infeasible on low-cost tags due to their resource constraints. Tag search is an important functionality that an RFID system should provide. In this paper, we study how to secure tag search, with a focus on low-cost RFID systems. Existing solutions are mostly based on hash functions and consume 8,000 to 10,000 gates; however, low-cost tags can afford at most 2,000 gates for security features. We propose several lightweight secure search protocols based on linear feedback shift registers (LFSR) and physically unclonable functions (PUF). Our protocols prevent adversaries from learning tag identities and from impersonating the RFID reader or tags. Experimental results show that our solutions have processing times of hundreds of nanoseconds and require no more than 1,400 hardware gates on tags.

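The paper's concrete LFSR/PUF protocol constructions are not given in the abstract; purely as an illustration of the LFSR primitive they build on, here is a 16-bit Fibonacci LFSR. The register width, the tap polynomial x^16 + x^14 + x^13 + x^11 + 1, and the seed are standard textbook choices, not the paper's parameters:

```python
def lfsr16(state, taps=(16, 14, 13, 11), steps=1):
    """Advance a 16-bit Fibonacci LFSR `steps` times.

    `taps` encodes the maximal-length polynomial
    x^16 + x^14 + x^13 + x^11 + 1 (a textbook example, not the
    paper's design); the feedback bit is the XOR of the tapped bits.
    """
    for _ in range(steps):
        fb = 0
        for t in taps:
            fb ^= (state >> (16 - t)) & 1
        state = ((state >> 1) | (fb << 15)) & 0xFFFF
    return state

# A reader and a tag that share the same seed derive the same
# pseudorandom stream, the basis of challenge/response-style search.
reader_state = tag_state = 0xACE1
for _ in range(100):
    reader_state = lfsr16(reader_state)
    tag_state = lfsr16(tag_state)
assert reader_state == tag_state
```

An LFSR of this kind costs on the order of one flip-flop per bit plus a few XOR gates, which is why it fits the sub-2,000-gate budget the abstract mentions, whereas a cryptographic hash does not.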
  • CAP: A Context-Aware Privacy Protection System for Location-Based Services

    Page(s): 49 - 57
    PDF (510 KB) | HTML

    We address issues related to privacy protection in location-based services (LBS). Most existing research in this field either requires a trusted third-party (anonymizer) or uses oblivious protocols that are computationally and communicationally expensive. Our design of privacy-preserving techniques is principled on not requiring a trusted third-party while being highly efficient in terms of time and space complexities. The problem has two interesting and challenging characteristics: First, the degree of privacy protection and LBS accuracy depends on the context, such as population and road density, around a user's location. Second, an adversary may violate a user's location privacy in two ways: (i) based on the user's location information contained in the LBS query payload, and (ii) by inferring a user's geographical location based on its device's IP address. To address these challenges, we introduce CAP, a Context-Aware Privacy-preserving LBS system with integrated protection for data privacy and communication anonymity. We have implemented CAP and integrated it with Google Maps, a popular LBS system. Theoretical analysis and experimental results validate CAP's effectiveness on privacy protection, LBS accuracy, and communication Quality-of-Service.

  • The Impact of Communication Models on Routing-Algorithm Convergence

    Page(s): 58 - 67
    PDF (263 KB) | HTML

    Autonomous routing algorithms, such as BGP, are intended to reach a globally consistent set of routes after nodes iteratively and independently collect, process, and share network information. Generally, the important role of the mechanism used to share information has been overlooked in previous analyses of these algorithms. In this paper, we explicitly study how the network-communication model affects algorithm convergence. To do this, we consider a variety of factors, including channel reliability, how much information is processed from channels, and how many channels are processed simultaneously. Using these factors, we define a taxonomy of communication models and identify particular models of interest, including those used in previous theoretical work, those that most closely model real-world implementations of BGP, and those of potential interest for the design of future routing algorithms. We characterize an extensive set of relationships among models in our taxonomy and show that convergence depends on the communication model in nontrivial ways. These results highlight that certain models are best for proving conditions that guarantee convergence, while other models are best for characterizing conditions that might permit nonconvergence.

  • Selective Protection: A Cost-Efficient Backup Scheme for Link State Routing

    Page(s): 68 - 75
    PDF (291 KB) | HTML

    In recent years there has been substantial demand to reduce packet loss in the Internet. Among the schemes proposed, computing backup paths in advance is considered an effective way to reduce reaction time. Very commonly, a backup path is chosen to be maximally disjoint from the primary path, or, at the network level, backup paths are computed for all links (e.g., IPFRR). The validity of this straightforward choice rests on two assumptions: 1) all links may fail with equal probability; and 2) given today's high protection requirements, leaving links unprotected, or sharing links between the primary and backup paths, simply seems unacceptable. Nevertheless, many research studies have confirmed that link vulnerability in the Internet is far from equal. In addition, we have seen that full protection schemes may introduce high costs. In this paper, we argue that such approaches may not be cost-effective. We first analyze failure characteristics based on real-world traces from CERNET2, the China Education and Research Network 2. We observe that the failure probability of links is heavy-tailed, i.e., a small set of links causes most of the failures. We therefore propose a selective protection scheme. We carefully analyze the implementation details and the overhead of general backup-path schemes in today's Internet. We formulate an optimization problem in which routing performance (in terms of network-level availability) must be guaranteed while the backup cost is minimized; this cost is special in that it involves computation overhead. Consequently, we propose a novel critical-protection algorithm that is itself fast. We evaluate our scheme systematically, using real-world and randomly generated topologies, and show significant gains compared to the full protection scheme, even when the network availability requirement is 99.99%.

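The paper's critical-protection algorithm is not specified in the abstract; the underlying idea of protecting only the small set of failure-prone links suggested by a heavy-tailed trace can be sketched as follows (the link names, failure counts, and the simple greedy coverage rule are illustrative assumptions, not the paper's formulation):

```python
def select_links_to_protect(failure_counts, coverage=0.99):
    """Greedily protect the fewest links that cover `coverage` of
    observed failures -- a toy sketch of selective (vs. full)
    protection. `failure_counts` maps link id -> failure count."""
    total = sum(failure_counts.values())
    chosen, covered = [], 0
    for link, count in sorted(failure_counts.items(),
                              key=lambda kv: kv[1], reverse=True):
        if covered / total >= coverage:
            break
        chosen.append(link)
        covered += count
    return chosen

# Heavy-tailed toy trace: two links cause 99% of the failures,
# so only those two need precomputed backup paths at 99% coverage.
counts = {"A-B": 150, "B-C": 48, "C-D": 1, "D-E": 1}
protected = select_links_to_protect(counts)
```

Under a heavy-tailed failure distribution, a greedy rule like this protects a small fraction of links while covering almost all observed failures, which is the cost argument the abstract makes against full protection.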
  • Centaur: A Hybrid Approach for Reliable Policy-Based Routing

    Page(s): 76 - 84
    PDF (337 KB) | HTML

    In this paper, we consider the design of a policy-based routing system and the role that link state might play. Looking at the problem from a link-state perspective, we propose Centaur, a hybrid routing protocol combining the benefits of both link state and path vector. Through analytical and experimental studies, we demonstrate Centaur's potential in achieving rich policy expressiveness and high network availability. Our work shows that it is possible to combine link-state and path-vector approaches into a practical and efficient algorithm for policy-based routing.

  • Joint Optimization of Spectrum Handoff Scheduling and Routing in Multi-hop Multi-radio Cognitive Networks

    Page(s): 85 - 92
    PDF (584 KB) | HTML

    Spectrum handoff causes performance degradation in a cognitive network when the primary user reclaims its right to access the licensed spectrum. In a multi-hop cognitive network, this problem becomes even worse, since multiple links are involved: spectrum handoff across multiple links seriously affects network connectivity and routing. In this paper, we describe a cross-layer optimization approach that solves the spectrum handoff problem by jointly considering spectrum handoff scheduling and routing, and we propose a protocol called the Joint Spectrum Handoff Scheduling and Routing Protocol (JSHRP). This paper makes the following major contributions. First, the concept of "spectrum handoff of a single link" is extended to multiple links, termed "multi-link spectrum handoff". Second, we define the problem of coordinating the spectrum handoff of multiple links to minimize the total spectrum handoff latency under a network-connectivity constraint. This problem is proven to be NP-hard, and we propose both centralized and distributed greedy algorithms to minimize the total handoff latency in a multi-hop cognitive network. Moreover, we jointly design the rerouting mechanism with the spectrum handoff scheduling algorithm to improve network throughput. Unlike previous works, in which rerouting is performed after spectrum handoff, our rerouting mechanism is executed before the spectrum handoff actually happens. Simulation results show that JSHRP improves network performance by 50%, and that the higher the degree of interference the cognitive network experiences, the more improvement our solution brings.

  • Simulation Framework and Performance Analysis of Multimedia Broadcasting Service over Wireless Networks

    Page(s): 93 - 100
    PDF (584 KB) | HTML

    The recent development of high-speed data transmission over wireless networks enables multimedia broadcasting service to mobile users. Multimedia broadcasting involves interactions among different system and network components, so it is crucial for the service provider to verify the correctness of the system/service model and design, and their behavior, before a new type of service is deployed. However, due to the limitations of network simulations and scaled experimental testbeds, there has been no research on such verification and simulation frameworks for 3G broadcasting networks. We therefore propose a simulation and analysis framework for multimedia broadcasting service over wireless networks. With concrete modeling of the wireless physical channel, the network, and data processing on the client device, it enables the prediction of various system parameters of interest and of the quality of multimedia streams as perceived by users. Different models of system and network components can be plugged easily into our framework for further extension. Using this framework, we analyze the processing performance of decoding scalable videos on mobile devices in CDMA2000 wireless networks.

  • Transactional Mobility in Distributed Content-Based Publish/Subscribe Systems

    Page(s): 101 - 110
    PDF (343 KB) | HTML

    This paper formalizes transactional properties for publish/subscribe client mobility and develops protocols to realize them. Evaluations show that compared to traditional protocols, those developed in this paper, in addition to supporting transactional properties, are more stable with respect to message and processing overheads. Changes in factors such as the number of moving clients have little impact, making the protocols more scalable and simpler to administer due to predictable resource requirements.

  • Multicast Throughput of Hybrid Wireless Networks Under Gaussian Channel Model

    Page(s): 111 - 118
    PDF (368 KB) | HTML

    We study the multicast capacity of hybrid wireless networks consisting of ordinary wireless nodes and base stations under the Gaussian channel model, which generalizes both the unicast capacity and the broadcast capacity of hybrid wireless networks. We consider the hybrid extended network, where ordinary wireless nodes are placed in a square region A(n) with side length √n according to a Poisson point process of unit intensity. In addition, m base stations (BSs) serving as relay gateways are placed regularly in the region A(n) and are connected by a high-bandwidth wired network. Three broad categories of multicast strategies are proposed in this paper. According to the different scenarios in terms of m, n, and n_d, we select the optimal scheme from the three categories of strategies and derive the achievable multicast throughput based on the optimal decision.

  • Distributed Key Generation for the Internet

    Page(s): 119 - 128
    PDF (263 KB) | HTML

    Although distributed key generation (DKG) has been studied for some time, it has never been examined outside of the synchronous setting. We present the first realistic DKG architecture for use over the Internet. We propose a practical system model and define an efficient verifiable secret sharing scheme in it. We observe the necessity of Byzantine agreement for asynchronous DKG and analyze the difficulty of using a randomized protocol for it. Using our verifiable secret sharing scheme and a leader-based agreement protocol, we then design a DKG protocol for public-key cryptography. Finally, along with traditional proactive security, we also introduce group modification primitives in our system.

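The paper's asynchronous verifiable secret sharing and agreement machinery is not reproduced here; as background, the plain (t, n) Shamir secret sharing that DKG constructions build on can be sketched over a small prime field (the prime, secret, and parameters below are toy choices for illustration; real DKGs work over much larger groups and add verifiability):

```python
import random

P = 2**31 - 1  # a Mersenne prime; toy field for illustration only

def share(secret, t, n, rng=random.Random(7)):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(1234, t=3, n=5)
assert reconstruct(shares[:3]) == 1234  # any 3 of the 5 shares suffice
```

In a DKG, no single dealer knows the secret: each party runs a sharing like this and the joint key is the sum of the shared values, with verifiability and agreement (the paper's focus) ensuring misbehaving parties are caught.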
  • Clock-like Flow Replacement Schemes for Resilient Flow Monitoring

    Page(s): 129 - 136
    PDF (264 KB) | HTML

    In the context of a collaborative surveillance system for active TCP sessions handled by a networking device, we consider two problems: protecting a flow table from overflow, and developing an efficient algorithm for estimating the number of active flows coupled with the identification of "heavy-hitter" TCP sessions. Our proposed techniques are designed for the limited hardware and software resources allocated for this purpose on line cards, as well as for the very high data rates that modern line cards handle; specifically, we are interested in cooperatively maintaining per-flow state at low cost in a way that is resilient to dynamic traffic mixes. We investigate a traditional timeout-based mechanism for managing the flow table for per-flow monitoring, called Timeout-Based Purging (TBP); our proposed Clock-like Flow Replacement (CFR) algorithms, which use the "clock" replacement policy; and a hybrid approach combining the two. Experiments with Internet traces show that our CFR schemes significantly reduce both false-positive and false-negative rates regardless of whether the flow table is fully occupied or largely empty, even under SYN flooding. Our hybrid scheme estimates the number of active flows accurately and confines the heavy-hitters without storing packet counters.

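The CFR algorithms themselves are not detailed in the abstract; a minimal sketch of a flow table driven by the classic "clock" (second-chance) replacement policy that the abstract names might look like this (the structure below is an illustrative reconstruction, not the paper's implementation):

```python
class ClockFlowTable:
    """Fixed-size flow table with clock (second-chance) replacement:
    each flow has a reference bit; the clock hand clears the bits of
    recently referenced flows and evicts the first unreferenced one."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []          # list of [flow_id, ref_bit]
        self.index = {}          # flow_id -> slot position
        self.hand = 0

    def access(self, flow_id):
        """Record a packet for `flow_id`; return the evicted flow id,
        or None if no eviction was needed."""
        if flow_id in self.index:            # packet for a known flow
            self.slots[self.index[flow_id]][1] = 1
            return None
        if len(self.slots) < self.capacity:  # free slot available
            self.index[flow_id] = len(self.slots)
            self.slots.append([flow_id, 1])
            return None
        while self.slots[self.hand][1]:      # give second chances
            self.slots[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.capacity
        victim = self.slots[self.hand][0]    # evict and reuse the slot
        del self.index[victim]
        self.slots[self.hand] = [flow_id, 1]
        self.index[flow_id] = self.hand
        self.hand = (self.hand + 1) % self.capacity
        return victim
```

Compared with pure timeout-based purging, a policy like this keeps flows that keep receiving packets (their reference bits stay set) and recycles idle entries, which is the resilience property the abstract claims under table pressure such as SYN floods.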
  • The Taming of the Shrew: Mitigating Low-Rate TCP-Targeted Attack

    Page(s): 137 - 144
    PDF (279 KB) | HTML

    A Shrew attack, which uses a low-rate burst carefully designed to exploit TCP's retransmission timeout mechanism, can throttle the bandwidth of a TCP flow in a stealthy manner. While such an attack can significantly degrade the performance of all TCP-based protocols and services including Internet routing (e.g., BGP), no existing scheme clearly solves the problem in real network scenarios. In this paper, we propose a simple protection mechanism, called SAP (Shrew Attack Protection), for defending against a Shrew attack. Rather than attempting to track and isolate Shrew attackers, SAP identifies TCP victims by monitoring their drop rates and preferentially admits those packets from victims with high drop rates to the output queue. This is to ensure that well-behaved TCP sessions can retain their bandwidth shares. Our simulations indicate that under a Shrew attack, SAP can prevent TCP sessions from closing, and effectively enable TCP flows to maintain high throughput. SAP is a destination-port-based mechanism and requires only a small number of counters to find potential victims, which makes SAP readily implementable on top of existing router mechanisms.

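SAP's exact counters and queue mechanics are not given in the abstract; a toy sketch of the stated idea -- when the output queue is congested, preferentially admit packets for destination ports with high recent drop rates (likely Shrew victims) -- might look like this (the function shape, parameter names, and the 0.3 threshold are illustrative assumptions, not the paper's design):

```python
def sap_admit(queue_len, high_water, dst_port, drop_rate, threshold=0.3):
    """Toy SAP-style admission decision.

    Below the high-water mark, every packet is admitted. At or above
    it, only packets whose destination port shows a high recent drop
    rate (a likely victim of the low-rate attack) are admitted, so
    throttled TCP sessions can recover their bandwidth share.
    """
    if queue_len < high_water:
        return True                          # uncongested: admit all
    return drop_rate.get(dst_port, 0.0) > threshold

drops = {80: 0.6, 22: 0.05}                  # hypothetical per-port rates
admitted_victim = sap_admit(100, 100, 80, drops)      # victim: admitted
admitted_other = sap_admit(100, 100, 22, drops)       # non-victim: dropped
```

Because the state is a small per-destination-port table rather than per-flow tracking of attackers, a rule like this matches the abstract's claim of needing only a few counters on top of existing router mechanisms.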