
2010 IEEE Fifth International Conference on Networking, Architecture and Storage (NAS)

Date: 15-17 July 2010


  • [Front cover]

    Page(s): C1
  • [Title page i]

    Page(s): i
  • [Title page iii]

    Page(s): iii
  • [Copyright notice]

    Page(s): iv
  • Table of contents

    Page(s): v - ix
  • Preface

    Page(s): x - xi
  • Organizing Committee

    Page(s): xii
  • Technical Program Committee

    Page(s): xiii - xv
  • Steering Committee

    Page(s): xvi
  • Keynote speech

    Page(s): xvii

    Summary form only given. In this talk, inspired by ten years of work on the Lustre cluster file system, we take a tour of the internal features that are required for recovery and data management to work reliably and scale horizontally. Scale now means handling hundreds of servers and tens of thousands of clients in large data centers, often with replicas spanning the globe. We look at search, striping, clustering of metadata services and its recovery, caching and replication, as well as HSM, migration, and other data management features. Currently such features are implemented in an ad hoc manner, deep in the guts of many systems. We will demonstrate that there is an opportunity to define concise semantics that enables all such features, including their recovery, with reasonable algorithmic complexity. This can be the first step towards a more modular, interoperable approach to file-based data management applications, including file servers, middleware, backends for data and metadata, and data management applications. This is similar to what relational algebra did 30 years ago for database applications.

  • Time-Bounded Essential Localization for Wireless Sensor Networks

    Page(s): 3 - 12

    In many practical applications of wireless sensor networks, it is crucial to accomplish the localization of sensors within a given time bound. We find that the traditional definition of relative localization is inappropriate for evaluating its actual overhead. To address this problem, we define a novel problem called essential localization, and present the first rigorous study of the essential localizability of a wireless sensor network within a given time bound. We propose an efficient distributed algorithm for time-bounded essential localization over a sensor network, and evaluate its performance with extensive simulations.

  • Fault Tolerant Data Collection in Heterogeneous Intelligent Monitoring Networks

    Page(s): 13 - 18

    In this work, we focus on the problem of fault tolerant data collection in heterogeneous Intelligent Monitoring Networks (IMNs). IMNs are expected to have a wide range of applications in fields such as forest monitoring, structural monitoring, and industrial plant monitoring. We present a fault tolerant data collection scheme for the hierarchical structure of IMNs. We borrow a technique from the popular BitTorrent software to maintain highly efficient and robust data collection in IMNs with heterogeneous and faulty devices: monitoring sensors are instructed to randomly select some overheard transmissions and process them during data fusion. Our preliminary study confirms the benefits of this fault tolerant data collection strategy.
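
    As a concrete (and purely illustrative) reading of the random-selection idea, the sketch below keeps a fixed-size uniform sample of overheard transmissions via reservoir sampling; the paper's BitTorrent-inspired mechanism may differ, and all names here are invented.

        import random

        def reservoir_update(sample, k, packet, seen_count):
            """Maintain a uniform random k-sample of overheard packets for
            fusion; seen_count is the 1-based index of this packet among all
            transmissions overheard so far."""
            if len(sample) < k:
                sample.append(packet)
            else:
                j = random.randrange(seen_count)  # keep with probability k/seen_count
                if j < k:
                    sample[j] = packet
            return sample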

  • A Probabilistic Routing Protocol for Heterogeneous Sensor Networks

    Page(s): 19 - 27

    The past five years have witnessed rapid development in wireless sensor networks, which are now widely used in military and civilian applications. Because application environments impose different requirements, sensors with different capacities, power budgets, and so on are deployed. Data routing in such heterogeneous sensor networks is a challenging task. On one hand, heterogeneity brings diversity in transmission ranges, which in turn leads to asymmetric links in the communication graph; conventional routing strategies based on undirected graphs therefore become unsuitable. On the other hand, sensors communicate with each other over intermittent asymmetric links, and it is important to provide an assurable delivery rate for mission-critical applications. In this paper, we propose ProHet: a Probabilistic routing protocol for Heterogeneous sensor networks that handles asymmetric links well and works in a distributed manner with low overhead and an assurable delivery rate. ProHet first produces a bidirectional routing abstraction by finding a reverse routing path for every asymmetric link. It then uses a probabilistic strategy to choose forwarding nodes based on historical statistics, which is shown by theoretical analysis to achieve an assurable delivery rate. Extensive simulations verify the efficiency of the proposed protocol.
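
    To make the probabilistic forwarding step concrete, here is a minimal sketch assuming forwarders are drawn with probability proportional to their historical delivery ratio; the statistic, the prior, and all names are assumptions, not the paper's exact rule.

        import random

        class NodeStats:
            """Per-neighbor forwarding history kept by a node."""
            def __init__(self):
                self.sent = {}       # neighbor id -> packets forwarded to it
                self.delivered = {}  # neighbor id -> deliveries it confirmed

            def delivery_ratio(self, nbr):
                sent = self.sent.get(nbr, 0)
                # Optimistic prior of 0.5 for neighbors with no history yet.
                return self.delivered.get(nbr, 0) / sent if sent else 0.5

        def choose_forwarder(stats, candidates):
            """Draw a next hop with probability proportional to its ratio."""
            weights = [stats.delivery_ratio(n) for n in candidates]
            if sum(weights) == 0:
                return random.choice(candidates)
            return random.choices(candidates, weights=weights, k=1)[0]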

  • Characterizing the Dependability of Distributed Storage Systems Using a Two-Layer Hidden Markov Model-Based Approach

    Page(s): 31 - 40

    Dependability is of paramount importance in modern distributed storage systems. A challenging issue in deploying a storage system with given dependability requirements, or in improving an existing system's dependability, is how to comprehensively and efficiently characterize that dependability. In this paper, we present a two-layer Hidden Markov Model (HMM) to characterize the dependability of a distributed storage system, focusing on the parallel file system layer. By training the model with observable measurements under faulty scenarios, such as I/O performance, we quantify system dependability via a tuple of state transition probability, service degradation, and fault latency under those scenarios. Our experimental results on a distributed storage system with PVFS (Parallel Virtual File System) demonstrate the effectiveness of our HMM-based approach, which efficiently captures the behavior patterns of the target system under disk faults and memory overuse.
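
    As a toy illustration of one ingredient of this approach, the sketch below estimates state transition probabilities from an observed state sequence; the paper's two-layer HMM, with hidden states inferred from I/O measurements, is substantially richer.

        from collections import Counter, defaultdict

        def transition_probs(states):
            """states: e.g. ['healthy', 'degraded', 'degraded', 'faulty'].
            Returns {state: {next_state: probability}}."""
            counts = defaultdict(Counter)
            for a, b in zip(states, states[1:]):
                counts[a][b] += 1
            return {a: {b: n / sum(c.values()) for b, n in c.items()}
                    for a, c in counts.items()}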

  • A High Effective Indexing and Retrieval Method Providing Block-Level Timely Recovery to Any Point-in-Time

    Page(s): 41 - 50

    Block-level continuous data protection (CDP) logs every disk write operation so that the disk can be rolled back to any arbitrary point-in-time within a time window. Because each update operation is time-stamped and logged, indexing such huge numbers of records is an important and challenging problem. Unfortunately, conventional indexing methods cannot efficiently record large numbers of versions or support instant “time-travel” queries in CDP. In this paper, we present an effective indexing method providing timely recovery to any point-in-time in comprehensive versioning systems, called the Hierarchical Spatial-Temporal Indexing Method (HSTIM). The basic principle of HSTIM is to partition the time domain and the production storage LBAs into time slices and segments, respectively, according to the update frequency of disk I/Os, and to build a separate index file for each segment. To meet the demand for instant views of historical data, the metadata of production storage is indexed independently. For long-range historical data retrieval, HSTIM introduces index snapshots to reduce retrieval time. Another distinctive feature of HSTIM is its incremental retrieval method, which achieves high query performance at time point t + Δt if the neighboring time point t was queried previously. The paper compares HSTIM with traditional B+-tree and multi-version B-tree (MVBT) indexes in many respects. Experiments with real-workload I/O trace files show that HSTIM can locate historical data within 8.05 seconds for a recovery point 48 hours back, while the B+-tree takes 24.04 seconds. With index snapshots applied, HSTIM reduces this retrieval time to under 3 seconds.
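
    The two-dimensional partitioning can be pictured with the toy routing function below, which maps a logged write to the index file for its (segment, time-slice) cell so that a point-in-time query only touches the files covering its range. Uniform slice and segment widths are a simplification here; HSTIM sizes them by update frequency.

        SLICE_SECONDS = 3600      # width of one time slice (illustrative)
        SEGMENT_BLOCKS = 1 << 20  # LBAs covered by one segment (illustrative)

        def index_file_for(lba, timestamp):
            """Name the per-(segment, slice) index file a logged write goes to."""
            segment = lba // SEGMENT_BLOCKS
            tslice = int(timestamp) // SLICE_SECONDS
            return f"idx_seg{segment:04d}_t{tslice:08d}.idx"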

  • A Fine-Grained Data Reconstruction Algorithm for Solid-State Disks

    Page(s): 51 - 59

    Solid-state disks (SSDs) with high I/O performance are becoming increasingly popular. To extend the lifetime of flash memory, one can apply wear-leveling strategies to manage data blocks. However, wear-leveling strategies inevitably degrade write performance. In addition, wear-leveling strategies make a block unwritable when one bit of the block is invalid. Although data reconstruction techniques have been widely employed in disk arrays, such techniques have not been studied in the context of solid-state disks. In this paper, we present a new fine-grained data-reconstruction algorithm for solid-state disks. The algorithm aims to provide a simple yet efficient wear-leveling strategy that improves both the I/O performance and the reliability of solid-state disks. Simulation experiments show that all data blocks end up with very similar erasure counts, and that the number of extra erasures incurred by our algorithm is marginal.
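
    For background, a generic least-worn-first allocator of the kind wear-leveling strategies build on is sketched below; it illustrates how erase counts are kept uniform, and is not the paper's reconstruction algorithm itself.

        import heapq

        class WearLeveler:
            def __init__(self, n_blocks):
                # Min-heap of (erase_count, block id): coolest block on top.
                self.free = [(0, b) for b in range(n_blocks)]
                heapq.heapify(self.free)

            def allocate(self):
                """Hand out the free block with the fewest erasures."""
                erase_count, block = heapq.heappop(self.free)
                return block, erase_count

            def recycle(self, block, erase_count):
                """Return a just-erased block to the pool, one erase older."""
                heapq.heappush(self.free, (erase_count + 1, block))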

  • A Distributed Approach for Hidden Wormhole Detection with Neighborhood Information

    Page(s): 63 - 72

    Ad hoc networks are promising but are vulnerable to selfish and malicious attacks. One kind of malicious attack, the hidden wormhole attack, can be mounted easily and is immune to cryptographic techniques. Wormholes distort the network topology and degrade the performance of applications such as localization and data collection; a wormhole attack is one of the most severe threats to an ad hoc network. Unfortunately, most state-of-the-art wormhole detection algorithms are not practicable. We observe and prove that nodes attacked by the same wormhole are either 1-hop or 2-hop neighbors and that, with high probability, the intersection of their two neighbor sets contains 3 nodes that are not 1-hop neighbors of one another; such a phenomenon does not arise in a normal topology. Based on this, we design a novel distributed algorithm for wormhole detection and isolation with polynomial complexity, and we analyze its detection probability. Simulation results show that the algorithm performs well in terms of detection probability, network overhead, false alarms, and missed detections.
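
    A simplified rendering of the stated neighborhood test (one plausible reading, not the paper's exact predicate): flag a node pair whose common neighborhood contains three mutually non-adjacent nodes.

        from itertools import combinations

        def looks_like_wormhole(u, v, nbrs):
            """nbrs maps each node id to its set of 1-hop neighbor ids.
            True if N(u) and N(v) share >= 3 nodes that are pairwise
            non-adjacent -- a pattern argued not to arise normally."""
            common = nbrs[u] & nbrs[v]
            for trio in combinations(common, 3):
                if all(b not in nbrs[a] for a, b in combinations(trio, 2)):
                    return True
            return False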

  • A Simple Group Key Management Approach for Mobile Ad Hoc Networks

    Page(s): 73 - 78

    Securing communications among a group of nodes in mobile ad hoc networks (MANETs) is challenging due to the lack of trusted infrastructure. Group key management is one of the basic building blocks of secure group communication. A group key is a common secret used in cryptographic algorithms, and group key management involves creating and distributing this common secret to all group members. A change of membership requires the group key to be refreshed to ensure backward and forward secrecy. In this paper, we extend our previous work with new protocols. Our basic idea is that each group member can deduce the group key locally without needing to order intermediate keys. A multicast tree is formed for efficient and reliable message dissemination.

  • Label-Based DV-Hop Localization Against Wormhole Attacks in Wireless Sensor Networks

    Page(s): 79 - 88

    Node localization has become an important issue in wireless sensor networks, given their broad applications in environment monitoring, emergency rescue, battlefield surveillance, and so on. The DV-Hop localization mechanism works well with the assistance of beacon nodes that are capable of self-positioning. However, if the network is invaded by a wormhole attack, the attacker can tunnel packets through the wormhole link and severely disrupt the DV-Hop localization process. The distance-vector propagation phase of DV-Hop localization further aggravates the positioning error compared with localization in the absence of wormhole attacks. In this paper, we analyze the impact of wormhole attacks on the DV-Hop localization scheme and, building on the basic DV-Hop process, propose a label-based secure localization scheme to defend against them. Simulation results demonstrate that our proposed scheme detects the wormhole attack and resists its adverse impacts with high probability.
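
    For context, plain (unsecured) DV-Hop estimates distance as hop count times an average hop size derived from the beacons, as in the background sketch below (data layout invented for illustration). A wormhole shortens apparent hop counts, which is exactly why these estimates collapse under attack.

        import math
        from itertools import combinations

        def avg_hop_size(positions, pair_hops):
            """positions: beacon id -> (x, y);
            pair_hops: (i, j) with i < j -> hop count between the beacons."""
            dist = hops = 0.0
            for i, j in combinations(sorted(positions), 2):
                (xi, yi), (xj, yj) = positions[i], positions[j]
                dist += math.hypot(xi - xj, yi - yj)
                hops += pair_hops[(i, j)]
            return dist / hops

        def estimated_distance(hops_to_beacon, hop_size):
            return hops_to_beacon * hop_size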

  • Design of a Reliable Distributed Secure Database System

    Page(s): 91 - 99

    We propose a design for a reliable distributed relational database management system that safeguards sensitive information. Each day, ever more sophisticated computer attacks strike the information storage systems of financial institutions, eCommerce businesses, universities, hospitals, and government agencies, so we urgently need secure and dependable ways to safeguard our information repositories and storage systems. In this research, we propose a highly robust, dependable, and secure relational database management system that prevents sensitive information from being lost, stolen, or corrupted. The basic idea is to (i) include a (k, n) threshold-based secret sharing scheme (k ≤ n) to provide privacy and durability, preventing sensitive information from being lost or stolen, (ii) incorporate an efficient distributed database management design to enhance system performance and minimize access contention, and (iii) integrate private information storage (PIS) schemes to reduce communication overhead and improve the robustness of the system.
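
    The (k, n) threshold primitive the design builds on is classically realized by Shamir's scheme over a prime field, sketched below; the paper's exact construction may differ, and the field size here is a toy choice.

        import random

        PRIME = 2**61 - 1  # a Mersenne prime; large enough for a toy secret

        def make_shares(secret, k, n):
            """Random degree-(k-1) polynomial f with f(0) = secret;
            share i is the point (i, f(i))."""
            coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
            def f(x):
                acc = 0
                for c in reversed(coeffs):
                    acc = (acc * x + c) % PRIME
                return acc
            return [(i, f(i)) for i in range(1, n + 1)]

        def recover(shares):
            """Lagrange interpolation at x = 0 from any k shares."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num = den = 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * -xj % PRIME
                        den = den * (xi - xj) % PRIME
                secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
            return secret

    Any k of the n shares reconstruct the secret, while fewer reveal nothing, which is what supplies both the privacy and the durability claimed above.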

  • A MAP Fitting Approach with Joint Approximation Oriented to the Dynamic Resource Provisioning in Shared Data Centres

    Page(s): 100 - 108

    In shared data centers, accurate workload models are indispensable for autonomic resource scheduling. Facing the problem of parameterizing the vast space of large MAPs (Markovian Arrival Processes) to fit real workload traces with time-varying characteristics, we propose JAMC, a MAP fitting approach with joint approximation of higher-order moments and lag correlations. Building on the state-of-the-art fitting method KPC, JAMC uses a similar divide-and-conquer approach to simplify the fitting problem and uses optimization to explore the best solution. Our experiments show that JAMC suffices to effectively predict the behavior of the queueing systems, and that a fitting time of a few minutes is acceptable for a shared data center. Through a sensitivity analysis of the fitted orders, we find that higher orders do not necessarily yield better results: for the Bellcore Aug89 trace, the appropriate fitted orders for the moments and autocorrelations lie around 10 ~ 20 and 10^4 ~ 3×10^4, respectively.
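
    The lag correlation being matched is the standard empirical autocorrelation of the trace, for example:

        def lag_correlation(xs, k):
            """Empirical lag-k autocorrelation of a trace xs (0 < k < len(xs))."""
            n = len(xs)
            mean = sum(xs) / n
            var = sum((x - mean) ** 2 for x in xs) / n
            cov = sum((xs[t] - mean) * (xs[t + k] - mean)
                      for t in range(n - k)) / (n - k)
            return cov / var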

  • A Trust Aware Grid Access Control Architecture Based on ABAC

    Page(s): 109 - 115

    Grid systems face great security challenges, among them access control. The attribute-based access control (ABAC) model has many merits: it is flexible, fine-grained, and dynamically suited to grid environments. As an important factor in grid security, trust is increasingly applied in security management, especially in access control. This paper puts forward a novel trust model for multi-domain grid environments and introduces a trust factor into the grid access control architecture to extend the classic ABAC model. By extending the authorization architecture of XACML, an extended ABAC-based access control architecture for grids is presented. In our experiments, increases and decreases of trust are non-symmetrical and the trust model is sensitive to malicious attacks: it effectively controls the trust change of different nodes and reduces the damage of malicious attacks.
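
    The non-symmetrical trust dynamics can be illustrated with a toy update rule (rates and form invented here; the paper's model is richer): trust climbs slowly on good interactions and drops sharply on bad ones, so a few malicious acts undo a long good history.

        ALPHA_UP, ALPHA_DOWN = 0.05, 0.5  # illustrative rates

        def update_trust(trust, interaction_ok):
            """trust in [0, 1]; slow bounded rise, fast multiplicative fall."""
            if interaction_ok:
                return trust + ALPHA_UP * (1.0 - trust)
            return trust * (1.0 - ALPHA_DOWN)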

  • Improving Disk Array Reliability Through Expedited Scrubbing

    Page(s): 119 - 125

    Disk scrubbing periodically scans the contents of a disk array to detect irrecoverable read errors and to reconstitute the contents of the lost blocks using the array's built-in redundancy. We address the issue of scheduling scrubbing runs in disk arrays that can tolerate two disk failures without incurring data loss, and propose to start an urgent scrubbing run of the whole array whenever a disk failure is detected. Used alone or in combination with periodic scrubbing runs, these expedited runs can improve the mean time to data loss of disk arrays over a wide range of disk repair times. As a result, our technique eliminates the need for frequent scrubbing runs and the need to maintain spare disks and on-site personnel to replace failed disks within a twenty-four-hour interval.
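
    A minimal scheduling sketch of the proposed policy (interfaces invented): periodic background scrubs plus an urgent whole-array scrub fired by any disk-failure event, so latent read errors are found while redundancy still covers them.

        class ScrubScheduler:
            def __init__(self, scrub_array):
                self.scrub_array = scrub_array  # scans every surviving disk

            def on_periodic_timer(self):
                # Infrequent routine runs suffice under this policy.
                self.scrub_array(priority="background")

            def on_disk_failure(self, disk_id):
                # Expedited run: start immediately rather than waiting for
                # the next periodic window.
                self.scrub_array(priority="urgent")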

  • Reliability Analysis of Declustered-Parity RAID 6 with Disk Scrubbing and Considering Irrecoverable Read Errors

    Page(s): 126 - 134

    We investigate the impact of Irrecoverable Read Errors (IREs) on the Mean Time To Data Loss (MTTDL) of declustered-parity RAID 6 systems. By extending the analytic model of Wu et al. for the reliability of RAID 5 systems, we obtain an MTTDL that accounts for two main types of data loss: loss caused by three independent disk failures, and loss due to an IRE detected during the rebuild after two disks have failed. We further refine the analysis by considering disk scrubbing, which reduces the probability of IREs by periodically reading the data stored on each disk. Our numerical results show that IREs have a large effect on the MTTDL, and that increasing the disk scrubbing rate is an effective countermeasure: for example, the MTTDL of a system where each disk is scrubbed every day is at least 27 times that of a system scrubbed once a year. In addition, declustered-parity RAID 6 improves on the reliability of standard non-declustered RAID 6: a declustered-parity system without disk scrubbing achieves an MTTDL at least 150 times that of a standard system where each disk is scrubbed every day.

  • An Evaluation of Two Typical RAID-6 Codes on Online Single Disk Failure Recovery

    Page(s): 135 - 142

    Redundant Arrays of Independent Disks (RAID) is a popular storage architecture offering high performance and reliability. RAID-6, which provides a higher level of reliability, is well studied; codes based on MDS (Maximum Distance Separable) constructions are favored for their optimal storage efficiency. RAID-6 offers continuous service in degraded mode during online failure recovery. However, online recovery imposes a considerable I/O workload on the storage system, since almost all the surviving data must be accessed, and, given the limits of disk bandwidth, user response time is significantly affected by the recovery workload. In this paper, we examine the online recovery performance of two typical MDS RAID-6 codes, the RDP code and the P-code. In our observations, P-code significantly outperforms RDP in user response time and recovery duration during single disk failure recovery. Our analysis shows that the difference stems not only from the parity layout but also from the parity organization. We therefore propose a new categorization of existing MDS RAID-6 codes based on their parity organization: all MDS RAID-6 codes can be categorized as Sym-codes, with only one type of parity, or Asym-codes, with at least two different types of parity.
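
    The "two types of parity" distinction can be made concrete with RDP, whose row parity and diagonal parity are computed differently, as in this toy construction (standard RDP geometry for a prime p; illustrative code, not from the paper):

        def rdp_parities(data, p):
            """data: (p-1) rows x (p-1) data disks of int symbols (XOR = ^).
            Returns the row-parity and diagonal-parity columns."""
            rows = p - 1
            row_par = [0] * rows
            for r in range(rows):
                for d in range(p - 1):
                    row_par[r] ^= data[r][d]
            diag_par = [0] * rows
            for r in range(rows):
                for d in range(p):  # data disks plus the row-parity disk
                    k = (r + d) % p
                    if k == p - 1:
                        continue  # the 'missing' diagonal is never stored
                    sym = data[r][d] if d < p - 1 else row_par[r]
                    diag_par[k] ^= sym
            return row_par, diag_par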
