IEEE Transactions on Dependable and Secure Computing

Issue 4 • Date July-Aug. 2011

  • [Front cover]

    Page(s): c1
    PDF (101 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Page(s): c2
    PDF (94 KB)
    Freely Available from IEEE
  • A New Diskless Checkpointing Approach for Multiple Processor Failures

    Page(s): 481 - 493
    PDF (1136 KB) | HTML

    Diskless checkpointing is an important technique for performing fault tolerance in distributed or parallel computing systems. This study proposes a new approach to enhance neighbor-based diskless checkpointing to tolerate multiple failures using simple checkpointing and failure recovery operations, without relying on dedicated checkpoint processors. In this scheme, each processor saves its checkpoints in a set of peer processors, called checkpoint storage nodes. In return, each processor uses simple XOR operations to store a collection of checkpoints for the processors for which it is a checkpoint storage node. This study defines the concept of safe recovery criterion, which specifies the requirement for ensuring that any failed processor can be recovered in a single step using the checkpoint data stored at one of the surviving processors, as long as no more than a given number of failures occur. This study further identifies the necessary and sufficient conditions for satisfying the safe recovery criterion and presents a method for designing checkpoint storage node sets that meet these requirements. The proposed scheme allows failure recovery to be performed in a distributed manner using XOR operations.

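    The XOR-based recovery idea behind this kind of diskless checkpointing can be illustrated in a few lines. This is a simplified sketch, not the paper's exact scheme: all names are hypothetical, a single parity block stands in for a storage node's state, and only one failure is recovered.

    ```python
    # Illustrative sketch of XOR-based diskless checkpointing: a storage node
    # keeps the XOR parity of the equal-length checkpoints it is responsible
    # for, so one missing checkpoint can be rebuilt in a single pass over the
    # surviving checkpoints.

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def make_parity(checkpoints):
        """Fold equal-length checkpoints into one XOR parity block."""
        parity = bytes(len(checkpoints[0]))
        for ckpt in checkpoints:
            parity = xor_bytes(parity, ckpt)
        return parity

    def recover(parity, surviving):
        """Rebuild the single missing checkpoint from parity and survivors."""
        missing = parity
        for ckpt in surviving:
            missing = xor_bytes(missing, ckpt)
        return missing

    ckpts = [b"proc0-state", b"proc1-state", b"proc2-state"]
    parity = make_parity(ckpts)
    # If processor 1 fails, its checkpoint is recovered in one step:
    assert recover(parity, [ckpts[0], ckpts[2]]) == b"proc1-state"
    ```

    Because XOR is its own inverse, recovery is the same fold as parity construction, which is why the paper can perform it in a distributed, single-step manner.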
  • Anomaly Detection in Network Traffic Based on Statistical Inference and alpha-Stable Modeling

    Page(s): 494 - 509
    Multimedia
    PDF (2785 KB) | HTML

    This paper proposes a novel method to detect anomalies in network traffic, based on a nonrestricted α-stable first-order model and statistical hypothesis testing. To this end, we give statistical evidence that the marginal distribution of real traffic is adequately modeled with α-stable functions and classify traffic patterns by means of a Generalized Likelihood Ratio Test (GLRT). The method automatically chooses the traffic windows used as a reference, against which the traffic window under test is compared, with no expert intervention needed. We focus on detecting two anomaly types, namely floods and flash crowds, which have been frequently studied in the literature. The performance of our detection method has been measured through Receiver Operating Characteristic (ROC) curves, and the results indicate that our method outperforms a closely related state-of-the-art contribution. All experiments use traffic data collected from two routers at our university (a 25,000-student institution), which provide two different levels of traffic aggregation for our tests (traffic at a particular school and at the whole university). In addition, the traffic model is tested with publicly available traffic traces. Due to the complexity of α-stable distributions, care has been taken in designing appropriate numerical algorithms to deal with the model.

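    The GLRT decision at the heart of the method can be shown with a toy example. As a hedge: the sketch below substitutes a Gaussian likelihood for the α-stable one (α-stable densities have no closed form, which is exactly why the paper needs careful numerical algorithms), so it illustrates only the shape of the test, not the paper's model; all data are invented.

    ```python
    import math

    def gaussian_loglik(xs, mu, sigma2):
        """Log-likelihood of samples under a Gaussian (stand-in density)."""
        return sum(-0.5 * math.log(2 * math.pi * sigma2)
                   - (x - mu) ** 2 / (2 * sigma2) for x in xs)

    def glrt_statistic(reference, window):
        # Null: reference and test window come from one distribution.
        # Alternative: each has its own parameters. Larger value => anomaly.
        def mle(xs):
            mu = sum(xs) / len(xs)
            return mu, sum((x - mu) ** 2 for x in xs) / len(xs)
        pooled = list(reference) + list(window)
        mu0, v0 = mle(pooled)
        mu1, v1 = mle(reference)
        mu2, v2 = mle(window)
        ll_null = gaussian_loglik(pooled, mu0, v0)
        ll_alt = (gaussian_loglik(reference, mu1, v1)
                  + gaussian_loglik(window, mu2, v2))
        return 2 * (ll_alt - ll_null)

    normal = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]   # packets/s, reference window
    flood  = [55.0, 60.0, 58.0, 57.5, 59.0, 61.0]  # hypothetical flood window
    quiet  = [10.1, 9.9, 10.4, 10.0, 9.7, 10.3]
    assert glrt_statistic(normal, flood) > glrt_statistic(normal, quiet)
    ```

    Thresholding this statistic yields the detector; sweeping the threshold traces out the ROC curves used in the paper's evaluation.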
  • Efficient Fault Detection and Diagnosis in Complex Software Systems with Information-Theoretic Monitoring

    Page(s): 510 - 522
    PDF (986 KB) | HTML

    Management metrics of complex software systems exhibit stable correlations which can enable fault detection and diagnosis. Current approaches use specific analytic forms, typically linear, for modeling correlations. In practice, more complex nonlinear relationships exist between metrics. Moreover, most intermetric correlations form clusters rather than simple pairwise correlations. These clusters provide additional information and offer the possibility for optimization. In this paper, we address these issues by using Normalized Mutual Information (NMI) as a similarity measure to identify clusters of correlated metrics, without assuming any specific form for the metric relationships. We show how to apply the Wilcoxon Rank-Sum test on the entropy measures to detect errors in the system. We also present three diagnosis algorithms to locate faulty components: RatioScore, based on the Jaccard coefficient, SigScore, which incorporates knowledge of component dependencies, and BayesianScore, which uses Bayesian inference to assign a fault probability to each component. We evaluate our approach in the context of a complex enterprise application, and show that 1) stable, nonlinear correlations exist and can be captured with our approach; 2) we can detect a large fraction of faults with a low false positive rate (we detect up to 18 of the 22 faults we injected); and 3) we improve the diagnosis with our new diagnosis algorithms.

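    NMI's appeal as a similarity measure is that it captures any dependence between two metrics, linear or not. A minimal sketch over discretized metric samples follows; the metric names, bin count, and data are illustrative assumptions, not the paper's setup.

    ```python
    # Normalized Mutual Information between two discretized metric series:
    # NMI = I(X;Y) / sqrt(H(X) * H(Y)), which is 1 for a deterministic
    # relationship (linear or not) and near 0 for independent metrics.
    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

    def mutual_information(xs, ys):
        n = len(xs)
        joint, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
        return sum(c / n * math.log((c / n) / (px[x] / n * py[y] / n))
                   for (x, y), c in joint.items())

    def nmi(xs, ys):
        hx, hy = entropy(xs), entropy(ys)
        if hx == 0 or hy == 0:
            return 0.0
        return mutual_information(xs, ys) / math.sqrt(hx * hy)

    def discretize(values, bins=4):
        lo, hi = min(values), max(values)
        width = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / width), bins - 1) for v in values]

    cpu   = [0.2, 0.4, 0.9, 0.3, 0.8, 0.1, 0.7, 0.5]  # hypothetical metric
    reqs  = [20, 40, 95, 28, 85, 12, 70, 50]           # tracks cpu nonlinearly
    noise = [5, 1, 3, 9, 2, 8, 4, 7]                   # unrelated metric
    assert nmi(discretize(cpu), discretize(reqs)) > nmi(discretize(cpu), discretize(noise))
    ```

    Metrics whose pairwise NMI stays high form the stable clusters the paper monitors; a drop in a cluster's in-cluster similarity is the symptom the Wilcoxon test then flags.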
  • ELMO: Energy Aware Local Monitoring in Sensor Networks

    Page(s): 523 - 536
    PDF (1391 KB) | HTML

    Over the past decade, local monitoring has been shown to be a powerful technique for improving security in multihop wireless sensor networks (WSNs). Indeed, local monitoring-based security algorithms are becoming the most popular tool for providing security in WSNs. However, local monitoring as it is currently practiced is costly in terms of energy consumption, a major drawback for energy-constrained systems such as WSNs. In WSN environments, the scarce power resources are typically addressed through sleep-wake scheduling of the nodes. However, sleep-wake scheduling techniques in WSNs are vulnerable even to simple attacks. In this paper, a new technique is proposed that promises to allow operation of WSNs in a manner that is both energy-efficient and secure. The proposed technique combines local monitoring with a novel, more secure form of sleep-wake scheduling. The latter is a new methodology dubbed Elmo (Energy Aware Local MOnitoring in Sensor Networks), which enables sleep-wake management in a secure manner even in the face of adversarial nodes that choose not to awaken nodes responsible for monitoring their traffic. An analytical proof is given showing that security coverage is not weakened under Elmo. Moreover, ns-2 simulation results show that the performance of local monitoring is practically unchanged, while energy savings of 20 to 100 times are achieved, depending on the scenario.

  • Modeling and Mitigating Transient Errors in Logic Circuits

    Page(s): 537 - 547
    PDF (1373 KB) | HTML

    Transient or soft errors caused by various environmental effects are a growing concern in micro and nanoelectronics. We present a general framework for modeling and mitigating the logical effects of such errors in digital circuits. We observe that some errors have time-bounded effects; the system's output is corrupted for a few clock cycles, after which it recovers automatically. Since such erroneous behavior can be tolerated by some applications, i.e., it is noncritical at the system level, we define the critical soft error rate (CSER) as a more realistic alternative to the conventional SER measure. A simplified technology-independent fault model, the single transient fault (STF), is proposed for efficiently estimating the error probabilities associated with individual nodes in both combinational and sequential logic. STFs can be used to compute various other useful metrics for the faults and errors of interest, and the required computations can leverage the large body of existing methods and tools designed for (permanent) stuck-at faults. As an application of the proposed methodology, we introduce a systematic strategy for hardening logic circuits against transient faults. The goal is to achieve a desired level of CSER at minimum cost by selecting a subset of nodes for hardening against STFs. Exact and approximate algorithms to solve the node selection problem are presented. The effectiveness of this approach is demonstrated by experiments with the ISCAS-85 and -89 benchmark suites, as well as some large (multimillion-gate) industrial circuits.

  • On Ultralightweight RFID Authentication Protocols

    Page(s): 548 - 563
    PDF (1781 KB) | HTML

    A recent research trend, motivated by the massive deployment of RFID technology, looks at cryptographic protocols for securing communication between entities in which some of the parties have very limited computing capabilities. In this paper, we focus our attention on SASI, a new RFID authentication protocol designed for providing Strong Authentication and Strong Integrity. SASI is a good representative of a family of RFID authentication protocols referred to as Ultralightweight RFID authentication protocols. These protocols, suitable for passive Tags with limited computational power and storage, involve simple bitwise operations such as and, or, exclusive or, modular addition, and cyclic shift. They are efficient, fit the hardware constraints, and can be seen as an example of the above research trend. However, the main concern is the real security of these protocols, which are often supported only by apparently reasonable and intuitive arguments. The contribution we provide with this work is the following: we start by showing some weaknesses in the SASI protocol, and then we describe how such weaknesses, through a sequence of simple steps, can be used to compute in an efficient way all the secret data used for the authentication process. Specifically, we describe three attacks: 1) a desynchronization attack, through which an adversary can break the synchronization between the RFID Reader and the Tag; 2) an identity disclosure attack, through which an adversary can compute the identity of the Tag; and 3) a full disclosure attack, which enables an adversary to retrieve all secret data stored in the Tag. Then, we present some experimental results, obtained by running several tests on an implementation of the protocol, in order to evaluate the performance of the proposed attacks; these confirm that the attacks are effective and efficient. It turns out that an active adversary, by interacting with a Tag roughly three hundred times, renders the authentication protocol completely useless. Finally, we close the paper with some observations. The cryptanalysis of SASI sheds new light on the ultralightweight approach and can also serve as a warning to researchers working in the field who are tempted to apply these techniques. Indeed, the results of this work raise serious questions regarding the limits of the ultralightweight family of protocols and the benefits of these ad hoc protocol design strategies and informal security analyses.

  • Prime: Byzantine Replication under Attack

    Page(s): 564 - 577
    Multimedia
    PDF (682 KB) | HTML

    Existing Byzantine-resilient replication protocols satisfy two standard correctness criteria, safety and liveness, even in the presence of Byzantine faults. The runtime performance of these protocols is most commonly assessed in the absence of processor faults and is usually good in that case. However, faulty processors can significantly degrade the performance of some protocols, limiting their practical utility in adversarial environments. This paper demonstrates the extent of performance degradation possible in some existing protocols that do satisfy liveness and that do perform well absent Byzantine faults. We propose a new performance-oriented correctness criterion that requires a consistent level of performance, even with Byzantine faults. We present a new Byzantine fault-tolerant replication protocol that meets the new correctness criterion and evaluate its performance in fault-free executions and when under attack.

  • Privacy-Preserving Updates to Anonymous and Confidential Databases

    Page(s): 578 - 587
    PDF (737 KB) | HTML

    Suppose Alice owns a k-anonymous database and needs to determine whether her database, after insertion of a tuple owned by Bob, is still k-anonymous. Also, suppose that access to the database is strictly controlled, because, for example, the data are used for experiments that must be kept confidential. Clearly, allowing Alice to directly read the contents of the tuple breaks Bob's privacy (e.g., a patient's medical record); on the other hand, the confidentiality of the database managed by Alice is violated once Bob has access to its contents. Thus, the problem is to check whether the database is still k-anonymous after the tuple is inserted, without letting Alice learn the contents of the tuple or Bob learn the contents of the database. In this paper, we propose two protocols solving this problem on suppression-based and generalization-based k-anonymous and confidential databases. The protocols rely on well-known cryptographic assumptions, and we provide theoretical analyses to prove their soundness and experimental results to illustrate their efficiency.

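    As a plaintext baseline for what the paper's protocols compute privately, the k-anonymity check itself is simple: every quasi-identifier combination must occur at least k times. The table, quasi-identifiers, and generalized values below are invented for illustration; the point of the paper is doing this check without either party seeing the other's data.

    ```python
    from collections import Counter

    def is_k_anonymous(rows, quasi_ids, k):
        """True iff every quasi-identifier combination appears in >= k rows."""
        groups = Counter(tuple(row[a] for a in quasi_ids) for row in rows)
        return all(count >= k for count in groups.values())

    def insertion_preserves_k(rows, quasi_ids, k, new_row):
        # Non-private baseline of the decision the protocols reach obliviously:
        # would the database still be k-anonymous after inserting Bob's tuple?
        return is_k_anonymous(rows + [new_row], quasi_ids, k)

    db = [
        {"zip": "479**", "age": "2*", "disease": "flu"},
        {"zip": "479**", "age": "2*", "disease": "cold"},
        {"zip": "478**", "age": "3*", "disease": "flu"},
        {"zip": "478**", "age": "3*", "disease": "asthma"},
    ]
    qi = ("zip", "age")
    assert is_k_anonymous(db, qi, k=2)
    # A tuple matching an existing group keeps the database 2-anonymous...
    assert insertion_preserves_k(db, qi, 2, {"zip": "479**", "age": "2*", "disease": "flu"})
    # ...but a tuple with a fresh quasi-identifier combination breaks it.
    assert not insertion_preserves_k(db, qi, 2, {"zip": "477**", "age": "4*", "disease": "flu"})
    ```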
  • Runtime Defense against Code Injection Attacks Using Replicated Execution

    Page(s): 588 - 601
    PDF (728 KB) | HTML

    The number and complexity of attacks on computer systems are increasing. This growth necessitates proper defense mechanisms. Intrusion detection systems play an important role in detecting and disrupting attacks before they can compromise software. Multivariant execution is an intrusion detection mechanism that executes several slightly different versions, called variants, of the same program in lockstep. The variants are built to have identical behavior under normal execution conditions. However, when the variants are under attack, there are detectable differences in their execution behavior. At runtime, a monitor compares the behavior of the variants at certain synchronization points and raises an alarm when a discrepancy is detected. We present a monitoring mechanism that does not need any kernel privileges to supervise the variants. Many sources of inconsistencies, including asynchronous signals and scheduling of multithreaded or multiprocess applications, can cause divergence in behavior of variants. These divergences cause false alarms. We provide solutions to remove these false alarms. Our experiments show that the multivariant execution technique is effective in detecting and preventing code injection attacks. The empirical results demonstrate that dual-variant execution has on average 17 percent performance overhead when deployed on multicore processors.

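    The lockstep comparison a multivariant monitor performs can be mimicked with generators standing in for variants. This is a toy sketch: real monitors compare system calls and their arguments at synchronization points, and every name and event below is hypothetical.

    ```python
    # Toy multivariant lockstep monitor: two "variants" of the same program
    # emit events at synchronization points; the monitor raises an alarm at
    # the first divergence, which signals a successful code-injection attempt
    # against one variant.

    def monitor(variant_a, variant_b, request):
        for ev_a, ev_b in zip(variant_a(request), variant_b(request)):
            if ev_a != ev_b:
                return f"alarm: {ev_a!r} != {ev_b!r}"
        return "ok"

    def variant_normal(req):
        yield ("open", "/var/data")
        yield ("write", len(req))

    def variant_compromised(req):
        # An injected payload diverges at the second sync point.
        yield ("open", "/var/data")
        yield ("exec", "/bin/sh")

    assert monitor(variant_normal, variant_normal, "GET /") == "ok"
    assert monitor(variant_normal, variant_compromised, "GET /").startswith("alarm")
    ```

    Because the variants are constructed so that a single injected payload cannot behave identically in both, the attack itself produces the divergence the monitor detects.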
  • Self-Healing Control Flow Protection in Sensor Applications

    Page(s): 602 - 616
    Multimedia
    PDF (1270 KB) | HTML

    Since sensors do not have a sophisticated hardware architecture or an operating system to manage code for safety, attacks injecting code to exploit memory-related vulnerabilities can present threats to sensor applications. In a sensor's simple memory architecture, injected code can alter the control flow of a sensor application to either misuse existing routines or download other malicious code to achieve attacks. To protect the control flow, this paper proposes a self-healing scheme that can stop attacks from exploiting the control flow and then recover sensor applications to normal operations with minimum overhead. The self-healing scheme embeds diversified protection code at particular locations to enforce access control in code memory. Both the access control code and the recovery code are designed to be resilient to control flow attacks that attempt to evade the protection. Furthermore, the self-healing scheme directly processes application code at the machine instruction level, instead of performing control or data analysis on source code. The implementation and evaluation show that the self-healing scheme is lightweight in protecting sensor applications.

  • Dynamics of Malware Spread in Decentralized Peer-to-Peer Networks

    Page(s): 617 - 623
    PDF (368 KB) | HTML

    In this paper, we formulate an analytical model to characterize the spread of malware in decentralized, Gnutella-type peer-to-peer (P2P) networks and study the dynamics associated with the spread of malware. Using a compartmental model, we derive the system parameters or network conditions under which the P2P network may reach a malware-free equilibrium. The model also evaluates the effect of control strategies such as node quarantine on stifling the spread of malware. The model is then extended to consider the impact of P2P networks on malware spread in networks of smart cell phones.

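    The flavor of such a compartmental model can be sketched with a basic susceptible-infected-recovered (SIR) simulation. The paper's model is richer (P2P topology, quarantine, cell-phone extensions), and the rates below are illustrative assumptions, but the malware-free-equilibrium dichotomy already appears in this minimal form.

    ```python
    # Minimal SIR compartmental model of malware spread, integrated with
    # forward-Euler steps. Fractions s, i, r sum to (roughly) 1.

    def simulate(beta, gamma, s0, i0, steps, dt=0.01):
        s, i, r = s0, i0, 0.0
        for _ in range(steps):
            new_infections = beta * s * i * dt   # contact-driven spread
            new_recoveries = gamma * i * dt      # disinfection/patching
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
        return s, i, r

    # The reproduction number R0 = beta/gamma (for s0 ~ 1) decides whether
    # the network reaches a malware-free equilibrium (R0 < 1) or suffers an
    # outbreak (R0 > 1).
    s1, i1, r1 = simulate(beta=0.1, gamma=0.5, s0=1.0, i0=0.01, steps=2000)
    s2, i2, r2 = simulate(beta=2.0, gamma=0.5, s0=1.0, i0=0.01, steps=2000)
    assert r1 < 0.05 and r2 > 0.5  # subcritical dies out; supercritical spreads
    ```

    Quarantine-style controls enter such models as terms that reduce the effective contact rate beta, pushing the system below the epidemic threshold.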
  • Stay Connected with the IEEE Computer Society [advertisement]

    Page(s): 624
    PDF (292 KB)
    Freely Available from IEEE
  • TDSC Information for authors

    Page(s): c3
    PDF (94 KB)
    Freely Available from IEEE
  • [Back cover]

    Page(s): c4
    PDF (101 KB)
    Freely Available from IEEE

Aims & Scope

The purpose of TDSC is to publish papers in dependability and security, including the joint consideration of these issues and their interplay with system performance.

Meet Our Editors

Editor-in-Chief
Elisa Bertino
CS Department
Purdue University