IEEE Transactions on Reliability

Issue 1 • March 2014

Displaying Results 1 - 25 of 34
  • Table of contents

    Page(s): C1 - 1
  • IEEE Transactions on Reliability publication information

    Page(s): C2
  • Inferences on the Competing Risk Reliability Problem for Exponential Distribution Based on Fuzzy Data

    Page(s): 2 - 12

    The problem of estimating the reliability parameter originated in the context of reliability, where X represents a strength subjected to a stress Y. Traditionally, it is assumed that the available data from the stress and strength populations are recorded as exact numbers. However, some collected data might be imprecise and are represented in the form of fuzzy numbers. In this paper, we consider the estimation of the stress-strength parameter R when X and Y are statistically independent exponential random variables, and the data obtained from both distributions are reported in the form of fuzzy numbers. We consider both classical and Bayesian approaches. In the Bayesian setting, we obtain the estimate of R by using the approximation forms of Lindley, and Tierney & Kadane, as well as a Markov chain Monte Carlo method, under the assumption of statistically independent gamma priors. The estimation procedures are discussed in detail and compared via Monte Carlo simulations in terms of their average values and mean squared errors.
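
    As a concrete point of reference for the classical, crisp-data setting described above, the sketch below uses the textbook result that R = P(Y < X) = λ_Y/(λ_X + λ_Y) for independent exponential X and Y, together with its maximum-likelihood estimate; it does not implement the paper's fuzzy-data or Bayesian procedures, and the rates and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed true rates of the strength X and the stress Y (illustrative values)
lam_x, lam_y = 0.5, 1.5
R_true = lam_y / (lam_x + lam_y)            # P(Y < X) for independent exponentials

# crisp (non-fuzzy) samples from the strength and stress populations
x = rng.exponential(1.0 / lam_x, size=50)
y = rng.exponential(1.0 / lam_y, size=50)

# maximum-likelihood estimate of R in the crisp case: mean(x) / (mean(x) + mean(y))
R_hat = x.mean() / (x.mean() + y.mean())

# Monte Carlo check of P(Y < X)
R_mc = (rng.exponential(1.0 / lam_y, 10**6) < rng.exponential(1.0 / lam_x, 10**6)).mean()
print(R_true, round(R_hat, 3), round(R_mc, 3))
```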

  • Random Fuzzy Extension of the Universal Generating Function Approach for the Reliability Assessment of Multi-State Systems Under Aleatory and Epistemic Uncertainties

    Page(s): 13 - 25

    Many engineering systems can perform their intended tasks with various levels of performance, which are modeled as multi-state systems (MSS) for system availability and reliability assessment problems. Uncertainty is an unavoidable factor in MSS modeling, and it must be effectively handled. In this work, we extend the traditional universal generating function (UGF) approach for MSS availability and reliability assessment to account for both aleatory and epistemic uncertainties. First, a theoretical extension, named hybrid UGF (HUGF), is made to introduce the use of random fuzzy variables (RFVs) in the approach. Second, the composition operator of HUGF is defined by considering simultaneously the probabilistic convolution and the fuzzy extension principle. Finally, an efficient algorithm is designed to extract probability boxes (p-boxes) from the system HUGF, which allow quantifying different levels of imprecision in system availability and reliability estimation. The HUGF approach is demonstrated with a numerical example, and applied to study a distributed generation system, with a comparison to the widely used Monte Carlo simulation method.
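
    A minimal sketch of the classical, purely probabilistic UGF composition that the paper extends: each element is represented as a dictionary mapping performance levels to probabilities, and composition operators combine them pairwise. The hybrid random-fuzzy machinery (RFVs, p-boxes) is not reproduced here, and the performance levels, probabilities, and demand are hypothetical.

```python
from itertools import product

def compose(u1, u2, op):
    """Combine two UGFs, each a dict {performance level: probability}, with operator op."""
    out = {}
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        g = op(g1, g2)
        out[g] = out.get(g, 0.0) + p1 * p2
    return out

# hypothetical two-state elements: {performance: probability}
u1 = {0: 0.10, 5: 0.90}
u2 = {0: 0.20, 5: 0.80}
u3 = {0: 0.05, 10: 0.95}

parallel = compose(u1, u2, lambda a, b: a + b)   # capacities add in parallel
system = compose(parallel, u3, min)              # flow limited by the series element

demand = 5
availability = sum(p for g, p in system.items() if g >= demand)
print(system)
print("P(performance >= demand) =", round(availability, 4))
```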

  • A Profust Reliability Based Approach to Prognostics and Health Management

    Page(s): 26 - 41

    Prognostics and health management (PHM) technology has been widely accepted, and employed to evaluate system performance. In practice, system performance often varies continually rather than just being functional or failed, especially for a complex system. Profust reliability theory extends the traditional binary state space {0, 1} into a fuzzy state space [0, 1], which is therefore suitable to characterize a gradual physical degradation. Moreover, in profust reliability theory, fuzzy state transitions can also help to describe the health evolution of a component or a system. Accordingly, this paper proposes a profust reliability based PHM approach, where the profust reliability is employed as a health indicator to evaluate the real-time system performance. On the basis of the health estimation, the system remaining useful life (RUL) is further defined, and the mean RUL estimate is predicted by using a degraded Markov model. Finally, an experimental case study of Li-ion batteries is presented to demonstrate the effectiveness of the proposed approach.
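
    To make the notion concrete, the sketch below takes a common reading of profust reliability as the expected membership of the system in a fuzzy 'functioning' state, evaluated over simulated capacity-fade trajectories; the degradation model, membership function, and all numbers are invented, and the paper's fuzzy state transitions and Markov-model RUL prediction are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def success_membership(capacity, c_fail=0.7, c_nom=1.0):
    """Fuzzy membership of the 'functioning' state: 0 at/below c_fail, 1 at c_nom."""
    return np.clip((capacity - c_fail) / (c_nom - c_fail), 0.0, 1.0)

# invented capacity-fade model: linear fade with unit-to-unit and measurement noise
n_paths, n_cycles = 2000, 300
fade_rate = 0.001 + 0.0002 * rng.standard_normal(n_paths)
cycles = np.arange(n_cycles)
capacity = 1.0 - np.outer(fade_rate, cycles) + 0.005 * rng.standard_normal((n_paths, n_cycles))

# profust reliability read as the expected fuzzy-success membership at each cycle
profust_R = success_membership(capacity).mean(axis=0)
print(np.round(profust_R[::50], 3))
```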

  • Fuzzy Importance Measures for Ranking Key Interdependent Sectors Under Uncertainty

    Page(s): 42 - 57

    In the field of reliability engineering, several approaches have been developed to identify those components that are important to the operation of the larger interconnected system. We extend the concept of component importance measures to the study of industry criticality in a larger system of economically interdependent industry sectors that are perturbed when underlying infrastructures are disrupted. We provide measures of (i) those industries that are most vulnerable to disruptions, and (ii) those industries that are most influential in causing interdependent disruptions. However, difficulties arise in the identification of critical industries when uncertainties exist in describing the relationships among sectors. This work adopts fuzzy measures to develop criticality indices, and we offer an approach to rank industries according to these fuzzy indices. Much as knowledge of the most critical components guides decisions about a physical system, the identification of these critical industries provides decision makers with priorities for allocating resources. We illustrate our approach with an interdependency model driven by US Bureau of Economic Analysis data to describe industry interconnectedness.

  • Bayesian Analysis for Accelerated Life Tests Using a Dirichlet Process Weibull Mixture Model

    Page(s): 58 - 67

    This study proposes a semiparametric Bayesian approach to accelerated life testing (ALT). The proposed accelerated life test model assumes a log-linear lifetime-stress relationship, without making any assumption on the parametric form of the failure-time distribution. A Dirichlet process mixture model with a Weibull kernel is employed to model the failure-time distribution at a given stress level. A simulation-based model fitting algorithm that implements Gibbs sampling is developed to analyze right-censored ALT data, and to predict the failure-time distribution at the normal stress level. The proposed model and algorithm are applied to two practical examples related to the reliability of nanoelectronic devices. The results demonstrate that the proposed methodology is capable of providing accurate prediction of the failure-time distribution at the normal stress level without assuming any restrictive parametric failure-time distribution.

  • A New Discrete Modified Weibull Distribution

    Page(s): 68 - 80

    A three-parameter discrete distribution is introduced based on a recent modification of the continuous Weibull distribution. It is one of only three discrete distributions allowing for bathtub-shaped hazard rate functions. We study some of its mathematical properties, discuss estimation by the method of maximum likelihood, and describe applications to four real data sets. The new distribution is shown to outperform at least three other models, including those allowing for bathtub-shaped hazard rate functions.
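
    The sketch below illustrates the standard way a discrete lifetime distribution is built from a continuous one, p(k) = S(k) − S(k+1). The survival function S(t) = exp(−a·t^b·e^{ct}) is used as a plausible stand-in for the modified Weibull, since the abstract does not give the exact parameterization, and the parameter values are illustrative.

```python
import math

def survival(t, a=0.01, b=0.7, c=0.05):
    """Assumed continuous modified-Weibull survival function S(t) = exp(-a * t**b * exp(c*t))."""
    return 1.0 if t == 0 else math.exp(-a * t**b * math.exp(c * t))

def pmf(k, **params):
    """Discrete analogue via p(k) = S(k) - S(k + 1), k = 0, 1, 2, ..."""
    return survival(k, **params) - survival(k + 1, **params)

def hazard(k, **params):
    """Discrete hazard rate h(k) = p(k) / S(k)."""
    return pmf(k, **params) / survival(k, **params)

# with b < 1 the hazard first decreases and later increases (bathtub-like shape)
print([round(hazard(k), 4) for k in range(0, 60, 5)])
```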

  • A Multiple-Valued Decision-Diagram-Based Approach to Solve Dynamic Fault Trees

    Page(s): 81 - 93

    Dynamic fault trees (DFTs) have been used for many years because they can easily provide a concise representation of the dynamic failure behaviors of general non-repairable fault-tolerant systems. However, when repeated failure events appear in real-life DFT models, the traditional modularization-based DFT analysis process can still generate large dynamic subtrees, the modeling of which can lead to a state explosion problem. Examples of these kinds of large dynamic subtrees abound in models of real-world dynamic software and embedded computing systems that integrate various multi-function components. This paper proposes an efficient, multiple-valued decision-diagram (MDD)-based DFT analysis approach for computing the reliability of large dynamic subtrees. Unlike the traditional modularization methods, where the whole dynamic subtree must be solved using state-space methods, the proposed approach restricts the state-space method to only those components associated with dynamic failure behaviors within the dynamic subtree. By using multiple-valued variables to encode the dynamic gates, a single compact MDD can be generated to model the failure behavior of the overall system. The combination of MDD and state-space methods applied at the component or gate level helps relieve the state explosion problem of the traditional modularization method, for the problems we explore. Applications and advantages of the proposed approach are illustrated through detailed analyses of an example DFT, and through two case studies.

  • Computational Methods for Reliability and Importance Measures of Weighted-Consecutive-System

    Page(s): 94 - 104

    We consider two reliability systems, namely the weighted-consecutive-k-out-of-n:F system (Cω(k,n:F)) and the weighted-m-consecutive-k-out-of-n:F system (Cmω(k,n:F)). The weighted systems have, in general, n components, each having a positive integer weight ωi, i = 1, 2, ..., n, such that the total weight of all components of the system is ω = Σ_{i=1}^{n} ωi. The Cω(k,n:F) system fails iff the total weight of consecutive failed components is at least k. We propose Cmω(k,n:F) as a generalization of Cω(k,n:F) which fails iff there are at least m non-overlapping groups of consecutive failed components with a total weight of at least k. Here we study the reliability, Birnbaum reliability importance, and improvement potential importance of a weighted-consecutive system based on the distribution of the failure run statistic in the sequence of weighted Bernoulli trials. We develop a simplified, efficient formula for the evaluation of reliability and importance measures of the systems under consideration, and demonstrate the results numerically.
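
    The structure function of the Cω(k,n:F) system defined above is simple to state in code: the system fails iff some run of consecutive failed components accumulates weight at least k. The brute-force Monte Carlo estimate below only makes the definition concrete; the paper's efficient run-statistic formulas and importance measures are not implemented, and the weights and reliabilities are illustrative.

```python
import numpy as np

def system_fails(states, weights, k):
    """C_w(k,n:F) structure: the system fails iff some run of consecutive failed
    components (state False) has total weight >= k."""
    run_weight = 0
    for working, w in zip(states, weights):
        run_weight = 0 if working else run_weight + w
        if run_weight >= k:
            return True
    return False

rng = np.random.default_rng(2)
weights = [2, 1, 3, 1, 2, 2]                        # illustrative component weights
p = np.array([0.90, 0.95, 0.85, 0.90, 0.92, 0.88])  # illustrative component reliabilities
k, n_sim = 4, 200_000

states = rng.random((n_sim, len(weights))) < p      # True = component works
reliability = 1.0 - np.mean([system_fails(row, weights, k) for row in states])
print(round(reliability, 4))
```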

  • Component Importance for Multi-State System Lifetimes With Renewal Functions

    Page(s): 105 - 117

    Importance measures are widely used to characterize the roles of components in systems. The system lifetime can be divided into different life stages. Traditionally, importance measures do not consider the possible effect of the expected number of component failures over a system's lifetime and over different life stages, which, however, has a great effect on changes in system performance and should therefore be taken into consideration. This paper extends the integrated importance measure (IIM) from unit time to the system lifetime, and to different life stages. Based on the renewal functions of components, this measure can evaluate the changes of system performance due to component failures. This generalization of the IIM describes which component is the most important for improving the performance of the system during the system lifetime and at different life stages. An application example involving an oil transportation system is presented to illustrate the use of the generalized IIM.

  • How and When to Deploy Error Prone Sensors in Support of the Maintenance of Two-Phase Systems With Ageing

    Page(s): 118 - 133

    We consider the deployment of a sensor alongside a programme of planned maintenance interventions to enhance the reliability of two-phase systems. Such systems operate fault free until they enter a worn state, which is a precursor to failure. The sensor is designed to report transitions into the worn state, but does so with error. The sensor can fail to report a transition when it occurs (false negative), and can report one when none has taken place (false positive). Key goals of our analyses are (i) the design of simple, cost-effective schedules for the inspection, repair, and renewal of such systems, for use alongside the sensor; and (ii) the determination of the range of sensor operating characteristics for which the deployment of the sensor is cost beneficial. The latter is achieved via the computation of cost indifference curves, which identify sensor operating characteristics for which we are indifferent as to whether the sensor is deployed or not.

  • Predictive Maintenance by Risk Sensitive Particle Filtering

    Page(s): 134 - 143

    Predictive Maintenance (PrM) exploits the estimation of the equipment Residual Useful Life (RUL) to identify the optimal time for carrying out the next maintenance action. Particle Filtering (PF) is widely used as a prognostic tool in support of PrM because of its capability of robustly estimating the equipment RUL without requiring strict modeling hypotheses. However, a precise PF estimate of the RUL requires tracing a large number of particles, and thus large computational times, often incompatible with the need to rapidly process information for making maintenance decisions in due time. This work considers two different Risk Sensitive Particle Filtering (RSPF) schemes proposed in the literature, and investigates their potential for PrM. The computational burden problem of PF is addressed. The effectiveness of the two algorithms is analyzed on a case study concerning a mechanical component affected by fatigue degradation.
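
    For orientation, the sketch below is a plain bootstrap (sampling-importance-resampling) particle filter applied to an invented linear degradation model, followed by an RUL estimate obtained by propagating the particles to a failure threshold. It is not either of the risk-sensitive schemes studied in the paper, and all model parameters are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# invented degradation model: x_t = x_{t-1} + drift + process noise; y_t = x_t + meas. noise
drift, q, r, threshold, T = 0.02, 0.005, 0.02, 1.0, 30
x_true = np.cumsum(drift + q * rng.standard_normal(T))
y = x_true + r * rng.standard_normal(T)

# bootstrap (sampling-importance-resampling) particle filter
N = 5000
particles = np.zeros(N)
for t in range(T):
    particles = particles + drift + q * rng.standard_normal(N)   # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)              # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]             # resample

# RUL estimate: propagate each particle forward until it crosses the failure threshold
rul = np.zeros(N)
state = particles.copy()
for step in range(1, 1000):
    state = state + drift + q * rng.standard_normal(N)
    rul[(state >= threshold) & (rul == 0)] = step
    if np.all(rul > 0):
        break
print("mean RUL estimate:", round(rul[rul > 0].mean(), 1))
```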

  • Effects of Intermittent Faults on the Reliability of a Reduced Instruction Set Computing (RISC) Microprocessor

    Page(s): 144 - 153

    With the scaling of complementary metal-oxide-semiconductor (CMOS) technology into the submicron range, designers have to deal with a growing number and variety of fault types. In particular, intermittent faults are gaining importance in modern very large scale integration (VLSI) circuits. The presence of these faults is increasing due to the complexity of manufacturing processes (which produce residues and parameter variations), together with special aging mechanisms. This work presents a case study of the impact of intermittent faults on the behavior of a reduced instruction set computing (RISC) microprocessor. We have carried out an exhaustive reliability assessment by using very-high-speed-integrated-circuit hardware description language (VHDL)-based fault injection. In this way, we have been able to modify different intermittent fault parameters, to select various targets, and even to compare the impact of intermittent faults with those induced by transient and permanent faults.

  • A New Efficient Approach to Search for All Multi-State Minimal Cuts

    Page(s): 154 - 166

    There are several exact or approximating approaches that apply d-MinCuts (d-MCs) to compute multistate two-terminal reliability. Searching for and determining d-MCs in a stochastic-flow network are important for computing system reliability. Here, by investigating the existing methods and using our new results, an efficient algorithm is proposed to find all the d-MCs. The complexity of the new algorithm illustrates its efficiency in comparison with other existing algorithms. Two examples are worked out to show how the algorithm determines all the d-MCs in a network flow with unreliable nodes, and in a network flow of moderate size. Moreover, using the d-MCs found by the algorithm, the system reliability of a sample network is computed by the inclusion-exclusion method. Finally, to illustrate the efficacy of the new techniques, computational results on random test problems are provided in the sense of the performance profile introduced by Dolan and Moré.

  • Estimating Remaining Useful Life With Three-Source Variability in Degradation Modeling

    Page(s): 167 - 190

    The observed degradation data of a system can be used to estimate its remaining useful life (RUL). However, the degradation progression of the system is typically stochastic, and thus the RUL is also a random variable, making it difficult to estimate the RUL with certainty. In general, there are three sources of variability contributing to the uncertainty of the estimated RUL: 1) temporal variability, 2) unit-to-unit variability, and 3) measurement variability. In this paper, we present a relatively general degradation model based on a Wiener process, in which these three sources of variability are characterized simultaneously so that their effects are incorporated into the RUL estimate. By constructing a state-space model, the posterior distributions of the underlying degradation state and the random-effect parameter, which are correlated, are estimated by employing the Kalman filtering technique. Further, analytical forms of not only the probability distribution but also the mean and variance of the estimated RUL are derived, and can be updated in real time as new degradation observations arrive. We also investigate the identifiability problem in parameter estimation for the presented model, and establish the corresponding results. To verify the presented approach, a case study of gyros in an inertial platform is provided, and the results indicate that considering the three sources of variability can improve the model fitting and the accuracy of the estimated RUL.
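
    As background for the Wiener-process setting, the sketch below evaluates the classical inverse Gaussian first-passage form of the RUL density for a Wiener process with drift, which captures only temporal variability; the unit-to-unit and measurement variability, Kalman-filter updating, and identifiability results of the paper are not reproduced, and the numbers are illustrative.

```python
import numpy as np

def rul_pdf(l, x_now, w, lam, sigma):
    """First-passage-time (inverse Gaussian) density of x_now + lam*t + sigma*B(t)
    hitting the threshold w, evaluated at remaining life l > 0."""
    d = w - x_now
    return d / (sigma * np.sqrt(2.0 * np.pi * l**3)) * np.exp(-(d - lam * l)**2 / (2.0 * sigma**2 * l))

# illustrative numbers: current degradation level 0.6, failure threshold 1.0
x_now, w, lam, sigma = 0.6, 1.0, 0.02, 0.05
l = np.linspace(0.1, 80.0, 800)
pdf = rul_pdf(l, x_now, w, lam, sigma)

mean_rul = np.sum(l * pdf) * (l[1] - l[0])   # numerical mean; analytically (w - x_now)/lam = 20
print(round(mean_rul, 2))
```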

  • Review of Hybrid Prognostics Approaches for Remaining Useful Life Prediction of Engineered Systems, and an Application to Battery Life Prediction

    Page(s): 191 - 207

    Prognostics focuses on predicting the future performance of a system, specifically the time at which the system no longer performs its desired functionality, that is, its time to failure. As an important aspect of prognostics, remaining useful life (RUL) prediction estimates the remaining usable life of a system, which is essential for maintenance decision making and contingency mitigation. A significant amount of research has been reported in the literature on developing prognostics models that are able to predict a system's RUL. These models can be broadly categorized into experience-based models, data-driven models, and physics-based models. However, due to system complexity, data availability, and application constraints, there is no universally accepted best model for estimating RUL. The review part of this paper focuses on the development of hybrid prognostics approaches, which attempt to leverage the advantages of combining prognostics models from the aforementioned categories for RUL prediction. The hybrid approaches reported in the literature are systematically classified by the combination and interfaces of the various types of prognostics models. In the case study part, a hybrid prognostics method is proposed and applied to a battery degradation case to show the potential benefit of the hybrid prognostics approach.

  • An Additive Wiener Process-Based Prognostic Model for Hybrid Deteriorating Systems

    Page(s): 208 - 222

    Hybrid deteriorating systems, which are made up of both linear and nonlinear degradation parts, are often encountered in engineering practice; examples include gyroscopes, which are frequently utilized in ships, aircraft, and weapon systems. However, little literature can be found addressing degradation modeling for systems of this type. This paper proposes a general degradation modeling framework for hybrid deteriorating systems by employing an additive Wiener process model that consists of a linear degradation part and a nonlinear part. Furthermore, we derive an approximate analytical solution for the remaining useful life distribution under the presented model. For a specific system in service, the posterior estimates of the stochastic parameters in the model are updated recursively from the condition monitoring observations in a Bayesian framework, taking into account that the stochastic parameters in the linear and nonlinear deteriorating parts are correlated. Thereafter, the posterior distribution of the stochastic parameters is used to update in real time the distribution of the remaining useful life, in which the uncertainties in the estimated stochastic parameters are incorporated. Finally, a numerical example and a practical case study are provided to verify the effectiveness of the proposed method. Compared with two existing methods in the literature, our degradation modeling method increases the one-step prediction accuracy slightly in terms of mean squared error, but gains significant improvements in the estimated remaining useful life.

  • Allocation Policies of Redundancies in Two-Parallel-Series and Two-Series-Parallel Systems

    Page(s): 223 - 229

    In this paper, comparisons of allocation policies of components in two-parallel-series systems with two types of components are provided with respect to both the hazard rate and the reversed hazard rate orders. The main results indicate that the lifetime of these kinds of systems is stochastically maximized by unbalancing the two classes of components as much as possible. We only assume that the two distributions involved in the model have proportional hazard rates. The same type of comparisons are also given for the dual model, the two-series-parallel system, but assuming that the distributions involved in the model have proportional reversed hazard rates; the final conclusion is therefore the opposite, that is, the reliability of the system improves as the similarity between the two parallel subsystems increases.
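
    A quick simulation illustrates the direction of the main result in the simplest proportional-hazards case (exponential lifetimes): with two components of each type arranged in two series lines connected in parallel, the unbalanced allocation survives longer at every evaluation time. This is only a numerical illustration under assumed rates, not the paper's proof in terms of hazard rate and reversed hazard rate orders.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
rate1, rate2 = 1.0, 3.0   # two component types; exponentials => proportional hazards

t1 = rng.exponential(1.0 / rate1, size=(n, 2))   # two type-1 lifetimes per replication
t2 = rng.exponential(1.0 / rate2, size=(n, 2))   # two type-2 lifetimes per replication

# unbalanced allocation: series line A gets both type-1 components, line B both type-2
unbalanced = np.maximum(t1.min(axis=1), t2.min(axis=1))
# balanced allocation: each series line mixes one type-1 with one type-2 component
balanced = np.maximum(np.minimum(t1[:, 0], t2[:, 0]), np.minimum(t1[:, 1], t2[:, 1]))

for t in (0.2, 0.5, 1.0):
    print(t, round((unbalanced > t).mean(), 4), round((balanced > t).mean(), 4))
```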

  • Minimizing Bypass Transportation Expenses in Linear Multistate Consecutively-Connected Systems

    Page(s): 230 - 238

    Many continuous transportation systems can be represented as multi-state linear consecutively connected systems consisting of N+1 linearly ordered nodes. Some of these nodes contain statistically independent multistate elements with different characteristics. Each element j can provide a connection between the node to which it belongs and Xj next nodes, where Xj is a discrete random variable with known probability mass function. If the system contains nodes not connected with any previous node, then gaps exist that require bypass transportation solutions associated with considerable expenses. An algorithm based on the universal generating function method is suggested for evaluating the expected value of these expenses. A problem of finding the multi-state element allocation that minimizes the expected bypass transportation expenses is formulated and solved. Illustrative examples are presented.

  • The Robust Redundancy Allocation Problem in Series-Parallel Systems With Budgeted Uncertainty

    Page(s): 239 - 250

    We propose a robust optimization framework to deal with uncertain component reliabilities in redundancy allocation problems for series-parallel systems. The proposed models are based on linearized versions of standard mixed integer nonlinear programming (MINLP) formulations of these problems. We extend the linearized models to address uncertainty by assuming that the component reliabilities belong to a budgeted uncertainty set, and develop robust counterpart models. A key challenge is that, because the models involve nonlinear functions of the uncertain data, classical robust optimization approaches cannot be applied directly to construct their robust counterparts. We exploit problem structure to develop robust counterparts and exact solution methods, and present computational results demonstrating their performance.
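
    For context, the sketch below solves the nominal, deterministic redundancy allocation problem for a small series-parallel system by brute force, maximizing R(x) = Π_i [1 − (1 − r_i)^{x_i}] under a cost budget. The robust counterpart with a budgeted uncertainty set on the component reliabilities, which is the paper's actual contribution, is not implemented, and the reliabilities, costs, and budget are made up.

```python
from itertools import product

# illustrative data: subsystem i uses x_i identical parallel components
r = [0.80, 0.90, 0.85]    # nominal component reliabilities
c = [2.0, 3.0, 1.5]       # component costs
budget = 20.0

def system_reliability(x):
    """Series-parallel reliability R(x) = prod_i [1 - (1 - r_i)**x_i]."""
    R = 1.0
    for xi, ri in zip(x, r):
        R *= 1.0 - (1.0 - ri) ** xi
    return R

feasible = (x for x in product(range(1, 6), repeat=len(r))
            if sum(xi * ci for xi, ci in zip(x, c)) <= budget)
best = max(feasible, key=system_reliability)
print(best, round(system_reliability(best), 5))
```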

  • Minimum Mission Cost Cold-Standby Sequencing in Non-Repairable Multi-Phase Systems

    Page(s): 251 - 258

    This paper considers the optimal cold standby element sequencing problem (SESP) for 1-out-of-n:G heterogeneous non-repairable cold-standby systems that accomplish multi-phase missions. Given a fixed set of element choices, the objective of the optimal system design is to select the initiation sequence of the system elements so as to minimize the expected mission cost while providing a desired level of system reliability. It is assumed that during different mission phases the elements are exposed to different stresses, which affects their time-to-failure distributions. The startup and exploitation costs of system elements are also phase dependent. We suggest an algorithm for evaluating the mission reliability and expected mission cost based on a discrete approximation of the time-to-failure distributions of the system elements. A genetic algorithm is used as an optimization tool for solving the formulated SESP for multi-phase cold-standby systems. Examples are given to illustrate the considered problem and the proposed solution methodology.

  • Safety Comparison of Centralized and Distributed Aircraft Separation Assurance Concepts

    Page(s): 259 - 269

    This paper presents several models to compare centralized and distributed automated separation assurance concepts in aviation. In a centralized system, safety-related functions are implemented by common equipment on the ground. In a distributed system, safety-related functions are implemented by equipment on each aircraft. Failures of the safety-related functions can increase the risk of near mid-air collisions. Intuitively, failures on the ground are worse than failures in the air because the ground failures simultaneously affect multiple aircraft. This paper evaluates the degree to which this belief is true. Using region-wide models to account for dependencies between aircraft pairs, we derive the region-wide expectation and variance of the number of separation losses for both centralized and distributed concepts. This derivation is done first for a basic scenario involving a single component and function. We show that the variance of the number of separation losses is always higher for the centralized system, holding the expectations equal. However, numerical examples show that the difference is negligible when the events of interest are rare. Results are extended to a hybrid centralized-distributed scenario involving multiple components and functions on the ground and in the air. In this case, the variance of the centralized system may actually be less than that of the distributed system. The overall implication is that the common-cause failure of the ground function does not seriously weaken the overall case for using a centralized concept versus a distributed concept.
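
    The expectation/variance comparison can be reproduced with a toy single-function model: one ground function protecting all n aircraft pairs fails with probability p, whereas in the distributed case each pair's airborne function fails independently with the same probability; the expected number of separation losses is then identical, but the variances differ by roughly a factor of n. The numbers are purely illustrative, and the conditional risk terms of the paper's region-wide models are omitted.

```python
n = 1000    # aircraft pairs exposed in the region (illustrative)
p = 1e-5    # probability that the separation-assurance function fails for a pair (illustrative)

# distributed: each pair protected by its own airborne function, failures independent
mean_dist, var_dist = n * p, n * p * (1 - p)

# centralized: a single ground function; if it fails, all n pairs lose separation together
mean_cent, var_cent = n * p, n**2 * p * (1 - p)

print(mean_dist, mean_cent)   # identical expected numbers of separation losses
print(var_dist, var_cent)     # centralized variance is larger by roughly a factor of n
```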

  • Software Crash Analysis for Automatic Exploit Generation on Binary Programs

    Page(s): 270 - 289

    This paper presents a new method capable of automatically generating attacks on binary programs from software crashes. We analyze software crashes with a symbolic failure model by performing concolic executions following the failure-directed paths, using a whole-system environment model and concrete-address-mapped symbolic memory in S2E. We propose a new selective symbolic input method and lazy evaluation on pseudo symbolic variables to handle symbolic pointers and speed up the process. This is an end-to-end approach able to create exploits from crash inputs or existing exploits for various applications, including most of the existing benchmark programs and several large-scale applications, such as a word processor (Microsoft Office Word), a media player (mplayer), an archiver (unrar), and a PDF reader (Foxit). We can deal with vulnerability types including stack and heap overflows, format string vulnerabilities, and the use of uninitialized variables. Notably, these applications have become software fuzz testing targets, but still require a manual process with security knowledge to produce mitigation-hardened exploits. Using this method to generate exploits is an automated process for software failures without source code. The proposed method is simpler, more general, faster, and more scalable to large programs than existing systems. We produce the exploits within one minute for most of the benchmark programs, including mplayer. We also transform existing exploits of Microsoft Office Word into new exploits within four minutes. The best speedup is 7,211 times faster than the initial attempt. For heap overflow vulnerabilities, we can automatically exploit the unlink() macro of glibc, which formerly required sophisticated hacking efforts.

  • The DStar Method for Effective Software Fault Localization

    Page(s): 290 - 308

    Effective debugging is crucial to producing reliable software. Manual debugging is becoming prohibitively expensive, especially due to the growing size and complexity of programs. Given that fault localization is one of the most expensive activities in program debugging, there has been a great demand for fault localization techniques that can help guide programmers to the locations of faults. In this paper, a technique named DStar (D*) is proposed, which can suggest suspicious locations for fault localization automatically without requiring any prior information on program structure or semantics. D* is evaluated across 24 programs, and is compared to 38 different fault localization techniques. Both single-fault and multi-fault programs are used. Results indicate that D* is more effective at locating faults than all the other techniques it is compared to. An empirical evaluation is also conducted to illustrate how the effectiveness of D* increases as the exponent * grows, and then levels off when the exponent exceeds a critical value. Discussions are presented to support these observations.
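
    For readers curious about the ranking mechanics, the snippet below computes the commonly cited form of the D* suspiciousness score, ncf^*/(ncs + nuf), from a toy coverage matrix; the statement names and test sets are hypothetical, and this is not the evaluation framework used in the paper.

```python
def dstar(coverage, failed, star=2):
    """Rank statements by the commonly cited D* score: ncf**star / (ncs + nuf).

    coverage: dict statement -> set of ids of tests that execute it
    failed:   set of ids of failing tests
    """
    scores = {}
    for stmt, tests in coverage.items():
        ncf = len(tests & failed)            # failing tests that cover stmt
        ncs = len(tests) - ncf               # passing tests that cover stmt
        nuf = len(failed) - ncf              # failing tests that do not cover stmt
        denom = ncs + nuf
        scores[stmt] = float("inf") if denom == 0 else ncf ** star / denom
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# hypothetical coverage data: s3 is executed by both failing tests and one passing test
coverage = {"s1": {1, 2, 3}, "s2": {2, 4}, "s3": {3, 4, 5}, "s4": {1, 5}}
failed = {3, 5}
print(dstar(coverage, failed))   # s3 ranks first
```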


Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.

Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong