IEEE Transactions on Reliability

Issue 2 • June 2013

  • Table of contents

    Page(s): C1
  • IEEE Transactions on Reliability publication information

    Page(s): C2
  • Local Polynomial Fitting of the Mean Residual Life Function

    Page(s): 317 - 328

    The mean residual life (MRL) function is one of the most important, widely used reliability measures in practice. For example, it is used to design burn-in programs, plan spare provisioning, and formulate warranty policies. Parametric techniques, which rely on the assumption that the parametric form of the failure time distribution is known, are usually employed in estimating the MRL function. However, this approach can lead to an inconsistent, inaccurate estimator of the MRL function if the assumption is violated. A nonparametric approach in such a setup provides a promising alternative. In this paper, we employ local polynomial regression with fixed design points, accompanied by appropriate binning, to construct several new estimators for the MRL function. The asymptotic unbiasedness and consistency of these estimators are proven. We then bring in two popular bandwidth selection methods to select the bandwidth of the proposed MRL estimators. Finally, we evaluate the performance of the estimators using several simulated and real-life examples. Results indicate that the proposed estimators perform well in estimating MRL functions, particularly models with constant, bathtub-shaped, and upside-down bathtub-shaped MRL functions.

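The estimators in this paper smooth the target function MRL(t) = E[T − t | T > t]. As a hedged sketch of the quantity being estimated (the raw empirical plug-in, not the authors' local polynomial estimators):

```python
def empirical_mrl(times, t):
    # Empirical mean residual life at t: the average remaining lifetime
    # among observed failure times that exceed t (plug-in estimate).
    survivors = [x - t for x in times if x > t]
    return sum(survivors) / len(survivors) if survivors else 0.0
```

For an exponential sample this estimate is roughly flat in t, reflecting the memoryless property; the paper's smoothed estimators reduce the variance of estimates of this kind.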
  • Reliability Analysis of the Harmonic Mean Inactivity Time Order

    Page(s): 329 - 337

    Based on the comparison of a certain function of the mean inactivity times of two nonnegative random variables, we introduce and study a new stochastic order. Several elementary properties, as well as preservation properties of the new stochastic order under the reliability operations of convolution, mixture, and shock models, are discussed. We also derive characterizations of some well-known stochastic orders in terms of the new order, and point out some results related to weighted distributions. Some examples are included to illustrate the concepts.

  • Statistical Lifetime Inference With Skew-Wiener Linear Degradation Models

    Page(s): 338 - 350

    Degradation models are widely used to assess the lifetime information of highly reliable products whose quality characteristics both degrade over time and can be related to reliability. The performance of a degradation model largely depends on an appropriate description of the product's degradation path. Conventionally, the random-effect or mixed-effect model is one of the most well-known approaches in the literature, in which the normal distribution is commonly adopted to represent unit-to-unit variability in the degradation model. However, this assumption may not yield accurate projections in practical applications. This paper is motivated by laser data wherein the normal assumption is relaxed to a skew-normal distribution, which provides greater flexibility because it can capture a broad range of non-normal, asymmetric behavior in unit-to-unit variability. Based on the proposed degradation model, we first derive analytical expressions for a product's lifetime distribution along with its corresponding mean time to failure (MTTF). We then utilize the laser data to illustrate the advantages gained by the proposed model. Finally, we address the effects of the skewness parameter on the accuracy of both a product's MTTF and its q-th failure quantile, especially when the underlying skew-normal distribution is mis-specified as a normal distribution. The results demonstrate that the effects of the skewness parameter on the tail probabilities of a product's lifetime distribution are not negligible when the random effect of the true degradation model follows a skew-normal distribution.

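The skew-normal distribution used above for unit-to-unit variability has the standard density f(z) = 2φ(z)Φ(αz), which reduces to the normal density when α = 0. A minimal sketch of this density only (standard parameterization; the paper's full degradation model and MTTF derivation are not reproduced):

```python
import math

def skew_normal_pdf(x, alpha=0.0, loc=0.0, scale=1.0):
    # Skew-normal density f(z) = 2 * phi(z) * Phi(alpha * z), where
    # z = (x - loc) / scale and phi/Phi are the standard normal pdf/cdf.
    # alpha = 0 recovers the ordinary normal density.
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * z / math.sqrt(2.0)))
    return 2.0 * phi * Phi / scale
```

Positive α skews mass to the right, which is how the model captures asymmetric unit-to-unit variability.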
  • Optimal Preventive Maintenance Rate for Best Availability With Hypo-Exponential Failure Distribution

    Page(s): 351 - 361

    The optimal rate of periodic preventive maintenance to achieve the best availability is studied for Markov systems with multiple degraded operational stages, where the time-to-failure has a hypo-exponential distribution. An analytical expression is developed for the availability of such systems having n operational stages, and a necessary and sufficient condition is derived for a non-trivial optimal rate of periodic maintenance to exist. Numerical procedures for finding the optimal rate of periodic maintenance are given, and examples are presented.

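For intuition: a hypo-exponential time-to-failure (a sum of independent exponential stage sojourns) has MTTF = Σ 1/λᵢ, and the classic renewal identity A = MTTF / (MTTF + MTTR) gives a rough availability proxy. This sketch is illustrative only and does not reproduce the paper's exact Markov availability under a periodic maintenance rate:

```python
def hypoexp_mttf(rates):
    # MTTF of a hypo-exponential lifetime: the sum of the mean sojourn
    # times 1/lambda_i across the sequential degradation stages.
    return sum(1.0 / r for r in rates)

def steady_state_availability(rates, mean_repair_time):
    # Classic renewal identity A = MTTF / (MTTF + MTTR); a rough proxy,
    # not the paper's availability expression with preventive maintenance.
    mttf = hypoexp_mttf(rates)
    return mttf / (mttf + mean_repair_time)
```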
  • Maintenance Optimization for Asset Systems With Dependent Performance Degradation

    Page(s): 362 - 367

    This paper presents a model for optimizing maintenance plans for an industrial system consisting of a number of assets with degradation and performance interactions between them. In particular, we consider an asset system with M identical non-critical machines feeding their output to a critical machine. A common repair team performs maintenance on all the machines. All the machines deteriorate stochastically independently over time; in addition, the degradation of the non-critical machines affects the performance of the critical machine. We develop a mathematical model to represent these interactions and the performance of the asset system. We also provide a simulation-based numerical solution to optimize the maintenance plan for the system, outlining the maintenance intervals for each of the machines.

  • A Joint Redundancy and Imperfect Maintenance Strategy Optimization for Multi-State Systems

    Page(s): 368 - 378

    The redundancy allocation problem has been extensively studied with the aim of determining optimal redundancy levels of components at various stages to achieve the required system reliability or availability. In most existing studies, failed elements are assumed to be as good as new after repair, from a failure perspective. Due to deterioration, however, a repaired element cannot always be restored to a virtually new condition unless it is replaced with a new element. In this paper, we present an approach for joint redundancy and imperfect maintenance strategy optimization for multi-state systems. Along with determining the optimal redundancy levels, the element replacement strategy under imperfect repair is optimized simultaneously, so as to reach the desired availability with minimal average expenditure. A generalized imperfect repair model is proposed to characterize the stochastic behavior of multi-state elements (MSEs) after repair, and a replacement policy under which an MSE is replaced once it reaches a pre-determined number of failures is introduced. The cost-repair efficiency relation, which regards imperfect repair efficiency as a function of the assigned repair cost, is put forth to provide flexibility in assigning repair efforts strategically among MSEs. The benefits of the proposed method compared to existing ones are demonstrated and verified via an illustrative case study of a three-stage coal transportation system.

  • Evaluation and Comparison of Mixed Effects Model Based Prognosis for Hard Failure

    Page(s): 379 - 394

    Failure prognosis plays an important role in effective condition-based maintenance. In this paper, we evaluate and compare the hard-failure prediction accuracy of three types of prognostic methods based on mixed-effect models: the degradation-signal based prognostic model with a deterministic threshold (DSPM), the same model with a random threshold (RDSPM), and the joint prognostic model (JPM). In this work, failure prediction performance is measured by the mean squared prediction error, and the power of prediction. We analyze the characteristics of the three methods, and provide insights into the comparison results through both analytical study and extensive simulation. In addition, a case study using real data is conducted to illustrate the comparison results.

  • Vulnerability Scrying Method for Software Vulnerability Discovery Prediction Without a Vulnerability Database

    Page(s): 395 - 407

    Predicting software vulnerability discovery trends can help improve secure deployment of software applications and facilitate backup provisioning, disaster recovery, diversity planning, and maintenance scheduling. Vulnerability discovery models (VDMs) have been studied in the literature as a means to capture the underlying stochastic process. Based on the VDMs, a few vulnerability prediction schemes have been proposed. Unfortunately, all these schemes suffer from the same weaknesses: they require a large amount of historical vulnerability data from a database (hence they are not applicable to a newly released software application), their precision depends on the amount of training data, and they exhibit a significant amount of error in their estimates. In this work, we propose vulnerability scrying, a new paradigm for vulnerability discovery prediction based on code properties. Using compiler-based static analysis of a codebase, we extract code properties such as code complexity (cyclomatic complexity), and more importantly code quality (compliance with secure coding rules), from the source code of a software application. We then propose a stochastic model that uses code properties as its parameters to predict vulnerability discovery. We have studied the impact of code properties on vulnerability discovery trends by performing static analysis on the source code of four real-world software applications, and have used our scheme to predict vulnerability discovery in three other software applications. The results show that, even though we use no historical data in our prediction, vulnerability scrying can predict vulnerability discovery with better precision and less divergence over time.

  • Combining Operational and Debug Testing for Improving Reliability

    Page(s): 408 - 423

    This paper addresses the challenge of reliability-driven testing, i.e., testing software systems with the specific objective of increasing their operational reliability. We first examine the most relevant approach oriented toward this goal, namely operational testing: the main issues that in the past hindered its wide-scale adoption and practical application are discussed, followed by an analysis of its performance under different conditions and configurations. Then, a new approach conceived to overcome the limits of operational testing in delivering high reliability is proposed. The two testing strategies are evaluated probabilistically, and by simulation. Results report on the performance of operational testing when several of the involved parameters are taken into account, and on the effectiveness of the newly proposed approach in achieving better reliability. At a higher level, the findings of the paper also suggest that a different view of the testing-for-reliability-improvement concept may help to devise new testing approaches for highly demanding, high-reliability systems.

  • A Multistage Sequential Test Allocation for Software Reliability Estimation

    Page(s): 424 - 433

    We propose a method to determine how to sequentially allocate test cases among partitions of the software to minimize the expected loss incurred by the Bayes estimator of the overall reliability when the total number of software test cases is fixed. In contrast to fixed sampling schemes, where the proportion of test cases taken from each partition is determined before reliability testing begins, we make allocation decisions dynamically throughout the testing process. Using a fully Bayesian approach, we can take advantage of information from previous functional testing and insights from developers. We then refine these estimates in an iterative manner as we sample. We also compare the results from a multistage sampling scheme with the optimal fixed sampling scheme, and demonstrate, both theoretically and through Monte Carlo simulations, its superiority in terms of the expected loss incurred when the overall reliability is estimated by its Bayes estimator.

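The Bayesian machinery in this entry rests on conjugate updating of per-partition reliabilities. A hedged sketch with a Beta-Bernoulli model (the usage-weighted overall estimator here is an illustrative assumption, not necessarily the paper's loss-optimal allocation rule):

```python
def posterior_mean(prior_a, prior_b, successes, trials):
    # A Beta(a, b) prior combined with Bernoulli test outcomes gives a
    # Beta(a + successes, b + failures) posterior; return its mean.
    return (prior_a + successes) / (prior_a + prior_b + trials)

def overall_reliability(partitions):
    # partitions: list of (usage_weight, a, b, successes, trials).
    # Overall estimate as the usage-weighted sum of partition posterior
    # means (the weighting scheme is an illustrative assumption).
    return sum(w * posterior_mean(a, b, s, n) for w, a, b, s, n in partitions)
```

Sequential allocation then directs the next test case to whichever partition most reduces the expected loss of this estimator.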
  • Using Class Imbalance Learning for Software Defect Prediction

    Page(s): 434 - 443

    To facilitate software testing, and to save testing costs, a wide range of machine learning methods have been studied for predicting defects in software modules. Unfortunately, the imbalanced nature of this type of data increases the learning difficulty of such a task. Class imbalance learning specializes in tackling classification problems with imbalanced distributions, which could be helpful for defect prediction, but has not been investigated in depth so far. In this paper, we study whether and how class imbalance learning methods can benefit software defect prediction, with the aim of finding better solutions. We investigate different types of class imbalance learning methods, including resampling techniques, threshold moving, and ensemble algorithms. Among the methods we studied, AdaBoost.NC shows the best overall performance in terms of measures including balance, G-mean, and Area Under the Curve (AUC). To further improve the performance of the algorithm, and to facilitate its use in software defect prediction, we propose a dynamic version of AdaBoost.NC, which adjusts its parameter automatically during training. Without the need to pre-define any parameters, it is shown to be more effective and efficient than the original AdaBoost.NC.

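AdaBoost.NC itself is beyond a short sketch, but the simplest resampling technique the paper compares against, random oversampling of the minority (defective) class, can be illustrated as follows (binary 0/1 labels assumed; function and parameter names are hypothetical):

```python
import random

def oversample_minority(samples, labels, minority_label=1, seed=0):
    # Duplicate randomly chosen minority-class examples until both
    # classes have equal counts -- the simplest rebalancing baseline.
    # Assumes binary labels; sample order is not preserved.
    rng = random.Random(seed)
    minority = [s for s, y in zip(samples, labels) if y == minority_label]
    majority = [s for s, y in zip(samples, labels) if y != minority_label]
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    new_samples = majority + minority + extra
    new_labels = ([1 - minority_label] * len(majority)
                  + [minority_label] * (len(minority) + len(extra)))
    return new_samples, new_labels
```

Resampling rebalances the training distribution; threshold moving and ensemble methods such as AdaBoost.NC instead adjust the decision rule or the learner itself.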
  • A Literature Review of Research in Software Defect Reporting

    Page(s): 444 - 454

    In 2002, the National Institute of Standards and Technology (NIST) estimated that software defects cost the U.S. economy on the order of $60 billion a year. It is well known that identifying and tracking these defects efficiently has a measurable impact on software reliability. In this work, we evaluate 104 academic papers on defect reporting published between the NIST report and 2012 to identify the most important advancements in improving software reliability through the efficient identification and tracking of software defects. We categorize the research into the areas of automatic defect detection, automatic defect fixing, attributes of defect reports, quality of defect reports, and triage of defect reports. We then summarize the most important work being done in each area. Finally, we provide conclusions on the current state of the literature, suggest tools and lessons learned from the research for practice, and comment on open research problems.

  • Nonparametric Bayesian Estimation of Reliabilities in a Class of Coherent Systems

    Page(s): 455 - 465

    Usually, methods for evaluating system reliability require engineers to quantify the reliability of each of the system components. For series and parallel systems, there are limited options for handling the estimation of each component's reliability. This study examines reliability estimation for two classes of coherent systems: series-parallel, and parallel-series systems. In both cases, the component reliabilities may be unknown. We develop estimators for the reliability functions at all levels of the system (component and system reliabilities). The main assumption required is that the sets of discontinuity points of the distributions of the components of a particular system are disjoint. Nonparametric Bayesian estimators of all sub-distribution and distribution functions are derived, and a Dirichlet multivariate process is considered as a prior distribution for the nonparametric Bayesian estimation of all distributions. For illustration, two simulated numerical examples are presented. The estimators are s-consistent, and the examples show that they perform well. Our estimator can accommodate continuous failure distributions, as well as distributions with mass points.

  • Defending Threshold Voting Systems With Identical Voting Units

    Page(s): 466 - 477

    A threshold voting system consists of N units that each provide a binary decision (0 or 1), or abstain from voting. The system output is 1 if the number of 1-opting units is at least a pre-specified fraction τ of the number of all non-abstaining units; otherwise, the system output is 0. For a system consisting of voting units with a given probabilistic output distribution, one can maximize overall system reliability by choosing a proper threshold value τ. When a system operates in a hostile environment, some units can be destroyed or compromised by an aggressive medium, or by a strategic, malicious counterpart. One way to enhance voting system survivability is to protect its units from possible attacks. We consider a situation in which an attacker and a defender have fixed resources. The defender can protect, and the attacker can attack, a subset of the units. First, we formulate the problem of maximizing the survivability of a threshold voting system by proper choice of the system threshold and the number of protected units, assuming that all the units are attacked. Then we consider a maxmin game in which the defender chooses an optimal system threshold and number of protected units, assuming that the attacker chooses the number of attacked units that minimizes the probability of correct system output.

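Ignoring attacks, the probability of correct output for such a voting system can be computed by enumerating the counts of correct, wrong, and abstaining units. A hedged sketch (an all-abstain outcome is counted as incorrect here, an assumption the paper may treat differently):

```python
from math import ceil, factorial

def voting_reliability(n, p_correct, p_wrong, tau):
    # Each of n independent units votes correctly w.p. p_correct, wrongly
    # w.p. p_wrong, and abstains otherwise. The output is correct when the
    # correct votes reach the fraction tau of the non-abstaining votes;
    # an all-abstain outcome is counted as incorrect (an assumption).
    p_abstain = 1.0 - p_correct - p_wrong
    total = 0.0
    for c in range(n + 1):
        for w in range(n - c + 1):
            a = n - c - w
            if c + w > 0 and c >= ceil(tau * (c + w)):
                mult = factorial(n) // (factorial(c) * factorial(w) * factorial(a))
                total += mult * p_correct**c * p_wrong**w * p_abstain**a
    return total
```

Sweeping tau over a grid and keeping the maximizer illustrates the threshold-optimization step; the attacker-defender game in the paper builds on top of this computation.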
  • Fault Tolerance Analysis of Surveillance Sensor Systems

    Page(s): 478 - 489

    A surveillance sensor system is a network of sensors that provides surveillance coverage of designated geographical areas. If all sensors are working properly, a well-designed surveillance system can provide the desired level of detection capability for the locations and regions it covers. In reality, sensors may fail and fall out of service. Motivated by the need to determine the ability of a surveillance sensor system to tolerate sensor failures, we propose a fault tolerance capability measure to quantify the robustness of surveillance systems. The proposed measure is a conditional probability characterizing the likelihood that a surveillance system is still working in the presence of sensor failures. Case studies of the surveillance sensor system in a major US port demonstrate that this new measure differentiates surveillance systems better than the sensor redundancy measure or the reliability measure.

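The proposed measure conditions on sensor failures having occurred. As a brute-force sketch, assuming all f-sensor failure subsets are equally likely (the paper's definition may weight failure events differently):

```python
from itertools import combinations

def fault_tolerance(coverage, targets, f):
    # coverage maps each sensor to the set of targets it covers.
    # Returns the fraction of f-sensor failure subsets (assumed
    # equally likely) after which every target is still covered
    # by at least one working sensor.
    sensors = list(coverage)
    subsets = list(combinations(sensors, f))
    ok = 0
    for failed in subsets:
        covered = set()
        for s in sensors:
            if s not in failed:
                covered |= coverage[s]
        if covered >= set(targets):
            ok += 1
    return ok / len(subsets)
```

Unlike a plain redundancy count, this conditional measure distinguishes layouts whose overlap patterns differ even when the sensor counts match.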
  • Reliability Analysis of Repairable Systems With Dependent Component Failures Under Partially Perfect Repair

    Page(s): 490 - 498

    Existing reliability models for repairable systems with a single component can be applied across a range of repair actions, from perfect repair to minimal repair. Establishing reliability models for multi-component repairable systems, however, is still a challenging problem when considering the dependency of component failures. This paper focuses on a special repair assumption, called partially perfect repair, for repairable systems with dependent component failures, where only the failed component is repaired to an as-good-as-new condition. A parametric reliability model is proposed to capture the statistical dependency among different component failures, in which the joint distribution of the latent component failure times is established using copula functions. The model parameters are estimated using the maximum likelihood method, and the likelihood function is calculated based on conditional probability. Based on the proposed reliability model, statistical hypothesis testing procedures are developed to determine the dependency of component failures. The developed methods are illustrated with an application to a cylinder head assembly cell that consists of multiple stations.

  • Reliability of a k-Out-of-n System Equipped With a Single Warm Standby Component

    Page(s): 499 - 503

    A k-out-of-n:G system consists of n components, and operates if at least k of its components operate. Its reliability properties have been widely studied in the literature from different perspectives. This paper is concerned with the reliability analysis of a k-out-of-n:G system equipped with a single warm standby unit. We obtain an explicit expression for the reliability function of the system for arbitrary lifetime distributions. Two different mean residual life functions are also studied for the system.

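For the i.i.d. special case without the standby unit, the k-out-of-n:G reliability has the familiar binomial closed form; the paper's result generalizes this to arbitrary lifetime distributions with a warm standby. A minimal sketch of the special case only:

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    # Reliability of a k-out-of-n:G system with i.i.d. components, each
    # working with probability p (the warm standby is not modeled here):
    # sum of binomial probabilities of k or more working components.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

Setting k = 1 recovers a parallel system and k = n a series system, the two extremes the abstract mentions.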
  • An Extension of Universal Generating Function in Multi-State Systems Considering Epistemic Uncertainties

    Page(s): 504 - 514

    Many practical methods and different approaches have been proposed to assess Multi-State System (MSS) reliability measures. The universal generating function (UGF) method, introduced in 1986, is known to be a very efficient way of evaluating the availability of different types of MSSs. In this paper, we propose an extension of the UGF method that accounts for epistemic uncertainties. This extended method allows one to model ill-known probabilities and transition rates, or to model both aleatory and epistemic uncertainty in a single model. It is based on the use of belief functions, which are general models of uncertainty. We also compare this extension with UGF methods based on interval arithmetic operations performed on probabilistic bounds.

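A conventional UGF represents each element as a set of (performance, probability) pairs and composes elements through a structure function, e.g., min for series flow and sum for parallel capacity. A sketch of the classic probabilistic UGF (the paper's belief-function extension is not reproduced here):

```python
def combine(u1, u2, op):
    # u1, u2: UGFs as dicts {performance level: probability}.
    # Compose with structure function op, merging equal levels --
    # the polynomial product at the heart of the UGF method.
    out = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            g = op(g1, g2)
            out[g] = out.get(g, 0.0) + p1 * p2
    return out

def availability(u, demand):
    # P(system performance >= demand), read directly off the UGF.
    return sum(p for g, p in u.items() if g >= demand)
```

For example, two parallel elements with additive capacity, combine({0: 0.1, 5: 0.9}, {0: 0.2, 5: 0.8}, lambda a, b: a + b), give availability 0.98 at demand 5.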
  • Planning Accelerated Life Tests Under Scheduled Inspections for Log-Location-Scale Distributions

    Page(s): 515 - 526

    This paper proposes an efficient approach to planning an accelerated life test (ALT) under scheduled inspections. We aim to simultaneously optimize stress levels, sample allocation, and inspection times for lifetimes that follow log-location-scale distributions, including the Weibull and lognormal distributions. This high-dimensional optimization problem is solved by a computationally efficient approach leveraging the asymptotic equivalence between the selection of sample quantiles for parameter estimation of a location-scale distribution and the selection of the optimal inspection times during an ALT for the same purpose. A numerical example is presented to illustrate the application of the proposed approach, and a sensitivity analysis is performed to investigate the robustness of the optimal ALT plans against misspecification of the planning inputs. A computer program coded in the MATLAB Graphical User Interface Design Environment is provided to make our method readily applicable in practice.

  • A Novel Approach to Optimal Accelerated Life Test Planning With Interval Censoring

    Page(s): 527 - 536

    Accelerated life testing (ALT) is widely used in industry to obtain the lifetime estimate of a product which is expected to last years or even decades. It is important to find an effective experimental design of ALT with the consideration of certain optimality criteria. In this paper, we discuss a new approach to designing ALT test plans when readout data (i.e., interval censoring) are collected. We utilize the proportional hazard (PH) model for the failure time distribution, and formulate a generalized linear model (GLM) for the censored data. The optimal design is obtained such that the prediction variance of the expected product lifetime at the product's use condition is minimized.

  • Expectation Maximization Algorithm for One-Shot Device Accelerated Life Testing With Weibull Lifetimes, and Variable Parameters Over Stress

    Page(s): 537 - 551

    In reliability analysis, accelerated life tests are commonly used for inducing more failures, and thus obtaining more lifetime information in a relatively short period of time. In this paper, we study binary response data collected from an accelerated life test arising from one-shot device testing, based on a Weibull lifetime distribution with both scale and shape parameters varying over stress factors. Log-linear link functions are used to connect both the scale and shape parameters in the Weibull model with the stress factors. Because no failure times of units are observed, we use the EM algorithm for computing the maximum likelihood estimates (MLEs) of the model parameters. Moreover, we develop inferences on the reliability at a specific time, and on the mean lifetime at normal operating conditions. This method of estimation is then compared with the Fisher scoring and least-squares methods in terms of mean square error, tolerance value, computational time, and number of cases of divergence. Asymptotic confidence intervals and parametric bootstrap confidence intervals are also developed for some parameters of interest, and a transformation approach is proposed for constructing confidence intervals. A simulation study is then carried out to demonstrate that the proposed estimators perform very well for data of the considered form. Such accelerated one-shot device testing data can also be found in survival analysis. As an illustration, we consider an application of the proposed algorithm to mice tumor toxicology data from a study involving the development of tumors with respect to risk factors such as sex, strain of offspring, and dose effects.

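The log-linear links described above tie both Weibull parameters to the stress level. A hedged sketch of the resulting reliability function (the coefficient values are hypothetical placeholders, not fitted estimates from the paper):

```python
import math

def weibull_reliability(t, stress, a=(0.0, 0.0), b=(0.0, 0.0)):
    # Log-linear links as in the paper's setup: log(scale) = a0 + a1*stress
    # and log(shape) = b0 + b1*stress; coefficients here are hypothetical.
    scale = math.exp(a[0] + a[1] * stress)
    shape = math.exp(b[0] + b[1] * stress)
    # Weibull survival function R(t) = exp(-(t/scale)^shape).
    return math.exp(-((t / scale) ** shape))
```

A negative a1 makes the scale shrink with stress, so reliability at a fixed time drops as stress rises, which is the mechanism an accelerated test exploits.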
  • Open Access

    Page(s): 552
  • IEEE Transactions on Reliability institutional listings

    Page(s): C3

Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong