
IEEE Transactions on Reliability

Issue 1 • March 2012


Displaying Results 1 - 25 of 34
  • Table of contents

    Page(s): C1 - 1
    Freely Available from IEEE
  • IEEE Transactions on Reliability publication information

    Page(s): C2
    Freely Available from IEEE
  • Guest Editorial: Managing for Reliability

    Page(s): 2 - 3
    Freely Available from IEEE
  • Component Reliability Criticality or Importance Measures for Systems With Degrading Components

    Page(s): 4 - 12

    This paper proposes two new importance measures: one for systems with s-independent degrading components, and another for systems with s-correlated degrading components. Importance measures in previous research are inadequate for systems with degrading components because they apply only to steady-state cases and discrete-state problems, without considering the continuously changing status of the degrading components. Our new importance measures are functions of time that can provide timely feedback on the critical components prior to failure, based on the measured or observed degradation. Furthermore, the correlation between components is incorporated into these importance measures through a multivariate distribution. To evaluate the criticality of components, we analyzed reliability models for multi-component systems with degrading components, which can also be utilized for studying maintenance models. Numerical examples show that the proposed importance measures are an effective tool for assessing component criticality in systems with degrading components.

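A minimal sketch of the time-dependent importance idea: the classical Birnbaum measure evaluated along Gamma degradation paths for a 2-out-of-3 system. The structure, threshold, and all parameter values below are illustrative assumptions, not the paper's exact measures.

```python
# Time-dependent component importance for a 2-out-of-3 system whose
# components degrade along Gamma paths (illustrative assumptions).
import numpy as np
from scipy.stats import gamma

L = 10.0                           # failure threshold on the degradation level
rates = np.array([0.8, 1.0, 1.2])  # Gamma shape growth rate per component
scale = 1.0                        # common Gamma scale

def p_work(t):
    """P(component i still works at t) = P(Gamma(rate_i * t) < L)."""
    return gamma.cdf(L, a=rates * t, scale=scale)

def birnbaum(t):
    """dh/dp_i for the 2-out-of-3 structure h = p1p2 + p1p3 + p2p3 - 2p1p2p3."""
    p = p_work(t)
    out = np.empty(3)
    for i in range(3):
        j, k = [x for x in range(3) if x != i]
        out[i] = p[j] + p[k] - 2.0 * p[j] * p[k]
    return out

for t in (1.0, 5.0, 10.0):
    print(t, birnbaum(t))  # component ranking changes as degradation evolves
```
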
  • Modeling the Dependent Competing Risks With Multiple Degradation Processes and Random Shock Using Time-Varying Copulas

    Page(s): 13 - 22

    We develop an s-dependent competing risk model for systems subject to multiple degradation processes and random shocks, using time-varying copulas. The proposed model allows for a more flexible dependence structure between risks in which (a) the dependence between random shocks and degradation processes is modulated by a time-scaled covariate factor, and (b) the dependence among the various degradation processes is fitted using the copula method. Two types of random shocks are considered in the model: fatal shocks, which fail the system immediately; and nonfatal shocks, which do not. A nonfatal shock affects the degradation processes in two ways: sudden increment jumps, and degradation rate acceleration. System reliability estimates from both constant and time-varying copulas are compared in numerical examples to demonstrate the application of the proposed model. The modified joint distribution bounds in terms of Kendall's tau and Spearman's rho improve on the Fréchet-Hoeffding bounds for estimating the possible range of system reliability.

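To make the copula idea concrete, the sketch below samples dependent degradation increments through a Gaussian copula whose correlation varies with time. The marginals, the correlation decay, and all parameters are assumptions for illustration; the paper's time-varying copula construction differs.

```python
# Sampling two dependent degradation increments via a Gaussian copula
# whose correlation drifts with time (a stand-in for time-varying copulas).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def correlated_increments(t, n=10_000):
    r = 0.8 * np.exp(-0.1 * t)   # assumed: correlation decays with time
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=n)
    u = stats.norm.cdf(z)        # map to dependent uniforms (the copula)
    inc1 = stats.gamma.ppf(u[:, 0], a=2.0, scale=0.5)  # degradation process 1
    inc2 = stats.gamma.ppf(u[:, 1], a=1.5, scale=0.8)  # degradation process 2
    return inc1, inc2

x, y = correlated_increments(t=1.0)
print(np.corrcoef(x, y)[0, 1])   # induced dependence, weaker at larger t
```
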
  • Effects of Cycling Humidity on the Performance of RFID Tags With ACA Joints

    Page(s): 23 - 31

    Radio frequency identification (RFID) tags with anisotropically conductive adhesive (ACA) joints are used in applications where the environmental conditions may impair their reliability, so the effects of different environmental stresses on reliability need to be investigated. High temperature and humidity may change the performance of the tags; constantly varying temperature and humidity conditions may be even more harmful. In this study, the effects of changing humidity conditions on the performance of a passive ultra high frequency (UHF) RFID tag with ACA joints were studied. The tags were tested in a humidity cycling test where humidity varied from 85% RH to 10% RH, and temperature from 85°C to 25°C. Tags with four different sets of bonding parameters were tested, and significant differences in reliability between the tags with different bonding parameters were observed. The results were also compared with results from a corresponding constant humidity test at 85% RH and 85°C. The tags had different failure times, modes, and mechanisms in the two tests, and the effects of the bonding parameters on reliability also differed. According to this study, it is important to investigate the effects of changing humidity when reliability in different environments is of interest, but the constant humidity test cannot be replaced by the faster humidity cycling test.

  • Improved Estimation of Weibull Parameters Considering Unreliability Uncertainties

    Page(s): 32 - 40

    We propose a linear regression method for estimating Weibull parameters from life tests. The method uses stochastic models of the unreliability at each failure instant. As a result, a heteroscedastic regression problem arises, which is solved by weighted least squares minimization. The main feature of our method is an innovative s-normalization of the failure data models, which yields analytic expressions for the centers and weights of the regression. The method has been contrasted, by Monte Carlo simulation, with Benard's approximation and maximum likelihood estimation, and it achieved the highest global scores for robustness and performance.

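For readers unfamiliar with rank regression, the sketch below shows the basic linearization such methods build on, using Benard's median-rank approximation and (optionally weighted) least squares. The authors' s-normalized centers and weights are not reproduced here, and the failure times are invented.

```python
# Median-rank regression for Weibull parameters (simple weights, not the
# paper's s-normalization; failure times are invented).
import numpy as np

t = np.sort(np.array([42., 71., 105., 131., 160., 190., 225., 270.]))
n = len(t)
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)      # Benard's median-rank approximation

# Linearize F(t) = 1 - exp(-(t/eta)^beta):  y = beta*x - beta*ln(eta)
x = np.log(t)
y = np.log(-np.log(1.0 - F))

w = np.ones(n)                 # replace with variance-based weights for true WLS
W = np.diag(w)
A = np.column_stack([x, np.ones(n)])
slope, intercept = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

beta = slope
eta = np.exp(-intercept / beta)
print(f"beta = {beta:.3f}, eta = {eta:.1f}")
```
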
  • A Proposed Measure of Residual Life of Live Components of a Coherent System

    Page(s): 41 - 49

    The concept of the signature of a coherent system is useful for studying the stochastic and aging properties of the system. Let X1:n, X2:n, ⋯, Xn:n denote the ordered lifetimes of the components of a coherent system consisting of n i.i.d. components. If T denotes the lifetime of the system, then the signature of the system is defined to be the probability vector s = (s1, s2, ⋯, sn) such that si = P(T = Xi:n), i = 1, 2, ⋯, n. Here we consider a coherent system with signature of the form s = (s1, s2, ⋯, si, 0, ⋯, 0), where sk > 0, k = 1, 2, ⋯, i. Under the condition that the system is working at time t, we propose a time-dependent measure for the residual life of the live components of the system, i.e., Xk:n, k = i + 1, ⋯, n. Several stochastic and aging properties of the proposed measure are explored.

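The signature definition above can be made concrete with a small computation. The sketch below enumerates failure orders for the 3-component system T = min(X1, max(X2, X3)), whose signature is known to be (1/3, 2/3, 0).

```python
# Computing a system signature by enumerating component failure orders.
from itertools import permutations
from fractions import Fraction
from math import factorial

def works(state):                      # state[i] == 1 means component i is up
    return state[0] == 1 and (state[1] == 1 or state[2] == 1)

n = 3
counts = [0] * n
for order in permutations(range(n)):   # order in which components fail
    state = [1] * n
    for k, comp in enumerate(order, start=1):
        state[comp] = 0
        if not works(state):
            counts[k - 1] += 1         # system dies at the k-th component failure
            break

signature = [Fraction(c, factorial(n)) for c in counts]
print(signature)                       # [1/3, 2/3, 0]
```
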
  • Remaining Useful Life Estimation Based on a Nonlinear Diffusion Degradation Process

    Page(s): 50 - 67

    Remaining useful life estimation is central to the prognostics and health management of systems, particularly for safety-critical systems, and systems that are very expensive. We present a nonlinear model to estimate the remaining useful life of a system based on monitored degradation signals. A diffusion process with a nonlinear drift coefficient and a constant threshold is transformed into a linear model with a variable threshold to characterize the dynamics and nonlinearity of the degradation process. This new diffusion process contrasts sharply with existing models that use a linear drift, and with models that use a linear drift on transformed data that were originally nonlinear; both are based on a constant threshold. To estimate the remaining useful life, an analytical approximation to the distribution of the first hitting time of the diffusion process crossing a threshold level is obtained in closed form by a time-space transformation, under a mild assumption. The unknown parameters of the model are estimated by maximum likelihood, and goodness-of-fit measures are applied. The usefulness of the proposed model is demonstrated on several real-world examples. The results reveal that accounting for nonlinearity in the degradation process can significantly improve the accuracy of remaining useful life estimation.

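A Monte Carlo counterpart to the first-hitting-time idea: simulate a diffusion with a power-law (nonlinear) drift and record first crossings of a constant threshold. The drift form, noise level, and threshold are assumptions for illustration, not the paper's fitted model.

```python
# Monte Carlo first-hitting-time distribution for a diffusion with a
# nonlinear drift (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma, threshold = 0.2, 1.5, 0.3, 5.0
dt, t_max, n_paths = 0.01, 30.0, 10_000

steps = int(t_max / dt)
x = np.zeros(n_paths)
hit = np.full(n_paths, np.nan)
for k in range(1, steps + 1):
    t = k * dt
    drift = a * b * t ** (b - 1.0)     # nonlinear drift mu(t)
    x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    newly = np.isnan(hit) & (x >= threshold)
    hit[newly] = t                     # record only the first crossing

print("P(hit by t_max):", np.mean(~np.isnan(hit)))
print("mean first hitting time:", np.nanmean(hit))
```
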
  • An Approximate Solution to the G–Renewal Equation With an Underlying Weibull Distribution

    Page(s): 68 - 73

    An important characteristic of the g-renewal process, of great practical interest, is the g-renewal equation, which represents the expected cumulative number of recurrent events as a function of time. As in an ordinary renewal process, the problem is that the g-renewal equation has no closed form solution, unless the underlying event times are exponentially distributed. The Monte Carlo solution, although exhaustive, is computationally demanding. This paper offers an approximate solution that is simple to implement in an Excel spreadsheet, for the case where the underlying failure-time distribution is Weibull. The accuracy of the proposed solution is in the neighborhood of 2% when compared to the respective Monte Carlo solution. Based on the proposed solution, we also consider an estimation procedure for the parameters of the g-renewal process.

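The Monte Carlo benchmark mentioned above is straightforward to sketch. The simulation below estimates the g-renewal function for a Weibull underlying distribution, using the Kijima-I virtual-age representation with restoration factor q; all parameter values are illustrative.

```python
# Monte Carlo estimate of the g-renewal function Omega(t): expected
# cumulative number of events by t, Weibull underlying distribution.
import numpy as np

rng = np.random.default_rng(0)
beta, eta, q, t_end, n_rep = 1.8, 100.0, 0.3, 500.0, 10_000

def draw_next(v):
    """Sample time to next event given virtual age v (conditional Weibull)."""
    u = rng.random()
    return eta * ((v / eta) ** beta - np.log(u)) ** (1.0 / beta) - v

counts = np.zeros(n_rep)
for r in range(n_rep):
    t = v = 0.0
    while True:
        x = draw_next(v)
        t += x
        if t > t_end:
            break
        v += q * x          # Kijima-I: repair removes a (1-q) share of the added age
        counts[r] += 1

print("Omega(t_end) ~", counts.mean())  # q=0: renewal process; q=1: NHPP
```
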
  • Applying Bayesian Model Averaging for Quantile Estimation in Accelerated Life Tests

    Page(s): 74 - 83

    In an accelerated life test, inferences on extreme quantiles of the lifetime distribution at the use condition are obtained by extrapolation in two directions: in time, and in stress level. Extrapolation is known to depend strongly on the working model, and ignoring model uncertainty can result in over-confidence. This paper explores the use of Bayesian model averaging for estimating quantiles in an accelerated life test. The two most commonly used lifetime regression models, the lognormal and Weibull log-location-scale regression models, are considered as candidate models. To illustrate, we analyze complementary metal-oxide semiconductor integrated circuit data. We also construct a simulation study to compare the performance of the Bayesian model averaging credibility intervals with other existing interval estimators. The simulation study shows that, for estimating extreme quantiles, both the standard Bayesian and the maximum likelihood approaches can lead to over-confident results, while Bayesian model averaging provides a credibility interval with a wider average length but a more accurate coverage probability.

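A rough sketch of the model-averaging idea, using BIC weights as a simple approximation to posterior model probabilities. The paper performs full Bayesian model averaging and includes the stress-regression structure, both omitted here; the data are simulated.

```python
# BIC-weighted averaging of an extreme quantile over Weibull and lognormal
# candidate models (a crude stand-in for Bayesian model averaging).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = stats.weibull_min.rvs(2.0, scale=100.0, size=30, random_state=rng)

models = {}
c, _, sc = stats.weibull_min.fit(data, floc=0)
models["weibull"] = (stats.weibull_min(c, scale=sc),
                     np.sum(stats.weibull_min.logpdf(data, c, scale=sc)))
s, _, sc = stats.lognorm.fit(data, floc=0)
models["lognormal"] = (stats.lognorm(s, scale=sc),
                       np.sum(stats.lognorm.logpdf(data, s, scale=sc)))

n, k = len(data), 2                      # k free parameters per model
bic = {m: k * np.log(n) - 2 * ll for m, (_, ll) in models.items()}
w = {m: np.exp(-0.5 * (b - min(bic.values()))) for m, b in bic.items()}
z = sum(w.values())

p = 0.01                                 # extreme quantile of interest
q_bma = sum(w[m] / z * d.ppf(p) for m, (d, _) in models.items())
print({m: round(w[m] / z, 3) for m in w}, "q_0.01 ~", round(q_bma, 2))
```
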
  • Analytical Method to Determine Uncertainty Propagation in Fault Trees by Means of Binary Decision Diagrams

    Page(s): 84 - 94

    An analytical method is presented that propagates uncertainties described by continuous probability density functions through fault trees, from the lower level (basic events) to the higher level (top event) of a stochastic binary system. It is based on calculating the expected value and the variance of the top-event probability by means of Binary Decision Diagrams (BDD), and allows an accurate computation of both quantities. We show, on a benchmark of real fault trees, that our method yields a quantitative and qualitative improvement in the safety analysis of industrial systems, especially where accurate evaluation of Safety Integrity Levels (SIL) is required in the presence of different sources of uncertainty. The numerical results of the analytical method are in good agreement with those of the Monte Carlo method.

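The moment-propagation idea can be shown on a toy tree, TOP = A OR (B AND C). Because the top-event probability is multilinear in the s-independent basic-event probabilities, its mean and variance follow from the first two moments of each input; the BDD machinery in the paper automates this bookkeeping for large trees. The tree and the Beta priors below are assumptions.

```python
# Exact mean and variance of Q = a + bc - abc when a, b, c are independent
# Beta-distributed basic-event probabilities, with a Monte Carlo check.
import numpy as np

rng = np.random.default_rng(3)
prior = {"A": (2, 98), "B": (5, 95), "C": (3, 97)}   # Beta(alpha, beta)

m1 = {e: a / (a + b) for e, (a, b) in prior.items()}                    # E[p]
m2 = {e: m1[e] * (a + 1) / (a + b + 1) for e, (a, b) in prior.items()}  # E[p^2]

EQ = m1["A"] + m1["B"] * m1["C"] - m1["A"] * m1["B"] * m1["C"]
# Q^2 = a^2 + b^2 c^2 + a^2 b^2 c^2 + 2abc - 2a^2 bc - 2ab^2 c^2
EQ2 = (m2["A"] + m2["B"] * m2["C"] + m2["A"] * m2["B"] * m2["C"]
       + 2 * m1["A"] * m1["B"] * m1["C"]
       - 2 * m2["A"] * m1["B"] * m1["C"]
       - 2 * m1["A"] * m2["B"] * m2["C"])
print("analytic:", EQ, EQ2 - EQ**2)

p = {e: rng.beta(a, b, 200_000) for e, (a, b) in prior.items()}
Q = p["A"] + p["B"] * p["C"] - p["A"] * p["B"] * p["C"]
print("MC      :", Q.mean(), Q.var())
```
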
  • A General Imperfect Repair Model Considering Time-Dependent Repair Effectiveness

    Page(s): 95 - 100

    The Kijima I and Kijima II models are two important imperfect repair models in the literature. These models use a single constant parameter to represent the degree of repair, called the Repair Effectiveness (RE) in this paper. We develop a more general imperfect repair model by extending the constant RE to a time-dependent function based on the virtual age process; the Kijima models are special cases of the new model. A simulation method is developed to estimate the cumulative number of failures for the new model, and a Bayesian inference method is proposed to select the best imperfect repair model. Finally, a numerical example is provided to demonstrate the new model. In this example, the new model shows a more accurate mean and a narrower confidence interval than those of the nonhomogeneous Poisson process, Kijima I, and Kijima II models.

  • A Data-Driven Approach to Selecting Imperfect Maintenance Models

    Page(s): 101 - 112

    Many imperfect maintenance models have been developed to mathematically characterize the efficiency of maintenance activity from various points of view. However, the adequacy of an imperfect maintenance model must be validated before it is used in decision making, and the most adequate model among the candidates is desired. The contributions of this paper lie in three aspects: 1) it proposes an approach to conducting a goodness-of-fit test, 2) it introduces a Bayesian approach to selecting the most adequate model among several competing candidates, and 3) it develops a framework that incorporates the model selection results into preventive maintenance decision making. The effectiveness of the proposed methods is demonstrated by three numerical case studies. The case studies show that the proposed methods are able to identify the most adequate model from the competing candidates, and that incorporating the model selection results into the maintenance decision model achieves better estimation for applications with limited data.

  • Availability and Cost-Constrained Long-Reach Passive Optical Network Planning

    Page(s): 113 - 124

    To avoid huge data loss in the last mile of Internet service, Passive Optical Networks (PONs) need to be designed with a high availability guarantee. Because next-generation PON efforts extend the coverage of optical broadband access networks under the name long-reach PON, availability-guaranteed planning of PONs for long-reach access is required. In this paper, we propose a Mixed Integer Linear Programming (MILP)-based approach, and a heuristic algorithm, for the planning of survivable long-reach passive optical networks. The MILP-based planning model consists mainly of cost and availability constraints, with the objective of the largest possible area coverage. The heuristic, called Locate-ONU-with-Lowest-Availability-Requirement-First (LOWLARF), performs a faster search for nearly optimal locations of the Optical Network Units (ONUs), Optical Line Terminal (OLT), and optical splitter, under the same objective and constraints as the MILP model. The heuristic and the MILP model are compared in terms of the solution spaces provided for a small-sized problem. LOWLARF offers significantly reduced running time, and numerical results indicate that it provides results close to those of the MILP-based planning. Furthermore, three survivability schemes are compared in terms of deployment cost, availability, and coverage by MILP-based planning and LOWLARF, under two different availability-requirement scenarios. The results show that, under both scenarios, the protection scheme offering a lower bound of 99.999% availability leads to the highest deployment cost while covering the smallest area, whereas the protection schemes that guarantee 99.99% availability with less redundancy can cover a larger area.

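The availability constraints in such planning models reduce to series/parallel availability arithmetic. A tiny sketch with invented element availabilities:

```python
# End-to-end availability of an unprotected vs. feeder-protected PON path
# (element availabilities are invented for illustration).
feeder, splitter, drop = 0.9995, 0.99999, 0.99995

a_series = feeder * splitter * drop                     # series elements
a_protected = (1 - (1 - feeder) ** 2) * splitter * drop  # duplicated feeder

for name, a in [("unprotected", a_series), ("feeder-protected", a_protected)]:
    print(f"{name}: {a:.6f}  ({'meets' if a >= 0.9999 else 'fails'} 99.99%)")
```
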
  • IEEE 1413: A Standard for Reliability Predictions

    Page(s): 125 - 129

    There is no standard method for creating a hardware reliability prediction; consequently, predictions vary widely in methodological rigor, data quality, extent of analysis, and uncertainty, and the prediction process employed is often not documented. These inconsistencies can leave the user of a prediction confused, uncertain of its true value, and unable to compare the results of two reliability predictions of the same hardware. IEEE has created a standard, IEEE 1413 [1], which, when followed, results in consistent, complete documentation of a prediction. Users can then understand the strengths and weaknesses of a prediction, and compare the value and usefulness of multiple predictions. The standard creates consistency by requiring documentation of specific processes, activities, and levels of knowledge.

  • Particle Filtering for the Detection of Fault Onset Time in Hybrid Dynamic Systems With Autonomous Transitions

    Page(s): 130 - 139

    The behavior of multi-component engineered systems is typically characterized by transitions among discrete modes of operation and failure, each giving rise to a specific continuous dynamics of evolution. Detecting the system's mode change time is particularly challenging because it requires keeping track of the transitions among the multiple system dynamics corresponding to the different modes of operation and failure. To this end, we implement a novel particle filtering method within a log-likelihood ratio approach, specifically tailored to handle hybrid dynamic systems. The proposed method relies on the generation of multiple particle swarms for each discrete mode, each originating from the nominal particle swarm at different time instants. The hybrid system considered consists of a hold-up tank filled with liquid, whose level is autonomously maintained between two thresholds; the system behavior is controlled by discrete mode actuators whose states are estimated by a Monte Carlo-based particle filter on the basis of noisy level and temperature measurements.

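A single-swarm bootstrap particle filter on an invented scalar level model conveys the basic mechanics; the paper's multi-swarm construction per discrete mode is not reproduced here, and the model and parameters are assumptions.

```python
# Bootstrap (SIR) particle filter detecting an abrupt drift change in a
# scalar level signal; the discrete mode is carried in the particle state.
import numpy as np

rng = np.random.default_rng(4)
T, n_p = 100, 1000
drift_fault, fault_onset = 0.05, 60
sig_proc, sig_obs = 0.05, 0.2

# Simulate the "true" level and noisy observations
x_true = np.zeros(T)
for k in range(1, T):
    d = drift_fault if k >= fault_onset else 0.0
    x_true[k] = x_true[k - 1] + d + sig_proc * rng.standard_normal()
y = x_true + sig_obs * rng.standard_normal(T)

# Filter with augmented state (level, mode); rare random fault transitions
x = np.zeros(n_p)
mode = np.zeros(n_p, dtype=int)     # 0 nominal, 1 faulty
for k in range(1, T):
    mode = np.where(rng.random(n_p) < 0.01, 1, mode)   # 0 -> 1 transitions
    x = x + np.where(mode == 1, drift_fault, 0.0) + sig_proc * rng.standard_normal(n_p)
    w = np.exp(-0.5 * ((y[k] - x) / sig_obs) ** 2)     # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(n_p, size=n_p, p=w)               # multinomial resampling
    x, mode = x[idx], mode[idx]
    if k % 20 == 0:
        print(f"t={k}: P(fault) ~ {mode.mean():.2f}")  # rises after onset
```
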
  • Strong Diagnosability and Conditional Diagnosability of Augmented Cubes Under the Comparison Diagnosis Model

    Page(s): 140 - 148

    The problem of fault diagnosis has been discussed widely, and the diagnosability of many well-known networks has been explored. Strong diagnosability and conditional diagnosability are both novel measures for evaluating the reliability and fault tolerance of a system. In this paper, some useful sufficient conditions are proposed for determining the strong diagnosability and the conditional diagnosability of a system. We then apply them to show that an n-dimensional augmented cube AQn is strongly (2n - 1)-diagnosable for n ≥ 5, and that the conditional diagnosability of AQn is 6n - 17 for n ≥ 6. Our result demonstrates that the conditional diagnosability of AQn is about three times the classical diagnosability.

  • Effective Software Fault Localization Using an RBF Neural Network

    Page(s): 149 - 169

    We propose the application of a modified radial basis function neural network in the context of software fault localization, to assist programmers in locating bugs effectively. The neural network is trained to learn the relationship between the statement coverage information of a test case and its corresponding execution result, success or failure. The trained network is then given as input a set of virtual test cases, each covering a single statement; the output of the network for each virtual test case is taken as the suspiciousness of the corresponding covered statement. A statement with higher suspiciousness has a higher likelihood of containing a bug, so statements can be ranked in descending order of suspiciousness and examined one by one, starting from the top, until a bug is located. Case studies on 15 different programs were conducted, and the results clearly show that our proposed technique is more effective than several other popular, state-of-the-art fault localization techniques. Further studies investigate the robustness of the proposed technique, and illustrate how it can easily be applied to programs with multiple bugs as well.

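A bare-bones version of the idea: Gaussian kernels centered at the training coverage vectors, least-squares output weights, and one virtual test per statement. The kernel choice, training rule, and tiny data set are stand-ins for the authors' modified network.

```python
# Suspiciousness ranking with a simple RBF network trained on statement
# coverage (fabricated toy data; not the paper's exact training procedure).
import numpy as np

# Rows: test cases; columns: statements covered (1) or not (0)
M = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [1, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 1, 1, 1]], dtype=float)
e = np.array([1, 0, 1, 0, 1], dtype=float)   # 1 = test failed

def rbf(X, centers, gamma=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

H = rbf(M, M)                                 # hidden-layer activations
w, *_ = np.linalg.lstsq(H, e, rcond=None)     # least-squares output weights

virtual = np.eye(M.shape[1])                  # one virtual test per statement
susp = rbf(virtual, M) @ w                    # network output = suspiciousness
print(np.argsort(-susp))                      # statements, most suspicious first
```
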
  • Reliability of Various 2-Out-of-4:G Redundant Systems With Minimal Repair

    Page(s): 170 - 179

    In safety-critical applications, it is becoming quite common to improve reliability through the use of quadruplexed, redundant subsystems organized with 2-out-of-4:G (2oo4:G) logic. As the subsystems involved will generally be complex, comprising many individual components, they are likely to fail at a time-dependent rate λ(t). The maintenance schedule here is assumed to require failed subsystems to be repaired and workable within a given time τ of failure; and repairs are assumed to be minimal, so that all functioning subsystems possess the same rate λ(t). Within this context, we consider the dependence of the system reliability measures on the allowed repair time τ by providing solutions to two integro-differential-delay equations (IDDEs) that bound the exact solution above and below; these bounds may be tightened by iterating the IDDEs to higher order. Results for the stationary system are used to investigate the order required to provide sufficiently tight bounds in the general case. In addition, we consider examples in support of the conjecture of Solov'yev and Zaytsev (Engineering Cybernetics, 1975) that, if λ(t)τ is small, the asymptotic instantaneous hazard function of the system hs(t), with time-varying λ(t), approaches the limit hso(λ(t), τ) as λ(t)τ → 0, where hso(λ, τ) is the asymptotic hazard rate for the same system with constant failure rate. This approach then allows a simple analysis of the case of arbitrary time-varying λ(t) in terms of the much simpler stationary case.

  • Failure Profile Analysis of Complex Repairable Systems With Multiple Failure Modes

    Page(s): 180 - 191

    The relative failure frequency among the different failure modes of a production system is referred to in this paper as the failure profile. Identifying the failure profile from failure-time data collected during the production phase of a system can help pinpoint bottleneck problems, and provides valuable information for system design evaluation and maintenance management. The major challenges of effective failure profile identification come from time-varying and limited failure-time data. In this paper, the failure profile is estimated using the maximum likelihood method. In addition, statistical hypothesis testing procedures are proposed to detect the existence of a dominating failure mode, and possible changes of the failure profile during a production period. The developed methods are illustrated on an automation system of a high throughput screening (HTS) process, and on a production process for cylinder heads.

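For a fixed observation window, the failure-profile MLE is just the vector of relative mode frequencies, and a dominating mode can be screened with a chi-square test. A sketch with invented counts (the paper additionally handles time-varying profiles and limited data):

```python
# Multinomial MLE of a failure profile plus a chi-square uniformity test.
import numpy as np
from scipy import stats

counts = np.array([46, 12, 9, 8])           # failures per failure mode (invented)
profile = counts / counts.sum()             # multinomial MLE of the profile
print("estimated profile:", profile.round(3))

# H0: all modes equally likely (no dominating mode)
chi2, pval = stats.chisquare(counts)
print(f"chi2 = {chi2:.1f}, p = {pval:.2g}")  # small p: reject uniformity
```
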
  • Integrated Importance Measure of Component States Based on Loss of System Performance

    Page(s): 192 - 202

    This paper focuses on the integrated importance measure (IIM) of component states based on the loss of system performance. To describe the impact of each component state, we first introduce the performance function of the multi-state system. Then, we present the definition of the IIM of component states, demonstrate its physical meaning, and analyze its relationships to the Griffith importance, Wu importance, and Natvig importance. Secondly, we present an evaluation method of IIM for multi-state systems. Thirdly, the characteristics of the IIM of component states are discussed. Finally, we present a numerical example, and an application to an offshore oil and gas production system, to verify the proposed method. The results show that 1) the IIM of component states reflects not only the probability distributions and transition intensities of the states of the object component, but also the change in system performance under the change of the state distribution of the object component; and 2) the IIM can be used to identify the key state of a component, that is, the state that affects system performance most.

  • The Number of Failed Components in a Coherent System With Exchangeable Components

    Page(s): 203 - 207

    This paper is concerned with the number of components that have failed by the time of system failure. We study this quantity for a coherent structure via the system signature. Furthermore, we study the distribution of the number of failures after a specified time until the system failure. We illustrate the results for well-known general classes of coherent systems, such as linear consecutive-k-within-m-out-of-n:F, and m-consecutive-k-out-of-n:F systems.

  • Linear Multistate Consecutively-Connected Systems With Gap Constraints

    Page(s): 208 - 214

    This paper generalizes the linear multistate consecutively-connected system model by introducing allowable gaps. The new model consists of N + 1 linearly ordered nodes. Some of these nodes contain statistically independent multistate elements with different characteristics. Each element j can provide a connection between the node to which it belongs and the Xj next nodes, where Xj is a discrete random variable with known probability mass function. The system fails if it contains at least m consecutive nodes not connected with any previous node (m consecutive gaps). An algorithm based on the universal generating function method is suggested for evaluating the system reliability, and illustrative examples are presented.

  • Reliability of Combined m-Consecutive-k-out-of-n:F and Consecutive-kc-out-of-n:F Systems

    Page(s): 215 - 219

    A combined m-consecutive-k-out-of-n:F and consecutive-kc-out-of-n:F system consists of n linearly ordered components, and fails iff there exist at least kc consecutive failed components, or at least m nonoverlapping runs of k consecutive failed components. This structure has applications in modeling systems such as infrared detecting and signal processing systems, and bank automatic payment systems. In this paper, we derive a combinatorial equation for the number of path sets of this structure containing a specified number of working components. This number is used to derive a reliability function, and a signature-based survival function formula, for a system consisting of i.i.d. components. We also obtain a combinatorial equation for the reliability of a system with Markov-dependent components.

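The failure rule is easy to check by brute force for small n, which is useful for validating combinatorial formulas of this kind. A sketch with i.i.d. components and invented parameters:

```python
# Brute-force reliability of the combined m-consecutive-k-out-of-n:F and
# consecutive-kc-out-of-n:F structure, for small n with i.i.d. components.
from itertools import product

def fails(state, m, k, kc):
    runs, run = [], 0
    for bit in state + (1,):            # 0 = failed; sentinel closes last run
        if bit == 0:
            run += 1
        else:
            if run:
                runs.append(run)        # record each maximal failure run
            run = 0
    if any(r >= kc for r in runs):
        return True
    return sum(r // k for r in runs) >= m   # nonoverlapping runs of length k

def reliability(n, m, k, kc, p):
    r = 0.0
    for state in product([0, 1], repeat=n):     # 1 = working component
        prob = 1.0
        for bit in state:
            prob *= p if bit else (1 - p)
        if not fails(state, m, k, kc):
            r += prob
    return r

print(reliability(n=10, m=2, k=2, kc=4, p=0.9))
```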

Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong