
IEEE Transactions on Reliability

Issue 2 • June 2008

  • Table of contents

    Page(s): C1 - 221
    PDF (40 KB)
    Freely Available from IEEE
  • IEEE Transactions on Reliability publication information

    Page(s): C2
    PDF (36 KB)
    Freely Available from IEEE
  • Degradation Analysis of Nano-Contamination in Plasma Display Panels

    Page(s): 222 - 229
    PDF (398 KB) | HTML

    As an alternative to traditional life testing, degradation tests can be effective in assessing product reliability when measurements of the degradation leading to failure can be observed. This article proposes a new model to describe the nonlinear degradation paths caused by nano-contamination in plasma display panels (PDPs): a bi-exponential model with random coefficients. A likelihood ratio test is executed sequentially to select the random effects in the nonlinear model. Analysis results indicate that reliability estimation can be improved substantially by using the nonlinear random-coefficients model to incorporate both the inherent degradation characteristics and the contamination effects of impurities in PDP degradation paths.

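    The bi-exponential form can be made concrete with a small simulation. This is only an illustrative sketch: the coefficient distributions and rate constants below are assumed for demonstration, not taken from the article's fitted PDP model.

```python
import math
import random

def biexp_path(t, a, b, alpha, beta):
    """Bi-exponential degradation path: y(t) = a*exp(alpha*t) + b*exp(beta*t)."""
    return a * math.exp(alpha * t) + b * math.exp(beta * t)

def simulate_paths(n_units, times, seed=0):
    """Draw random coefficients per unit (hypothetical normal random effects)
    and return each unit's simulated degradation path."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_units):
        a = rng.gauss(1.0, 0.1)    # random coefficient of first component
        b = rng.gauss(-0.5, 0.05)  # random coefficient of second component
        alpha, beta = 0.05, -0.3   # fixed rate constants (assumed values)
        paths.append([biexp_path(t, a, b, alpha, beta) for t in times])
    return paths

paths = simulate_paths(5, [0, 1, 2, 5, 10])
```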
  • Real-time Reliability Prediction for a Dynamic System Based on the Hidden Degradation Process Identification

    Page(s): 230 - 242
    PDF (360 KB) | HTML

    This paper introduces a real-time reliability prediction method for a dynamic system that suffers from a hidden degradation process. The hidden degradation process is first identified by particle filtering, based on measurable outputs of the dynamic system. The system's reliability is then predicted according to the model of the degradation path. We analyze the identification algorithm mathematically, and validate the effectiveness of the method through computer simulations of a three-vessel water tank. This real-time reliability prediction method benefits the dynamic system's condition monitoring, and may further help in forming a proper predictive maintenance policy for the system.

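    The idea of identifying a hidden degradation state by particle filtering can be sketched for a scalar random-walk degradation model. The dynamics, noise levels, and particle count below are illustrative assumptions, not the article's three-vessel tank model.

```python
import math
import random

def particle_filter(ys, n_particles=500, drift=0.1, q=0.05, r=0.2, seed=1):
    """Minimal bootstrap particle filter for a hidden degradation state
    x_t = x_{t-1} + drift + N(0, q), observed as y_t = x_t + N(0, r)."""
    rng = random.Random(seed)
    parts = [0.0] * n_particles
    estimates = []
    for y in ys:
        # propagate each particle through the assumed degradation dynamics
        parts = [x + drift + rng.gauss(0.0, q) for x in parts]
        # weight particles by the Gaussian observation likelihood
        ws = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in parts]
        total = sum(ws)
        ws = [w / total for w in ws]
        # posterior-mean estimate of the hidden state
        estimates.append(sum(w * x for w, x in zip(ws, parts)))
        # multinomial resampling
        parts = rng.choices(parts, weights=ws, k=n_particles)
    return estimates

# synthetic truth and noisy observations for a quick check
rng = random.Random(2)
truth = [0.1 * (t + 1) for t in range(20)]
obs = [x + rng.gauss(0.0, 0.2) for x in truth]
est = particle_filter(obs)
```

    The filtered estimate should track the (linearly growing) hidden state despite the observation noise; the degradation-path model fitted to such estimates is what drives the reliability prediction.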
  • A Censored Sequential Posterior Odd Test (SPOT) Method for Verification of the Mean Time To Repair

    Page(s): 243 - 247
    PDF (144 KB) | HTML

    Verification of the Mean Time To Repair (MTTR) is a problem of interest in many practical cases. Because maintenance time can often be adequately described by a lognormal distribution, the problem reduces to statistical inference on the mean of the lognormal distribution. To this end, we propose the Sequential Posterior Odd Test (SPOT) method for verifying the unknown parameter of the lognormal distribution. We first present a model to determine a threshold in the SPOT method for verification of the MTTR. A Censored SPOT method is then proposed for the case where the (usually small) sample size is fixed. A numeric example with comparisons is discussed to illustrate the advantages of the Censored SPOT method.

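    A hedged illustration of why the lognormal assumption matters: the MTTR is the lognormal mean exp(mu + sigma^2/2), not simply the exponential of the mean log repair time. A minimal estimator sketch (plain MLE plug-in, not the SPOT procedure itself):

```python
import math

def lognormal_mttr(repair_times):
    """Estimate MTTR assuming lognormal repair times:
    E[T] = exp(mu + sigma^2 / 2), where mu and sigma^2 are the mean and
    (biased, MLE-form) variance of the log repair times."""
    logs = [math.log(t) for t in repair_times]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((x - mu) ** 2 for x in logs) / n
    return math.exp(mu + var / 2)
```

    With zero spread the estimate collapses to the common repair time; with spread it exceeds the geometric mean, which is the lognormal correction the verification problem hinges on.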
  • Parametric Model Discrimination for Heavily Censored Survival Data

    Page(s): 248 - 259
    PDF (479 KB) | HTML

    Simultaneous discrimination among various parametric lifetime models is an important step in the parametric analysis of survival data. We consider a plot of the skewness versus the coefficient of variation for the purpose of discriminating among parametric survival models. We extend the method of Cox & Oakes from complete to censored data by developing an algorithm based on a competing risks model and kernel function estimation. A by-product of this algorithm is a nonparametric survival function estimate.

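    The plot's coordinates are easy to compute for complete data. A minimal sketch of the two sample statistics (the censored-data extension via competing risks and kernel estimation is the article's contribution, and is not attempted here):

```python
import math

def cv_and_skewness(xs):
    """Sample coefficient of variation and moment skewness: the two
    coordinates of a Cox & Oakes style model-discrimination plot."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    sd = math.sqrt(m2)
    return sd / mean, m3 / sd ** 3

cv, skew = cv_and_skewness([1.0, 2.0, 3.0])
```

    For reference, an exponential model sits at the point (CV, skewness) = (1, 2), while families such as the Weibull or gamma trace curves in this plane; closeness of the sample point to a curve suggests that family.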
  • The Relationship Between Confidence Intervals for Failure Probabilities and Life Time Quantiles

    Page(s): 260 - 266
    PDF (209 KB) | HTML

    The failure probability of a product, F(t), and the lifetime quantile, t_p, are commonly used metrics in reliability applications. Confidence intervals are used to quantify the s-uncertainty of estimators of these two metrics. In practice, a set of pointwise confidence intervals for F(t), or for the quantiles t_p, is often plotted on one graph, which we refer to as pointwise "confidence bands." These confidence bands for F(t) or t_p can be obtained through s-normal approximation, maximum likelihood, or other procedures. In this paper, we compare s-normal approximation to likelihood methods, and introduce a new procedure to obtain confidence intervals for F(t) by inverting the pointwise confidence bands of the quantile function t_p. We show why it is valid to interpret the set of pointwise confidence intervals for the quantile function as a set of pointwise confidence intervals for F(t), and vice versa. Our results also indicate that the likelihood-based pointwise confidence bands have desirable statistical properties beyond those that were known previously.

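    The inversion argument relies on F(t) and the quantile function t_p being inverses of each other, so a pointwise band for one can be reread as a band for the other. A Weibull sketch (parameter values arbitrary):

```python
import math

def weibull_cdf(t, eta, beta):
    """F(t) = 1 - exp(-(t/eta)^beta)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def weibull_quantile(p, eta, beta):
    """t_p = eta * (-ln(1 - p))^(1/beta), the exact inverse of F."""
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)
```

    Because F(t_p) = p for every p, a lower/upper bound on t_p at level p maps directly to an upper/lower bound on F at time t_p, which is the relationship the paper exploits.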
  • Towards a Standardized Terminology for Network Performance

    Page(s): 267 - 271
    PDF (886 KB) | HTML

    The integration of products from diverse fields, including the human component, into complex systems has created major difficulties in the development of efficient mechanisms for analyzing system performance. One of the problems can be traced to the variety of terminologies used to describe performance across different fields. A designer or user is faced with vague terms that may be complementary, synonymous, or some combination thereof. There is a need to develop a common understanding of the meaning of widely used terms without reference to a specific discipline. The work described in this paper aims to develop a framework that helps identify a set of performance indicators for complex systems, such as information infrastructures. The objective is not to propose yet another concept, but rather to identify from the existing concepts the proper definitions, attributes, intersections, and evaluation measures. Dependability, fault-tolerance, reliability/availability, security, and survivability are used as representative examples for describing the proposed framework. This framework could eventually furnish the basis for adopting a standard terminology.

  • A New Shared Segment Protection Method for Survivable Networks with Guaranteed Recovery Time

    Page(s): 272 - 282
    PDF (1150 KB) | HTML

    Shared segment protection (SSP), compared with shared path protection (SPP) and shared link protection (SLP), provides an optimal protection configuration thanks to its ability to maximize spare-capacity sharing and reduce restoration time in the case of a single link failure. This paper provides a thorough study of SSP under the GMPLS-based recovery framework, and proposes an effective survivable routing algorithm for SSP. The tradeoff between price (i.e., the cost representing the amount of resources, and the blocking probability) and restoration time is studied extensively through simulations on three networks with highly dynamic traffic. We demonstrate that the proposed survivable routing algorithm can be a powerful solution for meeting stringent delay upper bounds while achieving high restorability of transport services. This can significantly improve network reliability, and enable more advanced, mission-critical services in the networks. The comparison among the three protection types further verifies that the proposed scheme yields significant advantages over shared path protection and shared link protection.

  • Evaluating Reliability of Telecommunications Networks Using Traffic Path Information

    Page(s): 283 - 294
    PDF (327 KB) | HTML

    We propose a reliability model for representing telecommunications networks that focuses not on topological information, but on traffic path information. Mapping from traffic paths to physical elements and capacities enables the model to express simply how severe performance degradations occur. Existing models, such as probability graph models and probability-capacity graph models, do not adequately address actual telecommunication network designs: the probability graph model never considers performance degradations, while the probability-capacity model unreasonably assumes that performance degradations can be estimated from the network topology alone. This paper also proposes an algorithm for evaluating the reliability of the new model. A numerical example shows that the algorithm is reasonably efficient even for large telecommunications networks.

  • A Practical Algorithm for Computing Multi-State Two-Terminal Reliability

    Page(s): 295 - 302
    PDF (283 KB) | HTML

    Multi-state two-terminal reliability is a practical, important system performance index for analyzing real-world systems, but computing it is NP-hard. The contributions of this paper are that 1) it presents a direct, correct decomposition method for computing multi-state two-terminal reliability, and 2) the method does not require a priori the multi-state minimal paths/cuts of the system. Because all existing exact methods solve the multi-state two-terminal reliability problem in terms of two or three NP-hard subproblems (excluding the impractical complete-enumeration method, and Doulliez & Jamoulle's popular but incorrect decomposition method), the development of such an exact, direct, and practical algorithm is worthwhile. Computational experiments verify that the proposed method is practical, fast, and efficient in comparison with the complete-enumeration method.

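    For orientation, the classical binary (two-state) special case can be computed by complete enumeration on a small bridge network. This is the impractical baseline that exact methods improve on, not the article's decomposition method; the network and edge reliabilities below are assumed:

```python
from itertools import product

# bridge network: edge name -> (endpoint, endpoint); terminals are "s" and "t"
edges = {
    "a": ("s", "1"), "b": ("s", "2"),
    "c": ("1", "2"),
    "d": ("1", "t"), "e": ("2", "t"),
}
rel = {"a": 0.9, "b": 0.9, "c": 0.9, "d": 0.9, "e": 0.9}

def connected(up):
    """Is t reachable from s using only the working edges in `up`?"""
    frontier, seen = ["s"], {"s"}
    while frontier:
        node = frontier.pop()
        for e, (u, v) in edges.items():
            if e in up:
                for x, y in ((u, v), (v, u)):
                    if x == node and y not in seen:
                        seen.add(y)
                        frontier.append(y)
    return "t" in seen

def two_terminal_reliability():
    """Complete enumeration: exponential in the number of edges."""
    total = 0.0
    names = list(edges)
    for states in product([0, 1], repeat=len(names)):
        up = {e for e, s in zip(names, states) if s}
        p = 1.0
        for e, s in zip(names, states):
            p *= rel[e] if s else 1.0 - rel[e]
        if connected(up):
            total += p
    return total

R = two_terminal_reliability()
```

    For this bridge with identical edge reliability p = 0.9, factoring on the center edge gives R = p(2p - p^2)^2 + (1 - p)(2p^2 - p^4) = 0.97848, which the enumeration reproduces.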
  • Reliability-Redundancy Allocation for Multi-State Series-Parallel Systems

    Page(s): 303 - 310
    PDF (404 KB) | HTML

    Current studies of the optimal design of multi-state series-parallel systems often focus on determining the optimal redundancy for each stage. However, this is only a partial optimization. There are two ways to improve the utility of a multi-state series-parallel system: 1) provide redundancy at each stage, and 2) improve the component state distribution, that is, make a component occupy higher-utility states with higher probability. This paper presents an optimization model for a multi-state series-parallel system that jointly determines the optimal component state distribution, and the optimal redundancy for each stage. The relationship between component state distribution and component cost is discussed, based on an assumption about the treatment of the components. An example illustrates the optimization model and its solution approach, and shows that the proposed reliability-redundancy allocation model is superior to current redundancy allocation models.

  • Risk Informed Design Refinement of a Power System Protection Scheme

    Page(s): 311 - 321
    PDF (676 KB) | HTML

    To prevent wide-area disturbances and enhance power system reliability, various forms of system protection scheme (SPS) have been designed and implemented by utilities. One of the main concerns in SPS deployment is to ensure that the system meets the reliability requirement specification in terms of dependability and security. After major changes in system operating practices, or physical changes to the protected system, a refinement of the SPS decision process may be required. This paper uses a currently operational SPS as an example to demonstrate a refinement study. By incorporating interval theory, a risk reduction worth importance concept, and a probabilistic risk-based index, the proposed procedure conducts parameter uncertainty analysis, identifies critical factors in the reliability model, performs risk assessments, and determines a better option for refining the studied SPS decision process logic module.

  • A New Sample-Based Approach to Predict System Performance Reliability

    Page(s): 322 - 330
    PDF (404 KB) | HTML

    Multiple degradation paths arise when systems operate under uncontrolled, uncertain environmental conditions in customers' hands in the field. This paper presents a design-stage method for assessing the performance reliability of systems with competing time-variant responses due to components with uncertain degradation rates. Herein, system performance measures (e.g. selected responses) are related to their critical levels by time-dependent limit-state functions. System failure is defined as the non-conformance of any response, and hence unions of the multiple failure regions are formed. For discrete time, set theory establishes the minimum union size needed to identify a true incremental failure region that emerges from a safe region. A cumulative failure distribution function is built by summing incremental failure probabilities. A practical implementation of the theory evaluates these probabilities by Monte Carlo simulation, and error analysis suggests ways to predict and minimize errors. An example of an electrical temperature controller shows the details of the method, and the potential of the approach. The proposed method provides a more realistic way to predict performance reliability than either the worst-case, or simple average-based approaches available in the open literature.

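    A minimal sketch of the Monte Carlo step: sample uncertain degradation rates, evaluate time-dependent limit-state functions, and count the union of the competing failure regions. The two linear limit states and the uniform rate distributions below are invented for illustration:

```python
import random

def failure_probability(times, n_samples=20000, seed=3):
    """Monte Carlo estimate of the cumulative failure probability for a
    system with two competing degrading responses (hypothetical limit
    states): g1 = 1.0 - rate1*t and g2 = 0.8 - rate2*t, failing when
    either g < 0. Degradation rates are uncertain (uniform, assumed)."""
    rng = random.Random(seed)
    fail_counts = [0] * len(times)
    for _ in range(n_samples):
        r1 = rng.uniform(0.005, 0.015)
        r2 = rng.uniform(0.002, 0.010)
        for i, t in enumerate(times):
            # union of failure regions: non-conformance of ANY response
            if 1.0 - r1 * t < 0 or 0.8 - r2 * t < 0:
                fail_counts[i] += 1
    return [c / n_samples for c in fail_counts]

F = failure_probability([0, 50, 100, 200])
```

    Because each sampled unit that has failed by time t has also failed by any later time, the estimated curve is nondecreasing, as a cumulative failure distribution must be.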
  • Lifetime of Combined k-out-of-n, and Consecutive k_c-out-of-n Systems

    Page(s): 331 - 335
    PDF (181 KB) | HTML

    A combined k-out-of-n:F(G) & consecutive k_c-out-of-n:F(G) system fails (functions) iff at least k components fail (function), or at least k_c consecutive components fail (function). Explicit formulas are given for the lifetime distribution of these combined systems whenever the lifetimes of components are exchangeable, and have an absolutely continuous joint distribution. The lifetime distributions of these systems are represented as a linear combination of distributions of order statistics by using the concept of Samaniego's signature. Formulas for the mean lifetimes are given, and some numerical results are also presented.

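    The failure rule of the combined system is simple to state in code. A sketch of the structure function for the :F variant (the article's contribution, signature-based lifetime distributions, is not reproduced here):

```python
def combined_system_fails(failed, k, kc):
    """A combined k-out-of-n:F & consecutive k_c-out-of-n:F system fails iff
    at least k components fail, or at least kc consecutive components fail.
    `failed` is a list of 0/1 indicators in component order."""
    if sum(failed) >= k:
        return True
    run = 0
    for f in failed:
        run = run + 1 if f else 0  # length of current run of failures
        if run >= kc:
            return True
    return False
```

    For example, with k = 3 and k_c = 2, three scattered failures or two adjacent failures each bring the system down, while two scattered failures do not.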
  • Efficient Reliability Assessment of Redundant Systems Subject to Imperfect Fault Coverage Using Binary Decision Diagrams

    Page(s): 336 - 348
    PDF (352 KB) | HTML

    Systems requiring very high levels of reliability, such as aircraft controls or spacecraft, often use redundancy to achieve their requirements. This paper provides highly efficient techniques for computing the reliability of redundant systems involving simple k-out-of-n arrangements, and those involving complex structures which may include imbedded k-out-of-n structures. Techniques for modeling systems subject to imperfect fault coverage must be appropriate to the redundancy management architecture utilized by the system. Systems for which coverage can be associated with each of the redundant components, perhaps taking advantage of the component's built-in test capability, are modeled with what we term element level coverage (ELC); while systems which utilize majority voting for the selection from among redundant components are modeled with fault level coverage (FLC). In FLC systems, coverage is a function of the fault sequence, i.e., coverage will be greater for the initial faults, which can utilize voting for redundant component selection, but will have a lower value when the system must select from among the last two operational components. Occasionally, FLC systems can be adequately modeled using a simplified version of FLC in which the initial fault coverage values are assumed to be unity; this model is called one-on-level coverage (OLC). The FLC algorithms provided in this paper are of particular importance for the correct modeling of systems which utilize voting to select from among their redundant elements. While combinatorial, and recursive techniques for modeling ELC, FLC, and OLC have been previously reported, this paper presents new table-based algorithms, and binary decision diagram-based algorithms for these models which have superior computational complexity. The algorithms presented here provide the ability to analyse large, complex systems very efficiently, in fact with a computational complexity comparable to the best available techniques for systems with perfect fault coverage.

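    For i.i.d. components, the ELC model has a simple combinatorial baseline: the system survives iff at least k components work and every failed component's fault is covered. This sketch is that baseline, not the paper's table- or BDD-based algorithms:

```python
from math import comb

def koon_elc_reliability(n, k, p, c):
    """Reliability of a k-out-of-n:G system with element-level coverage:
    each failure is covered with probability c (independently, assumed);
    a single uncovered failure brings the whole system down."""
    return sum(
        comb(n, j) * p ** j * ((1.0 - p) * c) ** (n - j)
        for j in range(k, n + 1)
    )
```

    With perfect coverage (c = 1) this reduces to the ordinary k-out-of-n:G formula; with c = 0 any component failure is fatal, so only the all-working term survives.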
  • Achievable Limits on the Reliability of k-out-of-n:G Systems Subject to Imperfect Fault Coverage

    Page(s): 349 - 354
    PDF (443 KB) | HTML

    Systems which must be designed to achieve very low probabilities of failure often use redundancy to meet these requirements. However, redundant k-out-of-n:G systems which are subject to imperfect fault coverage have an optimum level of redundancy, n_opt. That is to say, additional redundancy in excess of n_opt will result in an increase, not a decrease, in the probability of system failure. This characteristic severely limits the level of redundancy considered in the design of highly reliable systems to modest values of n. Correct modeling of imperfect coverage is critical to the design of highly reliable systems. Two distinctly different k-out-of-n:G imperfect fault coverage reliability models are discussed in this paper: Element Level Coverage (ELC), and Fault Level Coverage (FLC). ELC is the appropriate model when each component can be assigned a given coverage level, while FLC is the appropriate model for systems using voting as the primary means of selection among redundant components. It is shown, over a wide range of realistic coverage values and relatively high component reliabilities, that the optimal redundancy level is 2 for ELC systems, and 4 for FLC systems. It is also shown that the optimal probability of failure for FLC systems exceeds that of ELC systems by several orders of magnitude. New functions for computing the mean time to failure of i.i.d. k-out-of-n:G ELC, and FLC systems are also presented.

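    The existence of an optimum redundancy level n_opt is easy to demonstrate for a parallel (1-out-of-n:G) ELC system, which has a closed-form reliability. The component reliability and coverage values below are assumed for illustration:

```python
def parallel_elc_reliability(n, p, c):
    """1-out-of-n:G reliability under element-level coverage, closed form:
    R(n) = (p + (1-p)*c)^n - ((1-p)*c)^n.
    Every failure must be covered (probability c); the subtracted term
    removes the all-failed-but-covered case."""
    qc = (1.0 - p) * c
    return (p + qc) ** n - qc ** n

def optimal_redundancy(p, c, n_max=10):
    """The redundancy level beyond which extra, imperfectly covered
    components hurt reliability rather than help."""
    best_n, best_r = 1, 0.0
    for n in range(1, n_max + 1):
        r = parallel_elc_reliability(n, p, c)
        if r > best_r:
            best_n, best_r = n, r
    return best_n, best_r

n_opt, r_opt = optimal_redundancy(p=0.9, c=0.9)
```

    With these assumed values the optimum is n_opt = 2, and a third component already lowers reliability, echoing the article's conclusion that useful redundancy under imperfect coverage is modest.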
  • Flowgraph Models in Reliability and Finite Automata

    Page(s): 355 - 359
    PDF (178 KB) | HTML

    We discuss the interrelationship of two seemingly unrelated subjects: the theory of finite automata, and reliability theory. Finite automata, more generally known as generalized transition graphs, are 'converted' to regular expressions by manipulating their pictorial representation, a directed graph: its states are eliminated one-by-one until two states are left, connected by an edge whose label is a regular expression equivalent to the initially given finite automaton or generalized transition graph. Flowgraphs are used to represent semi-Markov reliability models. They are directed graphs with edges labeled with expressions of the form pG(s), where p is the probability of transition from node i to node j, say; and G(s) is the transform (Laplace transform, moment generating function, or characteristic function) of the waiting time in i given that the next transition is to j. Usually, transforms of waiting time distributions (e.g. time to first failure) are obtained from these graph representations by applying Mason's Rule (e.g. Huzurbazar, Mason, and Osaki), or the Cofactor Rule. In this paper we are concerned with obtaining transforms of waiting times by direct manipulation of the flowgraphs, along the lines used for finite automata. The goal of the paper is to observe that identical patterns of reasoning are applicable in both fields. This interconnects two apparently unrelated fields of knowledge: an interesting observation for its own sake, but also important from a tool & technique point of view.

  • Residuals and Their Analyses for Accelerated Life Tests With Step and Varying Stress

    Page(s): 360 - 368
    PDF (268 KB) | HTML

    Analyses of residuals are used to assess a regression model, identify peculiar data points, and reveal the effect of other variables. Suitable residuals for accelerated life test data from step and varying stress tests have been needed. This article defines new, suitable residuals, and presents graphical and numerical analyses of them, which yield useful understanding of such data. Engineers can benefit from applying these techniques for more accurate test result assessments.

  • Inference Based on Type-II Hybrid Censored Data From a Weibull Distribution

    Page(s): 369 - 378
    PDF (388 KB) | HTML

    A hybrid censoring scheme is a mixture of type-I and type-II censoring schemes. This article presents the statistical inferences on Weibull parameters when the data are type-II hybrid censored. The maximum likelihood estimators, and the approximate maximum likelihood estimators are developed for estimating the unknown parameters. Asymptotic distributions of the maximum likelihood estimators are used to construct approximate confidence intervals. Bayes estimates, and the corresponding highest posterior density credible intervals of the unknown parameters, are obtained using suitable priors on the unknown parameters, and by using Markov chain Monte Carlo techniques. The method of obtaining the optimum censoring scheme based on the maximum information measure is also developed. We perform Monte Carlo simulations to compare the performances of the different methods, and we analyse one data set for illustrative purposes.

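    For flavor, a simplified maximum likelihood sketch for Weibull data with ordinary right censoring (not the type-II hybrid scheme itself): the profile score equation for the shape is solved by bisection, and the scale then follows in closed form.

```python
import math

def weibull_censored_mle(times, events, lo=0.01, hi=20.0, iters=200):
    """Profile MLE for a Weibull (shape beta, scale eta) with right-censored
    data: events[i] is 1 for a failure, 0 for a censored time. Solves the
    standard profile score equation for beta by bisection; a simplification
    of the hybrid-censoring likelihood discussed in the article."""
    r = sum(events)  # number of observed failures
    sum_log_fail = sum(math.log(t) for t, d in zip(times, events) if d)

    def g(beta):
        # profile score: increasing in beta, with a single root
        s = sum(t ** beta for t in times)
        s_log = sum(t ** beta * math.log(t) for t in times)
        return s_log / s - 1.0 / beta - sum_log_fail / r

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = (sum(t ** beta for t in times) / r) ** (1.0 / beta)
    return beta, eta

# usage: three failures and one unit still running at t = 2.0 (censored)
beta_hat, eta_hat = weibull_censored_mle([1.0, 1.5, 2.0, 2.0], [1, 1, 1, 0])
```

    Censored observations contribute to the t^beta sums but not to the failure count, which is exactly how censoring enters the Weibull likelihood.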
  • Improvement on “Sequential Testing” in MIL-HDBK-781A and IEC 61124

    Page(s): 379 - 387
    PDF (873 KB) | HTML

    This paper presents the results of an analysis of the sequential test (ST) procedures described in MIL-HDBK-781A, and IEC 61124, which are intended for checking the mean Time Between Failures (TBF) under an exponential TBF distribution. The methodological basis of the calculations is discretization of the ST process through subdivision of the time axis into small segments. By this means, the process is converted into a binomial one, for which an algorithm and a fast computer program have been developed; and, most important of all, a tool is provided for searching for the optimal truncation. The influence of truncation by time on the Expected Test Time (ETT) characteristics was studied, and an improved truncation method minimizing this influence was developed. The distributions of the test times were determined. The type A plan characteristics in IEC 61124:2006 have substantial inconsistencies in the probabilities of type I & II errors (up to a factor of 2), and in the ETT (up to 17%). We checked these results by using the binomial-recursive method, and by simulation. The type C plans, reproduced from GOST R 27.402:2005, are consistent; but there is scope (and need) for substantial improvement of the search algorithm for the optimal parameters.

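    The untruncated ideal underlying such plans is Wald's sequential probability ratio test for the exponential TBF. A sketch of the decision rule, with thresholds from the standard Wald approximations (the handbook plans add truncation, which is the article's focus):

```python
import math

def sprt_decision(total_time, failures, theta0, theta1, alpha=0.1, beta=0.1):
    """Wald SPRT for exponential mean TBF: H0 theta = theta0 (acceptable)
    vs H1 theta = theta1 < theta0 (unacceptable). Returns 'accept',
    'reject', or 'continue' given the accumulated test evidence."""
    assert theta1 < theta0
    # log likelihood ratio for r failures in total test time T
    llr = (failures * math.log(theta0 / theta1)
           - total_time * (1.0 / theta1 - 1.0 / theta0))
    upper = math.log((1.0 - beta) / alpha)  # crossing favors H1: reject
    lower = math.log(beta / (1.0 - alpha))  # crossing favors H0: accept
    if llr >= upper:
        return "reject"
    if llr <= lower:
        return "accept"
    return "continue"
```

    Many failures in little time push the log likelihood ratio up toward rejection; long failure-free operation pushes it down toward acceptance; in between, testing continues, and it is this open-ended continuation region that truncation rules must close off.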
  • A Two-Stage Failure Model for Bayesian Change Point Analysis

    Page(s): 388 - 393
    PDF (180 KB) | HTML

    This paper presents a new approach for detecting certain change-points, which may disturb the evaluation of reliability models with covariates, via a two-stage failure model and stochastic time-lagged regression functions. The proposed model is developed with the Bayesian survival analysis method, so the problems of censored (or truncated) data in reliability tests can be resolved. In addition, a Markov chain Monte Carlo method based on Gibbs sampling is used to dynamically simulate the Markov chain of the parameters' posterior distribution. Finally, a numeric example is discussed to demonstrate the proposed model.

  • IEEE Transactions on Reliability information for authors

    Page(s): 394 - 395
    PDF (53 KB)
    Freely Available from IEEE
  • Reliability Society to Offer Scholarships

    Page(s): 396
    PDF (248 KB)
    Freely Available from IEEE
  • IEEE Transactions on Reliability institutional listings

    Page(s): C3
    PDF (693 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong