
IEEE Transactions on Reliability

Issue 2 • June 1999


  • IEEE Transactions on Reliability

    Page(s): 01 - 02
    Freely Available from IEEE
  • Information for readers & authors

    Page(s): 207 - 210
    Freely Available from IEEE
  • Branch-and-bound redundancy optimization for a series system with multiple-choice constraints

    Page(s): 108 - 117

    This paper considers a redundancy optimization problem that incorporates multiple-choice and resource constraints. The problem is expressed as a nonlinear integer program and shown to be NP-hard. The purpose of the paper is to develop an SSRP (solution-space reduction procedure). The problem is first analyzed to characterize some solution properties; an iterative SSRP is then derived from those properties, and the iterative SSRP is used to define an efficient B&BP (branch-and-bound procedure). Experimental tests show how effectively the SSRP removes unnecessary decision variables from further consideration during the search, and how efficient the resulting B&BP is. (A generic formulation of this class of problem is sketched below.)

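    A generic formulation of this class of problem, in illustrative notation that is not necessarily the authors': choose one component alternative per subsystem and a redundancy level for it, so as to maximize series-system reliability under resource budgets.

    \[
    \begin{aligned}
    \max_{x,\,y}\quad & \prod_{i=1}^{n}\Bigl[\,1-\prod_{j\in J_i}\bigl(1-r_{ij}\bigr)^{x_{ij}}\Bigr] \\
    \text{s.t.}\quad  & \sum_{i=1}^{n}\sum_{j\in J_i} c_{ijk}\,x_{ij}\le b_k, \qquad k=1,\dots,m \quad\text{(resource constraints)}\\
                      & \sum_{j\in J_i} y_{ij}=1,\qquad y_{ij}\le x_{ij}\le M\,y_{ij}, \qquad i=1,\dots,n \quad\text{(multiple-choice constraints)}\\
                      & y_{ij}\in\{0,1\},\qquad x_{ij}\in\{0,1,2,\dots\},
    \end{aligned}
    \]

    where r_{ij} and c_{ijk} are the reliability and the k-th resource consumption of one unit of alternative j in subsystem i, x_{ij} is its redundancy level, y_{ij} selects exactly one alternative per subsystem, and M is a large constant. A branch-and-bound procedure searches this space; a solution-space reduction of the kind described above prunes variables that cannot appear in an optimal solution.
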
  • Development programs for 1-shot systems: decoupled tests and redesigns, with the possibility of design degradation

    Page(s): 189 - 198

    This paper extends, in two important directions, the recent work of Huang, McBeth and Vardeman on efficient development testing for 1-shot systems. First, the testing and redesign activities are decoupled: they are assigned their own costs, and development strategies are allowed to include multiple tests between redesigns and vice versa. Second, the possibility that an engineering redesign actually degrades design reliability is allowed. Backward induction is used to find optimal test-and-redesign programs. The theory developed here characterizes optimal programs and supports computation and the study of numerical examples. (A toy backward-induction recursion is sketched below.)

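    The backward-induction idea can be illustrated with a toy finite-horizon dynamic program in Python. Everything below (the Beta-belief state, the transition rules for "test" and "redesign", and all costs and probabilities) is a hypothetical placeholder chosen for illustration, not the model of this paper or of Huang, McBeth and Vardeman; it only shows the mechanics of the recursion.

# Toy backward induction over a test / redesign / stop program (hypothetical model).
from functools import lru_cache

H      = 6       # decision opportunities remaining at the start (hypothetical)
C_TEST = 1.0     # cost of one test                               (hypothetical)
C_RED  = 4.0     # cost of one redesign                           (hypothetical)
PAYOFF = 100.0   # value of fielding a working 1-shot system      (hypothetical)
P_UP   = 0.7     # chance a redesign improves the design          (hypothetical)
P_DOWN = 0.2     # chance a redesign degrades the design          (hypothetical)

@lru_cache(maxsize=None)
def value(a, b, stage):
    """Best (expected net payoff, action) when the belief about the 1-shot
    success probability is Beta(a, b) and `stage` opportunities remain."""
    p = a / (a + b)                       # current believed success probability
    stop = (PAYOFF * p, "stop")           # field the system now
    if stage == 0:
        return stop
    # Test: pay C_TEST, observe success (prob p) or failure, update the belief.
    test = (-C_TEST + p * value(a + 1, b, stage - 1)[0]
                    + (1 - p) * value(a, b + 1, stage - 1)[0], "test")
    # Redesign: pay C_RED; the belief is re-centered up, down, or left unchanged.
    redesign = (-C_RED + P_UP * value(a + 2, b, stage - 1)[0]
                       + P_DOWN * value(a, b + 2, stage - 1)[0]
                       + (1 - P_UP - P_DOWN) * value(a, b, stage - 1)[0], "redesign")
    return max(stop, test, redesign)

val, act = value(2, 2, H)                 # start from a vague Beta(2, 2) belief
print(f"optimal first action: {act}, expected net payoff: {val:.2f}")
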
  • Estimation of the stress-threshold for the Weibull inverse power law

    Page(s): 176 - 181

    Lifetime data on stress rupture of copper joints made with certain lead-free solders suggest that specimens under stress below a certain threshold run indefinitely without failure. A commonly used model for this type of data is the Weibull inverse power law that includes a threshold. If the threshold is unknown, its estimation presents several difficulties for statistical treatment. The main difficulty is that, as the threshold approaches the minimum of the observed stresses, the likelihood approaches infinity, so there is no global maximum. A modified maximum-likelihood approach, in the spirit of Cohen, is used to resolve this problem. The method is similar to Cohen's, but interesting differences arise for censored data. The results show that modifications of the Cohen method produce estimates of the parameters of the Weibull inverse power threshold law. (The model form is sketched below.)

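    A common way to write a Weibull inverse power law with a stress threshold (illustrative notation, not necessarily the paper's):

    \[
    \Pr\{T>t\mid s\}=\exp\!\left[-\left(\frac{t}{\eta(s)}\right)^{\beta}\right],
    \qquad
    \eta(s)=\frac{K}{(s-s_0)^{p}},\qquad s>s_0,
    \]

    where s_0 is the stress threshold below which specimens do not fail, \beta is the Weibull shape parameter, and K, p are the inverse-power-law constants. The difficulty described in the abstract arises because the likelihood, viewed as a function of s_0, is unbounded as s_0 approaches the smallest stress in the data.
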
  • Analyzing accelerated degradation data by nonparametric regression

    Page(s): 149 - 158

    This paper presents a nonparametric-regression accelerated life-stress (NPRALS) model for accelerated degradation data consisting of groups of degradation curves. In contrast to the usual parametric modeling, a nonparametric regression model relaxes assumptions about the form of the regression functions and lets the data speak for themselves in the search for a suitable model. NPRALS assumes that the stress level affects only the degradation rate, not the shape of the degradation curve. An algorithm is presented for estimating the components of NPRALS. By investigating the relationship between the acceleration factors and the stress levels, an estimate of the mean time to failure of the product under the usual use condition is obtained. The procedure is applied to data from an accelerated degradation test of a light-emitting diode product, with very promising results. The performance of NPRALS is further checked on a simulated example and found satisfactory. We anticipate that NPRALS can be applied in other settings as well. (The key time-scale assumption is sketched below.)

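    The structural assumption that stress changes only the rate, not the shape, of degradation can be written as a time-scale relation (illustrative notation only):

    \[
    M_s(t)=M_{s_0}\!\bigl(a_s\,t\bigr),\qquad a_{s_0}=1,\qquad a_s>1 \text{ for } s>s_0,
    \]

    where M_s is the mean degradation curve at stress s, s_0 is the use condition, and a_s is the acceleration factor. If failure is defined as the degradation path crossing a critical level, the failure time at stress s equals the use-condition failure time divided by a_s, so nonparametric estimates of the a_s, regressed against stress, yield the mean time to failure at the use condition.
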
  • An approach for computing tight numerical bounds on renewal functions

    Page(s): 182 - 188

    This method computes tight lower and upper bounds for the renewal function. It is based on Riemann-Stieltjes integration and provides bounds for solving certain renewal equations used in the study of availability. An error analysis is given for the numerical bounds when the inter-renewal time distributions are sufficiently smooth. Three examples demonstrate the accuracy of the computed bounds. (A generic discretization in the same spirit is sketched below.)

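    One standard way to obtain two-sided numerical bounds from a discretized renewal equation, sketched here in Python in the same Riemann-Stieltjes spirit though not necessarily with the authors' scheme, exploits the fact that the renewal function M is nondecreasing: bounding M(kh - u) on each grid cell by its value at the left or right grid point turns M(t) = F(t) + \int_0^t M(t-u)\,dF(u) into recursions for a lower and an upper bound.

# Hedged sketch: grid bounds on the renewal function M(t) for i.i.d.
# inter-renewal times with CDF F.  Generic discretization, not necessarily
# the method developed in the paper.
import numpy as np
from scipy.stats import weibull_min

def renewal_bounds(F, t_max, n):
    """Lower/upper bounds for M on the grid h, 2h, ..., nh with h = t_max/n."""
    h = t_max / n
    Fg = F(h * np.arange(n + 1))          # F(0), F(h), ..., F(nh)
    dF = np.diff(Fg)                      # dF[j-1] = F(jh) - F((j-1)h)
    L = np.zeros(n + 1)                   # L[k] <= M(kh)
    U = np.zeros(n + 1)                   # U[k] >= M(kh)
    for k in range(1, n + 1):
        # lower bound: M(kh - u) >= M((k-j)h) for u in ((j-1)h, jh]
        L[k] = Fg[k] + sum(L[k - j] * dF[j - 1] for j in range(1, k + 1))
        # upper bound: M(kh - u) <= M((k-j+1)h); the j = 1 term involves U[k] itself
        rhs = Fg[k] + sum(U[k - j + 1] * dF[j - 1] for j in range(2, k + 1))
        U[k] = rhs / (1.0 - dF[0])
    return h, L, U

# Example: Weibull(shape 2, scale 1) inter-renewal times (illustrative only).
F = weibull_min(c=2.0, scale=1.0).cdf
h, L, U = renewal_bounds(F, t_max=3.0, n=300)
k = 200                                   # grid point at t = 2.0
print(f"M(2.0) lies between {L[k]:.4f} and {U[k]:.4f}")
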
  • Fault-tolerant VLSI systems

    Page(s): 106 - 107


  • Evaluation of average maintenance cost for imperfect-repair model

    Page(s): 199 - 204

    The imperfect-repair model considers units that are either perfectly repaired or minimally repaired, with known, fixed probabilities. Minimal repair is defined, roughly, as returning the item to the population's average state at the failure time. The average cost of maintaining a unit under this model is studied by assuming fixed costs for perfect and minimal repairs at failure. The authors' measure of maintenance cost is the average cost per unit time over an infinite time span. To obtain this average cost, they use expressions for the average numbers of perfect and minimal repairs under the model, and prove the theorems necessary to derive formulae for the average cost per unit time. The results are applied to some examples. (A renewal-reward formula of this type is sketched below.)

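    For orientation, a renewal-reward calculation for the classical Brown-Proschan form of this model (each repair is perfect with probability p and minimal with probability 1-p) gives an average cost per unit time of the following type; the paper's exact assumptions and notation may differ.

    \[
    C=\frac{c_p + c_m\,\dfrac{1-p}{p}}{\mu_p},
    \qquad
    \mu_p=\int_0^{\infty}\bigl[\bar F(t)\bigr]^{p}\,dt,
    \]

    where c_p and c_m are the costs of a perfect and a minimal repair, (1-p)/p is the expected number of minimal repairs between successive perfect repairs, and \mu_p is the expected time between perfect repairs, \bar F being the survival function of a new unit.
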
  • Stupid statistics: reliability prediction

    Page(s): 105


  • Step-stress life-testing with random stress-change times for exponential data

    Page(s): 141 - 148

    This paper studies statistical models for step-stress accelerated life-testing when the stress-change times are random. The marginal lifetime distribution of a test unit under a step-stress test plan with random stress-change times is presented. Maximum-likelihood estimates of the model parameters based on both the marginal and the conditional life distributions are considered. An optimum test plan is explored for a simple step-stress test in which the stress-change time is an order statistic from the exponential lifetime distribution under the low stress level. (The fixed-change-time special case is sketched below.)

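    For a fixed (non-random) stress-change time \tau, the usual cumulative-exposure model for exponential lifetimes takes the form below (illustrative notation); with a random \tau, as studied in the paper, the marginal lifetime distribution follows by averaging this expression over the distribution of \tau.

    \[
    F(t)=
    \begin{cases}
    1-e^{-t/\theta_1}, & 0\le t<\tau,\\
    1-e^{-\tau/\theta_1-(t-\tau)/\theta_2}, & t\ge\tau,
    \end{cases}
    \]

    where \theta_1 and \theta_2 are the mean lifetimes at the low and the high stress level.
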
  • A general imperfect-software-debugging model with S-shaped fault-detection rate

    Page(s): 169 - 175

    A general software-reliability model based on the nonhomogeneous Poisson process (NHPP) is used to derive a model that integrates imperfect debugging with the learning phenomenon. Learning occurs if testing appears to improve dynamically in efficiency as one progresses through a testing phase, and it usually manifests itself as a changing fault-detection rate. Published models and empirical data suggest that efficiency growth due to learning can follow many growth curves, from linear to that described by the logistic function. On the other hand, some recent work indicates that in a real, resource-constrained industrial environment very little actual learning might occur, because nonoperational profiles used to generate test and business models can prevent it. When that happens, the testing efficiency can still change when an explicit change in testing strategy occurs, or as a result of the structural profile of the code under test and the test-case ordering. (One common model form is sketched below.)

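    One widely used way to combine these ingredients (a hedged illustration of the model class, not the authors' exact model) couples a time-varying fault content a(t), which can grow to reflect faults introduced by imperfect debugging, with an S-shaped (logistic) fault-detection rate b(t):

    \[
    \frac{dm(t)}{dt}=b(t)\,\bigl[a(t)-m(t)\bigr],\qquad m(0)=0,\qquad
    b(t)=\frac{b}{1+\beta e^{-bt}},
    \]

    where m(t) is the expected cumulative number of faults detected by time t. A constant a(t)=a recovers perfect debugging, while an increasing a(t) models fault introduction during debugging.
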
  • Optimal allocation of s-identical, multi-functional spares in a series system

    Page(s): 118 - 126

    This paper considers the problem of allocating statistically identical, multi-functional spares to the subsystems of a series system. The objective is to maximize the system reliability for a mission time T, which can be deterministic or stochastic. Several problems that are conceptually similar to this one have been discussed in the literature in different contexts. An algorithm is provided for obtaining a standby redundancy allocation, and sufficient conditions are derived for optimality of the resulting allocation for general T. Under the sufficient conditions, the algorithm reduces to a simple allocation rule. The allocation rule gives an optimal allocation in the following special cases: the pdfs of the component lifetimes are log-concave (which implies increasing failure rate) and T is deterministic; the components have exponential failure times and T follows a gamma distribution; and the component lifetime distributions are general and T follows an exponential distribution or a mixture of exponentials. No simpler method is available for the latter two cases. (A marginal-gain heuristic for this kind of allocation is sketched below.)

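    A simple marginal-gain heuristic, written in Python, conveys the flavor of allocating identical cold-standby spares across the subsystems of a series system. It assumes exponential component lifetimes and a deterministic mission time, and it is only an illustrative sketch, not the authors' algorithm or their optimality conditions.

# Hedged sketch: greedy allocation of s identical cold-standby spares.
import math

def subsystem_rel(lam, n_spares, T):
    """P(subsystem survives T) with one operating unit plus n_spares cold spares:
    the total lifetime is Erlang(n_spares + 1, lam)."""
    x = lam * T
    return math.exp(-x) * sum(x**k / math.factorial(k) for k in range(n_spares + 1))

def greedy_allocation(lams, s, T):
    """Assign s spares one at a time to the subsystem whose log-reliability
    increases the most (a heuristic, not guaranteed optimal in general)."""
    alloc = [0] * len(lams)
    for _ in range(s):
        gains = [math.log(subsystem_rel(lam, n + 1, T)) -
                 math.log(subsystem_rel(lam, n, T))
                 for lam, n in zip(lams, alloc)]
        alloc[gains.index(max(gains))] += 1
    return alloc

# Example with made-up failure rates (per hour) and a 100-hour mission.
lams = [0.004, 0.010, 0.002]
alloc = greedy_allocation(lams, s=5, T=100.0)
rel = math.prod(subsystem_rel(l, n, 100.0) for l, n in zip(lams, alloc))
print(alloc, f"system reliability ~ {rel:.4f}")
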
  • A comparison of electronic-reliability prediction models

    Page(s): 127 - 134

    One of the most controversial procedures in reliability is the use of reliability-prediction techniques based on component failure data to estimate system failure rates. The International Electronics Reliability Institute (IERI) at Loughborough University is in a unique position: over many years, much reliability information has been collected from leading British and Danish electronic manufacturing companies. These data are of such high quality that IERI can perform the comparison exercise with many circuit boards (CBs) of different types. Several CBs were selected from the IERI field-failure database, and their reliability was predicted and compared with the observed field performance. The prediction techniques were based on the M217E [US MIL-HDBK-217E], HRD4, Siemens (SN29500), CNET, and Bellcore (TR-TSY-000332) models, using the published failure rates associated with each model. Hence, parts-count analyses were performed on several CBs from the database and compared with the field failure rates. The predicted values differ greatly from the observed field behavior and from each other. Further analysis showed that each prediction model is sensitive to widely different physical parameters: some models are more sensitive to factors that vary according to an Arrhenius model, such as temperature and electrical stress, while others are more sensitive to the discrete π factors used to model environment and quality. The results are summarized. (A parts-count calculation of the general form underlying these models is sketched below.)

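    The parts-count style of prediction underlying these handbook models has the general form lambda_board = sum_i N_i * lambda_g,i * pi_Q,i for a given environment. A tiny Python illustration follows; the part types, generic failure rates, and quality factors are invented for illustration and are not taken from any handbook or from the paper.

# Hedged illustration of a parts-count prediction (all numbers invented).
parts = [
    # (part type,        quantity N_i, generic rate [failures / 1e6 h], quality factor)
    ("ceramic capacitor",         40,  0.0036,  1.0),
    ("film resistor",             55,  0.0017,  1.0),
    ("bipolar logic IC",          12,  0.0620,  2.0),
    ("connector",                  4,  0.0400,  1.5),
]

board_rate = sum(n * lam_g * pi_q for _, n, lam_g, pi_q in parts)   # failures per 1e6 h
mtbf_hours = 1e6 / board_rate
print(f"predicted board failure rate: {board_rate:.3f} per 1e6 h (MTBF ~ {mtbf_hours:,.0f} h)")
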
  • Availability modeling of modular software

    Page(s): 159 - 168

    Dependability evaluation is a basic component of assessing the quality of repairable systems. A general model (Op), designed specifically for software systems, is presented; it allows the evaluation of various dependability metrics, in particular availability measures. Op is a structural model based on Markov process theory. In particular, Op is an attempt to overcome some limitations of the well-known Littlewood reliability model for modular software. This paper gives the mathematical results necessary for the transient analysis of this general model, together with algorithms that can evaluate it efficiently. More specifically, from the parameters describing the evolution of the execution process when there is no failure, the failure processes together with the way they affect execution, and the recovery process, results are obtained for the distribution function of the number of failures in a fixed mission and for dependability metrics that are much more informative than the usual ones in a white-box approach. Estimation procedures for the Op parameters are briefly discussed. Some simple examples illustrate the interest of such a structural view and explain how to take into account reliability growth of part of the software with the transformation approach developed by Laprie et al. The complete transient analysis of Op allows discussion of the Poisson approximation proposed by Littlewood for his model. (A minimal transient-availability computation for a generic Markov model is sketched below.)

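    Independently of the specific Op model, the transient analysis of a Markov availability model reduces to propagating the state-probability vector through the generator matrix. The minimal Python sketch below uses a two-state model whose rates are invented placeholders; it is not the Op model of the paper.

# Generic transient availability: pi(t) = pi(0) expm(Q t); A(t) = P(up at t).
import numpy as np
from scipy.linalg import expm

lam, mu = 0.01, 0.5                 # hypothetical failure and recovery rates (per hour)
Q = np.array([[-lam,  lam],         # state 0: software up
              [  mu,  -mu]])        # state 1: software down / recovering
pi0 = np.array([1.0, 0.0])          # start in the up state

for t in (1.0, 10.0, 100.0):
    pi_t = pi0 @ expm(Q * t)
    print(f"A({t:6.1f} h) = {pi_t[0]:.4f}")

print(f"steady-state availability = {mu / (lam + mu):.4f}")
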
  • A set-of-numbers is NOT a data-set

    Page(s): 135 - 140

    Evans (1997) and Rees (1997) have emphasized that great care is needed to obtain good data because, otherwise, garbage in leads to garbage out. This tutorial demonstrates that, even with good data, the results are still incorrect if the data analysis is performed incorrectly. A central issue in correct statistical analysis is determining the context within which the data arose; resolving the inherent ambiguities in interpreting failure data makes it essential to incorporate that context into reliability data analyses. When this is ignored, as is usually the case (a set of numbers is treated as if it were an entire data set, ignoring other essential information), even good data in still results in garbage out.


Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong