
IEEE Transactions on Reliability

Issue 2 • June 1994


Displaying Results 1 - 25 of 25
  • Perspective on Weibull proportional-hazards models

    Publication Year: 1994 , Page(s): 217 - 223
    Cited by:  Papers (5)

    This note uses a paper of Elsayed & Chan (1990) to illustrate some of the advantages and some of the limitations of the proportional hazards approach. The role of proportional hazards as one of several tools for exploratory data analysis is described. The emphasis is on exploratory techniques as a way of: (1) measuring the importance of factors influencing system behavior; and (2) determining the form of the model. The semi-parametric version of proportional hazards shows the relative importance of explanatory factors in determining the failure behavior regardless of whether the model is strictly correct. Thus the relative chance of failure can be assessed, but not the absolute chance. The advantage of proportional hazards is that it always yields a quantitative measure of importance for each influence factor. Although Elsayed & Chan clearly establish the importance of temperature as the most critical factor in thin-oxide breakdown, the other analysis technique indicates that more needs to be done to validate a particular model of system behavior. In this case, the failure mechanism remains open, and the use of accelerated test data to predict performance under usual conditions needs further investigation.

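    The semi-parametric proportional-hazards fit discussed in the abstract above can be illustrated with modern tooling. Below is a minimal sketch using the lifelines library on hypothetical breakdown-time data with temperature as the single covariate; the data, column names, and the use of lifelines are illustrative assumptions, not taken from the paper.

```python
# A minimal, hypothetical semi-parametric (Cox) proportional-hazards fit;
# the data below are invented for illustration only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "hours": [112, 240, 95, 310, 175, 60, 420, 150],    # time to breakdown (hypothetical)
    "failed": [1, 1, 1, 0, 1, 1, 0, 1],                 # 1 = failure observed, 0 = censored
    "temp_C": [250, 200, 275, 175, 225, 300, 150, 250]  # stress temperature covariate
})

cph = CoxPHFitter()
cph.fit(df, duration_col="hours", event_col="failed")
cph.print_summary()   # the sign and size of the temp_C coefficient indicate the
                      # relative importance of temperature, without committing to a
                      # fully parametric (e.g. Weibull) baseline hazard
```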
  • Tests of univariate and bivariate stochastic ageing

    Publication Year: 1994 , Page(s): 233 - 241
    Cited by:  Papers (2)

    Concepts of ageing describe how a population of units or systems improves or deteriorates with age. Many classes of life distributions are categorized and defined in the literature according to their ageing properties. An important aspect of such classifications is that the exponential distribution is nearly always a member of each class. The notion of stochastic ageing is important in any reliability analysis, and many test statistics have been developed for testing exponentiality against various ageing alternatives. This paper is an overview of these developments. The author begins with a table of ageing classes together with key references, followed by a brief discussion on the characterization of exponentiality. Test procedures are summarized, and followed by the main review. Tests of exponentiality against other alternatives are explained for randomly censored data. Finally, tests of multivariate ageing properties are listed. Some of the life classes have been derived more recently and, as far as is known, no test statistics have been proposed. On the other hand, several tests are available for some classes. Relative efficiency of a test is discussed whenever appropriate.

  • Prediction intervals for an ordered observation from a Pareto distribution

    Publication Year: 1994 , Page(s): 264 - 269

    The prediction of future ordered observations shows how long a sample of units might run until all fail in life testing. This paper presents prediction intervals for future ordered observations in a sample of size n from a Pareto distribution with known shape parameter, where the first k ordered observations have been observed. A useful method is defined for obtaining a bound on life-test duration for a sample from a population having Pareto-distributed lifetimes.

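    One way to obtain such prediction intervals numerically is Monte Carlo, using the fact that a Pareto variable conditioned to exceed a threshold is again Pareto with that threshold as its scale parameter. The sketch below, with invented numbers, predicts the largest order statistic given the first k observed failures; it illustrates the setting rather than reproducing the paper's closed-form bounds.

```python
# Monte Carlo prediction interval for the s-th order statistic of a Pareto sample,
# given the first k ordered observations; shape parameter 'a' assumed known.
# Uses the property: X | X > t is Pareto with scale t and the same shape.
import numpy as np

rng = np.random.default_rng(0)
a = 2.5                  # known shape parameter (assumed)
n, k = 10, 4             # sample size and number of observed failures
x_k = 3.2                # k-th ordered observation (hypothetical)
s = n                    # predict the largest order statistic

draws = []
for _ in range(100_000):
    # remaining n-k lifetimes are iid Pareto(a, scale = x_k)
    rest = x_k * (1.0 - rng.random(n - k)) ** (-1.0 / a)
    draws.append(np.sort(rest)[s - k - 1])   # (s-k)-th smallest of the remainder

lo, hi = np.quantile(draws, [0.05, 0.95])
print(f"90% prediction interval for X_({s}): [{lo:.2f}, {hi:.2f}]")
```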
  • Interval-availability distribution of 2-state systems with exponential failures and phase-type repairs

    Publication Year: 1994 , Page(s): 335 - 343
    Cited by:  Papers (5)

    Interval availability is a dependability measure defined as the fraction of time during which a system is in operation over a finite observation period. Usually, for computing systems, the models used to evaluate interval availability distribution are Markov models. Numerous papers using these models have been published, and only complex numerical methods have been proposed as solutions to this problem even in simple cases such as the 2-state Markov model. This paper proposes a new way to compute this distribution when the model is a 2-state semi-Markov process in which the holding times have an exponential distribution for the operational state and a phase-type distribution for the nonoperational one. The main contribution of this paper is to define a new algorithm to compute the interval availability distribution for systems having only one operational state. The computational complexity depends weakly on the number of states of the system, and sometimes it can deal also with infinite state spaces. Moreover, simple closed expressions of this distribution are shown when repair periods are of the Erlang type with eventually absorbing states.

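    The measure itself is easy to state: the fraction of an observation window [0, T] spent in the operational state. The small Monte Carlo sketch below estimates its distribution for exponential up times and Erlang (a simple phase-type) repair times; this is a brute-force illustration of the quantity being computed, not the paper's algorithm, and all numbers are invented.

```python
# Monte Carlo estimate of the interval-availability distribution over [0, T]
# for a 2-state system: exponential up times, Erlang-2 (phase-type) repair times.
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0 / 100.0        # failure rate (mean up time 100 h)  -- hypothetical
mu = 1.0 / 4.0           # rate of each Erlang-2 repair phase (mean repair 8 h)
T = 1000.0               # observation window
runs = 50_000

avail = np.empty(runs)
for r in range(runs):
    t, up_time = 0.0, 0.0
    up = True
    while t < T:
        d = rng.exponential(1.0 / lam) if up else rng.gamma(2, 1.0 / mu)
        if up:
            up_time += min(d, T - t)
        t += d
        up = not up
    avail[r] = up_time / T

print("mean interval availability:", avail.mean())
print("P[availability >= 0.95]   :", (avail >= 0.95).mean())
```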
  • Bayes stopping rules for reliability testing with the exponential distribution

    Publication Year: 1994 , Page(s): 288 - 293
    Cited by:  Papers (1)

    These rules for terminating testing involve achieving specified levels of credibility in both: (1) the probability of the posterior estimate of the exponential scale-parameter, θ, after some number, m, of the units have been tested, and (2) the mean of the probability of θ over the remaining units, when viewed pessimistically. Sample decision tables and a numerical example illustrate both the sequential and batch testing cases. Large savings in test times can be achieved whenever the first m units present strong evidence in favor of either hypothesis (H0: θ = θ0 vs. H1: θ = θ1, with θ1 < θ0).

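    For the simple-vs-simple exponential hypotheses above, a sequential Bayes rule can be sketched as follows: after each observed lifetime, update the posterior probability of H1 and stop as soon as either hypothesis reaches a chosen credibility level. This is a generic illustration with hypothetical numbers, not the paper's specific pessimistic-mean criterion.

```python
# Sequential Bayes stopping sketch for H0: theta = theta0 vs H1: theta = theta1
# (exponential mean lifetimes, theta1 < theta0). Stop when either hypothesis
# reaches posterior probability >= c. All numbers are hypothetical.
import math

theta0, theta1 = 1000.0, 400.0     # hypothesized mean lives
prior1 = 0.5                       # prior probability of H1
c = 0.95                           # required credibility to stop

times = [350.0, 280.0, 510.0, 120.0, 430.0]   # observed unit lifetimes

log_lr = 0.0                       # log likelihood ratio  f(data|H1) / f(data|H0)
for m, t in enumerate(times, start=1):
    log_lr += (math.log(1 / theta1) - t / theta1) - (math.log(1 / theta0) - t / theta0)
    post1 = 1.0 / (1.0 + (1 - prior1) / prior1 * math.exp(-log_lr))
    print(f"after unit {m}: P(H1 | data) = {post1:.3f}")
    if post1 >= c or post1 <= 1 - c:
        print("stop testing: required credibility reached")
        break
```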
  • Classification of life distributions in multivariate models

    Publication Year: 1994 , Page(s): 224 - 229
    Cited by:  Papers (1)

    In reliability theory, lifetimes of systems and components are frequently studied through univariate concepts of ageing. By considering suitable multivariate generalizations of the univariate ageing properties, some multivariate ageing classes of life distributions are defined. Properties of these classes with their equivalent definitions and chain of implications are presented along with a few characterizing properties. These results are useful for obtaining reliability bounds (when component lives are independent) at the early stage of product design. Also for model selection, characterization results can be important. Moreover, the underlying mathematical treatment is univariate when viewed through conditional distributions.

  • Failure mechanism models for electromigration

    Publication Year: 1994 , Page(s): 186 - 192
    Cited by:  Papers (18)  |  Patents (2)

    This tutorial illustrates situations where electromigration (a wearout failure mechanism) in electronic devices can degrade performance. Electromigration and its relation to microstructure are discussed. Temperature considerations are treated. A practical model for electromigration and two application examples of it are given. Qualitative design procedures for avoiding solid-state electromigration failure are briefly discussed.

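    The practical electromigration model referred to is, in most treatments, Black's equation, MTTF = A·j^(-n)·exp(Ea/kT). Assuming that is the kind of model intended, the sketch below shows the acceleration-factor calculation such a model supports; all parameter values are illustrative only.

```python
# Acceleration factor between stress and use conditions from Black's equation,
# MTTF = A * j**(-n) * exp(Ea / (k*T)).  Parameter values are illustrative only.
import math

k_eV = 8.617e-5          # Boltzmann constant, eV/K
Ea   = 0.7               # activation energy, eV (typical value for Al interconnect)
n    = 2.0               # current-density exponent

def mttf_ratio(j_stress, T_stress, j_use, T_use):
    """MTTF(use) / MTTF(stress): the acceleration factor of the stress test."""
    return (j_stress / j_use) ** n * math.exp(Ea / k_eV * (1 / T_use - 1 / T_stress))

# stress: 2 MA/cm^2 at 200 C (473 K); use: 0.5 MA/cm^2 at 85 C (358 K)
af = mttf_ratio(2.0e6, 473.15, 0.5e6, 358.15)
print(f"acceleration factor ~ {af:.0f}")
```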
  • Simulation of hot-carrier induced MOS circuit degradation for VLSI reliability analysis

    Publication Year: 1994 , Page(s): 197 - 206
    Cited by:  Papers (2)

    A new integrated simulation tool is presented for estimating the hot-carrier induced degradation of nMOS transistor characteristics and circuit performance. This reliability simulation tool incorporates: (1) an accurate 1-dimensional MOSFET model for representing the electrical behavior of locally damaged transistors; and (2) physical models for both fundamental device-degradation mechanisms (charge trapping and interface trap generation). Hot-carrier induced oxide damage can be specified by only a few parameters, avoiding extensive parameter extractions for the characterization of device damage. A repetitive simulation scheme ensures accurate prediction of the circuit-level degradation process under dynamic operating conditions. The evolution of hot-carrier related damage in each device is automatically simulated at predetermined time intervals, instead of extrapolating the long-term degradation using only the initial simulation results. Thus, the gradual variation of dynamic stress conditions is accounted for during the long-term damage estimates.

  • Generalized multistate monotone coherent systems

    Publication Year: 1994 , Page(s): 242 - 250
    Cited by:  Papers (8)

    Most available models for multistate coherent systems (MCS) assume that the state sets of the system and its components are totally ordered, an assumption that greatly limits the application of the usual MCS models. This paper presents a new MCS model, a generalized multistate coherent system (GMCS) model, which assumes, more generally, that the state sets of the system and its components are partially ordered. Since the structure of a partially ordered set is very flexible, the model applies to the reliability analysis of various multistate coherent systems. As a result, the GMCS model generalizes the coherent system theory from the binary-state case to the multistate case. The authors investigate some of the properties of this generalized model. Then, system reliability is redefined and bounds for its evaluation are derived.

  • More goodness-of-fit tests for the power-law process

    Publication Year: 1994 , Page(s): 275 - 278
    Cited by:  Papers (5)

    The power-law process is often used as a model for reliability growth of complex systems or for reliability of repairable systems. There are many results on estimation and hypothesis testing concerning parameters of the power-law process. Goodness-of-fit tests for the power-law process were presented in Park & Kim (1992) using Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling statistics. This paper considers the same problem using three statistics: Kuiper, Watson, and weighted Watson. Tables of critical values for the three statistics are presented, and the results of a power study are given under the alternative hypothesis that failure data come from a nonhomogeneous Poisson process with log-linear intensity function. The power study shows that the tests have acceptable power for various parameter values and that the Cramer-von Mises statistic of Park & Kim (1992) has the highest power among the six statistics. An example using the Cox air-conditioning repair data is presented.

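    To make the test statistics concrete: for a failure-truncated power-law process, the conditional MLE of the shape β and the probability-integral transform of the intermediate failure times yield values that should look uniform under the null hypothesis, and the Kuiper statistic measures their departure from uniformity. A sketch under these standard assumptions follows, with invented failure times; critical values must still be taken from tables such as those in the paper.

```python
# Kuiper statistic for testing a power-law (NHPP) model on failure-truncated data.
# Under H0, u_i = (t_i / t_n)**beta_hat for i = 1..n-1 should be roughly uniform.
# Critical values are not computed here; they come from published tables.
import numpy as np

t = np.array([4.1, 10.3, 17.8, 30.2, 41.5, 55.0, 73.4, 96.1, 118.0, 140.7])  # hypothetical failure times
n = len(t)

beta_hat = n / np.sum(np.log(t[-1] / t[:-1]))      # conditional MLE of the shape
u = np.sort((t[:-1] / t[-1]) ** beta_hat)          # transformed intermediate times
m = n - 1
i = np.arange(1, m + 1)

d_plus  = np.max(i / m - u)
d_minus = np.max(u - (i - 1) / m)
V = d_plus + d_minus                               # Kuiper statistic
print(f"beta_hat = {beta_hat:.3f}, Kuiper V = {V:.3f}")
```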
  • Systematic Bayes prior-assignment by coupling the mini-max entropy and moment-matching methods

    Publication Year: 1994 , Page(s): 279 - 287
    Cited by:  Papers (2)

    The prior is chosen somewhat conservatively, but not arbitrarily, in Bayes analysis for an event whose characteristics are not well understood. It is desirable to obtain the prior through an objective procedure for the given information. The author's approach to assigning the pre-prior couples the principle of maximum entropy with the moment-matching method; i.e., upper and lower bounds on the population parameter sets are found, based on upper and lower bounds of the entropy for the given information. The methodology is demonstrated by applying it to data sets of initiating events taken from the performance of nuclear power plants.

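    The moment-matching half of the procedure is straightforward to illustrate: given a prior mean and variance for a failure rate, a gamma prior with those two moments is determined uniquely. The sketch below shows only this step, with invented moments; the entropy-bound coupling that the paper adds is not reproduced.

```python
# Moment-matching a gamma prior for a failure rate lambda:
# shape = mean^2 / variance, rate = mean / variance.  Numbers are illustrative.
mean_rate = 2.0e-3      # prior mean of lambda (per hour or per demand)
var_rate  = 1.0e-6      # prior variance of lambda

shape = mean_rate ** 2 / var_rate
rate  = mean_rate / var_rate
print(f"gamma prior: shape = {shape:.2f}, rate = {rate:.1f}")

# sanity check: the matched prior reproduces the stated moments
print("mean =", shape / rate, " variance =", shape / rate ** 2)
```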
  • The effect of compensating faults models on NMR system reliability

    Publication Year: 1994 , Page(s): 294 - 300
    Cited by:  Papers (4)

    Realistic estimates of the reliability of systems with N-tuple modular redundancy (NMR) must consider the effect of compensation of logic faults. Earlier analyses that include compensating faults are impractical to use, yield very complex mathematical formulas for reliability indices, and/or concern the simplest triple modular redundancy (TMR) system only. This paper gives a general approach to the problem. Two models of compensating faults are considered. For either model the lower and upper bounds on the frequency of compensating faults are found. By applying some results of NMR system evaluation, new estimates of upper and lower bounds of NMR system reliability with respect to compensating faults are derived. A simple algebraic form of the final results makes them useful.

  • Optimal design of large software-systems using N-version programming

    Publication Year: 1994 , Page(s): 344 - 350
    Cited by:  Papers (5)

    Fault-tolerant software uses redundancy to improve reliability, but such redundancy requires additional resources and tends to be costly; therefore the redundancy level needs to be optimized. Our optimization models determine the optimal level of redundancy within a software system under the assumption that functionally equivalent software components fail independently. A framework illustrates the tradeoff between the cost of using N-version programming and the improved reliability for a software system. The 2 models deal with a single task and with multitask software. These software systems consist of several modules where each module performs a subtask and, by sequential execution of modules, a major task is performed. Major assumptions are: (1) several versions of each module, each with an estimated cost and reliability, are available; (2) these module versions fail independently. Optimization models are used to select the optimal set of versions for each module such that the system reliability is maximized and total cost remains within budget.

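    A brute-force sketch of the single-task selection problem is given below: for each module, choose an odd-sized subset of available versions, score the module by the probability that a majority of its chosen versions are correct (independent failures), and keep the budget-feasible assignment with the highest product reliability. The cost and reliability figures are invented, and per-module majority voting is an assumed interpretation of the NVP scheme rather than the paper's exact formulation.

```python
# Brute-force version selection for a small N-version-programming design.
# Each module picks an odd number of versions; the module succeeds if a
# majority of its versions are correct (independent failures).  Illustrative only.
from itertools import combinations, product

# (cost, reliability) of the versions available for each module -- hypothetical
modules = [
    [(3, 0.90), (5, 0.95), (8, 0.99)],
    [(2, 0.85), (4, 0.92), (6, 0.97)],
]
budget = 20

def majority_reliability(rels):
    """P(majority of independent versions correct)."""
    m = len(rels)
    need = m // 2 + 1
    prob = 0.0
    for outcome in product([0, 1], repeat=m):
        if sum(outcome) >= need:
            p = 1.0
            for ok, r in zip(outcome, rels):
                p *= r if ok else (1 - r)
            prob += p
    return prob

best = (0.0, None, None)
choice_sets = [
    [c for k in (1, 3) if k <= len(vs) for c in combinations(vs, k)]
    for vs in modules
]
for assignment in product(*choice_sets):
    cost = sum(c for module in assignment for c, _ in module)
    if cost > budget:
        continue
    rel = 1.0
    for module in assignment:
        rel *= majority_reliability([r for _, r in module])
    if rel > best[0]:
        best = (rel, assignment, cost)

print("best system reliability:", round(best[0], 4))
print("chosen versions per module:", best[1], "total cost:", best[2])
```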
  • Reliability of augmented tree networks under pin-out constraints

    Publication Year: 1994 , Page(s): 321 - 326

    The binary tree network provides a cost-efficient topology for parallel computers. However, its poor reliability makes it unattractive for applications that demand high reliability. This has motivated consideration of several augmented tree networks, but these networks offer increased reliability at the expense of increased node fanout. This paper studies 2 augmented tree networks that achieve high reliability with low node fanout. Exact reliabilities of both networks are computed in O(log(number of leaves)) time by deriving a system of recurrences that exploit their recursive construction. These structures are considerably more reliable than the single tree. Moreover, reliability is sensitive to the actual inter-leaf augmentation scheme; even a minor change has a noticeable impact on reliability. Two other measures of reliability, mean time-to-failure and mission time, are estimated. A closed-form approximate expression for the reliability of one of these networks is obtained; it agrees quite well with the exact value.

  • A general linear-regression analysis applied to the 3-parameter Weibull distribution

    Publication Year: 1994 , Page(s): 255 - 263
    Cited by:  Papers (2)

    The conventional techniques of linear regression analysis (linear least squares) applied to the 3-parameter Weibull distribution are extended (not modified), and new techniques are developed for the 3-parameter Weibull distribution. The three pragmatic estimation methods in this paper are simple, accurate, flexible, and powerful in dealing with difficult problems such as estimates of the 3 parameters becoming nonpositive. In addition, the inherent disadvantages of the 3-parameter Weibull distribution are revealed; the advantages of a new 3-parameter Weibull-like distribution over the original Weibull distribution are explored; and the potential of a 4-parameter Weibull-like distribution is briefly mentioned. This paper demonstrates how a general linear regression analysis or linear least-squares breaks away from the classical or modern nonlinear regression analysis or nonlinear least-squares. By adding a parameter to the simplest 2-parameter linear regression model (AB-model), two kinds of ABC models (elementary 3-parameter nonlinear regression models) are found, and then a 4-parameter AABC model is built as an example of multi-parameter nonlinear regression models. Although some other techniques are still necessary, additional applications of the ABC models are strongly implied.

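    As a point of reference for the least-squares theme, the sketch below fits a 3-parameter Weibull by ordinary probability-plot regression with a one-dimensional search over the location parameter. This is the common elementary approach, not the paper's ABC-model extensions, and the data are invented.

```python
# 3-parameter Weibull fit by probability-plot linear regression plus a grid
# search over the location (threshold) parameter.  Illustrative data only.
import numpy as np

t = np.sort(np.array([12.3, 14.1, 15.8, 18.0, 21.4, 25.2, 30.9, 38.5]))
n = len(t)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)        # Benard's median-rank estimate
y = np.log(-np.log(1.0 - F))

best = (-np.inf, None)
for gamma in np.linspace(0.0, 0.99 * t[0], 200):   # location must lie below min(t)
    x = np.log(t - gamma)
    r = np.corrcoef(x, y)[0, 1]                    # correlation of the Weibull plot
    if r > best[0]:
        slope, intercept = np.polyfit(x, y, 1)
        best = (r, (gamma, slope, np.exp(-intercept / slope)))

r, (gamma, beta, eta) = best
print(f"location = {gamma:.2f}, shape = {beta:.2f}, scale = {eta:.2f}  (corr = {r:.4f})")
```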
  • Comparison of the cumulative-hazard and Kaplan-Meier estimators of the survivor function

    Publication Year: 1994 , Page(s): 230 - 232

    When both the product-limit (Kaplan-Meier) and the cumulative-hazard estimates of the survivor function are calculated for the same data set, the survival probabilities obtained using the cumulative-hazard approach are generally slightly larger than the ones obtained using the product-limit (Kaplan-Meier) approach. This inter-relation is not specific to particular data sets; this paper shows that it holds consistently. More specifically, this paper proves this empirical relation, a regular deviation between the two main nonparametric reliability estimators.

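    The relation is easy to see numerically: the cumulative-hazard (Nelson-Aalen) estimate of survival is exp(-Σ d_i/n_i) while the Kaplan-Meier estimate is Π(1 - d_i/n_i), and since exp(-x) ≥ 1 - x the former never falls below the latter. A small sketch with hypothetical censored data makes the comparison explicit.

```python
# Kaplan-Meier vs cumulative-hazard (Nelson-Aalen based) survivor estimates.
# exp(-x) >= 1 - x implies the cumulative-hazard version is never smaller.
import numpy as np

# hypothetical lifetimes; event = 1 means failure, event = 0 means right-censored
times  = np.array([ 5,  8, 12, 12, 15, 20, 23, 30])
events = np.array([ 1,  1,  1,  0,  1,  0,  1,  1])

km, na = 1.0, 0.0
for t in np.unique(times[events == 1]):
    at_risk = np.sum(times >= t)
    deaths  = np.sum((times == t) & (events == 1))
    km *= 1.0 - deaths / at_risk           # product-limit step
    na += deaths / at_risk                 # Nelson-Aalen increment
    print(f"t={t:>2}:  KM = {km:.4f}   exp(-H) = {np.exp(-na):.4f}")
```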
  • Results of a pilot-survey about reliability-task effectiveness

    Publication Year: 1994 , Page(s): 193 - 196

    Seventy electronic manufacturers (with at least 100 employees) in the northwest USA were contacted in 1990 September with the intent of measuring their perception of reliability-task effectiveness. There were 17 competent respondents; they rated the effectiveness of 26 reliability tasks. The highest ratings (from first to third) were for development testing, failure reporting and corrective action, durability analysis, and durability testing. Interestingly, some US Mil-Std-785 reliability tools such as reliability qualification testing, sneak-circuit analysis, and reliability prediction received the lowest ratings. Many respondents thought reliability prediction was ineffective for improving product reliability, although the majority of respondents do use Mil-Hdbk-217. Since the response rate was so low, it is difficult to draw firm conclusions. Both a larger sample size and a virtually 100% response rate are needed for future studies. Other question areas, especially about corporate culture, are desirable.

  • Large-sample estimation of reliability under the exponential distribution in life testing

    Publication Year: 1994 , Page(s): 270 - 274

    The exact formula for the variance of the uniformly minimum variance unbiased estimator (UMVUE) of the reliability function, R(t), for exponential life is quite cumbersome. A simple expression for the asymptotic variance of the UMVUE of R(t) is found. Interval estimates of R(t) are also constructed. By comparing the approximations to the exact values in several configurations of sample size and population parameter, the derived formula is shown to be useful for sample sizes larger than 15.

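    For a complete sample of size n with total time on test T, the UMVUE of R(t) = exp(-t/θ) is the well-known expression (1 - t/T)^(n-1) for t < T. The sketch below compares it with the MLE exp(-nt/T) and with a delta-method variance approximation by simulation; the approximation shown is a generic large-sample one and not necessarily the exact formula derived in the paper.

```python
# UMVUE vs MLE of exponential reliability R(t) = exp(-t/theta), complete sample.
import numpy as np

rng = np.random.default_rng(2)
theta, n, t = 500.0, 20, 200.0
R_true = np.exp(-t / theta)

umvue, mle = [], []
for _ in range(20_000):
    T = rng.exponential(theta, n).sum()            # total time on test
    umvue.append((1 - t / T) ** (n - 1) if t < T else 0.0)
    mle.append(np.exp(-n * t / T))

# generic delta-method (large-sample) variance approximation
var_approx = (t / theta) ** 2 * np.exp(-2 * t / theta) / n
print("true R(t)        :", round(R_true, 4))
print("mean UMVUE / MLE :", round(np.mean(umvue), 4), "/", round(np.mean(mle), 4))
print("var  UMVUE / MLE :", round(np.var(umvue), 6), "/", round(np.var(mle), 6))
print("delta-method var :", round(var_approx, 6))
```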
  • Availability for repairable components and series systems

    Publication Year: 1994 , Page(s): 329 - 334
    Cited by:  Papers (6)

    The failure pattern of repairable components is often modelled by an alternating renewal process which implies that a failed component is perfectly repaired. In practice, repair is often imperfect. This paper proposes a generalized availability model for repairable components and series systems. The lifetime of a repaired component has a general distribution which can be different from that of a new component. Availability and some asymptotic quantities in these models are derived. An example illustrates the application of these models.

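    For the limiting (long-run) quantities, the first, as-new lifetime washes out and only the post-repair lifetime and repair-time means matter; component availabilities then multiply for a series system of independently maintained components. A short numeric sketch under these standard assumptions follows, with invented means; it is not the paper's general derivation.

```python
# Long-run availability of a series system whose components are imperfectly
# repaired: post-repair lifetimes have a different mean than new units.
# Limiting availability of each component = MUT_repaired / (MUT_repaired + MDT).
# All numbers are hypothetical.

components = [
    # (mean up time after repair, mean down time)
    (800.0, 20.0),
    (1200.0, 35.0),
    (500.0, 10.0),
]

avail = 1.0
for mut, mdt in components:
    a = mut / (mut + mdt)
    avail *= a                       # independence of component processes assumed
    print(f"component availability = {a:.4f}")

print(f"series-system limiting availability = {avail:.4f}")
```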
  • Inference life-models of electrical insulating materials by using a Kalman filter

    Publication Year: 1994 , Page(s): 210 - 216
    Cited by:  Papers (5)

    A Kalman filter is applied to life tests for characterizing electrical or thermal endurance of electrical insulating materials. This recursive estimator provides updated life-model parameter values after each life test. The life models are: (1) the inverse power law and the exponential law, used for electrical or multi-stress ageing; and (2) the Arrhenius model, used for thermal ageing. The state, prediction, and updating equations of the Kalman filter algorithm are specified for insulation endurance inference. Insight into the definition of the state variables, which are directly related to the model parameters, and into the determination of system and observation errors is developed. A recursive breakdown test detects important changes in the prevailing ageing process. The range of validity of the life model, as well as information on electrical and thermal thresholds, is considered. A flow chart of the filtering algorithm is presented. Example experimental results relevant to insulating materials and systems subjected to electrical and thermal life tests are processed according to the algorithm.

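    The recursive estimation idea can be sketched with the inverse-power-law model: taking logs gives a linear observation log L = a - b·log V plus noise, so a Kalman filter with a constant two-component state [a, b] updates the parameter estimates after each life test. The sketch below is a bare-bones version with invented data and noise levels, omitting the paper's treatment of thresholds and validity ranges.

```python
# Recursive (Kalman-filter) update of inverse-power-law life-model parameters:
# log L = a - b * log V + noise, state x = [a, b] assumed constant.
# Data and noise covariances are illustrative only.
import numpy as np

tests = [(2.0, 5.2e4), (2.5, 1.1e4), (3.0, 2.9e3), (3.5, 9.0e2)]  # (stress kV, life h)

x = np.array([10.0, 5.0])            # initial guess for [a, b]
P = np.diag([25.0, 25.0])            # initial state covariance (vague)
Q = np.diag([1e-6, 1e-6])            # small process noise (parameters ~ constant)
R = 0.05                             # observation (log-life) noise variance

for V, L in tests:
    # predict: state is constant, covariance inflated by process noise
    P = P + Q
    # update with observation y = log L and observation row H = [1, -log V]
    H = np.array([1.0, -np.log(V)])
    y = np.log(L)
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (y - H @ x)
    P = P - np.outer(K, H @ P)
    print(f"after V = {V} kV: a = {x[0]:.3f}, b = {x[1]:.3f}")
```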
  • Order-P: an algorithm to order network partitionings

    Publication Year: 1994 , Page(s): 310 - 320
    Cited by:  Papers (2)

    A network partitioning occurs when failures fragment the network into at least 2 sub-networks. This causes the network performance to degrade; many techniques have been proposed to combat this degradation. The number of possible partitionings in a fully connected network of n nodes is greater than 2^n for large n. Thus, analysis of partitioning-resilient algorithms is extremely difficult due to the difficulty of computing the probabilities of occurrence of the partitionings. The authors propose an algorithm that orders network partitionings in decreasing order of probability. This algorithm is similar to the Most Probable State Enumeration (MPSE) algorithm of Li & Silvester (1984). By looking at only the most probable partitionings, the performance of the network can be estimated well. This approach also gives bounds on the network performance. Two distinct, equally important goals are attained: the algorithm Order-P is proposed, and the algorithm is applied to real-world cases, demonstrating its value in performance modeling of distributed systems.

  • Reliability indices for electric-power wheeling

    Publication Year: 1994 , Page(s): 207 - 209
    Cited by:  Papers (2)

    Transmission networks are being used more extensively to accommodate energy transactions that use the economic benefits of pooling. This paper evaluates the probabilistic wheeling capability of an interconnected power transmission system. Large electric-power wheeling can degrade the reliability of the electric power network. This problem is also important for power system control and planning purposes. This paper discusses, from the viewpoint of power network reliability, the problem of power wheeling from Western Europe to Greece through the Yugoslav electric power system. The relative cost of the wheeling due to reliability indices is 2% of the production cost for the power wheeled. The calculation of reliability indices of wheeling enables more adequate estimation of the wheeling rate for power transactions between utilities. This paper describes an approach to calculate the cost of the average energy not supplied and load curtailed due to wheeling. This cost can even be negative, depending mostly on the amount of wheeling power and consumer loads. The reliability index EENS (expected energy not supplied) is more comprehensive than PI (performance index) for the contingency-severity selection procedure.

  • Optimal design of systems with competing failure modes

    Publication Year: 1994 , Page(s): 251 - 254
    Cited by:  Papers (3)

    Consider the problem of achieving optimal system size, n, and threshold value, k, for {k, n-k+1}-out-of-n and majority systems with competing failure modes. Components are s-independent and identically distributed, and the two system failure modes can have different costs. The authors determine optimal k, given n; optimal n, given k; and optimal k, n. Optimal implies minimum mean system cost for the {k, n-k+1}-out-of-n or majority system. The authors study the behavior of the optimal k as a function of the failure probabilities, and the problem of maximizing the reliability of majority systems. Numerical examples illustrate the results.

  • An algorithm for computing the reliability of weighted-k-out-of-n systems

    Publication Year: 1994 , Page(s): 327 - 328
    Cited by:  Papers (25)

    This paper constructs a new k-out-of-n model, viz., a weighted-k-out-of-n system, which has n components, each with its own positive integer weight (total system weight = w), such that the system is good (failed) if the total weight of good (failed) components is at least k. The reliability of the weighted-k-out-of-n:G system is the complement of the unreliability of a weighted-(w-k+1)-out-of-n:F system. Without loss of generality, the authors discuss the weighted-k-out-of-n:G system only. The k-out-of-n:G system is a special case of the weighted-k-out-of-n:G system wherein the weight of each component is 1. An efficient algorithm is given to evaluate the reliability of the weighted-k-out-of-n:G system. The time complexity of this algorithm is O(n·k).

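    An O(n·k) evaluation of this kind can be written as a simple dynamic program over components, propagating the probability of each accumulated good weight capped at k. A minimal sketch with invented weights and reliabilities is below; it follows the general idea of such recursions rather than reproducing the paper's exact algorithm.

```python
# Reliability of a weighted-k-out-of-n:G system by dynamic programming in O(n*k):
# dp[w] = probability that the good components seen so far carry weight w (capped at k).
# Component weights and reliabilities below are hypothetical.

def weighted_k_out_of_n_G(k, weights, rels):
    dp = [0.0] * (k + 1)
    dp[0] = 1.0
    for w_i, p_i in zip(weights, rels):
        nxt = [0.0] * (k + 1)
        for w, prob in enumerate(dp):
            if prob == 0.0:
                continue
            nxt[w] += (1 - p_i) * prob              # component i failed
            nxt[min(w + w_i, k)] += p_i * prob      # component i good
        dp = nxt
    return dp[k]                                     # total good weight >= k

weights = [3, 2, 2, 1, 4]
rels    = [0.95, 0.90, 0.92, 0.85, 0.97]
print("system reliability:", round(weighted_k_out_of_n_G(6, weights, rels), 5))
```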
  • Phased-mission system reliability under Markov environment

    Publication Year: 1994 , Page(s): 301 - 309
    Cited by:  Papers (19)

    The authors show how to determine the reliability of a multi-phase mission system whose configuration changes during consecutive time periods, assuming failure and repair times of components are exponentially distributed and redundant components are repairable as long as the system is operational. The mission reliability is obtained for 3 cases, based on a Markov model. (1) Phase durations are deterministic; a computationally compact model is formulated and a programmable solution is developed using eigenvalues of reduced transition-rate matrices. (2) Phase durations are random variables with exponential distributions and the mission is required to be completed within a time limit; the solution is derived as a recursive formula, using the result of case 1 and further mathematical treatment; a closed-form solution would be prohibitively complex and laborious to program. (3) Phase durations are random variables and there is no completion-time requirement; the solution is derived similarly to case 1 using moment generating functions of the phase durations. Generally, reliability problems of phased-mission systems are complex. The authors' method provides exact solutions which can be easily implemented on a computer.


Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong