
IEEE Transactions on Reliability

Volume 38, Issue 3 • Aug 1989


Displaying Results 1 - 25 of 25
  • Approximate MLE of the scale parameter of the Rayleigh distribution with censoring

    Page(s): 355 - 357

    For the Rayleigh distribution, the maximum-likelihood method does not provide an explicit estimator for the scale parameter for left-censored samples; however, such censored samples arise quite often in practice. The author provides a simple method of deriving explicit estimators by approximating the likelihood function and obtains the asymptotic variance of this estimator. He shows that this estimator is as efficient as the best linear unbiased estimator. An example is given to illustrate this method.

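For a complete (uncensored) sample, the Rayleigh scale parameter has a well-known closed-form MLE; the paper's contribution is an approximate closed form for the left-censored case, where no explicit solution exists. A minimal sketch of the complete-sample estimator only (illustrative background, not the author's censored-sample method):

```python
import math

def rayleigh_scale_mle(sample):
    """Closed-form MLE of sigma for a complete Rayleigh sample.

    Rayleigh pdf: f(x) = (x / sigma^2) * exp(-x^2 / (2 sigma^2)), x > 0.
    Maximizing the log-likelihood gives sigma_hat^2 = sum(x_i^2) / (2n).
    """
    n = len(sample)
    return math.sqrt(sum(x * x for x in sample) / (2.0 * n))

# Example: for the sample [1, 2, 3], sigma_hat^2 = (1 + 4 + 9) / 6 = 14/6
print(rayleigh_scale_mle([1.0, 2.0, 3.0]))
```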
  • A note on 3-state systems

    Page(s): 277 - 278

    D.M. Malon (ibid., vol. 38, p.275-6, Aug. 1989) points out a pitfall of certain reliability problems pertaining to systems of three-state devices. The inference to be drawn from his observation is that special care should be used in describing such systems and in specifying exactly what constitutes success and failure. The present authors (ibid., vol.37, p.388-94, Oct. 1988) have modeled systems that can be represented by a network with designated source and sink; the reliability problem they treated is described by the following two system failures: (1) the system fails (short) if there is a path of short-failed components joining the source and the sink; (2) the system fails (open) if every path joining the source and sink includes an open-failed component. The systems they modeled are therefore somewhat more general than the array structures treated by B.W. Jenney and D.J. Sherwin (ibid., vol.R-35, p.532-8, Dec. 1986). But since the two failure modes are mutually exclusive, the pitfall that Malon describes does not affect the former. Jenney and Sherwin's difficulties do not arise because of the generality of the systems; rather, they appear in some of the more general reliability measures (such as j-out-of-m requirements) used on the sp-arrays and ps-arrays.

  • Evaluation of system reliability with common-cause failures, by a pseudo-environments model

    Page(s): 328 - 332

    An efficient method for calculating system reliability with CCFs (common-cause failures) is presented by applying the factoring (total probability) theorem when the system and its associated class of CCFs are both arbitrary. Existing methods apply this theorem recursively until no CCF remains to be considered, and so can be time-consuming in computation. The method applies the theorem only once and can be carried out in two steps: (1) determine each state in terms of the occurrence (or not) of every CCF in the associated class, regard it as a pseudo-environment, and calculate its probability or weight; (2) determine each resulting subsystem of the system under the environment, calculate its reliability as in the no-CCF case, and take the weighted sum of these reliabilities, which is the system reliability. The method is formulated in terms of a Markov process and requires only the occurrence rate of each CCF to obtain the probability of each environment and only the failure rate of each component to obtain the system reliability under each environment; hence, it is practical, efficient, and useful.

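The two-step procedure in the abstract is an application of the total-probability theorem: condition on which CCFs occur (the "pseudo-environment"), compute the conditional system reliability as if component failures were independent, then take the weighted sum over environments. A hedged toy sketch (the two-component parallel system, probabilities, and single CCF below are invented for illustration; the paper treats arbitrary systems and CCF classes):

```python
# Two components in parallel; independent failure probabilities.
q = [0.1, 0.2]
p_ccf = 0.05  # probability the hypothetical common cause occurs, failing both

def parallel_reliability(q1, q2):
    # A parallel pair works unless both components fail.
    return 1.0 - q1 * q2

# Step 1: enumerate pseudo-environments (CCF occurs or not) with weights.
# Step 2: conditional reliability per environment, then the weighted sum.
envs = [
    (1.0 - p_ccf, parallel_reliability(q[0], q[1])),  # no CCF: independent failures
    (p_ccf, 0.0),                                     # CCF: both components down
]
system_reliability = sum(w * r for w, r in envs)
print(system_reliability)  # 0.95 * 0.98 = 0.931
```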
  • Reliability analysis of a class of fault-tolerant systems

    Page(s): 333 - 337

    An architecture called the digital-data system is proposed to increase the reliability of a class of communication and network control systems. A general expression for the reliability of this system is derived using the total probability theorem, and the issue of minimizing the system cost is discussed. The architecture is quite general in that it models software fault-tolerant systems such as the recovery block scheme. Other software fault-tolerance schemes, like the deadline mechanism for real-time recovery, can also be modeled using this technique. A numerical example is given to illustrate the technique.

  • Case study: reliability of the INEL-Site Power System

    Page(s): 279 - 284

    Recent power outages at the EG&G Idaho National Engineering Laboratory (INEL) prompted some customers to call for major modifications to the power system. The reliability of the INEL-Site Power System (a loop configuration) was analyzed to understand the true performance of the system and the dominant causes of outages. This was done using fault-tree modeling along with the IRRAS-PC computer code. Twenty-nine years of site-specific data were obtained from logbooks maintained by the INEL-Site Power System dispatch office. A detailed model was developed and validated against outage history for the site. The fault-tree analysis identified several major contributors to the outage frequency. It is shown that the fault-tree analysis technique provides a flexible and useful method for quantifying the system unreliability and identifying the major contributions to it. The model accurately describes the overall INEL-Site Power System (INEL-SPS) performance and can easily be used to quantify the anticipated change in reliability due to potential modifications in the system.

  • Better than Venn?-the Ben diagram

    Page(s): 341 - 342

    The Ben diagram is proposed as an alternative to the Venn diagram; it carries the same information but has certain advantages, including better consistency of representation and a more systematic method of construction. The Ben diagram also lends itself well to representing probabilistic relationships among events. As a tool for teaching event state-space concepts, it is believed to be better than the Venn diagram.

  • New moment-identities based on the integrated survival function

    Page(s): 358 - 361

    A result by F.J. Samaniego (ibid., vol. R-31, p.455-7, 1982) representing moments of positive random variables in terms of multiple integrals of the survivor function is extended in two directions. Identities are developed which apply to random vectors, any or all of whose components may assume negative values. Applications to multivariate new-better-than-used distributions are discussed.

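The univariate starting point that this paper generalizes is the classical representation of moments of a nonnegative random variable through its survival function; iterating the integrated survival function yields the higher moments. A sketch of the known univariate identities (not the paper's vector-valued extension):

```latex
% For a nonnegative random variable X with survival function
% S(t) = \Pr\{X > t\}:
\mathbb{E}[X] = \int_0^\infty S(t)\,dt,
\qquad
\mathbb{E}[X^k] = k \int_0^\infty t^{k-1} S(t)\,dt .

% Iterating the integrated survival function,
%   S_1 = S, \qquad S_{j+1}(t) = \int_t^\infty S_j(u)\,du,
% expresses the moments as multiple integrals of S:
\mathbb{E}[X^k] = k!\, S_{k+1}(0).
```

For k = 2, the last identity reads E[X²] = 2∫₀^∞ ∫_t^∞ S(u) du dt, which reduces to the single-integral form by exchanging the order of integration.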
  • A decomposition scheme for the analysis of fault trees and other combinatorial circuits

    Page(s): 312 - 327, 332

    A new decomposition scheme for the analysis of fault trees and other more general combinatorial circuits is presented. The scheme is based on a tabular representation for the necessary information about a subsystem and generalizes the concept of modular decomposition. The basic algorithm is extended to obtain a very fast method for finding the sensitivity of the results to a large class of perturbations of the data. The scheme can be used to compute and analyze the sensitivity of many different types of measures on a circuit. The authors give a pair of axioms that capture sufficient conditions for the scheme to apply to computing a given measure and explicitly consider three different computational problems. A single algorithm is tailored to solve a new problem simply by supplying it with the necessary problem-specific subroutines. The efficiency of the algorithm depends on the choice of decomposition tree; the authors propose two simple heuristics for constructing a good decomposition tree. A theorem is obtained implying that if an efficient decomposition tree is found for the basic algorithm, the extended algorithm will also be efficient.

  • A statistical approach for determining release time of software system with modular structure

    Page(s): 365 - 372

    An algorithmic procedure is developed for determining the release time of a software system with multiple modules where the underlying module structure is explicitly incorporated. Depending on how much the module is used during execution, the impact of software bugs from one module is distinguished from the impact of software bugs from another module. It is assumed that software bugs in one module have i.i.d. lifetimes but lifetime distributions can vary from one module to another. For the two cases of exponential and Weibull lifetimes, statistical procedures are developed for estimating distribution parameters based on failure data during the test period for individual modules. In the exponential case, the number of software bugs can also be estimated following H. Joe and N. Reid (J. Amer. Statis. Assoc., vol.80, p.222-6, 1985). These estimates enable one to evaluate the average cost due to undetected software bugs. By introducing an objective function incorporating this average cost as well as the time-dependent value of the software system and the cumulative running cost of the software testing, a decision criterion is given for determining whether the software system should be released or the test should be continued further for a certain period Δ. The validity of this procedure is examined through extensive Monte-Carlo simulation.

  • Hidden dependence in human errors

    Page(s): 296 - 300

    Two methodological refinements to the technique for human error rate prediction (THERP) for adjusting predictions to accommodate unconsidered sources of dependency are presented. (1) Synchrony adjustment estimates dependencies among errors due to cyclical patterns in performing otherwise unrelated activities. Synchrony adjustments would also be appropriate for modeling repairs and component burn-in in systems which are operated in a cyclical fashion. (2) Common rate adjustment adjusts estimates for multiple errors within an activity to make them consistent with the THERP assumptions about the distribution of error rates. Synchrony adjustment can result in substantial increases in the estimate of joint unavailability. While the effects of rate adjustment are less dramatic, they suggest that the probability of events involving very many errors is higher than anticipated. Solutions to these two problems are presented along with numerical examples.

  • Smaller sums of disjoint products by subproduct inversion

    Page(s): 305 - 311

    A new method is presented for calculating system reliability by sum of disjoint products. While the Abraham algorithm (1979) and its successors invert single variables, this new method applies inversion also to products of several variables. This results in shorter computation time and appreciably fewer disjoint products. Hence, the system reliability formula is considerably reduced in size. The Abraham algorithm, for instance, produces 71 disjoint products for a network of 12 components and 24 minpaths, while this new method produces only 41 disjoint terms. This facilitates the numerical evaluation of the system reliability formula by reducing computation and rounding errors. Computer programs for both algorithms are included. They were written in Pascal and run on a microcomputer.

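The sum-of-disjoint-products idea can be seen on a small example: rewrite the union of minpath events as a sum of mutually disjoint terms, so the system reliability is a plain sum of term probabilities. The sketch below uses a hypothetical 4-component network with two minpaths (the paper's 12-component, 24-minpath network is not reproduced) and checks an Abraham-style single-variable-inversion SDP against brute-force state enumeration:

```python
from itertools import product

# The system works if all components of some minpath work.
# Hypothetical network: minpaths {1,2} and {3,4} (0-based indices below).
minpaths = [{0, 1}, {2, 3}]
p = [0.9, 0.8, 0.7, 0.6]  # component reliabilities

def brute_force_reliability(minpaths, p):
    # Sum P(state) over all 2^n component states in which some minpath works.
    total = 0.0
    for state in product([0, 1], repeat=len(p)):
        if any(all(state[i] for i in path) for path in minpaths):
            pr = 1.0
            for i, s in enumerate(state):
                pr *= p[i] if s else 1.0 - p[i]
            total += pr
    return total

# Abraham-style disjoint products for these two minpaths, inverting single
# variables:  x1 x2  +  !x1 x3 x4  +  x1 !x2 x3 x4   (mutually disjoint)
sdp = (p[0] * p[1]
       + (1 - p[0]) * p[2] * p[3]
       + p[0] * (1 - p[1]) * p[2] * p[3])
print(brute_force_reliability(minpaths, p), sdp)  # both 0.8376
```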
  • The Bayes predictive approach in reliability theory

    Page(s): 379 - 382

    A strong motivation for reliability analyses is to support decision-making relative to the construction and operation of systems involving an economic/environmental risk. The Bayes approach to making decisions in the face of uncertainty about mission survival is presented step by step. The authors show how the decision maker defines his own predictive probability distribution on the system time to failure and ranks the couples [decision taken, observed value of system failure time] by means of a loss function. They also introduce the minimum-expected-loss principle as a leading criterion for decision making. Finally, they address the more general case in which the final decision can be delayed in favor of collecting more information and derive the optimal termination procedure for life testing. For selecting the best course of action in a Bayes reliability frame there is neither need nor room for estimation of probability distribution parameters.

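The minimum-expected-loss principle can be illustrated with a toy launch/abort decision: score each action by its expected loss under the decision maker's predictive distribution and pick the minimum. All numbers, the exponential predictive, and the two-action loss structure below are invented for illustration; the paper's loss function is not reproduced here:

```python
import math

# Hypothetical predictive distribution: system time to failure T is
# exponential with predictive mean m. Mission succeeds if T > tau.
m = 100.0    # predictive mean time to failure
tau = 10.0   # mission length
L = 50.0     # loss if launched and the system fails during the mission
C = 10.0     # fixed loss of aborting

p_fail = 1.0 - math.exp(-tau / m)   # predictive P(T <= tau)
expected_loss = {"launch": L * p_fail, "abort": C}
decision = min(expected_loss, key=expected_loss.get)
print(decision, expected_loss)  # launch: 50 * (1 - e^-0.1) ~ 4.76 < 10
```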
  • Objective and subjective estimates of human error

    Page(s): 301 - 304

    It is shown that if subjective estimates of probabilities of events are to be converted to objective estimates by means of one or two empirical anchors, a currently recommended equation can produce meaningless values of probability which exceed unity. A new method is proposed whose rescaling is guaranteed to keep the probability in the range [0, 1]. An example is given to demonstrate the approach.

  • The number of failed components in a consecutive-k-of-n:F system

    Page(s): 338 - 340

    For a consecutive-k-out-of-n:F system an exact formula and a recursive relation are presented for the distribution of the number of components, X, that fail at the moment the system fails. X estimates how many cold spares are needed to replace all failed components upon system failure. The exact formula expresses the dependence of the distribution of X upon the parameters k and n. The recursive formula is suitable for efficient numerical computation of the distribution of X.

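The distribution of X can be checked by brute force for small systems: with i.i.d. component lifetimes, every failure order is equally likely, so enumerating all n! orders and recording when a run of k consecutive failed components first appears gives the exact distribution. This enumeration (not the paper's closed-form or recursive formulas) is sketched below:

```python
from itertools import permutations
from collections import Counter

def failure_count_distribution(n, k):
    """Exact distribution of X, the number of failed components at the
    moment a linear consecutive-k-of-n:F system fails, assuming i.i.d.
    component lifetimes (so all n! failure orders are equally likely).
    Brute-force enumeration -- only feasible for small n.
    """
    counts = Counter()
    for order in permutations(range(n)):
        failed = [False] * n
        for step, comp in enumerate(order, start=1):
            failed[comp] = True
            # System fails once some k consecutive components are all failed.
            if any(all(failed[i:i + k]) for i in range(n - k + 1)):
                counts[step] += 1
                break
    total = sum(counts.values())
    return {x: c / total for x, c in sorted(counts.items())}

# n = 4, k = 2: the system always fails at the 2nd or 3rd component failure.
print(failure_count_distribution(4, 2))  # {2: 0.5, 3: 0.5}
```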
  • Bounds for the probability of failure resulting from stress/strength interference

    Page(s): 383 - 385

    The failure probability in static stress-strength models is determined by the interference region. Methods have been devised which determine the bounds on this unreliability based only upon the independent probabilities in this region; no distribution identification or knowledge of the means or variances is required. Unfortunately, empirical evidence demonstrates that existing methods can provide inaccurate bounds on the unreliability. An alternate method is developed which bounds the unreliability with certainty.

  • Bivariate mean residual life

    Page(s): 362 - 364

    An overview is presented of some theoretical results concerning the mean residual life function used in reliability theory. An extension of the concept to the bivariate case is introduced, and the relationship between the reliability and mean residual life function is derived. The properties of the function and conditions for asymptotic exponentiality of component life lengths are discussed.

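The univariate mean residual life function, and the well-known formula recovering the survival function from it, are the background for the bivariate extension in the abstract. A sketch of the standard univariate relations (not the paper's bivariate results):

```latex
% Mean residual life of a lifetime X with survival function
% S(t) = \Pr\{X > t\}:
m(t) = \mathbb{E}[X - t \mid X > t]
     = \frac{\int_t^\infty S(u)\,du}{S(t)} .

% The classical inversion formula recovering S from m (the univariate
% analogue of the reliability/MRL relationship extended in the paper):
S(t) = \frac{m(0)}{m(t)} \exp\!\left( - \int_0^t \frac{du}{m(u)} \right).
```

For the exponential distribution, m(t) is the constant mean, and the inversion formula immediately returns S(t) = e^{-t/m}.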
  • Bias in a stress-strength problem

    Page(s): 386 - 387

    The bias of the maximum likelihood estimator for R = Pr{X < Y}, where X and Y are independent normal random variables with unknown parameters, is discussed. The bias is an odd function with respect to δ = Φ⁻¹(R), where Φ(·) is the cdf of the standard normal distribution, so the study is restricted to R ⩾ 0.5, or equivalently, δ ⩾ 0. There exists δ₀ > 0 such that the bias is positive in the interval 0 < δ < δ₀. R has a positive bias at least in the interval 0.84 < R < 0.94.

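The quantity studied here has a simple closed form: for independent normals, Y − X is normal, so R = Pr{X < Y} = Φ(δ) with δ = (μ_Y − μ_X)/√(σ_X² + σ_Y²). A short sketch of this relationship (background only; the paper's bias analysis of the MLE is not reproduced):

```python
import math

def normal_cdf(x):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def stress_strength_R(mu_x, sig_x, mu_y, sig_y):
    """R = Pr{X < Y} for independent normal X (stress) and Y (strength).

    Y - X is normal with mean mu_y - mu_x and variance sig_x^2 + sig_y^2,
    hence R = Phi(delta), delta = (mu_y - mu_x) / sqrt(sig_x^2 + sig_y^2).
    """
    delta = (mu_y - mu_x) / math.sqrt(sig_x ** 2 + sig_y ** 2)
    return normal_cdf(delta)

print(stress_strength_R(0.0, 1.0, 0.0, 1.0))  # 0.5, since delta = 0
```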
  • Reliability analysis of capacity and voltage constrained, optimally operated, electric power systems

    Page(s): 392 - 397

    A method for reliability evaluation of capacity and voltage constrained bulk power systems is suggested based upon contingency analysis. For each contingency and consumer load demand situation assumed, the optimal operation of the power system is established that minimizes the total system consumer curtailment cost. Consumer demands at different nodes may be treated as synchronous or as mutually uncorrelated quantities, depending on the data available. The method uses a version of the decoupled power flow network model that makes it possible to determine optimal generation of both real and reactive powers for the system sequentially. This property makes the method superior to other available approaches for bulk power system reliability analysis. An illustrative example is included.

  • Properties of continuous analog estimators for a discrete reliability-growth model

    Page(s): 373 - 378

    A discrete reliability-growth model (appropriate for success-failure data) whose derivation parallels that of a popular nonhomogeneous Poisson process model (appropriate for continuous failure time data) is considered. Following J. M. Finkelstein (ibid., vol.R-32, p.508-11, Dec. 1983), continuous analog estimators are defined for use with the discrete model when there is a constant prespecified number of test trials between system configuration changes. The large-sample properties of these estimators, including consistency and normality, are established. Large-sample standard-error formulas and confidence interval procedures are developed.

  • On estimating the mean time to failure with unknown censoring

    Page(s): 343 - 347

    Sometimes, in reliability studies, neither the life of all failed units nor the number of units still functioning is known at any specific time due to problems such as administrative delays. Consequently, one might consider an estimate of the mean time to failure (MTTF) based only on known failure times of part of the units. An investigation is conducted into the bias and efficiency of such an estimator for either an exponential or a Weibull distribution. In the exponential case, exact expressions are obtained; for the Weibull case, a Monte Carlo simulation was used. The estimate of MTTF based on known lifetimes of failed units alone underestimates the MTTF, with smaller variance but higher mean squared error than the estimate based on the total accumulated lifetime of both failed and surviving units.

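The underestimation in the exponential case is easy to see by simulation: averaging only the failure times recorded inside a finite observation window discards every long lifetime, so the naive average falls well below the true MTTF. A quick Monte-Carlo illustration with invented numbers (not the paper's exact expressions):

```python
import random

random.seed(42)
theta = 10.0    # true MTTF of the exponential lifetimes
window = 5.0    # only failures occurring before this time are recorded
n = 100_000

lifetimes = [random.expovariate(1.0 / theta) for _ in range(n)]
observed = [t for t in lifetimes if t <= window]
naive_mttf = sum(observed) / len(observed)
# For these parameters E[X | X <= 5] is about 2.29, far below theta = 10.
print(naive_mttf)
```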
  • A new class of bathtub-shaped hazard rates and its application in a comparison of two test-statistics

    Page(s): 351 - 354

    A class of life distributions arising in reliability situations has bathtub-shaped hazard rates. The author investigates the problem of testing exponentiality (constant hazard rate) against bathtub distributions. A critical note on well-known bathtub distributions is made from a practical point of view, and a class of bathtub distributions is introduced that is extremal in the context of power investigations. A Monte Carlo power comparison of two test-statistics is performed. The presentation is based on the total time on test concept.

  • On a common error in open and short circuit reliability computation

    Page(s): 275 - 276

    In reliability computations for complex systems subject to two kinds of failure, analysts sometimes underestimate system reliability by using a formula that actually applies only to certain simpler structures. This error occurs several times, for example, in an otherwise instructive paper by B.W. Jenney and D.J. Sherwin (ibid., vol.R-35, p.532-8, Dec. 1986). The purpose of this work is to describe the error, illustrate how it arises in practice, and offer a correction.

  • Availability calculations with exhaustible spares

    Page(s): 388 - 391

    Many systems rely solely on the spares they carry to fulfil their missions. The authors develop relatively simple equations for the availability of a system with exhaustible spares. The equations are conservative but, for a large system, are more tractable than simulation or the exact approach based on Markov theory. The equations are useful for tradeoff or sensitivity analysis. Given various complements of spares, system availability can be calculated, or the optimal selection of spares can be determined. Since the equations conservatively approximate system availability if the system consists of several types of units in series, the equations can be used to determine if a system meets its availability requirement. If the calculated system availability is less than the requirement, spares could be added or more exact techniques could be applied. Because of various simplifying assumptions, the equations are most exact when repair time is small compared to mission time and when k is close to n for k-out-of-n:G systems.

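A basic calculation in this setting is the probability that a carried stock of spares covers all failures during a mission. The sketch below uses the standard Poisson spares-sufficiency formula under invented parameters; it is a generic exhaustible-spares computation, not necessarily the authors' conservative availability equations:

```python
import math

def spares_sufficiency(rate, mission_time, spares):
    """P(the carried spares cover all failures during the mission),
    assuming failures arrive as a Poisson process with the given rate and
    each failure consumes one spare (no repair or resupply in-mission).
    """
    lam = rate * mission_time  # expected number of failures in the mission
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(spares + 1))

# With an expected 1 failure per mission, each added spare raises the
# probability that the stock suffices:
for s in range(4):
    print(s, spares_sufficiency(0.01, 100.0, s))
```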
  • Availability of CNC machines: multiple-input transfer-function modeling

    Page(s): 285 - 295

    Computer numerical control (CNC) machines are an integral part of FMSs (flexible manufacturing systems). Frequent breakdown of CNC machines can lead to an unacceptable level of downtimes in FMSs. The acquisition of CNC machines has been justified on economic grounds; however, some analyses do not consider the breakdown of CNC machines. The interrelationship between the downtimes, type of breakdown, and uptimes of CNC machines is explored. The data are analyzed using multiple-input transfer-function modeling. The authors' approach is a prototype for the analysis of reliability data involving three interrelated sources of data. They show that: (i) uptime is not a leading indicator of downtime; (ii) downtimes and uptimes are not importantly related in the same period; and (iii) each CNC machine downtime has a forward effect on the uptime, but the influence is delayed by 1 to 7 downtime periods. The modeling provides management with useful information for assessing maintenance policies, making replacement policy decisions, and evaluating the availability and readiness of CNC machines. The models can provide decision makers with superior short-term forecasting and important maintenance information which could be used in the development of production plans and future machine requirements.

  • On estimating parameters in a discrete Weibull distribution

    Page(s): 348 - 350

    Two discrete Weibull distributions are discussed, and a simple method is presented to estimate the parameters for one of them. The discrete Weibull data arise in reliability problems when the observed variable is discrete; the modeling of such a random phenomenon has already been accomplished, and estimation of parameters in these models is considered here. Since the usual methods of estimation are not easy to apply, a simple method is suggested to estimate the unknown parameters. Simulation results are given to compare this method with the method of moments; the estimates obtained by the two methods appear to have similar properties. The method can be applied in most inferential problems. Though the authors have restricted themselves to the type I distribution, their method of proportions for the estimation of parameters can be easily applied to the type II distribution as well.

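The type I discrete Weibull has survival function P(X ≥ x) = q^(x^β) for x = 0, 1, 2, …, so P(X = 0) = 1 − q and the observed proportion of zeros estimates q directly — an estimate in the method-of-proportions spirit of the abstract. The sketch below (with invented parameters; the paper's exact estimating equations are not reproduced) samples from the distribution by inverse transform and recovers q:

```python
import math
import random

random.seed(7)
q_true, beta_true = 0.8, 1.5
n = 20_000

def sample_discrete_weibull(q, beta):
    # Inverse-transform sampling: X = floor((ln U / ln q)**(1/beta))
    # satisfies P(X >= x) = q**(x**beta) for integer x >= 0.
    u = random.random()
    return math.floor((math.log(u) / math.log(q)) ** (1.0 / beta))

data = [sample_discrete_weibull(q_true, beta_true) for _ in range(n)]
# Method of proportions for q: P(X = 0) = 1 - q, so q_hat = 1 - (share of zeros).
q_hat = 1.0 - sum(1 for x in data if x == 0) / n
print(q_hat)  # close to q_true = 0.8
```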

Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong