
IEEE Transactions on Reliability

Issue 1 • March 1992


Results 1-22 of 22
  • Availability of systems with partially observable failures

    Publication Year: 1992, Page(s): 92-96
    Cited by: Papers (2)

    The availability is determined for the following kind of system. When the system is operational (up), it can fail in either of two modes; however, the system operator does not always diagnose the failure mode correctly. Given that a failure mode has occurred, it is diagnosed correctly with probability η and misdiagnosed with probability 1-η, where η may vary with the failure mode. The problem is modeled by a partially observable Markov process. Numerical results indicate that a high probability of misdiagnosis produces appreciably higher system downtime.

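    The abstract does not spell out the transition structure, so the following is a minimal sketch of the idea only: a continuous-time Markov chain in which each failure mode is diagnosed correctly with probability η and a misdiagnosed failure is repaired at a slower effective rate. The five-state structure and all rates are invented for illustration; the paper's partially observable model is richer.

```python
# Steady-state availability of a two-failure-mode system with imperfect
# diagnosis, modelled as a 5-state continuous-time Markov chain.
# Structure and rates are illustrative assumptions, not the paper's model.
import numpy as np

lam = [0.01, 0.02]   # failure rates of modes 1 and 2 (per hour)
eta = [0.9, 0.8]     # P(correct diagnosis | mode i failed)
mu  = [0.5, 0.4]     # repair rate after a correct diagnosis
kap = [0.1, 0.05]    # slower effective repair rate after a misdiagnosis

# States: 0 = up, 1/2 = mode 1 correctly/incorrectly diagnosed,
#         3/4 = mode 2 correctly/incorrectly diagnosed.
Q = np.zeros((5, 5))
Q[0, 1] = lam[0] * eta[0]
Q[0, 2] = lam[0] * (1 - eta[0])
Q[0, 3] = lam[1] * eta[1]
Q[0, 4] = lam[1] * (1 - eta[1])
Q[1, 0], Q[2, 0] = mu[0], kap[0]
Q[3, 0], Q[4, 0] = mu[1], kap[1]
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve pi Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(5)])
b = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"steady-state availability = {pi[0]:.4f}")
```

    Lowering η shifts probability mass into the slow-repair states, reproducing the abstract's observation that misdiagnosis drives downtime up.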
  • Goodness-of-fit tests for the power-law process

    Publication Year: 1992, Page(s): 107-111
    Cited by: Papers (8)

    The power-law process is often used as a model for reliability growth of complex systems or reliability of repairable systems. Goodness-of-fit tests are often required to check the hypothesis that failure data came from a power-law process. Three statistics, Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling, are considered for goodness-of-fit testing of a power-law process with failure-truncated data. Tables of critical values for the three statistics are presented, and the results of a power study are given under the alternative hypothesis that the failure data came from a nonhomogeneous Poisson process with log-linear intensity function. This power comparison is a new result that can guide the selection of a test statistic and sample size. The power study shows that the tests have acceptable power for some parameter values and that the Cramer-von Mises statistic has the highest power for sample sizes ≥ 20.

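    As a sketch of how such critical-value tables can be generated, the Monte Carlo below computes an approximate 5% critical value for the Cramer-von Mises statistic under failure truncation. The conditional transformation and the divisor in the beta estimate follow the standard treatment and are assumptions here; the paper's details may differ.

```python
# Monte Carlo critical values for a Cramer-von Mises goodness-of-fit test
# of the power-law process with failure-truncated data (a sketch).
import numpy as np

rng = np.random.default_rng(1)

def cvm_statistic(times):
    """CvM statistic for H0: power-law process, truncated at the n-th failure."""
    t = np.sort(times)
    n = len(t)
    beta_hat = (n - 1) / np.sum(np.log(t[-1] / t[:-1]))  # conditional MLE (assumed form)
    u = np.sort((t[:-1] / t[-1]) ** beta_hat)            # ~ Uniform(0,1) order stats under H0
    m = n - 1
    i = np.arange(1, m + 1)
    return 1.0 / (12 * m) + np.sum((u - (2 * i - 1) / (2 * m)) ** 2)

def simulate_plp(n, beta=1.0, lam=1.0):
    """First n failure times of a power-law NHPP with intensity lam*beta*t**(beta-1)."""
    s = np.cumsum(rng.exponential(size=n))   # unit-rate Poisson arrival times
    return (s / lam) ** (1.0 / beta)

# The statistic is invariant to beta and lam (both are estimated away),
# so critical values depend only on the number of failures n.
n, reps = 20, 20000
stats = [cvm_statistic(simulate_plp(n)) for _ in range(reps)]
print(f"approx. 5% critical value for n={n}: {np.quantile(stats, 0.95):.4f}")
```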
  • Quadratic statistics for the goodness-of-fit test of the inverse Gaussian distribution

    Publication Year: 1992, Page(s): 118-123

    The problem of using a quadratic test to examine the goodness of fit of an inverse Gaussian distribution with unknown parameters is discussed. Tables of approximate critical values of the Anderson-Darling, Cramer-von Mises, and Watson test statistics are presented in a format requiring only the sample size and the estimated value of the shape parameter. A relationship is found between the sample size and the critical values of these test statistics, eliminating the need to interpolate among the sample sizes given in the table. A power study shows that the proposed modified goodness-of-fit procedures have reasonably good power.

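    A sketch of the statistic itself, assuming the usual maximum-likelihood estimates and the Anderson-Darling form; the resulting value would be compared against the paper's tables, which are indexed by sample size and the estimated shape.

```python
# Anderson-Darling statistic for an inverse Gaussian fit with both
# parameters estimated (a sketch of the quadratic-statistic computation).
import numpy as np
from scipy.stats import invgauss

def ad_inverse_gaussian(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    mu_hat = x.mean()                              # MLE of the mean mu
    lam_hat = n / np.sum(1.0 / x - 1.0 / mu_hat)   # MLE of the shape lambda
    # scipy parameterization: IG(mu, lambda) == invgauss(mu/lambda, scale=lambda)
    z = invgauss.cdf(x, mu_hat / lam_hat, scale=lam_hat)
    i = np.arange(1, n + 1)
    a2 = -n - np.mean((2 * i - 1) * (np.log(z) + np.log1p(-z[::-1])))
    return a2, mu_hat, lam_hat

rng = np.random.default_rng(7)
sample = invgauss.rvs(0.5, scale=2.0, size=30, random_state=rng)
print(ad_inverse_gaussian(sample))
```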
  • Availability modeling for the Federal Aviation Administration

    Publication Year: 1992, Page(s): 97-106
    Cited by: Papers (2)

    Mathematical models developed to meet the FAA's special availability requirements are described. These models must account for frequency of failure, sharing of backup resources, service restoration time, and repair time. An application of one of the models to a particular FAA service is provided, and the results are compared with those obtained from two typical classical models. The comparison shows two things. First, the three models definitely do not agree, even within some tolerance for error; this alone does not indicate which model is correct, but it underscores differences between the models important enough that one must be preferred and the others abandoned. Second, increased fidelity is needed as the computed unavailability approaches zero, because of the heightened sensitivity to small periods of outage. Finally, given that the greatest source of FAA service failure is telephone-line problems over which the FAA has little control, the application of the model shows that supplying critical services with functional backup and fast automated switching is the most realistic way to reach the unavailability goal of 10⁻⁵.

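    A back-of-envelope illustration of the final point, with invented numbers far simpler than the paper's FAA models: a duplicated line with fast automatic switchover can reach the 10⁻⁵ region even though a single line cannot.

```python
# Back-of-envelope unavailability of a service on one telephone line vs.
# a duplicated line with fast automatic switchover.  All numbers are
# illustrative assumptions.
MTBF = 2000.0                      # h, mean time between line failures
MTTR = 4.0                         # h, mean time to repair a line
u_line = MTTR / (MTBF + MTTR)      # single-line unavailability

t_switch = 10.0 / 3600.0           # 10-s automated switchover, in hours
freq = 1.0 / (MTBF + MTTR)         # line failure frequency (per hour)

# With an independent backup, service is lost only while both lines are
# down, plus a brief outage on every switchover of the primary.
u_backup = u_line ** 2 + freq * t_switch
print(f"single line: {u_line:.2e}   with backup: {u_backup:.2e}")
```

    With these numbers the single line sits near 2 × 10⁻³, while the duplicated service lands in the 10⁻⁶ to 10⁻⁵ range.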
  • Note on disjoint products algorithms

    Publication Year: 1992, Page(s): 81-84
    Cited by: Papers (3)

    The Abraham-Locks revised (ALR) algorithm, given by M.O. Locks (see ibid., vol.R-36, p.445-53, Oct. 1987), and the Abraham-Locks-Wilson (ALW) algorithm, given by J.M. Wilson (see ibid., vol.39, p.42-6, Apr. 1990), are efficient systematic procedures for obtaining nearly minimal sum-of-disjoint-products (SDP) system-reliability formulas for coherent source-to-terminal networks. The two procedures differ only in the manner in which the minimal paths of the system are ordered, and are the same in all other respects. The same error was made in both papers, based on a misinterpretation of how the rapid Boolean inversion technique operates. As a result, each paper is missing a single term: the ALR 60-term formula for the sample problem should have 61 terms, and the ALW 58-term formula should have 59. This note revises the explanation of inversion and presents corrected system formulas, as well as a minimizing Boolean algorithm for building up disjoint subformulas.

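    Since the note is about term counts in SDP formulas, it is worth noting that any SDP formula can be cross-checked against the exact reliability obtained by brute-force state enumeration. The sketch below does this for the classic five-edge bridge network (an invented example, not the papers' sample problem).

```python
# Exact source-to-terminal reliability by enumerating all component states:
# an exponential-time but trustworthy cross-check for SDP formulas.
from itertools import product

# Minimal s-t paths of the 5-edge bridge network, as sets of edge indices
# (0: s-a, 1: a-t, 2: s-b, 3: b-t, 4: the a-b bridge).
paths = [{0, 1}, {2, 3}, {0, 4, 3}, {2, 4, 1}]
p = [0.9, 0.9, 0.9, 0.9, 0.9]         # edge reliabilities

rel = 0.0
for state in product([0, 1], repeat=len(p)):     # all 2^n edge states
    if any(all(state[e] for e in path) for path in paths):
        prob = 1.0
        for e, up in enumerate(state):
            prob *= p[e] if up else 1 - p[e]
        rel += prob
print(f"exact bridge reliability = {rel:.6f}")   # 0.978480 for p = 0.9
```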
  • A survey of reliability-prediction procedures for microelectronic devices

    Publication Year: 1992, Page(s): 2-12
    Cited by: Papers (28)

    The author reviews six current reliability-prediction procedures for microelectronic devices. The device models are described, and the parameters and parameter values used to calculate device failure rates are examined. The procedures are illustrated by using them to calculate the predicted failure rate for a 64K DRAM; the resulting failure rates are compared under a variety of assumptions. The models used in the procedures are similar in form, but they give very different predicted failure rates under similar operating and environmental conditions, and they show different sensitivities to changes in the conditions affecting the failure rates.

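    Most procedures of this kind share a multiplicative form: a base failure rate scaled by "pi" factors for temperature, environment, quality, and so on. The sketch below shows that generic form only; the factor names and values are placeholders, not the constants of any specific handbook or of the procedures surveyed.

```python
# Generic handbook-style prediction: base failure rate times pi factors.
# All values are hypothetical placeholders.
import math

def pi_temperature(t_junction_c, ea_ev=0.4, t_ref_c=25.0):
    """Arrhenius-type temperature acceleration factor (assumed form)."""
    k = 8.617e-5                                  # Boltzmann constant, eV/K
    tj, tr = t_junction_c + 273.15, t_ref_c + 273.15
    return math.exp(ea_ev / k * (1.0 / tr - 1.0 / tj))

lambda_base = 0.05                 # base rate, failures per 1e6 h (hypothetical)
pi_t = pi_temperature(70.0)        # temperature factor at Tj = 70 C
pi_e = 4.0                         # environment factor (hypothetical)
pi_q = 2.0                         # quality factor (hypothetical)

print(f"predicted rate = {lambda_base * pi_t * pi_e * pi_q:.2f} per 1e6 h")
```

    The survey's point is that procedures sharing this general form can still disagree sharply, because they differ in base rates, factor values, and sensitivities.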
  • Cost optimization of maintenance scheduling for a system with assured reliability

    Publication Year: 1992, Page(s): 21-25
    Cited by: Papers (11) | Patents (1)

    Systems that must operate at or below a maximum acceptable failure rate should be maintained at predetermined points so that the failure rate never exceeds the acceptable level. Maintenance lowers the failure rate of an aging system, but unless the system is replaced it does not restore the system to its original state. A branching algorithm with effective dominance rules that curtail the number of nodes created is presented; the algorithm determines the number of maintenance interventions before each replacement so as to minimize the total cost over a finite time horizon. The model considers inflationary trends. A numerical example and computational experience are presented. The maintenance cost is treated as constant and successive simple-maintenance intervals as decreasing; although the cost per maintenance is assumed constant, any increasing maintenance-cost function could be incorporated. The optimum solutions depend on the constant improvement factor, the first simple-maintenance point, the rate of increase in acquisition cost, the maintenance cost factor, and the planning period.

  • Systems formulation of a theory of diagnosis from first principles

    Publication Year: 1992, Page(s): 38-48
    Cited by: Papers (3)

    The author reformulates the theory of diagnosis given by R. Reiter (see Artificial Intelligence, vol.32, no.1, p.57-95, 1987) in a systems-theory framework and extends it to cover admissible fault models explicitly. The reformulation uses straightforward set-theoretic and algebraic concepts to characterize the main theorem relating diagnoses and conflict sets. Weak and strong diagnoses are distinguished, and it is shown that nonminimal strong diagnoses (multiple faults) may arise when the class of admissible fault models of components is restricted. The author argues that the effectiveness of troubleshooting can be greatly enhanced by taking such diagnoses into account, since the nature of the admissible fault-model classes can dramatically affect the diagnoses generated. In particular, diagnoses that are not based on models of potential fault behaviors may be quite deceptive in relation to actual failed-system behavior. The full family of strong diagnoses, although potentially much more computationally demanding than the minimal diagnoses, should be taken as the basis for troubleshooting.

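    In Reiter's theory, the (weak) diagnoses are exactly the minimal hitting sets of the conflict sets. A brute-force sketch for small systems, with invented conflict sets:

```python
# Minimal hitting sets of a family of conflict sets = Reiter-style diagnoses.
# Brute force over candidate sets of increasing size; fine for small systems.
from itertools import combinations

def minimal_hitting_sets(conflicts):
    components = sorted(set().union(*conflicts))
    found = []
    for size in range(1, len(components) + 1):
        for cand in combinations(components, size):
            s = set(cand)
            if all(s & c for c in conflicts):        # hits every conflict set
                if not any(h <= s for h in found):   # keep only minimal ones
                    found.append(s)
    return found

# Hypothetical conflict sets over components A1, A2, M1, M2.
conflicts = [{"A1", "A2"}, {"A2", "M1", "M2"}, {"A1", "M2"}]
print(minimal_hitting_sets(conflicts))
```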
  • Failure-mechanism models for excessive elastic deformation

    Publication Year: 1992, Page(s): 149-154
    Cited by: Papers (11)

    Design situations where excessive elastic deformation can compromise system performance, thereby acting as a failure mechanism, are illustrated. Models based on continuum mechanics for designing against such failures are presented. Two examples illustrate the use of these models in practical design situations in electronic packaging and in mechanical systems. The examples use deterministic models for simplicity of presentation; in reality, each variable in a design equation is uncertain (e.g., geometric dimensions, material properties, applied loads), and if these uncertainties are important they must be accounted for by the usual statistical methods.

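    A minimal worked example in the spirit of the paper's deterministic design checks: a cantilever whose tip deflection under load must stay below a clearance limit. Geometry, load, and limit are invented for illustration.

```python
# Excessive-elastic-deformation check: cantilever tip deflection
# delta = P*L^3 / (3*E*I) compared against an allowable clearance.
P = 200.0            # end load, N
L = 0.10             # beam length, m
E = 70e9             # Young's modulus (aluminium), Pa
b, h = 0.02, 0.01    # rectangular cross-section width and depth, m
I = b * h**3 / 12    # second moment of area, m^4

delta = P * L**3 / (3 * E * I)
limit = 0.002        # allowable tip deflection, m
status = "OK" if delta <= limit else "FAILURE: excessive elastic deformation"
print(f"tip deflection = {delta * 1000:.2f} mm -> {status}")
```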
  • The measure of effect in an expert system for automatic generation of fault trees

    Publication Year: 1992, Page(s): 57-62

    The authors introduce the measure of effect, which reflects both the probability of a cause factor and its effect on an event, in the context of an expert system for automatic generation of fault trees. Its properties and applications are investigated and several theorems are proven. Using this measure, events that have only a minor effect on the resultant events in real cases can be deleted, making the fault trees generated in the first stage simpler and more practical. An example illustrates the use of the measure of effect in the inference procedure.

  • A Bayes classifier when the class distributions come from a common multivariate normal distribution

    Publication Year: 1992, Page(s): 124-126
    Cited by: Papers (6)

    Let (n-1) measurements be taken on a component of some manufactured product prior to the manufacture of the product. The author wants to decide whether to keep or reject this component before it enters the manufacturing process, based on the relationship of these measurements to some post-manufacture measurement on the finished product. Let all measurements (`before' and `after'), taken together, form an n-dimensional random vector described by a multivariate normal distribution. The mathematical relationships necessary for the design of a Bayes classifier for the component are derived. The classifier has a relatively simple form and can easily be implemented on a personal computer. There are two situations where this new classifier is an attractive alternative to the traditional classifier: (1) the assumption of two distinct normal distributions for the good and bad classes is theoretically untenable, and (2) the estimation of two different covariance matrices would be difficult for economic or other practical reasons.

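    A sketch of the classifier's core computation under the paper's single-joint-normal assumption: condition the post-manufacture measurement on the n-1 pre-manufacture measurements and keep the component when the predicted probability of meeting spec is high enough. The mean vector, covariance matrix, spec limits, and threshold are illustrative.

```python
# Keep/reject rule from the conditional normal distribution of the final
# measurement given the pre-manufacture ones (illustrative numbers).
import numpy as np
from scipy.stats import norm

mu = np.array([1.0, 2.0, 5.0])        # last entry = finished-product measurement
Sigma = np.array([[0.10, 0.02, 0.05],
                  [0.02, 0.20, 0.08],
                  [0.05, 0.08, 0.30]])

def keep(x, lo=4.0, hi=6.0, min_prob=0.9):
    """Keep the component if P(final measurement within spec | x) >= min_prob."""
    S_xx, S_yx = Sigma[:-1, :-1], Sigma[-1, :-1]
    w = np.linalg.solve(S_xx, S_yx)          # regression weights
    m = mu[-1] + w @ (x - mu[:-1])           # conditional mean
    v = Sigma[-1, -1] - S_yx @ w             # conditional variance
    p = norm.cdf(hi, m, np.sqrt(v)) - norm.cdf(lo, m, np.sqrt(v))
    return p >= min_prob, p

print(keep(np.array([1.1, 2.3])))
```

    The fixed min_prob threshold stands in for the misclassification-cost ratio that a full Bayes rule would use.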
  • Availability of the crystallization system in the sugar industry under common-cause failure

    Publication Year: 1992, Page(s): 85-91

    Some results from an analytic study of the reliability and availability of the crystallizer system in sugar plants are presented. The analytic model was developed in a study of an actual plant. The crystallizer system consists of five basic repairable subsystems in series, each of which is in one of three states: good, reduced, or failed. Some subsystems can fail together due to a common cause. The model is based on the Chapman-Kolmogorov equations; steady-state availability and various state probabilities are derived using Laplace transforms. The usefulness of the study is demonstrated through illustrations.

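    The modelling step for one subsystem can be sketched directly: write the Chapman-Kolmogorov balance equations for the good/reduced/failed states, including a common-cause jump straight from good to failed, and solve for the steady state (the paper obtains the state probabilities via Laplace transforms and couples five such subsystems in series). Rates are illustrative.

```python
# Steady-state probabilities of a 3-state repairable subsystem with a
# common-cause transition from good directly to failed (illustrative rates).
import numpy as np

a, b, g = 0.02, 0.05, 0.005   # good->reduced, reduced->failed, common-cause good->failed
r1, r2 = 0.5, 0.1             # reduced->good and failed->good repair rates

Q = np.array([[-(a + g),  a,         g],
              [ r1,      -(r1 + b),  b],
              [ r2,       0.0,      -r2]])

A = Q.T.copy()
A[-1, :] = 1.0                # replace one balance equation by sum(pi) = 1
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(f"P(good)={pi[0]:.4f}  P(reduced)={pi[1]:.4f}  P(failed)={pi[2]:.4f}")
print(f"availability (good or reduced) = {pi[0] + pi[1]:.4f}")
```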
  • Modified KS, AD, and C-vM tests for the Pareto distribution with unknown location and scale parameters

    Publication Year: 1992, Page(s): 112-117
    Cited by: Papers (1)

    Standard goodness-of-fit tests based on the empirical CDF (EDF) require continuous underlying distributions with all parameters specified. Three modified EDF-type tests, the Kolmogorov-Smirnov (K-S), Anderson-Darling (A-D), and Cramer-von Mises (C-vM), are developed for the Pareto distribution with unknown location and scale parameters and known shape parameter. The unknown parameters are estimated using best linear unbiased estimators. For each test, Monte Carlo techniques are used to generate critical values for sample sizes 5(5)30 and Pareto shape parameters 0.5(0.5)4.0. The powers of the modified tests are investigated under eight alternative distributions. In most cases the powers of the modified K-S, A-D, and C-vM tests are considerably higher than that of the chi-square test. Finally, a functional relationship is identified between the modified K-S and C-vM test statistics and the Pareto shape parameter. Powerful goodness-of-fit tests that complement the best linear unbiased estimates are thus provided.

  • Minimal paths and cuts of networks exposed to common-cause failures

    Publication Year: 1992, Page(s): 76-80, 84
    Cited by: Papers (10)

    A method is suggested for determining the minimal cuts and paths of a general network with common-cause failures. The minimal paths (cuts) are deduced from the simple network minimal paths (cuts) obtained when common-cause failures are ignored, through appropriate manipulations of the sets of path (cut) branches and the sets of branches interrupted by common-cause failures. Calculation procedures are presented for effective application of the method, to evaluate the reliability indices of the network while taking into account the statistical dependence of the failure events. Two examples are included.

  • Hazard-rate tolerance method for an opportunistic-replacement policy

    Publication Year: 1992, Page(s): 13-20
    Cited by: Papers (2)

    A model for a system with several types of units is presented. A unit is replaced at failure or when its hazard (failure) rate exceeds limit L, whichever occurs first. When a unit is replaced because its hazard rate reaches L, all operating units whose hazard rates fall in the interval (L-u, L) are replaced with it. This policy allows joint replacements and avoids the disadvantages of replacing new units, downtime, and unrealistic assumptions about unit-life distributions. The long-run cost rate is derived, and optimal L and u are obtained to minimize the average total replacement cost rate. Application and analysis of the results are demonstrated through a numerical example.

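    A sketch of the replacement rule for units with increasing Weibull hazards: a unit is replaced when its hazard reaches L, and any unit whose hazard already exceeds L-u is replaced at the same stoppage. Parameters are illustrative; the paper's contribution, optimizing L and u against the long-run cost rate, is not reproduced here.

```python
# Hazard-rate tolerance rule for Weibull units (illustrative parameters).
def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1)

def age_at_hazard(h, beta, eta):
    """Age at which an increasing Weibull hazard (beta > 1) first reaches h."""
    return eta * (h * eta / beta) ** (1.0 / (beta - 1))

beta, eta = 2.5, 1000.0       # shape and scale, hours
L, u = 0.004, 0.0015          # hazard limit and tolerance band

print(f"individual replacement age: {age_at_hazard(L, beta, eta):.0f} h")
print(f"joint-replacement age:      {age_at_hazard(L - u, beta, eta):.0f} h")
for age in [300.0, 700.0, 1100.0, 1500.0]:
    h = weibull_hazard(age, beta, eta)
    action = "replace now" if h >= L else ("replace jointly" if h > L - u else "keep")
    print(f"unit age {age:6.0f} h, hazard {h:.5f} -> {action}")
```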
  • Fault-tolerance considerations for redundant binary-tree-dynamic random-access-memory (RAM) chips

    Publication Year: 1992, Page(s): 139-148
    Cited by: Papers (3) | Patents (1)

    The binary-tree dynamic RAM (TRAM) architecture has been proposed to overcome the performance and testing-time limits of the traditional memory-chip architecture. A 64-Mb prototype of this architecture is being built. The author investigates the manufacturing yield and operational performance of redundant TRAMs with respect to variation of tree depth and redundancy level. For this purpose, a chip-area-based figure of merit for yield and operational performance, allowing the comparison of various choices, has been formulated and used. The yield is evaluated by a new Markov-chain-based model. The memory operational performance is analyzed by a technique that replaces the notion of chip state at the end of the mission time with the cumulative work performed by the chip during the mission time (performability). Optimum values of tree depth and redundancy level are found for a given RAM size, the adopted reconfiguration strategy, and the kinds of redundancy.

  • Terminal-pair reliability of three-type computer communication networks

    Publication Year: 1992, Page(s): 49-56
    Cited by: Papers (2)

    The terminal-pair reliabilities, between the root and a leaf, of the two-center binary tree, the X-tree, and the ring-tree are computed; the beheaded binary tree is used as a benchmark. A building block is identified in the two-center binary tree from which a decomposition method is formulated. Another building block is identified for the X-tree and ring-tree, from which a truss-transformation method is obtained. Computation has been carried out using algorithms based on the analysis. Although the ring-tree is the most reliable over all practical ranges of link and node reliabilities, the X-tree and two-center binary tree are also good candidates, because link reliability over 0.95 is quite common and node reliability can be kept very high. The X-tree is particularly attractive because of its lower connectivity at each node and hence lower implementation complexity. Three computational methods are presented. The simplicity of the two-center binary-tree algorithm matches the hierarchical structure of the network itself, because the computation at each level can be summarized by a few reliability subcomponents.

  • Simple enumeration of minimal cutsets separating 2 vertices in a class of undirected planar graphs

    Publication Year: 1992, Page(s): 63-71
    Cited by: Papers (3)

    The problem of enumerating all the minimal cutsets separating two specified vertices s and t in a class of undirected planar graphs, called D-S (delta-star) reducible graphs, is considered. The problem is handled by a new enumeration approach based on graph reductions that preserve minimal cutsets, such that a graph with complex structure is transformed into a single edge connecting s and t by recursive application. Some classes of undirected planar graphs, such as series-parallel and wheel graphs, are shown to be D-S reducible. The approach yields an enumeration algorithm that is polynomial-time in the total number of vertices; it is illustrated with a numerical example, and its efficiency is shown through computational experience.

  • Reliability of a k-out-of-n warm-standby system

    Publication Year: 1992, Page(s): 72-75
    Cited by: Papers (18)

    A general closed-form equation is developed for the system reliability of a k-out-of-n warm-standby system (dormant failures). The equation reduces to the hot- and cold-standby cases under the appropriate restrictions.

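    The paper's general warm-standby equation is not reproduced here, but the two limiting cases it reduces to are classical; a sketch for exponential units with perfect switching (assumptions: k units carry the load and there is no repair):

```python
# Limiting cases of k-out-of-n standby reliability for exponential units.
from math import comb, exp, factorial

def hot_standby(k, n, lam, t):
    """All n units energized: the binomial tail with r = exp(-lam*t)."""
    r = exp(-lam * t)
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

def cold_standby(k, n, lam, t):
    """Dormant spares that cannot fail: failures arrive at rate k*lam, so
    system life is Erlang(n-k+1, k*lam) and R(t) is a Poisson tail."""
    x = k * lam * t
    return exp(-x) * sum(x**j / factorial(j) for j in range(n - k + 1))

k, n, lam, t = 2, 4, 1e-3, 500.0
print(f"hot standby : {hot_standby(k, n, lam, t):.6f}")
print(f"cold standby: {cold_standby(k, n, lam, t):.6f}")
```

    Warm standby, in which dormant units fail at a reduced but nonzero rate, lies between these two values.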
  • Sample sizes for estimating the Weibull hazard function from censored samples

    Publication Year: 1992, Page(s): 133-138

    An important part of planning a life test is specifying the sample size needed to achieve a specified degree of precision from the experiment. The hazard function is frequently used as a decision criterion, especially in replacement decisions. It is shown how to choose the sample size needed to estimate a point on the hazard function with a specified degree of precision. An easy-to-use graph is provided for the Weibull distribution and a life test terminated after a prespecified amount of time. How to apply the methodology with other time-to-failure distributions and other kinds of life tests is described.

  • Prediction intervals for future failures in the exponential distribution under hybrid censoring

    Publication Year: 1992, Page(s): 127-132
    Cited by: Papers (1)

    A life test under the one-parameter exponential distribution is considered, with data available from hybrid censored sampling. Methods are derived for predicting future failures, given a record of observed failures, and prediction intervals for future failures are discussed.

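    A naive plug-in sketch of the prediction problem, on simulated data with invented parameters: estimate the mean life from the hybrid censored sample and, by memorylessness, predict the next failure among the survivors. The paper's intervals additionally account for the uncertainty in the estimate, which the plug-in approach ignores.

```python
# Plug-in prediction of the next failure after hybrid censoring of an
# exponential life test (stop at the r-th failure or at time T).
import numpy as np

rng = np.random.default_rng(3)
n, r, T, theta = 20, 8, 100.0, 150.0   # units on test, limits, true mean life

life = np.sort(rng.exponential(theta, n))
stop = min(life[r - 1], T)             # hybrid censoring time
obs = life[life <= stop]               # observed failure times
k, m = len(obs), n - len(obs)          # failures seen, units still running

ttt = obs.sum() + m * stop             # total time on test
theta_hat = ttt / k                    # MLE of the mean life

# By memorylessness, the wait past `stop` until the next of the m
# survivors fails is exponential with mean theta/m.
q = rng.exponential(theta_hat / m, 100_000)
lo, hi = np.quantile(stop + q, [0.025, 0.975])
print(f"theta_hat = {theta_hat:.1f}")
print(f"~95% plug-in prediction interval for the next failure: ({lo:.1f}, {hi:.1f})")
```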
  • A comparative evaluation of four basic system-level diagnosis strategies for hypercubes

    Publication Year: 1992, Page(s): 26-37
    Cited by: Papers (20)

    The approach of mutual-testing-based system diagnosis is considered for its cost effectiveness in diagnosing n-cubes (n-dimensional hypercube multicomputer systems): processors test each other, and the test results are collected and analyzed to determine the faulty processors. Four basic diagnosis strategies based on this approach are considered for n-cubes: one-t, one-t'/t', seq-t, and seq-t'/t'. One-t and one-t'/t' are one-step strategies involving only one test phase and one repair phase; the goal is to identify and replace all faulty processors through one mutual-test phase. The other two are sequential strategies involving multiple iterative test and repair phases.


Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong