IEEE Transactions on Reliability

Issue 4 • Dec. 1997

Displaying Results 1 - 13 of 13
  • Comment on: dynamic reliability analysis of coherent multistate systems

    Publication Year: 1997 , Page(s): 460 - 461
    Cited by:  Papers (5)

    The authors comment on the paper by J. Xue and K. Yang (see ibid., vol. 44, p. 683-8, 1995). Some typographical errors and their corrected versions are given. Two of the errors were pointed out by the original authors in their correspondence with the responsible editor. Some comments on the original paper are also given.

  • 1997 Index

    Publication Year: 1997 , Page(s): 537 - 544
  • Coordinated warranty and burn-in strategies

    Publication Year: 1997 , Page(s): 512 - 518

    In product reliability assurance, the warranty and burn-in (W&BI) strategies are usually selected separately, despite the fact that both depend on the early-life failure behavior of the product. This paper treats W&BI strategies together in order to examine the possible benefits of coordinated strategies for product performance management. As these strategies are meaningful only for decreasing hazard-rate systems, a Weibull life distribution is assumed for each system component. A net-profit model that includes an increase in product price as a function of warranty duration is constructed. The model shows how a coordinated W&BI strategy can be selected. The model is quite general and its extension to other cases is explained. A central point that is treated thoroughly is the renewal analysis necessary to determine replacement costs during burn-in and during the warranty period. As part of the analysis, a useful approximation is defined, and efficient optimization routines are identified. An example illustrates the use of analytical methods. The analysis and discussion of the example show that there are advantages in coordinating the selection of W&BI strategies.

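    For orientation, the standard Weibull hazard rate that underlies the decreasing-hazard assumption (textbook background with the usual shape/scale notation, not a result of the paper) is

        \[ h(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1}, \qquad t > 0, \]

    which is strictly decreasing precisely when the shape parameter satisfies \beta < 1; this is the early-failure regime in which burn-in can be worthwhile.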
  • Redundancy optimization of static series-parallel reliability models under uncertainty

    Publication Year: 1997 , Page(s): 503 - 511
    Cited by:  Papers (4)

    This paper extends the classical model of Ushakov on redundancy optimization of series-parallel static coherent reliability systems with uncertainty in the system parameters. The objective function represents the total capacity of a series-parallel static system, while the decision parameters are the nominal capacity and the availability of the elements. The authors obtain explicit expressions (both analytic and via efficient simulation) for the constraint of the program, viz., for the Cdf of the system's total capacity, and then show that the extended program is a convex mixed-integer program. Depending on whether the objective function and the associated constraints are analytically available or not, they suggest using deterministic and stochastic (simulation) optimization approaches, respectively. The latter case relies on likelihood ratios (a change of probability measure). A genetic algorithm for finding the optimal redundancy is developed, and supporting numerical results are presented.

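    As a rough illustration of the quantity being constrained, the sketch below estimates the Cdf of the total capacity of a series-parallel system by naive Monte Carlo. The structure, capacities, and availabilities are invented for the example; the paper itself works with analytic expressions and likelihood-ratio (importance-sampling) simulation rather than this brute-force approach.

        import random

        # A series-parallel system: a list of subsystems, each a list of
        # (nominal_capacity, availability) pairs for its redundant elements.
        # System capacity = the minimum over subsystems of the summed capacity
        # of whichever elements happen to be up.  (Illustrative data only.)
        system = [
            [(5.0, 0.90), (5.0, 0.90), (5.0, 0.90)],  # subsystem 1: three 5-unit elements
            [(8.0, 0.95), (8.0, 0.95)],               # subsystem 2: two 8-unit elements
        ]

        def sampled_capacity(system):
            return min(
                sum(cap for cap, avail in subsystem if random.random() < avail)
                for subsystem in system
            )

        def capacity_cdf(system, level, trials=100_000):
            """Naive Monte Carlo estimate of P(total system capacity <= level)."""
            hits = sum(sampled_capacity(system) <= level for _ in range(trials))
            return hits / trials

        print(capacity_cdf(system, level=10.0))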
  • Exact and approximate improvement to the throughput of a stochastic network

    Publication Year: 1997 , Page(s): 473 - 486
    Cited by:  Papers (4)

    This paper presents a model for throughput improvement in stochastic networks. It synthesizes and extends the state of knowledge on determining the mean value, the lower bound (LB), and the upper bound (UB) for the maximum-flow network-design problem. Through the use of an artificial-intelligence programming language such as PROLOG, the LB model was solved very efficiently for the first time. The key lies in using depth-first search to generate the flow paths, which are then fed directly into a linear programming (LP) or mixed-integer programming (MIP) package. In all networks tested, the LB solution is a better approximation to the exact solution than the UB, which is consistent with theoretical findings. This finding is important in that the 7 acyclic networks represent a diversity of geometry and of survival probabilities. Another advantage of using the LB model and the path-enumeration solution strategy is that approximate measures of network reliability and vulnerability are readily available from the solutions. Built upon the idea of most-probable states, the reliability UB and the vulnerability measure are related to the paths actually used. The paths used are but a small fraction of the paths generated, and the number of generated paths is in turn tiny compared to the 2^n probability states. A minute fraction of the total number of failure states accounts for most of the state-space probability, a fact that is further reinforced by the prevalence of a zero-throughput state in the flow distributions.

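    The depth-first path-generation step described above can be sketched as follows. The paper used PROLOG; this is only an illustrative Python rendering on an invented 4-node network.

        # Depth-first enumeration of all simple s-t paths in a directed graph,
        # the step performed before handing the paths to an LP/MIP package.
        graph = {              # adjacency list: node -> successors (illustrative)
            's': ['a', 'b'],
            'a': ['b', 't'],
            'b': ['t'],
            't': [],
        }

        def all_paths(graph, node, target, path=None):
            path = (path or []) + [node]
            if node == target:
                yield path
                return
            for nxt in graph[node]:
                if nxt not in path:        # keep paths simple (no repeated nodes)
                    yield from all_paths(graph, nxt, target, path)

        for p in all_paths(graph, 's', 't'):
            print(p)
        # ['s', 'a', 'b', 't'], ['s', 'a', 't'], ['s', 'b', 't']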
  • Uniqueness of maximum likelihood estimators of the 2-parameter Weibull distribution

    Publication Year: 1997 , Page(s): 523 - 525

    We present a simple statistic, calculated from either complete failure data or from right-censored data of type-I or -II. It is useful for understanding the behavior of the parameter maximum likelihood estimates (MLE) of a 2-parameter Weibull distribution. The statistic is based on the logarithms of the failure data and can be interpreted as a measure of variation in the data. This statistic provides: (a) simple lower bounds on the parameter MLE, and (b) a quick approximation for parameter estimates that can serve as starting points for iterative MLE routines; it can be used to show that the MLE for the 2-parameter Weibull distribution are unique.

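    For reference, the standard likelihood equations for a complete sample t_1, ..., t_n from a 2-parameter Weibull distribution (the paper's statistic, not reproduced here, supplies bounds and starting points for solving the first of them) are

        \[ \frac{\sum_i t_i^{\hat\beta} \ln t_i}{\sum_i t_i^{\hat\beta}} - \frac{1}{\hat\beta} = \frac{1}{n}\sum_i \ln t_i, \qquad \hat\eta = \left(\frac{1}{n}\sum_i t_i^{\hat\beta}\right)^{1/\hat\beta}, \]

    where \hat\beta is the shape estimate and \hat\eta the scale estimate.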
  • Defect, Fault, Error,..., or Failure?

    Publication Year: 1997 , Page(s): 450 - 451
    Cited by:  Papers (3)  |  Patents (1)

  • Reliability-based optimal task-allocation in distributed-database management systems

    Publication Year: 1997 , Page(s): 452 - 459
    Cited by:  Papers (3)

    A distributed database management system (DDBMS) manages a collection of data stored at various processing nodes of a computer network. Global queries are processed by the DDBMS, and tasks (the join operations in these queries) are assigned to various nodes in the network. Optimal allocation of these tasks is discussed, with reliability as the objective function and communication cost as the constraint function. A model for reliability-based task allocation is developed; it is converted into a state-space search tree and implemented using a branch-and-bound algorithm. The query is treated as a multiple-join problem (MJP). Reliable task allocations for all permutations of the MJP are found, and the permutation with the minimum data-transmission cost is identified. A method is also suggested for calculating the reliability of the path that a DDBMS network follows to deliver a message to the receiver, given the number of spoolers.

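    A heavily simplified sketch of the kind of search involved is given below. The reliability measure (product of the reliabilities of the nodes used), the toy cost function, and the data are all invented for illustration; the paper organizes the search as a branch-and-bound over a state-space tree rather than the exhaustive enumeration shown here.

        from itertools import product

        node_reliability = {0: 0.99, 1: 0.95, 2: 0.90}   # illustrative values
        tasks = [0, 1, 2]                                # join tasks of one global query
        budget = 12                                      # communication-cost constraint

        def comm_cost(task, node):
            return (task + 1) * (node + 1)               # toy cost model, not the paper's

        best = None
        for assignment in product(node_reliability, repeat=len(tasks)):
            cost = sum(comm_cost(t, n) for t, n in zip(tasks, assignment))
            if cost > budget:
                continue                                 # infeasible: violates the cost constraint
            reliability = 1.0
            for n in set(assignment):                    # every node that hosts a task must be up
                reliability *= node_reliability[n]
            if best is None or reliability > best[0]:
                best = (reliability, assignment, cost)

        print(best)   # (reliability, task-to-node assignment, communication cost)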
  • Egocentric voting algorithms

    Publication Year: 1997 , Page(s): 494 - 502
    Cited by:  Papers (3)

    An important problem in distributed systems is distributed agreement. One form of distributed agreement is approximate agreement (AA), in which non-faulty processes need to agree on values within a predefined tolerance. This paper partitions AA voting algorithms into 3 broad categories: anonymous, egophobic, and egocentric. Each category is further subdivided into families of algorithms. One such family of voting algorithms, belonging to the egocentric category, is examined. Some members of this family have previously been analyzed individually, in an ad-hoc manner, under an overly conservative fault model in which all faults are presumed to behave in the worst-case Byzantine manner. This paper develops a methodology to determine quickly the fault tolerance and convergence rate of any member of this family under a hybrid fault model consisting of asymmetric, symmetric, and benign faults. The results are weighed against those of several known voting algorithms, and a sub-family of egocentric algorithms with optimal performance is identified. Traditionally, egocentric algorithms used the entire voting multiset, as in the fault-tolerant mean, to reach a single voted value, and it was not known how the distribution of selected elements would affect the convergence rate. Here, convergence is improved considerably if the appropriate largest and smallest data items are not included in the selected multiset.

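    The fault-tolerant mean voter mentioned above can be sketched as follows (a minimal illustration, assuming t denotes the number of faults to be tolerated; the particular egocentric variants analyzed in the paper select their multisets differently).

        def fault_tolerant_mean(values, t):
            """Discard the t smallest and t largest votes, then average the rest."""
            if len(values) <= 2 * t:
                raise ValueError("need more than 2*t values")
            trimmed = sorted(values)[t:len(values) - t]
            return sum(trimmed) / len(trimmed)

        votes = [10.0, 10.2, 9.9, 10.1, 55.0]    # one wildly faulty value
        print(fault_tolerant_mean(votes, t=1))   # ~10.1; the outlier never enters the average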
  • Closed-form expressions for distribution of sum of exponential random variables

    Publication Year: 1997 , Page(s): 519 - 522
    Cited by:  Papers (48)

    In many systems which are composed of components with exponentially distributed lifetimes, the system failure time can be expressed as a sum of exponentially distributed random variables. A previous paper mentions that there seems to be no convenient closed-form expression for all cases of this problem. This is because in one case the expression involves high-order derivatives of products of multiple functions. The authors prove a simple, intuitive multi-function generalization of the Leibnitz rule for high-order derivatives of products of two functions and use this to simplify the expression, thus giving a closed-form solution for this problem. They similarly simplify the state-occupancy probabilities in general Markov models.

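    For the simpler case in which the n exponential rates \lambda_1, ..., \lambda_n are pairwise distinct, the density of the sum has the well-known form

        \[ f(t) = \sum_{i=1}^{n} \left( \prod_{j \ne i} \frac{\lambda_j}{\lambda_j - \lambda_i} \right) \lambda_i e^{-\lambda_i t}, \qquad t \ge 0. \]

    The hard case addressed by the paper is that of repeated rates, where this partial-fraction form breaks down and the high-order derivatives (hence the generalized Leibnitz rule) enter.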
  • System-reliability confidence-intervals for complex-systems with estimated component-reliability

    Publication Year: 1997 , Page(s): 487 - 493
    Cited by:  Papers (31)

    A flexible procedure is described and demonstrated for determining approximate confidence intervals for system reliability when there is uncertainty regarding the component-reliability information. The approach is robust and applies to many system-design configurations and component time-to-failure distributions, resulting in few restrictions on the use of these confidence intervals. The methods do not require any parametric assumptions for component reliability or time-to-failure, and they allow type-I or -II censored data records. The confidence intervals are based on the variance of the component and system reliability estimates and on a lognormal distribution assumption for the system reliability estimate. The approach applies to any system design which can be decomposed into series and/or parallel connections between the components. To evaluate the validity of the confidence limits, numerous simulations were performed for two hypothetical systems with different data sample sizes and confidence levels. The test cases and empirical results demonstrate that this new method for estimating confidence intervals provides good coverage, can be readily applied, requires only minimal computational effort, and applies to a much greater range of design configurations and data types than other methods. For many design problems, these confidence intervals are preferable because there is no requirement for an exponential time-to-failure distribution, nor are the component data limited to binomial form.

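    A generic form of such a lognormal-based interval, shown here only as background (the paper's variance propagation through the series/parallel decomposition is not reproduced), is

        \[ \left[ \hat R_s \, e^{-z_{\alpha/2}\,\hat\sigma}, \;\; \hat R_s \, e^{+z_{\alpha/2}\,\hat\sigma} \right], \qquad \hat\sigma \approx \frac{\sqrt{\widehat{\mathrm{Var}}(\hat R_s)}}{\hat R_s}, \]

    where \hat R_s is the system-reliability estimate, the delta method links the variance of \hat R_s to that of \ln \hat R_s, and the upper limit can be truncated at 1.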
  • Optimal information-dispersal for increasing the reliability of a distributed service

    Publication Year: 1997 , Page(s): 462 - 472
    Cited by:  Papers (3)

    This paper investigates the (m,n) information dispersal scheme (IDS) used to support fault-tolerant distributed servers in a distributed system. In an (m,n)-IDS, a file M is broken into n pieces such that any m of the pieces suffice for reconstructing M. The reliability of an (m,n)-IDS is primarily determined by 3 important factors: the information dispersal degree (IDD) n, the information expansion ratio (IER) n/m, and the success probability Ps of acquiring a correct piece. It is difficult to determine the optimal IDS with the highest reliability from the very many choices. Our analysis shows several novel features of the (m,n)-IDS which can help reduce the complexity of finding the optimal IDS with the highest reliability; it also shows that an IDS with a higher IER might not have a higher reliability, even when Ps→1. Based on the theorems given herein, we have developed a method that reduces the complexity of computing the highest reliability from O(ν) [ν = number of servers] to O(1) when the upper bound of the IER is 1, or from O(ν^2) to O(1) when the upper bound of the IER is greater than 1.

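    Under the simplest reading of the abstract, with the n pieces acquired independently and each correct with probability P_s, the reliability of an (m,n)-IDS is the probability that at least m of the n pieces are correct:

        \[ R(m,n) = \sum_{k=m}^{n} \binom{n}{k} P_s^{\,k} (1 - P_s)^{\,n-k}. \]

    The paper's full model, which also involves the ν servers holding the pieces, is richer than this sketch.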
  • Semi-Markov models with an application to power-plant reliability analysis

    Publication Year: 1997 , Page(s): 526 - 532
    Cited by:  Papers (5)

    Systems with (1) a finite number of states and (2) random holding times in each state are often modeled using semi-Markov processes. For general holding-time distributions, closed-form expressions for the transition probabilities and the average availability are usually not available. Recursion procedures are derived to approximate these quantities for arbitrarily distributed holding times; these procedures are then used to fit a semi-Markov model with Weibull-distributed holding times to actual power-plant operating data. The results are compared to the more familiar Markov models; the semi-Markov model with Weibull holding times fits the data remarkably well. In particular, comparing the transition probabilities shows that the probability of the system being in the refitting state converges to its limiting value more quickly than in the Markov model. This could be because the distribution of the holding times in this state is rather unlike the exponential distribution. The more flexible semi-Markov model with Weibull holding times describes the operating characteristics of power plants more accurately and produces a better fit to the actual operating data.

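    The transition probabilities being approximated satisfy the standard Markov renewal equation (general semi-Markov background, not specific to this paper):

        \[ P_{ij}(t) = \delta_{ij}\,\bigl(1 - H_i(t)\bigr) + \sum_{k} \int_{0}^{t} P_{kj}(t-s)\, dQ_{ik}(s), \qquad H_i(t) = \sum_{k} Q_{ik}(t), \]

    where Q_{ik}(t) is the semi-Markov kernel (the probability of jumping to state k within time t of entering state i). The recursion procedures described above approximate solutions of equations of this type for Weibull holding-time distributions.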

Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.

Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong