IEEE Transactions on Reliability

Issue 4 • Dec. 1995

  • Editorials

    Publication Year: 1995
  • Comment on: reliability modeling and performance of variable link-capacity networks

    Publication Year: 1995 , Page(s): 620 - 621
    Cited by:  Papers (6)

    This comment argues that the algorithm by Varshney, Joshi, and Chang (see ibid., vol.48, p.378-82, 1994) is not correct. A counterexample is presented that exposes the problem with this algorithm and with the algorithm by Aggarwal (see ibid., vol.37, p.65-9, 1988).

  • Numerical methods for reliability evaluation of Markov closed fault-tolerant systems

    Publication Year: 1995 , Page(s): 694 - 704
    Cited by:  Papers (3)

    This paper compares three numerical methods for reliability calculation of Markov, closed, fault-tolerant systems which give rise to continuous-time, time-homogeneous, finite-state, acyclic Markov chains. The authors consider a modified version of Jensen's method (a probabilistic method, also known as uniformization or randomization), a new version of the ACE (acyclic Markov chain evaluator) algorithm with several enhancements, and a third-order implicit Runge-Kutta method (an ordinary-differential-equation solution method). Modifications to Jensen's method include stable calculation of Poisson probabilities and steady-state detection of the underlying discrete-time Markov chain. The new version of Jensen's method is not only more efficient but also yields more accurate results. Modifications to the ACE algorithm incorporate scaling and other refinements to make it more stable and accurate; however, the new version no longer yields a solution that is symbolic in the time variable. The implicit Runge-Kutta method can exploit the acyclic structure of the Markov chain and therefore becomes more efficient. All three methods are implemented. Several reliability models are solved numerically with these methods, and the results are compared on the basis of accuracy and computation cost.

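    As a companion to the abstract above, the sketch below illustrates the uniformization (Jensen's) step for computing transient state probabilities of a small continuous-time Markov chain. The 3-state generator, mission time, and tolerance are illustrative assumptions, not the authors' test models, and the stability and steady-state-detection refinements discussed in the paper are omitted.

        import numpy as np

        # Illustrative acyclic CTMC: state 0 = two units up, 1 = one unit up, 2 = system failed.
        Q = np.array([[-2.0,  2.0, 0.0],
                      [ 0.0, -1.0, 1.0],
                      [ 0.0,  0.0, 0.0]])

        def uniformization(Q, p0, t, tol=1e-12):
            """p(t) = p0 * exp(Q t), computed as a Poisson-weighted sum over a DTMC."""
            lam = max(-np.diag(Q))            # uniformization rate >= largest exit rate
            P = np.eye(Q.shape[0]) + Q / lam  # discrete-time transition matrix
            weight = np.exp(-lam * t)         # Poisson(k=0) probability
            term, result, mass, k = p0.astype(float), weight * p0, weight, 0
            while 1.0 - mass > tol:           # truncate once the Poisson mass is captured
                k += 1
                weight *= lam * t / k
                term = term @ P
                result = result + weight * term
                mass += weight
            return result

        p0 = np.array([1.0, 0.0, 0.0])
        pt = uniformization(Q, p0, t=1.5)
        print("state probabilities:", pt, "reliability:", 1.0 - pt[-1])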
  • Using system reliability to determine supportability turnaround time at a repair depot

    Publication Year: 1995 , Page(s): 653 - 657
    Cited by:  Papers (1)

    This paper uses an expression for system reliability at a repair depot to construct a nonlinear, nonpolynomial function which is amenable to numerical analysis and has a zero equal to the supportability turnaround time (STAT) for a failed unit. System reliability is expressed in terms of the constant failure rate common to all units, the number of spares on hand at the time a unit fails, and the projected repair-completion dates for up to four unrepaired units. In this context, STAT represents the longest repair time (for a failed unit) which assures a given reliability level; system reliability is the probability that spares are ready to replace failed units during the STAT period. The ability to calculate STAT values is important for two reasons: (1) subtracting the repair time for a failed unit from its STAT value yields the latest repair start-time (for this unit) which assures a desired reliability; and (2) the earlier the latest repair start-time, the higher the priority for starting the repair of this unit. Theorems establish the location of STAT with respect to the list of repair-completion dates, and form the foundation of the root-finding algorithm for computing STAT values. Numerical examples illustrate the algorithm.

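    The root-finding idea can be sketched with a deliberately simplified stand-in for the paper's reliability expression: a pooled constant failure rate, a fixed number of on-hand spares, and a Poisson count of demands during the repair window. The function reliability() and its parameters below are hypothetical illustrations, not the authors' formulation, which also accounts for projected repair-completion dates.

        from math import exp, factorial

        # Hypothetical stand-in: probability that failures arriving at rate lam during a
        # repair window tau do not exhaust the s spares currently on hand (Poisson tail sum).
        def reliability(tau, lam=0.02, spares=3):
            mu = lam * tau
            return sum(exp(-mu) * mu**k / factorial(k) for k in range(spares + 1))

        def stat_value(target=0.95, lam=0.02, spares=3, tol=1e-6):
            """Longest window tau with reliability(tau) >= target, found by bisection."""
            lo, hi = 0.0, 1.0
            while reliability(hi, lam, spares) > target:   # grow hi until the root is bracketed
                hi *= 2.0
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if reliability(mid, lam, spares) >= target:
                    lo = mid
                else:
                    hi = mid
            return lo

        print("STAT-like repair window:", round(stat_value(), 2))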
  • The impact of software enhancement on software reliability

    Publication Year: 1995 , Page(s): 677 - 682
    Cited by:  Papers (8)  |  Patents (1)

    This paper exploits the relationship between functional-enhancement (FE) activity and the distribution of software defects to develop a discriminant model that identifies high-risk program modules. FE activity and defect data captured during the FE of a commercial programming-language processing utility serve to fit and test the predictive quality of this model. The model misclassification rates demonstrate that FE information is sufficient for developing discriminant models that identify high-risk program modules. Consideration of the misclassified functions suggests that: (1) the level of routines in the calling hierarchy introduces variation in the defect distribution; and (2) the impact of a defect indicates the risk that it presents. Thus, considering defect impact can improve the discriminant results.

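    A minimal sketch of the kind of two-group discriminant model described above follows. The indicator names and the handful of data points are invented placeholders purely to show the mechanics; the paper's actual FE-activity measures, fitting data, and misclassification analysis are not reproduced here.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # Hypothetical per-module FE-activity indicators: [lines changed, FE changes, calls added].
        X = np.array([[120, 4, 6], [15, 1, 0], [300, 9, 12], [40, 2, 1],
                      [220, 7, 9], [10, 1, 1], [180, 5, 4], [25, 1, 0]])
        y = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1 = defect-prone (high-risk), 0 = low-risk

        model = LinearDiscriminantAnalysis().fit(X, y)
        print("predicted class for a new module:", model.predict([[150, 5, 3]])[0])
        print("apparent misclassification rate:", 1.0 - model.score(X, y))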
  • Using multi-stage and stratified sampling for inferring fault-coverage probabilities

    Publication Year: 1995 , Page(s): 632 - 639
    Cited by:  Papers (9)

    Development of fault-tolerant computing systems requires accurate reliability modeling. Analytic, simulation, and hybrid models are commonly used for obtaining reliability measures. These measures are functions of component failure rates and fault-coverage probabilities. Coverage provides information about fault and error detection, isolation, and system recovery capabilities. This parameter can be derived by physical or simulated fault injection, and statistical inference is used to extract meaningful information from the sample observations. This paper addresses the problem of conducting fault-injection experiments and statistically inferring the coverage from the information gathered in those experiments. The statistical experiments are performed in a multi-dimensional space of events, so that all major factors which influence the coverage (fault location, timing characteristics of the fault, and the workload) are accounted for. Multi-stage, stratified, and combined multi-stage and stratified sampling are used for deriving the coverage. Equations for the mean, variance, and confidence interval of the coverage are provided. The statistical error produced by injected faults which do not induce errors in the tested system (also known as the nonresponse problem) is considered. A program which emulates a typical fault environment was developed, and four hypothetical systems are analyzed.

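    The stratified estimator mentioned above can be sketched in a few lines: partition the fault space into strata, inject a sample in each stratum, and combine the per-stratum coverage estimates with the stratum weights. The stratum weights, sample sizes, and the random "injection" stub below are illustrative assumptions; the paper's multi-stage combination and nonresponse treatment are not shown.

        import random

        # Hypothetical strata (e.g., fault-location classes) with population weights and a
        # per-stratum true coverage used only to simulate the injection outcomes.
        strata = [  # (weight, true coverage, sample size)
            (0.5, 0.99, 200),
            (0.3, 0.95, 120),
            (0.2, 0.80,  80),
        ]

        random.seed(1)
        mean, var = 0.0, 0.0
        for weight, true_cov, n in strata:
            hits = sum(random.random() < true_cov for _ in range(n))  # simulated fault injections
            p_hat = hits / n
            mean += weight * p_hat
            var += weight**2 * p_hat * (1 - p_hat) / n   # binomial variance per stratum

        half_width = 1.96 * var**0.5
        print(f"coverage estimate: {mean:.4f}  95% CI: [{mean - half_width:.4f}, {mean + half_width:.4f}]")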
  • Electrolytic models for metallic electromigration failure mechanisms

    Publication Year: 1995 , Page(s): 539 - 549
    Cited by:  Papers (3)

    Metallic electromigration is the movement of metallic material across a nonmetallic medium under the influence of an electric field. It is increasingly important in the performance and reliability of electronic systems as an underlying cause of functional failures in electrical and electronic components. This tutorial discusses the characteristics of the various electromigration mechanisms, with primary emphasis on electrolytically-controlled processes that occur at low power and under near-ambient conditions.

  • Optimal configuration of redundant real-time systems in the face of correlated failure

    Publication Year: 1995 , Page(s): 587 - 594
    Cited by:  Papers (9)

    Real-time computers are frequently used in harsh environments, such as space or industry. Lightning strikes, streams of elementary particles, and other manifestations of a harsh operating environment can cause transient failures in processors. Since the entire system is in the same environment, an especially severe disturbance can result in a momentary, correlated failure of all the processors. To have the system survive correlated transient failures and still execute all its critical workload on time, designers must use time redundancy. To survive permanent or independently-occurring transient failures, processor redundancy must be used, and the computer configured into redundant clusters. Given a fixed total number of processors, there is a tradeoff between processor redundancy and time redundancy. This paper considers the tradeoffs between configuring the system into duplexes and triplexes. Pessimistic and optimistic reliability models are given for each configuration; over the range of pertinent parameters these models are very close, indicating that they are quite accurate. The duplex-triplex tradeoff is between the effects of permanent, independent-transient, and correlated-transient failures. Configuring the system in triplexes provides better protection against permanent and independent-transient failures, but diminishes protection against correlated-transient failures. The better configuration is given for each application.

  • Use of failure-intensity models in the software-validation phase for telecommunications

    Publication Year: 1995 , Page(s): 658 - 665
    Cited by:  Papers (1)

    Telephone switching systems require increasingly complex and bulky software. It is therefore important for France Telecom to take an interest in software quality before the operational phase. Trend tests show the evolution of detected failures during the validation phase; when these tests show a tendency towards improvement, software-reliability growth models can be applied. This paper describes experiments conducted at CNET, Lannion A Center, on the use of failure-intensity models in the software-validation phase of 3 telecommunication products. The experiments show that the logarithmic model is the most accurate. The paper also addresses future perspectives in data collection and in the use of reliability models.

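    The logarithmic failure-intensity model singled out above has a mean-value function of the form mu(t) = b0·ln(1 + b1·t); the sketch below fits it to a placeholder failure-count series by ordinary least squares. The data points, starting values, and the use of least squares (rather than the inference procedures applied in the paper) are all illustrative assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        # Logarithmic mean-value function: expected cumulative failures after t units of testing.
        def mu(t, b0, b1):
            return b0 * np.log(1.0 + b1 * t)

        # Placeholder validation-phase data: testing time and cumulative failures observed.
        t_obs = np.array([10, 20, 40, 80, 120, 160, 200], dtype=float)
        n_obs = np.array([ 8, 13, 20, 27,  31,  34,  36], dtype=float)

        (b0, b1), _ = curve_fit(mu, t_obs, n_obs, p0=[10.0, 0.1])
        intensity_now = b0 * b1 / (1.0 + b1 * t_obs[-1])   # d(mu)/dt at the last observation
        print(f"fitted mu(t) = {b0:.2f} * ln(1 + {b1:.3f} t); current failure intensity = {intensity_now:.4f}")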
  • Improved task-allocation algorithms to maximize reliability of redundant distributed computing systems

    Publication Year: 1995 , Page(s): 575 - 586
    Cited by:  Papers (8)

    This paper considers the problem of finding optimal and sub-optimal task allocations (assigning a processing node to each module of a task or program) in redundant distributed computing systems, with the goal of maximizing system reliability (the probability that the system completes the entire task successfully). Finding an optimal task allocation is NP-hard in the strong sense. An efficient algorithm is presented for optimal task allocation in a distributed computing system with level-2 redundancy. The algorithm (a) uses branch-and-bound with underestimates to reduce the average time complexity of the optimal task-allocation computation, and (b) reorders the list of modules so that a subset of modules that do not communicate with one another is assigned last, further reducing the computation. An efficient heuristic algorithm is also given which obtains sub-optimal solutions in reasonable computation time. The performance of the algorithms is reported over a wide range of parameters such as the number of modules, the number of processing nodes, the ratio of average execution cost to average communication cost, and the connectivity of modules. The effectiveness of the optimal task-allocation algorithm is demonstrated by comparing it with a competing optimal task-allocation algorithm for maximizing reliability. The performance of the algorithm improves markedly as the difference between the number of modules and the connectivity increases.

  • Symmetric relations in multistate systems

    Publication Year: 1995 , Page(s): 689 - 693
    Cited by:  Papers (10)

    This paper: (1) derives the upper bound for the number of critical upper (lower) vectors to level j of a monotone increasing multi-state system; (2) discusses the symmetric relations among the components; (3) gives several theoretical conclusions; and (4) proposes a simplified form for the structure function of multi-state systems with symmetric relations.

  • Calculating exact top-event probabilities using Σ-Patrec

    Publication Year: 1995 , Page(s): 640 - 644
    Cited by:  Papers (3)

    This paper presents a method for calculating the exact probability of a top event. The work responds to a surge in the application of probabilistic risk assessment (PRA) techniques to ecological and weapon-safety assessments, domains in which basic-event probabilities can be large; events characterizing human error and some natural phenomena are typical examples. The method combines the ΣII algorithm of Corynen with the pattern-recognition scheme of Keen et al. The PC-based program developed from this method, called ΣII-Patrec, computes the exact probability of the top event of a system fault-tree model as defined by its cut sets. The ΣII module partitions and disjoints the cut sets and solves the resulting sub-models recursively. The pattern-recognition module reduces the computational complexity by recognizing repeated sub-models during the calculation and thus avoiding repeated evaluations. ΣII-Patrec is designed to quantify fault-tree models of both coherent and incoherent systems, and interfaces with the graphics package SEATREE for interactive generation of fault trees. In the anticipated case, the ΣII-Patrec evaluation of the exact top-event probability is polynomial in the number of cut sets; it can, however, be weakly exponential in the worst case. Either way, the method yields a substantial reduction in computational requirements compared with the inclusion-exclusion method. The algorithm is described through an example problem, and results of several experiments with large accident sequences are also presented.

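    For context, the inclusion-exclusion baseline that the abstract compares against can be written in a few lines: the top event is the union of the cut-set events, and with s-independent basic events each intersection probability is a product. The fault tree and probabilities below are hypothetical; the paper's partition/disjoint recursion and pattern recognition, which avoid this exponential enumeration, are not reproduced.

        from itertools import combinations
        from functools import reduce

        # Exact top-event probability by inclusion-exclusion over cut sets (s-independent events).
        def top_event_probability(cut_sets, p):
            total = 0.0
            for r in range(1, len(cut_sets) + 1):
                for combo in combinations(cut_sets, r):
                    union = reduce(set.union, combo)      # basic events in the union of r cut sets
                    prob = 1.0
                    for e in union:
                        prob *= p[e]
                    total += (-1) ** (r + 1) * prob
            return total

        # Hypothetical fault tree: three minimal cut sets over four basic events with
        # deliberately large probabilities (the regime motivating exact evaluation).
        cut_sets = [{"A", "B"}, {"B", "C"}, {"C", "D"}]
        p = {"A": 0.3, "B": 0.2, "C": 0.4, "D": 0.25}
        print("exact top-event probability:", top_event_probability(cut_sets, p))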
  • Reliability prediction using nondestructive accelerated-degradation data: case study on power supplies

    Publication Year: 1995 , Page(s): 562 - 566
    Cited by:  Papers (5)

    This paper describes a conceptual framework for reliability evaluation from nondestructive accelerated-degradation data (NADD). A numerical example using data sets from power-supply units for electronic products is presented within this framework. The authors model NADD as a collection of stochastic processes whose parameters depend on the stress levels. The relationship between these parameters and the associated stresses is explored using regression. The failure time of power-supply units is modeled by the Birnbaum-Saunders distribution, for which confidence bounds and tolerance limits are easily obtained.

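    For reference, the Birnbaum-Saunders distribution used above for the failure time has the closed-form CDF sketched below. The shape and scale values (and the hour units) are illustrative assumptions, not the fitted results of the case study.

        from math import sqrt, erf

        def birnbaum_saunders_cdf(t, alpha, beta):
            """F(t) = Phi((1/alpha) * (sqrt(t/beta) - sqrt(beta/t))), t > 0."""
            z = (sqrt(t / beta) - sqrt(beta / t)) / alpha
            return 0.5 * (1.0 + erf(z / sqrt(2.0)))          # standard normal CDF

        # Illustrative parameters: beta is the median life, alpha controls the spread
        # induced by the underlying degradation process.
        alpha, beta = 0.4, 12000.0
        for t in (6000.0, 12000.0, 24000.0):
            print(f"P(failure by {t:.0f} h) = {birnbaum_saunders_cdf(t, alpha, beta):.4f}")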
  • Mean time to first failure of repairable systems with one cold spare

    Publication Year: 1995 , Page(s): 567 - 574

    A general (non-Markov) 1-out-of-2:G system with statistically-identical components, repair, and cold standby is reviewed. Coverage is considered, viz., failures of the switching mechanism that activates the spare. The explicit derivation of the mean time-to-first-failure and its in-depth discussion appear to be new. The state-transition graph and the Petri net both show the way to general (n-1)-out-of-n:G systems of this category. However, for n>2, results are given only for statistically-identical components which are all as good as new when a repaired component is put to use. This limits the applicability of the results to electronic systems.

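    For the Markov special case (exponential failure and repair, perfect switching), the mean time to first system failure can be obtained from the transient part of the generator by solving Q_T·tau = -1; the sketch below does this for an illustrative 1-out-of-2:G cold-standby system. The rates are assumptions, and the paper's general non-Markov treatment and imperfect-coverage analysis are not captured here.

        import numpy as np

        lam, mu = 0.01, 0.5    # assumed failure and repair rates (per hour)

        # Transient states: 0 = one unit operating + cold spare available,
        #                   1 = one unit operating + one unit in repair.
        # The absorbing (system-down) state is omitted; its rates make the rows sub-stochastic.
        Q_T = np.array([[-lam,          lam],
                        [  mu, -(lam + mu)]])

        tau = np.linalg.solve(Q_T, -np.ones(2))   # expected time to absorption from each state
        print("MTTF from the fully-up state:", tau[0])
        print("closed form (2*lam + mu) / lam**2:", (2 * lam + mu) / lam**2)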
  • Domination of k-out-of-n systems

    Publication Year: 1995 , Page(s): 705 - 708
    Cited by:  Papers (5)

    The main objective of this paper is to derive a formula for the signed domination of k-out-of-n systems. The behavior of such systems is investigated when pivotal decomposition is applied to them: the nature of the two resulting subsystems is examined, and the signed-domination theorem is extended to those systems and used as a tool for proving the main result. A closed formula is presented for computing exactly the reliability of k-out-of-n systems with identical component reliabilities by means of paths or cut sets. Most theoretical results based on domination theory are still restricted to linear networks (without duplicated edges); they should be extended to realistic cases such as fault trees and block diagrams. This paper is an effort in that direction. A major area for further investigation is to extend these results to broader and more general classes of nonlinear systems.

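    The closed formula mentioned above for a k-out-of-n system with identical, s-independent components is the familiar binomial sum; a minimal sketch follows (the signed-domination derivation itself is not reproduced).

        from math import comb

        def k_out_of_n_reliability(k, n, p):
            """System works iff at least k of the n identical components work (prob p each)."""
            return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

        # Example: a 2-out-of-3 system of components with reliability 0.9.
        print(k_out_of_n_reliability(2, 3, 0.9))   # 3*(0.9**2)*0.1 + 0.9**3 = 0.972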
  • Reliability problems of polysilicon/Al contacts due to grain-boundary enhanced thermomigration effects

    Publication Year: 1995 , Page(s): 550 - 555
    Cited by:  Papers (1)

    This paper presents results of a reliability study of n+ polysilicon/Al contacts. The contact resistance of this structure varied dramatically from sample to sample, and in some cases was extremely large (e.g. 80 kΩ·μm²). In addition, significant changes in contact resistance were caused by temperature stress. This variation in contact resistance poses a serious problem for the manufacturability of accurate polysilicon resistors. The paper briefly describes the measurement procedures and measurement data; the measurements used to deduce and analyze the reliability problem include differential resistance and thermal stress. The samples were obtained from three industrial 2 μm CMOS sources. Finally, the paper discusses the data in detail and gives a reason for this reliability problem.

  • An opportunity-triggered replacement policy for multiple-unit systems

    Publication Year: 1995 , Page(s): 648 - 652
    Cited by:  Papers (2)

    A model is presented for a system which consists of n i.i.d. units whose hazard rates increase with time. A unit is replaced at failure or when its age exceeds T, whichever occurs first; when a unit is replaced, all operating units with age in the interval (T-w, T) are replaced as well. Both failure replacements and active (age-based) replacements thus create opportunities to replace other units preventively. This policy allows joint replacements and avoids the disadvantages resulting from replacement of new units, down time, and unrealistic assumptions about unit-life distributions. An algorithm is developed to compute the steady-state cost rate, and optimal values of T and w are obtained to minimize the mean total replacement-cost rate. Application and analysis of the results are illustrated through a numerical example.

  • A recursive variance-reduction algorithm for estimating communication-network reliability

    Publication Year: 1995 , Page(s): 595 - 602
    Cited by:  Papers (13)

    In evaluating the capacity of a communication-network architecture to resist possible faults of some of its components, several reliability metrics are used. This paper considers the 𝒦-terminal unreliability measure. Exact evaluation of this parameter is, in general, very costly since the problem is NP-hard. An alternative to exact evaluation is to estimate it using Monte Carlo simulation; for highly reliable networks the crude Monte Carlo technique is prohibitively expensive, so variance-reduction techniques must be used. A recursive variance-reduction Monte Carlo scheme (RVR-MC) specifically designed for this problem is proposed. RVR-MC is recursive, transforming the original problem into unreliability evaluations for smaller networks; the process terminates when every resulting network is up or down regardless of its component states. Simulation results are given for a well-known test topology. The speedups obtained by RVR-MC with respect to crude Monte Carlo are calculated for various values of component unreliability, and are compared to previously published results for five other methods (bounds, sequential construction, dagger sampling, failure sets, and merge process), showing the value of RVR-MC.

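    As background for the speedups reported above, the crude Monte Carlo baseline for 𝒦-terminal unreliability is easy to state: sample edge states, check whether the terminals remain connected, and average. The sketch below does this for a small made-up graph with two terminals; the RVR-MC recursion itself is not reproduced.

        import random

        # Hypothetical undirected network: edges with a common failure probability q.
        edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
        terminals = {0, 3}
        q = 0.05          # per-edge unreliability

        def terminals_connected(up_edges, terminals):
            """Depth-first search over the surviving edges from one terminal to the rest."""
            adj = {}
            for a, b in up_edges:
                adj.setdefault(a, set()).add(b)
                adj.setdefault(b, set()).add(a)
            start = next(iter(terminals))
            seen, stack = {start}, [start]
            while stack:
                node = stack.pop()
                for nxt in adj.get(node, ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            return terminals <= seen

        random.seed(0)
        n_samples = 100_000
        failures = sum(
            not terminals_connected([e for e in edges if random.random() > q], terminals)
            for _ in range(n_samples)
        )
        print("crude-MC unreliability estimate:", failures / n_samples)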
  • Fault severity in models of fault-correction activity

    Publication Year: 1995 , Page(s): 666 - 671
    Cited by:  Papers (1)

    This study applies canonical correlation analysis to investigate the relationships between source-code (SC) complexity and fault-correction (FC) activity. Product and process measures collected during the development of a commercial real-time product provide the data for this analysis. Sets of variables represent SC complexity and FC activity, and a canonical model represents the relationships between these sets. s-significant canonical correlations along 2 dimensions support the hypothesis that SC complexity exerted a causal influence on FC activity during the system-test phase of the real-time product. Interpretation of the s-significant canonical correlations suggests that two subsets of product measures had different relationships with process activity: one is related to design-change activity that resulted in faults, and the other is related directly to faults. Further, faults having less impact on the system-test process were associated with design-change activity that occurred during the system-test phase, while those having more impact were associated with SC complexity at entry to the system-test phase. The study demonstrates canonical correlation analysis as a useful exploratory tool for understanding influences that affected past development efforts. However, generalization of the canonical relationships to all software development efforts is untenable, since the model does not represent many important influences on the modeled latent variables, e.g., schedule pressure, testing effort, product domain, and level of engineering expertise.

  • Optimal algorithms for synthesis of reliable application-specific heterogeneous multiprocessors

    Publication Year: 1995 , Page(s): 603 - 613
    Cited by:  Papers (8)  |  Patents (9)

    Fast and optimally-reliable application-specific multiprocessor synthesis is critical in system-level design, especially in medical, automotive, space, and military applications. Previous work in multiprocessor synthesis and task allocation for performance and reliability requires exponential time and is therefore useful only for small examples. This paper presents the first deterministic and provably-optimal algorithm (RELSYN-OPT) to synthesize real-time, reliable multiprocessors using a heterogeneous library of N processor types and L link types. For a series-parallel task graph with M subtasks and nesting depth d, the worst-case computational complexity of RELSYN-OPT is proved to be O(M·(L+N)·N^d). For tree-structured task graphs, RELSYN-OPT runs in O(M·(L+N)) time and is asymptotically optimal. Because of its speed, RELSYN-OPT applies to static and dynamic task allocation for ultra-reliable distributed processing environments for which, until now, research had produced only suboptimal heuristic solutions.

  • An empirical model of enhancement-induced defect activity in software

    Publication Year: 1995 , Page(s): 672 - 676

    This study exploits the relationship between functional-enhancement (FE) activity and defect distribution to produce a model for predicting FE-induced defect activity. This is achieved in 2 steps: (1) apply canonical correlation analysis to model the relationship between a set of FE-activity indicators and a set of defect-activity indicators; this analysis isolates 1 dimension of the relationship having strong correlation; and (2) model the relationship between the latent variables along this dimension as a simple linear regression; this model demonstrates predictive quality sufficient for application as a software-engineering tool. The predictive model treats FE activity as the sole source of variation in defect activity; other sources of variation are not modeled, but they remained constant throughout the development effort that yielded the modeled data. Models developed with this technique are intended for predicting defect activity in the program modules that result from the next iteration of the same development process, in production of the next release of the modeled product, with the same key people implementing the software changes that introduce FE. Even in this application, software engineers should understand and control the impacts of the unmodeled sources of variation. The modeling technique scales to larger development efforts involving several key people either by developing separate models for each area of responsibility or by adding independent variables that account for variation introduced by differing levels of skill and understanding.

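    Step (2) above is a one-variable linear regression between the two latent scores; a minimal sketch with placeholder numbers is shown below. The canonical-variate extraction of step (1) and the paper's actual data are not reproduced; the arrays are invented solely to show the fit-and-predict mechanics.

        import numpy as np

        # Placeholder latent scores per module: FE-activity variate (x) and defect-activity
        # variate (y) along the strongly correlated canonical dimension.
        x = np.array([0.2, 0.5, 0.9, 1.4, 1.8, 2.3])
        y = np.array([0.3, 0.6, 1.1, 1.3, 2.0, 2.2])

        slope, intercept = np.polyfit(x, y, 1)     # ordinary least squares, degree-1 polynomial
        predicted = slope * 1.0 + intercept        # predicted defect-activity score at x = 1.0
        print(f"fit: y = {slope:.3f} x + {intercept:.3f}; prediction at x=1.0: {predicted:.3f}")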
  • Simulating IC reliability with emphasis on process-flaw related early failures

    Publication Year: 1995 , Page(s): 556 - 561
    Cited by:  Papers (1)  |  Patents (3)

    A Monte Carlo reliability simulator for integrated circuits (ICs) that incorporates the effects of process flaws, material properties, the mask layout, and use conditions is presented. The mask layout is decomposed into distinct objects, such as contiguous metal runs, vias, contacts, and gate oxides, whose failure probabilities are determined from user-defined distributions; these distributions are represented by a mixture of defect-related and wearout-related distributions. The failure distributions for nets (sets of interconnected layout objects) are obtained by combining the distributions of their component objects. System reliability is obtained by applying control-variate sampling to the logic network comprising all nets, and the effects of k-out-of-n substructures within the reliability network are accounted for. The methodology is illustrated by the effect of particulate-induced defects on metal runs and vias in a simple test circuit. The results qualitatively verify the methodology and show that predictions which incorporate failures due to process flaws are appreciably more pessimistic than those obtained from current practice.

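    One piece of the simulator's machinery described above, combining object-level failure distributions into a net-level distribution, reduces to a product of survivor functions if a net is assumed to fail as soon as any of its constituent objects fails (a series combination). The sketch below shows that step for assumed Weibull object lifetimes; the defect/wearout mixtures, control-variate sampling, and layout extraction in the paper are not reproduced.

        from math import exp

        def weibull_reliability(t, shape, scale):
            """Survivor function R(t) = exp(-(t/scale)**shape)."""
            return exp(-((t / scale) ** shape))

        # Hypothetical layout objects making up one net: (shape, scale-in-hours) pairs for a
        # metal run, a via, and a contact; early-failure behavior would use shape < 1.
        net_objects = [(0.8, 5.0e6), (1.0, 2.0e6), (2.0, 8.0e6)]

        def net_reliability(t, objects):
            """Series combination: the net survives only if every object survives."""
            r = 1.0
            for shape, scale in objects:
                r *= weibull_reliability(t, shape, scale)
            return r

        for t in (1.0e4, 1.0e5, 1.0e6):
            print(f"net reliability at {t:.0e} h: {net_reliability(t, net_objects):.4f}")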
  • Optimal test-times for intermittent faults

    Publication Year: 1995 , Page(s): 645 - 647
    Cited by:  Papers (2)

    Su et al. (1978) considered continuous and repetitive tests for a continuous-parameter Markov model with intermittent faults, in which periodic tests are scheduled at times k·T (k=1, 2,...). This paper presents a simple algorithm to compute the optimal test interval that minimizes the mean cost until detection when the test model is imperfect. First an upper bound is found for the optimal interval; then a bisection algorithm is used to minimize the cost of detecting faults in a system whose faults are intermittent and unpredictable. Using this algorithm, the solution of example 1 is better than that of Nakagawa and Yasui (1989) by at least 10%. The algorithm can be more useful than the Newton-Raphson method for locating an optimum because Newton-Raphson requires the first derivative whereas the bisection method does not.

  • Component redundancy vs system redundancy in the hazard rate ordering

    Publication Year: 1995 , Page(s): 614 - 619
    Cited by:  Papers (8)

    Design engineers are well aware of the stochastic result which says that (under the appropriate assumptions) redundancy at the component level is superior to redundancy at the system level. Given the importance of the hazard rate in reliability and life testing, this paper investigates to what extent this principle holds for the stronger stochastic ordering, viz., the hazard-rate ordering. Surprisingly, it does not hold even for series systems if the spares do not match the original components in distribution. It does hold for series systems with matching spares, however, and the authors conjecture that this is the case in general for k-out-of-n:G systems. The principle is also investigated for cold-standby redundancy (as opposed to active or parallel redundancy).

  • Dynamic reliability analysis of coherent multistate systems

    Publication Year: 1995 , Page(s): 683 - 688
    Cited by:  Papers (27)

    This paper generalizes the 2-state (binary-state) reliability parameters R(t), F(t), λ(t), and mean time-to-failure to the multi-state reliability parameters R(t,i), G(t,i), λ(t,i), and mean life span at specified performance level i, for each state. By using these generalized parameters, a multi-state reliability dynamic analysis can be transformed into a set of 2-state reliability dynamic analyses. The authors then extend some theoretical results of 2-state reliability analysis to the multi-state case, and develop measures for evaluating the performance degradation of a multi-state system. By combining Markov processes and s-coherent multi-state structure functions, the dynamic multi-state reliability can be analyzed. Several examples are given.


Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.

Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong