
IEEE Transactions on Reliability

Issue 1 • March 1997


Displaying Results 1 - 23 of 23
  • Changes to: Sensitivity of reliability-growth models to operational profile errors vs. testing accuracy

    Publication Year: 1997 , Page(s): 68

  • Mixture model of the power law

    Publication Year: 1997 , Page(s): 146 - 153
    Cited by:  Papers (4)

    This paper deals with a new degradation model, a power-law mixture model which extends the power law in accelerated life testing. Some types of insulation deteriorate at a rate determined by the stress. If the stresses are classified into distinct groups, a total degradation model can be a mixture of several degradation models. If the breakdown voltage (VB) has a certain failure probability distribution and the degradation follows a power law, then a mixture model of the power law can be constructed. The cluster analysis is done by maximum likelihood estimation. For instance, by using the intermittently-inspected VB test data of electrical insulation removed from active power lines, the remaining life of the insulation can be assessed based on this mixture model. Application of the model to the VB data of cross-linked polyethylene insulated cables is demonstrated: the breakdown data are classified into groups using the Akaike Information Criterion and the generalized likelihood ratio test method. The remaining life of the insulation can be estimated with an approximate standard deviation. For the Japanese 22 kV and 33 kV insulation-class inspected cables, the samples are clustered into 4 groups and a positive threshold stress is not found.

  • Software fault tolerance: t/(n-1)-variant programming

    Publication Year: 1997 , Page(s): 60 - 68
    Cited by:  Papers (1)

    This paper describes the software fault tolerance scheme t/(n-1)-variant programming (t/(n-1)-VP), which is based on a particular system diagnosis technique used in hardware and thereby has some special advantages, including a simplified adjudication mechanism and an enhanced capability for tolerating faults. The dependability of the t/(n-1)-VP architecture is evaluated and then compared with two similar schemes: N-version programming (NVP) and N self-checking programming (NSCP). The comparison shows that t/(n-1)-VP is a viable addition or alternative to present techniques. Much of the classical dependability analysis of software fault tolerance approaches has focused on the simplest architectural examples that tolerate only single software faults, without considering tolerance to multiple and/or related faults. The results obtained from such analyses are thus restricted. The dependability evaluation in this paper deals with more complicated and general software redundancy: various architectures tolerating two or more faults. It is no surprise that new conclusions are reached: both t/(n-1)-VP and the NVP scheme can tolerate some related faults between software variants; in general, t/(n-1)-VP has higher reliability, whereas NVP is better from the safety viewpoint.

  • Electrochemical impedance spectroscopic study of encapsulated triple tracks test (TTT) circuits

    Publication Year: 1997 , Page(s): 45 - 51, 55
    Cited by:  Papers (1)

    Electrochemical impedance spectroscopy (EIS) is effective for studying the reliability of protective coatings on microelectronic packaging devices. Triple track test (TTT) circuits were evaluated as a function of exposure time at 121°C and 2 atm pressure. Specimens were deliberately contaminated by various chemicals and particles before coating. The impedance of some specimens decreased after the first 10 hours in the Pressure Cooker Test (PCT), while some lasted more than 150 hours in the PCT without an appreciable change in impedance values. The impedance of the specimens changed appreciably due to water uptake in the epoxy coatings. Ionic contamination trapped under the coating caused faster failure in the TTT compared to clean specimens. Water diffused to the polymer/gold interface, causing a change in the dielectric properties of the specimen, shown as a shift in the EIS spectrum. This shift was observed in the EIS spectra of most specimens exposed for more than 150 hours. The change suggests that water penetrating the coating made a conductive pathway to the interfacial region, a common cause of debonding at the metal (or ceramic)/polymer interface. In one case (specimens contaminated by SiO2 particles) the shift was not observed, suggesting that SiO2 particles do not accelerate adhesive failure at the metal (or ceramic)/polymer interface. The calculated saturated volume fraction of water at the metal (or ceramic)/polymer interface was 1.1±0.1.

  • Bayes results for classical Pareto distribution via Gibbs sampler, with doubly-censored observations

    Publication Year: 1997 , Page(s): 56 - 59
    Cited by:  Papers (3)

    The paper considers the full Bayes analysis of the Pareto distribution when the observations are doubly censored, and provides sample-based estimates of the posterior distributions using the Gibbs sampler algorithm. The approach is not only computationally simple but also fully explores the low-dimensional posterior surfaces, which would otherwise be difficult. Complexities due to censored data always arise in life-testing experiments; with the Gibbs sampler algorithm, unlike nonsample-based approaches, these complexities are no longer a problem.
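
    As a rough illustration of the kind of sampler the abstract describes, the sketch below runs a Gibbs sampler with data augmentation for the classical Pareto when individual observations may be left- or right-censored. The known scale parameter sigma, the Gamma prior on the shape alpha, and the per-observation censoring scheme are simplifying assumptions of this sketch, not details taken from the paper.

    import numpy as np

    rng = np.random.default_rng(11)

    def gibbs_pareto_doubly_censored(obs, status, sigma, a0=1.0, b0=1.0, n_iter=4000):
        """Gibbs sampler with data augmentation for the classical Pareto,
        f(x) = alpha * sigma**alpha / x**(alpha + 1), x >= sigma, sigma known.

        status[i] is 0 for an exact observation, -1 for a left-censored one
        (true value below obs[i]), +1 for a right-censored one (true value
        above obs[i]). A Gamma(a0, b0) prior on alpha keeps its full
        conditional conjugate."""
        obs = np.asarray(obs, float)
        status = np.asarray(status, int)
        n = len(obs)
        x = obs.copy()
        x[status < 0] = 0.5 * (sigma + obs[status < 0])   # start left-censored points inside support
        alphas = np.empty(n_iter)
        for it in range(n_iter):
            # 1. conjugate update: alpha | complete data ~ Gamma(a0 + n, b0 + sum log(x/sigma))
            alpha = rng.gamma(a0 + n, 1.0 / (b0 + np.log(x / sigma).sum()))
            # 2. re-impute censored values from the truncated Pareto (inverse CDF)
            for i in np.where(status != 0)[0]:
                c = obs[i]
                if status[i] > 0:                          # right-censored: X > c
                    x[i] = c * rng.uniform() ** (-1.0 / alpha)
                else:                                      # left-censored: sigma <= X < c
                    u = rng.uniform() * (1.0 - (sigma / c) ** alpha)
                    x[i] = sigma * (1.0 - u) ** (-1.0 / alpha)
            alphas[it] = alpha
        return alphas

    # hypothetical data: exact (0), right-censored (+1), and left-censored (-1) values
    values = [1.2, 1.5, 3.0, 1.1, 2.4, 4.0, 1.8, 5.0]
    stat = [0, 0, 1, -1, 0, 1, 0, -1]
    draws = gibbs_pareto_doubly_censored(values, stat, sigma=1.0)[500:]
    print("posterior mean of alpha:", draws.mean())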

  • A termination rule for degradation experiments

    Publication Year: 1997 , Page(s): 130 - 133
    Cited by:  Papers (7)

    Analysis of degradation data can provide information about the lifetime of highly reliable products if there exists a product characteristic whose degradation over time can be related to reliability. To obtain a precise estimator of a product's mean time-to-failure, one practical problem in designing a degradation experiment is: how long should the experiment last? This paper proposes a termination rule to determine an appropriate stopping time for a degradation experiment. A case study of an LED product illustrates the method. The proposed procedure is computationally simple and, once the termination time is determined, provides on-line, real-time reliability information about the product lifetime.

  • Time-censored ramp tests with stress bound for Weibull life distribution

    Publication Year: 1997 , Page(s): 99 - 107
    Cited by:  Papers (3)

    This paper considers ramp tests for the Weibull life distribution when there are limitations on test stress and test time. The inverse power law and a cumulative exposure model are assumed. Maximum likelihood estimators of the model parameters and their asymptotic covariance matrix are shown. Optimum ramp test plans are given which minimize the asymptotic variance of the ML estimator of a specified quantile of log(life) at the design constant stress. The effects of the pre-estimates of the design parameters are studied.

  • A general formula for the failure-rate function when distribution information is partially specified

    Publication Year: 1997 , Page(s): 116 - 121

    This paper presents a new formula for the failure-rate function (FRF), derived from a recently introduced 4-parameter family of distributions. The new formula can be expressed in terms of its Cdf, is characterized by algebraic simplicity, and can replace more complex hazard functions through routine distribution fitting. When the actual Cdf is unknown and partial distribution information is available (or can be extracted from sample data), new fitting procedures that use only first-degree, or first- and second-degree, moments are used to approximate the unknown FRF. This new approach is demonstrated for some commonly used Cdfs and shown to yield highly accurate values for the FRF. Relative to current practice, the new FRF has four major advantages: it does not require specification of an exact distribution, thus avoiding errors incurred by the use of a wrong model; since estimates of only low-degree (at most first- or second-degree) moments are required to determine the parameters of the FRF, the associated mean-square deviations are relatively small; the new FRF can be easily adapted for use with censored data; and simple maximum likelihood estimates can be developed.

  • Accelerated life-tests for intermittent destructive inspection, with logistic failure-distribution

    Publication Year: 1997 , Page(s): 122 - 129
    Cited by:  Papers (3)

    Statistically optimal accelerated life-test plans are suggested for items whose lifetime follows a logistic distribution. Both the scale and location parameters of the lifetime distribution are functions of the stress level. The test plans accommodate intermittent destructive sampling. The number of sampled items which fail to pass the test at the time of each inspection follows a hypergeometric distribution; the number of defective items in the remaining sample which have not yet been tested follows a binomial distribution. Statistically optimal designs provide test planners with a set of design inputs, such as: 2 stress levels higher than the use stress level, a set of inspection times, a sample allocation, and a censoring time that minimizes the asymptotic variance of the maximum likelihood estimator of a specified quantile of the lifetime distribution. However, such a 2 stress-level optimal plan is not practical because model validation is rarely possible with so few stress levels at which to test. To overcome this impracticality, compromise plans that require 3 stress levels at a fixed inspection interval are also suggested, although these plans lose some statistical efficiency.

  • Bounding flow-performance in probabilistic weighted networks

    Publication Year: 1997 , Page(s): 3 - 10
    Cited by:  Papers (1)

    Much research has been done on computing the connectivity probability in probabilistic networks. More recently, the area has been expanded to other performance measures, such as maxflow and distance. These problems are, in general, NP-hard; therefore, efficiently computable and accurate lower and upper bounds are sought. This paper presents: 1) distance renormalization, which generalizes the reliability renormalization algorithm of Harms and Colbourn for bounding probabilistic distances in bi-directional probabilistic weighted networks; and 2) a mechanism based on nonplanar duality for using this algorithm to bound probabilistic maxflow and mean maxflow. The results obtained via distance renormalization compare favorably with other current techniques on the authors' suite of test cases.

  • Bayes computation for reliability estimation

    Publication Year: 1997 , Page(s): 52 - 55
    Cited by:  Papers (2)

    Bayes estimation of complicated functions requires simpler estimation techniques because of the mathematical difficulties involved in the classical Bayes approach. Bayes estimation enjoys many approximation techniques and computational methods, such as the Metropolis algorithm and the Gibbs sampler. Bayes estimation of the reliability of a mixture inverse Gaussian distribution requires a numerical approach, since the calculations are immensely difficult from the exact Bayes point of view. The lack of full conditional distributions for all 3 parameters in this particular case makes the Gibbs sampler inefficient. Application of the rejection method, however, is reasonable, since it is very simple to implement without any constraints on the prior distributions or on the hyper-parameters.
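
    The accept/reject step the abstract relies on is generic; the sketch below shows it for a toy posterior (exponential data with a gamma prior), not for the paper's mixture inverse Gaussian model. Drawing candidates from the prior and accepting with probability likelihood/M, where M bounds the likelihood, yields draws from the posterior without needing any full conditional distributions.

    import numpy as np

    rng = np.random.default_rng(0)

    def rejection_posterior_sample(likelihood, prior_sampler, n_draws, m_bound):
        """Generic rejection method for posterior sampling: draw theta from the
        prior and accept with probability likelihood(theta) / M, where
        M >= sup_theta likelihood(theta). Accepted draws follow the posterior."""
        draws = []
        while len(draws) < n_draws:
            theta = prior_sampler()
            if rng.uniform() < likelihood(theta) / m_bound:
                draws.append(theta)
        return np.array(draws)

    # toy example: exponential data with a Gamma(2, 1) prior on the rate lambda
    data = rng.exponential(scale=1.0 / 1.5, size=20)

    def lik(lam):
        return lam ** len(data) * np.exp(-lam * data.sum())

    lam_mle = len(data) / data.sum()                    # likelihood is maximal at the MLE
    samples = rejection_posterior_sample(lik, lambda: rng.gamma(2.0, 1.0), 500, lik(lam_mle))
    print("posterior mean of lambda:", samples.mean())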

  • Dynamic life-estimation of CMOS ICs in real operating environment: precise electrical method and MLE

    Publication Year: 1997 , Page(s): 31 - 37
    Cited by:  Papers (1)  |  Patents (1)

    The functionality of an IC in field use can be maintained by replacing the IC shortly before its anticipated failure. An accurate estimation of circuit lifetime is important in selecting a replacement time to: (1) avoid unanticipated circuit failure, by replacing the IC as early as possible; and (2) use the IC fully, by replacing it as late as possible. Since the problem is different from lifetime estimation with accelerated test results, this approach continually measures circuit performance and then analyzes the measurements statistically. The whole estimation process is covered, from selection of circuit parameters for performance measurement to development of an aging model for the statistical analysis. The circuit-delay change due to hot-carrier effects is selected to quantify the performance degradation; an aging model, founded on the hot-carrier induced failure mechanism, is developed. Maximum likelihood estimation (MLE) is used to obtain the statistical characteristics of future aging, where the severity of the operating environment is assumed to be a stationary random process. The ML estimates are used to choose an efficient time for IC replacement. The practical use of the suggested method in IC maintenance is demonstrated with statistically simulated data.

  • Reliability modeling for safety-critical software

    Publication Year: 1997 , Page(s): 88 - 98
    Cited by:  Papers (24)  |  Patents (1)

    Software reliability predictions can increase trust in the reliability of safety-critical software such as the NASA Space Shuttle Primary Avionics Software System (Shuttle flight software). This objective was achieved using a novel approach that integrates software-safety criteria, risk analysis, reliability prediction, and stopping rules for testing. This approach applies to other safety-critical software. The authors cover only the safety of the software in a safety-critical system. The hardware and human-operator components of such systems are not explicitly modeled, nor are the hardware- and operator-induced software failures. The concern is with reducing the risk of all failures attributed to software. Thus, safety refers to software safety and not to system safety. Because the reliability measurements and predictions are directly related to mission and crew safety, improving the software reliability contributes to system safety. Software reliability models provide one of several tools that software managers of the Shuttle flight software are using to assure that the software meets required safety goals. Other tools are inspections, software reviews, testing, change control boards, and, perhaps most important, experience and judgement.

  • Comparison of robust and least-squares regression in computer-generated probability plots

    Publication Year: 1997 , Page(s): 108 - 115
    Cited by:  Patents (6)

    ML-estimators (maximum likelihood) offer a promising alternative to least-squares (LS) methods and to the best guess of a researcher when fitting a line to failure-time data plotted on a computer-generated probability plot. In terms of model statistics, these robust regression algorithms performed better than LS in most data sets. The Andrews function and the Ramsay function always performed better than LS. The Huber function and the Hampel function usually performed better than LS, except for those data sets where the residuals did not exceed the threshold criteria and all residuals were assigned a weight of 1.0. In those situations, these two ML-estimators provided results equivalent to LS. The ML-estimators were particularly effective in situations involving near-neighbors in the low region of the x-space (early contiguous failures). In terms of parameter estimation, there was no noticeable difference between LS and the ML-estimators.
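
    A generic sketch of one of the robust fits the abstract compares: a Huber M-estimator for a straight line, computed by iteratively reweighted least squares and applied to Weibull probability-plot coordinates. The tuning constant, MAD scaling, and median-rank plotting positions are common defaults assumed here, not necessarily the authors' settings.

    import numpy as np

    def huber_line_fit(x, y, k=1.345, n_iter=50, tol=1e-8):
        """Huber M-estimator for a line y = b0 + b1*x via iteratively
        reweighted least squares; residuals are scaled by the MAD."""
        X = np.column_stack([np.ones_like(x), x])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]           # least-squares start
        for _ in range(n_iter):
            r = y - X @ beta
            s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
            if s == 0.0:
                s = 1.0
            u = np.abs(r / s)
            w = np.where(u <= k, 1.0, k / u)                  # Huber weights
            sw = np.sqrt(w)
            beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
            if np.max(np.abs(beta_new - beta)) < tol:
                beta = beta_new
                break
            beta = beta_new
        return beta

    # hypothetical Weibull probability-plot data: x = ln(time), y = ln(-ln(1 - F))
    rng = np.random.default_rng(3)
    t = np.sort(rng.weibull(1.8, 15) * 100.0)                 # simulated failure times
    F = (np.arange(1, 16) - 0.3) / (15 + 0.4)                 # median-rank plotting positions
    b0, b1 = huber_line_fit(np.log(t), np.log(-np.log(1.0 - F)))
    print("Weibull shape ~", b1, " scale ~", np.exp(-b0 / b1))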

  • Effect of parallel planning on system reliability of real-time expert systems

    Publication Year: 1997 , Page(s): 81 - 87
    Cited by:  Papers (3)

    Real-time expert systems (RTXS) are expert systems embedded in process-control systems which must plan and execute control strategies in response to external events within a real-time constraint. This paper presents a method for estimating the reliability of uni-processor and multi-processor RTXS. The paper discusses why there are intrinsic faults in RTXS programs that must be considered in their reliability modeling. It then shows that, for uni-processor RTXS, no single planning algorithm can avoid all types of intrinsic faults. Finally, it presents a multiprocessor architecture with parallel planning, with the objective of reducing the intrinsic faults of RTXS and improving the embedded-system reliability. A robot control system illustrates the method.

  • Statistical analysis of a power-law model for repair data

    Publication Year: 1997 , Page(s): 27 - 30
    Cited by:  Papers (3)

    The power-law process is an alternative model to the homogeneous Poisson process for analyzing repair data. When many nominally identical systems are in service, the repair data for all systems can be used to assess the aptness of the power-law model. To test the hypothesis of a homogeneous Poisson process versus the power-law process, the maximum likelihood estimates of the power-law process parameters are used. A table of critical values for the estimates is given, along with an example of their use.
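
    A minimal sketch of the computation the abstract refers to, for a single system time-truncated at T: the ML estimates of the power-law (Crow-AMSAA) parameters and the standard chi-square test of the homogeneous-Poisson-process hypothesis beta = 1. The paper's table of critical values covers the pooled multi-system case; the single-system form below is only illustrative.

    import numpy as np
    from scipy.stats import chi2

    def power_law_mle(failure_times, T):
        """MLE for the power-law process with intensity
        lambda(t) = (beta/theta) * (t/theta)**(beta - 1), time-truncated at T."""
        t = np.asarray(failure_times, dtype=float)
        n = len(t)
        beta_hat = n / np.sum(np.log(T / t))
        theta_hat = T / n ** (1.0 / beta_hat)
        return beta_hat, theta_hat

    def test_hpp(failure_times, T, alpha=0.05):
        """Test H0: beta = 1 (homogeneous Poisson process) against the power law.
        Under H0, 2 * sum(log(T / t_i)) is chi-square with 2n degrees of freedom."""
        t = np.asarray(failure_times, dtype=float)
        n = len(t)
        stat = 2.0 * np.sum(np.log(T / t))
        lo, hi = chi2.ppf([alpha / 2, 1 - alpha / 2], df=2 * n)
        return stat, not (lo <= stat <= hi)        # True -> reject the HPP

    # hypothetical repair times (hours) for one system observed to T = 1000 h
    times = [55, 166, 205, 341, 488, 567, 731, 860, 914]
    print(power_law_mle(times, 1000.0))
    print(test_hpp(times, 1000.0))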

  • Upper and lower bounds of stress-strength interference reliability with random strength-degradation

    Publication Year: 1997 , Page(s): 142 - 145
    Cited by:  Papers (5)

    A stress-strength interference reliability model with strength degradation is established under the assumptions: stress and strength are statistically independent; dynamic loading forces arrive according to a Poisson process and follow a Gaussian distribution; and random strength degradation is described by distribution-parameter regression models. Simple formulas for estimating upper and lower bounds on the stress-strength interference reliability are presented. The results can be applied to reliability estimation for wear-out, fatigue, and crack growth under dynamic loading forces, etc.
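
    For reference, the static (no-degradation) interference formula that such bounds build on, assuming independent Gaussian stress and strength; evaluating it at the initial and at the degraded strength parameters gives a crude bracket under monotone degradation. The numbers are hypothetical, and the paper's degradation-specific bound formulas are not reproduced here.

    import math

    def ssi_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
        """Static stress-strength interference, R = Pr(strength > stress),
        with statistically independent Gaussian stress and strength."""
        z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # hypothetical numbers: initial strength vs. degraded mean strength
    print(ssi_reliability(600.0, 40.0, 450.0, 30.0))   # no degradation (upper end)
    print(ssi_reliability(520.0, 40.0, 450.0, 30.0))   # degraded mean strength (lower end)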

  • Bayes inference for S-shaped software-reliability growth models

    Publication Year: 1997 , Page(s): 76 - 80, 87
    Cited by:  Papers (5)

    Bayes inference for a nonhomogeneous Poisson process with an S-shaped mean value function is studied. In particular, the authors consider the model of Ohba et al. (1983) and its generalization to a class of gamma-distribution growth curves. Two Gibbs sampling approaches are proposed to compute the Bayes estimates of the mean number of errors remaining and the current system reliability. One algorithm is a Metropolis-within-Gibbs algorithm; the other is a stochastic substitution algorithm with data augmentation. Model selection based on the posterior Bayes factor is studied. A numerical example with simulated data is given.
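
    A sketch of a Metropolis-within-Gibbs sampler for the delayed S-shaped NHPP with mean value function m(t) = a(1 - (1 + bt)e^{-bt}): the total error content a has a conjugate Gamma full conditional, and the shape parameter b is updated by a random-walk Metropolis step on log(b). The priors, proposal, and starting values are assumptions of this sketch and may differ from the paper's algorithms.

    import numpy as np

    rng = np.random.default_rng(42)

    def gibbs_delayed_s_shaped(times, T, n_iter=5000,
                               a_prior=(1.0, 0.001), b_prior=(1.0, 1.0), step=0.3):
        """Metropolis-within-Gibbs for the delayed S-shaped NHPP observed on (0, T]."""
        t = np.asarray(times, float)
        n = len(t)
        G = lambda b: 1.0 - (1.0 + b * T) * np.exp(-b * T)   # m(T) = a * G(T)

        def log_post_b(b, a):
            # log of prod(b^2 t_i e^{-b t_i}) * exp(-a G(b)) * Gamma prior on b (constants dropped)
            return (2.0 * n * np.log(b) - b * t.sum() - a * G(b)
                    + (b_prior[0] - 1.0) * np.log(b) - b_prior[1] * b)

        a, b = float(n), 1.0 / np.mean(t)                    # rough starting values
        draws = np.empty((n_iter, 2))
        for i in range(n_iter):
            # Gibbs step: a | b, data ~ Gamma(a_prior[0] + n, rate = a_prior[1] + G(b))
            a = rng.gamma(a_prior[0] + n, 1.0 / (a_prior[1] + G(b)))
            # Metropolis step on log(b), with the log-scale Jacobian in the ratio
            b_new = b * np.exp(step * rng.normal())
            log_r = log_post_b(b_new, a) - log_post_b(b, a) + np.log(b_new / b)
            if np.log(rng.uniform()) < log_r:
                b = b_new
            draws[i] = a, b
        return draws

    # hypothetical error-detection times (e.g., CPU hours)
    fail_times = [12, 30, 49, 62, 80, 95, 104, 120, 131, 145, 160, 166]
    post = gibbs_delayed_s_shaped(fail_times, T=180.0)[1000:]
    print("posterior mean of total error content a:", post[:, 0].mean())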

  • Efficient optimization of all-terminal reliable networks, using an evolutionary approach

    Publication Year: 1997 , Page(s): 18 - 26
    Cited by:  Papers (27)

    The use of computer communication networks has been increasing rapidly in order to: (1) share expensive hardware and software resources, and (2) provide access to the main system from distant locations. The reliability and cost of these systems are important and are largely determined by the network topology, which consists of nodes and the links between nodes. The selection of an optimal network topology is an NP-hard combinatorial problem, so classical enumeration-based methods grow exponentially with network size. In this study, a heuristic search algorithm inspired by evolutionary methods is presented to solve the all-terminal network design problem when considering cost and reliability. The genetic algorithm heuristic is considerably enhanced over conventional implementations to improve effectiveness and efficiency. This general optimization approach is computationally efficient and highly effective on a large suite of test problems with search spaces of up to 2·10^90.

  • A probabilistic characterization of adhesive wear in metals

    Publication Year: 1997 , Page(s): 38 - 44
    Cited by:  Papers (1)

    Adhesive wear is one of the predominant mechanisms responsible for mechanical component failures, which result in huge economic losses. Adhesive wear has been studied; the deterministic model formulated by Archard is frequently used. However, its parameters, such as material hardness and the wear coefficient, show considerable variation around their nominal values; this variation necessitates a statistical framework for studying the wear law. This paper treats these parameters as probabilistic quantities. Investigation of the model involved an experiment using a pin-on-bushing machine. Load and sliding speed are used as variables while the geometry and material of the friction couple are held constant. The generated data are analyzed using simple statistical methods. The randomness of wear and hardness is best modeled by the Weibull distribution, whereas the wear coefficient is modeled by a lognormal distribution. Scatter parameters and the median life of wear are explored for various velocities as time progresses; the median-life characteristics are mathematically modeled. The application of these models in accelerated wear testing is highlighted; accelerating by increasing the speed of operation provides a better extrapolation than using heavier loads.
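
    A small Monte Carlo sketch of the probabilistic treatment the abstract describes, built on Archard's adhesive-wear law V = K·W·s/H, with the wear coefficient K lognormal and the hardness H Weibull as the abstract indicates. All distribution parameters and the wear limit below are hypothetical placeholders, not the paper's fitted values.

    import numpy as np

    rng = np.random.default_rng(7)

    def wear_failure_probability(load_N, sliding_m, v_limit_mm3, n_sim=100_000):
        """Monte Carlo on Archard's law: wear volume V = K * W * s / H."""
        K = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=n_sim)   # wear coefficient (dimensionless)
        H = 1100.0 * rng.weibull(8.0, size=n_sim)                     # hardness, N/mm^2
        V = K * load_N * (sliding_m * 1e3) / H                        # wear volume, mm^3
        return np.mean(V > v_limit_mm3)                               # probability wear exceeds the limit

    print(wear_failure_probability(load_N=50.0, sliding_m=2000.0, v_limit_mm3=10.0))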

  • Estimation of the geometric survival distribution

    Publication Year: 1997 , Page(s): 134 - 141
    Cited by:  Papers (1)

    This paper discusses both point and interval estimation of the survivor function S0=Pr{X⩾x0} for the geometric distribution. When the number of devices n⩾50, the performance of the maximum likelihood estimator (MLE) and the uniformly minimum variance unbiased estimator (UMVUE) of S0 are essentially equivalent with respect to the relative mean-square error (RMSE) of S0. However, when the failure probability per time unit p⩾0.50 and n⩽30, the UMVUE is preferable to the MLE with respect to the RMSE. For interval estimation of S0 with no censoring, 4 asymptotic interval estimators are derived from large-sample theory, and one from the exact distribution of the negative binomial. When p⩽0.2 and n⩾30, all 5 interval estimators perform reasonably well with respect to coverage probability. Since the interval estimator derived from the exact distribution can assure “coverage probability”⩾“desired confidence”, this estimator is probably preferable to the asymptotic ones when p⩾0.50 and n⩽10. Finally, right-censoring is considered, in which a failure time that occurs after a fixed follow-up period is censored. The interval estimator based on the asymptotic properties of the MLE is extended to account for right censoring. Monte Carlo simulation is used to evaluate the performance of this interval estimator; the censoring effect on efficiency is discussed for a variety of situations.
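
    A minimal sketch of the point-estimation step, assuming complete (uncensored) data and a geometric distribution on {1, 2, ...}: the MLE of S0 by invariance, plus a Wald-type interval from the delta method as one example of a large-sample interval estimator (the paper's specific asymptotic and exact estimators are not reproduced).

    import numpy as np
    from scipy.stats import norm

    def geometric_survival_mle(x, x0, conf=0.95):
        """MLE and a delta-method CI for S0 = Pr{X >= x0}, geometric on {1, 2, ...}.
        With p_hat = n / sum(x), S0_hat = (1 - p_hat)**(x0 - 1)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        p = n / x.sum()
        s0 = (1.0 - p) ** (x0 - 1)
        # delta method: Var(p_hat) ~ p^2 (1-p) / n, dS0/dp = -(x0-1)(1-p)^(x0-2)
        var_p = p * p * (1.0 - p) / n
        d_s0 = -(x0 - 1) * (1.0 - p) ** (x0 - 2)
        se = np.sqrt(d_s0 * d_s0 * var_p)
        z = norm.ppf(0.5 + conf / 2)
        return s0, (max(0.0, s0 - z * se), min(1.0, s0 + z * se))

    # hypothetical discrete lifetimes (time units to failure)
    lifetimes = np.random.default_rng(1).geometric(0.2, size=50)
    print(geometric_survival_mle(lifetimes, x0=5))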

  • Censored software-reliability models

    Publication Year: 1997 , Page(s): 69 - 75
    Cited by:  Papers (4)

    Nonfailure stops of software execution processes can be viewed as a type of censored data. They can occur in a wide range of computing systems (e.g., concurrent computing systems, data sampling systems, transaction processing systems) for technical or nontechnical reasons. Using existing software reliability models to deal with this type of censored software reliability data, viz., successive inter-stop times, where a stop can be a failure or a nonfailure stop, means that nonfailure stops are disregarded. This paper develops censored software reliability models, or censored forms of existing software reliability models, to account for nonfailure stops and to deal directly with censored software reliability data. The paper shows how to develop censored forms of the Jelinski-Moranda, Schick-Wolverton, Moranda Geometric, and Littlewood-Verrall models, and discusses the corresponding validation forms. Censored forms of other software reliability models can be developed in a similar way. Censored software reliability models reduce to noncensored software reliability models if no nonfailure stop occurs.
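
    To make the idea concrete, the sketch below writes down one plausible censored form of the Jelinski-Moranda model: failure stops contribute an exponential density term and remove a fault, while nonfailure stops contribute only the survivor term. The exact formulation in the paper may differ, and the data are hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    def censored_jm_nll(params, intervals, is_failure):
        """Negative log-likelihood of a censored Jelinski-Moranda model.
        The hazard during an interval is phi * (N - k), where k faults have been
        corrected so far; only failure stops decrement the remaining fault count."""
        N, phi = params
        nll, k = 0.0, 0
        for x, failed in zip(intervals, is_failure):
            rate = phi * (N - k)
            if rate <= 0.0:
                return np.inf                    # infeasible parameter values
            nll += rate * x                      # survivor part: exp(-rate * x)
            if failed:
                nll -= np.log(rate)              # density part: rate * exp(-rate * x)
                k += 1
        return nll

    # hypothetical inter-stop times (hours) and stop types (True = failure stop)
    x = [10, 7, 3, 14, 6, 21, 9, 30, 12, 40]
    fail = [True, False, True, True, False, True, False, True, True, False]
    res = minimize(censored_jm_nll, x0=[10.0, 0.01], args=(x, fail), method="Nelder-Mead")
    print("estimated initial fault count N and per-fault rate phi:", res.x)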

  • A fault-tolerant protocol for election in chordal-ring networks with fail-stop processor failures

    Publication Year: 1997 , Page(s): 11 - 17
    Cited by:  Papers (3)  |  Patents (1)

    In a distributed computer system, a group of processors is connected by communication links into a network. Each processor (node) of the network has an identity (a unique integer value) that is not related to its position in the network (its address). A processor's identity is known only to that processor. In the problem of leader election, exactly one processor among a network of processors has to be distinguished as the leader. Many efficient election protocols have previously been proposed for networks with a sense of direction. In particular, sequential search is used for election in a reliable complete network, and a multi-token search method is used in a faulty complete network. However, election protocols on a faulty ChRgN (chordal ring network) have not been investigated by other researchers. This paper addresses this issue by: studying the problem of leader election in an asynchronous ChRgN with a sense of direction and in the presence of undetectable fail-stop processor failures; proposing a new election protocol which (a) combines the concepts of sequential search and multi-token search, and (b) uses an efficient routing algorithm to reduce the total number of messages used; presenting a protocol for a ChRgN of n processors with l chords/processor and at most f fail-stop faulty processors, with message complexity O(n+(n/l)log(n)+k·f), where k is the number of processors starting the election process spontaneously and at most f<l processors are faulty; and showing that the message complexity of the protocol is optimal within a constant factor when l⩾log(n). This paper considers only processor fail-stop failure types.


Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong