
IEEE Transactions on Reliability

Issue 3 • September 2007


  • Table of contents

    Page(s): C1 - 365
  • IEEE Transactions on Reliability publication information

    Page(s): C2
  • R-Impact: Reliability-Based Citation Impact Factor

    Page(s): 366 - 367
  • In this issue - Technically

    Page(s): 368
  • Accelerated Discrete Degradation Models for Leakage Current of Ultra-Thin Gate Oxides

    Page(s): 369 - 380

    Using degradation measurements is becoming more important in reliability studies because fewer failures are observed during short experiment times. Most of the literature discusses continuous degradation processes such as Wiener, gamma, linear, and nonlinear random effect processes. However, some types of degradation processes do not occur in a continuous pattern. Discrete degradations have been found in many practical problems, such as leakage current of thin gate oxides in nano-technology, crack growth in metal fatigue, and fatigue damage of laminates used for industrial specimens. In this research, we establish a procedure based on a likelihood approach to assess reliability using a discrete degradation model. A non-homogeneous Weibull compound Poisson model with accelerated stress variables is considered. We provide a general maximum likelihood approach for the estimates of the model parameters, and derive the breakdown time distributions. A data set measuring the leakage current of nanometer scale gate oxides is analyzed using the procedure. Goodness-of-fit tests are considered to check the proposed models for the amount of degradation increment, and the rate of event occurrence. The estimated reliabilities are calculated at a lower stress level of the accelerating variable, and approximate confidence intervals of quantiles for the breakdown time distribution are given to quantify the uncertainty of the estimates. Finally, a simulation study based on the gate oxide data is conducted for the discrete degradation model to explore the finite sample properties of the proposed procedure.

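    To make the idea of a discrete (jump) degradation process concrete, the sketch below simulates breakdown times from a non-homogeneous compound Poisson process with Weibull-distributed increments, the general model family named in the abstract above. All parameter values (the Weibull shape/scale, the power-law rate, the breakdown threshold) are illustrative assumptions, and the sketch is a plain Monte Carlo simulation, not the paper's likelihood-based estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def breakdown_time(shape_w=1.5, scale_w=0.2,   # Weibull jump-size parameters (assumed)
                   a=0.5, b=1.3,               # power-law rate lambda(t) = a*b*t**(b-1) (assumed)
                   threshold=5.0, t_max=100.0):
    """Simulate one degradation path and return the first time the
    cumulative degradation crosses `threshold` (inf if it never does)."""
    t, level = 0.0, 0.0
    lam_max = a * b * t_max ** (b - 1)          # upper bound on the rate, used for thinning
    while True:
        t += rng.exponential(1.0 / lam_max)     # candidate event from the homogeneous majorant
        if t >= t_max:
            return np.inf
        if rng.random() < (a * b * t ** (b - 1)) / lam_max:   # accept with prob lambda(t)/lam_max
            level += scale_w * rng.weibull(shape_w)           # Weibull-distributed jump
            if level >= threshold:
                return t

times = np.array([breakdown_time() for _ in range(2000)])
print("median simulated breakdown time:", np.median(times[np.isfinite(times)]))
```
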
  • Evaluating Transient Error Effects in Digital Nanometer Circuits

    Page(s): 381 - 391

    Radiation-induced transient errors have become a great threat to the reliability of nanometer circuits. The need for cost-effective robust circuit design mandates the development of efficient reliability metrics. We present a novel "noise impact analysis" methodology to evaluate the transient error effects in static CMOS digital circuits. With both the circuit, and the transient noise abstracted in the format of matrices, the circuit-noise interaction is modeled by a series of matrix transformations. During the transformation, factors that potentially affect the propagation & capture of transient errors are modeled as matrix operations. Finally, a "noise capture ratio" is computed as the probability of a sequential element capturing transient noise inside the combinational logic; it is used as a measure of the transient noise effects in the circuit. Comparison with SPICE simulation demonstrates that our technique can accurately, yet quickly, estimate the probability of transient errors causing observable error effects. The proposed methodology will greatly facilitate the economic design of robust nanometer circuits.

  • Statistical Models for Hot Electron Degradation in Nano-Scaled MOSFET Devices

    Page(s): 392 - 400

    In a MOS structure, the generation of hot carrier interface states is a critical feature of the item's reliability. On the nano-scale, there are problems with degradation in transconductance, shift in threshold voltage, and decrease in drain current capability. Quantum mechanics has been used to relate this decrease to degradation, and device failure. Although the lifetime, and degradation of a device are typically used to characterize its reliability, in this paper we model the distribution of hot-electron activation energies, which is appealing because it follows a two-point discrete mixture of logistic distributions. The logistic mixture presents computational problems that are addressed in simulation.

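    The two-point discrete mixture of logistic distributions mentioned above can be written down directly; the snippet below evaluates such a mixture density and draws samples from it. The mixing weight and the two location/scale pairs are placeholder values chosen for illustration, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder parameters: weight w on the first component, (location, scale) per component.
w, (m1, s1), (m2, s2) = 0.3, (0.8, 0.05), (1.1, 0.08)

def logistic_pdf(x, m, s):
    z = np.exp(-(x - m) / s)
    return z / (s * (1.0 + z) ** 2)

def mixture_pdf(x):
    """Density of the two-point mixture of logistic distributions."""
    return w * logistic_pdf(x, m1, s1) + (1 - w) * logistic_pdf(x, m2, s2)

def mixture_sample(n):
    """Draw n activation energies: pick a component, then sample it."""
    comp = rng.random(n) < w
    return np.where(comp, rng.logistic(m1, s1, n), rng.logistic(m2, s2, n))

x = mixture_sample(10_000)
print("sample mean:", x.mean(), "  density at 0.9:", mixture_pdf(0.9))
```
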
  • A Review of Reliability Research on Nanotechnology

    Page(s): 401 - 410

    Nano-reliability measures the ability of a nano-scaled product to perform its intended functionality. At the nano scale, the physical, chemical, and biological properties of materials differ in fundamental, valuable ways from the properties of individual atoms, molecules, or bulk matter. Conventional reliability theories need to be restudied before they can be applied to nano-engineering. Research on nano-reliability is extremely important because nano-structured components account for a high proportion of costs, and serve critical roles in newly designed products. This review introduces the concepts of reliability to nano-technology, and presents current work on identifying the various physical failure mechanisms of nano-structured materials, and devices during the fabrication process, and operation. Modeling techniques for degradation, reliability functions, and failure rates of nano-systems are also reviewed in this work.

  • A Moving Average Non-Homogeneous Poisson Process Reliability Growth Model to Account for Software with Repair and System Structures

    Page(s): 411 - 421

    We develop a moving average non-homogeneous Poisson process (MA NHPP) reliability model which includes the benefits of both time domain, and structure based approaches. This method overcomes the deficiency of existing NHPP techniques that fall short of addressing repair, and internal system structures simultaneously. Our solution adopts an MA approach to cover both methods, and is expected to improve reliability prediction. This paradigm allows software components to vary in nature, and can account for system structures due to its ability to integrate individual component reliabilities on an execution path. Component-level modeling supports sensitivity analysis to guide future upgrades, and updates. Moreover, the integration capability is a benefit for incremental software development, meaning only the affected portion needs to be re-evaluated instead of the entire package, facilitating software evolution to a greater extent than other methods. Several experiments on different system scenarios and circumstances are discussed, indicating the usefulness of our approach.

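    As background for the MA NHPP idea, the sketch below shows the standard NHPP software-reliability calculation that such models build on: given a mean value function m(t), the expected remaining faults and the conditional reliability over a horizon x follow directly from m(t). A Goel-Okumoto mean value function with made-up parameters is used purely for illustration; the paper's component-level, structure-aware moving-average model is not reproduced here.

```python
import math

# Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b*t)).
# a = expected total number of faults, b = per-fault detection rate (illustrative values).
a, b = 120.0, 0.05

def m(t):
    return a * (1.0 - math.exp(-b * t))

def reliability(t, x):
    """Probability of no failure in (t, t + x] for an NHPP:
    R(x | t) = exp(-(m(t + x) - m(t)))."""
    return math.exp(-(m(t + x) - m(t)))

t = 40.0                       # testing time already spent
print("expected faults found so far:", round(m(t), 1))
print("expected remaining faults  :", round(a - m(t), 1))
print("R(10 | t=40)               :", round(reliability(t, 10.0), 4))
```
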
  • Exploring Genetic Programming and Boosting Techniques to Model Software Reliability

    Page(s): 422 - 434

    Software reliability models are used to estimate the probability that software fails at a given time. They are fundamental to planning test activities, and to ensuring the quality of the software being developed. Each project has a different reliability growth behavior, and although several different models have been proposed to estimate reliability growth, none has proven to perform well across different project characteristics. Because of this, some authors have introduced the use of machine learning techniques, such as neural networks, to obtain software reliability models. Neural network-based models, however, are not easily interpreted, and other techniques could be explored. In this paper, we explore an approach based on genetic programming, and also propose the use of boosting techniques to improve performance. We conduct experiments with reliability models based on time, and on test coverage. The obtained results show some advantages of the introduced approach. The models adapt better to the reliability curve, and can be used in projects with different characteristics.

  • Joint Failure Importance for Noncoherent Fault Trees

    Page(s): 435 - 443

    There exist several risk importance measures in the literature for ranking the relative importance of the basic events within a fault tree. Most of the importance measures indicate how important a basic event is with respect to the top event of the fault tree. However, the mutual influence among the basic events should also be considered. This is particularly true in practice when making maintenance decisions with limited resources. This paper investigates the Joint Failure Importance (JFI), which reflects the interaction among basic events, namely, the change in the Birnbaum importance of one basic event when the probability of another basic event changes. Even though the JFI for coherent fault trees and its properties have been examined in the literature, the results cannot be easily extended to noncoherent fault trees. The current work shows that, for both coherent, and noncoherent fault trees, the sign of the JFI can provide useful information. However, the properties of the JFI for noncoherent fault trees are more complex, and do not always coincide with those for coherent fault trees. The Shutdown System Number One (SDS1) in a Canadian Deuterium-Uranium (CANDU) Nuclear Power Plant (NPP) is used to illustrate the theoretical results developed in this paper.

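    For independent basic events and a multilinear top-event probability Q(q_1, ..., q_n), the Birnbaum importance of event i is the partial derivative of Q with respect to q_i, and the JFI of events i and j is the mixed second-order derivative, i.e., how event i's Birnbaum importance changes when q_j changes. The sketch below computes both exactly (via the multilinearity of Q) for a hypothetical two-out-of-three fault tree; it is a generic coherent example, not the paper's noncoherent analysis.

```python
from itertools import product

# Basic-event probabilities for a hypothetical fault tree whose top event
# occurs when at least two of the three basic events occur.
q = {1: 0.05, 2: 0.10, 3: 0.02}

def top(events):
    """Structure function of the top event (True = top event occurs)."""
    return sum(events.values()) >= 2

def top_probability(q):
    """Exact top-event probability for independent basic events."""
    prob = 0.0
    for outcome in product([0, 1], repeat=len(q)):
        events = dict(zip(q, outcome))
        p = 1.0
        for i, occurred in events.items():
            p *= q[i] if occurred else 1.0 - q[i]
        if top(events):
            prob += p
    return prob

def birnbaum(q, i):
    """Birnbaum importance: dQ/dq_i = Q(q_i=1) - Q(q_i=0)."""
    return top_probability({**q, i: 1.0}) - top_probability({**q, i: 0.0})

def jfi(q, i, j):
    """Joint Failure Importance: change in event i's Birnbaum importance
    per unit change of q_j (mixed second derivative of Q)."""
    return (top_probability({**q, i: 1.0, j: 1.0})
            - top_probability({**q, i: 1.0, j: 0.0})
            - top_probability({**q, i: 0.0, j: 1.0})
            + top_probability({**q, i: 0.0, j: 0.0}))

print("Birnbaum importance of event 1:", birnbaum(q, 1))
print("JFI of events 1 and 2        :", jfi(q, 1, 2))
```
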
  • Optimal Resource Allocation for Maximizing Performance and Reliability in Tree-Structured Grid Services

    Page(s): 444 - 453

    The paper considers grid computing systems in which the resource management system (RMS) can divide service tasks into execution blocks (EB), and send these blocks to different resources. To provide a desired level of service reliability, the RMS can assign the same EB to several independent resources for parallel (redundant) execution. By optimally scheduling the partition of the service task, and its distribution among resources, one can achieve the greatest possible expected service performance (i.e. least execution time), or reliability. For solving this optimization problem, the paper suggests an algorithm based on graph theory, a Bayesian approach, and evolutionary optimization. A virtual tree-structure model is constructed in which failure correlation in common communication channels is taken into account. Illustrative examples are presented.

  • On Recent Generalizations of the Weibull Distribution

    Page(s): 454 - 458

    This short communication first offers a clarification of a claim by Nadarajah & Kotz. We then present a short summary (by no means exhaustive) of some well-known, recent generalizations of Weibull-related lifetime models for quick reference. A brief discussion of the properties of this general class is also given. Some future research directions on this topic are also discussed.

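    For reference, the baseline two-parameter Weibull reliability function, and one widely cited generalization of the kind surveyed above (the exponentiated Weibull, which adds a second shape parameter), are written out below. The parameter values are arbitrary; the point is only how the extra shape parameter changes the family.

```python
import math

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull: R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def exp_weibull_reliability(t, beta, eta, alpha):
    """Exponentiated Weibull: F(t) = (1 - exp(-(t/eta)**beta))**alpha,
    so R(t) = 1 - F(t); alpha = 1 recovers the ordinary Weibull."""
    return 1.0 - (1.0 - math.exp(-((t / eta) ** beta))) ** alpha

for t in (50, 100, 200):
    print(t,
          round(weibull_reliability(t, beta=1.8, eta=100.0), 4),
          round(exp_weibull_reliability(t, beta=1.8, eta=100.0, alpha=0.5), 4))
```
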
  • An Exponential Damage Model for Strength of Fibrous Composite Materials

    Page(s): 459 - 463

    When modeling experimental data observed from carefully performed tensile strength tests, statistical distributions are typically used to describe the strength of composite specimens. Recently, cumulative damage models derived for predicting tensile strength have been shown to be superior to other models when used to fit composite strength data. Here, an alternative model is developed which is based on an exponential cumulative damage approach. The model is shown to exhibit a similar structural form to the other models in the literature so that previous theory for cumulative damage models can be utilized to find parameter estimates.

  • k-out-of-n:G System Reliability With Imperfect Fault Coverage

    Page(s): 464 - 473

    Systems requiring very high levels of reliability, such as aircraft controls or spacecraft, often use redundancy to achieve their requirements. Reliability models for such redundant systems have been widely treated in the literature. These models describe k-out-of-n:G systems, where n is the number of components in the system, and k is the minimum number of components that must work if the overall system is to work. Most of this literature treats the perfect fault coverage case, meaning that the system is perfectly capable of detecting, isolating, and accommodating failures of the redundant elements. However, the probability of accomplishing these tasks, termed fault coverage, is frequently less than unity. Correct modeling of imperfect coverage is critical to the design of highly reliable systems. Even very high values of coverage, only slightly less than unity, will have a major impact on the overall system reliability when compared to the ideal system with perfect coverage. The appropriate coverage modeling approach depends on the system design architecture, particularly the technique(s) used to select among the redundant elements. This paper demonstrates how coverage effects can be computed, using both combinatorial, and recursive techniques, for four different coverage models: perfect fault coverage (PFC), element level coverage (ELC), fault level coverage (FLC), and one-on-one level coverage (OLC). The designation of PFC, ELC, FLC, and OLC to distinguish types of coverage modeling is suggested in this paper.

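    The impact of near-unity coverage is easy to see numerically. The sketch below uses one common element-level-coverage formulation for i.i.d. components: the system survives only if at least k components work and every component failure is covered, with each failure covered independently with probability c. This is an illustrative formulation under those independence assumptions, not necessarily the exact set of models developed in the paper.

```python
from math import comb

def kn_reliability_elc(n, k, p, c):
    """k-out-of-n:G reliability with i.i.d. components under an element-level
    coverage assumption: at least k components must work, and each of the
    n - j failed components must have its failure covered (probability c);
    an uncovered failure takes the whole system down."""
    return sum(comb(n, j) * p**j * ((1 - p) * c) ** (n - j) for j in range(k, n + 1))

# Example: 3-out-of-5 system, component reliability 0.95.
for c in (1.0, 0.999, 0.99, 0.9):
    print(f"coverage {c}:  R = {kn_reliability_elc(5, 3, 0.95, c):.6f}")
```

    Even coverage of 0.99 visibly degrades the system reliability relative to the perfect-coverage case, which is the abstract's central point.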
  • A Classification Tree Based Approach for the Development of Minimal Cut and Path Vectors of a Capacitated Network

    Page(s): 474 - 487

    This paper presents a holistic method that links together Monte-Carlo simulation, exact algorithms, and a data mining technique to develop approximate bounds on the reliability of capacitated two-terminal networks. The method uses simulation to generate network configurations that are then evaluated with exact algorithms to determine whether they correspond to a network success or failure. Subsequently, the method uses commercially available software to generate a classification tree that can then be analyzed, and transformed into capacitated minimal cut or path vectors. These vectors correspond to the capacitated version of the binary minimal cuts & paths of a network. This is the first time that these vectors have been obtained from a decision tree, and in this respect, research efforts have focused on two main directions: 1) deriving an efficient yet intuitive approach to simulating network configurations so as to obtain the most accurate information; and, given that the classification tree method could for some applications provide imperfect or incomplete information without the explicit knowledge of the reliability engineer, 2) understanding the relationship between the obtained vectors, the real capacitated minimal cut & path vectors of the network, and its reliability. The proposed method has been tested on a set of case networks to assess its validity & accuracy. The results obtained show that the technique described is effective, simple, and widely applicable.

  • A Simple Heuristic Algorithm for Generating All Minimal Paths

    Page(s): 488 - 494

    Evaluating network reliability is an important topic in the planning, design, and control of network systems. In this paper, an intuitive heuristic algorithm is developed to find all minimal paths (MP) by repeatedly adding a path, or an edge, into a network until the network is equal to the original network. The proposed heuristic algorithm is easier to understand & implement than the existing known heuristic algorithm. Because it does not generate any duplicate MP, it is also more efficient. The correctness of the proposed algorithm is analysed, and proven. A benchmark example illustrates how to evaluate the network reliability using the proposed heuristic algorithm.

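    For a two-terminal network, the minimal paths are exactly the simple source-sink paths, so a straightforward depth-first enumeration already defines the objects the paper is after. The sketch below is that baseline enumeration on a small bridge network; it is not the paper's add-paths-and-edges heuristic, only a way to see what "all minimal paths" means.

```python
def all_minimal_paths(adjacency, source, sink):
    """Enumerate all minimal paths (simple source-sink paths) of an undirected
    network given as {node: set(neighbours)}, by depth-first search."""
    paths, stack = [], [(source, [source])]
    while stack:
        node, path = stack.pop()
        if node == sink:
            paths.append(path)
            continue
        for nxt in sorted(adjacency[node]):
            if nxt not in path:              # keep the path simple (no repeated node)
                stack.append((nxt, path + [nxt]))
    return paths

# A small bridge network: s-1, s-2, 1-2 (bridge), 1-t, 2-t.
net = {'s': {'1', '2'}, '1': {'s', '2', 't'}, '2': {'s', '1', 't'}, 't': {'1', '2'}}
for p in all_minimal_paths(net, 's', 't'):
    print(' - '.join(p))
```
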
  • Ternary State Circular Sequential k-out-of-n Congestion System

    Page(s): 495 - 505

    A ternary state circular sequential k-out-of-n congestion (TSCSknC) system is presented. The system is an extension of the circular sequential k-out-of-n congestion (CSknC) system, which considers two connection states: a) congestion (server busy), and b) successful. In contrast, a TSCSknC system considers three connection states: i) congestion, ii) break-down, and iii) successful. It finds applications in reliable systems designed to prevent single-point failures, such as those used in (k,n) secret key sharing systems. The system further assumes that each of the n servers has known connection probabilities for the congestion, break-down, and successful states. These n servers are arranged in a circle, and connection attempts are made to them sequentially, round after round. If a server is not congested, the connection can be either successful, or a failure. Previously attempted servers are blocked from reconnection if they ended in state ii), or iii). Congested servers are attempted repeatedly until k servers are connected successfully, or (n-k+1) servers have a break-down status. In other words, the system works when k servers are successfully connected, but fails when (n-k+1) servers are in the break-down state. In this paper, we present the recursive, and marginal formulas for the system success probability, the system failure probability, as well as the average stop length, i.e. the number of connection attempts needed to bring the system to a success or failure state, and its computational complexity.

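    The connection rules above are concrete enough to simulate directly. The sketch below is a Monte Carlo run of such a system with a single (congestion, breakdown, success) probability triple shared by all servers, which simplifies the per-server probabilities the abstract allows; it estimates the system success probability and the average stop length, but does not reproduce the paper's recursive or marginal formulas.

```python
import random

def run_tscsknc(n, k, p_cong, p_break, rng):
    """One run: attempt servers around the circle until k succeed (system
    success) or n-k+1 break down (system failure).  Congested servers stay
    eligible; successful and broken servers are skipped in later rounds.
    Returns (system_success, number_of_connection_attempts)."""
    state = ['open'] * n
    successes = breakdowns = attempts = 0
    while True:
        for i in range(n):
            if state[i] != 'open':
                continue
            attempts += 1
            u = rng.random()
            if u < p_cong:
                continue                                  # congested: retried in a later round
            elif u < p_cong + p_break:
                state[i], breakdowns = 'down', breakdowns + 1
            else:
                state[i], successes = 'ok', successes + 1
            if successes >= k:
                return True, attempts
            if breakdowns >= n - k + 1:
                return False, attempts

rng = random.Random(2)
runs = [run_tscsknc(n=7, k=3, p_cong=0.3, p_break=0.1, rng=rng) for _ in range(20_000)]
print("estimated success probability:", sum(ok for ok, _ in runs) / len(runs))
print("average stop length          :", sum(a for _, a in runs) / len(runs))
```
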
  • K-Terminal Network Reliability Measures With Binary Decision Diagrams

    Page(s): 506 - 515

    We present a network decomposition method using binary decision diagrams (BDD), a state-of-the-art data structure to encode, and manipulate Boolean functions, for computing the reliability of networks such as computer, communication, or power networks. We consider the K-terminal reliability measure R_K, which is defined as the probability that a subset K of nodes can communicate with each other, taking into account the possible failures of the network links. We present an exact algorithm for computing the K-terminal reliability of a network with perfect vertices in O(m·Fmax·2^Fmax·B_Fmax) time, where B_Fmax is the Bell number of the maximum boundary set of vertices Fmax, and m is the number of network links. Several examples, and experiments show the effectiveness of this approach.

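    A brute-force baseline helps fix the definition of R_K: enumerate every up/down state of the links, and sum the probability of states in which all nodes of K lie in one connected component of the working links. The enumeration below is exponential in the number of links and serves only as a reference for small networks; the BDD decomposition in the paper exists precisely to avoid it.

```python
from itertools import product

def k_terminal_reliability(nodes, links, K):
    """links: {(u, v): p_work}.  Returns the probability that all nodes in K
    are connected through working links (brute-force state enumeration)."""
    edges = list(links)
    total = 0.0
    for states in product([0, 1], repeat=len(edges)):
        prob, alive = 1.0, []
        for edge, up in zip(edges, states):
            prob *= links[edge] if up else 1.0 - links[edge]
            if up:
                alive.append(edge)
        # Union-find over working links to test whether K sits in one component.
        parent = {v: v for v in nodes}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in alive:
            parent[find(u)] = find(v)
        if len({find(v) for v in K}) == 1:
            total += prob
    return total

# Bridge network, every link works with probability 0.9, K = {s, t}.
links = {('s', 'a'): 0.9, ('s', 'b'): 0.9, ('a', 'b'): 0.9, ('a', 't'): 0.9, ('b', 't'): 0.9}
print(k_terminal_reliability({'s', 'a', 'b', 't'}, links, K={'s', 't'}))
```
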
  • Reliability for Sparsely Connected Consecutive-k Systems

    Page(s): 516 - 524

    We introduce the system of consecutive failures with sparse d, which is a natural extension of consecutive-k systems. A series of generalizations of consecutive-k systems are then discussed, such as consecutive-k-out-of-n:F systems with sparse d, M consecutive-k-out-of-n:F systems with sparse d, and (n, f, k):F systems with sparse d. We present the formulation for the system reliability of these generalized consecutive-k systems with various component settings in terms of the finite Markov chain imbedding approach, along with two numerical examples.

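    Finite Markov chain imbedding is easiest to see on the base case without sparseness: for a linear consecutive-k-out-of-n:F system, track the current run of consecutive failures as the state of a small Markov chain and advance it one component at a time. The sketch below does exactly that for i.i.d. components; the d-sparse generalizations in the paper enlarge the imbedded state space and are not reproduced here.

```python
import numpy as np

def consecutive_k_out_of_n_F(n, k, p):
    """Reliability of a linear consecutive-k-out-of-n:F system with i.i.d.
    components (each works with probability p), via finite Markov chain
    imbedding.  State j < k = current run of j consecutive failed components;
    state k = absorbing 'system failed' state."""
    q = 1.0 - p
    M = np.zeros((k + 1, k + 1))
    for j in range(k):
        M[j, 0] = p          # a working component resets the failure run
        M[j, j + 1] = q      # a failed component extends the run
    M[k, k] = 1.0            # absorbing failure state
    dist = np.zeros(k + 1)
    dist[0] = 1.0
    for _ in range(n):
        dist = dist @ M      # add one more component
    return dist[:k].sum()    # probability the failure run never reached k

print(consecutive_k_out_of_n_F(n=10, k=3, p=0.9))
```
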
  • Reliability Analysis of Hierarchical Systems Using Statistical Moments

    Page(s): 525 - 533

    In many practical engineering circumstances, system reliability analysis is complicated by the fact that the failure time distributions of the constituent subsystems cannot be accurately modeled by standard distributions. In this paper, we present a low-cost, compositional approach based on the use of the first four statistical moments to characterize the failure time distributions of the constituent components, subsystems, and top-level system. The approach is based on the use of Pearson distributions as an intermediate analytical vehicle, in terms of which the constituent failure time distributions are approximated. The analysis technique is presented for k-out-of-n systems with identical subsystems, series systems with different subsystems, and systems exploiting standby redundancy. The moment-in-moment-out approach allows for the analysis of systems with arbitrary hierarchy, and arbitrary (unimodal) failure time distributions, provided the subsystems are independent such that the resulting failure time can be expressed in terms of sums or order statistics. The technique consistently exhibits very good accuracy (on average, much less than 1 percent error) at very modest computing cost.

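    The moment-in-moment-out idea can be illustrated at its simplest level: given component lifetimes, characterize each distribution, and the resulting system lifetime, by its first four moments. The Monte Carlo sketch below does this for a hypothetical two-subsystem series system (system lifetime = minimum of the two); it only shows what "propagating four moments up the hierarchy" means, whereas the paper carries the moments analytically through Pearson-distribution approximations rather than by simulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-subsystem series system with differing lifetime models.
t1 = rng.weibull(1.5, 200_000) * 1000.0                 # subsystem 1: Weibull(1.5), scale 1000 h
t2 = rng.lognormal(mean=6.5, sigma=0.4, size=200_000)   # subsystem 2: lognormal
system = np.minimum(t1, t2)                             # series system fails at the first failure

def first_four_moments(x):
    """Mean, variance, skewness, kurtosis: the summary passed up the hierarchy."""
    m = x.mean()
    c = x - m
    var = (c ** 2).mean()
    return m, var, (c ** 3).mean() / var ** 1.5, (c ** 4).mean() / var ** 2

for name, x in (("subsystem 1", t1), ("subsystem 2", t2), ("series system", system)):
    m, v, s, k = first_four_moments(x)
    print(f"{name:13s} mean={m:8.1f} var={v:10.1f} skew={s:5.2f} kurt={k:5.2f}")
```
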
  • Reliability and Profit Evaluation of a PLC Hot Standby System Based on a Master-Slave Concept and Two Types of Repair Facilities

    Page(s): 534 - 539

    Programmable Logic Controllers (PLC) are frequently used by a large number of companies, such as steel plants, and biscuit manufacturers. Various plants/companies use two PLC at a time: one operative, and the other as a hot standby to avoid large losses. Analysis of the reliability, and profit of a hot standby PLC system is therefore of great importance; hence the present paper examines such a system wherein two PLC work in master-slave fashion. Initially, the master unit is operative, and the slave unit is in hot standby. The slave unit can also fail, but with a lower failure rate than the master unit. The master unit has priority of operation & repair over the slave unit. While operating, the latest information from the master unit is continuously transferred to the slave unit. There are three types of failure: minor, major-repairable, and major-irreparable. The ordinary repairman who stays with the system repairs the minor failures. The expert repairman who is available upon demand repairs the major failures. Various measures of the system effectiveness, such as the mean time to system failure, steady-state availability, busy period of the ordinary as well as expert repairmen, expected number of replacements, etc. are obtained by using semi-Markov processes, and regenerative point techniques. The profit incurred by the system is evaluated, and a graphical study is also made. Real data from an industrial application are used in this study.

  • Reliability Analysis of Phased-Mission System With Independent Component Repairs

    Page(s): 540 - 551

    This paper proposes a hierarchical modeling approach for the reliability analysis of phased-mission systems with repairable components. The components at the lower level are described by continuous time Markov chains which allow complex component failure/repair behaviors to be modeled. At the upper level, there is a combinatorial model whose structure function is represented by a binary decision diagram (BDD). Two BDD ordering strategies, and consequently two evaluation algorithms, are proposed to compute the phased-mission system (PMS) reliability based on Markov models for components, and a BDD representation of system structure function. The performance of the two evaluation algorithms is compared. One algorithm generates a smaller BDD, while the other has shorter execution time. Several examples, and experiments are presented in the paper to illustrate the application, and the advantages of our approach.

  • Classifying Weak, and Strong Components Using ROC Analysis With Application to Burn-In

    Page(s): 552 - 561

    Any population of components produced might be composed of two sub-populations: weak components are less reliable, and deteriorate faster, whereas strong components are more reliable, and deteriorate more slowly. When selecting an approach to classifying the two sub-populations, one could build a criterion aiming to minimize the expected mis-classification cost due to mis-classifying weak (strong) components as strong (weak). However, in practice, the unit mis-classification cost, such as the cost of mis-classifying a strong component as weak, cannot be estimated precisely, and minimizing the expected mis-classification cost becomes more difficult. This problem is considered in this paper by using ROC (Receiver Operating Characteristic) analysis, which is widely used in the medical decision making community to evaluate the performance of diagnostic tests, and in machine learning to select among categorical models. The paper also uses ROC analysis to determine the optimal time for burn-in to remove the weak population. The presented approaches can be used in scenarios where the following information cannot be estimated precisely: 1) the life distributions of the sub-populations, 2) the mis-classification cost, and 3) the proportions of the sub-populations in the entire population.

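    The ROC machinery referred to above is simple to set up: for a screening variable (for example, degradation measured during burn-in), sweep a classification threshold, and at each threshold record the true-positive rate (weak units correctly flagged) and false-positive rate (strong units wrongly flagged). The sketch below does this for simulated data from two hypothetical sub-populations; it is not the paper's cost model or its optimal burn-in time derivation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical screening measurements: weak units tend to score higher.
weak = rng.normal(loc=3.0, scale=1.0, size=300)     # the sub-population we want to flag
strong = rng.normal(loc=1.0, scale=1.0, size=700)

scores = np.concatenate([weak, strong])
is_weak = np.concatenate([np.ones_like(weak), np.zeros_like(strong)]).astype(bool)

thresholds = np.sort(np.unique(scores))[::-1]
tpr = [(scores[is_weak] >= th).mean() for th in thresholds]    # weak flagged as weak
fpr = [(scores[~is_weak] >= th).mean() for th in thresholds]   # strong flagged as weak

# Area under the ROC curve via the trapezoidal rule (0.5 = useless, 1.0 = perfect).
auc = sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2 for i in range(len(fpr) - 1))
print("AUC of the screening rule:", round(auc, 3))
```
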
  • Quasi-Random Testing

    Page(s): 562 - 568

    Our paper proposes an implementable procedure for using the method of quasi-random sequences in software debug testing. In random testing, the sequence of tests (if considered as points in an n-dimensional unit hypercube) will give rise to regions where there are clusters of points, as well as underpopulated regions. Quasi-random sequences, also known as low-discrepancy or low-dispersion sequences, are sequences of points in such a hypercube that are spread more evenly throughout. Based on the observation that program faults tend to lead to contiguous failure regions within a program's input domain, and that an even spread of random tests enhances the failure detection effectiveness for certain failure patterns, we examine the use of quasi-random sequences as a replacement for random sequences in automated testing. Because there are only a small number of quasi-random sequence generation algorithms, and each of them can only generate a small number of distinct sequences, the applicability of quasi-random sequences in testing real programs is severely restricted. To alleviate this problem, we examine the use of two types of randomized quasi-random sequences, which are quasi-random sequences permuted in a nondeterministic fashion in such a way as to retain their low discrepancy properties. We show that testing using randomized quasi-random sequences is often significantly more effective than random testing.

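    A concrete way to see the difference described above is to generate test inputs from a low-discrepancy sequence instead of a pseudo-random generator. The sketch below implements a 2-D Halton sequence (one standard quasi-random construction; the randomized variants studied in the paper are not shown) and compares how evenly it fills the unit square against uniform pseudo-random points, using the number of empty cells in a coarse grid as a crude dispersion measure.

```python
import random

def halton(index, base):
    """index-th element (1-based) of the van der Corput sequence in `base`."""
    value, f = 0.0, 1.0 / base
    while index > 0:
        value += f * (index % base)
        index //= base
        f /= base
    return value

def halton_2d(n):
    """First n points of the 2-D Halton low-discrepancy sequence (bases 2 and 3)."""
    return [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]

def empty_cells(points, grid=10):
    """Crude evenness measure: empty cells in a grid x grid partition of the unit square."""
    filled = {(int(x * grid), int(y * grid)) for x, y in points}
    return grid * grid - len(filled)

rng = random.Random(5)
n = 200
pseudo = [(rng.random(), rng.random()) for _ in range(n)]
print("empty cells, pseudo-random test points:", empty_cells(pseudo))
print("empty cells, Halton test points       :", empty_cells(halton_2d(n)))
```
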

Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong