
IEEE Transactions on Reliability

Issue 2 • June 2004


Displaying Results 1 - 19 of 19
  • Table of contents

    Publication Year: 2004 , Page(s): c1
  • IEEE Transactions on Reliability publication information

    Publication Year: 2004 , Page(s): c2
  • Comments on PMS BDD generation in "A BDD-based algorithm for reliability analysis of phased-mission systems"

    Publication Year: 2004 , Page(s): 169 - 173
    Cited by:  Papers (6)

    This paper discusses some of the difficulties in the paper: "A BDD-based algorithm for reliability analysis of phased-mission systems" by X. Zang, H. Sun, and K. S. Trivedi.

  • A separable ternary decision diagram based analysis of generalized phased-mission reliability

    Publication Year: 2004 , Page(s): 174 - 184
    Cited by:  Papers (18)

    This paper considers the reliability analysis of a generalized phased-mission system (GPMS) with two-level modular imperfect coverage. Because of their dynamic behavior & statistical dependencies, generalized phased-mission systems pose significant challenges in reliability modeling & analysis. A new family of decision diagrams called ternary decision diagrams (TDD) is proposed for use in the resulting separable approach to GPMS reliability evaluation. Compared with existing methods, the accuracy of the solution increases due to the consideration of modular imperfect coverage, while the computational complexity decreases due to the nature of the TDD and the separation of mission imperfect coverage from the solution combinatorics. The TDD-based separable approach is presented and compared with existing methods for analyzing GPMS reliability, and an example generalized phased-mission system is analyzed to illustrate the advantages of the approach.

  • Transient analysis of a repairable system, using phase-type distributions and geometric processes

    Publication Year: 2004 , Page(s): 185 - 192
    Cited by:  Papers (12)

    The transient behavior of a system whose operational and repair times follow phase-type distributions is studied. These times alternate over the evolution of the system, and form 2 separate geometric processes. The stationary analysis of this system, when the repair times form a renewal process, has been carried out previously. This paper also considers operational times partitioned into two well-distinguished, successively occupied classes: good, and preventive. An algorithmic approach is used to determine the transition probabilities of the Markov process governing the system, and further performance measures beyond those previously reported are calculated in a well-structured form. The results are applied to a numerical example, and the transient quantities are compared with those obtained in the stationary case. The computational implementation of the mathematical expressions is performed in Matlab.

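In the simplest special case of the model above (single-phase, i.e., exponential, operational & repair times and no geometric deterioration), the system reduces to a two-state Markov model whose transient availability has a closed form. A minimal sketch with hypothetical rates, not the paper's phase-type algorithm:

```python
import math

def availability(t, lam, mu):
    """Transient availability of a two-state (up/down) repairable system
    with exponential failure rate lam and repair rate mu: the probability
    that the system is up at time t, starting up at t = 0."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

lam, mu = 0.1, 1.0   # hypothetical failure & repair rates
for t in (0.0, 1.0, 10.0, 100.0):
    print(f"A({t:6.1f}) = {availability(t, lam, mu):.4f}")
# A(t) decays from 1 toward the steady-state value mu/(lam+mu)
```

The transient term decays at rate lam+mu, which is what makes comparing transient quantities with the stationary case meaningful.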
  • Optimal allocation of minimal & perfect repairs under resource constraints

    Publication Year: 2004 , Page(s): 193 - 199
    Cited by:  Papers (20)

    The effect of a repair of a complex system can usually be approximated by the following two types: minimal repair for which the system is restored to its functioning state with minimum effort, or perfect repair for which the system is replaced or repaired to a good-as-new state. When both types of repair are possible, an important problem is to determine the repair policy; that is, the type of repair which should be carried out after a failure. In this paper, an optimal allocation problem is studied for a monotonic failure rate repairable system under some resource constraints. In the first model, the numbers of minimal & perfect repairs are fixed, and the optimal repair policy maximizing the expected system lifetime is studied. In the second model, the total amount of repair resource is fixed, and the costs of each minimal & perfect repair are assumed to be known. The optimal allocation algorithm is derived in this case. Two numerical examples are shown to illustrate the procedures.

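The two repair types can be contrasted in a small Monte Carlo sketch (an illustration of the first model's setting, not the paper's optimization algorithm): minimal repair leaves the virtual age unchanged, perfect repair resets it, and different orderings of a fixed repair budget give different expected lifetimes.

```python
import math
import random

def first_failure(eta, beta, rng):
    # Fresh Weibull(eta, beta) lifetime, via inverse-CDF sampling.
    return eta * (-math.log(rng.random())) ** (1.0 / beta)

def failure_after_minimal(age, eta, beta, rng):
    # Minimal repair is "as bad as old": the next failure time is a
    # Weibull draw conditioned on survival past the current virtual age.
    return eta * ((age / eta) ** beta - math.log(rng.random())) ** (1.0 / beta)

def lifetime(policy, eta, beta, rng):
    """Total system lifetime when the repairs in `policy` ('m' = minimal,
    'p' = perfect) are applied at successive failures; the first failure
    after the repair budget is exhausted is fatal."""
    elapsed = 0.0                        # time before the last perfect repair
    a = first_failure(eta, beta, rng)    # virtual age at the pending failure
    for action in policy:
        if action == 'p':                # good-as-new: bank time, reset age
            elapsed += a
            a = first_failure(eta, beta, rng)
        else:                            # minimal: keep the virtual age
            a = failure_after_minimal(a, eta, beta, rng)
    return elapsed + a

rng = random.Random(42)
eta, beta, n = 1.0, 2.0, 20_000          # increasing failure rate (beta > 1)
for policy in ("ppmmm", "mmmpp", "mpmpm"):
    mean = sum(lifetime(policy, eta, beta, rng) for _ in range(n)) / n
    print(policy, round(mean, 3))
```

With 2 perfect & 3 minimal repairs, the ordering changes the mean lifetime, which is exactly the allocation question the first model answers.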
  • An examination of correlation effects among warranty claims

    Publication Year: 2004 , Page(s): 200 - 204
    Cited by:  Papers (1)

    This paper models & examines the impact of correlation in a failure arrival process which generates a warranty claim with each occurring failure. We show that even slight positive correlation in the warranty claims arrival process can significantly impact the expected number of claims observed, and the length of the transient period. We are not suggesting that renewal processes are necessarily inappropriate or incorrect as models for the arriving warranty claims process. Rather, we emphasize the importance of considering correlation in the claims arrival process, and suggest that more research is needed to explore mathematical models which explicitly consider correlation in the warranty analysis framework.

  • Bounds on the reliability of distributed systems with unreliable nodes & links

    Publication Year: 2004 , Page(s): 205 - 215
    Cited by:  Papers (2)

    The reliability of distributed systems & computer networks, in which computing nodes and/or communication links may fail with certain probabilities, has been modeled by a probabilistic network. Computing the residual connectedness reliability (RCR) of probabilistic networks under the fault model with both node & link faults is very useful, but is an NP-hard problem. Up to now, little research has been done under this fault model; there are neither exact solutions nor heuristic algorithms for computing the RCR. In our recent research, we tackled this problem, and found efficient algorithms for upper & lower bounds on the RCR. We also demonstrated that the difference between our upper & lower bounds gradually tends to zero for large networks, and is very close to zero for small networks. These results were used in our dependable distributed-system project to find a near-optimal subset of nodes to host the replicas of a critical task.

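Although the paper develops analytic bounds, the RCR under joint node & link faults is straightforward to estimate for a small network by Monte Carlo, which is useful for sanity-checking any bound. A sketch under one common definition of residual connectedness (all surviving nodes mutually connected; vacuously reliable if at most one node survives), on a hypothetical 4-node ring:

```python
import random

def rcr_mc(nodes, edges, p_node, p_edge, trials, seed=1):
    """Monte Carlo estimate of residual connectedness reliability: the
    probability that the surviving nodes form one connected component
    through surviving links."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        up = {v for v in nodes if rng.random() < p_node}
        if len(up) <= 1:
            ok += 1
            continue
        adj = {v: [] for v in up}
        for u, v in edges:
            if u in up and v in up and rng.random() < p_edge:
                adj[u].append(v)
                adj[v].append(u)
        # DFS from an arbitrary surviving node
        start = next(iter(up))
        seen, stack = {start}, [start]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        ok += (seen == up)
    return ok / trials

# hypothetical 4-node ring with node & link survival probabilities 0.9
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(rcr_mc(nodes, edges, 0.9, 0.9, 50_000))
```

For large networks the sampling cost grows, which is where analytic bounds such as the paper's become necessary.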
  • On achieving optimal survivable routing for shared protection in survivable next-generation Internet

    Publication Year: 2004 , Page(s): 216 - 225
    Cited by:  Papers (45)

    This paper proposes a suite of approaches to the survivable routing problem with shared protection. We first formally define the maximum extent of resource sharing for a protection path, given the corresponding working path, according to the current network link-state. The problem of finding the least-cost working & protection path-pair (in terms of the sum of their costs) is then formulated as an Integer Linear Program. Due to the dependency of the protection path on its working path, however, the formulation does not scale with the network size, and requires extra effort to solve. We therefore introduce two heuristic algorithms, called the Iterative Two-Step-Approach (ITSA) & Maximum Likelihood Relaxation (MLR), which aim to find near-optimal solutions with less computation time. We evaluate the performance of the proposed schemes, and compare them with some reported counterparts. The simulation results show that the ITSA scheme, with a properly defined tolerance to optimality, achieves the best performance at the expense of more computation time; MLR, on the other hand, delivers a compromise between computational efficiency & performance.

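A single pass of a plain two-step heuristic can be sketched as follows: route the working path, then route a link-disjoint protection path on the residual graph. This is only the skeleton the ITSA family iterates on; the paper's algorithm additionally discounts links whose protection capacity can be shared, which is omitted here.

```python
import heapq

def dijkstra(adj, s, t):
    """Least-cost path s -> t in a graph given as {node: [(nbr, cost), ...]}."""
    dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(pq, (d + c, v))
    if t not in dist:
        return None
    path = [t]
    while path[-1] != s:
        path.append(prev[path[-1]])
    return path[::-1]

def two_step(adj, s, t):
    """Working path first, then a link-disjoint protection path on the
    residual graph (working-path links removed)."""
    work = dijkstra(adj, s, t)
    used = {frozenset(e) for e in zip(work, work[1:])}
    residual = {u: [(v, c) for v, c in nbrs if frozenset((u, v)) not in used]
                for u, nbrs in adj.items()}
    return work, dijkstra(residual, s, t)

# hypothetical 4-node ring: two link-disjoint routes exist between 0 and 2
adj = {0: [(1, 1.0), (3, 1.0)], 1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (3, 1.0)], 3: [(0, 1.0), (2, 1.0)]}
work, prot = two_step(adj, 0, 2)
print(work, prot)
```

Note that a greedy two-step pass can fail on "trap" topologies where the cheapest working path blocks every protection path; this is one reason an iterative scheme with a tolerance to optimality is needed.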
  • Reliability governed by the relative locations of random variables following a homogeneous Poisson Process in a finite domain

    Publication Year: 2004 , Page(s): 226 - 237
    Cited by:  Papers (1)

    A powerful equation is derived for determining the probability of safe/failure states dependent on random variables following a homogeneous Poisson process in a finite domain. The equation is generic, & gives the probability of all types of relative configurations of the random variables governing reliability. The significance of the derived equation stems from the fact that the reliability problems solved are not restricted to one-dimensional problems, or to a simple function of the relative distances between the locations of the random variables. Many intractable reliability problems can be solved easily using the derived equation, which reduces complex reliability problems to problems with trivial solutions. The power of the equation is illustrated by the derivation of a number of important special cases, and numerous applications: i) determining the probability of existence of a set of minimum intervals between the locations of random variables in a finite interval; ii) determining the number-density envelope of the random variables which prevents clustering within a critical distance; and iii) a closed-form relationship for determining reliability during shock loading. The new equation is the basis of a new reliability measure which consists of a combination of a set of specified minimum free gaps before/between random variables in a finite interval, and a minimum specified probability with which they must exist. The new reliability measure is at the heart of a technology for setting quantitative reliability requirements based on minimum failure-free operating periods (MFFOP). The equation is applied to cases where failure is triggered by the clustering of two or more demands, forces, or manufacturing/material flaws following a homogeneous Poisson process in a specified time interval/length. A number of important applications related to the conditional case (a homogeneous Poisson process conditioned on the number of random variables in a finite interval) have also been considered. Solutions are provided for common problems related to: i) collisions of demands from a given number of customers using a particular piece of equipment for a specified time; ii) overloading of supply systems by a given number of consumers connecting independently & randomly; and iii) analysis of random failures following a homogeneous Poisson process where only the number of failures is known, but not the actual failure times. It is demonstrated that even for a small number of random variables in a finite interval, the probability of clustering of two or more random variables is surprisingly high, and should always be accounted for in risk assessments.

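The "surprisingly high" clustering probability is easy to check in the conditional case (a fixed number of points placed uniformly & independently on an interval): the probability that no two of n points on [0, L] fall within d of each other is (1 - (n-1)d/L)^n. A sketch with hypothetical numbers, cross-checked by Monte Carlo:

```python
import random

def p_no_cluster(n, L, d):
    """Exact probability that all successive gaps between n uniform points
    on [0, L] are at least d: (1 - (n-1)d/L)**n, valid when (n-1)d <= L
    (otherwise the probability is 0)."""
    slack = 1.0 - (n - 1) * d / L
    return max(slack, 0.0) ** n

def p_no_cluster_mc(n, L, d, trials=200_000, seed=7):
    # Direct simulation: sort the points and check the minimum gap.
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        xs = sorted(rng.uniform(0, L) for _ in range(n))
        if all(b - a >= d for a, b in zip(xs, xs[1:])):
            ok += 1
    return ok / trials

n, L, d = 5, 1.0, 0.05
print("P(clustering) exact:", 1 - p_no_cluster(n, L, d))   # ~0.672
print("P(clustering) MC:   ", 1 - p_no_cluster_mc(n, L, d))
```

Even with only 5 points and a critical distance of 5% of the interval, clustering occurs about two-thirds of the time, which illustrates why the abstract insists the effect be accounted for in risk assessments.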
  • Exact analysis of a class of GI/G/1-type performability models

    Publication Year: 2004 , Page(s): 238 - 249
    Cited by:  Papers (3)

    We present an exact decomposition algorithm for the analysis of Markov chains with a GI/G/1-type repetitive structure. Such processes exhibit both M/G/1-type & GI/M/1-type patterns, and cannot be solved using existing techniques. Markov chains with a GI/G/1 pattern result when modeling open systems which accept jobs from multiple exogenous sources, and are subject to failures & repairs; a single failure can empty the system of jobs, while a single batch arrival can add many jobs to the system. Our method provides exact computation of the stationary probabilities, which can then be used to obtain performance measures such as the average queue length or any of its higher moments, as well as the probability of the system being in various failure states, and thus performability measures. We formulate the conditions under which our approach is applicable, and illustrate it via the performability analysis of a parallel computer system.

  • Modeling inefficiencies in a reliability system using stochastic frontier regression

    Publication Year: 2004 , Page(s): 250 - 254

    For some reliability systems, the observed system reliability can be smaller than the reliability implied by the configuration of the components. This may be due to inefficiency of the system. By inefficiency, we mean any tendency or attribute that degrades the performance of the system below the level its configuration is designed and expected to provide. The configuration thus sets a maximum limit (or frontier) for the performance of the system, and deviation of the observed level from this limit is an indicator of the inefficiency. In this paper, we model inefficiencies in the working of a reliability system, and define an inefficiency index. The paper discusses the practical estimation of the coefficient of inefficiency in the system performance, using stochastic frontier regression methods. The validity of the methodology has been assessed for an exponential model, using a limited simulation study. The inefficiency indices proposed in this paper are simple, as they must be to be useful to engineers, and we found that the suggested indices & their estimation procedures work well.

  • Discrete Rayleigh distribution

    Publication Year: 2004 , Page(s): 255 - 260
    Cited by:  Papers (10)

    Using a general approach for discretization of continuous life distributions in the univariate & bivariate situations, we propose a discrete Rayleigh distribution. This distribution is examined in detail with respect to two measures of failure rate. Characterization results are also studied to establish a direct link between the discrete Rayleigh distribution and its continuous counterpart. This discretization approach not only expands the scope of reliability modeling, but also provides a method for approximating probability integrals arising in a continuous setting. As an example, the reliability value of a complex system is approximated. This discrete approximation in a nonnormal setting can be of practical use & importance, as it can replace the much-relied-upon simulation method: the replication required is minimal, while the degree of accuracy remains reasonable compared with simulation.

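One common way to discretize the Rayleigh survival function S(x) = q^(x²) at the integers (presumably the construction meant here; the paper should be consulted for the exact definition) yields the pmf P(X = k) = q^(k²) - q^((k+1)²). A minimal sketch:

```python
def drayleigh_pmf(k, q):
    """pmf of a discrete Rayleigh distribution obtained by evaluating the
    continuous survival function S(x) = q**(x*x) at the integers:
    P(X = k) = S(k) - S(k+1)."""
    return q ** (k * k) - q ** ((k + 1) ** 2)

q = 0.8
pmf = [drayleigh_pmf(k, q) for k in range(50)]
print(sum(pmf))    # ~1 (the tail beyond k = 50 is negligible for q = 0.8)
print(pmf[:4])
```

Because the pmf telescopes, the partial sum up to k = m-1 is exactly 1 - q^(m²), so tail probabilities (and hence reliability-style integrals) are available in closed form rather than by simulation.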
  • Alternative time scales for systems with random usage

    Publication Year: 2004 , Page(s): 261 - 264
    Cited by:  Papers (2)

    In various reliability applications, there can be different time scales in which to measure time to failure, or assess the performance of objects. For automobiles, for instance, chronological age & the total number of driving hours are both good candidates for the time scale as measures of usage. The impact of random usage on some properties of time-to-failure distribution functions is studied. It is shown that random usage can change the shape of the failure rate of an object compared with the shape of the failure rate under deterministic usage; specifically, a sharply increasing failure rate can turn into a decreasing one, which is rather surprising. Random usage can also change the aging properties of the distributions under consideration, and this should be taken into account in applications. Several simple examples illustrate the developed concept.

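The qualitative effect described here (an increasing failure rate turning into a decreasing one under random usage) can be reproduced with a toy two-point usage mixture: heavy and light users each have a linearly increasing Weibull hazard, but as the heavy users die out, the population hazard falls. This is an illustration of mixture behavior under assumed parameters, not the paper's model.

```python
import math

def mixture_hazard(t, p, eta1, eta2, beta=2.0):
    """Hazard of a mixture of two Weibull(beta) lifetimes with scales
    eta1, eta2 and weights p, 1-p: a toy model of random usage, where
    each unit's usage rate (hence its scale) is random."""
    def S(t, eta):  # survival function
        return math.exp(-((t / eta) ** beta))
    def f(t, eta):  # density
        return (beta / eta) * (t / eta) ** (beta - 1) * S(t, eta)
    num = p * f(t, eta1) + (1 - p) * f(t, eta2)
    den = p * S(t, eta1) + (1 - p) * S(t, eta2)
    return num / den

# Each subpopulation has a linearly increasing hazard (beta = 2), yet the
# mixture's hazard rises and then falls as the heavy users die out:
for t in (0.5, 1.0, 3.0):
    print(t, mixture_hazard(t, 0.5, 1.0, 100.0))
```

The same selection effect underlies the abstract's point: the observed failure rate in calendar time reflects both aging and the shifting composition of the surviving population.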
  • On the exponential formula for reliability

    Publication Year: 2004 , Page(s): 265 - 268
    Cited by:  Papers (1)

    It is usually assumed that the exponential formula commonly used in reliability & survival analysis holds when conditioning on "smooth" external covariates, and is not valid for internal covariates. Using an example of external shocks affecting an item, it is shown that, though the influence is formally not smooth in this case, the corresponding exponential formula still holds. On the other hand, internal covariates do not necessarily lead to a not-absolutely-continuous conditional Cdf. If the internal covariate process specifies full information on the item's failure process, then the corresponding conditional Cdf is not absolutely continuous. If the observed internal covariate does not provide complete information on the item's state, then the corresponding conditional Cdf can still be absolutely continuous, and the exponential formula holds.

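The exponential formula in question is R(t) = exp(-integral from 0 to t of lambda(u) du). It can be checked numerically against a closed-form case; a minimal sketch with a Weibull hazard (an illustration of the formula itself, not of the paper's covariate setting):

```python
import math

def reliability(t, hazard, steps=100_000):
    """Exponential formula R(t) = exp(-integral_0^t hazard(u) du),
    with the integral evaluated by the midpoint rule."""
    h = t / steps
    integral = sum(hazard((i + 0.5) * h) for i in range(steps)) * h
    return math.exp(-integral)

# Weibull hazard lambda(u) = (beta/eta) * (u/eta)**(beta-1), whose
# cumulative hazard integrates to (t/eta)**beta in closed form.
beta, eta = 2.0, 3.0
lam = lambda u: (beta / eta) * (u / eta) ** (beta - 1)
t = 2.0
print(reliability(t, lam))                  # numeric
print(math.exp(-((t / eta) ** beta)))       # closed form
```

The point of the abstract is when this identity holds under conditioning; the computation above only fixes the unconditional baseline it refers to.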
  • Enhancing reliability of RTL controller-datapath circuits via Invariant-based concurrent test

    Publication Year: 2004 , Page(s): 269 - 278
    Cited by:  Papers (7)

    We present a low-cost concurrent test methodology for enhancing the reliability of RTL controller-datapath circuits, based on the notion of path invariance. The fundamental observation supporting the proposed methodology is that the inherent transparency behavior of RTL components, typically utilized for hierarchical off-line test, yields rich sources of invariance within a circuit. Additional sources of invariance are obtained by examining the algorithmic interaction between the controller & the datapath of the circuit. A judicious selection & combination of modular transparency functions, based on the algorithm implemented by the controller-datapath pair, yields a powerful set of invariant paths in a design. Compliance with the invariant behavior is checked whenever the latter is activated. Such paths thus enable a simple, yet very efficient, concurrent test capability, achieving fault security in excess of 90% while keeping the hardware overhead below 40% on complicated, difficult-to-test, sequential benchmark circuits. By exploiting fine-grained design invariance, the proposed methodology enhances circuit reliability, and contributes a low-cost concurrent test direction applicable to general RTL circuits.

  • Contact discontinuity modeling of electromechanical switches

    Publication Year: 2004 , Page(s): 279 - 283
    Cited by:  Papers (2)

    This paper discusses contact discontinuity in electromechanical switches, and presents a model that considers the effect of vibration-induced inertial force on operational reliability. Using this model, the operational reliability of a switch with a specific contact assembly can be assessed for given vibration conditions. Under vibration, a minimum contact spring force is necessary for proper functioning of the switch, while the magnitude of contact uncertainty determines the need for an additional force to ensure operational reliability. With reduced contact uncertainty, the operational reliability of switches approaches either 0 or 1, depending solely upon the design & operational conditions. The model parameters c', b, & σ/μ must be determined either empirically or experimentally before the model can be used for reliability assessment; to determine them theoretically, Hertz contact with randomly distributed surface asperities needs to be considered. Finally, although the model starts from a log-normal distribution of electrical contact, the approach can be applied to other distributions, such as the inverse Gaussian and Weibull distributions.

  • Optimal test planning for the fatigue-limit model when the fatigue-limit distribution is known

    Publication Year: 2004 , Page(s): 284 - 292

    This article discusses tools for planning life tests under the random fatigue-limit model, when the form of the fatigue-limit distribution is completely specified. Expressions for the elements of the Fisher information matrix are provided, and test-plan objectives, such as minimizing the variance of a quantile or parameter estimator, can be written in terms of these expressions. Under Type I censoring, all experimental conditions can be described by a standardized slope parameter & a standardized log(censoring time). The article provides equivalence theorems which can be used to check the optimality of test plans, which often have two levels. The best three-level standard test plans are not optimal but, in general, involve less extrapolation to low stresses than the optimal plans. Test plans for a titanium-alloy fatigue test are presented to illustrate the methods.

  • Strain energy imaging of a power MOS transistor using speckle interferometry

    Publication Year: 2004 , Page(s): 293 - 296

    Mechanical characterization of electronic devices is often difficult; most of the techniques used require contact with the sample under study. In this paper, we propose an optical, noncontact, interferometric imaging method to study the thermomechanical behavior of running devices and, in particular, to deduce the corresponding elastic strain energy. This analysis makes it possible to localize the fragile zones of the device. Results obtained on a power MOS transistor, detecting the region of maximum elastic strain energy, are presented. The method is particularly well suited to microelectronics applications for detecting stress accumulation due to mismatched dilation coefficients when assembling microchips.


Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong