
IEEE Transactions on Reliability

Issue 3 • September 2006

  • Table of contents

    Publication Year: 2006 , Page(s): c1
    Freely Available from IEEE
  • IEEE Transactions on Reliability publication information

    Publication Year: 2006 , Page(s): c2
    Freely Available from IEEE
  • Safety Assessment of Fuel Rods via Generalized Bernoulli Chains

    Publication Year: 2006 , Page(s): 393 - 396

    The probabilistic safety of N fuel rods assembled in one core of a nuclear reactor is commonly modelled by the sum of N independent Bernoulli random variables, taking the values 1 or 0, with individual safety probability p_i that the i-th rod shows no failure during one cycle, coded by 1. The requirement set by the German Reaktor-Sicherheitskommission (Reactor Safety Commission) demands that the expected number of unfailed rods in the core within one cycle is at least N-1, whereby a confidence level of 0.95 for the verification of this condition is demanded. There is an ongoing debate that this requirement, based on an expected value, might be a misleading probabilistic safety measure because it does not take into account the accumulated safety probabilities that at least x fuel rods show no failure during one cycle. In this paper we establish a bound for the accumulated safety probability under this safety condition, which implies that with probability greater than 0.98 at least N-3 fuel rods show no failure during one cycle.
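
    A minimal sketch of the quantity discussed above: for independent Bernoulli rods, the probability that at least N-3 rods survive a cycle is a tail of the Poisson-binomial distribution and can be computed exactly by convolution. The parameters below (N = 1000, equal p_i chosen so that the expected number of unfailed rods is exactly N-1) are hypothetical and only illustrate the order of magnitude of this tail probability; they do not reproduce the paper's derivation.

    import numpy as np

    def poisson_binomial_pmf(p):
        """Exact PMF of the number of successes among independent Bernoulli(p_i) trials,
        computed by iterative convolution (dynamic programming)."""
        pmf = np.array([1.0])
        for pi in p:
            pmf = np.convolve(pmf, [1.0 - pi, pi])
        return pmf  # pmf[k] = P(exactly k successes)

    # Hypothetical example: equal safety probabilities with E[#unfailed] = N - 1 exactly.
    N = 1000
    p = np.full(N, 1.0 - 1.0 / N)
    pmf = poisson_binomial_pmf(p)
    prob = pmf[N - 3:].sum()          # P(at least N - 3 rods show no failure)
    print(f"P(at least N-3 unfailed) = {prob:.4f}")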

  • A New Formalism for Designing and Specifying RAMS Parameters for Complex Distributed Control Systems: The Safe-SADT Formalism

    Publication Year: 2006 , Page(s): 397 - 410
    Cited by:  Papers (5)

    Dependability evaluation is a fundamental step in distributed control system design. However, the current dependability evaluation methods are not appropriate due to the level of complexity of such systems. Given the ineffectiveness of these methods, we propose the Safe-SADT formalism for dependability evaluation (SADT stands for Structured Analysis and Design Technique). This formalism allows the explicit formalization of functional interaction, the identification of the characteristic values affecting complex system dependability, the quantification of RAMS parameters (Reliability, Availability, Maintainability, and Safety) for the system's operational architecture, and the validation of the operational architecture in terms of the dependability objectives and constraints required by the functional specifications. The results presented in this paper are limited to RAMS quantification.

  • Analysis of Software Aging in a Web Server

    Publication Year: 2006 , Page(s): 411 - 420
    Cited by:  Papers (72)

    Several recent studies have reported & examined the phenomenon that long-running software systems show an increasing failure rate and/or a progressive degradation of their performance. Causes of this phenomenon, which has been referred to as "software aging", are the accumulation of internal error conditions, and the depletion of operating system resources. A proactive technique called "software rejuvenation" has been proposed as a way to counteract software aging. It involves occasionally terminating the software application, cleaning its internal state and/or its environment, and then restarting it. Due to the costs incurred by software rejuvenation, an important question is when to schedule this action. While periodic rejuvenation at constant time intervals is straightforward to implement, it may not yield the best results because the rate at which software ages is usually not constant, but depends on the time-varying system workload. Software rejuvenation should therefore be planned & initiated based on the actual system behavior, which requires the measurement, analysis, and prediction of system resource usage. In this paper, we study the development of resource usage in a web server while subjecting it to an artificial workload. We first collect data on several system resource usage & activity parameters. Non-parametric statistical methods are then applied toward detecting & estimating trends in the data sets. Finally, we fit time series models to the data collected. Unlike the models used previously in research on software aging, these time series models allow for seasonal patterns, and we show how exploiting the seasonal variation can help in adequately predicting future resource usage. Based on the models employed here, proactive management techniques like software rejuvenation triggered by actual measurements can be built.
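
    The abstract's point that seasonal structure improves prediction can be illustrated with a deliberately simple decomposition (this is not the authors' time series models): fit a linear trend to a resource-usage series, estimate a per-hour seasonal profile from the residuals, and extrapolate both. The series below is synthetic and all parameters are hypothetical.

    import numpy as np

    # Synthetic hourly "free memory" series: downward trend (aging) + daily seasonality + noise.
    rng = np.random.default_rng(0)
    hours = np.arange(24 * 14)                       # two whole weeks of hourly samples
    trend = 1000.0 - 0.5 * hours                     # slow resource depletion
    season = 50.0 * np.sin(2 * np.pi * hours / 24)   # daily workload cycle
    y = trend + season + rng.normal(0, 10, hours.size)

    period = 24
    t = hours.astype(float)

    # 1) Estimate the linear trend by least squares (slope ~ aging/depletion rate).
    slope, intercept = np.polyfit(t, y, 1)
    detrended = y - (slope * t + intercept)

    # 2) Seasonal profile = mean of the detrended series per hour-of-day
    #    (the series length is a whole number of periods, so indices align with hour-of-day).
    seasonal_profile = np.array([detrended[np.arange(h, y.size, period)].mean()
                                 for h in range(period)])

    # 3) Forecast the next day: trend extrapolation + seasonal component.
    future = np.arange(y.size, y.size + period)
    forecast = slope * future + intercept + seasonal_profile
    print("estimated depletion rate per hour:", round(slope, 3))
    print("first three forecast values:", np.round(forecast[:3], 1))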

  • Generic Fault Tolerant Software Architecture Reasoning and Customization

    Publication Year: 2006 , Page(s): 421 - 435
    Cited by:  Papers (3)

    This paper proposes a novel heterogeneous software architecture, GFTSA (Generic Fault Tolerant Software Architecture), which can guide the development of safety critical distributed systems. GFTSA incorporates the idealized fault tolerant component concept and a coordinated error recovery mechanism in the early system design phase. It can be reused in the high level model design of specific safety critical distributed systems with reliability requirements. To provide precise common idioms & patterns for system designers, the formal language Object-Z is used to specify GFTSA. Formal proofs based on Object-Z reasoning rules are constructed to demonstrate that the proposed GFTSA model preserves significant fault tolerant properties. The inheritance & instantiation mechanisms of Object-Z contribute to the customization of the GFTSA formal model. By analyzing the customization process, we also present a template of GFTSA, expressed in x-frames using the XVCL (XML-based Variant Configuration Language) methodology, to make the customization process more direct & automatic. We use an LDAS (Line Direction Agreement System) case study to illustrate that GFTSA can guide the development of specific safety critical distributed systems.

  • Software Reliability Analysis by Considering Fault Dependency and Debugging Time Lag

    Publication Year: 2006 , Page(s): 436 - 450
    Cited by:  Papers (34)  |  Patents (1)

    Over the past 30 years, many software reliability growth models (SRGM) have been proposed. Often, it is assumed that detected faults are immediately corrected when the mathematical models are developed. This assumption may not be realistic in practice because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique(s) being used, and so on. During software testing, practical experience shows that mutually independent faults can be directly detected and removed, but mutually dependent faults can be removed only if the leading faults have been removed. That is, dependent faults may not be immediately removed, and the fault removal process lags behind the fault detection process. In this paper, we first give a review of fault detection & correction processes in software reliability modeling. We then illustrate, with several examples, the fact that detected faults cannot be immediately corrected. We also discuss software fault dependency in detail, and study how to incorporate both fault dependency and debugging time lag into software reliability modeling. The proposed models are fairly general, and cover a variety of known SRGM under different conditions. Numerical examples are presented, and the results show that the proposed framework incorporating both fault dependency and debugging time lag has better prediction capability. In addition, an optimal software release policy for the proposed models, based on a cost-reliability criterion, is proposed. The main purpose is to minimize the cost of software development when a desired reliability objective is given.
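
    To make the detection-versus-correction lag concrete, the sketch below contrasts two classical NHPP mean value functions rather than the models proposed in the paper: the Goel-Okumoto curve for cumulative detected faults, and a delayed S-shaped curve, which is one textbook way of modelling a correction process that trails detection by an exponentially distributed debugging delay. The parameters a and b are hypothetical.

    import numpy as np

    a, b = 100.0, 0.05   # hypothetical total fault content and detection rate

    def m_detected(t):
        # Goel-Okumoto NHPP mean value function: expected faults *detected* by time t.
        return a * (1.0 - np.exp(-b * t))

    def m_corrected(t):
        # Delayed S-shaped form: one classical way to model a correction process that
        # lags detection (exponentially distributed debugging delay with rate b).
        return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

    for t in (20, 50, 100):
        print(f"t={t:3d}  detected={m_detected(t):6.1f}  corrected={m_corrected(t):6.1f}  "
              f"backlog={m_detected(t) - m_corrected(t):5.1f}")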

  • Evaluation of Software-Implemented Fault-Tolerance (SIFT) Approach in Gracefully Degradable Multi-Computer Systems

    Publication Year: 2006 , Page(s): 451 - 457
    Cited by:  Papers (3)

    This paper presents an analytical method for evaluating the reliability improvement of any size of multi-computer system based on Software-Implemented Fault-Tolerance (SIFT). The method is based on the equivalent failure rate Gamma, the single node failure rate lambda, the number of nodes in the system N, the repair rate mu, the fault coverage factor c, the reconfiguration rate delta, and the percentages of blocking faults b1 and b2. The impact of these parameters on the reliability improvement has been evaluated for a gracefully degradable multi-computer system using our proposed analytical technique based on Markov chains. To validate our approach, we used the SIFT method, which implements error detection at the node level, combined with a fast reconfiguration algorithm for avoiding faulty nodes. It is worth noting that the proposed method is applicable to any multi-computer system topology. The evaluation work presented in this paper focuses on the combination of analytical and experimental approaches, and more precisely on Markov chains. The SIFT method has been successfully implemented for a multi-computer system, the nCube. The time overhead (reconfiguration & recomputation time) incurred by the injected faults, and the fault coverage factor c, are experimentally evaluated by means of a parallel version of the Software Object-Oriented Fault-Injection Tool (nSOFIT). The implemented SIFT approach can be used for real-time applications, where time constraints should be met despite failures in the gracefully degradable multi-computer system.
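
    The flavor of such a Markov-chain evaluation can be sketched with a much-reduced model (not the paper's: no repair, and only a single coverage parameter). States track the number of working nodes; a covered node failure degrades the system gracefully, an uncovered one is fatal, and reliability at a mission time follows from the matrix exponential of the generator. All numbers are hypothetical, not measured nCube values.

    import numpy as np
    from scipy.linalg import expm

    N, lam, c, min_ok = 8, 1e-4, 0.95, 4    # nodes, per-node failure rate (1/h), coverage, minimum working nodes

    # State i (i = 0..N-min_ok) has N-i working nodes; the last state is "system failed".
    n_up = N - min_ok + 1
    FAIL = n_up
    Q = np.zeros((n_up + 1, n_up + 1))
    for i in range(n_up):
        k = N - i                           # working nodes in this state
        rate = k * lam
        if i < n_up - 1:
            Q[i, i + 1] = c * rate          # covered failure: graceful degradation to k-1 nodes
            Q[i, FAIL] = (1 - c) * rate     # uncovered (blocking) failure: system loss
        else:
            Q[i, FAIL] = rate               # at the minimum configuration, any node failure is fatal
        Q[i, i] = -rate

    t = 1000.0                              # mission time (h)
    p0 = np.zeros(n_up + 1)
    p0[0] = 1.0                             # start with all N nodes working
    pt = p0 @ expm(Q * t)                   # transient state probabilities at time t
    print(f"Reliability at t = {t:.0f} h: {1.0 - pt[FAIL]:.6f}")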

  • A New Methodology for Predicting Software Reliability in the Random Field Environments

    Publication Year: 2006 , Page(s): 458 - 468
    Cited by:  Papers (17)

    This paper presents a new methodology for predicting software reliability in the field environment. Our work differs from existing models that assume a constant failure detection rate for both software testing and field operation, in that the new methodology considers the random environmental effects on software reliability. Assuming that all the random effects of the field environments can be captured by a unit-free environmental factor, eta, which is modeled as a randomly distributed variable, we establish a generalized random field environment (RFE) software reliability model that covers both the testing phase and the operating phase of the software development cycle. Based on the generalized RFE model, two specific random field environment reliability models are proposed for predicting software reliability in the field: the gamma-RFE model, and the beta-RFE model. A set of software failure data from a telecommunication software application is used to illustrate the proposed models, both of which provide very good fits to the software failures in both the testing and operation environments. This new methodology provides a viable way to model the user environments, and to further adjust the reliability prediction for similar software products. Based on the generalized software reliability model, further work may include the development of software cost models and optimum software release policies under random field environments.
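
    A hedged sketch of the underlying idea, not the paper's gamma-RFE model itself: if the field environment scales the cumulative failure intensity by a gamma-distributed factor eta, the field reliability at time t is the expectation of exp(-eta * Lambda(t)), which can be checked against the gamma Laplace transform. Parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    alpha, beta = 2.0, 2.0            # hypothetical gamma parameters for eta (mean = alpha/beta = 1)
    Lam_t = 0.3                       # baseline cumulative failure intensity accumulated by time t

    eta = rng.gamma(shape=alpha, scale=1.0 / beta, size=200_000)
    R_mc = np.exp(-eta * Lam_t).mean()                  # Monte Carlo field reliability at t
    R_cf = (beta / (beta + Lam_t)) ** alpha             # closed form via the gamma Laplace transform
    print(f"Monte Carlo: {R_mc:.4f}   closed form: {R_cf:.4f}")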

  • Bounds on MTBF of Systems Subjected to Periodic Maintenance

    Publication Year: 2006 , Page(s): 469 - 474
    Cited by:  Papers (5)

    Mean time between failures (MTBF) is a common reliability measure used to assess the failure behavior of repairable systems. In order to increase MTBF, it is common practice in most systems to perform preventive maintenance activities at periodic intervals. In this paper, we first discuss the validity of a commonly used equation for computing the MTBF of systems subjected to periodic maintenance. For complex systems where this equation is valid, we propose a simple and better approximation than the exponential approximation proposed in a recent paper. In addition, we prove that for systems with increasing failure rate on average (IFRA) distributions, the exponential approximation proposed in that paper always underestimates the MTBF; hence, it is a lower bound at best. The proposed approximation and bounds are applicable to a wide range of systems, because systems which contain components with exponential or any increasing failure rate (IFR) distribution (viz., Weibull with beta>1, gamma, Gumbel, s-normal, and uniform) follow an IFRA distribution. As a special case, the proposed bounds & approximations provide better results for systems that contain only exponential failure distributions.
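
    For orientation only: one expression commonly quoted for the MTBF of a system restored to as-good-as-new by perfect preventive maintenance every T time units is MTBF = (integral of R(t) from 0 to T) / (1 - R(T)). Whether this is exactly the equation examined in the paper is an assumption here; the sketch below simply evaluates it numerically for a hypothetical Weibull system with increasing failure rate.

    import numpy as np
    from scipy.integrate import quad
    from math import gamma as gamma_fn

    # Hypothetical Weibull system (shape > 1, i.e. IFR) under perfect periodic maintenance.
    beta, eta, T = 2.0, 1000.0, 200.0            # Weibull shape, scale (h), PM interval (h)
    R = lambda t: np.exp(-(t / eta) ** beta)     # survival function

    num, _ = quad(R, 0.0, T)
    mtbf_pm = num / (1.0 - R(T))                 # MTBF under periodic maintenance (expression above)
    mttf = eta * gamma_fn(1.0 + 1.0 / beta)      # mean life with no maintenance, for comparison
    print(f"MTBF with PM every {T:.0f} h: {mtbf_pm:.0f} h   (unmaintained MTTF: {mttf:.0f} h)")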

  • A New Three-Parameter Extension to the Birnbaum-Saunders Distribution

    Publication Year: 2006 , Page(s): 475 - 479
    Cited by:  Papers (11)

    The Birnbaum-Saunders (B-S) distribution was derived in 1969 as a lifetime model for a specimen subjected to cyclic patterns of stresses and strains, where the ultimate failure of the specimen is assumed to be due to the growth of a dominant crack in the material. The derivation of this model is revisited; and because the assumption of independence of crack extensions from cycle to cycle can be quite unrealistic, a new model is derived by relaxing this independence assumption. Here, the sequence of crack extensions is modeled as a long memory process, and this development introduces a new, third parameter. The model is investigated in detail, and interestingly the original B-S distribution is included as a special case. Inference procedures are also discussed, and an example dataset is used for model comparison.
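
    For reference, the original two-parameter B-S lifetime distribution (the special case mentioned above) has cumulative distribution function

        F(t; \alpha, \beta) = \Phi\!\left( \frac{1}{\alpha} \left[ \sqrt{t/\beta} - \sqrt{\beta/t} \right] \right), \qquad t > 0,

    with shape parameter \alpha > 0, scale parameter \beta > 0, and \Phi the standard normal CDF. The paper's third, long-memory parameter is not reproduced here.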

  • Customer-Rush Near Warranty Expiration Limit, and Nonparametric Hazard Rate Estimation From Known Mileage Accumulation Rates

    Publication Year: 2006 , Page(s): 480 - 489
    Cited by:  Papers (9)

    Time or mileage data obtained from warranty claims are generally more accurate for hard failures than for soft failures. For soft failures, automobile users sometimes delay reporting the warranty claim until the warranty coverage is about to expire. This results in an unusually high number of warranty claims near the end of warranty coverage. Because such a phenomenon of customer-rush near the warranty expiration limit occurs due to user behavior rather than vehicle design, it creates a bias in the warranty dataset. Design improvement activities that use field reliability studies based on such data can obtain a distorted picture of reality, and lead to unwarranted, costly design changes. Research in the area of field reliability studies using warranty data provides several methods for warranty claims resulting from hard failures, and assumes the reported time or mileage to be the actual time or mileage at failure. In this article, the phenomenon of customer-rush near the warranty expiration limit is addressed for arriving at nonparametric hazard rate estimates. The proposed methodology covers situations where estimates of mileage accumulation rates in the vehicle population are available. The claims influenced by soft failures are treated as left-censored, and are identified using information in technician comments about the repair carried out plus, if required, a more involved engineering analysis of field-returned parts. Maximum likelihood estimates for the hazard function and their confidence limits are then obtained using Turnbull's iterative procedure. An application example illustrates the use of the proposed methodology.

  • Comparison of Expected Failure Times for Several Replacement Policies

    Publication Year: 2006 , Page(s): 490 - 495
    Cited by:  Papers (3)

    Planned replacement policies are used to reduce the incidence of system failures, or to return a failed system to work. New-better-than-used aging classes are commonly used in reliability theory to model situations in which the lifetime of a new unit is "better" than the lifetime of a used one. The purpose of this paper is to compare the expected failure times of an age (block) replacement policy and a renewal process with no planned replacements when the lifetime of the unit is new better than used in expectation (NBUE). As we will see, age and block replacement policies improve the stochastic behavior compared with the renewal process with no planned replacements when the underlying distribution is NBUE. Some interpretations, applications, and a discussion of some related results are included.
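
    For reference, the standard definition of the NBUE class used in such comparisons: a nonnegative lifetime X with survival function \bar{F} and finite mean \mu = E[X] is NBUE if its mean residual life never exceeds the mean of a new unit,

        E[X - t \mid X > t] = \frac{1}{\bar{F}(t)} \int_t^{\infty} \bar{F}(u)\, du \;\le\; \mu \qquad \text{for all } t \ge 0 \text{ with } \bar{F}(t) > 0.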

  • On Optimal p -Cycle-Based Protection in WDM Optical Networks With Sparse-Partial Wavelength Conversion

    Publication Year: 2006 , Page(s): 496 - 506
    Cited by:  Papers (3)

    We study the optimal configuration of p-cycles in survivable wavelength division multiplexing (WDM) optical mesh networks with sparse-partial wavelength conversion, while 100% restorability is guaranteed against any single failure. We formulate the problem as two integer linear programs (Optimization Models I and II) which have the same constraints but different objective functions. p-cycles and wavelength converters are optimally determined subject to the constraint that only a given number of nodes have wavelength conversion capability, and that the maximum number of wavelength converters that can be placed at such nodes is limited. Optimization Model I has a composite sequential objective function that first (G1) minimizes the cost of link capacity used by all p-cycles in order to accommodate a set of traffic demands, and then (G2) minimizes the total number of wavelength converters used in the entire network. In Optimization Model II, the cost of one wavelength converter is measured as the cost of a deployed wavelength link with a length of alpha units, and the objective is to minimize the total cost of link capacity & wavelength converters required by the p-cycle configuration. During p-cycle configuration, our schemes fully take into account wavelength converter sharing, which reduces the number of converters required while attaining a satisfactory level of performance. Our simulation results indicate that the proposed schemes significantly outperform existing approaches in terms of protection cost, number of wavelength conversion sites, and number of wavelength converters needed.

  • Reliability and Performance of Star Topology Grid Service With Precedence Constraints on Subtask Execution

    Publication Year: 2006 , Page(s): 507 - 515
    Cited by:  Papers (18)

    The paper considers grid computing systems with star architectures in which the resource management system (RMS) divides service tasks into subtasks, and sends the subtasks to different specialized resources for execution. To provide the desired level of service reliability, the RMS can assign the same subtasks to several independent resources for parallel execution. Some subtasks cannot be executed until they have received input data, which can be the result of other subtasks. This imposes precedence constraints on the order of subtask execution. The service reliability & performance indices are introduced, and a fast numerical algorithm for their evaluation given any subtask distribution is suggested. Illustrative examples are presented.

  • A Study on the Design of Survivable Optical Virtual Private Networks (O-VPN)

    Publication Year: 2006 , Page(s): 516 - 524
    Cited by:  Papers (5)

    This paper tackles the resource allocation problem in wavelength division multiplexing (WDM) networks supporting optical virtual private networks (O-VPN), in which working & spare capacity is allocated in the network to satisfy a series of traffic matrices corresponding to a group of O-VPN. Based on the (M:N)n protection architecture, where multiple protection groups (PG) are supported in a single network domain, we propose two novel integer linear programming (ILP) models, namely ILP-I and ILP-II, aiming at a graceful compromise between capacity efficiency and computation complexity without losing the ability to address the quality of service (QoS) requirements of each O-VPN. ILP-I considers all the connection requests of each O-VPN in a single formulation, which may suffer from long computation time when the number of connection requests in an O-VPN is large. To trade capacity efficiency for computation complexity, ILP-II is developed such that each O-VPN can be further divided into multiple small PG based on specific grouping policies that satisfy multiple QoS requirements. With ILP-II, it is expected that all the working & spare capacity of the O-VPN can be allocated with polynomial time complexity, provided that the size of each PG is well constrained. Experimental results show that, in terms of capacity efficiency, a significant improvement can be achieved by ILP-I compared to ILP-II, at the expense of much more computation time. Although ILP-II is outperformed by ILP-I, it can handle an arbitrary size of O-VPN. We conclude that the proposed ILP-II model yields a scalable solution for capacity planning in survivable optical networks supporting O-VPN based on the (M:N)n protection architecture.

  • Reliability Modeling in Spatially Distributed Logistics Systems

    Publication Year: 2006 , Page(s): 525 - 534
    Cited by:  Papers (14)

    This article proposes methods for modeling service reliability in a supply chain. The logistics system in a supply chain typically consists of thousands of retail stores along with multiple distribution centers (DC). Products are transported between DC & stores through multiple routes. The service reliability depends on DC location layouts, distances from DC to stores, time requirements for product replenishing at stores, the DC's capability for supporting store demands, and the connectivity of transportation routes. Contingent events such as labor disputes, bad weather, road conditions, traffic situations, and even terrorist threats can have great impacts on a system's reliability. Given the large number of store locations & multiple combinations of routing schemes, this article applies an approximation technique for developing first-cut reliability analysis models. The approximation relies on multi-level spatial models to characterize patterns of store locations & demands. These models support several types of reliability evaluation of the logistics system under different probability scenarios & contingency situations. Examples with data taken from a large-scale logistics system of an automobile company illustrate the importance of studying supply-chain system reliability.

  • Some Aging Properties of the Residual Life of k -out-of- n Systems

    Publication Year: 2006 , Page(s): 535 - 541
    Cited by:  Papers (16)

    The k-out-of-n structure is a very popular type of redundancy in fault-tolerant systems, with applications in industrial and military systems. In this paper, we investigate the general residual life (GRL) of an (n-k+1)-out-of-n system with i.i.d. components, given that the total number of component failures is less than l-1 (1 ≤ l < k ≤ n) at time t ≥ 0. It is shown that the GRL is decreasing in l in terms of the likelihood ratio order. The IFR and NBU behavior of life distributions is discussed in terms of the monotonicity of the GRL. Finally, the GRL of two (n-k+1)-out-of-n systems are compared when the lifetimes of their components are ordered in the hazard rate order.
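
    For reference, the standard stochastic orders invoked above (the paper's exact GRL definition is not reproduced here): for lifetimes X and Y with densities f_X, f_Y and survival functions \bar{F}_X, \bar{F}_Y,

        X \le_{\mathrm{lr}} Y \iff f_Y(t)/f_X(t) \text{ is nondecreasing in } t, \qquad
        X \le_{\mathrm{hr}} Y \iff \bar{F}_Y(t)/\bar{F}_X(t) \text{ is nondecreasing in } t,

    and the likelihood ratio order implies the hazard rate order.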

  • Generalized Shock Models Based on a Cluster Point Process

    Publication Year: 2006 , Page(s): 542 - 550
    Cited by:  Papers (14)

    We review the principal progress of shock models during the last three decades. Our new model, the delta-shock model, and its latest developments are also introduced. Furthermore, for adapting shock models to a wider reliability field, we put forward a generalized framework for studying shock models, based on cluster point processes with cluster marks. In addition, we provide a noteworthy case under this framework as a concrete example in reliability with an insurance background, and give the asymptotic distribution of the risk process.
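
    As a concrete instance of the delta-shock model mentioned above, under one common convention (shocks arrive as a Poisson process, and the system fails at the first shock whose gap from the previous shock is shorter than delta; the paper's cluster-point-process generalization is not attempted here), the sketch below checks a simulated mean lifetime against the value suggested by Wald's identity. Parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)
    lam, delta, runs = 1.0, 0.5, 50_000    # shock rate, critical gap, Monte Carlo replications

    def lifetime():
        # System lifetime = time of the first shock arriving within delta of its predecessor.
        t = 0.0
        while True:
            gap = rng.exponential(1.0 / lam)
            t += gap
            if gap < delta:
                return t                   # the "fatal" closely spaced shock

    sim = np.mean([lifetime() for _ in range(runs)])
    exact = 1.0 / (lam * (1.0 - np.exp(-lam * delta)))   # E[T] = E[K] * E[gap] by Wald's identity
    print(f"simulated mean lifetime: {sim:.3f}   closed form: {exact:.3f}")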

  • Multiple Weighted Objectives Heuristic for the Redundancy Allocation Problem

    Publication Year: 2006 , Page(s): 551 - 558
    Cited by:  Papers (18)

    A new heuristic is proposed and tested for system reliability optimization. The multiple weighted objective heuristic is based on a transformation of the problem into a multiple objective optimization problem, and then, ultimately, into a different single objective problem. The multiple objectives are to simultaneously maximize the reliability of each individual subsystem. This is a logical approach because system reliability is the product of the subsystem reliabilities, so if they are maximized, the system reliability will also be high. The new formulation and associated heuristic are based on solving a sequence of linear programming problems. It is one of the very few optimization approaches that allow linear programming algorithms and software to be used for the redundancy allocation problem when mixing of functionally equivalent components is allowed. Thus, it represents an efficient solution method that relies on readily available optimization tools. The heuristic is tested on many example problems, and compared to competing solution approaches. Overall, the heuristic's performance is observed to be very good on the tested problems, and superior to the max-min heuristic regarding both efficiency and performance.
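
    A rough sketch of why linear programming becomes applicable here; this is not the authors' heuristic, only the underlying log transformation on assumed data: with independent parallel redundancy, the log of each subsystem's unreliability is linear in the component counts, so a weighted sum of subsystem log-unreliabilities can be minimized under a linear budget constraint. The continuous relaxation is solved below.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical data: 3 subsystems in series, 2 interchangeable component types per subsystem.
    r    = np.array([[0.90, 0.85],     # component reliabilities r[i, j]
                     [0.80, 0.75],
                     [0.95, 0.92]])
    cost = np.array([[3.0, 2.0],
                     [2.5, 1.5],
                     [4.0, 3.0]])
    budget, w = 30.0, np.ones(3)       # cost budget and per-subsystem objective weights

    # ln(unreliability of subsystem i) = sum_j x_ij * ln(1 - r_ij) is linear in x_ij.
    c = (w[:, None] * np.log(1.0 - r)).ravel()            # minimize weighted log-unreliabilities
    A_ub = np.vstack([cost.ravel(),                       # total cost <= budget
                      -np.kron(np.eye(3), np.ones(2))])   # at least one component per subsystem
    b_ub = np.concatenate([[budget], -np.ones(3)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")

    x = res.x.reshape(r.shape)                            # relaxed (non-integer) allocation
    R_sub = 1.0 - np.prod((1.0 - r) ** x, axis=1)
    print("relaxed allocation:\n", np.round(x, 2))
    print("system reliability of the relaxation:", round(float(np.prod(R_sub)), 4))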

  • Parameter Estimation and Performance of the p -Chart for Attributes Data

    Publication Year: 2006 , Page(s): 559 - 566
    Cited by:  Papers (8)

    Effects of parameter estimation are examined for the well-known p-chart for the fraction nonconforming based on attributes (binary) data. The exact run-length distribution of the chart is obtained for Phase II applications, when the fraction of nonconforming items, p, is unknown, by conditioning on the observed number of nonconformities in a set of reference data (from Phase I) used to estimate p. Numerical illustrations show that the actual performance of the chart can be substantially different from what one would nominally expect, in terms of the false alarm rate and/or the in-control average run-length. Moreover, the performance of the p-chart can be highly degraded in that an exceedingly large number of false alarms are observed, particularly when p is estimated, unless the number of reference observations is substantially large, much larger than what might be commonly used in practice. These results are useful in the study of the reliability of products or systems that involve binary data.
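
    A small sketch of the conditioning idea (not the paper's exact run-length derivation): estimate p from a single realization of Phase I reference data, build the usual 3-sigma p-chart limits from that estimate, and evaluate the resulting per-sample false-alarm probability and in-control ARL under the true p. All parameters are hypothetical.

    import numpy as np
    from scipy.stats import binom

    p_true = 0.02          # hypothetical true fraction nonconforming
    n      = 500           # Phase II subgroup size
    m, n0  = 25, 500       # Phase I: m reference subgroups of size n0

    rng = np.random.default_rng(3)
    phase1_total = rng.binomial(m * n0, p_true)      # one realization of the Phase I data
    p_hat = phase1_total / (m * n0)

    # 3-sigma p-chart limits built from the estimate (LCL truncated at 0).
    sigma = np.sqrt(p_hat * (1.0 - p_hat) / n)
    ucl, lcl = p_hat + 3 * sigma, max(p_hat - 3 * sigma, 0.0)

    # Conditional per-sample false-alarm probability given this p_hat, evaluated under the
    # true p: the chart signals when the subgroup count falls outside [lcl*n, ucl*n].
    upper = binom.sf(np.floor(ucl * n), n, p_true)                        # P(count > ucl*n)
    lower = binom.cdf(np.ceil(lcl * n) - 1, n, p_true) if lcl > 0 else 0.0
    far = upper + lower
    print(f"p_hat={p_hat:.4f}  false-alarm rate={far:.5f}  in-control ARL≈{1.0/far:.0f}")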

  • IEEE Transactions on Reliability information for authors

    Publication Year: 2006 , Page(s): 567 - 568
    Freely Available from IEEE
  • IEEE Transactions on Reliability institutional listings

    Publication Year: 2006 , Page(s): c3
    Freely Available from IEEE
  • IEEE Transactions on Reliability institutional listings

    Publication Year: 2006 , Page(s): c4
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong