
IEEE Transactions on Reliability

Issue 2 • June 2005

  • Table of contents

    Page(s): c1 - 193
  • IEEE Transactions on Reliability publication information

    Page(s): c2
  • A bibliography of accelerated test plans

    Page(s): 194 - 197

    This article provides a current bibliography of 159 references on statistical plans for accelerated tests. It will aid practitioners in selecting plans, and will stimulate researchers to develop needed plans & software.

  • On optimal burn-in procedures - a generalized model

    Page(s): 198 - 206

    Burn-in is a manufacturing technique intended to eliminate early failures. In this paper, burn-in procedures for a general failure model are considered. The general failure model contains two types of failure: Type I failure (minor failure), which can be removed by a minimal repair or a complete repair; and Type II failure (catastrophic failure), which can be removed only by a complete repair. During the burn-in process, two burn-in procedures are considered. In Burn-In Procedure I, the failed component is repaired completely regardless of the type of failure; whereas in Burn-In Procedure II, only minimal repair is done for a Type I failure, and a complete repair is performed for a Type II failure. Under the model, various additive cost functions are considered. It is assumed that, before undergoing the burn-in process, the component has a bathtub-shaped failure rate function with first change point t1, and second change point t2. The two burn-in procedures are compared in cases where both are applicable. It is shown that the optimal burn-in time b* minimizing the cost function always occurs before t1. It is also shown that a large initial failure rate justifies burn-in, i.e., b* > 0. The obtained results are applied to some examples.
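
    As a toy illustration of this type of calculation (invented rates & costs; not the authors' cost functions), one can construct a bathtub failure rate with change points t1 & t2, obtain the survival function from the cumulative hazard, and grid-search for the burn-in time minimizing a simple burn-in-cost-plus-field-failure-penalty objective:

```python
import numpy as np

# assumed change points of the bathtub curve, and a made-up hazard shape:
# decreasing before t1, flat between t1 & t2, increasing after t2
t1, t2 = 1.0, 6.0
def hazard(t):
    return np.where(t < t1, 2.0 - 1.5 * t / t1,
           np.where(t < t2, 0.5, 0.5 + 0.8 * (t - t2)))

ts = np.linspace(0.0, 8.0, 4001)
h = hazard(ts)
H = np.concatenate(([0.0], np.cumsum((h[1:] + h[:-1]) / 2 * np.diff(ts))))
S = np.exp(-H)                        # survival function from cumulative hazard

def cost(i, tau=2.0, c_burn=0.3, c_field=10.0):
    b = ts[i]                         # burn in for b, then a field mission of tau
    j = np.searchsorted(ts, b + tau)
    p_field_fail = 1.0 - S[j] / S[i]  # P(field failure | survived burn-in)
    return c_burn * b + c_field * p_field_fail

idx = np.arange(np.searchsorted(ts, 4.0))
b_star = ts[min(idx, key=cost)]
print(f"optimal burn-in time b* = {b_star:.3f} (here b* <= t1 = {t1})")
```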

  • Some considerations on system burn-in

    Page(s): 207 - 214

    The questions of whether to perform system burn-in, and how long the burn-in period should be, can be answered by developing a probabilistic model of the system lifetime. Previously, such a model was obtained to relate component burn-in information & assembly quality to the system lifetime, assuming that the assembly defects introduced in various locations of a system can cause connection failures whose occurrence follows an exponential distribution. This paper extends the exponential-based results to a general distribution so as to study the dependence of system burn-in on the defect occurrence distribution. In particular, a method of determining an optimal burn-in period that maximizes system reliability is developed based on the system lifetime model, assuming that systems are repaired at burn-in failures.

  • Test power reductions through computationally efficient, decoupled scan chain modifications

    Page(s): 215 - 223

    SOC test time minimization hinges on the attainment of core test parallelism; yet test power constraints hamper this parallelism, as excessive power dissipation may damage the SOC being tested. We propose a test power reduction methodology for SOC cores through scan chain modification. By inserting logic gates between scan cells, a given set of test vectors & captured responses is transformed into a new set of inserted stimuli & observed responses that yield fewer scan chain transitions. In identifying the best possible scan chain modification, we pursue a decoupled strategy wherein test data are decomposed into blocks, which are optimized for power in a mutually independent manner. The decoupled handling of test data blocks not only ensures significant overall power reduction, but also delivers computational efficiency. The proposed methodology is applicable to both fully, and partially specified test data; test data analysis in the latter case is performed on the basis of stimuli-directed controllability measures which we introduce. To explore the tradeoff between the test power reduction attained by the proposed methodology & the computational cost, we carry out an analysis that establishes the relationship between block granularity & the number of scan chain modifications. Such an analysis enables the utilization of the proposed methodology in a computationally efficient manner, while delivering solutions that comply with the stringent area & layout constraints in SOCs.
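
    A deliberately idealized sketch of the transition-reduction idea (this is not the authors' algorithm: the power model, the gate model, and the block handling are all simplifying assumptions here). Scan-in power is proxied by adjacent-bit transitions; an inverter between two scan cells is modeled as toggling whether that one boundary contributes a transition; inverter sites are then chosen block by block:

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.integers(0, 2, size=(16, 64))       # 16 stimuli, 64-cell chain

def transitions(vs, invert_mask):
    diff = vs[:, :-1] ^ vs[:, 1:]                 # transition at each boundary
    return int((diff ^ invert_mask).sum())        # inverters toggle boundaries

def choose_inverters(vs, block, budget_per_block):
    n_bound = vs.shape[1] - 1
    mask = np.zeros(n_bound, dtype=int)
    diff = vs[:, :-1] ^ vs[:, 1:]
    for start in range(0, n_bound, block):        # decoupled per-block choice
        gain = 2 * diff[:, start:start + block].sum(axis=0) - vs.shape[0]
        best = np.argsort(gain)[::-1][:budget_per_block]
        keep = best[gain[best] > 0]               # only net-positive inverters
        mask[start + keep] = 1
    return mask

mask = choose_inverters(vectors, block=16, budget_per_block=4)
print("transitions before:", transitions(vectors, 0),
      "after:", transitions(vectors, mask))
```

    In this idealized model the boundaries are mutually independent, so the block-by-block choice loses nothing; in the paper's actual setting, the decoupled block handling is what keeps the optimization computationally efficient.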

  • A hierarchical model for estimating the early reliability of complex systems

    Page(s): 224 - 231

    We describe a hierarchical Bayesian model for assessing the early reliability of complex systems for which sparse or no system-level failure data are available, except that which exists for comparable systems developed by different categories of manufacturers. A novel feature of the model is the inclusion of a "quality" index to allow separate treatment for systems produced by "experienced" & "inexperienced" manufacturers. We show how this index can be employed to distinguish the behavior of systems produced by each category of manufacturer for the first few applications, with later pooling of outcomes from both categories after the first few uses (i.e., after inexperienced manufacturers gain experience). We demonstrate how this model, together with suitable informative priors, can reproduce the reliability growth in the modeled systems. Estimation of failure probabilities (and associated uncertainties) for early launches of new space vehicles is used to illustrate the methodology. Disclaimer: this paper is provided solely to illustrate how hierarchical Bayesian methods can be applied to estimate system reliability (including uncertainties) for newly introduced complex systems with sparse or nonexistent system-level test data. The example problem considered (i.e., estimating failure probabilities of new launch vehicles) is employed solely for illustrative purposes. The authors have made numerous assumptions & approximations throughout the document in order to demonstrate the central techniques. The specific methodologies, results, and conclusions presented in this paper are neither approved nor endorsed by the United States Air Force or the Federal Aviation Administration.
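
    A minimal beta-binomial sketch in the same spirit (much simpler than the paper's hierarchical model; the priors, flight counts, and pooling rule below are all invented):

```python
from scipy import stats

# invented Beta priors encoding the quality index, and invented flight data
priors = {"experienced":   (8.0, 2.0),     # Beta(a, b) on P(success)
          "inexperienced": (2.0, 2.0)}     # flatter: more uncertainty
early  = {"experienced":   (5, 0),         # (successes, failures), early flights
          "inexperienced": (3, 2)}
later  = (14, 1)                           # pooled outcomes after the early phase

for who, (a, b) in priors.items():
    s, f = early[who]
    a, b = a + s, b + f                    # category-specific early update
    a, b = a + later[0], b + later[1]      # pooled update once experience accrues
    post = stats.beta(a, b)
    print(f"{who:13s} mean={post.mean():.3f}, "
          f"90% interval=({post.ppf(0.05):.3f}, {post.ppf(0.95):.3f})")
```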

  • Reliability analysis for various communication schemes in wireless CORBA

    Page(s): 232 - 242

    For the purpose of designing more reliable networks, we extend traditional reliability analysis from wired networks to wireless networks with imperfect components. Wireless network systems, such as wireless CORBA, exhibit a unique handoff characteristic which leads to different communication structures with various components & links; therefore, the traditional definition of two-terminal reliability is no longer applicable. We propose a new term, end-to-end expected instantaneous reliability, to integrate these different communication structures into one metric, which includes not only failure parameters but also service parameters, and remains a monotonically decreasing function of time. The end-to-end expected instantaneous reliability, and its corresponding MTTF, are evaluated quantitatively for different wireless communication schemes. To observe the gain in overall reliability improvement, the reliability importance of imperfect components is also evaluated. The results show that the failure parameters of different components have different effects on MTTF & reliability importance. For different expected working times of a system, the focus of reliability improvement should shift from one component to another in order to achieve the highest reliability gain. Furthermore, the number of engaged components during a communication state is more critical than the number of system states. For simplicity, we assume that the wired & wireless communication links are perfect, and omit them in the reliability analysis; if these links are incorporated into the proposed end-to-end expected instantaneous reliability, a more detailed & complete reliability assessment of a wireless network system is obtained. Our quantitative measurements are conducted as an example under the assumption that the failure & service rates are constant; in practice, failure & service processes may follow other distributions. Overall, our investigation provides an initial yet comprehensive approach to measuring the reliability of wireless networks. Although our analysis is conducted on wireless CORBA platforms, it is easily extensible to generic wireless network systems.
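
    A minimal numerical sketch of a state-weighted reliability metric of this kind (the communication states, component failure rates, and state probabilities below are invented, and the paper's handoff model is considerably richer):

```python
import numpy as np

# invented communication states: (failure rates of engaged components, P(state))
states = {
    "direct":         ([1e-4, 2e-4], 0.6),
    "via_gateway":    ([1e-4, 2e-4, 5e-4], 0.3),
    "during_handoff": ([1e-4, 2e-4, 5e-4, 8e-4], 0.1),
}

t = np.linspace(0.0, 20000.0, 20001)          # hours
R = np.zeros_like(t)
for rates, p in states.values():
    R += p * np.exp(-sum(rates) * t)          # series path, constant rates

mttf = float(np.sum((R[1:] + R[:-1]) / 2 * np.diff(t)))   # integral of R(t)
print(f"R(1000 h) = {np.interp(1000.0, t, R):.4f}, MTTF = {mttf:,.0f} h")
```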

  • Optimal design of reliable network systems in presence of uncertainty

    Page(s): 243 - 253

    In practice, network designs can be based on multiple choices of redundant configurations, and different available components which can be used to form links. More specifically, the reliability of a network system can be improved through redundancy allocation or, for a fixed network topology, by selection of highly reliable links between node pairs, subject to limited overall budgets, and other constraints. The choice of a preferred network system design requires the estimation of its reliability; however, the uncertainty associated with such estimates must also be considered in the decision process. Indeed, network system reliability is generally estimated from estimates of the reliability of lower-level components (nodes & links) affected by uncertainties, and the propagation of the estimation uncertainty from the components degrades the accuracy of the system reliability estimate. This paper formulates a multiple-objective optimization approach aimed at maximizing the network reliability estimate, and minimizing its associated variance, when component types with uncertain reliability, and redundancy levels are the decision variables. In the proposed approach, Genetic Algorithms (GA) and Monte Carlo (MC) simulation are effectively combined to identify optimal network designs with respect to the stated objectives. A set of Pareto optimal solutions is obtained, so that decision-makers have the flexibility to choose the compromise solution which best satisfies their risk profiles. Sample networks are solved using the proposed approach. The results indicate that significantly different designs are obtained when the formulation incorporates estimation uncertainty into the optimal design problem objectives.
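
    A compact stand-in for the GA + MC combination (here a tiny design space is enumerated where the GA would search a large one; the component uncertainty models, costs, and budget are invented):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
# component types per stage: (Beta parameters for uncertain reliability, cost)
types = {"cheap": ((50, 10), 1.0), "good": ((200, 10), 3.0)}
BUDGET = 16.0

def evaluate(design, n_mc=2000):
    """MC estimate of system reliability mean & std for a two-stage series design."""
    r_sys, cost = np.ones(n_mc), 0.0
    for typ, k in design:                       # k-redundant parallel stage
        (a, b), c = types[typ]
        r = rng.beta(a, b, size=n_mc)           # uncertain component reliability
        r_sys *= 1.0 - (1.0 - r) ** k
        cost += c * k
    return r_sys.mean(), r_sys.std(), cost

choices = list(itertools.product(types, [1, 2, 3]))
designs = [d for d in itertools.product(choices, repeat=2)
           if sum(types[t][1] * k for t, k in d) <= BUDGET]
scores = {d: evaluate(d) for d in designs}
pareto = [d for d, (m, s, _) in scores.items()      # keep non-dominated designs
          if not any(m2 >= m and s2 <= s and (m2, s2) != (m, s)
                     for m2, s2, _ in scores.values())]
for d in pareto:
    m, s, c = scores[d]
    print(d, f"mean={m:.4f} std={s:.4f} cost={c:.0f}")
```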

  • A new dynamic programming method for reliability & redundancy allocation in a parallel-series system

    Page(s): 254 - 261

    Reliability & redundancy allocation is one of the most frequently encountered problems in system design. This problem is subject to constraints related to the design, such as required structural, physical, and technical characteristics; and to the components available in the market. This last constraint implies that system components, and their reliability, must belong to a finite set. For a parallel-series system, we show that the problem can be modeled as an integer linear program, and solved by a decomposition approach. The problem is decomposed into one sub-problem per subsystem: the sub-problem for a given subsystem consists of determining the number of components of each type needed to reach a given reliability target with minimum cost, while the global problem consists of determining the reliability targets of the subsystems. We show that the sub-problems are equivalent to one-dimensional knapsack problems which can be solved in pseudopolynomial time with a dynamic programming approach, that the global problem can also be solved by a dynamic programming technique, and that the obtained method, YCC, converges toward an optimal solution.
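
    A sketch of the subsystem sub-problem as a covering knapsack (invented numbers; the paper's full method also optimizes the subsystem reliability targets): in a parallel subsystem the log-unreliabilities of the chosen components add, so reaching a reliability target at minimum cost becomes a one-dimensional knapsack solvable by dynamic programming after discretization:

```python
import math

components = [(0.90, 4.0), (0.80, 2.5), (0.70, 1.5)]   # (reliability, cost)
R_TARGET = 0.995
SCALE = 100                                             # discretization grid

# required total "weight"; truncating item weights downward is conservative
W = math.ceil(-math.log(1.0 - R_TARGET) * SCALE)
weights = [int(-math.log(1.0 - r) * SCALE) for r, _ in components]

INF = float("inf")
best = [0.0] + [INF] * W        # best[w] = min cost to accumulate weight >= w
for w in range(1, W + 1):
    for wt, (_, c) in zip(weights, components):
        prev = max(0, w - wt)
        if best[prev] + c < best[w]:
            best[w] = best[prev] + c
print(f"min cost to reach R >= {R_TARGET}: {best[W]:.1f}")
```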

  • A practical method for obtaining prior distributions in reliability

    Page(s): 262 - 269

    In this paper, we propose a comprehensive methodology to specify prior distributions for commonly used models in reliability. The methodology is based on characteristics that are easy for the user to communicate in terms of time to failure. This information could be in the form of intervals for the mean and standard deviation, or quantiles of the failure-time distribution. The derivation of the prior distribution is carried out for two families of proper initial distributions, namely the s-normal-gamma, and the uniform. We show the implementation of the proposed method for the parameters of the s-normal, lognormal, extreme value, Weibull, and exponential models, and then apply the procedure to two examples from the reliability literature. By estimating the prior predictive density, we find that the proposed method renders consistent distributions for the different models that fulfill the required characteristics for the time to failure. This feature is particularly important in the application of the Bayesian approach to different inference problems in reliability, model selection being an important example. The method is general, and hence may be extended to other models not mentioned in this paper.
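
    A small sketch of the elicitation step only (not the paper's full prior construction): two expert quantile statements about time to failure pin down a Weibull model, which could then seed a prior. The quantile values below are invented:

```python
import math

def weibull_from_quantiles(t_lo, p_lo, t_hi, p_hi):
    """Solve F(t) = 1 - exp(-(t/lam)^k) through two (time, probability) points."""
    k = math.log(math.log(1 - p_hi) / math.log(1 - p_lo)) / math.log(t_hi / t_lo)
    lam = t_lo / (-math.log(1 - p_lo)) ** (1.0 / k)
    return k, lam

# expert statement: 10% of units fail by 500 h, and 90% by 2000 h
k, lam = weibull_from_quantiles(500.0, 0.10, 2000.0, 0.90)
mean = lam * math.gamma(1.0 + 1.0 / k)
print(f"shape k={k:.2f}, scale lambda={lam:.0f} h, implied mean={mean:.0f} h")
```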

  • Bathtub shaped failure rates from mixtures: a practical point of view

    Page(s): 270 - 275

    We show that a bathtub shaped failure rate can be obtained from a mixture of two increasing failure rate (IFR) models. Specifically, we study the failure rate of the mixture of an exponential distribution, and a Weibull distribution with strictly increasing failure rate. Under some reasonable conditions, we show that, from a practical point of view, the mixture failure rate is bathtub shaped. Similar results can be obtained from other mixtures.
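
    The claim is easy to probe numerically. With mixture weight p, densities f1 & f2, and survival functions S1 & S2, the mixture failure rate is h(t) = [p f1(t) + (1-p) f2(t)] / [p S1(t) + (1-p) S2(t)]; the following check (with invented parameters) mixes an exponential with a strictly-IFR Weibull and inspects the shape:

```python
import numpy as np

p, lam = 0.15, 2.0                 # weak subpopulation: exponential, rate 2
k, scale = 3.0, 5.0                # strong subpopulation: Weibull, IFR (k > 1)

t = np.linspace(0.01, 12.0, 2000)
f1, S1 = lam * np.exp(-lam * t), np.exp(-lam * t)
z = (t / scale) ** k
f2, S2 = (k / scale) * (t / scale) ** (k - 1) * np.exp(-z), np.exp(-z)
h = (p * f1 + (1 - p) * f2) / (p * S1 + (1 - p) * S2)

dh = np.diff(h)
print("initially decreasing:", bool(dh[0] < 0),
      "| eventually increasing:", bool(dh[-1] > 0),
      "| minimum near t =", round(float(t[h.argmin()]), 2))
```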

  • Origins, properties, and parameters estimation of the hyperbolic reliability model

    Page(s): 276 - 281

    This paper deals with a new reliability model for nonrepairable systems, the Hyperbolic. Its distinctive characteristic is a decreasing hazard rate approaching a value greater than zero. Potentially, many mechanisms of failure can produce mortality laws of Hyperbolic type; the "Deterioration", "Stress-Strength", and "Shocks" failure models discussed here are such examples. To make use of the Hyperbolic model easier, the maximum likelihood estimators of its parameters are given. They can be calculated without difficulty, and appear to be acceptable for practical technological applications. Finally, two applicative examples are given. These examples show that when the mechanism of failure is one of the types discussed, the Hyperbolic model fits better than alternative reliability models such as the Gamma, and the Weibull.

  • Parameter estimation of incomplete data in competing risks using the EM algorithm

    Page(s): 282 - 290

    Consider a system made up of multiple components connected in series. In this case, the failure of the whole system is caused by the earliest failure of any of the components, which is commonly referred to as competing risks. In certain situations, determining the cause of failure may be expensive, or very difficult due to the lack of appropriate diagnostics. Therefore, it may happen that the failure time is observed, but its corresponding cause of failure is not fully investigated; this is known as masking. Moreover, the competing risks problem is further complicated by possible censoring, which is very common in practice because of time and cost considerations on experiments. In this paper, we deal with parameter estimation of incomplete lifetime data in competing risks using the EM algorithm, where incompleteness arises due to censoring and masking. Several studies have been carried out, but parameter estimation for incomplete data has mainly focused on exponential models. We provide the general likelihood method, and the parameter estimation of a variety of models including exponential, s-normal, and lognormal models; the method can easily be implemented to find the MLE of other models. Exponential and lognormal examples are illustrated with parameter estimation, and a graphical technique for checking model validity.
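
    A minimal EM sketch for the exponential case only (the paper treats more general models and masking patterns): two exponential competing risks, causes masked at random, with right censoring. For exponentials, the E-step's posterior cause probabilities reduce to lam_j / sum(lam):

```python
import numpy as np

rng = np.random.default_rng(3)
lam_true = np.array([0.002, 0.005])
n, t_cens = 300, 400.0
times_all = rng.exponential(1.0 / lam_true, size=(n, 2))
t = times_all.min(axis=1)                        # series system: earliest failure
cause = times_all.argmin(axis=1)
censored = t > t_cens
t = np.minimum(t, t_cens)
masked = (~censored) & (rng.random(n) < 0.4)     # 40% of causes unobserved

lam = np.array([0.01, 0.01])                     # initial guess
for _ in range(200):
    # E-step: expected cause indicators for each observation
    w = np.zeros((n, 2))
    known = (~censored) & (~masked)
    w[known, cause[known]] = 1.0
    w[masked] = lam / lam.sum()                  # posterior cause probabilities
    # M-step: rate = expected failures from cause j / total exposure time
    lam = w.sum(axis=0) / t.sum()
print("estimated rates:", lam.round(5), "| true:", lam_true)
```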

  • Data mapping and the prediction of common cause failure probability

    Page(s): 291 - 296

    General failure event data from various sources are often used to estimate the failure probability for the system of interest, especially when s-dependence exists among component failures, where common cause failure plays an important role. Failure event data from different sources must be reasonably interpreted, and correctly applied, so that the information about the load environment, and component/system properties can be used correctly. In estimating the probability of s-dependent system failure, both the load distribution, and the component strength distribution are much more important than a component failure probability index. Based on the relationship among different multiple failures, this paper presents a data mapping approach to estimating dependent system failure probability from multiple failure event data of other systems with different sizes. The underlying assumption of data mapping is that failures of different multiplicities (including single failures) are correlated with each other for a group of components subjected to the same or correlated random loads. Taking a group of s-independent components operating under the same random load as an example, the likelihood of a component failure at a trial depends not only on the strength of the individual component but also on the realization of the random load; the likelihood of a specific multiple failure at a trial is likewise determined by both the component strengths, and the load realization. Furthermore, if a larger load sample appears, the likelihoods of failure are higher; conversely, if a smaller load sample appears, the likelihoods of failure are lower. We emphasize that system failure event data should be interpreted & applied under the principle that various multiple failures are distinguished by their respective failure multiplicity and/or system size, and are inherently interrelated through correlated load environments. The approach starts by determining the load parameters, and component strength parameters according to the multiple (including single) failure event data available; these parameters are then used to calculate the probability of multiple failures for systems of different sizes. This approach is applicable to predicting high-multiplicity failure probability from low-multiplicity failure event data. Examples of estimating multiple failure probabilities of EDG (emergency diesel generators) with mapped data illustrate that the proposed approach performs well.
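
    A Monte Carlo sketch of the shared-load mechanism behind the mapping (all distributions & parameters below are invented): because every component in the group sees the same load realization, failure multiplicities are correlated, and one fitted parameter set yields multiple-failure probabilities for any group size:

```python
import numpy as np
from math import comb
from scipy.stats import norm

rng = np.random.default_rng(7)

def p_multiple(n, k, mu_load=50.0, sd_load=10.0, mu_str=80.0, sd_str=8.0,
               n_mc=200_000):
    """P(exactly k of n s-independent components fail under a shared load)."""
    load = rng.normal(mu_load, sd_load, n_mc)    # one load per demand
    p = norm.cdf((load - mu_str) / sd_str)       # P(fail | load), per component
    return float((comb(n, k) * p**k * (1 - p)**(n - k)).mean())

# load & strength parameters would be fitted to observed low-multiplicity data
# of, say, a 4-unit EDG group; the same parameters then predict multiplicities
# for other group sizes:
for n in (4, 8):
    print(f"n={n}:", {k: f"{p_multiple(n, k):.2e}" for k in (1, 2, n)})
```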

  • Reliability modeling of multi-state degraded systems with multi-competing failures and random shocks

    Page(s): 297 - 303

    In this paper, we develop a generalized multi-state degraded system reliability model subject to multiple competing failure processes, including two degradation processes, and random shocks. The operating condition of the multi-state system is characterized by a finite number of states, and we present a methodology to generate the system states when there are multiple failure processes. The model can be used not only to determine the reliability of the degraded systems in the context of multi-state functions, but also to obtain the states of the systems by calculating the system state probabilities. Several numerical examples are given to illustrate the concepts.

  • Integrating preventive maintenance planning and production scheduling for a single machine

    Page(s): 304 - 309

    Preventive maintenance planning, and production scheduling are two activities that are inter-dependent, but most often performed independently. Considering that preventive maintenance, and repair affect both available production time, and the probability of machine failure, we are surprised that this inter-dependency seems to be overlooked in the literature. We propose an integrated model that coordinates preventive maintenance planning decisions with single-machine scheduling decisions so that the total expected weighted completion time of jobs is minimized. The machine of interest is subject to minimal repair upon failure, and can be renewed by preventive maintenance. We investigate the value of integrating production scheduling with preventive maintenance planning by conducting an extensive experimental study using small scheduling problems, comparing the performance of the integrated solution with the solutions obtained from solving the preventive maintenance planning, and job scheduling problems independently. For the problems studied, integrating the two decision-making processes resulted in an average improvement of approximately 2%, and occasional improvements of as much as 20%. Depending on the nature of the manufacturing system, an average savings of 2% may be significant; certainly, savings in this range indicate that integrated preventive maintenance planning, and production scheduling should be focused on critical (bottleneck) machines. Because we use total enumeration to solve the integrated model for small problems, we propose a heuristic approach for solving larger problems. Our analysis is based on minimizing total weighted completion time; thus, both the scheduling, and maintenance problems favor processing shorter jobs at the beginning of the schedule. Given that due-date-based objectives, such as minimizing total weighted job tardiness, present more apparent trade-offs & conflicts between preventive maintenance planning, and job scheduling, we believe that integrated preventive maintenance planning & production scheduling is a worthwhile area of study.
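
    A brute-force sketch of the integration idea on a toy instance (not the authors' model; the job data, repair times, and Weibull intensity are invented): enumerate job orders jointly with PM positions, charging each job the expected minimal-repair delay accrued at the machine's current age:

```python
import itertools

jobs = [(5.0, 3), (2.0, 4), (8.0, 1), (4.0, 2)]   # (processing time, weight)
T_REPAIR, T_PM, K, LAM = 1.5, 3.0, 2.0, 0.05      # Weibull H(t) = (LAM*t)^K

def H(t):
    return (LAM * t) ** K                         # cumulative failure intensity

def plan_cost(order, pm_before):
    age = clock = total = 0.0
    for pos, j in enumerate(order):
        if pos in pm_before:                      # PM renews the machine
            clock += T_PM
            age = 0.0
        p, w = jobs[j]
        exp_failures = H(age + p) - H(age)        # expected minimal repairs
        clock += p + T_REPAIR * exp_failures
        age += p
        total += w * clock                        # weighted completion time
    return total

best = min(((o, pm) for o in itertools.permutations(range(4))
            for r in range(5) for pm in itertools.combinations(range(4), r)),
           key=lambda plan: plan_cost(*plan))
print("best order:", best[0], "| PM before positions:", best[1],
      "| cost:", round(plan_cost(*best), 2))
```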

  • An alternative degradation reliability modeling approach using maximum likelihood estimation

    Page(s): 310 - 317

    An alternative degradation reliability modeling approach is presented in this paper. This approach extends the graphical approach used by several authors by considering the natural ordering of performance degradation data using a truncated Weibull distribution. Maximum Likelihood Estimation is used to provide a one-step method for estimating the model's parameters. A closed-form expression of the likelihood function is derived for a two-parameter truncated Weibull distribution with time-independent shape parameter, and a semi-numerical method is presented for the truncated Weibull distribution with a time-dependent shape parameter. Numerical studies of generated data suggest that the proposed approach provides reasonable estimates even for small sample sizes. The analysis of fatigue data shows that the proposed approach closely matches the crack-length mean value curve obtained using the path curve approach, and yields better results than the graphical approach.
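
    A hedged MLE sketch for a left-truncated two-parameter Weibull, as a simplified stand-in for the paper's model (simulated data; log-parametrization keeps both parameters positive): the likelihood contribution of an observation x >= tau is f(x)/S(tau):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
k_true, lam_true, tau = 2.5, 10.0, 6.0
x = lam_true * rng.weibull(k_true, 20_000)
x = x[x >= tau][:300]                       # keep 300 truncated observations

def nll(theta):
    k, lam = np.exp(theta)                  # log-parametrize: k, lam > 0
    z, z0 = (x / lam) ** k, (tau / lam) ** k
    logf = np.log(k / lam) + (k - 1) * np.log(x / lam) - z
    return -(logf + z0).sum()               # -log[f(x)/S(tau)], S(tau)=exp(-z0)

res = minimize(nll, x0=np.log([1.0, np.mean(x)]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
print(f"k_hat={k_hat:.2f} (true {k_true}), "
      f"lambda_hat={lam_hat:.2f} (true {lam_true})")
```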

  • An inspection-maintenance model for systems with multiple competing processes

    Page(s): 318 - 327

    In some applications, the failure rate of a system depends not only on time, but also on the status of the system, such as vibration level, efficiency, and the number of random shocks it has experienced, all of which cause degradation. In this paper, we develop a generalized condition-based maintenance model subject to multiple competing failure processes, including two degradation processes, and random shocks. An average long-run maintenance cost rate function is derived based on expressions for the degradation paths & cumulative shock damage, which are measurable. A geometric sequence is employed to develop the inter-inspection intervals. Upon inspection, one must decide whether to perform maintenance (preventive or corrective), or to do nothing. The preventive maintenance thresholds for the degradation processes & the inspection sequence are the decision variables of the proposed model. We also present an algorithm based on the Nelder-Mead downhill simplex method to calculate the optimum policy that minimizes the average long-run maintenance cost rate. Numerical examples are given to illustrate the results using the optimization algorithm.
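
    A skeletal sketch of the optimization step only: the cost-rate function below is an invented surrogate (deterministic degradation, capped detection delay), not the paper's derivation, but it shows how the decision variables are handed to the Nelder-Mead simplex method:

```python
import numpy as np
from scipy.optimize import minimize

def cost_rate(x, c_insp=1.0, c_pm=40.0, c_cm=400.0,
              fail_level=100.0, rate=2.0):
    """Toy surrogate: PM threshold w, inspection interval dt, linear degradation."""
    w, dt = x
    if not (0.0 < w < fail_level and dt > 0.0):
        return np.inf                             # reject infeasible points
    t_hit = w / rate                              # time to reach PM threshold
    t_fail = (fail_level - w) / rate              # further time until failure
    delay = min(dt / 2.0, t_fail)                 # mean detection lag, capped
    p_fail = min(1.0, (dt / 2.0) / t_fail)        # chance of missing PM window
    n_insp = t_hit / dt + 1.0
    cycle_cost = c_insp * n_insp + c_pm * (1 - p_fail) + c_cm * p_fail
    return cycle_cost / (t_hit + delay)           # average long-run cost rate

res = minimize(cost_rate, x0=np.array([60.0, 5.0]), method="Nelder-Mead")
w_star, dt_star = res.x
print(f"PM threshold = {w_star:.1f}, inspection interval = {dt_star:.2f}, "
      f"cost rate = {res.fun:.3f}")
```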

  • Repairable consecutive-k-out-of-n: G systems with r repairmen

    Page(s): 328 - 337

    In this paper, we study repairable consecutive-k-out-of-n: G systems with r repairmen. The systems are either circular or linear. It is assumed that the working time, and the repair time of each component in the system are both exponentially distributed, and that every component after repair is 'as good as new'. Each component is classified as either a key component, or an ordinary one, according to its priority in the system's repair. By using the definition of generalized transition probability, the state transition probabilities of the system are derived. Some important indices related to the repairmen are obtained, and some reliability indices are derived by using the Laplace transform technique. Numerical examples are then studied in detail to demonstrate the theoretical results developed in the paper, and to verify the validity & generality of the proposed method.
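
    The static structure function is easy to illustrate (the paper's contribution, the Markov repair analysis with r repairmen, is not reproduced here): a linear consecutive-k-out-of-n: G system works iff some k consecutive components all work, and a small dynamic program over the trailing run of working components gives its reliability:

```python
def consecutive_k_out_of_n_G(n, k, p):
    """P(at least k consecutive working components among n), each works w.p. p."""
    # state: length of the current trailing run of working components,
    # capped at k; state k is absorbing (the system already works)
    dist = {0: 1.0}
    for _ in range(n):
        nxt = {}
        for run, prob in dist.items():
            if run == k:
                nxt[k] = nxt.get(k, 0.0) + prob
                continue
            up = min(run + 1, k)
            nxt[up] = nxt.get(up, 0.0) + prob * p        # component works
            nxt[0] = nxt.get(0, 0.0) + prob * (1.0 - p)  # component fails
        dist = nxt
    return dist.get(k, 0.0)

# e.g. a linear consecutive-2-out-of-5: G system, component reliability 0.9
print(f"R = {consecutive_k_out_of_n_G(5, 2, 0.9):.6f}")
```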

  • Optimal maintenance policies under different operational schedules

    Page(s): 338 - 346

    In the reliability literature, maintenance time is usually ignored during the optimization of maintenance policies. In some scenarios, costs due to system failure vary with time, and ignoring maintenance time leads to unrealistic results. This paper develops maintenance policies for such situations, where the system under study alternates between two states: up, and down. The costs due to system failure in the up state consist of both business losses & maintenance costs, whereas those in the down state include only maintenance costs. We consider three models. Model A performs only corrective maintenance (CM). Model B performs imperfect preventive maintenance (PM) sequentially, together with CM. Model C executes PM periodically, together with CM; this PM can restore the system to the state just after the latest CM. The CM in this paper is imperfect repair. Finally, the impact of these maintenance policies is illustrated through numerical examples.

  • Optimal policies with decreasing probability of imperfect maintenance

    Page(s): 347 - 357

    This study applies periodic preventive maintenance to three repair models, in which a failed system receives a major repair, a minimal repair, or is restored only at the next perfect preventive maintenance. Two types of preventive maintenance are performed, namely imperfect preventive maintenance, and perfect preventive maintenance. The probability that preventive maintenance is perfect depends on the number of imperfect maintenance operations performed since the last renewal cycle. Mathematical formulas for the expected cost per unit time are obtained. For each model, the optimum preventive maintenance time T*, which minimizes the cost rate, is discussed. Various special cases are considered, and a numerical example is presented.

  • Identify unrepairability to speed-up spare allocation for repairing memories

    Page(s): 358 - 365

    In this paper, we discuss strategies for identifying unrepairable memories, and from them introduce a novel theorem that enables more precise identification. We also propose a new algorithm for searching repair solutions, which characterizes the rows & columns of defective memory cells with revised effective coefficients. We have simulated the algorithm on many generated example maps, and compared it with previous algorithms to verify its efficiency. The algorithm is then combined with the unrepairability-identification strategies to form a complete repair flow. The complete algorithm has also been run on many examples with various memory sizes, defect counts, and distribution types. The simulation results further show that identifying unrepairability in advance makes the reconfiguration procedure run much faster than searching for solutions directly.
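
    A toy sketch of the classical must-repair reasoning such work builds on (the paper's refined theorem & effective coefficients are not reproduced): with ra spare rows & ca spare columns, any row holding more faults than the remaining spare columns must take a spare row, and symmetrically for columns; iterating this shrinks the problem before any search:

```python
from collections import Counter

def must_repair(faults, ra, ca):
    """Repeatedly apply the must-repair rule; return forced spares & residue."""
    faults = set(faults)                      # defect map: {(row, col), ...}
    used_r, used_c = set(), set()
    changed = True
    while changed:
        changed = False
        for r, cnt in Counter(r for r, _ in faults).items():
            if cnt > ca - len(used_c):        # more faults than spare columns
                used_r.add(r)                 # left: a row spare is forced
                faults = {f for f in faults if f[0] != r}
                changed = True
        for c, cnt in Counter(c for _, c in faults).items():
            if cnt > ra - len(used_r):        # symmetric rule for columns
                used_c.add(c)
                faults = {f for f in faults if f[1] != c}
                changed = True
    repair_may_exist = len(used_r) <= ra and len(used_c) <= ca
    return used_r, used_c, faults, repair_may_exist

faults = [(0, 1), (0, 3), (0, 7), (2, 3), (5, 3), (6, 3), (4, 4)]
rows, cols, residue, maybe = must_repair(faults, ra=1, ca=2)
print("forced rows:", rows, "| forced cols:", cols,
      "| residual faults:", residue, "| possibly repairable:", maybe)
```

    If the forced spares alone exceed the budget, the memory is certainly unrepairable with no search at all; otherwise only the residual faults need to be handed to an exact solver.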

  • Special issue on reliability studies on nanotechnology

    Page(s): 367 - 368

Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong