IEEE Transactions on Reliability

Issue 1 • March 2011

  • Table of contents

    Page(s): C1 - 1
  • IEEE Transactions on Reliability publication information

    Page(s): C2
  • Special Section on Prognostics and Systems Health Management (PHM), Extended Papers From the PHM Macau 2010 Conference

    Page(s): 2
  • Fusion Approach for Prognostics Framework of Heritage Structure

    Page(s): 3 - 13

    The Cutty Sark is undergoing major conservation to slow down the deterioration of the original Victorian fabric of the ship. While the conservation work being carried out is “state of the art,” there is at present no evidence of the effectiveness of the conservation work over the next fifty years. A prognostics framework is being developed to monitor the “health” of the ship's iron structures to help ensure a 50-year life once conservation is completed, with only minor deterioration taking place over time. This paper presents the prognostics framework being developed, which encompasses four approaches: (1) Canary and Parrot devices, (2) Physics-of-Failure (PoF) models, (3) precursor monitoring and data trend analysis, and (4) Bayesian Networks. “Canary” and “Parrot” devices have been designed to mimic the actual mechanisms that would lead to failure of the iron structures: canary devices fail faster, to act as indicators of forthcoming failures, while parrot devices fail at the same rate as the structure under consideration. A PoF model based on a decrease of the corrosion rate over time is used to predict the remaining life of an iron structure. The Mahalanobis Distance (MD) is used as a precursor monitoring technique to obtain a single comparison metric from multiple sensor data, representing anomalies detected in the system. Bayesian Network models are then used as a fusion technique, integrating remaining-life predictions from PoF models with information on possible anomalies from MD analysis to provide a new prediction of remaining life. This paper describes why and how the four approaches are used for diagnostic and prognostic purposes, and how they are integrated into the prognostics framework.
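
    A minimal sketch of the precursor-monitoring step, assuming the Mahalanobis Distance is computed for each new multivariate sensor reading against a healthy-baseline sample. The three synthetic channels and all values below are illustrative, not the Cutty Sark instrumentation.

        # Hypothetical illustration: MD fuses several sensor channels into one anomaly metric.
        import numpy as np

        def mahalanobis_distance(x, baseline):
            """MD of observation x relative to a healthy-baseline sample (rows = observations)."""
            mu = baseline.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))
            d = x - mu
            return float(np.sqrt(d @ cov_inv @ d))

        rng = np.random.default_rng(0)
        healthy = rng.normal(0.0, 1.0, size=(500, 3))                     # healthy training data
        print(mahalanobis_distance(np.array([0.1, -0.2, 0.3]), healthy))  # small: normal
        print(mahalanobis_distance(np.array([4.0, 5.0, 6.0]), healthy))   # large: anomaly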

  • Combined Probability Approach and Indirect Data-Driven Method for Bearing Degradation Prognostics

    Page(s): 14 - 20

    This study proposes an application of relevance vector machine (RVM), logistic regression (LR), and autoregressive moving average/generalized autoregressive conditional heteroscedasticity (ARMA/GARCH) models to assess failure degradation based on simulated run-to-failure bearing data. Failure degradation is calculated using an LR model, and then regarded as the target vector of failure probability for training the RVM model. A multi-step-ahead method based on ARMA/GARCH is used to predict censored data, and its prediction performance is compared with that of the Dempster-Shafer regression (DSR) method. Furthermore, RVM is selected as an intelligent system, and trained with run-to-failure bearing data and the target vectors of failure probability obtained from the LR model. After training, RVM is employed to predict the failure probability of individual bearing samples. In addition, statistical process control is used to analyze the variance of the failure probability. The results show the novelty of the proposed method, which can be considered a valid machine degradation prognostic model.
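
    A hedged sketch of the target-generation step described above: logistic regression maps a degradation feature to a failure probability that then trains a regressor. scikit-learn has no RVM, so kernel ridge regression stands in; the feature, labels, and thresholds are assumptions of this sketch.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.kernel_ridge import KernelRidge

        t = np.linspace(0, 1, 200)                      # normalized run-to-failure time
        rms = 0.5 + 2.0 * t ** 3 + np.random.default_rng(1).normal(0, 0.05, t.size)
        labels = (t > 0.8).astype(int)                  # last 20% of life labeled "failed"

        lr = LogisticRegression().fit(rms.reshape(-1, 1), labels)
        p_fail = lr.predict_proba(rms.reshape(-1, 1))[:, 1]       # target vectors

        rvm_stand_in = KernelRidge(kernel="rbf", gamma=5.0).fit(t.reshape(-1, 1), p_fail)
        print(rvm_stand_in.predict([[0.9]]))            # predicted failure probability late in life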

  • Gear Damage Assessment Based on Cyclic Spectral Analysis

    Page(s): 21 - 32

    With regard to the AM-FM characteristics, and especially the cyclostationarity, of gear vibrations, cyclic spectral analysis is used to extract the modulation features of gearbox vibration signals to detect and assess localized gear damage. An explicit closed-form equation for the cyclic spectral density of AM-FM signals is deduced, and its properties in the joint cyclic frequency-frequency domain are summarized. The ratio between the sums of the cyclic spectral density magnitude along the frequency axis at the cyclic frequencies of the modulating frequency and of 0 Hz varies monotonically with the amplitude modulation magnitude, and is hence useful for tracking the modulation magnitude. Localized gear damage generates periodic impulses, and its growth increases their magnitude; consequently, the amplitude modulation magnitude of gear AM-FM vibration signals increases. Hence the ratio can be used as an indicator of the health condition of gearboxes. The analysis of both simulated gear-crack vibration signals and gearbox lifetime experiments shows a globally monotonic increase in the ratio as gear damage severity increases. The proposed approach has the potential to assess the health of gearboxes, and to predict severe damage.
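
    A rough numerical sketch of that health indicator, assuming an averaged cyclic periodogram as the estimator of the cyclic spectral density (the paper's closed-form expressions are not reproduced); the sampling rate, modulating frequency, and AM test signal are all invented.

        import numpy as np

        def csd_magnitude_sum(x, fs, alpha, nseg=256):
            """Sum over f of |averaged cyclic periodogram| at cyclic frequency alpha."""
            segs = x[: len(x) // nseg * nseg].reshape(-1, nseg)
            t = np.arange(nseg) / fs
            up = np.exp(-1j * np.pi * alpha * t)          # multiply -> spectrum shifted by +alpha/2
            acc = np.zeros(nseg, dtype=complex)
            for s in segs:
                xp = np.fft.fft(s * up)                   # X(f + alpha/2)
                xm = np.fft.fft(s * up.conj())            # X(f - alpha/2)
                acc += xp * np.conj(xm)
            return np.abs(acc).sum() / len(segs)

        fs, fm = 10_000.0, 30.0                           # sampling / modulating freq (assumed)
        t = np.arange(0, 2.0, 1 / fs)
        x = (1 + 0.4 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * 1000 * t)  # AM signal
        print(csd_magnitude_sum(x, fs, alpha=fm) / csd_magnitude_sum(x, fs, alpha=0.0))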

  • Electrostatic Monitoring of Gas Path Debris for Aero-engines

    Page(s): 33 - 40

    We present an advanced condition monitoring technology based on electrostatic induction for detecting debris in aero-engine exhaust gas. We also discuss the key technologies related to electrostatic monitoring systems, such as sensing technology, signal processing, feature extraction, and abnormal particle identification. The finite element method and a data fitting method are applied to analyze the sensing characteristics of the sensor. We apply empirical mode decomposition and independent component analysis to effectively remove the noise mixed in with the monitoring signal. Certain diagnostic features extracted from the de-noised signal are presented here. A knowledge-acquisition model based on rough sets theory and artificial neural networks is constructed to identify the abnormal particles. The experimental results show the effectiveness of the proposed methods, and provide some guidelines for future research in this field for the aviation industry.
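
    An illustrative de-noising chain in the spirit of this abstract, assuming the third-party PyEMD package (pip install EMD-signal) and scikit-learn's FastICA; the synthetic pulse, noise mix, and correlation-based source selection are all assumptions of this sketch, not the authors' implementation.

        import numpy as np
        from PyEMD import EMD                      # third-party package (assumption)
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(2)
        t = np.linspace(0, 1, 2000)
        pulse = np.exp(-((t - 0.5) ** 2) / 1e-4)   # idealized charge pulse from a particle
        signal = pulse + 0.3 * np.sin(2 * np.pi * 50 * t) + 0.2 * rng.normal(size=t.size)

        imfs = EMD().emd(signal)                   # intrinsic mode functions (rows)
        ica = FastICA(n_components=min(4, len(imfs)), random_state=0)
        sources = ica.fit_transform(imfs.T)        # unmix IMFs into independent sources
        # Demo-only selection: keep the source most correlated with the known pulse.
        best = max(range(sources.shape[1]),
                   key=lambda k: abs(np.corrcoef(sources[:, k], pulse)[0, 1]))
        print("selected de-noised source index:", best)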

  • Machine Condition Classification Using Deterioration Feature Extraction and Anomaly Determination

    Page(s): 41 - 48

    Condition classification has been widely used for assessing equipment status in machine condition monitoring and diagnostics. An engine was fitted with one temperature and two pressure sensors to study the machine conditions in prognostics with an added abnormal state, in addition to the conventional normal and failure states. This work enables a better classification capability for predicting deterioration in the engine. Information related to three deterioration processes was collected, and preprocessed using singular point elimination, deviation value acquisition, and data normalization. Wavelet transforms were used to extract deterioration features with different mother wavelets, and tests were used to select the optimal mother wavelet. The deterioration was related to the amount of anomaly, with the abnormal states defined to distinguish the functional from the failure states. A Learning Vector Quantization (LVQ) neural network was used to classify the machine conditions, including the normal, abnormal, and failure states. The results showed that the deterioration features defined using the Daubechies wavelet (db8) correlated most strongly with the original signal, so that the classification accuracy based on the deterioration features was greatly improved. The LVQ classification system had good accuracy for machine condition classification, and was adaptable to various engine conditions.
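
    A compact sketch of that pipeline under stated assumptions: PyWavelets (pywt) computes db8 band energies as deterioration features, and a hand-rolled LVQ1 update stands in for the paper's network. The two synthetic signal classes are invented for the demo.

        import numpy as np
        import pywt

        def band_energies(sig, wavelet="db8", level=4):
            return np.array([np.sum(c ** 2) for c in pywt.wavedec(sig, wavelet, level=level)])

        def lvq1_train(X, y, n_epochs=50, lr=0.05):
            classes = np.unique(y)
            protos = np.array([X[y == c].mean(axis=0) for c in classes])  # init at class means
            for _ in range(n_epochs):
                for xi, yi in zip(X, y):
                    k = int(np.argmin(np.linalg.norm(protos - xi, axis=1)))
                    sign = 1.0 if classes[k] == yi else -1.0              # attract or repel
                    protos[k] += lr * sign * (xi - protos[k])
            return protos, classes

        rng = np.random.default_rng(3)
        base = np.sin(np.linspace(0, 40, 512))
        normal = [base + 0.1 * rng.normal(size=512) for _ in range(20)]
        failing = [base + 0.8 * rng.normal(size=512) for _ in range(20)]
        X = np.array([band_energies(s) for s in normal + failing])
        y = np.array([0] * 20 + [1] * 20)
        protos, classes = lvq1_train(X, y)
        test = band_energies(base + 0.7 * rng.normal(size=512))
        print("predicted state:", classes[np.argmin(np.linalg.norm(protos - test, axis=1))])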

  • Recent Research and Developments in Temporal and Spatiotemporal Surveillance for Public Health

    Page(s): 49 - 58

    The objective of public health surveillance is to systematically collect, analyze, and interpret public health data (about chronic or infectious diseases) to understand trends; detect changes in disease incidence and death rates; and plan, implement, and evaluate public health practices. Recently, studies have been conducted to develop methods and algorithms for health surveillance and disease detection. This paper attempts to review recent research on temporal and spatiotemporal surveillance methods. We have addressed specific challenges and research gaps in the relevant research. Lastly, we discuss a comparative example using a dataset of male thyroid cancer cases in New Mexico.
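
    One workhorse of the temporal-surveillance literature surveyed here is the CUSUM chart; a minimal one-sided version on weekly case counts follows, with the counts, in-control mean, and thresholds all invented for illustration.

        def cusum_alarms(counts, target, k=0.5, h=5.0):
            s, alarms = 0.0, []
            for week, c in enumerate(counts):
                s = max(0.0, s + (c - target) - k)   # accumulate excess over target + slack
                if s > h:
                    alarms.append(week)
                    s = 0.0                          # restart after signaling
            return alarms

        weekly = [9, 11, 10, 8, 12, 10, 13, 15, 17, 19, 22, 25]
        print("alarms at weeks:", cusum_alarms(weekly, target=10))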

  • Special Section on Reliability and Risk Assessment of Complex Systems

    Page(s): 59 - 60
  • Network Reliability of a Time-Based Multistate Network Under Spare Routing With p Minimal Paths

    Page(s): 61 - 69

    This paper constructs a time-based multistate network composed of multistate edges to study network reliability. Each edge involves three attributes: variable capacity, lead time, and cost. The transmission time from the source to the sink is thus not fixed. Two problems are discussed in this paper. First, we evaluate the probability that a given amount of data can be sent through p minimal paths simultaneously, subject to both a time threshold and a budget constraint. We term this probability the network reliability; it can be treated as a performance index measuring the transmission ability of a complex multistate system. Calculation procedures are then proposed to solve this problem. To enhance network reliability, the network administrator specifies spare routing in advance, designating the first- and second-priority minimal paths; the second path takes over the transmission duty if the first fails. The second problem is to evaluate network reliability under this spare routing.
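
    A Monte Carlo sketch of the first problem's quantity (the paper computes it analytically via minimal paths): the probability that d units of data are delivered through one minimal path within time threshold T and budget B, given discrete random edge capacities. Every number is assumed.

        import random

        # One minimal path of 3 edges: (capacity distribution, lead time, unit cost).
        edges = [({1: 0.1, 2: 0.3, 3: 0.6}, 1.0, 2.0),
                 ({1: 0.2, 2: 0.8},         2.0, 1.0),
                 ({2: 0.3, 3: 0.7},         1.0, 3.0)]

        def reliability(d=6, T=8.0, B=40.0, n=100_000):
            ok = 0
            for _ in range(n):
                caps = [random.choices(list(c), weights=list(c.values()))[0]
                        for c, _, _ in edges]
                cap = min(caps)                                  # bottleneck capacity
                time = sum(lt for _, lt, _ in edges) + d / cap   # lead time + transmission
                cost = d * sum(uc for _, _, uc in edges)
                ok += (time <= T) and (cost <= B)
            return ok / n

        print(reliability())   # analytically 0.9 * 0.8 = 0.72 for these numbers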

  • Improved Efficiency in the Analysis of Phased Mission Systems With Multiple Failure Mode Components

    Page(s): 70 - 79

    Systems often operate in phased missions, where their reliability structure varies over a set of consecutive time periods, known as phases. The reliability of a phased mission is defined as the probability that all phases in the mission are completed without failure. While the Binary Decision Diagram (BDD) method has been shown to be the most efficient solution for measuring the reliability of phased missions with non-repairable components having mutually exclusive failure modes, the existing BDD-based methods are still unable to analyze large systems without considerable computational expense. This paper introduces a new BDD-based method that is shown to provide improved efficiency and accuracy in the repeated analysis of this type of phased mission.
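
    For intuition only (far simpler than the BDD machinery): with non-repairable components arranged in series in every phase and exponential lifetimes, mission reliability reduces to each component surviving to the end of the last phase that requires it. Rates and phase times are invented.

        import math

        phase_ends = [10.0, 25.0, 40.0]      # cumulative phase end times (hours)
        components = {                       # name: (failure rate /hour, phases requiring it)
            "pump":   (1e-3, [0, 1, 2]),
            "valve":  (5e-4, [1]),
            "sensor": (2e-3, [2]),
        }

        R = 1.0
        for rate, phases in components.values():
            R *= math.exp(-rate * phase_ends[max(phases)])   # exponential survival
        print(f"mission reliability = {R:.4f}")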

  • Multi-State Reliability Systems Under Discrete Time Semi-Markovian Hypothesis

    Page(s): 80 - 87

    We consider repairable multi-state reliability systems with n components whose lifetimes and repair times are s-independent. Each component can be either in the complete failure state 0, in the perfect state, or in one of the intermediate degradation states. The sojourn time in any of these states is a random variable following a discrete distribution. Thus, the time behavior of each component is described by a discrete-time semi-Markov chain, and the time behavior of the whole system is described by the vector of paired processes of the semi-Markov chain and the corresponding backward recurrence time process. Using recently obtained results concerning discrete-time semi-Markov chains, we derive basic reliability measures. Finally, we present some numerical results of our proposed approach for specific reliability systems, namely series, parallel, k-out-of-n:F, and consecutive-k-out-of-n:F systems.
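
    A toy simulation of one such component, assuming three states and geometric sojourn times; the semi-Markov framework allows any discrete sojourn distribution, so the geometric choice and all probabilities here are assumptions that keep the sketch short.

        import random

        next_state = {2: 1, 1: 0, 0: 2}            # perfect -> degraded -> failed -> repaired
        leave_prob = {2: 0.02, 1: 0.05, 0: 0.20}   # geometric sojourn: P(leave state per step)

        def availability(horizon=200_000):
            state, up = 2, 0
            for _ in range(horizon):
                up += state > 0                    # any non-failed state counts as working
                if random.random() < leave_prob[state]:
                    state = next_state[state]
            return up / horizon

        print(availability())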

  • Addressing the Most Reliable Edge-Disjoint Paths With a Delay Constraint

    Page(s): 88 - 93

    Recently, multipath solutions have been proposed to improve the quality-of-service of the source-to-destination (s,t)-path in communication networks (CNs). This paper describes the λDP/DR problem of obtaining λ ≥ 1 edge-disjoint (s,t)-paths (λDP) whose reliability is maximized while keeping their delay no longer than a delay constraint D ≥ 1. This problem is NP-hard, and thus this paper proposes an approximate solution using Lagrange relaxation. Our algorithm generates λDP with δ(λDP) ≤ D, and reliability bounded by |log(Rmin)| ≤ |log(ρ(λDP))| ≤ (1 + k)·|log(Rmin)|, where Rmin is the minimum reliability of any (s,t)-path in the CN, and k ≥ 1. Our simulations on forty random CNs and large grid networks show that our solution produces λDP with delay and reliability comparable to those obtained by the optimal, but exponential-time, algorithm.
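
    A sketch of the Lagrange-relaxation idea for a single delay-constrained most-reliable path (the paper finds λ edge-disjoint paths; one path keeps the sketch short). Each edge carries (reliability, delay); the relaxed weight is -log(r) + mu·delay, and sweeping the multiplier mu trades reliability against delay. The toy graph is invented.

        import heapq, math

        graph = {"s": [("a", 0.99, 5), ("b", 0.90, 1)],   # u: [(v, reliability, delay)]
                 "a": [("t", 0.99, 5)],
                 "b": [("t", 0.95, 1)],
                 "t": []}

        def best_path(mu):
            dist, prev, pq = {"s": 0.0}, {}, [(0.0, "s")]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, math.inf):
                    continue
                for v, r, dl in graph[u]:
                    nd = d - math.log(r) + mu * dl        # relaxed edge weight
                    if nd < dist.get(v, math.inf):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(pq, (nd, v))
            path, u = ["t"], "t"
            while u != "s":
                u = prev[u]
                path.append(u)
            return path[::-1]

        print(best_path(0.0))   # most reliable, ignoring delay: s-a-t
        print(best_path(0.5))   # delay now penalized: switches to s-b-t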

  • Modeling Interdependent Network Systems for Identifying Cascade-Safe Operating Margins

    Page(s): 94 - 101

    Infrastructure interdependency stems from the functional and logical relations among individual components in different distributed systems. To characterize the extent to which a contingency affecting an infrastructure is going to weaken, and possibly disrupt, the safe operation of an interconnected system, it is necessary to model the relations established through the connections linking the multiple components of the involved infrastructures. In this work, the modeling of interdependencies among network systems and of their effects on failure propagation is carried out within the simulation framework of a failure cascade process. The sensitivity of the critical loading value (the lower bound of the cascading failure region) and of the average cascade size with respect to the coupling parameters defining the interdependency strength is investigated as a means to arrive at the definition and prescription of cascade-safe operating margins.
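
    A toy load-redistribution cascade (not the paper's interdependent-network model): sweeping the initial loading shows the abrupt growth of cascade size near a critical loading value. Every number below is invented.

        def cascade_size(n, load, capacity=1.0, shock=0.15):
            loads = [load] * n
            loads[0] += shock                              # initial disturbance
            failed, changed = set(), True
            while changed:
                changed = False
                for i in range(n):
                    if i not in failed and loads[i] > capacity:
                        failed.add(i)
                        changed = True
                        survivors = [j for j in range(n) if j not in failed]
                        for j in survivors:                # shed load onto survivors
                            loads[j] += loads[i] / max(len(survivors), 1)
            return len(failed)

        for load in (0.80, 0.90, 0.96, 0.98):
            print(load, cascade_size(50, load))            # sharp transition near criticality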

  • Criticality Assessment Models for Failure Mode Effects and Criticality Analysis Using Fuzzy Logic

    Page(s): 102 - 110

    Traditional Failure Mode and Effects Analysis (FMEA) has shown its effectiveness in defining, identifying, and eliminating known and/or potential failures or problems in products, processes, designs, and services to help ensure the safety and reliability of systems applied in a wide range of industries. However, its approach of prioritizing failure modes through a crisp risk priority number (RPN) has been highly controversial. This paper proposes two models for prioritizing failure modes, specifically intended to overcome such limitations of traditional FMEA. The first proposed model treats the three risk factors as fuzzy linguistic variables, and employs alpha level sets to provide a fuzzy RPN. The second model employs an approach based on the degree of match and a fuzzy rule base. This second model considers the diversity and uncertainty in the opinions of FMEA team members, and converts the assessed information into a convex normalized fuzzy number. The degree of match (DM) is used thereafter to estimate the matching between the assessed information and the fuzzy number characterizing the linguistic terms. The proposed models are suitably supplemented by illustrative examples.
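
    A small sketch of the first model's alpha-level-set idea, with triangular fuzzy ratings invented for a single failure mode; the paper's membership functions and aggregation details are not reproduced.

        def alpha_cut(tri, a):
            """Interval of a triangular fuzzy number (lo, mode, hi) at level a in [0, 1]."""
            lo, mode, hi = tri
            return (lo + a * (mode - lo), hi - a * (hi - mode))

        def fuzzy_rpn(S, O, D, levels=(0.0, 0.5, 1.0)):
            out = {}
            for a in levels:
                (s1, s2), (o1, o2), (d1, d2) = alpha_cut(S, a), alpha_cut(O, a), alpha_cut(D, a)
                out[a] = (s1 * o1 * d1, s2 * o2 * d2)      # interval product, all positive
            return out

        # "Roughly 7" severity, "roughly 4" occurrence, "roughly 3" detection (assumed ratings).
        for a, (lo, hi) in fuzzy_rpn((6, 7, 8), (3, 4, 5), (2, 3, 4)).items():
            print(f"alpha={a}: RPN in [{lo:.0f}, {hi:.0f}]")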

  • Analysis and Optimization of Repairable Flow Networks With Complex Topology

    Page(s): 111 - 124

    We propose a framework for analysis and optimization of repairable flow networks by (i) stating and proving the maximum flow minimum flow path resistance theorem for networks with merging flows; (ii) a discrete-event solver for determining the variation of the output flow from repairable flow networks with complex topology; (iii) a procedure for determining the threshold flow rate reliability for repairable networks with complex topology; (iv) a method for topology optimization of repairable flow networks; and (v) an efficient algorithm for maximizing the flow in non-reconfigurable flow networks with merging flows. Maximizing the flow in a static flow network does not necessarily guarantee that the flow in the corresponding non-reconfigurable repairable network will be maximized. In this respect, we introduce a new concept related to repairable flow networks, the 'specific resistance of a flow path,' which is essentially the average percentage of losses from component failures for a flow path from the source to the sink. A very efficient algorithm based on adjacency arrays is also proposed for determining all minimal flow paths in a network with complex topology and cycles. We formulate and prove a fundamental theorem about non-reconfigurable repairable flow networks with merging flows: the flow in such a network can be maximized by preferentially saturating directed flow paths from the sources to the sink characterized by the largest average availability. The procedure starts with the flow path with the largest average availability (the smallest specific resistance), and continues by saturating the unsaturated directed flow path with the largest average availability until no more flow paths can be saturated. A discrete-event solver for reconfigurable repairable flow networks with complex topology has also been constructed. The proposed discrete-event solver maximizes the flow rate in the network upon each component failure and return from repair, thereby ensuring a larger total output flow during a specified time interval. The designed simulation procedure for determining the threshold flow rate reliability is particularly useful for comparing flow network topologies, and for selecting the topology characterized by the largest threshold flow rate reliability. It is also very useful in deciding whether the resources allocated for purchasing extra redundancy are justified. Finally, we propose a new optimization method for determining the network topology that attains a maximum output flow rate within a specified budget for building the network. The optimization method is based on a branch and bound algorithm combined with pruning of the full-complexity network as a way of exploring the possible repairable networks embedded in it.
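
    A minimal sketch of the saturation rule stated above for merging flows: saturate directed s-t flow paths in decreasing order of average availability until edge capacities are exhausted. The toy paths, capacities, and availabilities are assumed.

        paths = [  # (edges used, path flow bound, average availability)
            (["e1", "e3"], 40, 0.97),
            (["e2", "e3"], 60, 0.93),
            (["e2", "e4"], 50, 0.88),
        ]
        capacity = {"e1": 40, "e2": 70, "e3": 60, "e4": 50}

        flow = {}
        for edges, bound, avail in sorted(paths, key=lambda p: -p[2]):
            room = min(bound, *(capacity[e] for e in edges))   # residual bottleneck
            for e in edges:
                capacity[e] -= room
            flow[tuple(edges)] = room

        print(flow)   # the most available path is saturated first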

  • Modeling U.S. Mortality and Risk-Cost Optimization on Life Expectancy

    Page(s): 125 - 133

    Human life expectancy has risen in most developed countries over several decades, causing observed demographic shifts. Many researchers have developed models to determine the expectancy of life at birth. Yet due to the complexity of real-world mortality data, and recent global economic impacts, there is a great demand for new models to accurately predict life expectancy, especially in the United States. In this paper, we focus on the analysis of the mortality rate in the United States over a period of six decades (data from 1946 to 2005), considering the six most common distribution functions in the mortality area: the Gompertz, Gompertz-Makeham, logistic, log-logistic, loglog, and Weibull functions. Given the complex mortality data, we develop models, including algorithms, to compute mortality measures such as the mortality rate, and then select the best function for predicting life expectancy in the United States. The modeling results include life expectancy at birth, life expectancy at each age, mortality rate, and forecasting models. We also discuss a new risk-cost function with respect to life expectancy, and determine the optimal threshold lifetime level that maximizes the expected risk cost. Numerical examples are given to illustrate the expected risk-cost results. The proposed risk-cost optimization model and the optimal lifetime threshold value can help policy and decision makers of related healthcare organizations, including insurance companies, to carefully perform the tradeoff between costs and benefits. This approach can affect both short-term and long-term healthcare policies and plans. The results show that the logistic distribution uniquely outperforms the other distributions based on the mean square error criterion. We find that on average the life expectancy at birth in the United States in 2005 for both sexes, males, and females is 84.0, 81.9, and 84.6 years, respectively. This new result shows that the true life expectancy on average in the U.S. is significantly larger than in existing reports. Life expectancy obviously changes as one gets older: by late adulthood, a person's chances of living longer increase, as reflected by the life expectancy function. For example, although the life expectancy from birth in the United States in 2005 is 84.0 years, those who live to age 65 will have an average of 20.4 additional years left to live, making their life expectancy almost 85.4 years. Similarly, those who live to age 75 will gain an additional few years, making their life expectancy almost 87.6 years. On average, a child born in the US in 2015 can expect to live 85.5 years, and a child born in 2025 can expect to live nearly 87 years.
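
    A worked mini-example of one computation performed at much larger scale in the paper: the life expectancy implied by a parametric hazard. A Gompertz hazard h(x) = a*exp(b*x) is assumed here with invented parameters, not the paper's fitted values; remaining life expectancy at age x0 is the integral of S(x)/S(x0) over x > x0.

        import math

        a, b = 5e-5, 0.09                          # assumed Gompertz parameters

        def S(x):                                  # survival function
            return math.exp(-(a / b) * (math.exp(b * x) - 1.0))

        def remaining_life(x0, dx=0.01, x_max=120.0):
            x, acc = x0, 0.0
            while x < x_max:
                acc += S(x + dx / 2) * dx          # midpoint rule
                x += dx
            return acc / S(x0)

        print("life expectancy at birth:", round(remaining_life(0.0), 1))
        print("expected age at death given age 65:", round(65 + remaining_life(65.0), 1))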

  • Exploring Complex Systems Aspects of Blackout Risk and Mitigation

    Page(s): 134 - 143

    Electric power transmission systems are a key infrastructure, and blackouts of these systems have major consequences for the economy and national security. Analyses of blackout data suggest that blackout size distributions have a power law form over much of their range. This result is an indication that blackouts behave as a complex dynamical system. We use a simulation of an upgrading power transmission system to investigate how these complex system dynamics impact the assessment and mitigation of blackout risk. The mitigation of failures in complex systems needs to be approached with care. The mitigation efforts can move the system to a new dynamic equilibrium while remaining near criticality and preserving the power law region. Thus, while the absolute frequency of blackouts of all sizes may be reduced, the underlying forces can still cause the relative frequency of large blackouts to small blackouts to remain the same. Moreover, in some cases, efforts to mitigate small blackouts can even increase the frequency of large blackouts. This result occurs because the large and small blackouts are not mutually independent, but are strongly coupled by the complex dynamics.
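
    A small sketch of the kind of tail statistics behind the power-law claim: a Hill estimate of the tail exponent of the blackout-size survival function, run here on synthetic Pareto data standing in for real blackout records.

        import math, random

        random.seed(4)
        alpha_true = 1.5
        sizes = [(1.0 - random.random()) ** (-1.0 / alpha_true) for _ in range(5000)]  # Pareto draws

        def hill_exponent(data, tail_fraction=0.1):
            tail = sorted(data)[-int(len(data) * tail_fraction):]
            x_min = tail[0]
            return len(tail) / sum(math.log(x / x_min) for x in tail)

        print(hill_exponent(sizes))   # should be close to alpha_true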

  • Cooperative Predictive Maintenance of Repairable Systems With Dependent Failure Modes and Resource Constraint

    Page(s): 144 - 157

    Many works on condition-based maintenance of repairable systems apply to either a single failure mode, or statistically independent failure modes. Different from these works, this paper considers the problem of predictive maintenance of repairable systems with dependent failure modes and resource constraints. Assume that (i) a repairable system is subject to two statistically dependent failure modes bidirectionally affecting each other, (ii) imperfect maintenance actions are cooperatively performed on the two dependent failure modes by allocating insufficient resources spent for maintenance, and (iii) future maintenance scheduled at the current time depends on both the predicted number of future failures and the minimization of the expected maintenance cost rate defined in the long term. To resolve the above problem, a novel cooperative predictive maintenance model is proposed. Its basis is the incorporation of the hazard-rate function and the effective age. In this model, two failure modes are statistically dependent in such a way that the hazard rate of one failure mode depends on the accumulated number of failures of the other failure mode. The effect of imperfect maintenance is interpreted in terms of how the hazard rate function and the effective age are changed by maintenance actions. The age reduction factor for each failure mode due to maintenance has a deterministic relation to the degree of resources cooperatively allocated to perform maintenance. The decision variables in the maintenance policy, namely the number of maintenance actions to be performed, the interval between successive maintenance actions, and the cooperatively allocated degree of resources, can be recursively updated when new monitored information arrives. This approach relies on both the predicted number of future failures, and the minimization of the expected maintenance cost rate defined in the long term.
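
    A sketch of two ingredients named above, under stated assumptions: a Weibull hazard evaluated at a virtual (effective) age, where each maintenance action rescales the accumulated age by an age-reduction factor (a Kijima-type model); all parameter values are invented.

        def weibull_hazard(t, beta=2.5, eta=1000.0):
            return (beta / eta) * (t / eta) ** (beta - 1.0)

        def virtual_age(maintenance_times, age_reduction):
            """Virtual age just after the last maintenance action."""
            v, prev = 0.0, 0.0
            for tm in maintenance_times:
                v = (v + (tm - prev)) * (1.0 - age_reduction)   # imperfect rejuvenation
                prev = tm
            return v

        v = virtual_age([300.0, 600.0, 900.0], age_reduction=0.4)
        print("virtual age:", round(v, 1), " hazard now:", weibull_hazard(v))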

  • An Adaptive Reliability Analysis Using Path Testing for Complex Component-Based Software Systems

    Page(s): 158 - 170

    With the growing size and complexity of software applications, traditional software reliability methods are insufficient to analyze the inter-component interactions of modular software systems. The number of test cases may be extremely large for such applications; therefore, it is hard to test each software component extensively given resource limitations. In this paper, we propose an adaptive framework for incorporating path testing into reliability estimation for modular software systems. Three estimation methods based on common program structures, namely sequence, branch, and loop structures, are proposed to calculate the path reliability. Consequently, the derived path reliabilities can be applied to the estimation of software reliability. Some experiments are performed based on two real systems. In addition, the accuracy and correlation with respect to the experiments are investigated by simulation and sensitivity analysis. Experimental results show that the path reliability has a high correlation to the actual software reliability. For software with loop structures, a smaller loop number can be assigned to derive an acceptable estimation of path reliability. Further, the sensitivity analysis can be used to identify critical modules and paths for resource allocation. It can be concluded that the proposed methods are useful and helpful for estimating software reliability, and can be adaptively used in the early stages of software development.
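
    The three composition rules named above lend themselves to a tiny sketch: sequence multiplies reliabilities, a branch averages over transition probabilities, and a loop executed n times raises the body reliability to the n-th power. Component reliabilities below are invented, and the paper's full estimation details are not reproduced.

        def sequence(*rs):
            p = 1.0
            for r in rs:
                p *= r
            return p

        def branch(alternatives):      # [(probability of taking branch, branch reliability)]
            return sum(p * r for p, r in alternatives)

        def loop(body_r, n):           # loop body executed n times
            return body_r ** n

        r_path = sequence(0.999, branch([(0.7, 0.995), (0.3, 0.99)]), loop(0.998, 3))
        print(f"estimated path reliability: {r_path:.4f}")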

  • Codesign-Oriented Performability Modeling for Hardware-Software Systems

    Page(s): 171 - 179

    We discuss a performability evaluation model for computer-based systems, introducing the concept of codesign. We assume that the computer system consists of one hardware subsystem and one software subsystem, and consider both hardware and software failure & restoration characteristics. In particular, the reliability growth process, the upward tendency of difficulty in debugging, and the imperfect debugging environment are described for the software subsystem. Assuming that the system can process multiple tasks simultaneously, and that the arrival process of the tasks follows a nonhomogeneous Poisson process (NHPP), we use infinite server queueing theory to analyze the distribution of the number of tasks whose processing can be completed within the processing time limit. We derive several performability measures considering the real-time property, which are given as functions of time and of the number of debugging activities. Finally, we illustrate several numerical examples of the measures to investigate the impact of hardware and software failure & restoration characteristics on the system performability evaluation.
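
    A Monte Carlo sketch of the queueing ingredient (the paper treats it analytically): tasks arrive by a nonhomogeneous Poisson process, each is served at once on its own server, and we count tasks completing within the real-time limit. The rate function, service distribution, and limit are assumptions of this sketch.

        import math, random

        def nhpp_arrivals(rate, rate_max, horizon):
            """Thinning: candidates at rate_max, each kept with probability rate(t)/rate_max."""
            t, out = 0.0, []
            while True:
                t += random.expovariate(rate_max)
                if t > horizon:
                    return out
                if random.random() < rate(t) / rate_max:
                    out.append(t)

        rate = lambda t: 2.0 + math.sin(t)       # time-varying task arrival rate
        runs, completed = 2000, 0
        for _ in range(runs):
            for _ in nhpp_arrivals(rate, 3.0, horizon=10.0):
                completed += random.expovariate(0.8) <= 1.5   # done within the limit?
        print("mean tasks completed within the limit:", completed / runs)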

  • Dynamic Hybrid Fault Modeling and Extended Evolutionary Game Theory for Reliability, Survivability and Fault Tolerance Analyses

    Page(s): 180 - 196

    We introduce a new layered modeling architecture consisting of dynamic hybrid fault modeling and extended evolutionary game theory for reliability, survivability, and fault tolerance analyses. The architecture extends traditional hybrid fault models and their relevant constraints in the Agreement algorithms with survival analysis and evolutionary game theory. The dynamic hybrid fault modeling (i) transforms hybrid fault models into time- and covariate-dependent models; (ii) makes real-time prediction of reliability more realistic, and allows for real-time prediction of fault tolerance; (iii) sets the foundation for integrating hybrid fault models with reliability and survivability analyses by integrating them with evolutionary game modeling; and (iv) extends evolutionary game theory by stochastically modeling the survival (or fitness) and behavior of 'game players.' To analyse survivability, we extend dynamic hybrid fault modeling with a third layer, operational-level modeling, to develop the three-layer survivability analysis approach (dynamic hybrid fault modeling constitutes the tactical and strategic levels). From the perspective of evolutionary game modeling, the two mathematical fields we applied in developing dynamic hybrid fault modeling, i.e., survival analysis and agreement algorithms, can also be utilized to extend the power of evolutionary game theory in modeling complex engineering, biological (ecological), and social systems. Indeed, a common property of the areas where our extensions to evolutionary game theory can be advantageous is that risk analysis and management are a core issue. Survival analysis (including competing risks analysis, and multivariate survival analysis) offers powerful modeling tools to analyse the time-, space-, and/or covariate-dependent uncertainty, vulnerability, and/or frailty which 'game players' may experience. The agreement algorithms, which are not limited to the agreement algorithms from distributed computing, when applied to extend evolutionary game modeling, can be any problem (game system) specific rules (algorithms or models) that can be utilized to dynamically check the consensus among game players. We expect that the modeling architecture and approaches discussed in this study should be implemented as a software environment to deal with the necessary sophistication. Evolutionary computing should be particularly convenient to serve as the core optimization engine, and should simplify the implementation. Accordingly, a brief discussion of the software architecture is presented.

  • Cost Based Risk Analysis to Identify Inspection and Restoration Intervals of Hidden Failures Subject to Aging

    Page(s): 197 - 209

    This paper develops a cost rate function (CRF) to identify the optimum interval and frequency of inspection and restoration of an aircraft's repairable components that are subject to aging, and whose failures are hidden, i.e., detectable only by inspection or upon demand. The paper considers two prevalent strategies, namely Failure Finding Inspection (FFI), and a combination of FFI with restoration actions (FFI+Res), for both the “non-safety effect” and the “safety effect” categories of hidden failures. As-bad-as-old (ABAO) inspection effectiveness and as-good-as-new (AGAN) restoration effectiveness are considered. In the case of repair due to findings from inspection, as-bad-as-old repair effectiveness is considered. The proposed method considers inspection and repair times, and takes into account the costs associated with inspection, repair, and restoration; the potential losses due to the inability to use the aircraft (maintenance downtime); and the cost associated with accidents caused by the occurrence of a multiple failure. The approach used in this study for risk-constrained optimization is based on the mean fraction of time during which the unit is not functioning within inspection intervals (MFDT), and the average interval unavailability behavior within the restoration period. For the case of an operational limit, when it is not possible to remove the unit for restoration, or one needs to use the unit longer than the expected operating time, the paper introduces an approach to analyzing the possibility of and conditions for extending the restoration interval so as to satisfy the risk constraints and the business requirements at the same time.
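
    A numeric sketch of the inspection-interval trade-off described above: with a hidden constant failure rate, a first-order MFDT of about lambda*tau/2 balances inspection cost against risk/downtime cost. All cost figures and rates are invented, and aging, restoration, and repair times are ignored here.

        def cost_rate(tau, lam=1e-4, c_insp=200.0, c_risk_per_hour=50.0):
            mfdt = lam * tau / 2.0                 # mean fractional dead time (approx.)
            return c_insp / tau + c_risk_per_hour * mfdt

        taus = range(100, 5001, 100)
        best = min(taus, key=cost_rate)
        print("best FFI interval:", best, "hours; cost rate:", round(cost_rate(best), 3))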

  • A Novel Risk Assessment for Complex Structural Systems

    Page(s): 210 - 218

    Risk management is an essential tool for the safe, economical, and efficient design, operation, and maintenance of complex engineering systems. Seismic risk assessment of structures, particularly in nuclear power plants, needs special attention from the reliability community. Available risk assessment methods may not be sufficient to estimate the risk of complex systems built of different materials, with numerous ways of connecting elements, and excited by dynamic loadings, including seismic loading applied in the time domain. A hybrid risk assessment approach is proposed that intelligently integrates the stochastic finite element method and the response surface method. It is capable of estimating the probability of failure considering all major sources of nonlinearity and uncertainty, eliminating the deficiencies of the currently available reliability methods. With the help of illustrative examples, it is shown that the method is robust, accurate, and efficient in estimating the risk of complex systems excited by dynamic loadings applied in the time domain.
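
    A sketch of the response-surface ingredient under stated assumptions: fit a quadratic surrogate to a few evaluations of an expensive limit-state function g (standing in for a stochastic finite element model), then estimate P(g < 0) cheaply by Monte Carlo on the surrogate.

        import numpy as np

        def g(x):                                   # stand-in limit-state function
            return 3.0 - x[0] ** 2 - 0.5 * x[1]

        rng = np.random.default_rng(5)
        X = rng.normal(size=(30, 2))                # design points in standard normal space
        y = np.array([g(x) for x in X])
        A = np.column_stack([np.ones(len(X)), X, X ** 2])   # basis: 1, x1, x2, x1^2, x2^2
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        U = rng.normal(size=(200_000, 2))           # cheap Monte Carlo on the surrogate
        g_hat = np.column_stack([np.ones(len(U)), U, U ** 2]) @ coef
        print("estimated failure probability:", float((g_hat < 0).mean()))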


Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong