
IEEE Transactions on Reliability

Early Access Articles

Early Access articles are new content made available in advance of the final electronic or print versions and result from IEEE's Preprint or Rapid Post processes. Preprint articles are peer-reviewed but not fully edited. Rapid Post articles are peer-reviewed and edited but not paginated. Both these types of Early Access articles are fully citable from the moment they appear in IEEE Xplore.

Displaying Results 1 - 25 of 64
  • Estimation of Reliability in a Multicomponent Stress-Strength Model Based on a Marshall-Olkin Bivariate Weibull Distribution

    Publication Year: 2015, Page(s): 1 - 11

    In this paper, we consider a system which has $k$ s-independent and identically distributed strength components, and each component is constructed by a pair of s-dependent elements. These elements $(X_{1},Y_{1}),(X_{2},Y_{2}),\ldots,(X_{k},Y_{k})$ follow a Marshall-Olkin bivariate Weibull distribution, and each element is exposed to a common random stress $T$ which follows a Weibull distribution. The system is regarded as operating only if at least $s$ out of $k$ $(1\leq s\leq k)$ strength variables exceed the random stress. The multicomponent reliability of the system is given by $R_{s,k}=P$(at least $s$ of the $(Z_{1},\ldots,Z_{k})$ exceed $T$), where $Z_{i}=\min(X_{i},Y_{i})$, $i=1,\ldots,k$. We estimate $R_{s,k}$ by using frequentist and Bayesian approaches. The Bayes estimates of $R_{s,k}$ have been developed by using Lindley's approximation and Markov Chain Monte Carlo methods, due to the lack of explicit forms. The asymptotic confidence interval and the highest probability density credible interval are constructed for $R_{s,k}$. The reliability estimators are compared by using the estimated risks through Monte Carlo simulations.
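    As a sanity check on what $R_{s,k}$ measures, it can be approximated by direct simulation. The sketch below assumes the standard Marshall-Olkin construction $X_i=\min(U_{1i},U_{0i})$, $Y_i=\min(U_{2i},U_{0i})$ from independent Weibull variables sharing a common shape, so that $Z_i=\min(U_{0i},U_{1i},U_{2i})$; all parameter values are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def weibull(lam, alpha, size):
        # Inverse-CDF sampling for survival S(t) = exp(-lam * t**alpha)
        return (rng.exponential(1.0, size) / lam) ** (1.0 / alpha)

    def mc_R_sk(s, k, alpha, lam0, lam1, lam2, lam_T, n_rep=200_000):
        # Marshall-Olkin construction: X_i = min(U1_i, U0_i), Y_i = min(U2_i, U0_i),
        # hence Z_i = min(X_i, Y_i) = min(U0_i, U1_i, U2_i)
        Z = np.minimum(weibull(lam0, alpha, (n_rep, k)),
                       np.minimum(weibull(lam1, alpha, (n_rep, k)),
                                  weibull(lam2, alpha, (n_rep, k))))
        T = weibull(lam_T, alpha, (n_rep, 1))       # common random stress
        return np.mean((Z > T).sum(axis=1) >= s)    # P(at least s of k exceed T)

    print(mc_R_sk(s=2, k=4, alpha=1.5, lam0=0.5, lam1=1.0, lam2=1.0, lam_T=0.8))
    ```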

  • New Developments on Stochastic Properties of Coherent Systems

    Publication Year: 2015, Page(s): 1 - 12

    We consider a coherent system consisting of $n$ components where the component lifetimes are assumed to be random variables following a probabilistically exchangeable joint distribution function; that is, the probability distribution does not change under any permutation of the components. We study stochastic properties of the residual lifetime of the live components of the system, under different conditions. The number of failed components of the system is explored, under the assumption that the system is operating at time $t$. We also study the probability that the $i$th component failure in the system causes the system failure, given that the system is operating at time $t$. The results of the paper extend some of the existing results in the literature.

  • Ordering Heuristics for Reliability Evaluation of Multistate Networks

    Publication Year: 2015, Page(s): 1 - 9

    This paper develops ordering heuristics to improve the efficiency of reliability evaluation for multistate two-terminal networks given all minimal path vectors (d-MPs for short). In the existing methods, all d-MPs are treated equally. However, we find that the importance of each d-MP is different, and that different orderings affect the efficiency of reliability evaluation. Based on these observations, we introduce length definitions for d-MPs in a multistate two-terminal network, and develop four ordering heuristics, called O1, O2, O3, and O4, to improve the efficiency of the Recursive Sum of Disjoint Products (RSDP) method for evaluating network reliability. The results show that the proposed ordering heuristics can significantly improve the reliability evaluation efficiency, and that O1 performs the best among the four methods. In addition, an ordering heuristic is developed for the reliability evaluation of multistate two-terminal networks given all minimal cut vectors (d-MCs).
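    The ordering itself is just a preprocessing step on the list of d-MPs handed to RSDP. A toy sketch of one plausible heuristic (the paper's O1-O4 length definitions are not reproduced here):

    ```python
    # Hypothetical d-MPs of a 4-arc multistate network, as state vectors.
    d_mps = [(1, 0, 2, 1), (2, 1, 0, 0), (0, 1, 1, 1)]

    # One plausible "length" of a d-MP: the sum of its entries. Sorting the
    # list before feeding it to RSDP is the tunable step; the paper's O1-O4
    # may define length differently.
    ordered = sorted(d_mps, key=sum)
    print(ordered)
    ```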

  • Fingerprint-Based Detection and Diagnosis of Malicious Programs in Hardware

    Publication Year: 2015, Page(s): 1 - 10

    In today's Integrated Circuit industry, a foundry, an Intellectual Property provider, a design house, or a Computer Aided Design vendor may install a hardware Trojan on a chip which executes a malicious program such as one providing an information-leaking back door. In this paper, we propose a fingerprint-based method to detect any malicious program in hardware. We propose a tamper-evident architecture (TEA) which samples runtime signals in a hardware system during the performance of a computation, and generates a cryptographic hash-based fingerprint that uniquely identifies a sequence of sampled signals. A hardware Trojan cannot tamper with any sampled signal without leaving tamper evidence such as a missing or incorrect fingerprint. We further verify fingerprints off-chip so that a hardware Trojan cannot tamper with the verification process. As a case study, we detect hardware-based code injection attacks in a SPARC V8 architecture LEON2 processor. Based on a lightweight block cipher called PRESENT, a TEA requires only a 4.5% area increase, while evading detection by the TEA increases the area of a code injection hardware Trojan with a 1 KB ROM from 2.5% to 36.1% of a LEON2 processor. Such a low cost further enables more advanced tamper diagnosis techniques based on the concurrent generation of multiple fingerprints.
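    The fingerprinting idea can be illustrated in a few lines of software: chain-hashing the sampled signal words means tampering with any sample changes every later digest. SHA-256, the key, and the sampled values below are illustrative stand-ins; the paper's TEA builds the fingerprint in hardware from the PRESENT block cipher.

    ```python
    import hashlib

    def fingerprint(sampled_signals, key=b"device-secret"):
        """Chain-hash a sequence of sampled signal words into one fingerprint.

        Any tampering with a sampled value changes every subsequent digest,
        so a missing or incorrect fingerprint is tamper evidence.
        """
        state = hashlib.sha256(key).digest()
        for word in sampled_signals:
            state = hashlib.sha256(state + word.to_bytes(4, "big")).digest()
        return state.hex()

    trace = [0x1000, 0x1004, 0x1008]   # e.g. sampled program-counter values
    print(fingerprint(trace))
    ```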

  • Hierarchical and Dynamic Elliptic Curve Cryptosystem Based Self-Certified Public Key Scheme for Medical Data Protection

    Publication Year: 2015, Page(s): 1 - 8

    As our aging population significantly grows, personal health monitoring is becoming an emerging service, and can be accomplished by large-scale, low-power sensor networks, such as Zigbee networks. However, collected medical data may reveal patient privacy, and should be well protected. We propose a Hierarchical and Dynamic Elliptic Curve Cryptosystem based self-certified public key scheme (HiDE) for medical data protection. To serve a large number of sensors, HiDE provides a hierarchical cluster-based framework consisting of a Backbone Cluster and several Area Clusters. In an Area Cluster, a Secure Access Point (SAP) collects medical data from Secure Sensors (SSs) in the sensor network, and transmits the aggregated data to a Root SAP located in the Backbone Cluster. Therefore, the Root SAP can serve a considerable number of SSs without establishing separate secure sessions with each SS individually. To provide dynamic secure sessions for mobile SSs connecting to a SAP, HiDE introduces the Elliptic Curve Cryptosystem based Self-certified Public key scheme (ESP) for establishing secure sessions between each pair of Cluster Head (CH) and Cluster Member (CM). In ESP, the CH can issue a public key to a CM, and computes a Shared Session Key (SSK) with that CM without knowing the CM's secret key. This concept satisfies the zero-knowledge proof property, so CHs can dynamically build secure sessions with CMs without managing the CMs' secret keys. Our experiments with realistic implementations and network simulation demonstrate that ESP requires less computation and network overhead than the Rivest-Shamir-Adleman (RSA)-based public key scheme. In addition, security analysis shows that keys in ESP are well protected. Thus, HiDE can protect the confidentiality of sensitive medical data with low computation overhead, and keep appropriate network performance for wireless sensor networks.
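    For intuition, a plain elliptic-curve Diffie-Hellman exchange (not the paper's self-certified ESP protocol) already shows how a CH and a CM can derive a shared session key without either side transmitting a secret key. A minimal sketch using the cryptography package, with an assumed curve and key length:

    ```python
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Cluster Head (CH) and Cluster Member (CM) each hold an EC key pair.
    ch_key = ec.generate_private_key(ec.SECP256R1())
    cm_key = ec.generate_private_key(ec.SECP256R1())

    # Each side combines its own private key with the peer's public key;
    # both arrive at the same shared secret without revealing secret keys.
    shared = ch_key.exchange(ec.ECDH(), cm_key.public_key())
    assert shared == cm_key.exchange(ec.ECDH(), ch_key.public_key())

    # Derive a 128-bit Shared Session Key (SSK) from the shared secret.
    ssk = HKDF(algorithm=hashes.SHA256(), length=16,
               salt=None, info=b"HiDE-session").derive(shared)
    print(ssk.hex())
    ```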

  • A New Two-Stage Fuzzy Inference System-Based Approach to Prioritize Failures in Failure Mode and Effect Analysis

    Publication Year: 2015, Page(s): 1 - 9

    This paper presents a new Fuzzy Inference System (FIS)-based Risk Priority Number (RPN) model for the prioritization of failures in Failure Mode and Effect Analysis (FMEA). In FMEA, the monotonicity property of the RPN scores is important. To maintain the monotonicity property of an FIS-based RPN model, a complete and monotonically-ordered fuzzy rule base is necessary. However, it is impractical to gather all (potentially a large number of) fuzzy rules from FMEA users. In this paper, we introduce a new two-stage approach to reduce the number of fuzzy rules that need to be gathered, and to satisfy the monotonicity property. In stage 1, a Genetic Algorithm (GA) is used to search for a small set of fuzzy rules to be gathered from FMEA users. In stage 2, the remaining fuzzy rules are deduced approximately by a monotonicity-preserving similarity reasoning scheme. The monotonicity property is exploited as additional qualitative information for constructing the FIS-based RPN model. To assess the effectiveness of the proposed approach, a real case study with information collected from a semiconductor manufacturing plant is conducted. The outcomes indicate that the proposed approach is effective in developing an FIS-based RPN model with only a small set of fuzzy rules, which is able to satisfy the monotonicity property for the prioritization of failures in FMEA.

  • Discrete Time Shock Models in a Markovian Environment

    Publication Year: 2015, Page(s): 1 - 6

    This paper deals with two different shock models in a Markovian environment. We study a system from a reliability point of view under these two shock models. According to the first model, the system fails if the cumulative shock magnitude exceeds a critical level, while in the second model the failure occurs when the cumulative effect of the shocks in consecutive periods is above a critical level. The shock occurrences over discrete time periods are assumed to be Markovian. We obtain expressions for the failure time distributions of the system under the two models. Illustrative computational results are presented for the survival probabilities and mean time to failure values of the system.
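    Both models lend themselves to direct simulation. A minimal sketch of the first (cumulative-magnitude) model under an assumed two-state Markovian environment, with illustrative transition probabilities, shock probabilities, and magnitudes:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed setup: a two-state environment Markov chain; in each period a
    # shock occurs with a state-dependent probability and an exponential
    # magnitude. The system fails once the cumulative magnitude exceeds c.
    P = np.array([[0.9, 0.1],
                  [0.3, 0.7]])       # environment transition probabilities
    p_shock = np.array([0.2, 0.8])   # per-period shock probability, by state
    c = 5.0                          # critical cumulative level

    def failure_time():
        state, total, t = 0, 0.0, 0
        while total <= c:
            t += 1
            state = rng.choice(2, p=P[state])
            if rng.random() < p_shock[state]:
                total += rng.exponential(1.0)   # shock magnitude
        return t

    print("estimated MTTF:", np.mean([failure_time() for _ in range(5_000)]))
    ```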

  • Galvanic Corrosion of High Temperature Low Sag Aluminum Conductor Composite Core and Conventional Aluminum Conductor Steel Reinforced Overhead High Voltage Conductors

    Publication Year: 2015, Page(s): 1 - 7

    A High-Temperature Low-Sag Aluminum Conductor Composite Core (ACCC) bare overhead transmission line conductor utilizing a load-bearing unidirectional carbon and glass fiber reinforced epoxy composite rod was evaluated for potential galvanic corrosion problems. A series of corrosion tests were performed in 0.5 M NaCl aqueous solution at room temperature, and at 85 °C. The corrosion performance of the ACCC conductor was compared to a conventional Aluminum Conductor Steel Reinforced (ACSR) conductor. The bi-metallic ACSR design suffers inherently from galvanic corrosion, while the ACCC design does not develop galvanic corrosion unless its fiberglass composite galvanic corrosion barrier is compromised. Even with a severely compromised barrier in the ACCC conductor, the measured galvanic corrosion rate of the aluminum in the ACCC conductor was much lower than the galvanic corrosion rate measured in the ACSR conductor.

  • Extended Optimal Replacement Policy for a Two-Unit System With Shock Damage Interaction

    Publication Year: 2015, Page(s): 1

    In this paper, we consider a system consisting of two major units, A and B, each subject to two types of shocks that occur according to a non-homogeneous Poisson process. A type II shock causes a complete system failure which is corrected by a replacement, and a type I shock causes a unit A minor failure, which is rectified by a minimal repair. The shock type probability is age-dependent. Each unit A minor failure results in a random amount of damage to unit B, and this damage accumulates until it reaches the level of complete system failure. Moreover, unit B with a cumulative damage of level $z$ may suffer a minor failure with probability $\pi(z)$ at each unit A minor failure, and is fixed by a minimal repair. We consider a more general replacement policy where the system is replaced at age $T$, at the $N$th type I shock, at the first type II shock, or when the total damage to unit B exceeds a specified level, whichever occurs first. We determine the optimal policy $(T^{\ast}, N^{\ast})$ to minimize the s-expected cost per unit time. We present some numerical examples, and show that our model is a generalization of many previous maintenance models in the literature.

  • A Novel Dynamic-Weighted Probabilistic Support Vector Regression-Based Ensemble for Prognostics of Time Series Data

    Publication Year: 2015, Page(s): 1 - 11

    In this paper, a novel Dynamic-Weighted Probabilistic Support Vector Regression-based Ensemble (DW-PSVR-ensemble) approach is proposed for prognostics of time series data monitored on components of complex power systems. The novelty of the proposed approach consists in i) the introduction of a signal reconstruction and grouping technique suited for time series data, ii) the use of a modified Radial Basis Function (RBF) kernel for multiple time series data sets, iii) a dynamic calculation of the sub-models' weights for the ensemble, and iv) an aggregation method for uncertainty estimation. The dynamic weighting is introduced in the calculation of the sub-models' weights for each input vector, based on Fuzzy Similarity Analysis (FSA). We consider a real case study involving 20 failure scenarios of a component of the Reactor Coolant Pump (RCP) of a typical nuclear Pressurized Water Reactor (PWR). Prediction results are given with the associated uncertainty quantification, under the assumption of a Gaussian distribution for the predicted value.

  • A Novel Layout-Based Single Event Transient Injection Approach to Evaluate the Soft Error Rate of Large Combinational Circuits in Complementary Metal-Oxide-Semiconductor Bulk Technology

    Publication Year: 2015, Page(s): 1 - 8

    As technology scales down, space-radiation-induced soft errors are becoming a critical issue for the reliability of Integrated Circuits (ICs). In this paper, we propose a novel layout-based Single-Event Transient (SET) injection approach to evaluate the Soft Error Rate (SER) of large combinational circuits in Complementary Metal-Oxide-Semiconductor (CMOS) bulk technology. This approach takes into account the effect of the ion strike location on the SET pulse width. Heavy-ion experiments on two different inverter chains are conducted to verify this layout-based SET injection approach. The simulation and experiment results show that this approach can fairly reflect the SET pulse width distribution. Furthermore, we compare the soft error number calculated by our proposed layout-based approach with that of the normal SET injection approach, and illustrate the detailed circuit response obtained by our proposed approach.

  • Conditional Inactivity Time of Components in a Coherent Operating System

    Publication Year: 2015, Page(s): 1 - 11

    The purpose of this paper is to study the conditional inactivity time of failed components of a coherent system consisting of n identical components with statistically independent lifetimes. Different aging and stochastic properties of this conditional random variable are obtained. Some mixture representations for the conditional inactivity time of the components are also derived. We give sufficient conditions based on the signature and the common distribution of component lifetimes to obtain stochastic ordering results for coherent systems. Some stochastic orders on the dynamic signature of coherent systems are also provided.

  • Adaptive Warranty Prediction for Highly Reliable Products

    Publication Year: 2015, Page(s): 1 - 11

    Field return rate prediction is important for manufacturers to assess product reliability and develop effective warranty management. To get timely predictions, lab reliability tests have been widely used to assess field performance before the product is introduced to the market. This work concerns warranty prediction for highly reliable products. However, due to the high reliability associated with modern electronic devices, the failure data in lab tests are typically insufficient for each individual product, resulting in less accurate prediction of the field return rate. To overcome this issue, a hierarchical reliability model is suggested to efficiently integrate the information from multiple devices of a similar type in the historical database. Under a Bayesian framework, the warranty prediction for a new product can be inferred and updated as data collection progresses. The proposed methodology is applied to a case study in the information and communication technology industry for illustration. Bayesian prediction is demonstrated to be very effective compared to other alternatives via a cross-validation study. In particular, the prediction error rate based on our updating prediction scheme improves significantly as more field data are collected, reaching a prediction error rate lower than 20% within 3 months of the product launch.
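    The updating idea can be illustrated with a deliberately simplified conjugate model: a gamma prior (standing in for the information pooled from similar historical products) updated by Poisson monthly claim counts. All numbers below are made up for illustration and are not from the case study.

    ```python
    # Gamma-Poisson sketch of Bayesian updating of a field return rate.
    a, b = 4.0, 2000.0        # prior: ~2 claims per 1000 unit-months, from history
    units_in_field = 10_000   # fielded units accruing warranty exposure

    for month, claims in enumerate([15, 22, 18], start=1):
        a += claims           # conjugate posterior update with each month of data
        b += units_in_field
        print(f"month {month}: posterior mean rate = {a / b:.5f} claims/unit-month")
    ```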

  • Fuzzy Classification With Restricted Boltzmann Machines and Echo-State Networks for Predicting Potential Railway Door System Failures

    Publication Year: 2015, Page(s): 1 - 8

    In this paper, a fuzzy classification approach combining Echo-State Networks (ESNs) and a Restricted Boltzmann Machine (RBM) is proposed for predicting potential railway rolling stock system failures using discrete-event diagnostic data. The approach is demonstrated on a case study of a railway door system with real data. Fuzzy classification enables the use of linguistic variables for the definition of the time intervals in which the failures are predicted to occur. It gives users a more intuitive way to handle the predictions, and increases the acceptance of the proposed approach. The research results confirm the suitability of the proposed combination of algorithms for predicting railway rolling stock system failures, showing good prediction accuracy on the railway door system case study.

  • Dynamic Uncertain Causality Graph Applied to Dynamic Fault Diagnoses of Large and Complex Systems

    Publication Year: 2015, Page(s): 1 - 18

    Intelligent systems for online fault diagnosis can increase the reliability, safety, and availability of large and complex systems. As an intelligent system, the Dynamic Uncertain Causality Graph (DUCG) is a recently introduced approach to graphically and compactly represent complex uncertain causalities and perform probabilistic reasoning, which can be applied in fault diagnosis and other tasks. However, only static evidence was utilized previously. In this paper, the methodology for DUCG to perform fault diagnosis with dynamic evidence is presented. Causality propagations among sequential time slices are avoided. In the case of process systems, the basic failure events are classified as initiating and non-initiating events. This classification can greatly increase the efficiency of fault diagnosis. Failure rates of initiating events can be used to replace failure probabilities without affecting diagnostic results. Examples are provided to illustrate the methodology.

  • An Effective Integrity Check Scheme for Secure Erasure Code-Based Storage Systems

    Publication Year: 2015, Page(s): 1 - 12

    In the application of cloud storage, a user no longer possesses his files in his local depository, and is thus concerned about the security of the stored files. Data confidentiality and data robustness are the main security issues. For data confidentiality, the user can first encrypt files and then store the encrypted files in a cloud storage. For data robustness, there are two concerns: service failure, and service corruption. We are concerned with data robustness in cloud storage services. Recently, Lin and Tzeng proposed a secure erasure code-based storage system with multiple key servers. Their system supports a repair mechanism, where a new storage server can compute a new ciphertext from the ciphertexts obtained from the remaining storage servers. Their system considers data confidentiality in the cloud, and data robustness against storage server failure. In this paper, we propose an integrity check scheme for their system to enhance data robustness against storage server corruption, in which tampered ciphertexts are returned. With our integrity check scheme, their storage system can deal with not only the problem of storage server failure, but also the problem of storage server corruption. The challenging part of our work is to construct homomorphic integrity tags: new integrity tags can be computed from old integrity tags by storage servers without involvement of the user's secret key or backup servers. We prove the security of our integrity check scheme formally, and establish the parameters for achieving an overwhelming probability of a successful data retrieval.

  • A Stochastic Approach for the Analysis of Dynamic Fault Trees With Spare Gates Under Probabilistic Common Cause Failures

    Publication Year: 2015, Page(s): 1 - 15

    A redundant system usually consists of primary and standby modules. The so-called spare gate is extensively used to model the dynamic behavior of redundant systems in the application of dynamic fault trees (DFTs). Several methodologies have been proposed to evaluate the reliability of DFTs containing spare gates by computing the failure probability. However, such approaches usually require either a complex analysis or significant simulation time. Moreover, it is difficult to compute the failure probability of a system with component failures that are not exponentially distributed. Additionally, probabilistic common cause failures (PCCFs) have been widely reported, usually occurring in a statistically dependent manner. Failing to account for the effect of PCCFs overestimates the reliability of a DFT. In this paper, stochastic computational models are proposed for an efficient analysis of spare gates and PCCFs in a DFT. Using these models, a DFT with spare gates under PCCFs can be efficiently evaluated. In the proposed stochastic approach, a signal probability is encoded as a non-Bernoulli sequence of random permutations of fixed numbers of ones and zeros. The component's failure probability is not limited to an exponential distribution, so this approach is applicable to DFT analysis in the general case. Several case studies are evaluated to show the accuracy and efficiency of the proposed approach, compared to both an analytical approach and Monte Carlo (MC) simulation.
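    The encoding described above is easy to demonstrate: a probability becomes a fixed number of ones placed at random positions, and s-independent inputs propagate through gate logic as bitwise operations. A minimal sketch, with illustrative probabilities:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = 100_000  # sequence length; longer sequences resolve probabilities more finely

    def encode(p):
        """Non-Bernoulli encoding: exactly round(p*L) ones, randomly permuted."""
        bits = np.zeros(L, dtype=bool)
        bits[: round(p * L)] = True
        return rng.permutation(bits)

    def prob(bits):
        return bits.mean()

    # s-independent component failure probabilities propagate through gate
    # logic by bitwise operations on the sequences:
    a, b = encode(0.2), encode(0.3)
    print(prob(a & b))   # AND of failures: ~0.06
    print(prob(a | b))   # OR of failures:  ~0.44
    ```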

  • Failure Event Prediction Using Hidden Markov Model Approaches

    Publication Year: 2015, Page(s): 1 - 11

    In past years, Hidden Markov Models have been used successfully in several fields and applications. More recently, these models have been applied to improve the reliability of machinery systems. In many cases, failure is preceded by specific sequences of events (a signature), which can be detected by an adequate Hidden Markov Model (HMM). Classical laws such as lifetime models or survival functions are used to estimate the lifetime of a system. The drawback of these laws is that only the elapsed time is used to estimate the end of life of a system. The aim of this paper is to validate an HMM approach. We first use a synthetic HMM model of degradation, inspired by a real process, to produce event sequences. In this case, we can adjust the failure rate by changing model parameters. All the parameters of this synthetic model are known, and provide references which can be evaluated by different indicators. Classical survival functions used in reliability are computed on the synthetic sequences; these laws validate the behavior of the synthetic model. The higher the failure rate, the shorter the lifetime duration. These results confirm that a four-state, left-to-right HMM topology can represent the degradation level of a system. Subsequently, this HMM approach is applied to a real case, where degradation levels are unknown. Degradation estimates are compared with the results from the classical survival functions used in the first case. We then show that the degradation level provided by the HMM approach is more efficient than the survival functions approach: the HMM approach takes into account the events collected about a system, not only the elapsed time as is the case with survival functions.
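    The filtering step behind such an approach is the standard HMM forward recursion. A minimal self-contained sketch of a four-state, left-to-right degradation HMM (all transition and emission values are illustrative assumptions, not the paper's learned parameters):

    ```python
    import numpy as np

    # States 0..3 model increasing degradation; left-to-right structure means
    # transitions only stay put or move forward, with state 3 (failure) absorbing.
    A = np.array([[0.90, 0.10, 0.00, 0.00],
                  [0.00, 0.85, 0.15, 0.00],
                  [0.00, 0.00, 0.80, 0.20],
                  [0.00, 0.00, 0.00, 1.00]])
    B = np.array([[0.70, 0.20, 0.10],
                  [0.40, 0.40, 0.20],
                  [0.20, 0.40, 0.40],
                  [0.05, 0.25, 0.70]])   # P(event code | state), 3 event codes
    pi = np.array([1.0, 0.0, 0.0, 0.0])

    def forward(obs):
        """Forward algorithm: filtered state distribution after a sequence of events."""
        alpha = pi * B[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
        return alpha / alpha.sum()

    # Probability mass shifting toward state 3 signals an imminent failure.
    print(forward([0, 0, 1, 2, 2]))
    ```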

  • Component Level Versus System Level k-Out-of-n Assembly Systems

    Publication Year: 2015, Page(s): 1 - 9

    A well-known principle in engineering is that redundancy at the component level is generally more effective for reliability than redundancy at the system level. Here, redundancy simply means that components are connected in parallel. Motivated by this principle, in this paper, a more general problem of assembling K-out-of-N systems optimally is studied. We consider two assembled systems: one assembles a k-out-of-n system and an l-out-of-m system at the system level, and the other assembles them at the component level. With some appropriate assumptions on (n,k) and (m,l), some results on stochastic comparisons of the two assembled systems are derived. From these results, some useful principles for assembling K-out-of-N systems optimally are proposed. Two numerical examples are given to illustrate our results as well.
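    The classical principle is easy to verify numerically. The sketch below assembles two identical 2-out-of-3:G systems (a simplification of the paper's k-out-of-n plus l-out-of-m setting) once at the system level and once at the component level, assuming i.i.d. components:

    ```python
    from math import comb

    def k_out_of_n(k, n, p):
        # Reliability of a k-out-of-n:G system of i.i.d. components with reliability p
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    p = 0.9
    # System-level assembly: two 2-out-of-3 systems in parallel.
    sys_level = 1 - (1 - k_out_of_n(2, 3, p)) ** 2
    # Component-level assembly: pair up the components, each pair in parallel.
    comp_level = k_out_of_n(2, 3, 1 - (1 - p) ** 2)
    print(sys_level, comp_level)   # 0.999216 vs 0.999702: component level wins
    ```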

  • Robust Parameter Design for Quality and Reliability Issues Based on Accelerated Degradation Measurements

    Publication Year: 2015, Page(s): 1 - 11

    Manufacturing quality and lifetime testing conditions may affect product reliability measurements. The literature for the design of experiments (DOE) and robust product optimization considering both quality and reliability issues is scarce. This article develops a model to include both manufacturing variables and accelerated degradation test (ADT) conditions. A simple algorithm provides calculations of the maximum likelihood estimates (MLEs) of these model parameters and percentile lifetimes. Variances of these estimates are derived based on large sample theory. Our DOE plans focus on deciding replication sizes and proportions of the test-units allocated at three stress levels for various manufacturing and ADT conditions. This work also explores robust parameter design (RPD) optimizations for selected controllable manufacturing variables to achieve the longest product lifetime and smallest variation in lifetime distributions.

  • Aggregate Discounted Warranty Cost Forecast for a New Product Considering Stochastic Sales

    Publication Year: 2015, Page(s): 1 - 12

    Most commercial products are sold with a warranty. Product repairs during the warranty period are often free of charge, and contribute significantly to the costs of a manufacturer. Estimation of the warranty costs is important for the manufacturer to prepare sufficient warranty reserves for future claims. Because products are often sold to customers intermittently, the total number of sold units under warranty varies over time, which presents difficulties in forecasting the warranty costs over time. In this study, we consider the stochastic sales process, and derive the expectation and variance of the aggregate discounted warranty costs within a given period of time. By taking the variation of the warranty costs into consideration, the result can be used in the preparation of a conservative warranty reserve for a new product. The discounted life-cycle warranty cost is a special case of our model, and it can also be determined from our result. Numerical results show the applicability of our model in estimating periodic discounted warranty costs, and preparing both short-term and long-term warranty reserves.

  • A New Degradation Indicator Based on a Statistical Anomaly Approach

    Publication Year: 2015, Page(s): 1 - 10

    This paper presents a new method for combining measured parameters into a single indicator for monitoring the condition of systems subjected to degradation effects. The proposed approach integrates nonparametric density estimation techniques into Runger's $U^{2}$ method, which allows for the separation of variables that are directly related to degradation effects from those which are not. Two simulated case studies are presented for illustration, namely, the monitoring of a flap extension and retraction system, and a gas turbine employed as an auxiliary power unit. For comparison, degradation indicators are also calculated by using Hotelling's $T^{2}$ and Runger's $U^{2}$ methods, as well as a nonparametric method without separation of variables. In both case studies, the proposed method provided the best results in terms of fault detection performance and suitability for remaining useful life prediction.

  • Scheduling Preventive Maintenance as a Function of an Imperfect Inspection Interval

    Publication Year: 2015, Page(s): 1 - 15

    This paper considers a system with periodic inspections and periodic preventive maintenance (PM) to detect and correct hidden failures that generate a penalty cost per unit time undetected. Imperfect periodic inspections (IPIs) occur at a chosen interval $t$, and detect hidden failures with probability $p\in(0,1)$. Both reactive maintenance (RM), performed when a hidden failure is detected by an IPI, and PM, performed at a chosen time $(n+1)t$, renew the system. The objective is to determine the optimal frequency $t$ and quantity $n$ of imperfect inspections between PM such that the expected cost (which includes the costs of undetected failures, IPIs, PM, and RM) per unit time is minimized over an infinite horizon. We analytically establish conditions for the existence of a finite optimal $t$ for a given value of $n$, and discuss the asymptotic behavior of the objective function for large $n$ and $t$. These results are further exploited to describe convergence properties of a proposed approach for finding a globally optimal solution. Also, for the special case of a Weibull time-to-failure distribution, we derive conditions that guarantee the uniqueness of a locally optimal solution for a given value of $n$.
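    Although the paper treats the objective analytically, the cost structure is easy to estimate by renewal-reward simulation. A sketch with assumed cost constants and an assumed Weibull time-to-failure law, all illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def cost_rate(t, n, p, c_i=1.0, c_pm=20.0, c_rm=30.0, c_d=5.0,
                  shape=2.0, scale=10.0, n_rep=20_000):
        """Renewal-reward Monte Carlo estimate of the expected cost per unit time.

        One cycle: IPIs at t, 2t, ..., nt each detect an existing hidden failure
        with probability p (triggering RM); otherwise PM renews the system at
        (n+1)t. c_d is the penalty per unit time a failure stays undetected.
        """
        total_cost = total_len = 0.0
        for x in scale * rng.weibull(shape, n_rep):   # hidden failure times
            length, cost, insp = (n + 1) * t, c_pm, 0
            for i in range(1, n + 1):
                insp = i
                if i * t >= x and rng.random() < p:   # failure present and detected
                    length, cost = i * t, c_rm
                    break
            cost += c_i * insp + c_d * max(0.0, length - min(x, length))
            total_cost += cost
            total_len += length
        return total_cost / total_len

    print(cost_rate(t=2.0, n=4, p=0.7))
    ```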

  • A Survey of Securing Networks Using Software Defined Networking

    Publication Year: 2015, Page(s): 1 - 12

    Software Defined Networking (SDN) is rapidly emerging as a new paradigm for managing and controlling the operation of networks ranging from the data center to the core, enterprise, and home. The logical centralization of network intelligence presents exciting challenges and opportunities to enhance security in such networks, including new ways to prevent, detect, and react to threats, as well as innovative security services and applications that are built upon SDN capabilities. In this paper, we undertake a comprehensive survey of recent works that apply SDN to security, and identify promising future directions that can be addressed by such research.

  • Reliability and Birnbaum Importance for Sparsely Connected Circular Consecutive-k Systems

    Publication Year: 2015, Page(s): 1 - 18

    A consecutive-$k$-out-of-$n$:F (G) system with sparse $d$ consists of $n$ components ordered in a line or a circle; the system fails (works) iff there exist at least $k$ consecutive failed (working) components with sparse $d$, for $0 \le d \le n - k$. In this paper, a circular consecutive-$k$-out-of-$n$ system with sparse $d$ is considered. Some equations for the system reliability and the Birnbaum importance are derived by means of the finite Markov chain imbedding approach. Then the Birnbaum importance of components is compared in the situation where the system is under an IID model, and in the situation where one of the components is known to have failed. Finally, some numerical examples are presented to illustrate the results obtained in the paper.
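    The finite Markov chain imbedding approach is illustrated below for the simplest relative of these systems, the linear consecutive-k-out-of-n:F system with sparse d = 0 and i.i.d. components; the paper's circular, sparse-d case enlarges the state space but follows the same pattern:

    ```python
    import numpy as np

    def consec_k_out_of_n_F(k, n, p):
        """Reliability of a linear consecutive-k-out-of-n:F system via finite
        Markov chain imbedding: state j < k tracks the current run of
        consecutive failures; state k is the absorbing failure state."""
        q = 1.0 - p
        M = np.zeros((k + 1, k + 1))
        for j in range(k):
            M[j, 0] = p        # component works: the failure run resets
            M[j, j + 1] = q    # component fails: the failure run grows
        M[k, k] = 1.0          # failed stays failed
        dist = np.zeros(k + 1)
        dist[0] = 1.0
        for _ in range(n):     # imbed one component per step
            dist = dist @ M
        return 1.0 - dist[k]

    print(consec_k_out_of_n_F(k=2, n=5, p=0.9))
    ```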


Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong