
IEEE Transactions on Reliability

Issue 2 • June 2014


Displaying Results 1 - 25 of 29
  • Table of contents

    Publication Year: 2014, Page(s): C1-385
    PDF (125 KB) | Freely Available from IEEE
  • IEEE Transactions on Reliability publication information

    Publication Year: 2014, Page(s): C2
    PDF (132 KB) | Freely Available from IEEE
  • Dependable Fiber-Wireless (FiWi) Access Networks and Their Role in a Sustainable Third Industrial Revolution Economy

    Publication Year: 2014, Page(s): 386-400
    Cited by: Papers (1)
    PDF (1596 KB) | HTML

    According to the Organisation for Economic Co-operation and Development (OECD), broadband access networks enable the emergence of new business models, processes, inventions, as well as improved goods and services. In fact, broadband access is viewed as a so-called general purpose technology (GPT) that has the potential to fundamentally change how and where economic activity is organized. In this paper, we focus on the implications of the emerging Third Industrial Revolution (TIR) economy, which goes well beyond current austerity measures, and has recently been officially endorsed by the European Commission as the economic growth roadmap toward a competitive low carbon society by 2050. This roadmap has been receiving an increasing amount of attention from other key players, e.g., most recently the Government of China. More specifically, we describe a variety of advanced techniques to render converged bimodal fiber-wireless (FiWi) broadband access networks dependable, including optical coding based fiber fault monitoring techniques, localized optical redundancy strategies, wireless extensions, and availability-aware routing algorithms, to improve their reliability, availability, survivability, security, and safety. Next, we elaborate on how the resultant dependable FiWi access networks can be exploited to enhance the dependability of other critical infrastructures of our society, most notably the future smart power grid and its envisioned electric transportation, by means of probabilistic analysis, co-simulation, and experimental demonstration.

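    The availability-aware routing idea mentioned in the abstract can be made concrete with a generic Python sketch (illustrative only, not the authors' algorithm; the topology, node names, and link availabilities are invented): if each link has an independent availability, a path's availability is the product of its link availabilities, so the most-available route is a shortest path under the additive weights -log(availability).

        # Generic availability-aware routing sketch (illustrative, not the paper's
        # algorithm): a path's availability is the product of its link availabilities,
        # so the most-available route is a shortest path under weights -log(availability).
        import heapq, math

        # Invented topology: node -> {neighbor: link availability}
        links = {
            "OLT":  {"ONU1": 0.9995, "ONU2": 0.9990},
            "ONU1": {"OLT": 0.9995, "AP1": 0.998},
            "ONU2": {"OLT": 0.9990, "AP1": 0.995, "AP2": 0.997},
            "AP1":  {"ONU1": 0.998, "ONU2": 0.995, "STA": 0.990},
            "AP2":  {"ONU2": 0.997, "STA": 0.992},
            "STA":  {"AP1": 0.990, "AP2": 0.992},
        }

        def most_available_path(src, dst):
            """Dijkstra on -log(availability); returns (path, path availability)."""
            best, prev, heap = {src: 0.0}, {}, [(0.0, src)]
            while heap:
                cost, u = heapq.heappop(heap)
                if u == dst:
                    break
                if cost > best.get(u, math.inf):
                    continue                      # stale queue entry
                for v, avail in links[u].items():
                    c = cost - math.log(avail)
                    if c < best.get(v, math.inf):
                        best[v], prev[v] = c, u
                        heapq.heappush(heap, (c, v))
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return path[::-1], math.exp(-best[dst])

        print(most_available_path("OLT", "STA"))  # e.g. OLT -> ONU2 -> AP2 -> STA
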
  • Span-Restorable Elastic Optical Networks Under Different Spectrum Conversion Capabilities

    Publication Year: 2014, Page(s): 401-411
    PDF (1735 KB) | HTML

    This paper deals with the design of a span-restorable (SR) elastic optical network under different spectrum conversion capabilities, including 1) no spectrum conversion, 2) partial spectrum conversion, and 3) full spectrum conversion. We develop Integer Linear Programming (ILP) models to minimize both the required spare capacity and the maximum number of link frequency slots used for each of the three spectrum conversion cases. We also consider using the Bandwidth Squeezed Restoration (BSR) technique to obtain the maximal restoration levels for the affected service flows, subject to the limited frequency slot capacity on each fiber link. Our studies show that the spectrum conversion capability significantly improves spare capacity efficiency for an elastic optical network.

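    The role of spectrum conversion can be illustrated with a toy Python check of the two placement rules (an illustration of the constraints only, not the paper's ILP models; the free-slot bitmaps are invented): without conversion, a restored flow needs the same contiguous block of frequency slots free on every link of its backup path, whereas with full conversion each link only needs some contiguous block of the required size.

        # Toy check of spectrum constraints (not the paper's ILP models): without
        # spectrum conversion a restored flow needs the *same* contiguous block of free
        # frequency slots on every link of its backup path; with full conversion each
        # link only needs *some* contiguous block of the required size.
        def first_fit(free_mask, size):
            """Return the first start index of a run of `size` free slots, or None."""
            run = 0
            for i, free in enumerate(free_mask):
                run = run + 1 if free else 0
                if run == size:
                    return i - size + 1
            return None

        def place_without_conversion(path_masks, size):
            # Spectrum continuity: intersect the free slots of all links first.
            common = [all(m[i] for m in path_masks) for i in range(len(path_masks[0]))]
            return first_fit(common, size)

        def place_with_full_conversion(path_masks, size):
            # Each link is handled independently; report one start slot per link.
            starts = [first_fit(m, size) for m in path_masks]
            return starts if all(s is not None for s in starts) else None

        # Invented free-slot bitmaps (1 = free) for the 3 links of a backup path.
        masks = [
            [1, 1, 0, 0, 1, 1, 1, 0, 1, 1],
            [0, 1, 1, 1, 1, 0, 0, 1, 1, 1],
            [1, 1, 1, 0, 0, 1, 1, 1, 1, 0],
        ]
        print(place_without_conversion(masks, 3))    # None: no common 3-slot block
        print(place_with_full_conversion(masks, 3))  # [4, 1, 0]: feasible per link
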
  • Fast Recovery From Link Failures in Ethernet Networks

    Publication Year: 2014, Page(s): 412-426
    PDF (1526 KB) | HTML

    Fast recovery from link failures is a well-studied topic in IP networks. Employing fast recovery in Ethernet networks is complicated because forwarding is based on destination MAC addresses, which lack the hierarchical structure that IP prefixes provide at Layer 3. Moreover, switches employ backward learning to populate their forwarding table entries. Thus, any fast-recovery mechanism in Ethernet networks must be based on undirected spanning trees if backward learning is to be retained. In this paper, we develop three alternatives for achieving fast recovery from single link failures in Ethernet networks. All three approaches provide guaranteed recovery from single link failures; they differ in the technologies required, namely VLAN rewrite, MAC-in-MAC encapsulation, or both. We study the performance of the developed approaches on five different networks.

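    A minimal Python sketch of spanning-tree-based protection against single link failures (a generic illustration assuming a 2-edge-connected topology with invented switch names, not the paper's VLAN-rewrite or MAC-in-MAC schemes): precompute, for every link of the primary spanning tree, a backup spanning tree that avoids that link.

        # Sketch of spanning-tree-based protection (not the paper's VLAN-rewrite or
        # MAC-in-MAC schemes): for every link of a primary spanning tree, precompute a
        # backup spanning tree of the (2-edge-connected) topology that avoids that link.
        def spanning_tree(nodes, edges, excluded=frozenset()):
            """Kruskal-style spanning tree via union-find; returns an edge set or None."""
            parent = {n: n for n in nodes}
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]   # path halving
                    x = parent[x]
                return x
            tree = set()
            for e in edges:
                if e in excluded:
                    continue
                ru, rv = find(e[0]), find(e[1])
                if ru != rv:
                    parent[ru] = rv
                    tree.add(e)
            return tree if len(tree) == len(nodes) - 1 else None

        nodes = ["s1", "s2", "s3", "s4", "s5"]
        edges = [("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s4", "s5"),
                 ("s5", "s1"), ("s2", "s4")]        # invented switch topology
        primary = spanning_tree(nodes, edges)
        backups = {e: spanning_tree(nodes, edges, excluded={e}) for e in primary}
        for failed, tree in backups.items():
            print("if", failed, "fails ->", sorted(tree))
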
  • Balancing Cost and Reliability in the Design of Internet Protocol Backbone Using Agile Optical Networking

    Publication Year: 2014, Page(s): 427-442
    PDF (1398 KB) | HTML

    To address reliability challenges due to failures and planned outages, Internet Service Providers (ISPs) typically use two backbone routers (BRs) at each central office. Access routers (ARs) are connected to these BRs in a dual-homed configuration. To provide reliability through node and path diversity, redundant backbone routers and redundant transport equipment to interconnect them are deployed. However, deploying such redundant resources increases the overall cost of the network. Hence, to avoid such redundant resources, a fundamental redesign of the backbone network leveraging the capabilities of an agile optical transport network is highly desired. In this paper, we propose a fundamental redesign of IP backbones. Our alternative design uses only a single router at each office. To survive failures or outages of a single local BR, we leverage the agile optical transport layer to carry traffic to remote BRs. Optimal mapping of local ARs to remote BRs is determined by solving an Integer Linear Program (ILP). We describe how our proposed design can be realized using current optical transport technology. We evaluate network designs for cost and performability, the latter being a metric combining performance and availability. We show significant reduction in cost for approximately the same level of reliability as current designs.

  • A Dynamic Programming Algorithm for Reliable Network Design

    Publication Year: 2014, Page(s): 443-454
    Cited by: Papers (1)
    PDF (2999 KB) | HTML

    This paper addresses the NP-hard problem of designing a network topology with maximum all-terminal reliability subject to a cost constraint, given the locations of the various computer centers (nodes), their connecting links, each link's reliability and cost, and the maximum budget cost to install the links. Because cost is always a major focus in network design, this problem is practical for critical applications requiring maximized reliability. This paper first formulates a Dynamic Programming (DP) scheme to solve the problem. A DP approach, called DPA-1, generates the topology using all spanning trees of the network (STG). The paper shows that DPA-1 is optimal if the spanning trees are optimally ordered. Further, the paper describes an alternative DP algorithm, called DPA-2, that uses only k spanning trees (k ≤ n, where n = |STG|) sorted in increasing weight and lexicographic order to improve the time efficiency of DPA-1 while producing similar results. Extensive simulations using hundreds of benchmark networks that contain up to 1.899102 spanning trees show the merits of the sorting method, and the effectiveness of our algorithms. DPA-2 generates optimal results in 85% of cases, while using only a small number k of spanning trees and up to 16.83 CPU seconds. Furthermore, the non-optimal results are only up to 3.4% off from optimal for the simulated examples.

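    For very small networks, the all-terminal reliability objective in this abstract can be computed exactly by enumerating link up/down states, which is a handy reference point when checking heuristics. The Python sketch below does exactly that on an invented four-node example; it is exponential in the number of links and is not the paper's DP algorithm.

        # Brute-force all-terminal reliability of a small topology, by enumerating link
        # up/down states (exponential in the number of links; a reference point only,
        # not the paper's DP algorithms DPA-1/DPA-2).
        from itertools import product

        def connected(nodes, up_edges):
            seen, stack = set(), [next(iter(nodes))]
            while stack:
                u = stack.pop()
                if u in seen:
                    continue
                seen.add(u)
                stack.extend(v for a, b in up_edges for v in (a, b)
                             if u in (a, b) and v not in seen)
            return seen == set(nodes)

        def all_terminal_reliability(nodes, links):
            """links: dict {(u, v): link reliability}."""
            edges, rel = list(links), 0.0
            for state in product([0, 1], repeat=len(edges)):
                p, up = 1.0, []
                for e, s in zip(edges, state):
                    p *= links[e] if s else 1.0 - links[e]
                    if s:
                        up.append(e)
                if connected(nodes, up):
                    rel += p
            return rel

        nodes = {"a", "b", "c", "d"}                # invented 4-node example
        links = {("a", "b"): 0.9, ("b", "c"): 0.9, ("c", "d"): 0.9,
                 ("d", "a"): 0.9, ("a", "c"): 0.95}
        print(round(all_terminal_reliability(nodes, links), 6))
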
  • Probabilistic Novelty Detection With Support Vector Machines

    Publication Year: 2014, Page(s): 455-467
    PDF (1843 KB) | HTML

    Novelty detection, or one-class classification, is of particular use in the analysis of high-integrity systems, in which examples of failure are rare in comparison with the number of examples of stable behaviour, such that a conventional multi-class classification approach cannot be taken. Support Vector Machines (SVMs) are a popular means of performing novelty detection, and it is conventional practice to use a train-validate-test approach, often involving cross-validation, to train the one-class SVM, and then select appropriate values for its parameters. An alternative method, used with multi-class SVMs, is to calibrate the SVM output into conditional class probabilities. A probabilistic approach offers many advantages over the conventional method, including the facility to select automatically a probabilistic novelty threshold. The contributions of this paper are (i) the development of a probabilistic calibration technique for one-class SVMs, such that on-line novelty detection may be performed in a probabilistic manner; and (ii) the demonstration of the advantages of the proposed method (in comparison to the conventional one-class SVM methodology) using case studies, in which one-class probabilistic SVMs are used to perform condition monitoring of a high-integrity industrial combustion plant, and in detecting deterioration in patient physiological condition during patient vital-sign monitoring.

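    A minimal Python sketch of probabilistic one-class novelty detection (illustrative only: it uses a simple held-out, p-value-style mapping of the one-class SVM score rather than the calibration technique proposed in the paper, and the Gaussian "healthy" data are invented; requires scikit-learn):

        # Probabilistic one-class novelty detection sketch (requires scikit-learn).
        # Calibration here is a simple held-out, p-value style mapping of the one-class
        # SVM score, not the calibration technique proposed in the paper; the Gaussian
        # "healthy" data are invented.
        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(0)
        normal = rng.normal(loc=0.0, scale=1.0, size=(600, 2))   # healthy observations
        train, valid = normal[:400], normal[400:]

        model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(train)
        valid_scores = model.decision_function(valid)            # higher = more normal

        def novelty_p_value(x):
            """Fraction of held-out normal points scoring no higher than x."""
            s = model.decision_function(np.atleast_2d(x))[0]
            return (np.sum(valid_scores <= s) + 1) / (len(valid_scores) + 1)

        print(novelty_p_value([0.1, -0.2]))   # typical point: p-value well above 0.05
        print(novelty_p_value([4.0, 4.0]))    # distant point: small p-value, flag as novel
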
  • Detection and Reliability Risks of Counterfeit Electrolytic Capacitors

    Publication Year: 2014, Page(s): 468-479
    PDF (2198 KB) | HTML

    Counterfeit electronics have been reported in a wide range of products, including computers, medical equipment, automobiles, avionics, and military systems. Counterfeiting is a growing concern for original equipment manufacturers (OEMs) in the electronics industry. Even inexpensive passive components such as capacitors and resistors are frequently found to be counterfeit, and their incorporation into electronic assemblies can cause early failures with potentially serious economic and safety implications. This study examines counterfeit electrolytic capacitors that were unknowingly assembled in power supplies used in medical devices, and then failed in the field. Upon analysis, the counterfeit components were identified, and their reliability relative to genuine parts was assessed. This paper presents an offline reliability assessment methodology and a systematic counterfeit detection methodology for electrolytic capacitors, which include optical inspection, X-ray examination, weight measurement, electrical parameter measurement over temperature, and chemical characterization of the electrolyte using Fourier Transform Infrared Spectroscopy (FTIR) to assess the failure modes, mechanisms, and reliability risks. FTIR was successfully able to detect a lower concentration of ethylene glycol in the counterfeit capacitor electrolyte. In the electrical properties measurement, the distribution of values at room temperature was broader for counterfeit parts than for the authentic parts, and some electrical parameters at the maximum and minimum rated temperatures were out of specification. These techniques, particularly FTIR analysis of the electrolyte and electrical measurements at the lowest and highest rated temperatures, can be very effective in screening for counterfeit electrolytic capacitors.

  • A Stochastic Approach for the Analysis of Fault Trees With Priority AND Gates

    Publication Year: 2014, Page(s): 480-494
    PDF (2269 KB) | HTML

    Dynamic fault tree (DFT) analysis has been used to account for dynamic behaviors such as the sequence-dependent, functional-dependent, and priority relationships among the failures of basic events. Various methodologies have been developed to analyze a DFT; however, most methods require a complex analytical procedure or a significant simulation time for an accurate analysis. In this paper, a stochastic computational approach is proposed for an efficient analysis of the top event's failure probability in a DFT with priority AND (PAND) gates. A stochastic model is initially proposed for a two-input PAND gate, and a successive cascading model is then presented for a general multiple-input PAND gate. A stochastic approach using the proposed models provides an efficient analysis of a DFT compared to an accurate analysis or algebraic approach. The accuracy of a stochastic analysis increases with the length of random binary bit streams in stochastic computation. The use of non-Bernoulli sequences of random permutations of fixed counts of 1s and 0s as initial input events' probabilities makes the stochastic approach more efficient, and more accurate than Monte Carlo simulation. Non-exponential failure distributions and repeated events are readily handled by the stochastic approach. The accuracy, efficiency, and scalability of the stochastic approach are shown by several case studies of DFT analysis.

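    The two-input PAND gate analyzed above has a simple closed form when the basic events are exponential, which a plain Monte Carlo run can verify. The Python sketch below is such a baseline check with invented failure rates and mission time; it is not the bit-stream stochastic model proposed in the paper.

        # Monte Carlo baseline for a two-input priority-AND (PAND) gate with exponential
        # basic events: the gate output fails by time t if both inputs fail and input A
        # fails no later than input B. (A plain simulation check with invented rates,
        # not the bit-stream stochastic model proposed in the paper.)
        import numpy as np

        lam_a, lam_b, t, n = 0.002, 0.001, 1000.0, 1_000_000
        rng = np.random.default_rng(1)
        ta = rng.exponential(1.0 / lam_a, n)
        tb = rng.exponential(1.0 / lam_b, n)
        mc = np.mean((ta <= tb) & (tb <= t))

        # Closed form: integrate f_B(s) * P(T_A <= s) over [0, t].
        exact = (1.0 - np.exp(-lam_b * t)) \
                - lam_b / (lam_a + lam_b) * (1.0 - np.exp(-(lam_a + lam_b) * t))
        print(f"Monte Carlo {mc:.4f}  vs  closed form {exact:.4f}")
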
  • Distributed Prognostics Based on Structural Model Decomposition

    Publication Year: 2014, Page(s): 495-510
    PDF (2327 KB) | HTML

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system, and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into computationally-independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Computationally independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability.

  • Planning Progressive Type-I Interval Censoring Life Tests With Competing Risks

    Publication Year: 2014, Page(s): 511-522
    PDF (2908 KB) | HTML

    In this article, we investigate some reliability and quality problems when the competing risks data are progressive type-I interval censored with binomial removals. The failure times of the individual causes are assumed to be statistically independent and exponentially distributed with different parameters. We obtain the estimates of the unknown parameters through a maximum likelihood method, and also derive the Fisher information matrix. The optimal lengths of the inspection intervals are determined under two different criteria. The reliability sampling plans are established under given producer's and customer's risks. A Monte Carlo simulation is conducted to evaluate the performance of the estimators, and some numerical results are presented.

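    As a point of reference for the estimation problem, the complete-data (uncensored) case has a well-known closed-form MLE: with statistically independent exponential competing risks, the estimate of each cause-specific rate is the number of failures from that cause divided by the total time on test. The Python sketch below simulates this simpler case with invented rates; the paper's progressive type-I interval-censored likelihood with binomial removals is more involved and is not reproduced.

        # Complete-data analogue of the estimation problem (a simpler case than the
        # paper's progressive type-I interval-censored scheme): with independent
        # exponential competing risks, the MLE of the cause-k rate is
        # (number of cause-k failures) / (total time on test). Rates are invented.
        import numpy as np

        rng = np.random.default_rng(2)
        lam = {"cause_1": 0.02, "cause_2": 0.05}       # true cause-specific rates
        n = 2000

        latent = {k: rng.exponential(1.0 / v, n) for k, v in lam.items()}
        times = np.minimum(latent["cause_1"], latent["cause_2"])
        causes = np.where(latent["cause_1"] <= latent["cause_2"], "cause_1", "cause_2")

        total_time = times.sum()
        for k in lam:
            print(k, "true", lam[k], "MLE", round(np.sum(causes == k) / total_time, 4))
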
  • Development of a Life Model for Light Emitting Diodes Stressed by Forward Current

    Publication Year: 2014, Page(s): 523-533
    PDF (1343 KB) | HTML

    This paper illustrates the development and the experimental validation of a life model for light emitting diodes (LEDs) able to predict the time to failure under different stress conditions associated with the value and time dependence of applied forward current. In the paper, three different life models are derived by exploiting the results of tests under constant forward current. Then, experiments performed by subjecting LEDs to simple load cycles characterized by step-varying forward current are carried out, and the results are employed as a benchmark for the predictions provided by the combined use of the life models derived under constant stress and cumulative damage theory.

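    The cumulative damage step can be sketched generically in Python: assume a constant-stress life model L(I) and accumulate damage with Miner's linear rule over a step-varying current profile until the damage sum reaches one. The inverse-power life model and every parameter value below are assumptions for illustration, not the life models fitted in the paper.

        # Generic constant-stress life model combined with Miner's linear cumulative-
        # damage rule for a step-varying forward current. The inverse-power model and
        # every parameter value below are assumptions for illustration, not the life
        # models fitted in the paper.
        def life_hours(current_mA, A=1.2e11, n=2.5):
            """Assumed constant-stress life model: L(I) = A * I**(-n) hours."""
            return A * current_mA ** (-n)

        def time_to_failure(profile):
            """profile: list of (current_mA, hours) steps, repeated until damage = 1."""
            damage, elapsed = 0.0, 0.0
            while True:
                for current, hours in profile:
                    step = hours / life_hours(current)      # damage added by this step
                    if damage + step >= 1.0:
                        return elapsed + (1.0 - damage) * life_hours(current)
                    damage += step
                    elapsed += hours

        # Load cycle: 500 h at 350 mA followed by 100 h at 700 mA, repeated.
        print(f"{time_to_failure([(350, 500), (700, 100)]):.0f} h")
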
  • The Residual Lifetime of Surviving Components From Failed Coherent Systems

    Publication Year: 2014, Page(s): 534-542
    PDF (3066 KB) | HTML

    In this paper, we consider the residual lifetimes of surviving components of a failed coherent system with statistically independent and identically distributed components, given that before time t1 (t1 > 0) exactly r (r < n) components have failed, and at time t2 (t2 > t1) the system just failed. Some aging properties and preservation results of the residual lives of the surviving components of such systems are obtained. Also some examples and applications are given.

  • Goodness-of-Fit Tests for the Birnbaum-Saunders Distribution With Censored Reliability Data

    Publication Year: 2014, Page(s): 543-554
    PDF (2295 KB) | HTML

    We propose goodness-of-fit tests for Birnbaum-Saunders distributions with type-II right censored data. Classical goodness-of-fit tests based on the empirical distribution, such as Anderson-Darling, Cramér-von Mises, and Kolmogorov-Smirnov, are adapted to censored data, and evaluated by means of a simulation study. The obtained results are applied to real-world censored reliability data.

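    The Birnbaum-Saunders CDF is F(t; α, β) = Φ((1/α)(√(t/β) - √(β/t))), and under type-II censoring an EDF statistic can be restricted to the r smallest observations. The Python sketch below computes such a Kolmogorov-Smirnov-type distance with fixed, invented parameters (illustrative only; the paper estimates the parameters and calibrates critical values by simulation, which is not reproduced).

        # EDF-type check under type-II right censoring (requires SciPy): compare the
        # empirical CDF of the r smallest observations with the Birnbaum-Saunders CDF
        # F(t; alpha, beta) = Phi((1/alpha) * (sqrt(t/beta) - sqrt(beta/t))).
        # Parameters are fixed here for illustration; the paper estimates them and
        # calibrates critical values by simulation, which is not reproduced.
        import numpy as np
        from scipy.stats import norm

        def bs_cdf(t, alpha, beta):
            t = np.asarray(t, dtype=float)
            return norm.cdf((np.sqrt(t / beta) - np.sqrt(beta / t)) / alpha)

        def ks_statistic_type2(sample, r, alpha, beta):
            """Sup distance between empirical and model CDF over the r smallest obs."""
            n = len(sample)
            t = np.sort(sample)[:r]
            F = bs_cdf(t, alpha, beta)
            i = np.arange(1, r + 1)
            return np.max(np.maximum(i / n - F, F - (i - 1) / n))

        rng = np.random.default_rng(3)
        alpha, beta, n, r = 0.5, 100.0, 60, 45          # censor after the 45th failure
        z = rng.normal(size=n)
        sample = beta * (alpha * z / 2 + np.sqrt((alpha * z / 2) ** 2 + 1)) ** 2
        print(round(ks_statistic_type2(sample, r, alpha, beta), 3))
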
  • Remaining Useful Life Estimation by Classification of Predictions Based on a Neuro-Fuzzy System and Theory of Belief Functions

    Publication Year: 2014, Page(s): 555-566
    PDF (1925 KB) | HTML

    Various approaches for prognostics have been developed, and data-driven methods are increasingly applied. The training step of these methods generally requires huge datasets to build a model of the degradation signal, and to estimate the limit under which the degradation signal should stay. Applicability and accuracy of these methods are thereby closely related to the amount of available data, and sometimes they even require the user to make assumptions about the dynamics of health-state evolution. Accordingly, the aim of this paper is to propose a method for prognostics and remaining useful life estimation that starts from scratch, without any prior knowledge. Assuming that remaining useful life can be seen as the time between the current time and the instant where the degradation is above an acceptable limit, the proposition is based on a classification of prediction strategy (CPS) that relies on two factors. First, it relies on the use of an evolving real-time neuro-fuzzy system that forecasts observations in time. Second, it relies on the use of an evidential Markovian classifier based on Dempster-Shafer theory that enables classifying observations into the possible functioning modes. This approach has the advantage of coping with a lack of data by using an evolving system and the theory of belief functions. Also, one of its main assets is the possibility of training the prognostic system without setting any threshold. The whole proposition is illustrated and assessed by using the C-MAPSS turbofan dataset. RUL estimates are shown to be very close to actual values, and the approach appears to accurately estimate the failure instants, even with limited learning data.

  • Component Redundancy Versus System Redundancy in Different Stochastic Orderings

    Publication Year: 2014, Page(s): 567-582
    PDF (4376 KB) | HTML

    Stochastic orders are useful for comparing the lifetimes of two systems. We discuss both active redundancy and standby redundancy. We show that redundancy at the component level is superior to that at the system level with respect to different stochastic orders, for different types of systems.

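    The classical active-redundancy comparison can be seen directly for a two-component series system: component-level redundancy gives lifetime min(max(X1,Y1), max(X2,Y2)), system-level redundancy gives max(min(X1,X2), min(Y1,Y2)), and the former is never smaller, sample path by sample path. The Python sketch below just checks the two survival curves by Monte Carlo with invented exponential lifetimes; the paper's results for general stochastic orders and standby redundancy are not reproduced.

        # Classical active-redundancy comparison for a 2-component series system:
        # component-level redundancy, min(max(X1,Y1), max(X2,Y2)), is never smaller than
        # system-level redundancy, max(min(X1,X2), min(Y1,Y2)). Monte Carlo check with
        # invented exponential lifetimes (the paper's general results are not reproduced).
        import numpy as np

        rng = np.random.default_rng(4)
        n = 200_000
        x1, x2 = rng.exponential(10.0, n), rng.exponential(15.0, n)   # originals
        y1, y2 = rng.exponential(10.0, n), rng.exponential(15.0, n)   # spares

        component_level = np.minimum(np.maximum(x1, y1), np.maximum(x2, y2))
        system_level = np.maximum(np.minimum(x1, x2), np.minimum(y1, y2))

        for t in (5.0, 10.0, 20.0):
            print(t, np.mean(component_level > t), np.mean(system_level > t))
        # The first survival probability is >= the second at every t; in fact the
        # inequality holds sample path by sample path.
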
  • Decision-Theoretical Model for Failures Which are Tackled by Countermeasures

    Publication Year: 2014, Page(s): 583-592
    PDF (1899 KB) | HTML

    In semiconductor manufacturing, it is necessary to guarantee the reliability of the produced devices. Latent defects have to be screened out by means of burn-in (that is, stressing the devices under accelerated life conditions) before the items are delivered to the customers. In a burn-in study, a sample of the stressed devices is investigated for burn-in relevant failures with the aim of proving a target failure probability level. In general, zero failures are required; if burn-in related failures occur, countermeasures are implemented in the production process, and the burn-in study actually has to be restarted. Countermeasure effectiveness is assessed by experts. In this paper, we propose a statistical model for assessing the devices' failure probability level, taking account of the reduced risk of early failures after the implementation of the countermeasures. Based on that, the target ppm-level can be proven when extending the running burn-in study by a reduced number of additional inspections. Therefore, a restart of the burn-in study is no longer required. A Generalized Binomial model is applied to handle countermeasures with different amounts of effectiveness. The corresponding probabilities are efficiently computed, exploiting a sequential convolution algorithm, which also works for a larger number of possible failures. Furthermore, we discuss the modifications needed in case of uncertain effectiveness values, which are modeled by means of Beta expert distributions. For the more mathematically inclined reader, some details on the model's decision-theoretical background are provided. Finally, the proposed model is applied to reduce the burn-in time, and to plan the additional sample size needed to continue the burn-in studies also in the case of failure occurrences.

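    The kind of sequential convolution mentioned in the abstract can be illustrated on a generic "generalized binomial" (Poisson-binomial) failure count, where each inspected device has its own failure probability, for example a baseline probability reduced by an assessed countermeasure effectiveness. The Python sketch below builds the count distribution one Bernoulli term at a time; all numbers are invented and the paper's decision-theoretic model is not reproduced.

        # Sequential convolution of a "generalized binomial" (Poisson-binomial) failure
        # count: each inspected device i fails with its own probability p_i, here a
        # baseline probability reduced by an assessed countermeasure effectiveness.
        # All numbers are invented; the paper's decision-theoretic model is not reproduced.
        def failure_count_pmf(probs):
            """PMF of the number of failures among independent Bernoulli(p_i) trials."""
            pmf = [1.0]                              # zero devices: surely zero failures
            for p in probs:
                new = [0.0] * (len(pmf) + 1)
                for k, mass in enumerate(pmf):
                    new[k] += mass * (1.0 - p)       # current device survives
                    new[k + 1] += mass * p           # current device fails
                pmf = new
            return pmf

        baseline = 0.002
        effectiveness = [0.0] * 40 + [0.8] * 60      # 60 devices tested after the fix
        pmf = failure_count_pmf([baseline * (1.0 - e) for e in effectiveness])
        print("P(zero failures)     =", round(pmf[0], 4))
        print("P(at most 1 failure) =", round(pmf[0] + pmf[1], 4))
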
  • Mean Inactivity Time Function, Associated Orderings, and Classes of Life Distributions

    Publication Year: 2014, Page(s): 593-602
    PDF (2960 KB) | HTML

    The concept of mean inactivity time plays an important role in reliability and life testing. In this investigation, based on the comparison of mean inactivity times of a certain function of two lifetime random variables, we introduce and study a new stochastic order. This new order lies between the reversed hazard rate and the mean inactivity time orders. Several characterizations and preservation properties of the new order under reliability operations of monotone transformation, mixture, and shock models are discussed. In addition, a new class of life distributions called strong increasing mean inactivity time is proposed, and some of its reliability properties are investigated. Finally, to illustrate the concepts, some applications in the context of reliability theory are included.

  • Uncertainty Quantification in Remaining Useful Life Prediction Using First-Order Reliability Methods

    Publication Year: 2014, Page(s): 603-619
    PDF (2056 KB) | HTML

    In this paper, we investigate the use of first-order reliability methods to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in engineering applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect on the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical approaches are computationally cheaper, and sometimes they are better suited for online decision-making. Exact analytical algorithms may not be available for practical engineering applications, but effective approximations can be made using first-order reliability methods. This paper describes three first-order reliability-based methods for RUL uncertainty quantification: the first-order second moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (inverse-FORM). The inverse-FORM methodology is particularly useful in the context of online health monitoring, and this method is illustrated using the power system of an unmanned aerial vehicle, where the goal is to predict the end of discharge of a lithium-ion battery.

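    The FOSM step can be illustrated generically in Python: linearize a limit-state function g(X) around the input means, propagate the mean and variance through the gradient, and approximate the failure probability as Φ(-μ_g/σ_g). The limit state and numbers below are invented (they are not the paper's battery end-of-discharge model); requires SciPy.

        # First-order second-moment (FOSM) sketch (requires SciPy): linearize a
        # limit-state function g(X) around the input means, propagate mean and variance
        # through the gradient, and approximate P(g < 0) by Phi(-mu_g / sigma_g).
        # The limit state below (remaining charge minus predicted demand) and all
        # numbers are invented, not the paper's end-of-discharge model.
        import numpy as np
        from scipy.stats import norm

        def g(x):
            charge, current, horizon = x
            return charge - current * horizon        # fails when demand exceeds charge

        mean = np.array([20.0, 1.8, 10.0])           # Ah, A, h (independent normals)
        std = np.array([0.5, 0.2, 0.8])

        eps = 1e-6                                   # numerical gradient at the mean
        grad = np.array([(g(mean + eps * e) - g(mean - eps * e)) / (2 * eps)
                         for e in np.eye(3)])

        mu_g = g(mean)
        sigma_g = np.sqrt(np.sum((grad * std) ** 2))
        print("P(failure) ~", norm.cdf(-mu_g / sigma_g))
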
  • Degradation Modeling and Maintenance Decisions Based on Bayesian Belief Networks

    Publication Year: 2014, Page(s): 620-633
    PDF (2434 KB) | HTML

    A variety of data-driven models focused on remaining lifetime prediction have been developed under the condition-based monitoring framework. These models either assume that an analytical formula for the underlying degradation path is known, or that the number of degradation states can be determined subjectively. This paper proposes an adaptive discrete-state model to estimate system remaining lifetime based on Bayesian Belief Network (BBN) theory. The model consists of three steps: degradation state identification, degradation state characterization, and remaining life prediction. Our approach does not require an explicit distribution function to characterize the fault evolutionary process. Because the BBN model leverages validity measures to determine the optimum number of states, it avoids state-identification errors under limited feature data. The performance of the BBN model is validated and verified by actual and simulated bearing life data. Numerical examples show that the Bayesian degradation model outperforms a time-based maintenance policy both in cost and reliability.

  • Optimal Defects-Per-Unit Acceptance Sampling Plans Using Truncated Prior Distributions

    Publication Year: 2014, Page(s): 634-645
    PDF (2548 KB) | HTML

    Optimal sampling inspection plans for defects per unit with fixed acceptance numbers and limiting quality levels are developed to provide the appropriate protection to customers when the number of nonconformities per sampled item follows a Poisson distribution. The best inspection scheme assures the customer, who has to judge the quality of the submitted material, that a supplier's lot is released only when there is conclusive evidence that it is satisfactory. The underlying integer nonlinear programming problem is formulated and solved in the frequentist setting, and a practically exact approximation to the minimum sample size is presented. Because there is often no reason to assume that the process average is constant, the classical perspective is then extended to those situations in which there is substantial prior information on the supplier's process. A family of generalized truncated gamma models and several restricted maximum entropy distributions satisfying typical constraints are adopted to describe the stochastic fluctuations in the process average. Optimal defects-per-unit acceptance plans are determined by solving the corresponding constrained minimization problems. Lower and upper bounds on the required sample size are deduced in closed form. A general procedure based on Taylor series expansions of the operating characteristic function around the mean quality level of the rejectable lots is proposed to derive an explicit, accurate, easily computable approximation to the smallest sample size that provides the required average customer protection. This procedure greatly simplifies the determination of optimal plans from defect or failure count data and prior knowledge, and also requires little prior information, namely the prior mean and variance of the quality level of the rejectable lots, which could be estimated from past data and expert opinions. The suggested methodology is applied to the manufacturing of paper and glass for illustrative purposes. Our approach allows practitioners to incorporate into the quality analysis a reduced parameter space for the process average. Furthermore, the proposed sampling plans are reasonably insensitive to small disturbances in the prior knowledge on the process average, and the effective use of the available information on the supplier's process provides a more realistic assessment of the actual customer protection, as well as considerable savings in testing time and sample size.

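    The frequentist core of such a plan is easy to state: with nonconformities per item following a Poisson distribution with mean λ, a lot is accepted when the total count in n sampled items is at most the acceptance number c, and the smallest n meeting the consumer's risk at the limiting quality level follows directly from the Poisson CDF. The Python sketch below performs that classical search with invented plan parameters; the paper's truncated-prior Bayesian extension is not reproduced.

        # Classical frequentist core of the plan (requires SciPy); the paper's
        # truncated-prior Bayesian extension is not reproduced. Nonconformities per item
        # follow a Poisson(lambda); a lot is accepted when the total count in n sampled
        # items is <= c, and we search for the smallest n whose acceptance probability at
        # the limiting quality level does not exceed the consumer's risk beta.
        from scipy.stats import poisson

        def acceptance_probability(n, c, lam):
            return poisson.cdf(c, n * lam)           # total count ~ Poisson(n * lam)

        def minimum_sample_size(c, lam_lql, beta):
            n = 1
            while acceptance_probability(n, c, lam_lql) > beta:
                n += 1
            return n

        c, lam_lql, beta = 2, 0.05, 0.10             # invented plan parameters
        n = minimum_sample_size(c, lam_lql, beta)
        print(n, round(acceptance_probability(n, c, lam_lql), 4))
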
  • Preventive Maintenance of a Multi-State Device Subject to Internal Failure and Damage Due to External Shocks

    Publication Year: 2014, Page(s): 646-660
    PDF (3069 KB) | HTML

    Preventive maintenance is of interest in reliability studies, to improve the performance of a system, and to optimise profits. In this study, we model a device subject to internal failures and external shocks, and examine the influence of preventive maintenance. The internal failures can be repairable or non-repairable. External shocks produce cumulative damage until a non-repairable failure occurs. The device is inspected at random times. When an inspection takes place, the level of internal degradation and the damage produced by external shocks are observed. If damage is major, the unit is assigned to preventive maintenance according to the degradation level observed. Minimal preventive maintenance is also undertaken: if internal and external degradation is observed, and only one of them is major, then the device is assigned to preventive maintenance only for the major damage, and the minor damage state is saved in memory. We model the system, solve for the stationary distribution, create measures of reliability in transient and stationary regimes, and introduce rewards by considering profits and different costs. We show the results in algorithmic form, and they are implemented computationally with Matlab. The versatility of the model is shown by a numerical example.

  • Dynamic Function Replacement for System-on-Chip Security in the Presence of Hardware-Based Attacks

    Publication Year: 2014, Page(s): 661-675
    PDF (1344 KB) | HTML

    We describe a set of design methodologies and experiments related to enabling hardware systems to utilize on-the-fly configuration of reconfigurable logic to recover system operation from unexpected loss of system function. Methods we explore include programming with a locally stored configuration bitstream as well as with a configuration bitstream transmitted from a remote site. We examine specific ways of utilizing reconfigurable logic to regenerate system function, as well as the effectiveness of this approach as a function of the type of attack and various architectural attributes of the system. Based on this analysis, we propose architectural features of a System-on-Chip (SoC) that can minimize performance degradation and maximize the likelihood of seamless system operation despite the function replacement. This approach is highly practical in that it does not require special management of system software or of other normal system hardware functions to carry out the replacement.

  • Two-Stage Cost-Sensitive Learning for Software Defect Prediction

    Publication Year: 2014, Page(s): 676-686
    PDF (1482 KB) | HTML

    Software defect prediction (SDP), which classifies software modules into defect-prone and not-defect-prone categories, provides an effective way to maintain high quality software systems. Most existing SDP models attempt to attain lower classification error rates rather than lower misclassification costs. However, in many real-world applications, misclassifying defect-prone modules as not-defect-prone ones usually leads to higher costs than misclassifying not-defect-prone modules as defect-prone ones. In this paper, we first propose a new two-stage cost-sensitive learning (TSCS) method for SDP, which utilizes cost information not only in the classification stage but also in the feature selection stage. Then, specifically for the feature selection stage, we develop three novel cost-sensitive feature selection algorithms, namely Cost-Sensitive Variance Score (CSVS), Cost-Sensitive Laplacian Score (CSLS), and Cost-Sensitive Constraint Score (CSCS), by incorporating cost information into traditional feature selection algorithms. The proposed methods are evaluated on seven real data sets from NASA projects. Experimental results suggest that our TSCS method achieves better performance in software defect prediction compared to existing single-stage cost-sensitive classifiers. Also, our experiments show that the proposed cost-sensitive feature selection methods outperform traditional cost-blind feature selection methods, validating the efficacy of using cost information in the feature selection stage.


Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong