
IEEE Transactions on Reliability

Issue 2 • June 2007

  • Table of contents

    Page(s): C1 - 177
  • IEEE Transactions on Reliability publication information

    Page(s): C2
  • Editorial: How Reliable is Teaching Evaluation? The Relationship of Class Size to Teaching Evaluation Scores

    Page(s): 178 - 181

    While teaching is an important component of engineering education, few widely adopted objective measures for evaluating teaching have emerged in our institutions of higher learning. The most frequently used measure nationally is the teaching questionnaire. This editorial does not attempt to analyse the much-debated topic of whether teaching questionnaires are capable of measuring teaching effectiveness. What this paper does is quantify the relationship between class size and scores on teaching questionnaires. We report here, for the first time, that 1) compared to larger classes, course ratings are higher for classes of size 20 or less, and course ratings decrease as the class size increases within this small-class group; and 2) course ratings are independent of class size when the class size is larger than 20. This editorial is based on data from end-of-term teaching appraisals obtained from the archives at the Texas A&M University College of Engineering for all three semesters each year, beginning with Spring 1998 and continuing until Fall 2002.

  • A Cache Architecture for Extremely Unreliable Nanotechnologies

    Page(s): 182 - 197

    In the drive to create ever smaller transistors, conventional silicon CMOS devices are becoming more difficult to fabricate reliably as process size shrinks. New technologies are being investigated to replace silicon CMOS. While offering greater numbers of devices per unit area, all of these technologies are more difficult to fabricate, and more likely to fail in operation than current technologies. Nanotechnology research has identified the need for fault and defect tolerance at the architectural level so that future devices can be used in large-scale electronic circuits. This paper examines the problem of creating reliable caches using extremely unreliable technologies. We incorporate support logic (i.e., control, datapath, and self-test logic) into the analysis, and propose a novel Content Addressable Memory-based design incorporating "best practice" fault tolerant design techniques. The design requires 15 times the number of devices of a conventional design, but enables the use of device technologies with defect rates higher than 10⁻⁶, a three-order-of-magnitude improvement over non-fault-tolerant designs.

  • An Assessment of Testing-Effort Dependent Software Reliability Growth Models

    Page(s): 198 - 211

    Over the last several decades, many Software Reliability Growth Models (SRGM) have been developed to greatly facilitate engineers and managers in tracking and measuring the growth of reliability as software is being improved. However, some research work indicates that the delayed S-shaped model may not fit the software failure data well when the testing-effort spent on fault detection is not a constant. Thus, in this paper, we first review the logistic testing-effort function that can be used to describe the amount of testing-effort spent on software testing. We describe how to incorporate the logistic testing-effort function into both exponential-type, and S-shaped software reliability models. The proposed models are also discussed under both ideal, and imperfect debugging conditions. Results from applying the proposed models to two real data sets are discussed, and compared with other traditional SRGM to show that the proposed models can give better predictions, and that the logistic testing-effort function is suitable for incorporating directly into both exponential-type, and S-shaped software reliability models.

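As a rough illustration of the modeling idea summarized above (not the paper's exact formulation), the sketch below drives an exponential-type mean value function with a logistic testing-effort curve, using the standard forms W(t) = N/(1 + A*exp(-alpha*t)) and m(t) = a*(1 - exp(-r*(W(t) - W(0)))); all parameter values are hypothetical.

```python
import numpy as np

def logistic_testing_effort(t, N, A, alpha):
    """Cumulative testing effort consumed by time t: W(t) = N / (1 + A*exp(-alpha*t))."""
    return N / (1.0 + A * np.exp(-alpha * t))

def expected_faults(t, a, r, N, A, alpha):
    """Exponential-type SRGM driven by testing effort:
    m(t) = a * (1 - exp(-r * (W(t) - W(0)))), where a is the total fault content
    and r is the fault-detection rate per unit of testing effort."""
    W = logistic_testing_effort(t, N, A, alpha)
    W0 = logistic_testing_effort(0.0, N, A, alpha)
    return a * (1.0 - np.exp(-r * (W - W0)))

weeks = np.arange(0, 21, 5)   # calendar time points (hypothetical)
print(expected_faults(weeks, a=100.0, r=0.05, N=60.0, A=15.0, alpha=0.3))
```
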
  • Count Models for Software Quality Estimation

    Page(s): 212 - 222

    Identifying which software modules, during the software development process, are likely to be faulty is an effective technique for improving software quality. Such an approach allows a more focused software quality & reliability enhancement endeavor. The development team may also like to know the number of faults that are likely to exist in a given program module, i.e., a quantitative quality prediction. However, classification techniques such as the logistic regression model (lrm) cannot be used to predict the number of faults. In contrast, count models such as the Poisson regression model (prm), and the zero-inflated Poisson (zip) regression model can be used to obtain both a qualitative classification, and a quantitative prediction for software quality. In the case of the classification models, a classification rule based on our previously developed generalized classification rule is used. In the context of count models, this study is the first to propose a generalized classification rule. Case studies of two industrial software systems are examined, and for each we developed two count models (prm, and zip), and a classification model (lrm). Evaluating the predictive capabilities of the models, we concluded that the prm, and zip models have classification accuracies similar to that of the lrm. The count models are also used to predict the number of faults for the two case studies. The zip model yielded better fault prediction accuracy than the prm. As compared to other quantitative prediction models for software quality, such as multiple linear regression (mlr), the prm, and zip models have the unique property of yielding the probability that a given number of faults will occur in any module.

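To make the "probability of a given number of faults" property concrete, here is a minimal sketch of how a fitted Poisson regression model turns module metrics into such probabilities; the metrics and coefficients below are invented for illustration, not taken from the paper's case studies.

```python
from math import exp, factorial

def poisson_fault_probability(metrics, beta, k):
    """Poisson regression prediction: lambda = exp(beta . x), and
    P(module has exactly k faults) = exp(-lambda) * lambda**k / k!."""
    lam = exp(sum(b * x for b, x in zip(beta, metrics)))
    return exp(-lam) * lam ** k / factorial(k)

# Hypothetical module metrics: [intercept term, size in KLOC, cyclomatic complexity]
metrics = [1.0, 2.5, 7.0]
beta = [-1.2, 0.30, 0.08]   # illustrative coefficients, not fitted to any real data
print([round(poisson_fault_probability(metrics, beta, k), 3) for k in range(5)])
```
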
  • A Comprehensive Empirical Study of Count Models for Software Fault Prediction

    Page(s): 223 - 236

    Count models, such as the Poisson regression model, and the negative binomial regression model, can be used to obtain software fault predictions. With the aid of such predictions, the development team can improve the quality of operational software. The zero-inflated, and hurdle count models may be more appropriate when, for a given software system, the number of modules with faults is very small. Related literature lacks quantitative guidance regarding the application of count models for software quality prediction. This study presents a comprehensive empirical investigation of eight count models in the context of software fault prediction. It includes comparative hypothesis testing, model selection, and performance evaluation for the count models with respect to different criteria. The case study presented is that of a full-scale industrial software system. It is observed that the information obtained from hypothesis testing, and model selection techniques was not consistent with the predictive performances of the count models. Moreover, the comparative analysis based on one criterion did not match that of another criterion. However, with respect to a given criterion, the performance of a count model is consistent for both the fit, and test data sets. This ensures that, if a fitted model is considered good based on a given criterion, then the model will yield a good prediction based on the same criterion. The relative performances of the eight models are evaluated based on a one-way ANOVA model, and Tukey's multiple comparison technique. The comparative study is useful in selecting the best count model for estimating the quality of a given software system.

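The zero-inflation idea mentioned above is easy to state as a probability mass function: a module is fault-free with some extra probability, and otherwise its fault count is Poisson. A minimal sketch with purely hypothetical parameters:

```python
from math import exp, factorial

def zip_probability(pi_zero, lam, k):
    """Zero-inflated Poisson pmf: with probability pi_zero a module is 'structurally'
    fault-free; otherwise its fault count follows a Poisson(lam) distribution."""
    poisson_k = exp(-lam) * lam ** k / factorial(k)
    if k == 0:
        return pi_zero + (1.0 - pi_zero) * poisson_k
    return (1.0 - pi_zero) * poisson_k

# Hypothetical parameters: most modules fault-free, faulty ones average 2 faults.
print([round(zip_probability(pi_zero=0.7, lam=2.0, k=k), 3) for k in range(5)])
```
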
  • A Multi-Objective Software Quality Classification Model Using Genetic Programming

    Page(s): 237 - 245

    A key factor in the success of a software project is achieving the best-possible software reliability within the allotted time & budget. Classification models which provide a risk-based software quality prediction, such as fault-prone & not fault-prone, are effective in providing a focused software quality assurance endeavor. However, their usefulness largely depends on whether all the predicted fault-prone modules can be inspected or improved by the allocated software quality-improvement resources, and on the project-specific costs of misclassifications. Therefore, a practical goal of calibrating classification models is to lower the expected cost of misclassification while providing a cost-effective use of the available software quality-improvement resources. This paper presents a genetic programming-based decision tree model which facilitates a multi-objective optimization in the context of the software quality classification problem. The first objective is to minimize the "Modified Expected Cost of Misclassification", which is our recently proposed goal-oriented measure for selecting & evaluating classification models. The second objective is to optimize the number of predicted fault-prone modules such that it is equal to the number of modules which can be inspected by the allocated resources. Some commonly used classification techniques, such as logistic regression, decision trees, and analogy-based reasoning, are not suited for directly optimizing multi-objective criteria. In contrast, genetic programming is particularly suited for the multi-objective optimization problem. An empirical case study of a real-world industrial software system demonstrates the promising results, and the usefulness of the proposed model.

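As a hedged sketch of what a two-objective fitness evaluation might look like in this setting (the paper's actual "Modified Expected Cost of Misclassification" measure is not reproduced here), the function below scores a candidate classifier by its cost-weighted misclassifications, and by how far the number of flagged modules strays from the inspection budget; the labels and costs are made up.

```python
def two_objective_fitness(y_true, y_pred, cost_fp, cost_fn, inspection_budget):
    """Illustrative pair of objectives for a software-quality classifier:
    1) cost-weighted misclassifications, to be minimized, and
    2) gap between the number of modules flagged fault-prone and the number
       the allocated quality-improvement resources can actually inspect."""
    false_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    false_neg = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    flagged = sum(y_pred)
    return cost_fp * false_pos + cost_fn * false_neg, abs(flagged - inspection_budget)

# Tiny example: 1 = fault-prone, 0 = not fault-prone (hypothetical labels and costs).
y_true = [1, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
print(two_objective_fitness(y_true, y_pred, cost_fp=1.0, cost_fn=10.0, inspection_budget=3))
```

A genetic programming engine would evolve candidate decision trees and minimize both returned values simultaneously.
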
  • Failure Size Proportional Models and an Analysis of Failure Detection Abilities of Software Testing Strategies

    Page(s): 246 - 253

    This paper combines two distinct areas of research, namely software reliability growth modeling, and efficacy studies on software testing methods. It begins by proposing two software reliability growth models with a new approach to modeling. These models make the basic assumption that the intensity of failure occurrence during the testing phase of a piece of software is proportional to the s-expected probability of selecting a failure-causing input. The first model represents random testing, and the second model represents partition testing. These models provide the s-expected number of failures over a period, which in turn is used in analyzing the failure detection abilities of testing strategies. The specific areas of investigation are 1) the conditions under which partition testing yields optimal results, and 2) a comparison between partition testing and random testing in terms of efficacy.

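The contrast between random, and partition testing can be illustrated with the classic "probability of detecting at least one failure" calculation. This is a simplified textbook formulation, not the paper's failure size proportional models, and the failure rates below are hypothetical.

```python
def p_detect_random(theta, n):
    """Probability that random testing with n inputs hits at least one
    failure-causing input, when theta is the overall failure rate of the domain."""
    return 1.0 - (1.0 - theta) ** n

def p_detect_partition(thetas, ns):
    """Probability that partition testing detects at least one failure when
    subdomain i has failure rate thetas[i] and receives ns[i] test inputs."""
    miss = 1.0
    for theta, n in zip(thetas, ns):
        miss *= (1.0 - theta) ** n
    return 1.0 - miss

# Hypothetical domain: failures concentrated in one small subdomain.
thetas = [0.05, 0.001, 0.001]      # per-subdomain failure rates
weights = [0.1, 0.5, 0.4]          # probability random testing lands in each subdomain
overall_theta = sum(t * w for t, w in zip(thetas, weights))
print(p_detect_random(overall_theta, n=30))
print(p_detect_partition(thetas, ns=[10, 10, 10]))
```
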
  • A Scalable Path Protection Mechanism for Guaranteed Network Reliability Under Multiple Failures

    Page(s): 254 - 267

    We propose two versions of Link Failure Probability (LFP) based backup resource sharing algorithms, namely LFP based First-Fit algorithm, and LFP based Best-Fit algorithm for Generalized Multi-Protocol Label Switching networks. Customers' availability requirements are met by adjusting the availability of the protection paths with different sharing options. Information required for calculating the availability of both the working, and protection paths can be collected along the specific working, and protection paths, thus avoiding the requirement for flooding. This makes our algorithms scalable for a large network. Our algorithms work consistently against both single, and multiple failures. Furthermore, we propose extensions for the existing signaling protocols to demonstrate that our proposed algorithms require minimum changes to the existing protocols. Simulation results show that our proposal performs better than the conventional Dedicated Path Protection schemes in terms of Call Acceptance Rate, and Total Bandwidth Consumption. Finally, by comparing simulation results to analytical results for a simplified network, we provide some insights into the correctness, and efficiency of our proposed algorithms.

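A minimal sketch of the availability arithmetic behind such schemes, under the simplifying assumptions of independent link failures and a dedicated (unshared) protection path; sharing backup resources, as in the paper's LFP-based algorithms, would lower the effective protection-path availability. The link availabilities and the requirement are hypothetical.

```python
def path_availability(link_availabilities):
    """Availability of a path: the product of its link availabilities
    (links assumed to fail independently)."""
    a = 1.0
    for link_a in link_availabilities:
        a *= link_a
    return a

def protected_connection_availability(working_links, protection_links):
    """Availability of a connection that is up whenever the working path or its
    dedicated protection path is up."""
    a_w = path_availability(working_links)
    a_p = path_availability(protection_links)
    return 1.0 - (1.0 - a_w) * (1.0 - a_p)

requirement = 0.9999   # hypothetical customer availability requirement
a = protected_connection_availability([0.999, 0.998], [0.997, 0.995, 0.999])
print(a, a >= requirement)
```
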
  • Generalized Access Structure Congestion System

    Page(s): 268 - 274

    The k-out-of-n secret sharing schemes are effective, reliable, and secure methods to prevent a secret or secrets from being lost, stolen, or corrupted. The circular sequential k-out-of-n congestion (CSknC) system, based upon this type of secret sharing scheme, is presented for reconstructing secret(s) from any k servers among n servers in circular, sequential order. When a server is connected successfully, it will not be reconnected in later rounds until the CSknC system has k distinct, successfully connected servers. An optimal server arrangement in a CSknC system has been determined for the case where the n servers have known network connection probabilities for two states, i.e., congested, and successful. In this paper, we present: i) a generalized access structure congestion (GΓC) system that is based upon the generalized secret sharing scheme, and ii) an efficient connection procedure for the GΓC system in terms of the minimal number of server connection attempts. The k-out-of-n secret sharing schemes are considered as simple cases of the generalized secret sharing schemes, which implies that the GΓC system is a more general system than the CSknC system. We establish an iterative connection procedure for the new system. Simulation results are used to demonstrate that the iterative connection procedure is more efficient in terms of minimizing the number of connection attempts.

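A minimal simulation of the circular, sequential connection idea described above (the CSknC baseline, not the generalized GΓC procedure): servers are tried in a fixed circular order, already-connected servers are skipped, and a run stops once k distinct servers have been reached. The per-server success probabilities are hypothetical.

```python
import random

def csknc_attempts(success_probs, k, rng):
    """Simulate one run of circular, sequential connection attempts and return
    the number of attempts needed to reach k distinct, successfully connected servers."""
    n = len(success_probs)
    connected = set()
    attempts = 0
    i = 0
    while len(connected) < k:
        if i not in connected:                 # previously connected servers are skipped
            attempts += 1
            if rng.random() < success_probs[i]:
                connected.add(i)
        i = (i + 1) % n                        # move to the next server in circular order
    return attempts

probs = [0.9, 0.6, 0.8, 0.5, 0.7]              # hypothetical per-server success probabilities
runs = [csknc_attempts(probs, k=3, rng=random.Random(s)) for s in range(1000)]
print(sum(runs) / len(runs))                   # average attempts to reach k = 3 distinct servers
```
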
  • Network Reliability Optimization via the Cross-Entropy Method

    Page(s): 275 - 287

    Consider a network of unreliable links, each of which comes with a certain price, and reliability. Given a fixed budget, which links should be purchased in order to maximize the system's reliability? We introduce a new approach, based on the cross-entropy method, which can deal effectively with the constraints, and noise introduced when estimating the reliabilities via simulation, in this difficult combinatorial optimization problem. Numerical results demonstrate the effectiveness of the proposed technique.

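A toy sketch of the budget-constrained link-purchase problem attacked with a cross-entropy loop: purchase vectors are sampled from independent Bernoulli distributions, reliability is estimated by noisy Monte Carlo, and the sampling probabilities are pulled toward the elite samples. The network, prices, reliabilities, and CE settings are all hypothetical, and this is a sketch of the general idea rather than the paper's algorithm.

```python
import random

# Toy bridge network: nodes 0..3, terminal pair (0, 3); each candidate link has
# (u, v, price, reliability). All values are hypothetical.
LINKS = [(0, 1, 3, 0.90), (0, 2, 2, 0.80), (1, 2, 1, 0.70), (1, 3, 3, 0.90), (2, 3, 2, 0.85)]
BUDGET = 8
SRC, DST = 0, 3

def connected(up_links):
    """Is DST reachable from SRC using only the operating links? (simple DFS)"""
    adj = {}
    for (u, v, _, _) in up_links:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    stack, seen = [SRC], {SRC}
    while stack:
        x = stack.pop()
        if x == DST:
            return True
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return False

def estimate_reliability(purchase, rng, samples=200):
    """Noisy Monte Carlo estimate of terminal-pair reliability for a purchase vector."""
    bought = [l for l, buy in zip(LINKS, purchase) if buy]
    ok = 0
    for _ in range(samples):
        up = [l for l in bought if rng.random() < l[3]]
        ok += connected(up)
    return ok / samples

def cross_entropy_optimize(iters=30, pop=100, elite=10, rng=None):
    """CE loop: sample purchase vectors, discard over-budget ones, keep the elite
    by estimated reliability, and move the Bernoulli parameters toward the elite."""
    rng = rng or random.Random(2)
    p = [0.5] * len(LINKS)
    best = None
    for _ in range(iters):
        scored = []
        for _ in range(pop):
            x = [1 if rng.random() < pi else 0 for pi in p]
            if sum(l[2] * b for l, b in zip(LINKS, x)) > BUDGET:
                continue                                  # infeasible: over budget
            scored.append((estimate_reliability(x, rng), x))
        scored.sort(reverse=True)
        top = scored[:elite]
        if top and (best is None or top[0][0] > best[0]):
            best = top[0]
        for j in range(len(p)):                           # smoothed parameter update
            freq = sum(x[j] for _, x in top) / max(len(top), 1)
            p[j] = 0.7 * freq + 0.3 * p[j]
    return best

print(cross_entropy_optimize())   # (estimated reliability, purchase vector)
```
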
  • Efficient and Exact Reliability Evaluation for Networks With Imperfect Vertices

    Page(s): 288 - 300

    The factoring theorem, and BDD-based algorithms have been shown to be efficient reliability evaluation methods for networks with perfectly reliable vertices. However, the vertices, and the links of a network may fail in the real world. Imperfect vertices can be factored like links, but the complexity increases exponentially with their number. Exact algorithms based on the factoring theorem can therefore induce great overhead if vertex failures are taken into account. To solve the problem, a set of exact algorithms is presented to deal with vertex failures with little additional overhead. The algorithms can be used to solve terminal-pair, k-terminal, and all-terminal reliability problems in directed, and undirected networks. The essential variable is defined to be a vertex or a link of a network whose failure has the dominating effect on network reliability. The algorithms are so efficient that it takes less than 1.2 seconds on a 1.67 GHz personal computer to identify the essential variable of a network having 299 paths. When vertex failures in a 3 × 10 mesh network are taken into account, the proposed algorithms can induce as little as about 0.3% runtime overhead, while the best result from factoring algorithms incurs about 300% overhead.

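For very small networks, terminal-pair reliability with both imperfect links, and imperfect vertices can be computed by brute-force state enumeration. The sketch below is only a correctness reference point (the exponential blow-up is exactly what the paper's factoring/BDD-based algorithms avoid), and the element reliabilities are hypothetical.

```python
from itertools import product

VERTS = {0: 0.99, 1: 0.95, 2: 0.95, 3: 0.99}   # vertex up-probabilities (hypothetical)
LINKS = [((0, 1), 0.90), ((0, 2), 0.80), ((1, 3), 0.90), ((2, 3), 0.85), ((1, 2), 0.70)]
SRC, DST = 0, 3

def connected(up_verts, up_links):
    """Is DST reachable from SRC using only operating vertices and links?"""
    adj = {}
    for (u, v) in up_links:
        if u in up_verts and v in up_verts:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    stack, seen = [SRC], {SRC}
    while stack:
        x = stack.pop()
        if x == DST:
            return True
        for y in adj.get(x, ()):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return False

def terminal_pair_reliability():
    """Sum the probabilities of all element-state combinations that connect SRC to DST."""
    rel = 0.0
    vs = list(VERTS)
    for vstate in product([0, 1], repeat=len(vs)):
        up_verts = {v for v, s in zip(vs, vstate) if s}
        if SRC not in up_verts or DST not in up_verts:
            continue                       # a state with a failed terminal contributes nothing
        pv = 1.0
        for v, s in zip(vs, vstate):
            pv *= VERTS[v] if s else 1.0 - VERTS[v]
        for lstate in product([0, 1], repeat=len(LINKS)):
            pl, up_links = 1.0, []
            for ((edge, q), s) in zip(LINKS, lstate):
                pl *= q if s else 1.0 - q
                if s:
                    up_links.append(edge)
            if connected(up_verts, up_links):
                rel += pv * pl
    return rel

print(round(terminal_pair_reliability(), 4))
```
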
  • Testing Exponentiality Based on Kullback-Leibler Information With Progressively Type-II Censored Data

    Page(s): 301 - 307

    We express the joint entropy of progressively censored order statistics in terms of an incomplete integral of the hazard function, and provide a simple estimate of the joint entropy of progressively Type-II censored data. We then construct a goodness-of-fit test statistic based on Kullback-Leibler information with progressively Type-II censored data. Finally, by using Monte Carlo simulations, the power of the test is estimated, and compared against several alternatives under different progressive censoring schemes.

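For intuition, here is the analogous Kullback-Leibler-based exponentiality statistic for a complete (uncensored) sample, built from Vasicek's spacing estimate of entropy. The paper's contribution is the corresponding construction for progressively Type-II censored data, which this sketch does not attempt.

```python
import math
import random

def vasicek_entropy(sample, m):
    """Vasicek's spacing-based entropy estimate H_{m,n} for a complete sample (m < n/2)."""
    xs = sorted(sample)
    n = len(xs)
    total = 0.0
    for i in range(n):
        lo = xs[max(i - m, 0)]
        hi = xs[min(i + m, n - 1)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

def kl_exponentiality_statistic(sample, m):
    """KL-information statistic for testing exponentiality on a complete sample.
    With the exponential mean estimated by the sample mean, the statistic is
    approximately -H_{m,n} + ln(mean) + 1; large values suggest non-exponentiality."""
    mean = sum(sample) / len(sample)
    return -vasicek_entropy(sample, m) + math.log(mean) + 1.0

rng = random.Random(0)
exp_sample = [rng.expovariate(1.0) for _ in range(50)]
print(round(kl_exponentiality_statistic(exp_sample, m=4), 3))
```
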
  • A Lifetime Distribution With an Upside-Down Bathtub-Shaped Hazard Function

    Page(s): 308 - 311

    A three-parameter lifetime distribution with increasing, decreasing, bathtub, and upside-down bathtub shaped failure rates is introduced. The new model includes the Weibull distribution as a special case. A motivation is given using a competing risks interpretation when restricting its parametric space. Various statistical properties, and reliability aspects are explored; and the estimation of parameters is studied using the standard maximum likelihood procedures. Applications of the model to real data are also included.

  • Estimators for Reliability Measures in Geometric Distribution Model Using Dependent Masked System Life Test Data

    Page(s): 312 - 320

    Masked system life test data arises when the exact component which causes the system failure is unknown. Instead, it is assumed that there are two observable quantities for each system on the life test. These quantities are the system life time, and the set of components that contains the component leading to the system failure. The component leading to the system failure may be either completely unknown (general masking), isolated to a subset of system components (partial masking), or exactly known (no masking). In dependent masked system life test data, it is assumed that the probability of masking may depend on the true cause of system failure. Masking is usually due to limited resources for diagnosing the cause of system failures, as well as the modular nature of the system. In this paper, we present point, and interval maximum likelihood, and Bayes estimators for the reliability measures of the individual components in a multi-component system in the presence of dependent masked system life test data. The life time distributions of the system components are assumed to be geometric with different parameters. A simulation study is given in order to 1) compare the two procedures used to derive the estimators for the reliability measures of system components, 2) study the influence of the masking level on the accuracy of the estimators obtained, and 3) study the influence of the masking probability ratio on the accuracy of the estimators obtained.

  • Sequential Testing for Comparison of the Mean Time Between Failures for Two Systems

    Page(s): 321 - 331

    This study deals with simultaneous testing of two systems, one "basic" (subscript b), and the other "new" (n), both with an exponential distribution describing the times between failures. We test whether the MTBFn/MTBFb ratio equals a given value, versus whether it is smaller than the given value. These tests yield a binomial pattern. A recursive algorithm calculates the probability of a given combination of failure numbers in the systems, permitting rapid, accurate determination of the test characteristics. The influence of truncation of Wald's Sequential Probability Ratio Test (SPRT) on its characteristics is analysed, and relationships are derived for calculating the coordinates of the truncation apex (TA). A test planning methodology is presented for the most common cases.

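The "binomial pattern" mentioned above arises because, with both systems on test, each observed failure comes from the new system with a probability determined by the MTBF ratio, so the comparison reduces to a sequential test on a Bernoulli parameter. Below is a generic, untruncated Wald SPRT sketch for that reduced problem; p0, p1, and the observation sequence are hypothetical.

```python
import math

def binomial_sprt(observations, p0, p1, alpha=0.05, beta=0.10):
    """Wald's SPRT for a Bernoulli parameter: accumulate the log-likelihood ratio
    and stop when it crosses either boundary."""
    upper = math.log((1 - beta) / alpha)     # cross above: accept H1 (p = p1)
    lower = math.log(beta / (1 - alpha))     # cross below: accept H0 (p = p0)
    llr = 0.0
    for x in observations:                   # x = 1 if the failure came from the "new" system
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1"
        if llr <= lower:
            return "accept H0"
    return "continue testing"

# Hypothetical values: under H0 the MTBF ratio gives p0 = 0.5; under H1, p1 = 0.7.
print(binomial_sprt([1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1], p0=0.5, p1=0.7))
```
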
  • A Computational Model for Determining the Optimal Preventive Maintenance Policy With Random Breakdowns and Imperfect Repairs

    Page(s): 332 - 339

    We consider a system that is subject to random failures, and investigate the decision rule for performing renewal maintenance or preventive replacement (PR). This type of maintenance policy involves two decision variables. The first decision variable is the time between preventive replacements, or a fixed cycle time. To avoid unnecessary renewals or replacements at the end of a cycle, a cut-off age is introduced as the second decision variable. At the end of every cycle, if the system's virtual age is equal to or greater than the cut-off age, it will undergo a renewal or replacement; otherwise the renewal decision will be postponed until the end of the next cycle. Random failures can still occur, however, and the system receives emergency imperfect repairs (ER) at these times. Hence, within a PR cycle, a second decision time is identified. If an ER occurs between the start of a cycle and this second decision time, then the planned PR is still performed at the end of the cycle. However, if the first ER occurs after this second decision time, then the PR at the end of the cycle is skipped, and the next planned PR takes place at the end of the subsequent cycle. With this simple mechanism, PRs which follow too closely after an ER are avoided, thus saving unnecessary expense. We develop a computational model to determine the optimal maintenance policy with these two decision variables.

  • Reliability Analysis on the δ-Shock Model of Complex Systems

    Page(s): 340 - 348

    We investigate the δ-shock model of complex systems consisting of n i.i.d. components. We first obtain a general lifetime distribution for the δ-shock model of a general complex system by reducing the system to a linear combination of parallel systems. We then consider coherent system structures including series, parallel, and k-out-of-n systems, and derive some useful results including reliability bounds, bounds on the mean lifetime, limiting distributions, and Laplace-Stieltjes transforms.

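A small Monte Carlo sketch of the δ-shock mechanism under its standard definition (a component fails at the first shock arriving within δ of the previous one), combined with i.i.d. components in a k-out-of-n structure. The shock rate, δ, and system size are hypothetical, and this is simulation rather than the paper's closed-form analysis.

```python
import random

def delta_shock_lifetime(rate, delta, rng):
    """Lifetime of one component under the delta-shock model: shocks arrive as a
    Poisson process with the given rate, and the component fails at the first shock
    whose gap from the previous shock is shorter than delta."""
    t = rng.expovariate(rate)          # time of the first shock
    while True:
        gap = rng.expovariate(rate)
        t += gap
        if gap < delta:
            return t

def system_lifetime(n, k, rate, delta, rng):
    """k-out-of-n system of i.i.d. components: the system works while at least k
    components work, so its lifetime is the (n-k+1)-th smallest component lifetime."""
    lifetimes = sorted(delta_shock_lifetime(rate, delta, rng) for _ in range(n))
    return lifetimes[n - k]

rng = random.Random(0)
runs = [system_lifetime(n=5, k=3, rate=1.0, delta=0.2, rng=rng) for _ in range(2000)]
print(round(sum(runs) / len(runs), 3))   # Monte Carlo estimate of the mean system lifetime
```
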
  • Identifying Mechanisms That Highly Accelerated Tests Miss

    Page(s): 349 - 359

    Extrapolating reliability from accelerated tests for technologies without field data always carries the risk that the accelerated tests do not show the mechanisms which dominate at operating conditions. In statistical terminology, such accelerated testing carries a risk of confounding. For linear models, there is theory which allows one to determine which models are confounded with others. This paper develops analogous theory for a simple kind of confounding model, evanescent processes, when kinetics is used as the basis of modeling accelerated testing. A heuristic for identifying simple evanescent processes that can give rise to disruptive alternatives (alternative models that reverse the decision which would be made based on modeling to date) is outlined. Then, we develop activity mapping, a tool for quantitatively identifying the parameter values of that evanescent process which can result in disruptive alternatives. Finally, we see how activity mapping can be used to identify experiments which help reveal such disruptive evanescent processes.

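A generic kinetics illustration of the risk described above, not the paper's activity-mapping tool: with Arrhenius rates, a low-activation-energy mechanism can dominate degradation at use temperature yet be swamped by a high-activation-energy mechanism at the accelerated-test temperature. All prefactors and activation energies are hypothetical.

```python
import math

K_BOLTZ_EV = 8.617e-5   # Boltzmann constant in eV/K

def arrhenius_rate(prefactor, ea_ev, temp_k):
    """Simple Arrhenius degradation rate: rate = A * exp(-Ea / (k*T))."""
    return prefactor * math.exp(-ea_ev / (K_BOLTZ_EV * temp_k))

T_USE, T_STRESS = 328.0, 423.0   # 55 C use conditions vs. 150 C accelerated test

# Two hypothetical mechanisms: high activation energy (dominates the hot test)
# vs. low activation energy (dominates at use conditions).
for name, prefactor, ea in [("high-Ea mechanism", 1e9, 0.9), ("low-Ea mechanism", 1e1, 0.3)]:
    r_use = arrhenius_rate(prefactor, ea, T_USE)
    r_stress = arrhenius_rate(prefactor, ea, T_STRESS)
    print(f"{name}: rate at use = {r_use:.2e}, rate under stress = {r_stress:.2e}")
```
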
  • Corrections on “Failure Transition Distance-Based Importance Sampling Schemes for the Simulation of Repairable Fault-Tolerant Computer Systems” [Jun 06 207-236]

    Page(s): 360

    Various corrections to the above titled paper (ibid., vol. 55, no. 2, pp. 207-236) are presented. The corrections do not propagate to any other part of the paper and do not affect the correctness of the experimental results reported in the paper.

  • Erratum [Dec 05 612-616]

    Page(s): 360

    Corrections to (6) in "A simple procedure for Bayesian estimation of the Weibull distribution" (vol. 54, pp. 612-616, Dec 05) are presented here.

  • IEEE Transactions on Reliability information for authors

    Page(s): 361 - 362
  • Special Issue on Packaging Reliability

    Page(s): 363
  • Reliability Society to Offer Scholarships

    Page(s): 364

Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong