
Annual Reliability and Maintainability Symposium, 1999. Proceedings

Date: 18-21 Jan. 1999


Displaying Results 1-25 of 72
  • 1999 Annual Reliability and Maintainability Symposium [front matter]

    Page(s): i - x
    PDF (694 KB)
    Freely Available from IEEE
  • Annual Reliability and Maintainability Symposium, 1999 Proceedings (Cat. No.99CH36283)

    PDF (259 KB)
    Freely Available from IEEE
  • Author index

    Page(s): xvii - xviii
    PDF (94 KB)
    Freely Available from IEEE
  • PANEL: Advisory Board - What Are The Successful Companies Doing?

    Page(s): 219 - 223
    PDF (621 KB)
    Freely Available from IEEE
  • Author index

    Page(s): cx
    PDF (1401 KB)
    Freely Available from IEEE
  • Software reliability cases: the bridge between hardware, software and system safety and reliability

    Page(s): 396 - 402
    PDF (696 KB)

    High integrity/high consequence systems must be safe and reliable; hence it is only logical that both software safety and software reliability cases should be developed. Risk assessments in safety cases evaluate the severity of the consequences of a hazard and the likelihood of its occurrence; that likelihood is directly related to system and software reliability predictions. Software reliability cases, as promoted by SAE JA1002 and JA1003, provide a practical approach to bridging the gap between hardware reliability, software reliability, and system safety and reliability by using a common methodology and information structure. They also provide early insight into whether a project is on track to meet its stated safety and reliability goals, while supporting an informed assessment by regulatory and/or contractual authorities.
  • Functional modeling of complex systems with applications

    Page(s): 418 - 425
    PDF (668 KB)

    This paper describes the use of a functional modeling approach called goal tree-success tree (GTST), together with the master logic diagram (MLD), as a framework for modeling complex physical systems. A function-based lexicon for classifying the most common elements of engineering systems is proposed. The classification is based on the conservation laws that govern engineering systems, and it is argued that functional descriptions based on conservation laws provide a simple and rich vocabulary for modeling complex engineering systems. Several examples of using the GTST-MLD framework in reliability and risk analysis, together with a tool for automating the analysis, are presented.
  • Disturbance analysis using a system bond-graph model

    Page(s): 358 - 364
    PDF (556 KB)

    Conventional failure effect analysis depends on the expert's judgement and experience, which may lead to erroneous results. To address this problem, this paper proposes a failure effect analysis using bond-graphs, which represent system behavior in a unified way from the viewpoint of energy flow. The system state equations obtained from a system bond-graph represent the system behavior in the time domain, and process variables can be expressed as functions of state variables and input variables. Deviations caused by a component failure or an external disturbance can be evaluated qualitatively using a tree-graph expression of the system state equations. Not only the final deviations remaining under the component failure condition, but also the initial transient deviations just after the failure occurs, can be obtained by propagating steady-state conditions or assumed deviations along the tree graph. An illustrative example of a water flow control system shows the details of the proposed method. (A toy sign-propagation sketch follows this entry.)
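
Purely to illustrate the sign-propagation idea the abstract describes (and not the paper's bond-graph machinery), the Python sketch below propagates qualitative deviations, with +1/-1/0 for increase/decrease/nominal; the valve/level/flow relation and its signs are invented here.

```python
# Qualitative deviation propagation: +1 = increase, -1 = decrease,
# 0 = nominal, None = indeterminate. The relation "flow rises with valve
# opening and falls with level feedback" is an invented stand-in for
# relations a bond-graph model would supply.

def s_add(a, b):
    if a is None or b is None:
        return None
    if a == 0:
        return b
    if b == 0:
        return a
    return a if a == b else None   # opposing deviations cannot be resolved

def s_neg(a):
    return None if a is None else -a

valve, level = +1, 0               # valve fails open; tank level nominal
flow = s_add(valve, s_neg(level))  # flow ~ +valve - level (signs only)

labels = {1: "increase", -1: "decrease", 0: "no deviation", None: "indeterminate"}
print("flow deviation:", labels[flow])
```
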
  • Medical diagnostic device reliability improvement and prediction tools-lessons learned

    Page(s): 29 - 31
    PDF (280 KB)

    To be successful in the medical diagnostic industry, companies must strive to be the most cost-competitive supplier. Abbott Laboratories has traditionally been diligent in pursuing several avenues of cost reduction, including raw-material supplier contracting, outsourcing, direct-labor reduction through design for manufacturability, and inventory and scrap reduction. Today, Abbott focuses increased attention on reliability improvement as a means to cost efficiency. While traditional methods of cost improvement are never seen by the final customer, reliability improvements result in a more satisfied customer. Abbott has used lessons learned from past product development to create in-house reliability engineering tools. These tools help predict the reliability of products under development; more importantly, they help Abbott build reliability into the product.
  • Chemical-process design and maintenance optimization under uncertainty: a simultaneous approach

    Page(s): 78 - 83
    PDF (648 KB)

    Research work has emphasized the importance of reliability and maintenance in chemical process operation and the benefits of achieving high process availability by optimizing the trade-offs between maintenance costs and plant production volumes. This has motivated the development of contemporary techniques and tools for availability assessment of process systems that go beyond traditional practice by focusing on the interactions of reliability and maintenance optimization with detailed process operation and its dynamic, continuously changing environment. Furthermore, at the design stage of chemical plants, operability considerations spanning the plant's life cycle, such as flexibility, reliability and maintainability, are not typically included. The main reason is the lack of an integrated design framework that enables process engineers to examine the various operability factors in conjunction with cost in a systematic and quantified way during process design development. This work presents theoretical and computational developments aimed at integrating maintenance optimization into optimal life-cycle process design and development under uncertainty. In particular, the impact of uncertainty on the optimal balance between maintenance costs and benefits, as well as the interactions of process design and maintainability in the presence of uncertainty, are clearly shown and quantified. (A simplified maintenance-interval sketch follows this entry.)
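
The paper couples maintenance optimization with life-cycle process design under uncertainty, which is far beyond a snippet. As a much simpler stand-in for the maintenance cost/benefit balance it discusses, this sketch minimizes the classical age-replacement cost rate for a single Weibull wear-out component; all parameter values are invented, and this is not the authors' formulation.

```python
import numpy as np

# Classic age-replacement trade-off (invented numbers): replace at age T at
# planned cost Cp, or on failure at cost Cf; minimize the long-run cost rate
#   g(T) = [Cp*R(T) + Cf*(1 - R(T))] / integral_0^T R(t) dt
# for a Weibull wear-out component.

beta, eta = 2.5, 1000.0            # Weibull shape and characteristic life (h)
Cp, Cf = 1.0, 10.0                 # planned vs. failure replacement cost

def cost_rate(T, n=2000):
    t = np.linspace(0.0, T, n)
    R = np.exp(-(t / eta) ** beta)             # survivor function
    cycle_cost = Cp * R[-1] + Cf * (1.0 - R[-1])
    cycle_length = np.trapz(R, t)              # expected cycle duration
    return cycle_cost / cycle_length

grid = np.linspace(100.0, 3000.0, 300)
best_T = grid[np.argmin([cost_rate(T) for T in grid])]
print(f"cost-minimizing replacement interval ~ {best_T:.0f} h")
```
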
  • A new method for obtaining the TTT plot for a censored sample

    Page(s): 112 - 117
    PDF (576 KB)

    For a censored sample in which withdrawals or suspensions are present, the Kaplan-Meier estimation (KME) method, also called the product-limit estimation (PLE) method, and the piecewise exponential estimation (PEXE) method have typically been used to estimate both the survivor function (or cdf) and the total time on test (TTT) statistic at each observed failure time. This paper presents a new method, the mean-order-number (MON) method, for estimating the TTT statistics and the TTT plot for a censored sample. The method is illustrated using Dodson's example, as quoted by numerous authors, and a comparison is made among MON, KME and PEXE. The MON method is not only easy to use; its results are also consistent with those of KME and PEXE. (A sketch of the classical TTT construction follows this entry.)
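
For background, this sketch computes the classical scaled TTT plot for a complete (uncensored) sample, the construction the paper extends; the MON adjustment for withdrawals and suspensions is the paper's contribution and is not reproduced here.

```python
# Scaled total-time-on-test (TTT) plot for a COMPLETE sample:
# T(t_i) = sum_{j<=i} (n - j + 1)(t_j - t_{j-1}),  plot (i/n, T(t_i)/T(t_n)).

def ttt_plot_points(failure_times):
    t = sorted(failure_times)
    n = len(t)
    ttt, prev, points = 0.0, 0.0, []
    for i, ti in enumerate(t, start=1):
        ttt += (n - i + 1) * (ti - prev)   # time on test accrued in (t_{i-1}, t_i]
        prev = ti
        points.append((i / n, ttt))
    total = points[-1][1]
    return [(u, v / total) for u, v in points]

print(ttt_plot_points([17.0, 28.0, 49.0, 61.0, 98.0]))   # made-up failure times
```
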
  • KB3: computer program for automatic generation of fault trees

    Page(s): 389 - 395
    PDF (596 KB)

    KB3, formerly named EXPRESS, is a knowledge-based workbench that assists in building reliability models. At EDF, KB3 is used for the safety studies of nuclear power plants. It is founded on knowledge bases describing generic classes of components, with their behaviour and failure modes. This description results from a generic functional analysis (FA) and a failure mode and effects analysis (FMEA) of the systems, and is written in a dedicated language called FIGARO, developed at EDF. Using these classes of components, the user can describe the studied system in a graphical system editor and generate fault trees for different missions of the system. The user can also add specific knowledge about the system and is thus not limited by the generic knowledge base. KB3 may be linked with different codes for the quantification of the generated fault trees. (A generic fault-tree quantification sketch follows this entry.)
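
KB3 generates fault trees from FIGARO knowledge bases and hands them to external codes for quantification. Purely as a generic illustration of that final quantification step, here is a minimal evaluation of a small fault tree with independent basic events; the tree and the probabilities are invented and are not taken from any EDF model.

```python
# Minimal quantification of a fault tree with independent basic events.
# Gates: ("and", ...children), ("or", ...children); leaves: ("basic", name).

def prob(node, p):
    kind = node[0]
    if kind == "basic":
        return p[node[1]]
    child_probs = [prob(child, p) for child in node[1:]]
    if kind == "and":                       # all children must occur
        out = 1.0
        for q in child_probs:
            out *= q
        return out
    if kind == "or":                        # at least one child occurs
        out = 1.0
        for q in child_probs:
            out *= 1.0 - q
        return 1.0 - out
    raise ValueError(f"unknown gate {kind!r}")

top = ("or",
       ("and", ("basic", "pump_A_fails"), ("basic", "pump_B_fails")),
       ("basic", "valve_stuck"))
print(prob(top, {"pump_A_fails": 1e-2, "pump_B_fails": 2e-2, "valve_stuck": 1e-3}))
```
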
  • Systematic approach to modern corded telephone early commutation failure analysis

    Page(s): 84 - 90
    PDF (1296 KB)

    Typical early failures of modern corded telephones, characterized by their inability to provide correct commutation, were analyzed. The specifics of reliability and failure analysis for advanced mass-consumer communication electronics are discussed. Complementary theoretical and experimental methods were applied to evaluate dialing reliability, locate defects, and reveal their root causes. A regular fault tree analysis procedure was employed; however, its limitations are discussed. Results of the early failure analysis related to open and short circuits of flexible flat ribbon cables and printed circuit boards, dialing button dimensions, mechanical assembly tolerances and soldering, and keypad contamination are presented. Various fault interactions were found to be major contributors to failure aggravation. Most of the observed early dialing failures were related to coarse mechanical and electrochemical causes rather than to subtle component or material degradation. Recommendations for corrective action were developed.
  • Experience in using MEADEP

    Page(s): 158 - 164
    PDF (784 KB)

    This paper reports some of the authors' experience in using MEADEP, a newly developed measurement-based dependability evaluation tool that includes both data analysis and modeling functions. Several issues are discussed: identification of time-between-outages and time-to-repair distributions; the need for more graphical model forms; and consistency between parameter estimation and model evaluation algorithms. The identified distribution functions are valuable for detailed analysis and realistic modeling. The significance of discovering the inconsistency between the failure rate estimation and availability evaluation algorithms is not limited to MEADEP, because both algorithms are commonly used. The need for more graphical model forms and the demand for better user interfaces are driving improvements to MEADEP. These issues provide insight into the roles and future directions of this kind of tool.
  • System reliability modeling considering the dependence of component environmental influences

    Page(s): 214 - 218
    PDF (456 KB)

    A system reliability modeling procedure is described and demonstrated for the case when component failure times are statistically correlated because of the components' shared environmental exposure within a system. When component failure times are correlated, independence assumptions are not valid, and many common reliability modeling practices are therefore inappropriate. If component reliability is influenced by environmental exposure, then the components within a system are likely to have correlated times to failure, because all components within a system are influenced similarly by the system-level environmental stress profile. This scenario is often overlooked when failure data are analyzed for a homogeneous population of parts that have experienced nonhomogeneous usage profiles. The model presented here is based on proportional hazards models for component reliability and a discretized approximation of the joint probability density function of the system environmental stress variables. The discretization approach is mathematically convenient, accurate, and offers several pragmatic advantages over alternative computational approaches. A hypothetical three-component series system is analyzed, and the results are compared to two common approximations: (1) assuming component independence and (2) using average values of the environmental stresses. The results indicate that the described approach is convenient and has the potential to scale up to large problems. (A small numerical sketch of the discretization idea follows this entry.)
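
A minimal numerical rendering of the discretization idea, assuming proportional-hazards components with exponential baselines and a three-point distribution for a single shared stress; all coefficients are invented, and the paper's models are richer.

```python
import numpy as np

# Series-system reliability averaged over a discretized shared-stress
# distribution. Given stress z, components are independent with
#   R_i(t | z) = exp(-h0_i * exp(beta_i * z) * t)
# (proportional hazards with an exponential baseline). Numbers are invented.

stress_levels = np.array([0.5, 1.0, 1.5])     # discretized shared stress z
stress_probs  = np.array([0.25, 0.50, 0.25])  # probability mass at each z

base_hazard = np.array([1e-4, 2e-4, 5e-5])    # h0_i, failures per hour
beta        = np.array([0.8, 1.1, 0.6])       # PH stress coefficients

def system_reliability(t):
    hazards = base_hazard[None, :] * np.exp(np.outer(stress_levels, beta))
    rel_given_z = np.prod(np.exp(-hazards * t), axis=1)   # series system
    return float(stress_probs @ rel_given_z)              # average over z

print(system_reliability(1000.0))
```
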
  • Evaluating the risk of industrial espionage

    Page(s): 230 - 237
    PDF (648 KB)

    A methodology for estimating the relative probabilities of different compromise paths by which insider and visitor intelligence collectors may obtain protected information has been developed, based on an event-tree analysis of the intelligence collection operation. The analyst identifies target information and the ultimate users who might attempt to gain that information, then uses an event tree to develop a set of compromise paths. Probability models are developed for each compromise path, with parameters based on expert judgment or historical data on security violations. The resulting probability estimates indicate the relative likelihood of the different compromise paths and provide an input for security resource allocation. Industrial facilities face insider information theft as well as compromise by visitors. The method is direct and simple to use, and should be adaptable to most industrial situations with modifications to fit the specific case. When historical data are not available, expert judgment can be used as input. Even in the absence of quantitative data, considerable insight into espionage risk can be gained by developing the compromise paths and their attendant probability models; these qualitative insights may be the greatest benefit of applying this methodology. (A toy path-probability computation follows this entry.)
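
A toy version of the path computation described: each compromise path is a sequence of branch events whose expert-judged probabilities multiply into a path probability, which can then be ranked. The path names and numbers are hypothetical.

```python
import math

# Each compromise path is an ordered set of branch events (e.g. gain access,
# form intent, succeed undetected) with expert-judged probabilities; the
# path probability is their product, and paths can then be ranked.

paths = {
    "insider copies design file":    [0.10, 0.50, 0.30],
    "visitor photographs prototype": [0.40, 0.05, 0.20],
    "insider mails media offsite":   [0.10, 0.50, 0.10],
}

for p, name in sorted(((math.prod(b), n) for n, b in paths.items()), reverse=True):
    print(f"{p:.4f}  {name}")
```
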
  • Simulation of computer network reliability with congestion

    Page(s): 208 - 213
    PDF (544 KB)

    A computer network is generally modeled by a graph consisting of nodes (computers) and links (communication lines). In practical situations the links have finite capacity, and excess messages are stored in a finite-length queue at the nodes. A link is congested if the number of packets waiting to be transmitted over it exceeds its maximum queue length. The network can fail due to excessive delays in a queue (congestion) or link failures that isolate node pairs. Various routing rules (algorithms) are stored at the nodes to continue communication via alternate paths when congestion and/or link failures occur. The combined effects of congestion and routing are difficult to analyze. This paper describes simulation programs for packet-switching networks that model congestion, routing and link failures, and reports the results of reliability studies performed using these programs. The simulation results are analyzed, and conclusions are drawn on how network reliability is affected by different congestion factors. (A toy single-link congestion simulation follows this entry.)
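
Not the authors' simulator, but a sketch of the basic congestion mechanism they model: a single link with a finite buffer, slotted Bernoulli arrivals and service, and a congestion flag when the queue exceeds its maximum length. All rates are invented.

```python
import random

def congested_fraction(slots=100_000, p_arrival=0.30, p_service=0.35,
                       max_queue=20, seed=1):
    """Fraction of time slots in which the link's queue exceeds max_queue."""
    rng = random.Random(seed)
    queue, congested = 0, 0
    for _ in range(slots):
        if rng.random() < p_arrival:      # a packet arrives this slot
            queue += 1
        if queue and rng.random() < p_service:
            queue -= 1                    # one packet transmitted
        if queue > max_queue:
            congested += 1                # link is congested this slot
    return congested / slots

print(f"fraction of time congested: {congested_fraction():.4f}")
```
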
  • Assessment of a safety-critical system including software: a Bayesian belief network for evidence sources

    Page(s): 142 - 150
    PDF (824 KB)

    Assessment of safety-critical systems including software cannot rely only on conventional techniques based on statistics and dependability models. In such systems, the predominant faults are usually design faults, which are very hard to predict. The assessment can therefore only be qualitative, and it is performed by experts who take into account various evidence sources. The aim of the SERENE European project is to improve the understandability and repeatability of such assessments by representing the experts' reasoning in a mathematical model, a Bayesian belief network (BBN). This paper presents the BBN built by EDF to model one of its assessment approaches, valid for products for which EDF writes the requirements specification and then monitors the development carried out by an external supplier. Before it yields reliable forecasts, this kind of model will no doubt require many years of calibration, comparing its predictions with the real, observed safety level of the evaluated systems. In the short term, however, the authors believe such models can provide a rationale for discussions between experts. They will also help in determining the most influential variables in the design process of a system, a necessary prerequisite for setting up any kind of field experience collection. (A toy BBN update follows this entry.)
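
A toy BBN update in the spirit of the approach: expert judgement encoded as conditional probability tables, revised by one observed evidence source via Bayes' rule. The structure and numbers are invented for illustration, not EDF's model.

```python
# Two-node belief network: latent product Quality -> observed ReviewFindings.
# CPT numbers are invented expert judgements.

p_quality = {"good": 0.7, "poor": 0.3}
p_findings_given_quality = {
    ("few", "good"): 0.8, ("many", "good"): 0.2,
    ("few", "poor"): 0.3, ("many", "poor"): 0.7,
}

def posterior_quality(observed):
    joint = {q: p_quality[q] * p_findings_given_quality[(observed, q)]
             for q in p_quality}
    z = sum(joint.values())
    return {q: v / z for q, v in joint.items()}

print(posterior_quality("many"))   # belief after many review findings
```
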
  • New analysis methods of multilevel accelerated life tests

    Page(s): 38 - 42
    PDF (448 KB)

    The development of multilevel accelerated life stress tests has been hindered by the difficulty of performing adequate and satisfactory data analysis. When 2, 3 or 4 levels of stress exist, even a computer analysis can be formidable. Nelson (1990) showed that even a simple analysis of seven levels of a single stress can become quite a challenge. This paper describes a new computerized software approach, recently made available to the reliability engineer, that makes the analysis of multilevel stresses much easier. Complete or suspended data with two different stresses and multiple levels can now be analyzed using maximum likelihood estimation to achieve best-fit solutions. In addition, confidence estimates can be made, and the detailed stress-life relationship of the important parameters may be investigated. These data analysis solutions will permit the reliability engineer wider latitude in selecting test stresses, methods and conditions when testing future products. (A censored-data MLE sketch follows this entry.)
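
The specific software the paper describes is not reproduced here; as a sketch of the underlying computation, this fits a Weibull life model with a log-linear (inverse-power-law) stress-life relationship to complete and suspended data by maximum likelihood. The data and starting values are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Weibull life with eta(V) = exp(a + b*log V) (inverse power law when b < 0),
# common shape beta, fitted to failures (d=1) and suspensions (d=0) by
# maximum likelihood.

t = np.array([120., 180., 250., 300., 60., 90., 140., 400.])  # hours
V = np.array([2.0,  2.0,  2.0,  2.0,  3.0, 3.0, 3.0,  2.0])   # stress level
d = np.array([1,    1,    1,    0,    1,   1,   1,    0])     # failure flag

def negloglik(params):
    log_beta, a, b = params
    beta = np.exp(log_beta)                  # keep the shape positive
    eta = np.exp(a + b * np.log(V))          # stress-dependent scale
    z = (t / eta) ** beta
    logf = np.log(beta) + (beta - 1) * np.log(t) - beta * np.log(eta) - z
    logS = -z                                # suspensions contribute log S(t)
    return -np.sum(d * logf + (1 - d) * logS)

fit = minimize(negloglik, x0=[0.0, 6.0, -1.0], method="Nelder-Mead")
print("beta =", np.exp(fit.x[0]), "| a, b =", fit.x[1], fit.x[2])
```
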
  • Practical treatment-methods for adaptive components in the fault-tree analysis

    Page(s): 97 - 104
    PDF (628 KB)

    In this paper, the dependability analysis of functionally reconfigurable logic is discussed. Such logic can change its function when a failure has occurred and has been detected. Based on the fault tree technique, several methods that support system components with adaptive properties are illustrated and investigated: macro-models, Markov chains, and multi-state models. The paper also proposes a multi-state-based procedure well suited to reconfiguration. Closed-form formulas are applied in this procedure, as they ensure simpler and faster analysis for a class of systems than other procedures. (A small Markov-chain sketch follows this entry.)
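
A minimal example of one technique the paper compares, the Markov chain: a component that reconfigures onto a backup function when a failure is detected (with imperfect coverage), solved by a matrix exponential. The states, rates and coverage below are invented.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = primary active, 1 = reconfigured onto backup, 2 = failed
# (absorbing). A primary failure is detected (and reconfigured around) with
# probability `coverage`; otherwise it is immediately system-fatal.

lam_p, lam_b, coverage = 1e-3, 2e-3, 0.95    # failure rates (1/h), coverage

Q = np.array([
    [-lam_p, coverage * lam_p, (1 - coverage) * lam_p],
    [0.0,    -lam_b,           lam_b],
    [0.0,     0.0,             0.0],
])

p0 = np.array([1.0, 0.0, 0.0])
for t in (100.0, 1000.0, 5000.0):
    p = p0 @ expm(Q * t)                     # state distribution at time t
    print(f"t = {t:6.0f} h   reliability = {1.0 - p[2]:.5f}")
```
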
  • Design for RMS/LCC: the development of a design model

    Page(s): 330 - 335
    PDF (608 KB)

    This paper presents the results of a Ph.D. project aimed at developing and implementing a model (called SMARD) for incorporating reliability, availability, maintainability, supportability and life-cycle cost (RMS/LCC) aspects into the design and development process of large-scale, complex technical systems. The emphasis lies on design and development processes for one-of-a-kind chemical process plants, small-to-medium-series aircraft, and mass-production automotive systems. The SMARD model was developed from several case studies conducted in Europe within these three branches of industry. Within a systems engineering approach, upfront attention to RMS/LCC leads to faster, more affordable design, production or manufacturing, operation and support of high-quality systems that satisfy all customer requirements.
  • Simplifying the solution of redundancy allocation problems

    Page(s): 190 - 194
    PDF (432 KB)

    Redundancy allocation is a useful technique for designing systems that have high levels of reliability while satisfying limitations on cost, weight, volume, etc. Performing redundancy allocation typically involves formulating and solving an appropriate mathematical programming problem. A wide variety of these problems have been formulated in the literature, and a large number of solution techniques have been proposed; as a result, redundancy allocation problems are typically perceived to be quite difficult to solve. However, very little analysis has been performed to quantify the true difficulty of these problems. In this paper, four basic redundancy allocation problems are addressed, and strategies are defined for solving them to optimality by either partial or total enumeration. In all four cases, the results indicate that these redundancy allocation problems are very simple to solve. These enumeration strategies should provide insight for developing simpler strategies for more complex redundancy allocation problems. (A total-enumeration sketch follows this entry.)
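
A total-enumeration sketch in the spirit of the paper: maximize series-system reliability with n_i parallel copies per subsystem under a cost budget. The component data are hypothetical, and the paper's four formulations are not reproduced exactly.

```python
from itertools import product

# Maximize series-system reliability with n_i parallel copies in subsystem i,
# subject to a budget, by total enumeration.

r = [0.80, 0.90, 0.85]          # component reliability per subsystem
c = [3.0,  5.0,  4.0]           # component cost per subsystem
budget, max_copies = 30.0, 4

best = None
for n in product(range(1, max_copies + 1), repeat=len(r)):
    cost = sum(ci * ni for ci, ni in zip(c, n))
    if cost > budget:
        continue
    rel = 1.0
    for ri, ni in zip(r, n):
        rel *= 1.0 - (1.0 - ri) ** ni       # subsystem of ni parallel copies
    if best is None or rel > best[0]:
        best = (rel, n, cost)

print("best reliability %.5f with allocation %s at cost %.1f" % best)
```
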
  • Effective time on station (ETOS): a new operational mission reliability parameter

    Page(s): 118 - 121
    PDF (276 KB)

    Effective time on station (ETOS) is a new reliability parameter for systems that are designed to provide coverage (surveillance, defense, etc.) for a specified amount of time. The ETOS parameter applies to systems that provide some type of coverage, as opposed to systems whose missions are defined by completion events. ETOS measures the ability of a system to remain operational for the scheduled on-station time. (A one-line ETOS computation follows this entry.)
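
A one-line rendering of the parameter as the abstract defines it, the ratio of achieved operational on-station time to scheduled on-station time; what counts as "operational" is the paper's subject, and the figures below are hypothetical.

```python
def effective_time_on_station(operational_hours, scheduled_hours):
    """ETOS: achieved operational on-station time over scheduled time."""
    return operational_hours / scheduled_hours

print(effective_time_on_station(712.0, 720.0))   # hypothetical month on station
```
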
  • A Bayes approach for uncertainty bounds for a stochastic analysis of the Space Shuttle main engine

    Page(s): 7 - 12
    PDF (772 KB)

    In this paper, a Bayesian approach is presented to derive a time-dependent parametric probability function for sudden critical structural failure of the Space Shuttle Main Engine (SSME) Block I design, including uncertainty bounds at any given time. The probability function was assumed to have a Weibull form with parameters β and η. Under this assumption, both prior belief about this type of failure (based on a component-based analysis) and the information available from the complex SSME test data set were used as input to the Bayesian update. The test data contain both actual failures and tests that were stopped before failure (censored), which made examining the data set a challenging endeavor. The outcome of the Bayesian update formed the posterior belief in the parameters of the probability function. It was concluded that, under the assumption of a Weibull distribution for sudden critical structural failure, the uncertainty about the parameters is strongly reduced by combining the prior belief with the test data. (A grid-posterior sketch follows this entry.)
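
A sketch with the same structure on a coarse grid: a Weibull likelihood over failures plus censored run times, combined with a prior (here flat on the grid, an assumption) to give a posterior over (β, η). The data are invented, not SSME test results.

```python
import numpy as np

# Grid posterior for Weibull (beta, eta) from failures plus censored run
# times, with a flat prior on the grid.

fail = np.array([310.0, 540.0])          # observed failure times (s)
cens = np.array([600.0, 600.0, 450.0])   # tests stopped before failure (s)

betas = np.linspace(0.5, 3.0, 60)
etas  = np.linspace(200.0, 2000.0, 120)

def loglik(b, e):
    lf = np.sum(np.log(b / e) + (b - 1) * np.log(fail / e) - (fail / e) ** b)
    ls = -np.sum((cens / e) ** b)        # censored units contribute log S(t)
    return lf + ls

logpost = np.array([[loglik(b, e) for e in etas] for b in betas])
post = np.exp(logpost - logpost.max())
post /= post.sum()

print("posterior mean beta =", float(np.sum(post.sum(axis=1) * betas)))
print("posterior mean eta  =", float(np.sum(post.sum(axis=0) * etas)))
```
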
  • Fuzzy-operators weight refinements

    Page(s): 245 - 251
    PDF (948 KB)

    The advantages of intelligent technologies, with fuzzy logic a leading methodology among them, have led to their increasing popularity in systems for control, approximate reasoning, optimization and ranking, managing uncertainty, prediction, etc. All these applications involve combination operations, i.e., the aggregation of fuzzy sets. Zadeh's intersection and union operators, the most frequently used, turn out to be very rough in certain applications, which creates a need to modify and refine their functionality. The selection of a compensatory operator, as well as of its parameter, has to be made after a thorough problem analysis. The proposed analysis is applicable to any problem involving a specific aggregation of fuzzy sets in systems described by qualitative attributes. Such problems arise in fuzzy-rule aggregation during fuzzy inference, in multiple-criteria optimization, in fuzzy number ranking, where modifying the connective's weight yields a more precise and correct system description, and in data representation. The results are presented graphically and discussed in the context of suitable applications. A test example in this paper highlights an interesting application to data representation by a fuzzy logic function, where the selection of the fuzzy operator is an important part of the function modeling procedure. (A compensatory-operator sketch follows this entry.)
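
The refinement of Zadeh's operators that the abstract discusses can be illustrated with a parameterized compensatory operator; the γ-operator below is the well-known Zimmermann-Zysno form, which may or may not be the one the authors select.

```python
import math

def zadeh_and(mus): return min(mus)
def zadeh_or(mus):  return max(mus)

def gamma_operator(mus, gamma):
    """Zimmermann-Zysno compensatory operator: gamma=0 is the pure product
    (AND-like), gamma=1 the algebraic sum (OR-like); values in between
    compensate."""
    prod = math.prod(mus)
    co = 1.0 - math.prod(1.0 - m for m in mus)
    return prod ** (1.0 - gamma) * co ** gamma

mus = [0.7, 0.8, 0.4]
print("Zadeh AND/OR:", zadeh_and(mus), zadeh_or(mus))
for g in (0.0, 0.3, 0.7, 1.0):
    print(f"gamma={g:.1f}: {gamma_operator(mus, g):.3f}")
```
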