Reliability and Maintainability Symposium (RAMS), 2012 Proceedings - Annual

Date 23-26 Jan. 2012

Displaying Results 1 - 25 of 107
  • [Front cover]

    Publication Year: 2012 , Page(s): c1
    PDF (351 KB)
    Freely Available from IEEE
  • Author index

    Publication Year: 2012 , Page(s): 1 - 13
    PDF (161 KB)
    Freely Available from IEEE
  • Reliability of Wind Turbine components — Solder elements fatigue failure

    Publication Year: 2012 , Page(s): 1 - 7
    Cited by:  Papers (2)
    PDF (1396 KB) | HTML

    The physics of failure for electrical components due to temperature loading is described. The main focus is on crack propagation in solder joints and damage accumulation models based on Miner's rule. Two models are proposed that describe the initial accumulated plastic strain depending on the temperature mean and temperature range. Constant terms and model errors are estimated. The proposed methods are useful for predicting damage values for solder joints in power electrical components. Based on the proposed methods, it is described how to find the damage level for a given temperature loading profile. The proposed methods are discussed for application in reliability assessment of wind turbine electrical components, considering physical, model, and measurement uncertainties. For further research, it is proposed to evaluate damage criteria for electrical components due to operational temperature fluctuations within wind turbines.
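The damage-accumulation idea described above can be sketched in a few lines. This is an illustrative sketch only: the Coffin-Manson-type constants `a` and `b` below are invented for demonstration, whereas the paper fits its own strain-based models and error terms.

```python
# Illustrative Miner's-rule damage accumulation for solder-joint thermal
# cycling. Constants a and b are invented, not taken from the paper.
def cycles_to_failure(delta_t, a=1.0e6, b=2.0):
    """Cycles to failure for temperature range delta_t (K): N_f = a * dT**-b."""
    return a * delta_t ** (-b)

def miner_damage(loading):
    """Accumulated damage D = sum(n_i / N_f(dT_i)) over a loading profile of
    (applied_cycles, temperature_range) pairs; failure is predicted at D >= 1."""
    return sum(n / cycles_to_failure(dt) for n, dt in loading)

profile = [(100, 40.0), (50, 80.0)]   # (cycles, dT in K)
damage = miner_damage(profile)        # 100/625 + 50/156.25 = 0.48
```

Given a mission temperature-loading profile binned into (cycle count, range) pairs, the same loop yields the damage level the abstract refers to.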

    Full text access may be available. Click article title to sign in or learn about subscription options.
  • A mission oriented accident model based on hybrid dynamic system

    Publication Year: 2012 , Page(s): 1 - 7
    PDF (515 KB) | HTML

    As a conceptualization of the characteristics of an accident, an accident model indicates the hazard factors in a system and describes the process of system accidents. Accident models are therefore the basis of system safety analysis and assessment. This paper presents a mission-oriented accident model to capture the complex characteristics of the socio-technical system. Based on the principles that an accident is regarded as an emergent phenomenon and that the dynamic relationships and interactions between the system entities are the key to building a systemic accident model, a two-stage modeling procedure is described, comprising qualitative and quantitative models. First, to build a qualitative conceptual model of the accident, the system mission process is decomposed to identify the system entities involved in the mission, such as equipment, facilities, and humans, as well as their states and behaviors. By classifying the types of entity behaviors, the interactions between them are defined and used to construct the qualitative conceptual model based on the Systems Modeling Language (SysML). Second, the systemic accident model is built for quantitative analysis based on hybrid dynamic system (HDS) theory, which includes the discrete state transformations of the system and the continuous or discrete behaviors of the system entities. The entity behaviors drive the changes of system states that can be used to determine the system hazard, and the accident process is modeled through the interactions between the entities along the mission process. A case study with an analysis conclusion is also presented to verify the feasibility of the accident model.

  • Optimizing R&M performance of a system using Monte Carlo Simulation

    Publication Year: 2012 , Page(s): 1 - 6
    Cited by:  Papers (1)
    PDF (551 KB) | HTML

    The R&M performance of a system is usually measured in terms of its Reliability (Ro), Availability (Ao), and Maintainability (MTTR and direct maintenance cost, DMC). During the design phase of a system, several choices need to be made, such as the number of redundant modules required to achieve a desired performance level (reliability, availability, etc.), the system configuration or architecture, performance specifications of the chosen modules comprising the system, and system attributes such as weight and power. The maintenance of the system during its operational life incurs labor cost associated with maintenance man-hours expended during scheduled maintenance actions (SMAs) and the cost of parts and materials used during such actions. In addition to the cost associated with SMAs, there will invariably be system failures occurring randomly, causing downing events or mission aborts. Under such circumstances, unscheduled maintenance actions (UMAs) are required to bring the system back to operational status. UMAs usually cost much more than SMAs. Both types of action contribute to the DMC of the system. While it is possible to set up a Reliability Block Diagram (RBD) to estimate the reliability of the system, very often merely estimating the reliability is not enough. A reliability engineer may be equally or more interested in the availability of the system; the frequency of SMAs and UMAs required for a given configuration to ensure that the system is available; and the direct operating cost, of which DMC is an integral part. A given system may also be used for multiple missions requiring different mission times. Adding to the system complexity are uncertainties associated with the estimated maintenance man-hours required for overhauling, uncertainty in the cost of labor, and the cost and quantity of parts and materials used during maintenance actions. One may be able to decompose the complex problem into various scenarios and model and simulate (or calculate, if possible) one scenario at a time to estimate its performance parameters.
    However, such an approach can be tedious and time consuming. In this paper we demonstrate a simulation model that can address all of the above-mentioned concerns simultaneously. An RBD of the system is first transformed into a simulation model. The MTBF, MTTR, and repair costs based on historical data are used as inputs to the model, with appropriate statistical distributions for these parameters. The scheduled maintenance or replacement intervals (SMIs) of various modules are entered as discrete variables into the model. The multi-stage Monte Carlo simulation is modeled to yield the number of UMAs, reliability, availability, MTBMiA, and DMC for various SMIs or replacement intervals. The output results of the simulation are then analyzed using a statistical software package to yield response surfaces and/or contour plots. Such results can then be used to determine the best possible combination of system architecture and maintenance strategy capable of delivering optimal R&M performance of the system under investigation.
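The simulation loop at the heart of such a model can be sketched for a single unit. This is a hedged sketch, not the paper's multi-stage RBD model: exponential times to failure, fixed repair durations, and a good-as-new assumption after both maintenance types are all simplifying assumptions, and the parameter values below are invented.

```python
import random

def simulate_unit(mtbf, mttr_cm, mttr_pm, smi, horizon, seed=0):
    """Monte Carlo sketch of one repairable unit: exponential times to
    failure (mean mtbf), fixed corrective (mttr_cm) and preventive (mttr_pm)
    repair durations, scheduled maintenance every smi hours. Both maintenance
    actions are assumed to restore the unit to good-as-new.
    Returns (achieved availability, number of UMAs) over the horizon."""
    rng = random.Random(seed)
    t = uptime = 0.0
    n_umas = 0
    next_pm = smi
    while t < horizon:
        ttf = rng.expovariate(1.0 / mtbf)
        if t + ttf < next_pm:            # random failure before the next SMA
            uptime += ttf
            t += ttf + mttr_cm
            n_umas += 1
        else:                            # SMA reached first
            uptime += next_pm - t
            t = next_pm + mttr_pm
        while next_pm <= t:              # schedule the following SMA(s)
            next_pm += smi
    return uptime / t, n_umas
```

Sweeping `smi` over candidate values and tabulating the returned availability and UMA counts mirrors, in miniature, the response-surface study the abstract describes.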

  • NASA applications and lessons learned in reliability engineering

    Publication Year: 2012 , Page(s): 1 - 5
    PDF (4516 KB) | HTML

    Since the Shuttle Challenger accident in 1986, communities across the National Aeronautics and Space Administration (NASA) have been developing and extensively using quantitative reliability and risk assessment methods in their decision-making processes. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses reliability engineering applications in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of the case studies discussed are: reliability-based life limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME Alternate Turbopump Development (ATD), the impact of Space Shuttle External Tank (ET) foam reliability on Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  • Redundancy allocation for k-out-of-n: G systems with mixed spare types

    Publication Year: 2012 , Page(s): 1 - 6
    PDF (491 KB) | HTML

    This paper considers the redundancy allocation problem for k-out-of-n: G heterogeneous series-parallel systems. Different from existing approaches that consider either hot or cold standby redundancy for a parallel subsystem, our approach considers a mix of cold and hot standby redundancy within one subsystem to achieve a balance between fast recovery and power conservation in the optimal system design. Given the available versions of hot and cold standby units and the minimum number of operating components (k) for the system to function, the objective of the optimal design is to select the versions of components and the hot and cold standby redundancy levels within each subsystem so as to minimize system unreliability while satisfying a system-level cost constraint. To formulate the objective function, an analytical method is proposed for the reliability analysis of a k-out-of-n: G subsystem with both hot and cold standby units. The method has no limitation on the type of time-to-failure distributions for the system components. An optimization solution methodology based on a genetic algorithm is presented for obtaining the optimal design configuration. The proposed methodology is tested on several example data sets.
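As a minimal reference point (not the paper's method), the simplest special case of the objective is worth writing down: the reliability of a k-out-of-n:G group of identical hot (active) units, each surviving the mission with probability p. The paper's contribution is the much harder case mixing hot and cold standby units with arbitrary lifetime distributions.

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """P(at least k of n i.i.d. hot units survive the mission), via the
    binomial distribution. Cold-standby units need the paper's analytical
    treatment; this formula covers only the all-hot special case."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

For example, a 2-out-of-3 group of units with p = 0.9 gives reliability 0.972, which an optimizer would trade off against the cost of the third unit.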

  • Monitoring product reliability and supply chain logistics in warranty data

    Publication Year: 2012 , Page(s): 1 - 6
    PDF (1173 KB) | HTML

    In this paper, we show, using simulated data, how warranty data may be used to monitor the distribution of the time a product spends in the supply chain and the distribution of its time to failure after reaching the consumer. In our proposed modeling method, we estimate the parameters of the two distributions from warranty data collected periodically. Standard I-MR charts from statistical process control are used to monitor the parameters that affect the supply chain and product reliability.
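The I-MR control limits used for such monitoring are standard SPC formulas and can be sketched directly; the observations `xs` would be the periodically re-estimated distribution parameters. The constants 1.128 (d2) and 3.267 (D4) are the usual values for a moving range of span 2.

```python
def imr_limits(xs):
    """Return ((I-chart LCL, UCL), (MR-chart LCL, UCL)) for observations xs,
    using the moving-range estimate of process sigma (mr_bar / d2)."""
    mrs = [abs(b - a) for a, b in zip(xs, xs[1:])]
    mr_bar = sum(mrs) / len(mrs)
    x_bar = sum(xs) / len(xs)
    sigma_hat = mr_bar / 1.128          # d2 for n = 2
    return ((x_bar - 3 * sigma_hat, x_bar + 3 * sigma_hat),
            (0.0, 3.267 * mr_bar))      # D4 for n = 2

# e.g. five periodic parameter estimates
i_lims, mr_lims = imr_limits([10.0, 12.0, 11.0, 13.0, 12.0])
```

An estimate falling outside `i_lims`, or a moving range above `mr_lims[1]`, would signal a shift in the supply-chain or reliability distribution.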

  • A simple algorithm for sum of disjoint products

    Publication Year: 2012 , Page(s): 1 - 5
    PDF (453 KB) | HTML

    The evaluation of network reliability and Fault Tree Analysis (FTA) is an NP-hard problem rooted in the sum-of-products computation. This paper presents a new algorithm for calculating system reliability by sum of disjoint products (SDP), based on Boolean algebra and generated from the sum-of-products when the minimal cut sets (MCS) are known. The algorithm can considerably reduce the number of disjoint (mutually exclusive) terms and save computation time for the top-event probability. Four major theorems of the algorithm are given, and their use and correctness are analyzed and proven. In addition, examples for network reliability and FTA are presented to show the superiority and efficiency of the algorithm, which is not only easier to understand and implement but also outperforms existing SDP algorithms on large networks and complex fault trees.
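The paper's SDP algorithm is not reproduced here, but the quantity it computes can be pinned down with a brute-force correctness reference: the exact top-event probability from minimal cut sets by enumerating all 2^n component states. SDP methods obtain the same number from far fewer, mutually disjoint terms, which is exactly the saving the abstract claims.

```python
from itertools import product

def top_event_prob(cut_sets, q):
    """Exact top-event probability by state enumeration (2**n terms).
    cut_sets: iterable of sets of component indices; q[i] is the failure
    probability of (independent) component i."""
    n = len(q)
    total = 0.0
    for state in product([False, True], repeat=n):   # True = component failed
        if any(all(state[i] for i in c) for c in cut_sets):
            p = 1.0
            for i, failed in enumerate(state):
                p *= q[i] if failed else 1.0 - q[i]
            total += p
    return total
```

For cut sets {0} and {1,2} with q = [0.1, 0.2, 0.3], this gives 0.1 + 0.9(0.2)(0.3) = 0.154, the value any SDP expansion of the same MCS must reproduce.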

  • The safety analysis of flight landing based on Time Petri Net

    Publication Year: 2012 , Page(s): 1 - 5
    PDF (454 KB) | HTML

    This paper discusses airport task-flow safety problems induced by time factors, based on the Time Petri Net (TPN) method. We introduce the characteristics of airport accidents and use TPN to analyze the flight landing task flow. The steps are as follows. Based on the characteristics of the airport landing task flow, this paper introduces a planned-task safety analysis method. The method can compute the probability that the time factor affects the task system, locate the key events impacting the task, and propose measures and advice to improve task system safety. The paper demonstrates the approach on a flight landing task flow example based on TPN methods. The result shows that the method is feasible and efficient. In summary, the aim of this paper is to introduce an efficient method for analyzing airport task-flow safety.

  • An improvement for HTA based on cognitive process

    Publication Year: 2012 , Page(s): 1 - 6
    PDF (689 KB) | HTML

    Task analysis is the core activity of human factors work; it helps the analyst understand and describe, in greater depth, the system and the humans in it. The purpose of task analysis is to describe human performance in a particular task or scenario and to break down tasks or scenarios into the required individual task steps based on the necessary interaction between the human and the machine. Cognitive task analysis is used to describe the unobservable cognitive performance of humans and the mental processes they use in the system when completing tasks. In recent years, a range of Human Factors (HF) methods that can be applied in system safety design and evaluation has been studied. Hierarchical Task Analysis (HTA) is a widely used task analysis technique. Its advantage is that it provides the analyst with the desired goals and a deeper understanding of the target task by describing both the task steps and each specific operation. However, the information provided by HTA is mainly descriptive rather than analytical, and HTA itself cannot provide a cognitive analysis process. In this paper, the Information Hierarchical Disposal-Decision Model (IHDDM) and a new approach that integrates the human cognitive task process into the traditional HTA analysis process (CP-HTA) are presented. The new task analysis technique provides a much more complete understanding of the human-machine interaction task analysis process. Finally, a case study describes how to apply the technique in practice.

  • A strategic approach to technical and product data management

    Publication Year: 2012 , Page(s): 1 - 6
    PDF (475 KB) | HTML

    The paper discusses Technical Data (TD) rights regulations and policies, and presents specific guidelines for the preparation of a Technical Data Strategy (TDS). Practical criteria for decision making regarding Technical Data options are identified and assessed through the conduct of a Business Case Analysis (BCA). While processes and methodologies have been put into place, a lack of guidance and governance in implementing Technical Data Strategies has limited their effectiveness.

  • Limitations of explicit modeling of common cause failures within fault trees

    Publication Year: 2012 , Page(s): 1 - 6
    PDF (573 KB) | HTML

    The significant impact that common cause failures can have on the reliability and safety of a system comprising redundant components is widely acknowledged. Common cause failures can significantly undermine the benefits of redundancy, which is seen as the main principle upon which safety system design is based. Thus, consideration of common cause failures is one of the most challenging and critical issues in probabilistic safety assessment (PSA). This is especially emphasized within PSA fault tree modeling of safety systems in nuclear power plants. This study presents a new method for explicit modeling of single component failures in different common cause component groups (CCCGs), along with the associated limitations. These limitations arise from the principle of explicit modeling of common cause failures, whereby each component's failure space can be decomposed, in terms of causes, into an independent portion and a dependent, common-cause-affected portion. A complementary approach for acting upon these limitations, expressed in terms of constraining the dependent portion of the total component failure space, is proposed in the paper. The method is applied to a selected case study system. The results and insights gained from this application are presented and discussed. In general, the application of this method yields improved and more detailed PSA models, which in turn produce more realistic PSA results.
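The independent/dependent decomposition of a component's failure space can be illustrated with the beta-factor model, the simplest of the common-cause parameterisations (a sketch of the general idea, not the paper's CCCG method): a fraction beta of each component's total failure probability is attributed to a shared common cause, the rest to independent causes.

```python
def beta_factor_split(q_total, beta):
    """Split a component's total failure probability into its
    (independent, common-cause) portions under the beta-factor model."""
    return (1.0 - beta) * q_total, beta * q_total

def redundant_pair_unavailability(q_total, beta):
    """1-out-of-2 redundant pair: the pair fails if both units fail from
    independent causes, or the common cause defeats the redundancy outright."""
    q_ind, q_ccf = beta_factor_split(q_total, beta)
    return q_ind * q_ind + q_ccf
```

With q_total = 0.01 and beta = 0.1, the common-cause term (1e-3) dominates the independent term (8.1e-5), which is exactly the erosion of redundancy benefits the abstract warns about.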

  • Automatic and optimal allocation of safety integrity levels

    Publication Year: 2012 , Page(s): 1 - 6
    Cited by:  Papers (2)
    PDF (444 KB) | HTML

    Powertrain electrification of vehicles leads to a higher number of sensors, actuators, and control functions, resulting in increasing complexity. Due to the safety-criticality of these functionalities, safety standards must be considered during system development. The safety standard ISO 26262 defines discrete ASILs (Automotive Safety Integrity Levels) that must be identified and allocated to the components of the system under development. Once allocated, they determine the applicable requirements of ISO 26262 and the safety measures necessary to minimize residual risk accordingly. Furthermore, the allocated ASILs directly influence the development effort and the per-piece costs of the system components. Manual elaboration of an ASIL allocation that is economical and assures functional safety is complex and cumbersome. This work presents a method for the automatic allocation of ASILs to system components. In our approach, ASIL allocation is interpreted as an ILP (Integer Linear Programming) problem. This allows obtaining an ASIL allocation that is optimal with respect to an objective function, subject to constraints. These constraints are derived from the results of PHA (Preliminary Hazard Analysis), FTA (Fault Tree Analysis), and the preferences of the safety engineer. The approach is evaluated with a case study of hybrid electric vehicle development.
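The shape of the optimization can be sketched on a toy instance. Everything below is an assumption for illustration: the decomposition-style constraint (the ASILs allocated along each minimal cut set must sum to at least the hazard's ASIL, with QM=0 … ASIL D=4) is one common way such ILP formulations are set up, the per-level costs are invented, and exhaustive search stands in for an ILP solver.

```python
from itertools import product

def allocate_asils(n_components, cut_sets, hazard_asil, cost):
    """Toy ASIL allocation by exhaustive search (an ILP solver would be used
    at scale). cut_sets: sets of component indices; each cut set's allocated
    levels must sum to at least hazard_asil. cost[level] is the assumed
    per-component cost of developing to that level (0=QM .. 4=ASIL D).
    Returns (min_total_cost, allocation tuple)."""
    best = None
    for alloc in product(range(5), repeat=n_components):
        if all(sum(alloc[i] for i in c) >= hazard_asil for c in cut_sets):
            total = sum(cost[a] for a in alloc)
            if best is None or total < best[0]:
                best = (total, alloc)
    return best

# two redundant components jointly covering an ASIL D hazard;
# superlinear per-level costs are invented for illustration
best = allocate_asils(2, [{0, 1}], hazard_asil=4, cost=[0, 10, 25, 50, 100])
```

With superlinear costs the optimum splits the burden (ASIL B + ASIL B) rather than developing one component to ASIL D, which is the kind of trade-off the automatic allocation is meant to find.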

  • Pre-proposal assessment of reliability for spacecraft and instruments

    Publication Year: 2012 , Page(s): 1 - 5
    Cited by:  Papers (1)
    PDF (707 KB) | HTML

    This paper primarily addresses the role of reliability engineering in the multidisciplinary process used by NASA's Goddard Space Flight Center in the Integrated Design Center, and describes the estimating tool that is used. This estimating tool may have expanded applicability in the proposal of new designs in many industries. Understanding the key contributors to mission success or failure is necessary to ensure a successful proposal is developed. Additionally, this preliminary analysis needs to contain recommendations for future use in later stages of project development, for the Preliminary Design Review (PDR) and Critical Design Review (CDR).

  • Efficient analysis of warm standby systems using central limit theorem

    Publication Year: 2012 , Page(s): 1 - 6
    PDF (644 KB) | HTML

    In a warm standby sparing system, the standby units have time-dependent failure behavior: they have different failure parameters, or in general different distributions, before and after they are used to replace on-line faulty units. Such time-dependent behavior makes the reliability analysis of warm standby systems a challenging task. Existing approaches to analyzing the reliability of warm standby systems include Markov-based methods, simulation-based methods, and combinatorial methods. Those approaches, however, have one or more of the following limitations: 1) requiring long computation times, especially when results with a high degree of accuracy are desired, 2) requiring exponential time-to-failure distributions for system components, and 3) involving the difficult task of computing convolutions of multiple integrals. In this paper, based on the central limit theorem, a computationally efficient approximate method is proposed for the reliability analysis of warm standby systems. The proposed approach has no limitation on the time-to-failure distributions of the system components. Several case studies using different time-to-failure distributions and system sizes are performed to demonstrate the application of the proposed approach.
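The central-limit idea can be sketched as a first cut (this is not the paper's full method): if the system's total life is the sum of the units' sequential operating lives, approximate that sum as Normal(mu, sigma^2) regardless of the per-unit distributions. The sketch below ignores dormant (standby-state) failures, which the warm-standby model in the paper does account for.

```python
from math import erf, sqrt

def standby_system_reliability(t, means, variances):
    """CLT approximation for a standby-sparing system: total life
    T = sum of unit lives ~ Normal(sum(means), sum(variances));
    returns R(t) = P(T > t). Any per-unit distributions with known
    means and variances may be plugged in."""
    mu = sum(means)
    sigma = sqrt(sum(variances))
    z = (t - mu) / sigma
    phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    return 1.0 - phi

# e.g. ten units, each with mean life 100 h and variance 400 h^2
r = standby_system_reliability(1000.0, [100.0] * 10, [400.0] * 10)
```

The approximation improves with the number of spares, which is why a CLT-based method scales well exactly where Markov and convolution methods become expensive.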

  • Reliability engineering approach to achieve RCM for mechanical systems — 2012

    Publication Year: 2012 , Page(s): 1 - 7
    PDF (567 KB) | HTML

    The foundation of reliability-centered maintenance is the characterization of reliability failure models. Reliability is currently defined as the probability that a part will function without failure for designated mission durations under specified conditions of use. This paper proposes that reliability should instead be defined as the probability that a part will function without failure under risk-characterized stress loads for specified operational and ambient conditions of use. Empirical reliability math modeling suggests that time does not cause part failure, and that Time-to-Failure (TTF) based reliability math models are therefore meaningless. Findings from current research in reliability math modeling show that failure mechanisms cause part failure, and that stress-based reliability math models are meaningful. Reliability math models are fit from failure analysis and (1) characterize part failure condition indicators that allow implementation of condition-based maintenance, or (2) characterize part hazard functions that allow implementation of stress-directed maintenance.

  • Determining the availability on a system of systems network

    Publication Year: 2012 , Page(s): 1 - 5
    PDF (584 KB) | HTML

    The Research Institute at the University of Alabama in Huntsville has developed a method of performing an availability analysis on a system-of-systems network. The goal of this process is to determine the overall availability of the entire network based upon the LRUs (Line Replaceable Units) and the design configuration of each of the systems. By examining the MTTR (Mean Time To Replace), MTBF (Mean Time Between Failures), and ALDT (Administrative and Logistical Downtime) of the LRUs, we can see how these factors affect operational and inherent availability from the LRU up through the system level, and how this correlates to the availability of an entire system-of-systems network. This paper presents a methodology for building an availability model of a system-of-systems network. An analysis can then be performed to determine the operational and inherent availability of anything from a single part to a network of systems, along with the potential benefits of performing such an analysis, based upon the criteria specified.
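The quantities named above combine in the standard textbook forms, sketched here (the paper's contribution is composing them across a full network, not these formulas themselves):

```python
def inherent_availability(mtbf, mttr):
    """Ai: only active corrective repair time counts as downtime."""
    return mtbf / (mtbf + mttr)

def operational_availability(mtbf, mttr, aldt):
    """Ao: administrative and logistical downtime (ALDT) is also charged,
    so Ao <= Ai for the same MTBF and MTTR."""
    return mtbf / (mtbf + mttr + aldt)

def series_availability(avails):
    """Steady-state availability of LRUs/systems in series: all must be up."""
    out = 1.0
    for a in avails:
        out *= a
    return out
```

Rolling LRU-level availabilities up with `series_availability` (and the analogous parallel form for redundant paths) is the basic mechanism by which an LRU's ALDT propagates to the network-level Ao.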

  • Power law model, correct application in reliability growth: do the cumulative times indeed always add up?

    Publication Year: 2012 , Page(s): 1 - 7
    PDF (519 KB) | HTML

    Reliability growth methodology is a valuable tool for measuring product reliability improvement, either through planned, dedicated testing or the gradual upgrade and improvement of a fielded product. The methodology rests on well-established mathematical process assumptions, so that when applied it provides appropriate and justifiable information and tracking of reliability improvement. The Homogeneous Poisson Process (HPP), with assumed constant failure rates and failure intensities, is routinely assumed and used in reliability analysis and especially in testing. The Non-Homogeneous Poisson Process (NHPP) adequately expresses the step changes of failure rates resulting from product design or process improvements by fitting them with the continuous power law curve. All reliability growth modeling done in this manner, such as the foundational Duane and AMSAA/Crow models, is valid and valuable. This paper proposes some changes in the accounting of total test times for cases where multiple units are observed in test or in the field, which may yield a more appropriate determination of failure rates and their parameters. The paper points out that the common practice in reliability growth test data analysis, adding the test times of multiple test items at the times of individual failures, may be inappropriate when systematic failures (design or manufacturing practice flaws) are observed. Therefore, the paper proposes application of the original NHPP power law, which is the model underlying the derivations of other current reliability growth analytical methods. The proposed analytical method will correct the errors introduced when multiple units are tested or observed, and will also provide uniformity of data analysis. The errors do not show when a single item's reliability growth is observed.
    They become apparent only in cases of multiple items, and are proportional to the number of observed items and the lack of synchronization of beginning, ending, and upgrade times. To eliminate those errors, which could be rather large when the reliability growth of fielded products is followed, this paper proposes following the rules of the original power law model in all reliability growth data analysis and calculations of reliability results. It might also be advisable to add this guidance to the published material.
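For reference, the single-system NHPP power-law (Crow/AMSAA) fit that the paper starts from can be sketched in a few lines: expected failures E[N(t)] = lam * t^beta, with beta < 1 indicating reliability growth. This is the maximum-likelihood estimate for a time-truncated test on one system; the paper's argument is precisely that pooling several unsynchronised systems by adding their test times needs more care than this.

```python
from math import e, log

def crow_amsaa_mle(failure_times, t_end):
    """MLE for the NHPP power law on a single system, time-truncated at
    t_end: beta_hat = n / sum(ln(t_end / t_i)), lam_hat = n / t_end**beta.
    failure_times are the cumulative times of the n observed failures."""
    n = len(failure_times)
    beta = n / sum(log(t_end / t) for t in failure_times)
    lam = n / t_end ** beta
    return lam, beta

# one failure at t_end/e gives beta = 1 (no growth) and lam = 1/t_end
lam, beta = crow_amsaa_mle([100.0 / e], 100.0)
```

The pooled-fleet estimators the paper critiques reuse these formulas with summed test times; the correction it proposes amounts to keeping each item's clock consistent with the original power-law derivation.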

  • Integrated importance based maintenance decision making

    Publication Year: 2012 , Page(s): 1 - 7
    PDF (443 KB) | HTML

    This paper proposes an integrated-importance-based maintenance decision making method to provide reasonable maintenance schemes for decision makers. First, the concept of the integrated importance measure (IIM) is introduced, and its physical meaning and characteristics in maintenance are discussed. Then, a new maintenance decision making model (MDM) is put forward by extending the influence diagram; its nodes, edges, and dependency intensities are described. The corresponding IIM-based modeling and optimization methods are also presented. Finally, a calculation example based on a multi-state series system is used to demonstrate the IIM-based MDM modeling and optimization process. The MDM of the example system shows that the proposed model and modeling method can represent the related maintenance factors comprehensively. The optimization results of the MDM show that the proposed IIM-based method can provide maintenance decision makers with reasonable maintenance schemes.

  • Integrated importance analysis with Markov Bayesian networks

    Publication Year: 2012 , Page(s): 1 - 7
    PDF (453 KB) | HTML

    This paper proposes a Markov Bayesian network (MBN) model to facilitate the calculation of the integrated importance measure (IIM) for component states in dynamic multi-state systems. First, the concept of the IIM is introduced, and its computation formula is decomposed into three parts to simplify the analysis process. Then, the MBN is proposed on the basis of the Bayesian network and the state transition diagram to describe the variable state distributions and state transition matrices comprehensively. Third, the modeling method of the MBN based on multi-state fault tree analysis (MFTA) is discussed to initialize the practical MBN. Finally, according to the MFTA of a power system, a case study is implemented to demonstrate the MBN modeling and IIM calculation process. The MBN of the power system shows that the proposed modeling method can build the model equivalently from the MFTA. The IIM calculation process shows that the MBN provides a convenient analysis framework. The IIM value of each component state also verifies its meaning and worth in system maintenance.

  • Failure assessment and HALT test of electrical converters

    Publication Year: 2012 , Page(s): 1 - 6

    Improving the reliability of power electronic products is a key step toward sustainable production and use of electric energy. The availability of electrical converters in applications such as power transmission, conversion, and motor drives is crucial for sustainability. To address reliability and availability, risk assessment strategies are increasingly applied in power electronics research and development. Beyond the widely used Failure Modes and Effects Analysis (FMEA), similar methods such as Design Review Based on Failure Mode (DRBFM) exist for assessing risks and failures. Applied together with experimental Highly Accelerated Life Testing (HALT), these methods are powerful tools for evaluating and improving reliability during the design phase of a product. This paper presents the benefits of using DRBFM during the preparation of a HALT on a power electronic converter.

  • Outage performance improvement by Preventive Maintenance optimization

    Publication Year: 2012 , Page(s): 1 - 4
    Cited by:  Papers (1)

    Shutting down a power plant for any reason is a costly process. Loss of production revenue from one unit can exceed millions of dollars per day, so it is good practice to keep outages as short and efficient as possible. A major component of planned outages is Preventive Maintenance (PM). It is important that PM be applied only where needed: unnecessary PM is costly, not only in labor and material but in additional lost time and production. Further, there are cases where PM is contraindicated by standard reliability models, i.e., components with decreasing failure rates, and can paradoxically introduce failures, incurring additional repair costs and production loss. The risk management group at the South Texas Project Electric Generating Station (STPEGS) has successfully developed software to optimize PM schedules by minimizing the total costs and risks associated with corrective maintenance (CM) and PM. The optimum PM interval is determined as a multiple of outage cycles. Components are evaluated based on maintenance type, risk ranking, cost of performing CM at power, and regulatory or internal commitments. This information, provided to systems engineering, assists in determining whether to advance PM to an upcoming outage or defer it to a subsequent outage. Components whose maintenance can be performed online are outside the scope of this paper and are not considered here.
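The interval-selection idea (performing PM every k outage cycles and minimizing combined CM and PM cost) can be sketched as follows. The cost figures, cycle length, and Weibull-type wear-out model are illustrative assumptions, not STPEGS data:

```python
# All figures below are illustrative assumptions, not STPEGS data.
CYCLE_YEARS = 1.5       # time between planned outages
C_PM = 20_000.0         # cost of one PM task
C_CM = 250_000.0        # risk-weighted cost of a corrective repair
BETA, ETA = 2.5, 12.0   # Weibull-type shape/scale in years (BETA > 1: wear-out)

def expected_failures(t):
    """Expected number of failures by age t under a power-law intensity."""
    return (t / ETA) ** BETA

def cost_per_year(k):
    """Expected CM + PM cost per year if PM renews the component every
    k outage cycles."""
    t = k * CYCLE_YEARS
    return (C_PM + C_CM * expected_failures(t)) / t

# Search candidate multiples of the outage cycle for the cheapest interval.
best_k = min(range(1, 21), key=cost_per_year)
```

Restricting the search to integer multiples of the outage cycle mirrors the abstract's constraint that PM is only done during planned outages.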

  • Risk-informed Preventive Maintenance optimization

    Publication Year: 2012 , Page(s): 1 - 5

    The risk management group at the South Texas Project Electric Generating Station (STPEGS) has successfully developed a Preventive Maintenance (PM) optimization application based on a new mathematical model developed in collaboration with the University of Texas at Austin. The model uses historical maintenance data from the STPEGS work management database. Robust statistical analysis, coupled with an efficient algorithm, generates an optimal PM schedule based on a Non-Homogeneous Poisson Process (NHPP) with a power-law failure rate function. In addition, the risk associated with significant plant events triggered by a component failure is captured in the Corrective Maintenance (CM) cost estimates: the probabilities of such events are modeled via fault tree analysis, their consequences are expressed as monetary costs, and the net CM cost is modified by a weighted sum of each event's probability multiplied by its monetary cost. The ratio of risk-adjusted CM cost to PM cost is used with the failure rate parameters to calculate the optimum PM frequency that minimizes the combined CM and PM costs. The software can evaluate individual components or entire systems of components. Several systems with low risk rankings have been evaluated, and in this paper we present the results of these evaluations.
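Under a power-law NHPP, the CM-versus-PM trade-off has a well-known closed form when PM renews the component and failures between PMs receive minimal repair. This is a sketch of that standard result with assumed parameters, not the paper's model or data:

```python
# Sketch of the power-law NHPP cost trade-off; parameter values are assumed,
# not the paper's data. Intensity u(t) = (BETA/ETA) * (t/ETA)**(BETA - 1),
# so the expected CM count in (0, T] is (T/ETA)**BETA.
BETA, ETA = 2.0, 8.0            # power-law shape/scale (BETA > 1: wear-out)
C_CM, C_PM = 50_000.0, 5_000.0  # risk-adjusted CM cost vs. PM cost

def cost_rate(T):
    """Expected cost per unit time when PM renews the component every T
    time units and failures in between get minimal repair (CM)."""
    return (C_PM + C_CM * (T / ETA) ** BETA) / T

# Setting d(cost_rate)/dT = 0 gives the closed-form optimum for BETA > 1:
# T* = ETA * (C_PM / (C_CM * (BETA - 1)))**(1 / BETA)
T_star = ETA * (C_PM / (C_CM * (BETA - 1))) ** (1 / BETA)
```

Note how T* depends only on the shape parameter and the CM/PM cost ratio, which matches the abstract's statement that the ratio of risk-adjusted CM cost to PM cost drives the optimum frequency.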

  • Risk comparison of crew launch vehicle concepts

    Publication Year: 2012 , Page(s): 1 - 4

    NASA is embarking on a new era of human spaceflight, one in which commercial service providers will sell astronaut transportation services to NASA. NASA will therefore have limited insight into the design and manufacturing processes of space transportation vehicles. In this new paradigm, setting appropriate safety requirements and goals for the service providers to meet, and establishing a standard process for evaluating the safety of commercial rides, will be of paramount importance to ensuring the safety of the astronauts. Good systems engineering practice emphasizes the importance of valid and verifiable requirements. In the case of safety, carrying an unrealistically high, and thus unverifiable, requirement can actually reduce the safety of vehicles under development: it can shift focus to quantifying known failure modes rather than searching for the unknowns, emphasize process over experience, create a false sense of security, and encourage gaming the analysis to meet the requirement. A probability of loss of crew (LOC) requirement cannot be strictly verified, as there will be too few flights for statistics, and probability forecasts are only "opinions" of the forecaster, well-educated opinions hopefully, but opinions nonetheless. ("Not differently is the aim of logic on the other hand. This cannot tell me if my opinions are right or wrong, that is nonsense, but only if they are coherent or if there is among them an intrinsic inconsistency. And the calculus of probabilities is only the logic of practical convictions, that are subjected to a more or less large degree of doubt"[1]). But the reliability and safety actually achieved by previous vehicles, together with reasonable expectations of growth, inform the range of LOC probabilities that can credibly be achieved by new systems. This paper shows how this process can inform the development of rational requirements, with launch vehicle safety as the example.
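One common way to let prior flight history inform the range of credible LOC probabilities is a simple beta-binomial update. The sketch below uses illustrative counts and a uniform prior, and is not the paper's method:

```python
# Hypothetical beta-binomial sketch: a uniform Beta(1, 1) prior on the
# per-flight loss probability, updated by an observed flight history.
# The counts are illustrative (sized like the Shuttle record).
FLIGHTS, LOSSES = 135, 2

alpha = 1 + LOSSES                        # posterior Beta parameters
beta_ = 1 + (FLIGHTS - LOSSES)
posterior_mean = alpha / (alpha + beta_)  # point estimate of LOC probability
```

A posterior like this quantifies what past vehicles demonstrably achieved, which is the kind of evidence the paper argues should anchor requirements, rather than serving as a verification of any single requirement.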
