
Reliability and Maintainability Symposium (RAMS), 2011 Proceedings - Annual

Date: 24-27 Jan. 2011


Displaying Results 1 - 25 of 109
  • [Front cover]

    Publication Year: 2011 , Page(s): c1
  • The Web-Accessible Repository of Physics-based Models (WARP)

    Publication Year: 2011 , Page(s): 1 - 6

    The Reliability Information Analysis Center (RIAC), under contract to the Defense Technical Information Center (DTIC), is developing and implementing a Web-Accessible Repository of Physics-based Models (WARP) database and website to centrally organize and make available information and data on physics-based models. In addition to providing access to available Physics-of-Failure (PoF) model information and data, WARP will allow registered users to identify the need for new models or validation data for existing models. The architecture of the WARP interface will promote visibility into which PoF model(s) are required to fully characterize the reliability of a specific device type or technology. It will also allow researchers to better understand gaps in the PoF model knowledge base to promote future research. The WARP website is intended to be a collaborative and dynamic tool for the entire reliability engineering community, regardless of industry or affiliation. The RIAC is qualifying all submitted information and data prior to publishing it on the website. This registration and review process is being implemented to ensure the integrity of the submitted models and any supporting data presented within WARP. The WARP project was initiated in September 2009 and continues through May 2012. The RIAC team is in the process of populating the WARP database and developing the WARP website model submittal, approval, and user interfaces. The beta version of the WARP web-based tool is currently scheduled for April 2011. Full production release is planned for September 2011.

  • Lithium battery analysis: Probability of failure assessment using logistic regression

    Publication Year: 2011 , Page(s): 1 - 5

    Fourteen hundred rows by 53 columns of vendor cell acceptance data were processed through logistic regression using Insightful Corporation's Insightful Miner™ (IM) and SAS Institute Inc.'s SAS® Enterprise Miner (EM) to find any significant correlation between 52 test output parameters (independent variables) and the pass/fail outcome for each of the 1,400 battery cells tested. The goal was to find helpful predictors for detecting “good” or “bad” cells in the form of a best logistic regression model. Data from five cells selected by Johnson Space Center's (JSC's) Energy Systems Division (ESD) were processed through three model options (Option 1, Option 2, and Option 3) to determine the best model and to indicate a known cell that failed. The output from the best model showed good acceptability statistics and indicated the failed cell was less acceptable than the other cells. The processing and model building results were similar in both IM and SAS EM. The model described by this paper may be applied to any vendor battery cells for which acceptance data are available.
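    As a rough illustration of the kind of analysis described above, the sketch below fits a logistic regression to simulated cell-acceptance data and ranks a handful of cells by predicted probability of failure. The data, feature effects, and scikit-learn workflow are assumptions made for illustration; the paper's analysis used Insightful Miner and SAS Enterprise Miner.

```python
# Hypothetical sketch: logistic regression on simulated cell-acceptance data
# (a stand-in for the Insightful Miner / SAS EM workflow in the paper).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_cells, n_params = 1400, 52              # 1,400 cells x 52 test parameters
X = rng.normal(size=(n_cells, n_params))
# Simulated pass/fail outcome driven by a few parameters (assumption).
logit = 1.5 * X[:, 0] - 2.0 * X[:, 3] + 0.8 * X[:, 10] - 3.0
y = (rng.random(n_cells) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = fail

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Score five candidate cells and flag the least acceptable one,
# analogous to comparing a known failed cell against the others.
candidates = X[:5]
p_fail = model.predict_proba(candidates)[:, 1]
print("Predicted probability of failure per cell:", np.round(p_fail, 3))
print("Least acceptable cell index:", int(np.argmax(p_fail)))
```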

  • A quantitative approach for medical device Health Hazard Analysis

    Publication Year: 2011 , Page(s): 1 - 5
    Cited by:  Papers (1)

    Health Hazard Analysis (HHA) is a major type of patient health risk assessment for medical device field performance issues. The U.S. Food and Drug Administration (FDA) provides an online form listing the information needed for an HHA. In this paper, we illustrate a quantitative HHA approach, structured within a rigorous risk assessment framework, with several critical steps, concepts, and key statistical reliability techniques embedded. We also propose a method using a Bayesian approach for patient health risk assessment due to medical device field failure. This approach is very useful for issues identified in the manufacturing process or supply chain, when there have been no field returns yet from the current product sales. Our work may assist the medical device industry by offering an enhanced HHA approach for patient health risk assessment.
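    The abstract does not give the authors' formulas, but a common Bayesian treatment of “no field failures observed yet” places a Beta prior on the per-device failure probability and updates it with binomial field data. The sketch below illustrates that generic idea; the prior and fleet numbers are assumptions, not values from the paper.

```python
# Hypothetical Beta-Binomial sketch for patient-risk estimation when no
# field failures have been observed yet (generic, not the paper's model).
from scipy import stats

# Assumed prior on the per-device probability of a hazardous failure,
# e.g. elicited from the manufacturing or supply-chain finding.
a0, b0 = 1.0, 99.0            # prior mean = 1%

units_fielded = 5000           # devices in the field so far (assumption)
failures_seen = 0              # no field returns yet

# Posterior after observing zero failures in the fielded units.
a_post = a0 + failures_seen
b_post = b0 + units_fielded - failures_seen
posterior = stats.beta(a_post, b_post)

print(f"Posterior mean failure probability: {posterior.mean():.2e}")
print(f"95% upper credible bound:           {posterior.ppf(0.95):.2e}")
```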

  • Reliability analysis of warm standby systems using sequential BDD

    Publication Year: 2011 , Page(s): 1 - 7
    Cited by:  Papers (9)

    Warm standby sparing is a fault-tolerant design technique developed as a compromise between cold sparing and hot sparing in terms of power consumption and recovery time. Warm spares have time-dependent failure behavior; before and after they are used to replace a faulty component, warm spares have different failure rates or, more generally, different failure distributions. Existing approaches for analyzing systems with warm spares typically require long computation times, especially when results with a high degree of accuracy are desired, and/or require exponential time-to-failure distributions for system components. In this paper, an analytical method is proposed for the reliability analysis of warm standby sparing systems. The proposed approach is based on the combinatorial model of sequential binary decision diagrams, and can generate accurate system reliability results for large systems. The approach is applicable to any type of time-to-failure distribution for the system components. Application and advantages of the proposed approach are illustrated using several examples.
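    The sequential-BDD construction itself is not reproduced here, but the time-dependent behavior it must capture can be seen in the simplest warm standby case: one primary unit plus one spare whose failure rate changes when it takes over. The sketch below computes that system's reliability by direct integration, with all failure rates assumed for illustration.

```python
# Minimal warm-standby sketch (NOT the sequential-BDD method itself):
# one primary unit plus one warm spare whose failure rate changes from a
# dormant rate to an active rate once it takes over.  All rates assumed.
import numpy as np
from scipy.integrate import quad

lam_primary = 1e-3   # primary unit failure rate (per hour)
lam_dormant = 2e-4   # spare failure rate while in warm standby
lam_active  = 1e-3   # spare failure rate once operating

def reliability(t):
    # Path 1: primary survives the whole mission.
    r = np.exp(-lam_primary * t)
    # Path 2: primary fails at time u, spare survives dormancy to u,
    # then survives actively from u to t.
    integrand = lambda u: (lam_primary * np.exp(-lam_primary * u)
                           * np.exp(-lam_dormant * u)
                           * np.exp(-lam_active * (t - u)))
    r += quad(integrand, 0.0, t)[0]
    return r

for t in (1000.0, 5000.0, 10000.0):
    print(f"R({t:.0f} h) = {reliability(t):.4f}")
```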

  • Reliability prognostics for electronics via built-in diagnostic tools

    Publication Year: 2011 , Page(s): 1 - 7
    Cited by:  Papers (1)

    This paper proposes a practical model to monitor the degradation of electronic equipment and to predict the remaining useful life based on self-diagnostic data. The degradation precursor, characterized by voltage or current signals, is modeled as a non-stationary Gaussian process with time-varying mean and variance. Statistical testing is then used to characterize the trend patterns for the mean and the variance, from which different types of degradation paths are extrapolated. Regression tools and time series models can be adopted to forecast the system's remaining useful life. A case study drawn from semiconductor test equipment is used to demonstrate the applicability and the performance of the proposed method.
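    As a generic illustration of precursor-based prognostics (not the paper's non-stationary Gaussian-process model), the sketch below fits a linear trend to a simulated voltage precursor and extrapolates it to an assumed failure threshold to estimate remaining useful life.

```python
# Generic degradation-trend sketch: extrapolate a voltage precursor to a
# failure threshold to estimate remaining useful life (RUL).
# The simulated data and linear-trend model are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 200.0, 5.0)                  # hours of self-diagnostic data
drift, noise = 0.02, 0.15
signal = 10.0 + drift * t + rng.normal(scale=noise, size=t.size)

threshold = 16.0                              # assumed failure threshold (V)

# Least-squares linear trend of the precursor.
slope, intercept = np.polyfit(t, signal, deg=1)
t_now = t[-1]
t_fail = (threshold - intercept) / slope      # time when trend hits threshold
print(f"Estimated drift rate : {slope:.4f} V/h")
print(f"Estimated RUL        : {t_fail - t_now:.0f} h")
```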

  • Best practices methods for robust design for reliability with parametric cost estimates

    Publication Year: 2011 , Page(s): 1 - 6

    ANSI/GEIA-STD-0009-2008 (STD-0009), "Reliability Program Standard for Systems Design, Development, and Manufacturing," was completed in November 2008 at the behest of the Defense Science Board Developmental Test (DT) and Evaluation Task Force and adopted in August 2009 for use by the Department of Defense (DoD). The demand for highly reliable systems/products prompted the development of a new standard that specifies a scientific approach to reliability design, assessment, and verification, coupled with integrated management and systems engineering. This standard defines “what to do” in order to design and build reliability in, then maintain high reliability when the system/product is in the hands of the user. This paper examines Objective 2 of STD-0009, design and redesign for reliability. It discusses the concepts, process, activities, and tools needed to effect a robust design for reliability and illustrates their use through actual implementation in a real-world case example by General Dynamics Land Systems. In addition to presenting the efficiencies of a robust design for reliability, this paper presents a parametric cost model that can reasonably estimate the time and cost of robust designs.

  • Reliability of Multi-State Systems subject to competing failures

    Publication Year: 2011 , Page(s): 1 - 7

    This paper identifies and addresses the issue of competing propagated failures and the failure isolation effect caused by functional dependence in Multi-State Systems (MSS). An analytical combinatorial method is proposed for incorporating the competing failure processes into the reliability analysis of MSS. The proposed method can evaluate the reliability of MSS with arbitrary distributions of the system component states. The method is also applicable to MSS with any type of system structure (i.e., not limited to series, parallel, or series-parallel structures). An example multi-state memory system is analyzed to demonstrate the application and advantages of the proposed method.

  • Reliability growth planning for discrete-use systems

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (1)

    In the recent past, the Defense Science Board (DSB) Task Force report on Developmental Test and Evaluation revealed a significant increase in the number of DoD weapon system programs evaluated as not being operationally suitable. The primary reason is the lack of material readiness due to poor system Reliability, Availability, and Maintainability (RAM). The report shows that nearly half of U.S. Army systems from 1997-2006 failed to demonstrate their established reliability requirements during Operational Testing. As a result of the DSB findings and the associated DoD Reliability Improvement Working Group (RIWG) report, a series of department policies have been established that place increased emphasis not only on reliability growth planning and tracking, but also on reliability best practices and reliability language for defense acquisition contracts. As such, it is now department policy “for programs to be formulated to execute a viable RAM strategy that includes a reliability growth program as an integral part of design and development.” The most recent policy further stipulates, “For new or restructured programs DOT&E will not approve TESs and TEMPs lacking a reliability growth curve or software failure profile.” This paper presents a detailed Reliability Growth (RG) planning approach that may be utilized for developing RG programs for discrete-use systems, thereby facilitating the implementation of the aforementioned DoD reliability policies. More specifically, this approach, hereafter referred to as PM2-Discrete, may be utilized for developing RG programs and associated planning curves that: (1) portray planned reliability achievement as a function of program resources; (2) serve as a baseline against which demonstrated reliability values may be compared throughout a test program (for tracking purposes); and (3) illustrate and quantify the feasibility of a test program in achieving interim and final reliability goals. In particular, PM2-Discrete possesses a series of management metrics that may be used to assess the effectiveness of proposed RG programs. These metrics serve as concomitant measures of programmatic risk and system maturity that may also be assessed during testing for progress reporting purposes. A methodology overview and application of PM2-Discrete are given, as well as an abbreviated overview of relevant literature in the area of RG planning. Derivations of the model equations are not presented herein but are available elsewhere.

  • Improving software-based system R&M with better error processing

    Publication Year: 2011 , Page(s): 1 - 6

    Software developers typically incorporate error-detection code to find and report faults at all levels. A system's reliability and maintainability can be significantly improved through wider use of error detection, better detection methods, and more informative reporting, but the benefits come at a price. This paper explores practical techniques in real-world applications that can greatly improve error detection and reporting while often reducing development time.
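    The paper's specific techniques are not reproduced here, but one generic example of “more informative reporting” is to re-raise a low-level fault together with the context a maintainer needs to act on it, as in the hypothetical snippet below (the sensor API and report fields are invented for illustration).

```python
# Generic illustration of more informative error reporting (not taken from
# the paper): attach the context a maintainer needs when re-raising a fault.
import json, time

class SensorReadError(RuntimeError):
    """Raised when a sensor read fails; carries diagnostic context."""

def read_sensor(channel):
    raise IOError("bus timeout")              # stand-in for a low-level fault

def read_sensor_checked(channel, retries=3):
    for attempt in range(1, retries + 1):
        try:
            return read_sensor(channel)
        except IOError as err:
            last = err
    # Report what failed, where, when, and what was already tried.
    context = {"channel": channel, "retries": retries,
               "timestamp": time.time(), "cause": str(last)}
    raise SensorReadError(json.dumps(context)) from last

try:
    read_sensor_checked(channel=7)
except SensorReadError as err:
    print("Fault report:", err)
```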

  • Analyzing interconnection design safety using bent pin analysis

    Publication Year: 2011 , Page(s): 1 - 6

    Low-tech hazards can cause catastrophic results in safety-critical and other systems. Designers sometimes fail to give adequate consideration to hazards in low-tech areas such as electrical interconnection designs, particularly when different design teams develop the subsystems on opposite ends of the interconnections. A good approach to addressing these hazards is a structured bent pin analysis that considers real-world conditions. This analysis can be applied to interconnection designs to adequately assess their failure modes and consequences. While much of the analysis can be automated to incorporate real-world conditions and produce accurate results, the human analyst must be aware that certain names or labels assigned by designers in their drawings may be misleading and present possible risks. The analyst must understand these risks when performing the analysis because they will affect its conclusions.
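    A toy sketch of the automatable core of a bent pin analysis is shown below: for each pin, enumerate the neighbors it could contact when bent and flag contacts whose net assignments would be hazardous. The pin map, net names, and hazard rule are invented for illustration and are not from the paper.

```python
# Toy bent-pin analysis sketch (pin map, nets, and hazard rule are assumed).
# For each pin, list the neighbours it could touch if bent, and flag any
# contact that would connect a power net to a sensitive net.

# Assumed pin-to-net assignment for a small connector.
pin_net = {1: "28V_PWR", 2: "GND", 3: "FIRE_CMD", 4: "SPARE", 5: "28V_RTN"}

# Assumed physical adjacency (which pins a bent pin could reach).
adjacent = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3, 5}, 5: {4}}

# Assumed hazard rule: power contacting a command, ground, or return net.
def hazardous(net_a, net_b):
    power = {"28V_PWR"}
    sensitive = {"FIRE_CMD", "GND", "28V_RTN"}
    return (net_a in power and net_b in sensitive) or \
           (net_b in power and net_a in sensitive)

findings = []
for pin, neighbours in adjacent.items():
    for other in neighbours:
        if hazardous(pin_net[pin], pin_net[other]):
            findings.append((pin, other, pin_net[pin], pin_net[other]))

for pin, other, net_a, net_b in findings:
    print(f"Pin {pin} bent onto pin {other}: {net_a} <-> {net_b}  HAZARD")
```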

  • Reliability growth and the caveats of averaging: A Centaur case study

    Publication Year: 2011 , Page(s): 1 - 7
    Cited by:  Papers (1)

    Spacecraft reliability modeling is plagued by data scarcity and lack of data applicability. Systems tend to be one-of-a-kind, and observed failures tend to be the result of systemic defects or human errors rather than component failures. The result is too often a gap between two extreme estimating approaches: component-based probabilistic risk assessments (PRA) lead to optimistic estimates by ignoring system-level failure modes, while history-based failure frequencies can lead to pessimistic estimates by neglecting non-homogeneity (between vehicles and vehicle configurations), reliability growth, and improvements in design. The problem of non-homogeneity is often considered solved once a system has a sufficiently long history. But in reality, rarely can tens of launches be considered samples of the same probability distribution. Launch vehicles undergo design changes in their history; more accurate estimates of reliability need to account for the risk introduced by design changes and for two types of reliability growth: growth of a given system via systematic tracking, assessing, and correction of the causes of failure uncovered in flights; and general technological or knowledge growth over subsequent generations of the system. Using the interesting history of the Centaur upper stage as an example, this paper proposes a pragmatic approach for the estimation of reliability growth over successive flights and configurations, which is applicable to any system with a history of several tens of flights. First considering the Centaur history as a single family, the paper compares the total success frequency to the 'instantaneous' success frequency over intervals of increasing flight number. This analysis shows that, as a result of the reliability growth experienced by Centaur, the total success frequency underestimates the risk of the first Centaur launches by a factor of almost 10, and overestimates the risk of the last Centaur launches by a factor of more than 3. But a closer analysis of Centaur history reveals that a number of failures were the result of design changes, as the stage design was improved or adapted for flight on new launch vehicle models. Understanding the risk introduced by design changes is important in the use of historical failure data as a surrogate for new systems. The second part of the paper shows that the 'interval' growth curve of the Centaur family is the average of distinct growth curves for each configuration. Over a given flight interval, the average success frequency can underestimate the risk of the newest generation of Centaur, and overestimate that of the older operating Centaur, by a factor of 2 to 5. The net result is that after almost 200 flights, the most reliable Centaur presented 10 times less risk than suggested by the total failure frequency, and 100 times less risk than the initial launches. Thus the 'mature' reliability was close to typical values generated by some bottom-up PRAs, but it was reached only after long flight experience, and the character of the residual failures is different.
    The authors hope that the practical approach presented in this paper can be of use to the industry in bridging the gap between forecasts based solely on historical failure frequencies and the results of component-based PRAs, and that it can foster a better understanding of the uncertainty bounds associated with various estimation methods, generally improving the relevance of reliability estimates to the problems faced by launch program decision makers.
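    The contrast between a single total success frequency and interval ('instantaneous') success frequencies can be reproduced on any flight history. The sketch below uses an invented history with built-in reliability growth, not actual Centaur data.

```python
# Illustration of total vs. interval ("instantaneous") success frequency.
# The flight outcome history below is invented, not actual Centaur data.
import numpy as np

rng = np.random.default_rng(2)
# Simulate reliability growth: early flights riskier than later ones.
p_success = np.concatenate([np.full(20, 0.80), np.full(60, 0.93),
                            np.full(120, 0.99)])
outcomes = rng.random(p_success.size) < p_success     # True = success

total_freq = outcomes.mean()
print(f"Total success frequency over {outcomes.size} flights: {total_freq:.3f}")

# Interval success frequency over blocks of 20 flights.
for start in range(0, outcomes.size, 20):
    block = outcomes[start:start + 20]
    print(f"Flights {start + 1:3d}-{start + len(block):3d}: {block.mean():.2f}")
```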

  • Improvements in estimating software reliability from growth test data

    Publication Year: 2011 , Page(s): 1 - 5
    Cited by:  Papers (2)

    John Musa's first book on Software Reliability Engineering advises the analyst to use the Musa Basic Law or the Musa-Okumoto Logarithmic Law to estimate failure intensity, depending on which provides the best fit to the data. He referred to the papers by Tractenberg and Downs as providing a foundation for these models. I propose using Musa's basic model in combination with an approach Duane and Codier used to estimate failure intensity (aka instantaneous failure rate) for software. Musa noted in his last book that the Basic law is optimistic in estimating residual errors and that the Logarithmic law is pessimistic because it implies infinite errors. There is currently a problem in deciding which law to use for software, Basic or Logarithmic, other than choosing the one with the higher correlation coefficient. I recommend that Musa's Basic law be used, but that the line be drawn using a method that reflects the cumulative nature of the statistics. This approach is based on both Downs' paper and the one presented by E. O. Codier at the 1968 Annual Symposium on Reliability, which described how to draw the line for Duane growth plots. Codier argued that when reliability growth data are plotted: (1) “The latter points, having more information content, must be given more weight than earlier points”; (2) “The normal curve fitting procedure of drawing the line through the 'center of gravity' of all the points should not be used”; and (3) “Unless the data is exceptionally noisy, the best procedure is to start the line on the last data point and seek the region of highest density of points to the left of it.” With regard to Musa Basic plots, the region of highest density would be to the right of the last point, not to the left of it. It should be noted here that the IEEE Recommended Practice on Software Reliability, for the application of Duane, states that “Least squares estimates for a and b of the straight line on log-log paper can be derived using recommended practice linear regression estimates.” But Codier's recommendations (above) have been shown to result in a more accurate measure of MTBF for hardware. This would be just as true for the application of Duane growth plots for software as for hardware. This paper shows even greater improvements when Codier's methods are applied to Musa's Basic model for software. The early paper by Thomas Downs referred to the curved lines that are fitted to operational profile data as “convex,” not logarithmic, because there is no firm basis for calling the distribution logarithmic. There are examples in physics of exponential decay, as in the half-life of a radioactive element, that follow a logarithmic curve, but no such mechanism exists to justify fitting a log function to software test data. The recommended method avoids defining the optimistic and pessimistic extremes of the curves as linear, logarithmic, or any other specific shape. It simply draws a line that follows the changing slope of the points naturally, as originally proposed by Codier. This paper develops a methodology for calculating failure intensity from the slope of the resulting line and from the cumulative failure rate at the final data point. It avoids the optimism of the Basic law and the pessimism of the Logarithmic law, as well as the decision of which to use.
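    As a rough numerical illustration of the weighting idea (not Musa's model or the paper's exact procedure), the sketch below compares an ordinary least-squares Duane-style fit with a fit that is anchored on the last data point and weights later points more heavily, in the spirit of Codier's recommendations. The data are simulated.

```python
# Sketch contrasting an ordinary least-squares Duane fit with a fit that,
# in the spirit of Codier's advice, anchors the line on the last data point
# and weights later points more heavily.  Data and weights are assumptions.
import numpy as np

# Simulated cumulative test time (h) and cumulative failure counts.
T = np.array([50, 120, 260, 480, 900, 1600, 2700, 4300.0])
N = np.array([4, 7, 11, 14, 17, 19, 21, 22.0])

x = np.log(T)
y = np.log(T / N)              # log cumulative MTBF (Duane plot ordinate)

# (a) Ordinary least squares over all points.
b_ols, a_ols = np.polyfit(x, y, 1)

# (b) Anchor on the last point; weighted slope favouring later points.
w = np.arange(1, len(x) + 1, dtype=float)       # later points weigh more
dx, dy = x - x[-1], y - y[-1]
b_anch = np.sum(w * dx * dy) / np.sum(w * dx * dx)

mtbf_ols  = np.exp(a_ols + b_ols * x[-1])       # fitted value at last point
mtbf_anch = np.exp(y[-1])                       # anchored line passes through it
print(f"OLS slope      : {b_ols:.3f}, cumulative MTBF ~ {mtbf_ols:.0f} h")
print(f"Anchored slope : {b_anch:.3f}, cumulative MTBF ~ {mtbf_anch:.0f} h")
```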

  • Effective oversight & development of DFMEAs on large scale, complex systems

    Publication Year: 2011 , Page(s): 1 - 5
    Cited by:  Papers (1)

    Design failure mode and effects analysis (DFMEA) is part of any Design for Reliability (DfR) effort to design, develop, and build a large, technologically challenging product. As there may be many cross-functional teams working on DFMEA at any point in time, management must provide effective oversight and project management skills to ensure that the final product will not only be on time, but will also provide the end user with high quality and reliability. The use of the DFMEA methodology is as much an art as a science. Accordingly, management must provide guidelines for DFMEA formats, conventions, and team decision-making. The author shares his insight in this regard, as it is often difficult to adhere simply and strictly to the automotive DFMEA standards. This is particularly true when working on large-scale complex systems, where the needs of the organization far exceed the guidelines provided by the standard. Often, there are features of the older and obsolete standard, MIL-STD-1629A, that are desirable to use, particularly if they are upgraded to meet DfR approaches. With so many teams working on DFMEA, management has an obligation to manage this effort to ensure that phase-gated design review deadlines are met. To meet this goal, the author shares an empirical technique for tracking DFMEA statistics that can be used to assess project completion goals.

  • Quantifying the value of risk-mitigation measures for launch vehicles

    Publication Year: 2011 , Page(s): 1 - 7
    Cited by:  Papers (1)

    The efficient development of a highly reliable system, such as a new crew launch vehicle, cannot afford to ignore the lessons of history. A number of interesting studies of launch vehicle failures provide very valuable, albeit qualitative, “lessons learned” on measures that a risk-informed program should take. If schedule and funds were unlimited, a very intensive and exhaustive test program would be the course to follow before the first flight of a new launcher. But when a program is faced with stringent schedule and cost constraints, it needs to optimize its test planning so as to meet those constraints without sacrificing safety. Making such trade-offs intelligently requires a way to quantify the relationship between the initial unreliability of a system and the array of risk-mitigating measures on hand. This paper proposes several analysis steps beyond the existing studies of historical launch vehicle failures, which can form the basis for quantifying the lessons of history. First, risk cannot be quantified accurately by summing all failures across history, because systems were not exposed to the same design deficiencies at each flight. Early failures typically represent sources of high risk, which are eliminated by corrective actions after the early flights, while late failures are often indicative of low-risk design deficiencies that remain present for many flights. Thus failures occurring in the early launches of a system actually represent more risk than failures occurring later in history. Quantifying historical risk properly requires taking into account the reality of reliability growth. Second, knowing what failed in the past does not provide direct guidance on how to reduce the risk of a new design. Of utmost relevance are the kinds of measures that could have prevented the failures in the first place. Simplistically put, knowing that the majority of launch vehicle failures originated in propulsion systems is of limited use to designers and managers, who already pay tremendous attention to that central subsystem. By contrast, a quantification of the potential risk reduction possible by submitting an engine to stress testing, for example, could be valuable in supporting the cost and schedule trade-offs that decision makers are unavoidably faced with. This paper proposes a method for reconsidering the failures of historical launchers in that new light and illustrates its application to two historical examples, the Ariane and Centaur systems. The results provide an approximate quantification of the risk reduction potentially offered by improvements in areas such as: sufficient flight-like testing at the system level; definition of, and testing for, margins that consider all phases of flight, including not only steady-state but also transient conditions; stress testing and testing for variability at the component and engine levels; analysis of the results of every single flight with an eye toward uncovering design defects (“post-success investigations”); re-examination of the margins of all components and systems (including software) and re-qualification after every single change in design, configuration, or mission profile; and maintenance of very rigorous levels of electrical and cabling parts control, quality assurance, and contamination control in all phases of manufacturing, assembly, and launch operations.
    The authors hope that the techniques and insights presented in this paper can be of use to the aerospace industry as it embarks on the flight certification program for the next-generation crewed launcher.

  • An improved modular approach for dynamic fault tree analysis

    Publication Year: 2011 , Page(s): 1 - 5

    The modularization technique allows efficient simplification of dynamic and static fault trees (FT). Each independent sub-tree (module) in a static FT can be calculated separately and substituted by a basic event with the obtained probability of failure. However, this procedure is significantly restricted in a dynamic FT that is converted to a Markov chain. In the general case, an independent sub-tree inside a dynamic FT cannot be converted to a basic event, because the corresponding failure rate (FR) is not constant and is not defined at an arbitrary time. In the present paper we consider a set of cases in which this modularization technique is still possible inside a dynamic FT.
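    The restriction is easiest to see against the static case, where module substitution is exact. The sketch below evaluates an independent AND sub-tree separately and substitutes it as a basic event; the probabilities are assumed point values for illustration.

```python
# Static fault-tree modularization sketch: an independent sub-tree is
# evaluated separately and substituted as a single basic event.
# Probabilities are assumed point unavailabilities at the mission time.
pA, pB, pC = 0.02, 0.05, 0.01

# Module M = A AND B (independent sub-tree).
pM = pA * pB

# Top event = M OR C, computed with and without the substitution.
p_top_modular = 1 - (1 - pM) * (1 - pC)
p_top_direct  = 1 - (1 - pA * pB) * (1 - pC)

print(p_top_modular, p_top_direct)   # identical values
# In a dynamic FT converted to a Markov chain this shortcut generally fails,
# because the module's equivalent failure rate is not constant in time.
```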

  • New approach for risk analysis and management in medical engineering

    Publication Year: 2011 , Page(s): 1 - 6

    Risk management in its current form does not fulfill today's requirements for enterprises in risk-sensitive industries. Risks resulting from well-known boundary conditions, such as shorter development times, a higher number of variants, or global markets can only be identified inadequately by existing methods. In particular, increased product complexity, e.g., in the medical engineering branch, makes it difficult for companies to assess risks correctly, and individual persons are less able to judge a product thoroughly. To improve the risk management process, the iFEM method (innovative function effect modeling) was developed; it is a new approach to identifying and assessing risks. First, critical risk areas are identified by creating a tree that visualizes the actual state of a complete technological system. These critical risk areas are then analyzed with an object model and a function-effect model. The result is a comprehensive risk inventory in which the cause and assessment for each of the risks have been defined. These results can be directly addressed through effective risk mitigation. The validation of iFEM was carried out in an enterprise from the medical engineering industry. The results were structured and documented so that the information could be used to improve the method. iFEM is a suitable method for modeling an overall system for the multidisciplinary analysis of a complex product. It permits a common understanding of a system and combines manufacturing and operational processes with system functions and inherent risks. The benefits of iFEM are the systematic identification of risks and inconsistencies and the expansion of employees' system understanding.

  • Spares provisioning for repairable systems under fleet expansion

    Publication Year: 2011 , Page(s): 1 - 5

    We consider a spares provisioning problem in which a fleet of identical repairable systems is acquired in stages over time. Each system comprises a series of line replaceable units (LRUs), and spares must be provisioned so as to achieve a high availability level. The first part of the paper deals with methods for making statistical inferences on LRU reliability, and the second part deals with an optimal spare-unit provisioning policy based on the output of the reliability analysis. In particular, in the former we consider a realistic scenario in which system failures are non-stationary and triggered by multiple s-dependent competing risks. In the latter, the inventory system dynamics are analyzed and an optimal spares provisioning policy is obtained.
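    The paper's inference and optimization machinery is not reproduced here; as a simpler, generic illustration of staged-fleet spares sizing, the sketch below uses a Poisson demand model with a target no-stock-out probability. All rates, fleet sizes, and targets are assumptions.

```python
# Generic spares-sizing sketch (NOT the paper's method): Poisson LRU demand
# for a fleet acquired in stages, with a target probability of no stock-out
# over the repair turnaround time.  All parameters are assumptions.
from scipy import stats

lru_failure_rate = 2.0e-4          # failures per operating hour per LRU
lrus_per_system  = 6
turnaround_hours = 2000.0          # repair pipeline turnaround
target_fill      = 0.95

# Fleet size by quarter as systems are acquired in stages.
fleet_by_quarter = [4, 8, 12, 16]

for q, n_systems in enumerate(fleet_by_quarter, start=1):
    # Expected LRU demand during one turnaround interval.
    demand = lru_failure_rate * lrus_per_system * n_systems * turnaround_hours
    # Smallest spare count s with P(demand <= s) >= target.
    spares = int(stats.poisson.ppf(target_fill, demand))
    print(f"Quarter {q}: fleet={n_systems:2d}, expected demand={demand:5.2f}, "
          f"spares needed={spares}")
```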

  • Optimized acoustic microscopy screening for multilayer ceramic capacitors

    Publication Year: 2011 , Page(s): 1 - 4
    Cited by:  Papers (1)

    A program was experiencing a significant number of early-life failures due to infant mortality of Multilayer Ceramic Chip Capacitors (MLCC). Board rework was difficult and expensive due to the locations of the MLCCs and the complexity of the board. The proposed solution was to develop an improved acoustic microscopy screen for MLCCs with latent defects. 30 MHz acoustic microscopy screening is unable to detect a significant number of life-limiting defects in MLCCs; 50 MHz screening is able to detect the defects that were missed at 30 MHz. There was not a 100% link between the detection of an anomaly and a device failure in this study. Four factors were identified that correlate strongly with MLCC failure rates: (1) dielectric composition, (2) delaminations, (3) size, and (4) capacitance value greater than 100,000 pF. MLCC program requirements have been changed to require 50 MHz, two-sided, C-Mode Scanning Acoustic Microscope (C-SAM) 100% lot inspection. The enhanced screen had a positive cost impact: a different transducer was required in a screen that was already in place, and although screen fallout of MLCCs increased, the reduced board rework/repair costs more than offset the part cost. Parts passing the enhanced screen have not shown the early-life failure issue. Additional work is necessary to determine the effects on MLCC reliability of defects other than delaminations.

  • Design of reliability demonstration testing for repairable systems

    Publication Year: 2011 , Page(s): 1 - 6

    Reliability Demonstration Testing (RDT) for non-repairable systems has been successfully implemented in many industries, such as microelectronics, aerospace, and healthcare. In designing RDTs, the well-known beta-binomial formula is often applied to determine the sample size necessary to demonstrate the required reliability metric at a given confidence level. However, many systems, such as cars and combine harvesters, are repairable. For repairable systems, RDT design methods have been developed mainly for Homogeneous Poisson Processes (HPP). This paper provides a new RDT design method applicable to both HPP and Non-Homogeneous Poisson Process (NHPP) cases. The results show that the proposed method can effectively determine the required testing time and number of test units needed to demonstrate the required Mean Time Between Failures (MTBF). A comparison study is also conducted to illustrate the superior performance of the proposed method over some existing methods.
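    For background (and not as the paper's NHPP method), the sketch below shows the two standard calculations the abstract alludes to: the zero-failure binomial sample size for non-repairable RDT, and the chi-square based total test time for demonstrating an MTBF under an HPP assumption.

```python
# Standard RDT sizing sketches for background (not the paper's NHPP method).
import numpy as np
from scipy import stats

# (a) Non-repairable, zero-failure binomial test:
#     demonstrate reliability R at confidence C with n units, 0 failures.
R, C = 0.95, 0.90
n = int(np.ceil(np.log(1 - C) / np.log(R)))
print(f"Units needed for R={R}, C={C}, zero failures: {n}")

# (b) Repairable HPP case: total test time to demonstrate an MTBF target
#     at confidence C allowing r failures (chi-square relationship).
mtbf_target, r = 500.0, 1                     # hours, allowed failures
T = 0.5 * stats.chi2.ppf(C, 2 * r + 2) * mtbf_target
print(f"Total test time for MTBF>={mtbf_target} h, C={C}, r={r}: {T:.0f} h")
```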

  • Using FMECA to design sustainable products

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (1)

    In the past, customers were mainly concerned with buying products that were both reliable and easy to maintain. While these qualities are still important, today's customers are increasingly interested in purchasing products that are also sustainable. This poses new challenges for companies that develop either commercial or military products. Now, their products not only need to work well and be easy to service, but they also need to use less energy and fewer resources and be environmentally responsible. These criteria apply to the entire life cycle of a product, from when it is first manufactured and put into service to when it is discarded or decommissioned. FMECA can be a key enabler to help companies design and produce more sustainable products. For many years, Failure Mode, Effects and Criticality Analysis (FMECA) has been a very effective technique for identifying possible failures and mitigating their effects. Although mainly used to identify failures associated with the design of a product (DFMEA) or flaws in the manufacturing process (PFMEA), it can easily be expanded to address concerns associated with sustainability. Chief among these are ecological risks that may be associated with a particular design or with the manufacture, use, and disposal of a product. FMECA can be used effectively in each of these areas not only to identify potential risks but also to mitigate adverse effects on the environment.

  • Surviving the Lead Reliability Engineer role in high unit value projects

    Publication Year: 2011 , Page(s): 1 - 6

    A project with a very high unit value within a company is defined as a project where a) the project constitutes a one-of-a-kind (or two-of-a-kind) national-asset type of project, b) the cost is very large, and c) a mission failure would be a very public event that would hurt the company's image. The Lead Reliability Engineer in a high-visibility project is by default involved in all phases of the project, from conceptual design to manufacture and testing. This paper explores a series of lessons learned over ten years of practical industrial experience by a Lead Reliability Engineer. We expand on the concepts outlined by these lessons learned via examples. The lessons learned are applicable to all industries.

  • Hazard analysis based on human-machine-environment coupling

    Publication Year: 2011 , Page(s): 1 - 7
    Cited by:  Papers (1)

    The concept of "Human (H)-Machine (M)-Environment (E) coupling" is put forward, and the connotative meaning and representation of hazards owing to coupling are described. Based on a Process Breakdown Structure (PBS), a method for characterizing H-M-E coupling in an operational process is put forward, and a multi-view model for integrated analysis is established. The model describes an operational process from nine perspectives: H, M, E, H-H, M-M, H-M, M-H, E-H, and E-M. It provides pragmatic technical means to explore the interactions and interconnections among human, machine, and environment factors. Based on engineering practice, a comprehensive hazard analysis method is built. It helps to discover the modes and causes of hazards resulting from complex H-M-E coupling, and to determine the “trigger” mechanism in the occurrence of H-M-E hazards. The method provides integrated technical support for subsequent risk assessment. Finally, by means of the methods and models above, a hazard analysis based on H-M-E coupling was carried out for the approach phase of an airliner. The analysis verified the engineering feasibility and effectiveness of the above-mentioned methods. Moreover, the analysis results and findings identified hazards due to H-M-E coupling and combinations of hazard causes during the flight phase, which is difficult for methods that disregard the interrelations among system elements. Design defects were thus corrected in time to keep the human, machine, and environment in the best state and to improve the inherent safety of the system. With the coupling model, some hazards that are hard to identify with common system safety analysis methods can be recognized, and the relationship from hazard causes to mishaps is expressed more clearly. Its practical application demonstrates its engineering feasibility.

  • Special topics for consideration in a design for reliability process

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (1)

    In this paper, we look at certain areas of the Design for Reliability (DFR) process where missteps or misapplications are common, due to misunderstood “common practices” or to attempts to either oversimplify the process or introduce unnecessary complexity. We show how these practices can have detrimental effects on the reliability of a product and identify the areas for improvement. We identify three major areas where missteps most commonly occur. The first is the early stage of the DFR process, including the practices of setting requirements and specifications. We explore the importance of understanding usage and environmental conditions and discuss issues such as using Mean Time Between Failures as a sole metric, or reporting mean estimates without associated confidence bounds. The second area prone to missteps is the DFR stage in which the reliability of a product is quantified; here, we discuss problems such as testing for failure modes that do not correlate with actual usage, using inappropriate life-stress relationship models, or modeling the failure rate behavior incorrectly. The third area is the DFR stage that addresses the assurance and sustainment of reliability; here, we present missteps related to setting up demonstration tests without statistical significance, not collecting the appropriate data for warranty analysis, and either ignoring suspensions or assuming that units have survived beyond the warranty period.
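    One of the missteps noted above is quoting a mean MTBF without confidence bounds. The sketch below shows the standard chi-square bounds for a time-terminated test under an exponential assumption; the test data are invented for illustration.

```python
# Point estimate vs. confidence bounds for MTBF from a time-terminated
# exponential test (standard chi-square bounds; data values are assumed).
from scipy import stats

total_time = 12000.0      # cumulative operating hours on test
failures   = 4            # observed failures
conf       = 0.90         # two-sided confidence level

mtbf_hat = total_time / failures
alpha = 1 - conf
lower = 2 * total_time / stats.chi2.ppf(1 - alpha / 2, 2 * failures + 2)
upper = 2 * total_time / stats.chi2.ppf(alpha / 2, 2 * failures)

print(f"Point estimate : {mtbf_hat:.0f} h")
print(f"{conf:.0%} bounds     : [{lower:.0f}, {upper:.0f}] h")
```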

  • Planning a reliability growth program utilizing historical data

    Publication Year: 2011 , Page(s): 1 - 6
    Cited by:  Papers (3)

    From historical data, this paper notes the significant patterns and key parameters that provide the basis for general guidelines useful in establishing a realistic reliability growth testing program. These guidelines also address concerns raised by the 2008 Defense Science Board Task Force on reliability. As noted by this Task Force, two major risk areas are the initial MTBF and the Growth Potential MTBF. In particular, the Defense Science Board Task Force report (Ref. 10) found that “low initial MTBF and low Growth Potential are the most significant reasons that systems are failing to meet their operational suitability requirements.” This paper addresses data and experience on these two key parameters and provides practical information on how they are managed. This information, together with additional information on other parameters such as growth rates, is very useful in reducing the risks and cost of a reliability growth program. In addition to the data, guidelines are given regarding the use of these parameters to address the concerns of the Defense Science Board Task Force.
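    As a hedged illustration of how an initial MTBF, a growth rate, and a Growth Potential MTBF combine into a planning curve (using the common Duane/Crow idealized relationships, not the paper's guidelines or data), consider the sketch below; every number in it is assumed.

```python
# Illustrative Duane-style planning curve built from an assumed initial
# MTBF, growth rate, and growth potential (numbers are not from the paper).
import numpy as np

mtbf_initial   = 40.0      # MTBF at the end of the initial test block (h)
t_initial      = 500.0     # cumulative test hours at that point
alpha          = 0.35      # planned growth rate
mtbf_potential = 220.0     # Growth Potential MTBF (upper limit)

for t in (500, 2000, 5000, 10000, 20000):
    mtbf_cum = mtbf_initial * (t / t_initial) ** alpha
    mtbf_inst = mtbf_cum / (1 - alpha)            # instantaneous MTBF
    planned = min(mtbf_inst, mtbf_potential)      # cap at growth potential
    print(f"t = {t:6d} h  planned MTBF ~ {planned:6.1f} h")
```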
