
Annual Reliability and Maintainability Symposium, 2003

Date: 27-30 Jan. 2003


Displaying Results 1 - 25 of 102
  • Annual Reliability and Maintainability Symposium. 2003 Proceedings (Cat. No.03CH37415)

    Publication Year: 2003

    The following topics are dealt with: reliability; maintainability; software reliability; product design; product assurance; repairable systems modeling; accelerated life testing; safety; risk management; probabilistic risk assessment; reliability prediction; life cycle costing; reliability centered maintenance; reliability and maintainability tools application; Bayesian methods; aging; modeling; optimisation; fault tree analysis; equipment maintenance optimisation; reliability statistical methods; power systems; quality management/six sigma; and lessons learned.

  • Author index

    Publication Year: 2003 , Page(s): 613 - 614
  • Key Word Index

    Publication Year: 2003 , Page(s): 615 - 616
  • Modeling the "Good enough to release" decision using V&V preference structures and Bayesian belief networks

    Publication Year: 2003 , Page(s): 568 - 573
    Cited by:  Papers (1)

    Throughout the process of determining when a unique computer-based system is "good enough to release," an assessor must consider and reconcile process and product evidence as well as make a judgment on the severity of remaining faults. The assessor may be working with uncertain or incomplete knowledge, and may have little data by which the evidence can be validated and verified. Moreover, the assessment may be done on an ad-hoc basis with unstated or untested assumptions concerning the relative importance of evidence. It can be difficult to repeat ad-hoc or loosely structured assessments with any degree of confidence, and it may be nearly impossible to recreate a given assessment for an audit. A model of the "good enough to release" decision based upon quasi-order preference structures of validation and verification (V&V) activities is proposed in this paper. We focus on modeling the release decision for unique computer-based systems because of the types of evidence assessed during the decision. We use quasi-order preference structures to determine the V&V activities that are generally considered to be the most effective, and to determine relationships among the activities. We use a Bayesian belief network (BBN) as the modeling formalism because a BBN's characteristics support the type of assessment process being modeled.

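    The network structure, node set, and conditional probabilities used by the authors are not given in the abstract. As a rough illustration of how a BBN supports this kind of release assessment, the sketch below hand-codes a hypothetical two-evidence network (node names and CPT values are invented) and computes the posterior belief in release readiness by enumeration.

```python
# Minimal sketch (not the authors' model): a two-evidence Bayesian belief
# network for a "good enough to release" assessment.  All probabilities are
# hypothetical placeholders.
P_READY = 0.5                      # prior belief that the system is releasable

# P(evidence observed | ReleaseReady) for two V&V activities
CPT = {
    "code_review_clean":  {True: 0.80, False: 0.30},
    "system_test_passed": {True: 0.90, False: 0.40},
}

def posterior_ready(observations):
    """Posterior P(ReleaseReady | observations) by direct enumeration, assuming
    the evidence nodes are conditionally independent given release readiness."""
    like_ready, like_not = P_READY, 1.0 - P_READY
    for name, seen in observations.items():
        p_true, p_false = CPT[name][True], CPT[name][False]
        like_ready *= p_true if seen else (1.0 - p_true)
        like_not *= p_false if seen else (1.0 - p_false)
    return like_ready / (like_ready + like_not)

print(posterior_ready({"code_review_clean": True, "system_test_passed": True}))
print(posterior_ready({"code_review_clean": True, "system_test_passed": False}))
```
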
  • Rectifying FMEA-the inter-crossing method

    Publication Year: 2003 , Page(s): 371 - 373

    Failure modes and effects analysis (FMEA) is one of the most important analysis tools in reliability engineering. However, some failure causes may be very difficult to detect when using only FMEA, and we found no methodical way to circumvent this limitation. This article describes the inter-crossing method, which significantly decreases the chance of failing to notice failure causes. The method that we propose is systematic; it enables one to "scan" the design and to expose elusive failure causes.

  • A practical approach to system reliability growth modeling and improvement

    Publication Year: 2003 , Page(s): 351 - 359

    In a product development process, to develop an appropriate design validation and verification program for reliability assessment, one has to understand the functional behavior of the system, the role of components in achieving required functions, and the failure modes that arise if a component or sub-system fails to perform its required function. In this paper, the authors propose a simple and practical two-stage approach to system reliability growth modeling considering components, functions, and failure modes. Considering these three dimensions helps uncover the weak spots in the design responsible for low system reliability. The proposed method assumes a Weibull failure-time distribution, and the reliability model is based on a Bayesian framework that can incorporate even fuzzy information. The fuzzy logic model developed for this purpose is used to quantify the engineering judgment, or fuzzy information, of the reliability improvement attributed to design changes or corrective actions. Uncertainty in data and information at the component level propagates to system-level reliability and makes system reliability prediction highly unreliable. The paper suggests a variance reduction strategy to give more accurate system reliability predictions.

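    The two-stage Bayesian/fuzzy model itself is not reproduced in the abstract. As a simple numeric companion, the sketch below shows the Weibull reliability function the method assumes and one possible (assumed, not the authors') way a judged corrective-action effectiveness could be folded in by rescaling the characteristic life; all values are placeholders.

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) for a Weibull failure-time distribution with shape beta and
    characteristic life eta."""
    return math.exp(-((t / eta) ** beta))

# Hypothetical illustration: a corrective action judged (e.g., via a fuzzy
# effectiveness score mapped into [0, 1]) to remove 60% of a failure mode's
# hazard is modelled here as a rescaling of the characteristic life.
beta, eta_before = 1.8, 1200.0          # placeholder shape and life (hours)
effectiveness = 0.6
eta_after = eta_before / ((1.0 - effectiveness) ** (1.0 / beta))

for t in (500.0, 1000.0):
    print(t, weibull_reliability(t, beta, eta_before),
             weibull_reliability(t, beta, eta_after))
```
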
  • A new approach to solve dynamic fault trees

    Publication Year: 2003 , Page(s): 374 - 379
    Cited by:  Papers (16)

    The traditional static fault trees with AND, OR, and voting gates cannot capture the dynamic behavior of system failure mechanisms such as sequence-dependent events, spares and dynamic redundancy management, and priorities of failure events. Therefore, researchers introduced dynamic gates into fault trees to capture these sequence-dependent failure mechanisms. Dynamic fault trees are generally solved using automatic conversion to Markov models; however, this process generates a huge state space even for moderately sized problems. In this paper, the authors propose a new method to analyze dynamic fault trees. In most cases, the proposed method solves the fault trees without converting them to Markov models. They use the best methods that are applicable for static fault tree analysis in solving dynamic fault trees. The method is straightforward for modular fault trees; for the general case, they use conditional probabilities to solve the problem. In this paper, the authors concentrate only on exact methods. The proposed methodology solves the dynamic fault tree quickly and accurately.

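    The authors' full procedure is not reproduced here. As a small illustration of the sequence-dependent behavior static gates cannot express, the sketch below evaluates a priority-AND (PAND) gate over two hypothetical exponential components with a conditional-probability integral and cross-checks it by Monte Carlo sampling.

```python
import math, random

def pand_closed_form(lam_a, lam_b, t):
    """P(T_A < T_B and T_B <= t) for independent exponential failure times:
    the integral of f_B(b) * P(T_A < b) over [0, t]."""
    return (1 - math.exp(-lam_b * t)) - (lam_b / (lam_a + lam_b)) * (
        1 - math.exp(-(lam_a + lam_b) * t))

def pand_monte_carlo(lam_a, lam_b, t, n=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        ta = rng.expovariate(lam_a)
        tb = rng.expovariate(lam_b)
        if ta < tb <= t:          # A must fail first, then B, within the mission
            hits += 1
    return hits / n

lam_a, lam_b, mission = 1e-3, 2e-3, 1000.0   # placeholder rates and mission time
print(pand_closed_form(lam_a, lam_b, mission))
print(pand_monte_carlo(lam_a, lam_b, mission))
```
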
  • A process for failure modes and effects analysis of computer software

    Publication Year: 2003 , Page(s): 365 - 370
    Cited by:  Papers (8)

    Software FMEA is a means to determine whether any single failure in computer software can cause catastrophic system effects, and it additionally identifies other possible consequences of unexpected software behavior. The procedure described here was developed and used to analyze mission- and safety-critical software systems. The procedure includes using a structured approach to understanding the subject software, developing rules and tools for doing the analysis as a group effort with minimal data entry and human error, and generating a final report. Software FMEA is a kind of implementation analysis that is an intrinsically tedious process, but database tools make the process reasonably painless, highly accurate, and very thorough. The main focus here is on the development and use of these database tools.

  • Reliability prediction of substitute parts based on component temperature rating and limited accelerated test data

    Publication Year: 2003 , Page(s): 518 - 522
    Cited by:  Papers (2)

    This paper presents a practical method of estimating the failure rate of a substitute electronic component with temperature characteristics different from those of the original part. For a number of reasons, electronics designers occasionally substitute electronic components with comparable parts having a different power-temperature rating. The proposed method describes how the accelerated test data pertaining to the original part can be applied to the substitute component while accounting for the difference in temperature ratings. Also, due to climatic differences between the regions of operation, the authors suggest the use of an equivalent failure rate by incorporating the temperature distribution function into the process of warranty prediction. The proposed combined technique offers the reliability engineer a practical procedure to achieve considerable savings in terms of time and cost of product validation by eliminating redundant testing and improving the accuracy of reliability prediction.

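    The paper's exact translation procedure is not given in the abstract. The usual way temperature-accelerated data are moved between thermal conditions is an Arrhenius acceleration factor, sketched below with placeholder activation energy, temperatures, and failure rate.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_test_c):
    """Arrhenius acceleration factor between a use temperature and a test
    temperature (both in deg C), for an activation energy ea_ev in eV."""
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))

# Hypothetical numbers: translate an accelerated-test failure rate observed at
# 125 C on the original part to an estimate at a 55 C use condition.
af = arrhenius_af(ea_ev=0.7, t_use_c=55.0, t_test_c=125.0)
lambda_test = 2.0e-6            # failures/hour observed in the accelerated test
lambda_use = lambda_test / af   # estimated failure rate at the use temperature
print(af, lambda_use)
```
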
  • Models to consider load-sharing in reliability calculation and simulation of systems consisting of mechanical components

    Publication Year: 2003 , Page(s): 493 - 499
    Cited by:  Papers (1)

    The load-sharing case occurs in systems in which a load is shared by several components; the failure of a component then results in a higher load share for the surviving components. This paper places emphasis on the analytic description of systems with load-sharing and on the algebraic calculation and simulative solution of their reliability. The capacity flow model, the Freund model, and the state-graph method are presented as models for the analytic description. The state-graph method is the most general model: systems with individual failure behavior of the components, individual load steps, as well as complex system structures can be considered. All three models are restricted to components with constant failure rates in the corresponding load level. However, in most mechanical systems the failure rates of components are not constant, due to aging and wear-out. A more adequate method for the load-sharing case in mechanical systems is the application of simulation techniques. In this paper the general simulation algorithms are presented for 1-out-of-n systems consisting of components with constant failure rates and of components with time-dependent failure rates. In the case of constant failure rates, the failure times in the load levels can be sampled directly from the failure distributions due to the memoryless property of the exponential distribution. The simulation results are shown for both a 1-out-of-2 and a 1-out-of-3 system. For verification purposes the results of the simulation are compared with the analytic solution of the Freund model. In the case of time-dependent failure rates a dynamic modification of the failure distribution of the components is necessary. This is done by a time shift of the distributions and transformed random numbers. The simulation results are presented for a 1-out-of-2 and a 1-out-of-3 system. The failure behavior of the mechanical components is described by a Weibull distribution in each load level. In order to verify the simulation results, the analytic bounds of the corresponding system with no increased load and maximum load are calculated.

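    A hedged sketch of the constant-failure-rate case described above: a 1-out-of-2 load-sharing system in which the survivor's failure rate rises after the first failure, estimated by Monte Carlo (exploiting the memoryless property) and checked against the analytic survival function of the two exponential phases. The rates and mission time are placeholders.

```python
import math, random

def r_analytic(t, lam, lam_up):
    """1-out-of-2 load-sharing survival: time to first failure ~ Exp(2*lam),
    then the survivor fails at the increased rate lam_up (memoryless case)."""
    a, b = 2.0 * lam, lam_up          # requires a != b
    return (b * math.exp(-a * t) - a * math.exp(-b * t)) / (b - a)

def r_monte_carlo(t, lam, lam_up, n=200_000, seed=2):
    rng = random.Random(seed)
    alive = 0
    for _ in range(n):
        t_first = rng.expovariate(2.0 * lam)        # first of the two components
        t_sys = t_first + rng.expovariate(lam_up)   # survivor under the full load
        if t_sys > t:
            alive += 1
    return alive / n

lam, lam_up, t = 1e-3, 3e-3, 800.0
print(r_analytic(t, lam, lam_up), r_monte_carlo(t, lam, lam_up))
```
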
  • A risk evaluation approach for safety in aerospace preliminary design

    Publication Year: 2003 , Page(s): 159 - 163
    Cited by:  Papers (1)

    The preliminary design phase of any program is key to its eventual successful development; the more advanced a design, the more this tends to be true. For this reason the preliminary design phase is particularly important in the design of aerospace systems. Errors in preliminary design tend to be fundamental and tend to cause programs to be abandoned, or to be changed fundamentally and at great cost, later in the design development. In the past, aerospace system designers have used the tools of systems engineering to enable the development of designs that were more likely to be functionally adequate. However, doing so has meant the application of significant resources to the review and investigation of proposed design alternatives. This labor-intensive process can no longer be afforded in the current design environment. That realization has led to the development of an approach that attempts to focus the tools of systems engineering on the risk drivers in the design. One of the most important factors in the development of successful designs is adequately addressing the safety and reliability risk. All too often these important features of the developed design are left as afterthoughts as the design gives sway to the more traditional performance focus. Thus, even when a successful functional design is forthcoming, significant resources are often required to reduce its reliability and safety risk to an acceptable level. The approach builds upon the experience base of the integrated shuttle risk assessment and its expansions and applications to the evaluation of newly proposed launcher designs. It uses the shuttle-developed PRA models and associated data sets as functional analogs for new launcher functions. The concept is that the associated models would characterize the function of any launcher developed for those functions on the shuttle. Once this functional decomposition and reconstruction has been accomplished, a proposed new design is compared on a function-by-function basis, and specific design enhancements that have significant promise of reducing the functional risk over the shuttle are highlighted. The potential for enhancement is then incorporated into those functions by suitable modification of the shuttle models and/or the associated quantification data sets representing those design features addressed by the new design. The level of risk reduction potential is then estimated from those component failure modes and mechanisms identified for the shuttle function and eliminated in the new design. In addition, heritage data that would support the claims of risk reduction for those failure modes and mechanisms that remain, albeit at a reduced level of risk, are applied.

  • An assessment of RPN prioritization in a failure modes effects and criticality analysis

    Publication Year: 2003 , Page(s): 380 - 386
    Cited by:  Papers (14)

    The risk priority number (RPN) methodology for prioritizing failure modes is an integral part of the automobile FMECA technique. The technique consists of ranking the potential failures from 1 to 10 with respect to their severity, probability of occurrence, and likelihood of detection in later tests, and multiplying the numbers together. The result is a numerical ranking, called the RPN, on a scale from 1 to 1000. Potential failure modes having higher RPNs are assumed to have a higher design risk than those having lower numbers. Although it is well documented and easy to apply, the method is seriously flawed from a technical perspective. This makes the interpretation of the analysis results problematic. The problems with the methodology include the use of the ordinal ranking numbers as numeric quantities, the presence of holes making up a large part of the RPN measurement scale, duplicate RPN values with very different characteristics, and varying sensitivity to small changes. Recommendations for an improved methodology are also given.

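    The scale properties the paper criticizes are easy to reproduce: the sketch below enumerates every severity/occurrence/detection combination, forms RPN = S x O x D, and counts how many of the integers 1-1000 are attainable and how many attainable values arise from more than one combination.

```python
from itertools import product
from collections import Counter

# RPN = severity x occurrence x detection, each ranked 1..10.
rpns = Counter(s * o * d for s, o, d in product(range(1, 11), repeat=3))

attainable = len(rpns)                    # distinct RPN values
holes = 1000 - attainable                 # integers in 1..1000 never produced
duplicated = sum(1 for v, c in rpns.items() if c > 1)

print(f"distinct RPN values: {attainable} of 1000 ({holes} holes)")
print(f"RPN values reachable by more than one (S,O,D) combination: {duplicated}")
```

    Running it shows only 120 distinct RPN values, so most of the nominal 1-1000 scale consists of holes, which is one of the flaws discussed in the paper.
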
  • Comparing reliability predictions to field data for plastic parts in a military, airborne environment

    Publication Year: 2003 , Page(s): 207 - 213
    Cited by:  Papers (3)

    This paper examines two popular prediction methods and compares the results to field data collected on plastic encapsulated microcircuits (PEMs) operating in a military, airborne environment. The comparison study focused on three digital circuit card assemblies (CCAs) designed primarily with plastic, surface mount parts. Predictions were completed using MIL-HDBK-217 models and PRISM®, the latest software tool developed by the Reliability Analysis Center (RAC). The MIL-HDBK-217 predictions that correlated best with the field data were based on quality levels (πQ) of 2 and 3, rather than the typical πQ values of 5 or higher traditionally assigned per the handbook's screening classifications for commercial, plastic parts. The initial findings from the PRISM® tool revealed the predictions were optimistic in comparison to the observed field performance, meaning the predictions yielded higher mean time to failure (MTTF) values than demonstrated. Further evaluation of the PRISM® models showed how modifying default values could improve the prediction accuracy. The system-level multiplier was also determined to be a major contributor to the difference between PRISM® predictions and field data. Finally, experience data proved valuable in refining the prediction results. The findings from this study provide justification to modify specific modeling factors to improve the predictions for PEMs, and also serve as a baseline to evaluate future alternative prediction methods.

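    MIL-HDBK-217 part-stress models are multiplicative in pi factors; the sketch below shows only that generic structure and how changing the quality factor πQ from 5 to 2 rescales a predicted failure rate and constant-rate MTTF. The base rate and the other factors are placeholders, not handbook values.

```python
def part_failure_rate(lambda_b, pi_factors):
    """Generic MIL-HDBK-217 part-stress form: base failure rate times the
    product of the applicable pi factors (temperature, quality, environment...)."""
    rate = lambda_b
    for factor in pi_factors.values():
        rate *= factor
    return rate

# Placeholder values, not taken from the handbook tables.
base = 0.01e-6                        # failures/hour
common = {"pi_T": 4.0, "pi_E": 2.0}   # temperature and environment factors

for pi_q in (5.0, 2.0):
    lam = part_failure_rate(base, {**common, "pi_Q": pi_q})
    print(f"pi_Q={pi_q}: lambda={lam:.3e} /h, MTTF={1.0 / lam:,.0f} h")
```
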
  • Applying quality tools to reliability: a 12-step Six-sigma process to accelerate reliability growth in product design

    Publication Year: 2003 , Page(s): 562 - 567
    Cited by:  Papers (1)

    This paper describes the application of a Six-sigma based 12-step quality process to the design and development of a new, highly complex commercial product. The process was targeted at an aggressive reliability growth rate that exceeded anything the company had achieved before. The process is described, along with some of the challenges and accommodations required to ensure a successful outcome.

  • Time-to-failure tree

    Publication Year: 2003 , Page(s): 148 - 152
    Cited by:  Papers (7)

    The reliability analysis of critical systems is often performed using fault tree analysis. Fault trees are analyzed using analytic approaches or Monte Carlo simulation. The analytic approaches are limited to a few models and certain kinds of parameter distributions. In contrast, Monte Carlo simulation can be broadly applied; however, it is time-consuming because of the intensive computation involved. In this paper, a new model, called the time-to-failure tree, is presented. Static and dynamic fault trees can be easily transformed into time-to-failure trees. In fact, each time-to-failure tree is a digital circuit, which can be synthesized to a field programmable gate array (FPGA). Therefore, Monte Carlo simulation can be significantly accelerated using FPGAs.

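    The FPGA synthesis itself is not shown here, but the underlying idea, propagating sampled component failure times through the tree with OR gates taking the minimum and AND gates the maximum of their inputs, can be sketched in software for a small hypothetical tree:

```python
import random

def simulate_top_event_time(rng):
    """One Monte Carlo trial of a small 'time-to-failure tree':
    OR gate -> min of input failure times, AND gate -> max."""
    a = rng.expovariate(1e-3)   # component failure times (exponential here,
    b = rng.expovariate(2e-3)   # but any samplable distribution works)
    c = rng.expovariate(5e-4)
    g1 = max(a, b)              # AND gate over A and B
    return min(g1, c)           # OR gate over G1 and C -> top event time

def unreliability(mission_time, n=100_000, seed=3):
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n)
                   if simulate_top_event_time(rng) <= mission_time)
    return failures / n

print(unreliability(1000.0))
```
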
  • A business model for reliability

    Publication Year: 2003 , Page(s): 459 - 463
    Cited by:  Papers (2)

    The ASPIRE Business Model incorporates the essential business features that will enable and foster exceptional reliability as a market differentiator. The key features are: (1) create a win-win scenario from the outset, fostering success in both manufacturer and customer objectives whilst simultaneously working within technical, business and legislative limitations; (2) focus reliability on maximising achievement of customer objectives and, through judicious choice of technical and customer metrics, create market-leading opportunities; (3) ensure that methods and processes used during design and manufacturing gain maximum efficiency: speed of reliability achievement, effectiveness, and cost. These together provide the basis, but exceptional reliability remains dependent on design and manufacturing effort. The model is best implemented during design concept and pre-contractual discussions, using white-board 'brainstorming' techniques to identify new design opportunities and to avoid unnecessary business restrictions.

  • Safety of VLSI designs using VHDL

    Publication Year: 2003 , Page(s): 138 - 142
    Cited by:  Papers (1)

    This paper presents a methodology, associated with a software tool, to generate fail-safe, synthesizable VHDL descriptions from Petri nets or state diagrams. With this philosophy of automatically providing safety to VLSI systems designed in VHDL, designers do not have to include the error detection system, because it is added automatically to the design. The method is explained as a group of sequential steps that transform a system into a fail-safe one. The tool uses a graphical environment to define the Petri net or state diagram. VHDL was chosen because it is a standard widely supported by synthesis tools. The implementation of the circuit, which is valid either for programmable logic or ASICs, is done by other tools that support the VHDL standard. Following this methodology, three design parameters appear: size (consumption), speed, and safety level. Usually, every tool offers optimization only by speed and size. The proposed tool is fully implemented. The VHDL code is synthesizable, and experiments were made comparing unsafe and fail-safe systems in relation to their defining characteristics. Adding safety obviously imposes a heavy penalty in the area occupied by the circuit. Future work should study the combination of other safety mechanisms, including the possibility of establishing a flexible level of safety.

  • Distribution systems reliability-Lakeland Electric case study

    Publication Year: 2003 , Page(s): 546 - 550
    Cited by:  Papers (3)

    The case study presented in the paper combines the distribution substation and the distribution feeders in one analysis to evaluate the reliability of supply at a customer location. This approach provides a more realistic set of reliability indices, because it includes the outage impact of both the distribution feeder(s) and the substation components. The indices can be used to rank needed reliability improvements on a feeder, multiple feeders, and/or a substation. The reliability indices reported in this paper were calculated for the customers served by Palmetto Substation, one of the Lakeland Electric distribution substations. The indices are used in developing reliability initiatives and in ranking various projects based on the improvement in reliability as seen by a customer. The paper discusses both customer-based indices such as SAIFI, SAIDI, CAIDI, and MAIFIe, and energy-based indices such as EUE, which can be used in selecting the best option from the various alternatives available to a planner. Different conclusions are obtained if customer reliability is quantified by modeling alone, without due consideration of the tie support from neighboring feeders. It is expected that this analysis will help Lakeland Electric plan, design, and operate its distribution facilities in a cost-effective manner while meeting its obligations to its customers, city council, and regulators.

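    For reference, the customer-based indices named above follow their standard IEEE Std 1366 definitions; a minimal sketch over a hypothetical set of outage records:

```python
# Minimal sketch of the customer-based indices discussed in the paper,
# computed from hypothetical outage records for one study period.
outages = [
    # (customers interrupted, interruption duration in minutes)
    (1200, 90),
    (300, 45),
    (2500, 120),
]
customers_served = 20_000

customer_interruptions = sum(n for n, _ in outages)
customer_minutes = sum(n * d for n, d in outages)

saifi = customer_interruptions / customers_served   # interruptions per customer served
saidi = customer_minutes / customers_served         # interruption minutes per customer served
caidi = saidi / saifi                               # average minutes per interruption

print(f"SAIFI={saifi:.3f}  SAIDI={saidi:.1f} min  CAIDI={caidi:.1f} min")
```
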
  • An approach for the advanced planning of a reliability demonstration test based on a Bayes procedure

    Publication Year: 2003 , Page(s): 288 - 294
    Cited by:  Papers (1)

    The classical theory for determining sampling plans yields the large sample size necessary to demonstrate product reliability; above all, the sample size increases tremendously if failures have to be taken into account. To use all the information about the product gained through the development process, the application of a Bayes procedure is recommended. In the paper, it is suggested how to generate a prior distribution from fatigue damage calculations, preceding tests, or the analysis of warranty data of a former product. We considered a case where the information about the reliability may not be given for the desired product lifetime but for some other time. Additionally, it was taken into account that prior information may be available from an accelerated test. When prior information is used there is always uncertainty as to what extent the information about the reliability is valid for the actual product conditions; it is obvious that the information from a former product or preceding tests may not be totally transferable to the actual product concerning reliability. In this paper the so-called "decrease-factor" δ is introduced, which artificially reduces the quality level of prior reliability information. A procedure for estimating the decrease-factor is suggested for a case where information about the failure behavior of a former product is utilized; in this context, the FMECA offers a useful foundation. In a synthetic example the reliability demonstration test was planned by using calculation results, results of a preceding test, and knowledge about the failure behavior of a former product. It was shown that the sample size necessary to demonstrate the requirements increases with lower decrease-factors. Compared with the classical method, it was possible to reduce the sample size when prior knowledge is considered; the sample size is reduced by at least 4 test items. For the best case, where the prior information was used in full, no subsequent test would be necessary. However, due to differences in test conditions, environment, and function, the full use of prior information is not recommended. To minimize the risk of failing to meet the product requirements, the decrease-factor should be estimated with values lower than 1.

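    The authors' prior-generation rules and their estimate of the decrease-factor are not reproduced here. The sketch below contrasts the classical zero-failure (success-run) sample size with a beta-binomial version in which prior pseudo-tests are discounted by a factor delta; treating delta as a multiplier on the prior pseudo-tests is an illustrative assumption, not the paper's rule.

```python
import math

def classical_zero_failure_n(r_target, confidence):
    """Classical success-run plan: smallest n with r_target**n <= 1 - C,
    i.e. demonstrating R >= r_target at confidence C with zero failures."""
    return math.ceil(math.log(1.0 - confidence) / math.log(r_target))

def bayes_zero_failure_n(r_target, confidence, prior_successes, delta):
    """Zero-failure plan with a Beta(1 + delta*prior_successes, 1) prior on
    reliability; the posterior after n further successes is Beta(a0 + n, 1),
    for which P(R > r) = 1 - r**(a0 + n).  Discounting the prior pseudo-tests
    by delta is an illustrative assumption."""
    a0 = 1.0 + delta * prior_successes
    n = 0
    while 1.0 - r_target ** (a0 + n) < confidence:
        n += 1
    return n

print(classical_zero_failure_n(0.9, 0.9))                              # -> 22
print(bayes_zero_failure_n(0.9, 0.9, prior_successes=10, delta=0.5))   # -> 16
```
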
  • Failure analysis and replacement strategies of distribution transformers using proportional hazard modeling

    Publication Year: 2003 , Page(s): 523 - 527
    Cited by:  Papers (2)

    The paper presents a graphical method for plotting the mean cumulative repair function (MCRF) for different capacities or types of distribution transformers. The plots provide information about their failure behavior over time. The intensity function graphs obtained using the parameters of a Weibull-process proportional hazard model confirm the results obtained from the nonparametric MCRF graphs. The proportional hazard modeling (PHM) technique is quite helpful for considering the effect of covariates on the failure performance of different transformers over time. The property of proportionality is validated with the help of graphical and analytical methods. The higher values of the intensity function for 100 kVA transformers and rural-environment transformers show their poor performance compared to transformers below 100 kVA and urban-environment transformers, respectively. This information is quite helpful in evaluating maintenance and replacement policies. The normal life of a distribution transformer of 100 kVA or less is considered to be about 25 years. The present study indicates that, in the case of 100 kVA transformers and rural transformers, an earlier replacement is required.

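    The parametric Weibull-process PHM fit is not reproduced here; the nonparametric mean cumulative repair function the paper plots can be estimated with the standard Nelson MCF estimator, sketched below for a small hypothetical fleet.

```python
# Minimal sketch of a nonparametric mean cumulative repair function (Nelson's
# MCF estimator) for a small, hypothetical fleet of transformers.  Each unit
# has a list of repair times (years) and a censoring (observation-end) time.
fleet = [
    {"repairs": [3.0, 7.5, 11.0], "censored": 14.0},
    {"repairs": [5.0],            "censored": 12.0},
    {"repairs": [2.0, 9.0],       "censored": 15.0},
]

event_times = sorted({t for unit in fleet for t in unit["repairs"]})

cum = 0.0
for t in event_times:
    at_risk = sum(1 for unit in fleet if unit["censored"] >= t)   # still observed at t
    repairs = sum(unit["repairs"].count(t) for unit in fleet)
    cum += repairs / at_risk
    print(f"t={t:5.1f} yr  MCF={cum:.3f} repairs/unit")
```
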
  • Collaboration in R&M applications

    Publication Year: 2003 , Page(s): 250 - 254

    Time-to-market durations are shrinking, straining the ability to perform thorough Reliability & Maintainability (R&M) tasks during product development. This paper suggests that collaboration is a more effective model than team-based approaches for successful R&M efforts in these situations. The paper describes how R&M software vendors and the R&M community can enhance collaboration to shorten product development times and enhance the quality of the product.

  • Optimal cost-effective design of parallel systems subject to imperfect fault-coverage

    Publication Year: 2003 , Page(s): 29 - 34
    Cited by:  Papers (2)

    Computer-based systems intended for critical applications are usually designed with sufficient redundancy to be tolerant of errors that may occur. However, under imperfect fault-coverage conditions (i.e., when the system cannot adequately detect, locate, and recover from faults and errors), system failures can result even when adequate redundancy is in place. Because the parallel architecture is a well-known and powerful architecture for improving the reliability of fault-tolerant systems, this paper presents cost-effective design policies for parallel systems subject to imperfect fault-coverage. The policies are designed by considering (1) the cost of components, (2) the failure cost of the system, (3) common-cause failures, and (4) the performance levels of the system. Three kinds of cost functions are formulated, in which the total average cost of the system is based on: (1) system unreliability, (2) failure-time of the system, and (3) total processor-hours. It is shown that the MTTF (mean time to failure) of the system decreases when spares are added beyond a certain limit. Therefore, this paper also presents optimal design policies to maximize the MTTF of these systems. The results of this paper can also be applied to gracefully degradable systems.

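    The authors' cost functions are not reproduced here. The sketch below is a Monte Carlo illustration of the quoted MTTF behavior under a simple assumed coverage model: every component failure except the last must be covered (probability c) for the system to keep running, and any uncovered failure brings the system down immediately; rates, coverage, and spare counts are placeholders.

```python
import random

def mttf_parallel_imperfect_coverage(n, lam, c, trials=60_000, seed=4):
    """Monte Carlo MTTF of an n-unit parallel system with exponential units
    (rate lam).  Each unit failure is covered with probability c; an
    uncovered failure crashes the whole system at that instant."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        times = sorted(rng.expovariate(lam) for _ in range(n))
        system_time = times[-1]            # all units exhausted
        for t in times[:-1]:               # the final failure needs no recovery
            if rng.random() > c:           # uncovered failure
                system_time = t
                break
        total += system_time
    return total / trials

for n in range(1, 9):
    print(n, round(mttf_parallel_imperfect_coverage(n, lam=1e-3, c=0.8)))
```

    With these placeholder numbers the printed MTTF rises over the first few added units and then falls again, which is the effect the abstract describes.
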
  • Use of accelerated life tests on transmission belts for predicting product life, identifying better designs, materials and suppliers

    Publication Year: 2003 , Page(s): 101 - 105
    Cited by:  Patents (1)

    Accelerated life tests (ALTs) are useful in identifying better designs, materials, and component suppliers, and also in predicting component life in a shorter time. Therefore, ALTs are particularly useful during the product design stage. The use of an ALT in determining the proper transmission belt design for an appliance is discussed herein. Meeting the reliability requirement at the specified life under normal operating conditions is the key design parameter. In this regard, a test fixture was designed to accelerate the main failure modes based on previous experience. The common failure mode focused on in this ALT is the breakage of the belt cords (which are impregnated with rubber) due to reverse bending at the idler-tension pulleys. Broken cords may increase the noise level. These are failures in the eyes of the customer and tend to occur long before the catastrophic failures. In this study, transmission belts were run at three different stress levels, which facilitates a better extrapolation of life under normal operation. At the beginning of the tests, a higher stress level was used to compare and determine the better compound and supplier in a shorter time. In the end, these tests and analyses were found to be helpful in determining the better material and also in predicting the life of the transmission belt under normal operating conditions.

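    The belts' stress metric and test data are not given in the abstract. A common way to extrapolate from three elevated stress levels to normal operation is a log-linear (inverse power law) life-stress fit, sketched below with invented characteristic lives.

```python
import math

# Hypothetical ALT summary: characteristic life (hours) observed at three
# elevated stress levels; the stress metric and values are placeholders.
data = [(1.6, 180.0), (1.4, 420.0), (1.2, 1100.0)]   # (stress, life)

# Inverse power law: life = A * stress**(-b)  ->  ln(life) = ln(A) - b*ln(stress)
xs = [math.log(s) for s, _ in data]
ys = [math.log(life) for _, life in data]
n = len(data)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
b = -slope                      # inverse-power-law exponent
ln_a = y_bar + b * x_bar

use_stress = 1.0                # normal operating stress (placeholder)
predicted_life = math.exp(ln_a - b * math.log(use_stress))
print(f"fitted exponent b={b:.2f}, predicted life at use stress: {predicted_life:.0f} h")
```
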
  • Accelerated testing for demonstration of product lifetime reliability

    Publication Year: 2003 , Page(s): 117 - 123
    Cited by:  Papers (1)

    A reliability test is designed to simulate product lifetime usage and expectations. With the assumption that the product reliability demonstrated in test is the product of its reliabilities with respect to the various operational and environmental stresses and of their undetermined interaction, a well designed reliability test accounts for all operational and environmental cumulative exposures to the stresses that the product will encounter in actual field use. To determine the levels and durations of each separate stress to be applied in test, the stresses are assumed to be independent. The stress-independence assumption allows determination of the duration and intensity of each applied environmental or operational stress needed to prove product lifetime reliability with regard to all expected stresses, while the tests are accelerated to allow for a reasonable and cost-effective length of test in that environment. This cannot be accomplished without detailed knowledge of the product's usage profile, sequence of operation, and expected use environments. The synergism of the test sequence is not disregarded, as it may be a factor contributing to lower demonstrated reliability.

  • Environmental screening and relevant accelerated tests for products in an outdoor ground environment

    Publication Year: 2003 , Page(s): 309 - 312
    Cited by:  Papers (1)

    Corrosion is the dominant failure mode of products in underground water pits. The failure mechanism was studied to determine the dominant variables. The major variables that dictate galvanic and pitting corrosion are pH, temperature, and the concentration of the various species in the solution. A field screening procedure was used to determine the concentration of the anions, the pH, and the temperature in different pits. These data show that there is a wide range of pit severity, as indicated by the pH and the total anion concentrations. The two variables of the pit solution, viz. pH and the total anion concentration, can be used to characterize a pit environment and determine material reliability. The field data were used to design a relevant accelerated test using the pit characteristics, viz. anion concentration and temperature, as the accelerating variables. The product application and its reliability requirements determined the duration of the accelerated test. In this study we have shown how to characterize a pit environment, design the relevant accelerated test for product reliability, and use these data for the best selection of materials based on low cost and high reliability. This, in turn, will lead to optimum and cost-effective material use in product design.
