
Reliability and Maintainability, 2004 Annual Symposium - RAMS

Date: 26-29 Jan. 2004


Displaying Results 1 - 25 of 125
  • Reliability prediction based on degradation modeling for systems with multiple degradation measures

    Page(s): 302 - 307

    This paper describes a general modeling and analysis approach for reliability prediction based on degradation modeling, considering multiple degradation measures. Previous research has focused on reliability prediction based on degradation modeling at the component failure mechanism level. In reality, a system may consist of multiple components, or a component may have multiple degradation paths, so it is necessary to consider multiple degradation measures simultaneously. In addition, many research efforts on degradation analysis start from assumptions about the degradation mechanism, whereas in practice the degradation mechanism is often poorly understood, and when degradation data are collected it is often not known which degradation model applies. In this paper, a general analysis procedure is developed. Simulated data were used to illustrate the applicability of the approach. It was verified that, when the multiple degradation measures in a system are correlated, an incorrect independence assumption can underestimate the system reliability.

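    The effect described in the last sentence is easy to reproduce. The sketch below (not from the paper; the distributions, thresholds and numbers are invented) simulates two positively correlated linear degradation paths and compares the series-system reliability computed from the joint simulation with the value obtained by multiplying the marginal reliabilities as if the measures were independent.

```python
# Illustrative Monte Carlo sketch (not from the paper): two linear degradation
# paths with positively correlated random rates. The "system" fails when either
# measure crosses its threshold at the evaluation time.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
t = 10.0                                   # evaluation time (arbitrary units)
thresholds = np.array([15.0, 12.0])        # failure thresholds for the two measures

# Correlated degradation rates (hypothetical means/covariance, correlation ~ 0.67)
mean = [1.0, 0.8]
cov = [[0.09, 0.06],
       [0.06, 0.09]]
rates = rng.multivariate_normal(mean, cov, size=n)
paths = rates * t                          # degradation level of each measure at time t

survive = paths < thresholds               # each measure still below its threshold
r_joint = np.mean(survive.all(axis=1))     # series-system reliability, correlation kept
r_indep = np.prod(survive.mean(axis=0))    # product of marginals (independence assumed)

print(f"Reliability with correlation:      {r_joint:.4f}")
print(f"Reliability assuming independence: {r_indep:.4f}  (typically lower)")
```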
  • A new approach to gathering failure behavior information about mechanical components based on expert knowledge

    Page(s): 90 - 95

    This paper presents a way in which the reliability knowledge of maintenance employees can be used to obtain reliability data suitable for simulation or further calculation. The presented methodology opens up a large source of information. Using employee knowledge entails processing imprecise data, because the knowledge is not available in the form of failure-time plots but in the form of verbal expressions and case-specific information. The paper deals with the data format and precision of such expert-based information, and presents a methodology that transforms the information supplied by employees (referred to here as experts) into reliability data. The paper also examines the influence of imprecision in expert information and shows how to handle this imprecision in order to obtain applicable results. Finally, the paper provides a means of estimating the deviation of expert information from reality as a function of the expert statement itself, so that the trustworthiness of expert information can be assessed.

  • Server class disk drives: how reliable are they?

    Page(s): 151 - 156

    Hard disk drive manufacturers frequently set high expectations for drive reliability based on their specifications, test results and "global returns database". However, the field reliability experienced by customers often does not match those expectations. Actual disk drive reliability may differ greatly from the manufacturer's specification and from customer to customer because it depends on a variety of factors, some of which are completely independent of the drive design or manufacturing process. The thermal environment and duty cycle, inherent characteristics of the drive itself, the architecture and logic of the system in which it is used, and the data collection and analysis process itself are all sources of significant variability. Together, these can create a range in mean time between failures (MTBF), if MTBF is indeed the correct metric, of 350,000 to 1,200,000 hours over the lifetime of a population of server class disk drives. This paper elaborates on these four causes of variability and explains how each is responsible for a possible gap between expected and measured drive reliability.

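    As a point of reference for the MTBF figures quoted above, a fleet MTBF estimate is usually formed from accumulated device-hours and observed failures, with a chi-square confidence interval around it. The sketch below uses standard textbook formulas and invented fleet numbers; it is not drawn from the paper's data.

```python
# Back-of-the-envelope sketch (not from the paper): point estimate and a
# two-sided chi-square confidence interval for MTBF from a time-truncated
# fleet observation. All numbers are made up for illustration.
from scipy.stats import chi2

device_hours = 10_000 * 8760      # 10,000 drives observed for one year
failures = 100

mtbf_hat = device_hours / failures
alpha = 0.10                      # 90% two-sided confidence
lower = 2 * device_hours / chi2.ppf(1 - alpha / 2, 2 * (failures + 1))
upper = 2 * device_hours / chi2.ppf(alpha / 2, 2 * failures)
print(f"MTBF point estimate: {mtbf_hat:,.0f} h  (90% CI: {lower:,.0f} - {upper:,.0f} h)")
```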
  • Behavioural study of the general renewal process

    Page(s): 237 - 242

    This paper is intended to provide insights into the application of the GRP model. A repairable-system model for realistic maintenance, the so-called general renewal process (GRP), was introduced by allowing the goodness of repairs to be modelled from as-good-as-new (i.e. the ordinary renewal process, ORP) to same-as-old (i.e. the NHPP). This is sometimes referred to as the better-than-old-but-worse-than-new repair assumption. Modelling a realistic repair activity depends on a number of factors, including the overall age of the component, the number of repairs, the effectiveness of the repair, the skill of the technicians, etc. The objective of this paper is to provide general insights into the behaviour and application of the GRP model. It was observed that at a low number of renewals there is little difference between the two models; however, as the number of renewals increases, the difference between the two models becomes significant due to the variation in their underlying virtual age equations.

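    The GRP is commonly formulated through Kijima-type virtual age equations, with a repair-effectiveness parameter q interpolating between as-good-as-new (q = 0, ORP) and same-as-old (q = 1, NHPP-like) repair. The sketch below shows the two standard virtual-age recursions under that assumption; the interarrival times are invented and the paper's exact formulation may differ.

```python
# Sketch of the Kijima virtual-age recursions commonly used in GRP models
# (assumed background, not code from the paper).
def virtual_age_kijima1(interarrival_times, q):
    """Kijima Type I: each repair removes damage only from the last interval."""
    v = 0.0
    for x in interarrival_times:
        v = v + q * x
    return v

def virtual_age_kijima2(interarrival_times, q):
    """Kijima Type II: each repair rejuvenates the whole accumulated age."""
    v = 0.0
    for x in interarrival_times:
        v = q * (v + x)
    return v

times = [100.0, 80.0, 60.0, 50.0]          # hypothetical times between failures
for q in (0.0, 0.5, 1.0):
    print(f"q={q}: Kijima I age={virtual_age_kijima1(times, q):6.1f}, "
          f"Kijima II age={virtual_age_kijima2(times, q):6.1f}")
```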
  • An alternate analysis of a 2x2 DOE using a contour plot of Weibull joint parameter confidence regions

    Page(s): 335 - 339

    Before beginning an accelerated test program for implantable cardiac defibrillator/pacemaker leads, an initial phase of testing was performed to determine whether test stations and operators could be used interchangeably. DOE and Monte Carlo simulations were used to set up the 2×2 test matrix and determine whether the test would produce sufficient data. The Weibull analysis methods, including the contour plot, proved useful for analyzing the fatigue failure data as well as providing some insight into the analysis methods. The test results demonstrated no difference between the four runs at the 85% confidence level; however, one run was subtly different from the others. A subsequent investigation of the combined data indicated that the three-parameter Weibull distribution is more suitable than the two-parameter Weibull distribution. The two-parameter Weibull must be used for the individual runs, due to the smaller sample sizes, even though the three-parameter distribution may represent the best fit to the total population. The observed run-to-run variation may partially be due to the choice of the simpler two-parameter Weibull distribution. It is concluded that the effects of test station and operator on the Weibull parameters β and η are not significant at the 85% confidence level (this conclusion also holds for higher confidence levels because the intersecting area increases as the confidence level increases). It is also concluded that the three-parameter Weibull model yields a better fit than the two-parameter model for this particular lead specimen and test method, when the sample size is sufficient.

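    A minimal illustration of the two-parameter versus three-parameter Weibull comparison discussed above, using scipy maximum-likelihood fits on synthetic fatigue-life data (the data, parameter values and comparison by log-likelihood are my own choices, not the paper's analysis):

```python
# Hedged sketch (not the authors' code): fit two- and three-parameter Weibull
# distributions to synthetic life data and compare log-likelihoods.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
data = weibull_min.rvs(c=2.0, loc=5.0, scale=30.0, size=40, random_state=rng)

# Two-parameter fit (location fixed at zero)
c2, loc2, scale2 = weibull_min.fit(data, floc=0)
ll2 = np.sum(weibull_min.logpdf(data, c2, loc2, scale2))

# Three-parameter fit (location free)
c3, loc3, scale3 = weibull_min.fit(data)
ll3 = np.sum(weibull_min.logpdf(data, c3, loc3, scale3))

print(f"2-p Weibull: beta={c2:.2f}, eta={scale2:.1f}, logL={ll2:.1f}")
print(f"3-p Weibull: beta={c3:.2f}, eta={scale3:.1f}, gamma={loc3:.1f}, logL={ll3:.1f}")
```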
  • Extended fault modeling used in the space shuttle PRA

    Page(s): 382 - 385

    A probabilistic risk assessment (PRA) has been completed for the space shuttle with NASA sponsorship and involvement. This space shuttle PRA is an advancement over past shuttle PRAs in the technical approaches used and in the direct involvement of the NASA centers and prime contractors. One of the technical advancements is the extended fault modeling technique used. A significant portion of the data collected by NASA for the space shuttle consists of faults, which are not yet failures but have the potential to become failures if not corrected. This fault data consists of leaks, cracks, material anomalies, and debonding faults. Detailed, quantitative fault models were developed for the space shuttle PRA that assess the severity of the fault, detection effectiveness, recurrence control effectiveness, and mission-initiation potential. Each of these attributes was transformed into a quantitative weight to provide a systematic estimate of the probability of the fault becoming a failure in a mission. Using the methodology developed, mission failure probabilities were estimated from collected fault data. The methodology is an application of counter-factual theory and defect modeling that produces consistent estimates of failure rates from fault rates. Software was developed to analyze all the relevant fault data collected for given types of faults in given systems. The software allowed the PRA to be linked to NASA's fault databases, which also allows the PRA to be updated as new fault data are collected. This fault modeling and its implementation with FRAS were an important part of the space shuttle PRA.

  • Reliability importance of components in a complex system

    Page(s): 6 - 11

    Reliability importance indices are valuable for establishing the direction and prioritization of actions in an upgrading effort (reliability improvement) in system design, or for suggesting the most efficient way to operate and maintain a system. Existing indices are calculated through analytical approaches, and applying them to complex repairable systems may be intractable. Complex repairable systems are increasingly common, and the difficulties of obtaining analytical system reliability and availability solutions are well known. To overcome this intractability, discrete event simulation, through the use of reliability block diagrams (RBD), is often used to obtain numerical system reliability characteristics. Traditional use of simulation results provides no easy way to compute reliability importance indices. To bridge this gap, several new reliability importance indices are proposed and defined in this paper. These indices can be calculated directly from the simulation results, and their limiting values are the traditional reliability importance indices. Examples are provided to illustrate the application of the proposed importance indices.

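    For orientation, one traditional index that simulation-based indices are often compared against is the Birnbaum importance, I_B(i) = R(system | component i works) − R(system | component i failed). The sketch below estimates it by Monte Carlo for a small series-parallel RBD; the structure and component reliabilities are invented, and the paper's own indices are defined differently.

```python
# Illustrative sketch (assumptions mine, not the paper's indices): Monte Carlo
# estimate of Birnbaum importance for each component of a small RBD.
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.95, 0.90, 0.85])            # component reliabilities

def system_up(states):
    # RBD: component 0 in series with the parallel pair (1, 2)
    return states[:, 0] & (states[:, 1] | states[:, 2])

n = 200_000
states = rng.random((n, 3)) < p
for i in range(3):
    s_up = states.copy();  s_up[:, i] = True     # force component i to work
    s_dn = states.copy();  s_dn[:, i] = False    # force component i to fail
    birnbaum = system_up(s_up).mean() - system_up(s_dn).mean()
    print(f"component {i}: Birnbaum importance ~ {birnbaum:.3f}")
```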
  • Cost constrained reliability maximization of software systems

    Page(s): 195 - 200

    Architecture-based techniques have been widely used for the reliability assessment of software systems. However, these techniques also enable the exploration of cost/reliability tradeoffs and the evaluation of a set of competing architectural alternatives. This paper presents an optimization framework based on an evolutionary algorithm (EA) which can be used to explore cost/reliability tradeoffs based on software architecture. An evolutionary algorithm was chosen as the optimization technique because of the discontinuous search space, the usually nonlinear but monotonic relation between the cost and reliability of the individual modules comprising the software, and the complex software architectures that give rise to nonlinear dependencies between individual module reliabilities and the overall application reliability. We illustrate the use of the EA with a case study in which its results are compared with those obtained from exhaustive enumeration. A comparison of the time taken by the EA to generate an optimal solution with the time taken by exhaustive search indicates that the EA can obtain optimal designs with much greater efficiency than exhaustive search.

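    A toy version of the kind of search involved (my own formulation, not the paper's framework): pick one alternative per module to maximise series-system reliability under a cost budget with a minimal mutation-only evolutionary loop, then compare against exhaustive enumeration.

```python
# Minimal sketch of a cost-constrained reliability maximisation (all alternatives,
# costs and the budget are invented; the paper's EA and architecture model differ).
import itertools, random

random.seed(3)
# (reliability, cost) alternatives for each of three hypothetical modules
alts = [
    [(0.90, 1.0), (0.95, 2.5), (0.99, 6.0)],
    [(0.92, 1.5), (0.97, 3.0)],
    [(0.88, 0.5), (0.94, 2.0), (0.98, 5.0)],
]
budget = 8.0

def fitness(sel):
    rel, cost = 1.0, 0.0
    for m, k in enumerate(sel):
        r, c = alts[m][k]
        rel *= r
        cost += c
    return rel if cost <= budget else 0.0    # infeasible designs get zero fitness

pop = [[random.randrange(len(a)) for a in alts] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for parent in parents:
        child = parent[:]
        m = random.randrange(len(alts))
        child[m] = random.randrange(len(alts[m]))   # point mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
exhaustive = max(itertools.product(*[range(len(a)) for a in alts]),
                 key=lambda s: fitness(list(s)))
print("EA best:        ", best, round(fitness(best), 4))
print("Exhaustive best:", list(exhaustive), round(fitness(list(exhaustive)), 4))
```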
  • Intelligent FMEA based on model FIORN

    Page(s): 386 - 390

    FMEA automation and intelligence are studied as an effective way to improve FMEA in product development. First, we construct an intelligent FMEA framework: an intelligent failure effect inference mechanism based on a model of the target system that uses expert experience and considers the input/output relationships between failure modes of products in the system. This framework comprises three parts: a failure mode analyzer, a failure effect analyzer and an FMEA report creator. Failure effect analysis is based on system models. We present two system modeling methods: a system hierarchical model based on expert knowledge, and a fault input/output relationship net (FIORN) model which describes the relationships among products belonging to the same level in the system. The latter is based on the relationships between failures and can analyse correlated failures and common cause failures. An inference mechanism is presented based on these two models. Lastly, a prototype software tool, iFMEA (intelligent FMEA), is developed. The intelligent FMEA technique is applied to the analysis of an aircraft's main gear system, through which the detailed steps of the method are described. The system modeling method and inference mechanism are validated by this example.

  • Software reliability estimations/projections, cumulative & instantaneous

    Page(s): 178 - 183

    Musa's methods (J.D. Musa et al., 1987) for software development and test planning, combined with Duane's learning curve approach (E.O. Codier, 1968) for hardware reliability growth testing, provide an efficient means of estimating and demonstrating reliability requirements. This paper reviews the analyses of Trachtenberg (M. Trachtenberg, 1985) and Downs (T. Downs, 1985) that provide a foundation for Musa's basic (linear) model. I have used Musa's basic model, in combination with an approach similar to that used by Duane and Codier to derive a formula for the instantaneous failure rate of hardware, to develop formulas for estimating the instantaneous failure rate of software. These calculations show significant correlation with interval estimates and provide an efficient method for showing the achievement of goal reliability for software without a separate demonstration test.

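    The hardware-side relationship the abstract builds on is the standard Duane/Crow-AMSAA result that, when the cumulative failure rate follows K·t^(−α), the instantaneous failure rate is (1 − α) times the cumulative rate. The sketch below fits that model to invented failure times; it illustrates only this background relationship, not the author's software formulas.

```python
# Duane-style reliability growth sketch (textbook relationship, invented data):
# cumulative failure rate lambda_c(t) = K * t**(-alpha),
# instantaneous rate     lambda_i(t) = (1 - alpha) * lambda_c(t).
import numpy as np

failure_times = np.array([20, 55, 110, 200, 340, 520, 780, 1100.0])  # hypothetical
n = np.arange(1, len(failure_times) + 1)
lam_cum = n / failure_times                          # cumulative failure rate at each failure

# Fit log(lambda_c) = log(K) - alpha * log(t)
slope, intercept = np.polyfit(np.log(failure_times), np.log(lam_cum), 1)
alpha, K = -slope, np.exp(intercept)

t = failure_times[-1]
lam_inst = (1 - alpha) * K * t ** (-alpha)
print(f"alpha = {alpha:.2f}, instantaneous failure rate at t={t:.0f}: {lam_inst:.5f}")
print(f"instantaneous MTBF ~ {1 / lam_inst:.0f}")
```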
  • Probabilistic structural analysis of ultra efficient engine technology ceramic matrix composite combustor liners

    Page(s): 320 - 323

    A probabilistic structural analysis of NASA's ultra efficient engine technology (UEET) ceramic matrix composite (CMC) combustor liners has been completed using the NESTEM code. The purpose was to identify the maximum stress locations on the inner and outer liners and to perform a probabilistic structural analysis at these locations for given thermal loadings to determine the probability of failure there. The probabilistic structural analysis included quantifying the influence of uncertainties in material stiffness properties and the coefficient of thermal expansion. Results of the analysis indicate that the circumferential component of stress was the most severe stress component and that the inner liner was more likely to fail than the outer liner. Tests of the combustor liners by General Electric Aerospace Engines (GEAE) qualitatively support the results of this analysis.

  • Risk forecasting using heritage-based surrogate data

    Page(s): 628 - 633

    While an individual failure analysis approach has been recommended for the space shuttle program [Gehman, H.W., Jr., et al., August 2003], other more heuristic and far simpler approaches have also been used in the past to provide useful information to decision-makers when failure data are scant. This paper provides an example of how visual mapping of heritage data might be used to discover problems in complex systems before they lead to failure, using the Saturn launcher family as an example. An effort will be undertaken to further analyze the available data in an attempt to answer the question: "What would have been the risk of future launches if the Saturn launcher had continued in service?" The motivation is to show that the use of heritage data, when properly adjusted for growth and combined with scant data on the program under investigation, produces estimates consistent with those obtained from properly analyzed precursor data. Further, an investigation was conducted as to whether heritage and precursor studies can provide considerably better insight than scant failure data alone. On the basis of the findings, it is suggested that heritage and precursor estimates may be more realistic and more consistent with launcher history than the more optimistic estimates produced by traditional bottom-up approaches.

  • Effect of element initialization in synchronous networked control system to control quality

    Page(s): 135 - 140

    This paper presents some results on the modeling and analysis of networked control systems (NCS). The problem analyzed is the design of control systems whose elements are interconnected by a communication network. Coloured Petri nets and the Design/CPN software tool were chosen for modeling and simulating the NCS. Simulation results are presented for a simple tank system; within the wider research activity, a cascade system with two connected tanks is also studied. It is shown how the performance of an NCS can be improved by modifying the control strategy: the simulation results indicate that the case with TEE initialization is the most robust, i.e. the influence of random delays on control quality is the smallest.

  • System reliability assessment using covariate theory

    Page(s): 18 - 24

    A method is demonstrated that utilizes covariate theory to generate a multi-response component failure distribution as a function of pertinent operational parameters. Where traditional covariate theory uses actual measured life data, a modified approach is used herein to utilize life values generated by computer simulation models. The result is a simulation-based component life distribution function in terms of time and covariate parameters for each failure response. A multivariate joint probability covariate model is proposed by combining the covariate marginal failure distributions with the Nataf transformation approach. Evaluation of the joint probability model produced significant improvement in joint probability predictions as compared to the independent series event approach. The proposed methods are executed for a nominal aircraft engine system to demonstrate the assessment of multi-response system reliability driven by a dual mode turbine blade component failure scenario as a function of operational parameters.

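    The core of a Nataf-style joint model is combining marginal failure probabilities through correlated standard-normal variables (a Gaussian copula). The sketch below contrasts the resulting joint "either failure mode occurs" probability with the independent series-event approximation; the probabilities and correlation are assumed values, not results from the paper.

```python
# Rough sketch of the idea (my simplification, not the paper's model): combine two
# marginal failure probabilities through a Gaussian copula and compare the joint
# series-event probability with the independence assumption.
from scipy.stats import norm, multivariate_normal

p1, p2, rho = 0.02, 0.05, 0.7               # marginal failure probs and assumed correlation

# Thresholds in standard-normal space corresponding to the marginals
z1, z2 = norm.ppf(p1), norm.ppf(p2)
cov = [[1.0, rho], [rho, 1.0]]
p_both = multivariate_normal(mean=[0, 0], cov=cov).cdf([z1, z2])

p_series_joint = p1 + p2 - p_both            # P(mode 1 or mode 2 fails), correlated
p_series_indep = 1 - (1 - p1) * (1 - p2)     # independent series events
print(f"joint (correlated): {p_series_joint:.4f}   independent: {p_series_indep:.4f}")
```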
  • Driving the feedback loop reliability and safety in the full life cycle

    Page(s): 61 - 67

    This paper describes a cohesive, integrated full life cycle product reliability and safety management process and an electronic database tool that facilitates the process. The system a) collects field failure data; b) tracks and drives corrective actions; c) pulls the lessons learned from the corrective actions; d) places those lessons learned in the proactive analysis tools used in product development; e) drives product development to use the proactive analysis tools; f) keeps a library of proactive analyses to act as guides for future development; g) provides the proactive analysis for direct use by a root cause team working to analyze and correct a field failure; and h) tracks and drives all activities, allowing instant summary "dashboards" or "scorecards" to be created. Proactive and reactive records are stored together in a single database system, allowing real-time feedback: reactive case resolution teams can tap into the development analyses, and development teams can incorporate all relevant lessons learned from reactive case resolutions. All tools are attached to the database to drive consistent usage and facilitate the integration of analysis libraries. All activities are mapped to a business process with pre-specified milestones and tracked to target dates, making this a real-time product reliability and safety management tool. Each record instantly becomes an ISO 9001 controlled quality record upon closure. A record tree structure allows cases/records to follow the configuration management of the product. Together, these features allow the user community to achieve a product safety and reliability business management system over the full life cycle of a product. After two years of implementation and continuous improvement, a fundamental truth has emerged: the process will be sustainable and the database tool will achieve long-term success only if users feel that their personal workflow is facilitated by the system.

  • Software safety analysis: using the entire risk analysis toolkit

    Page(s): 272 - 279

    When an accident occurs, it is common to attribute the accident to a failure in the system. Therefore, precautions must be taken to design the system to provide safeguards that support the system even when failures occur. The problem, however, is that accidents also occur where there is no failure in the system (i.e., the software, hardware, and humans "work" as they are supposed to); the flaw lies in a design oversight for specific high-risk situations. It is up to the decision maker to: (a) ensure that adequate design and safety checks have been performed before the system is put into operation; (b) ensure that a comprehensive risk analysis is conducted that examines both design element malfunctions and design oversights to determine the loss sequences; and (c) be satisfied that the loss sequences are understood with adequate confidence that the system risk is at or below the risk acceptance criteria.

  • An extended reliability growth model for managing and assessing corrective actions

    Page(s): 73 - 80

    The most widely used traditional reliability growth tracking model and reliability growth projection model are both included as IEC International Standard and US ANSI National Standard models. These traditional models address reliability growth based on failure modes surfaced during test. With the tracking model, all corrective actions are incorporated during test; this is called test-fix-test. With the projection model, all corrective actions are delayed until the end of test; this is called test-find-test. However, the most common approach in development-testing programs incorporates some corrective actions during testing and delays other fixes to the end of test, that is, test-fix-find-test. This paper presents an extended model that addresses this practical situation and allows for preemptive corrective actions.

  • Reliability and robustness assessment of diagnostic systems from warranty data

    Page(s): 146 - 150

    Diagnostic systems are software-intensive built-in-test systems which detect, isolate and indicate the failures of prime systems. The use of diagnostic systems reduces the losses due to failures of the prime systems and facilitates subsequent correct repairs; they have therefore found extensive application in industry. Without loss of generality, this paper uses the on-board diagnostic systems of automobiles as an illustrative example. A failed diagnostic system generates an α or a β error: an α error incurs unnecessary warranty costs for manufacturers, while a β error causes potential losses to customers. Therefore, the reliability and robustness of diagnostic systems are important to both manufacturers and customers. This paper presents a method for assessing the reliability and robustness of diagnostic systems using warranty data. We present definitions of the robustness and reliability of diagnostic systems, and formulae for estimating α, β and reliability. To utilize warranty data for the assessment, we describe the two-dimensional (time-in-service and mileage) warranty censoring mechanism, model the reliability function of the prime systems, and devise warranty data mining strategies. The impact of α error on warranty cost is evaluated. Fault tree analyses for α and β errors are performed to identify ways of improving reliability and robustness. The method is applied to assess the reliability and robustness of an automobile on-board diagnostic system.

  • Use of PSA tools and techniques in life cycle management of maintenance and operation of complex system

    Page(s): 492 - 499

    This paper briefly discusses the use of probabilistic safety analysis for life cycle optimization of maintenance and operation. The focus is on developing a time-dependent model for the conditional failure intensity of a component, with the effects of maintenance (major overhauls) incorporated, that can be used in fault tree evaluations simulating various points in time, past or future. Likelihood functions and the Bayesian inference process for estimating the model's parameters from operational history records are described for normally operating and periodically tested standby components. Finally, examples of Bayesian inference and the resulting reliability predictions are provided for each of the two types of components, based on assumed operational history records, in order to demonstrate that the model produces meaningful results.

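    As background on the Bayesian step, a sketch of a conjugate gamma-Poisson update for a constant failure rate is given below. The paper's likelihoods (with maintenance effects and time dependence) are more involved; the prior and observation numbers here are invented.

```python
# Conjugate gamma-Poisson Bayesian update for a failure rate (standard scheme
# used only for illustration; not the paper's time-dependent model).
prior_alpha, prior_beta = 2.0, 1000.0      # prior evidence: ~2 failures per 1000 h
observed_failures, observed_hours = 3, 2500.0

post_alpha = prior_alpha + observed_failures
post_beta = prior_beta + observed_hours
print(f"posterior mean failure rate: {post_alpha / post_beta:.5f} per hour")
```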
  • Variation mode and effect analysis

    Page(s): 364 - 369

    In this paper we introduce an engineering method, variation mode and effect analysis (VMEA), developed to systematically look for noise factors affecting key product characteristics (KPCs) early in product development. Conducted on a systematic basis, VMEA aims to identify and prioritize the noise factors that contribute significantly to the variability of KPCs and might yield unwanted consequences with respect to safety, compliance with governmental regulations, and functional requirements. As a result of the analysis, a variation risk priority number (VRPN) is calculated which directs attention to areas where reasonably anticipated variation might be detrimental.

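    One common probabilistic way to form a variation risk priority number is to approximate each noise factor's contribution to a KPC's variance as (sensitivity × standard deviation)² and rank factors by that contribution. The sketch below follows that convention with invented factors and numbers; the paper's own VRPN definition may differ.

```python
# Sketch of a sensitivity-times-sigma ranking (my assumption of a typical VRPN
# construction; all factor names and values are hypothetical).
noise_factors = {
    # name: (sensitivity of KPC to the factor, factor standard deviation)
    "ambient temperature": (0.8, 3.0),
    "material batch":      (2.0, 0.5),
    "assembly clearance":  (5.0, 0.4),
}

contributions = {k: (s * sd) ** 2 for k, (s, sd) in noise_factors.items()}
vrpn_total = sum(contributions.values())
for name, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s}  contribution {c:6.2f}  ({100 * c / vrpn_total:.0f}% of total)")
```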
  • Improving engine control reliability through software optimization

    Page(s): 634 - 640

    This paper discusses the optimization of a software control strategy to eliminate "hitching" and "ringing" in a diesel engine powertrain. Slow, high-amplitude oscillation of the entire vehicle powertrain at a steady pedal position at idle is called "ringing", and similar behavior under cruise-control conditions is called "hitching". The intermittent nature of these conditions posed a particular challenge in arriving at proper design alternatives. A zero-point-proportional dynamic S/N ratio was used to quantify vibration and tracking accuracy under six driving conditions, which represented noise factors. An L18 orthogonal array explored combinations of six software control factors associated with controlling fuel delivery to the engine. The result was an improvement of between 4 and 10 dB in vibration reduction, resulting in the virtual elimination of the hitching condition. As a result of this effort, a reliability problem of 12 repairs per 1000 vehicles (an eight-million-dollar warranty issue) was eliminated. The robust design methodology developed in this application may be used in a variety of applications to optimize similar feedback control strategies.

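    The zero-point-proportional dynamic S/N ratio mentioned above is conventionally computed from a regression through the origin of the response on the signal, with S/N = 10·log10(β²/σ²) in its simplest form. The sketch below uses that simplified form with invented signal/response data; it is not the paper's data or its exact Taguchi bookkeeping.

```python
# Zero-point-proportional dynamic S/N ratio, simplified form (assumed background,
# invented data): fit y = beta * M through the origin, then S/N = 10*log10(beta^2 / MSE).
import numpy as np

M = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # signal levels (arbitrary units)
y = np.array([1.1, 2.3, 2.9, 4.2, 5.4])          # measured response

beta = np.sum(M * y) / np.sum(M * M)              # least squares through the origin
mse = np.sum((y - beta * M) ** 2) / (len(M) - 1)  # residual variance
sn = 10 * np.log10(beta ** 2 / mse)
print(f"beta = {beta:.3f}, S/N = {sn:.1f} dB")
```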
  • An efficient factoring algorithm for computing the failure-frequencies of telecommunications networks

    Page(s): 110 - 115

    This paper proposes an efficient algorithm for computing the failure frequencies of telecommunications networks. The proposed method is based on a special modification that transforms a formula for computing reliability into a formula that also gives the failure frequency. Of the many reliability algorithms that have been produced, the factoring algorithm has been shown to be the most efficient; however, it had not previously been transformed in this way, because its graph operations, contraction and deletion of a link, produce complicated expressions in the failure-frequency case. A new matrix-style decomposition formula derived in this paper allows us to transform the factoring algorithm into one that gives the failure frequency as well as the reliability. Numerical examples show that the new algorithm computes both measures simultaneously with high efficiency.

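    For context, the classical factoring recursion for two-terminal network reliability, which the paper extends to failure frequency, is R(G) = p_e·R(G with e contracted) + (1 − p_e)·R(G with e deleted). The toy implementation below handles reliability only, on an invented bridge network; it is exponential-time and purely illustrative, not the paper's algorithm.

```python
# Classical factoring (contraction/deletion) for two-terminal reliability with
# independent edge failures and perfect nodes. Toy sketch, not the paper's method.
def reliability(edges, s, t):
    """edges: list of (u, v, p). Returns prob. that s and t are connected."""
    if s == t:
        return 1.0
    if not edges:
        return 0.0
    (u, v, p), rest = edges[0], edges[1:]
    # Delete the edge
    r_del = reliability(rest, s, t)
    # Contract the edge: merge node v into node u everywhere
    merged = [(u if a == v else a, u if b == v else b, q) for a, b, q in rest]
    merged = [(a, b, q) for a, b, q in merged if a != b]   # drop self-loops
    r_con = reliability(merged, u if s == v else s, u if t == v else t)
    return p * r_con + (1 - p) * r_del

# Bridge network example (hypothetical edge reliabilities)
edges = [("s", "a", 0.9), ("s", "b", 0.9), ("a", "b", 0.8),
         ("a", "t", 0.9), ("b", "t", 0.9)]
print(f"two-terminal reliability: {reliability(edges, 's', 't'):.4f}")
```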
  • Jackson networks and Markov processes for resource allocation modeling

    Page(s): 261 - 265

    Measuring the productivity of engineers is a difficult task in software engineering. The usual technique for evaluating the quality and productivity of engineers during software development is for supervisors and managers to monitor the engineers and keep track of their work activities every day. Managers may rely on their own judgment, which can be unfair and incorrect, to evaluate the performance of engineers. We propose two new models, an encouraged fault repair model and a discouraged fault arrival model, to address these issues. A closed form derived from the encouraged fault repair model is used to measure the productivity of engineers, and a closed form derived from the discouraged fault arrival model is used to evaluate the performance of testers. The limitation that the fault repair rate is a constant [Luong 2001] is removed in the new engineering resource configuration model, which is a Jackson network model with a Markov chain configuration. Factors that make the workforce available or unavailable are included in the model.

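    The building block of a Jackson network is the M/M/1 queue. A minimal sketch of its standard steady-state metrics is given below (textbook formulas with invented rates); the paper's encouraged-repair and discouraged-arrival models modify these basic dynamics.

```python
# Standard M/M/1 steady-state metrics (textbook formulas, illustrative rates only).
def mm1_metrics(lam, mu):
    rho = lam / mu                       # server utilisation (requires lam < mu)
    L = rho / (1 - rho)                  # mean number in the system
    W = 1 / (mu - lam)                   # mean time in the system (Little's law)
    return rho, L, W

# e.g. faults arriving at 3/day and repaired at 4/day by one engineer (hypothetical)
rho, L, W = mm1_metrics(3.0, 4.0)
print(f"utilisation={rho:.2f}, mean faults in system={L:.2f}, mean time in system={W:.2f} days")
```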
  • Marketing with reliability and maintainability: the business case for hiring a reliability engineer when there are no R&M requirements

    Page(s): 480 - 484

    Even in the absence of customer-imposed reliability requirements, market-driven requirements can be established. The need to establish these requirements, and to demonstrate that they are met, necessitates a professional reliability engineering function. Tying this function to the sales and marketing departments, while unorthodox, can provide access to information and straightforward communication channels for influencing design. The business case is based on the increased sales generated by a product that lives up to its marketing hype.

  • Accelerated tests to determine the reliability of the cable-encapsulant interface

    Page(s): 330 - 334

    Encapsulants are mainly used with the cable in various outdoor products to act as barriers against the operating environment. In some applications, the encapsulant also provides strain relief for the cable. The reliability of the cable-encapsulant interface is critical for product functionality as well as for long-term reliability. The reliability of this interface depends on the cable and encapsulant chemistry, their interaction, process conditions and other properties such as the glass transition temperature (Tg), hardness and coefficient of thermal expansion (CTE). This reliability is usually determined as part of the final product tests, which is very time-consuming and can lead to an appreciable loss of design time, especially when failures are discovered at the end. This study has developed quick screening tests which can optimize the process conditions and reasonably determine the reliability of the cable-encapsulant interface without testing the entire product. Failure at this interface is mainly due to cracking of the cable, and the failure mechanism is influenced by the exothermic reaction, cure temperature, hardness and adhesive properties of the encapsulant. The results of these tests have also been correlated with conventional accelerated product tests, such as ALT and 85/85, to determine their product applicability. These tests can therefore be used as a screening tool for the various cable-potting combinations at the start of the design, reducing design time while achieving the required reliability and optimum process conditions.

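    Accelerated tests such as the 85/85 test mentioned above are commonly related to use conditions through an Arrhenius acceleration factor. The sketch below computes that generic factor with an assumed activation energy; it is background on accelerated testing, not the study's screening procedure.

```python
# Generic Arrhenius acceleration factor (standard relation, assumed parameters):
# AF = exp(Ea/k * (1/T_use - 1/T_stress)), temperatures in kelvin.
import math

Ea = 0.7            # activation energy in eV (assumed)
k = 8.617e-5        # Boltzmann constant, eV/K
T_use, T_stress = 273.15 + 40, 273.15 + 85
af = math.exp(Ea / k * (1 / T_use - 1 / T_stress))
print(f"acceleration factor ~ {af:.1f}")
```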