
Annual Reliability and Maintainability Symposium, 2003

Date: 27-30 Jan. 2003


Displaying Results 1 - 25 of 102
  • Annual Reliability and Maintainability Symposium. 2003 Proceedings (Cat. No.03CH37415)

    Publication Year: 2003
    PDF (7896 KB)

    The following topics are dealt with: reliability; maintainability; software reliability; product design; product assurance; repairable systems modeling; accelerated life testing; safety; risk management; probabilistic risk assessment; reliability prediction; life cycle costing; reliability centered maintenance; reliability and maintainability tools application; Bayesian methods; aging; modeling; optimisation; fault tree analysis; equipment maintenance optimisation; reliability statistical methods; power systems; quality management/six sigma; and lessons learned.

  • Author index

    Publication Year: 2003 , Page(s): 613 - 614
    PDF (287 KB)
    Freely Available from IEEE
  • Key Word Index

    Publication Year: 2003 , Page(s): 615 - 616
    PDF (199 KB)
    Freely Available from IEEE
  • An approach for the advanced planning of a reliability demonstration test based on a Bayes procedure

    Publication Year: 2003 , Page(s): 288 - 294
    Cited by:  Papers (1)
    PDF (336 KB) | HTML

    The classical theory for determining sampling plans yields a large sample size for demonstrating product reliability, and the sample size increases dramatically if failures have to be taken into account. To use all information about the product gained during the development process, the application of a Bayes procedure is recommended. The paper suggests how to generate a prior distribution from fatigue damage calculations, preceding tests, or the analysis of warranty data for a former product. A case is considered where the information about reliability is not given for the desired product lifetime but for some other time, and prior information from an accelerated test is also taken into account. When prior information is used there is always uncertainty as to what extent the information about reliability is valid for the actual product conditions; information from a former product or preceding tests may not be fully transferable to the actual product. The paper therefore introduces a so-called "decrease-factor" δ, which artificially reduces the quality level of the prior reliability information. A procedure for estimating the decrease-factor is suggested for the case where information about the failure behavior of a former product is utilized; here the FMECA offers a useful foundation. In a synthetic example the reliability demonstration test is planned using calculation results, results of a preceding test, and knowledge of the failure behavior of a former product. The sample size necessary to demonstrate the requirements increases as the decrease-factor is lowered. Compared with the classical method, the sample size can be reduced when prior knowledge is considered, by at least 4 test items; in the best case, where the prior information is used in full, no subsequent test would be necessary. However, because of differences in test conditions, environment, and function, full use of prior information is not recommended; to reduce the risk of failing to meet the product requirements, the decrease-factor should be estimated with values lower than 1.

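    The entry above centres on a success-run (zero-failure) demonstration test planned with a Bayes procedure. The following minimal Python sketch is not the authors' procedure; it merely illustrates the idea under an assumed formulation in which prior knowledge is expressed as virtual failure-free tests that are down-weighted by a decrease-factor δ. All numbers are hypothetical.

        import math

        def classical_sample_size(r_target, confidence):
            # Success-run (zero-failure) test: smallest n with 1 - r_target**n >= confidence
            return math.ceil(math.log(1.0 - confidence) / math.log(r_target))

        def bayes_sample_size(r_target, confidence, n_prior, decrease_factor=1.0):
            # Prior knowledge expressed as n_prior virtual failure-free tests, down-weighted
            # by the decrease-factor (0 <= delta <= 1); with a uniform base prior, the
            # posterior gives P(R > r_target) = 1 - r_target**(delta*n_prior + n + 1).
            n = 0
            while 1.0 - r_target ** (decrease_factor * n_prior + n + 1) < confidence:
                n += 1
            return n

        print(classical_sample_size(0.9, 0.9))          # 22 items, zero failures
        print(bayes_sample_size(0.9, 0.9, 15, 1.0))     # prior fully credited -> fewer new items
        print(bayes_sample_size(0.9, 0.9, 15, 0.5))     # prior down-weighted  -> more new items
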
  • New concept for aircraft maintenance management. II. The establishment of the spoon-shaped curve model

    Publication Year: 2003 , Page(s): 68 - 73
    PDF (526 KB) | HTML

    On the basis of many years' analysis of abundant statistical data, and taking specialists' experience into consideration, the authors have identified a statistical law for the consumption of field maintenance man-hours: the spoon-shaped curve model. The well-known bath-tub curve model reveals the general law of failure over product life from the angle of reliability, while the spoon-shaped model describes the relation between maintenance and product life from the angle of maintainability. The consequence of failure is the opposite of the result of maintenance; the life profile of an aircraft is a unity of opposites. It can be thought of as a conflicting process between the failure process and the maintenance process, and the end of this contradiction always means the end of life for the aircraft. The bath-tub model describes the process of failure, while the spoon-shaped model reveals the process of maintenance; both are descriptions of a product's operational life. The authors adopt statistical analysis to derive the maintenance ratio curve, with the aim of recognizing, at a macroscopic level, the basic law of the maintenance process and then controlling the consumption of maintenance man-hour resources, so that the model plays a positive role in raising the level of maintenance and management of modern aircraft. This provides a new way of understanding aircraft maintenance and management.

  • The use of object based event driven simulation modelling to assess viable contractor support options

    Publication Year: 2003 , Page(s): 244 - 249
    PDF (395 KB) | HTML

    Through the use of an innovative object-based support simulation modelling approach, which utilises an ODBC link to combine data generated for a logistic support analysis record (LSAR) with a flexible simulation software product, we are able to easily assess the complexity and content of viable support approaches early in the acquisition phase of the life cycle. Support concept modelling and the use of Monte Carlo simulation are relatively new; the unique combination of a standard data format (MIL-STD-1388 and DEF STAN 00-60) with a very flexible object-based simulation model (eM-Plant) is new. This enables a quick and cost-effective means of understanding the various aspects of a potential support solution (spare parts, manpower, support equipment, facilities and associated cost). This event-driven approach uses inherent failure rates as the basis for simulating maintenance activities, and timed events for preventive maintenance and operational activities. Within the simulation model, using objects developed specifically for the logistic support simulation requirement, the support situation(s), i.e. maintenance levels, maintenance and support facilities, delay times between levels, spare part pool quantities, supply times and operational plan(s), are constructed. The logistics data, housed in the LSAR data repository, is then incorporated into the model and run through the support situation(s) and operational plan(s) that have been constructed. Within the model, each inherent failure results in a 'job card' that identifies the failure and the time taken to recover from it; this time is a combination of delay time and the time taken to perform the maintenance tasks associated with the failure mode within the maintenance plan. Compared with a deterministic modelling approach, the simulated approach provides a more realistic view of when support resources are likely to be required over a period of operational time against planned operational events and activities. Another advantage of this approach is that the object-based software, with objects developed to allow complex support situations to be established, can be easily manipulated to exactly reflect a project-specific requirement. This differs significantly from more traditional software tools that are fixed in nature and require users to 'shoe-horn' data and expend effort on the interpretation of output information. The object-based software also has the facility to build in optional data such as multiple support concepts, various missions, reliability block diagrams, decision events, etc. All inputs and outputs from the simulation model are in Microsoft Access® format and the software can link directly to an LSAR to provide information inputs.

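    As a rough illustration of the event-driven idea described above (inherent failure rates driving 'job cards' whose recovery time is delay plus active maintenance), here is a minimal Monte Carlo sketch in Python. The failure modes, rates and times are invented stand-ins for an LSAR extract, not data from the paper.

        import random

        # Hypothetical LSAR-style data: MTBF (h), logistic delay (h), active repair (h)
        FAILURE_MODES = {
            "pump_seal":   {"mtbf": 1200.0, "delay": 24.0, "repair": 6.0},
            "control_pcb": {"mtbf": 3000.0, "delay": 48.0, "repair": 2.5},
        }

        def simulate(op_hours=10_000.0, runs=500, seed=1):
            random.seed(seed)
            downtime, job_cards = 0.0, 0
            for _ in range(runs):
                for data in FAILURE_MODES.values():
                    t = random.expovariate(1.0 / data["mtbf"])
                    while t < op_hours:            # each inherent failure raises a 'job card'
                        job_cards += 1
                        downtime += data["delay"] + data["repair"]
                        t += random.expovariate(1.0 / data["mtbf"])
            print(f"mean job cards per period: {job_cards / runs:.1f}")
            print(f"mean downtime per period:  {downtime / runs:.0f} h")

        simulate()
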
  • Long term aging of electronics systems & maintainability strategy for critical applications

    Publication Year: 2003 , Page(s): 328 - 331
    Cited by:  Papers (9)
    PDF (455 KB) | HTML

    Many electronic systems, such as computers and consumer goods, have a short life, typically two to five years. However, there are critical applications, e.g. in power plants, where life may be 20 years or more. While proactive maintenance is normally practiced for mechanical components, the authors have not found references in the technical literature that address whether proactive maintenance for electronic systems may be warranted in cases of very long-term usage. Commonly used electronics reliability models include MIL-HDBK-217F and the Telcordia reliability model. In the former, failures are assumed to be "random", i.e. exponentially distributed; such a model has no "infant mortality" or "wearout" region. The latter, Telcordia, model does add a "first-year multiplier" to account for infant mortality, but no factors for well-aged components. Physics-of-failure approaches come closest to describing the "life" of electronic products; they are useful where life is limited by predictable physical mechanisms and work well, for example, for solder joint fatigue under thermal cycling. However, POF approaches are not adequate for large systems with disparate part types where the environmental conditions are benign. In the authors' applications, units are operated in a control room with carefully controlled environmental conditions. They therefore conducted a large measurement program on samples of aged control circuit cards to determine which components age, and by how much, over 20 years or more, and used the results, together with additional analysis, to determine a maintenance strategy for older control systems.

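    The contrast drawn above between constant-failure-rate handbook models and ageing can be made concrete with a small comparison, sketched below with assumed (not measured) parameters: an exponential model has no wearout region, whereas a Weibull model with shape greater than 1 does.

        import math

        def exp_reliability(t_years, lam_per_year):
            # Constant failure rate (handbook-style): hazard does not grow with age
            return math.exp(-lam_per_year * t_years)

        def weibull_reliability(t_years, eta_years, beta):
            # beta > 1 gives an increasing hazard, i.e. a wearout region
            return math.exp(-(t_years / eta_years) ** beta)

        for t in (5, 10, 20, 30):
            print(t, round(exp_reliability(t, 0.01), 3), round(weibull_reliability(t, 40.0, 3.0), 3))
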
  • Integrate hardware/software device testing for use in a safety-critical application

    Publication Year: 2003 , Page(s): 132 - 137
    PDF (363 KB) | HTML

    In train and transit applications, the occurrence of a single hazard (fault) may be quite catastrophic, resulting in significant societal costs ranging from loss of life to major asset damage. The axiomatic safety-critical assessment process (ASCAP) has been demonstrated as a competent method for assessing the risk associated with train and transit systems. ASCAP concurrently simulates the movement of n trains within a given system from the perspective of the individual trains. During simulation, each train interacts with a series of appliances located along the track, within the trains and at a central office. Within ASCAP, each appliance is represented by a probabilistic multistate model whose state selection is decided using a Monte Carlo process. In lieu of exercising this multistate model for a given appliance, the ASCAP methodology supports the inclusion of actual appliances within the simulation platform. Hence, an appliance can be fault tested in a simulation environment that emulates the actual operational environment to which it will be exposed. The ASCAP software can interface with a given appliance through an input/output (I/O) node contained within its executing platform. This node provides the ASCAP software with the capability of communicating with an external device, such as a track or an onboard appliance. When a train intersects with a particular appliance, the actual appliance can be queried by the ASCAP simulator to ascertain its status, and this state information can then be used by ASCAP in lieu of its multistate model representation of the appliance. This simulation process provides a mechanism to determine the appliance's ability to perform its intended safety-critical function in the presence of hardware/software design faults within its intended operational environment. By quantifying these effects prior to deploying a new appliance, credible and convincing evidence can be prepared to ensure that overall system safety will not be adversely impacted.

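    The appliance handling described above (Monte Carlo state selection from a probabilistic multistate model, optionally replaced by a query to the real appliance over an I/O node) can be sketched as follows; the three-state model, probabilities and query_status interface are assumptions for illustration only.

        import random

        # Hypothetical three-state appliance model: state -> probability
        APPLIANCE_MODEL = {"operational": 0.990, "degraded": 0.008, "failed": 0.002}

        def sample_state(model, rng):
            # Monte Carlo state selection from the multistate model
            u, acc = rng.random(), 0.0
            for state, p in model.items():
                acc += p
                if u <= acc:
                    return state
            return state

        def appliance_status(rng, real_device=None):
            # In lieu of the model, an actual appliance could be queried via an I/O node
            if real_device is not None:
                return real_device.query_status()   # assumed interface of the external device
            return sample_state(APPLIANCE_MODEL, rng)

        rng = random.Random(42)
        states = [appliance_status(rng) for _ in range(100_000)]
        print({s: states.count(s) for s in APPLIANCE_MODEL})
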
  • Collaboration in R&M applications

    Publication Year: 2003 , Page(s): 250 - 254
    PDF (292 KB) | HTML

    Time-to-market durations are shrinking, straining the ability to perform thorough Reliability & Maintainability (R&M) tasks during product development. This paper suggests that collaboration is a more effective model than team-based approaches for successful R&M efforts in these situations. The paper describes how R&M software vendors and the R&M community can enhance collaboration to shorten product development times and enhance the quality of the product.

  • Safety of VLSI designs using VHDL

    Publication Year: 2003 , Page(s): 138 - 142
    Cited by:  Papers (1)
    PDF (362 KB) | HTML

    This paper presents a methodology, together with an associated software tool, to generate fail-safe synthesizable VHDL descriptions from Petri nets or state diagrams. With this philosophy of automatically adding safety to VLSI systems designed in VHDL, designers do not have to include the error detection system, because it is added automatically to the design. The method is explained as a sequence of steps that transform a system into a fail-safe one. The tool uses a graphical environment to define the Petri net or state diagram. VHDL was chosen because it is a standard widely supported by synthesis tools; the implementation of the circuit, which is valid for either programmable logic or ASICs, is done by other tools that support the VHDL standard. Following this methodology, three design parameters appear: size (consumption), speed, and safety level, whereas most tools offer optimization only for speed and size. The proposed tool is fully implemented, the generated VHDL code is synthesizable, and experiments were made comparing unsafe and fail-safe systems with respect to their defining characteristics. Adding safety naturally imposes a heavy penalty in the area occupied by the circuit. Future work should study the combination of other safety mechanisms, including the possibility of establishing a flexible level of safety.

  • Integrating maintainability and data development

    Publication Year: 2003 , Page(s): 255 - 262
    Cited by:  Papers (2)
    PDF (1485 KB) | HTML

    This paper describes a new and innovative maintainability-engineering tool being developed by GE-GRC, AFRL and Lockheed Martin (LM). The tool employs modeling, simulation, information composition and virtual prototyping to allow easier analysis of service-related issues early in the design. The resulting coupling of design and service data promotes concurrent development of preliminary designs and maintenance manuals, resulting in a truly maintenance-oriented design approach. The prototype tool is in development and will be beta-tested on several commercial and military systems. Implementation challenges discussed include the need for user-defined constraints on the disassembly planner and difficulties in mapping design data to domain information. Future directions for this research will examine using the technology for maintenance task analysis, change impact analysis and job-level work scope planning. Finally, the paper describes an implementation strategy for technology transition, including an initial Illustrated Parts Catalogue (IPC) generation capability and maintainability studies on components of the GE Joint Strike Fighter (JSF) as well as other GE engines.

  • Using equivalent failure rates to assess the unavailability of an ageing system

    Publication Year: 2003 , Page(s): 82 - 88
    PDF (427 KB) | HTML

    In this paper, the authors are interested in calculating dependability measures for a system subject to ageing. The system can be in two states: the perfect operating state and the failure state. First, the problem of modelling ageing is discussed. The authors then study a general model and derive formulae for the unavailability and the mean number of failures over a period. For a highly available system, these formulae may be used to reduce the number of simulations needed to assess these dependability measures. Ageing models are often very complex and may lead to practical problems, such as the number of parameters to be estimated. The results of this article justify the use of a simplified ageing model involving only a few parameters; moreover, the computation is much easier with this model, providing an alternative to Monte Carlo simulation. In the final section, the authors apply this simplification to a problem of preventive maintenance optimization.

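    The idea above, replacing a detailed ageing model by an equivalent constant failure rate when assessing unavailability, can be illustrated with the following sketch. The Weibull/exponential parameters and the 'as good as new' repair assumption are mine, not the paper's model.

        import random

        def simulate(T=10_000.0, eta=5_000.0, beta=2.0, mttr=20.0, runs=2_000, seed=3):
            # Two-state component: Weibull ageing failures, exponential repairs,
            # repairs assumed 'as good as new'.
            random.seed(seed)
            down_total, failures_total = 0.0, 0
            for _ in range(runs):
                t = 0.0
                while True:
                    t += random.weibullvariate(eta, beta)       # time to next failure
                    if t >= T:
                        break
                    failures_total += 1
                    repair = random.expovariate(1.0 / mttr)
                    down_total += min(repair, T - t)
                    t += repair
            n_bar = failures_total / runs                        # mean number of failures over [0, T]
            u_bar = down_total / (runs * T)                      # mean unavailability
            lam_eq = n_bar / T                                   # equivalent constant failure rate
            print(f"mean failures {n_bar:.2f}   unavailability {u_bar:.4f}   lam_eq*MTTR {lam_eq * mttr:.4f}")

        simulate()
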
  • Automatic generation of diagnostic expert systems from fault trees

    Publication Year: 2003 , Page(s): 143 - 147
    Cited by:  Papers (11)  |  Patents (11)
    PDF (357 KB) | HTML

    When a fault tolerant computer-based system fails, diagnosis and repair must be performed to bring the system back to an operational state. The use of fault tolerance in the design implies that several components or subsystems may have failed, and that perhaps many of these faults were tolerated before the system actually succumbed to failure. Diagnosis procedures are then needed to determine the most likely source of failure and to guide repair actions. Expert systems are often used to guide diagnostics, but the derivation of an expert system requires knowledge (i.e., a conceptual model) of failure symptoms. In this paper, we consider the problem of diagnosing a system for which there may be little experience, because it is a one-of-a-kind system or because access to the system is limited. We conjecture that the same fault tree model used to aid in the design and analysis of the system can provide the conceptual model of system component interactions needed to define a diagnostic process. We explore the use of a fault tree model (with the probabilities of failure of the basic events), together with partial knowledge of the state of the system (i.e., the system has failed, and perhaps some components are known to be operational or failed), to produce a diagnostic aid.

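    The diagnostic use of a fault tree sketched above (basic-event probabilities plus the knowledge that the system has failed, and possibly that some components are good) amounts to ranking basic events by their posterior failure probability. A small illustrative Python example for an assumed tree TOP = (A AND B) OR C follows; the tree and numbers are not from the paper.

        from itertools import product

        # Hypothetical fault tree TOP = (A AND B) OR C, with basic-event failure probabilities
        P = {"A": 0.10, "B": 0.20, "C": 0.05}

        def top(state):
            return (state["A"] and state["B"]) or state["C"]

        def diagnostic_ranking(known=None):
            # Rank basic events by P(event failed | TOP failed, partial observations)
            known = known or {}
            joint, marg = 0.0, {e: 0.0 for e in P}
            for bits in product([False, True], repeat=len(P)):
                state = dict(zip(P, bits))
                if any(state[e] != v for e, v in known.items()):
                    continue
                pr = 1.0
                for e, failed in state.items():
                    pr *= P[e] if failed else (1.0 - P[e])
                if top(state):
                    joint += pr
                    for e, failed in state.items():
                        if failed:
                            marg[e] += pr
            return {e: round(m / joint, 3) for e, m in sorted(marg.items(), key=lambda kv: -kv[1])}

        print(diagnostic_ranking())               # only "system has failed" is known
        print(diagnostic_ranking({"C": False}))   # ...and C is observed to be operational
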
  • Who's eating your lunch? A practical guide to determining the weak points of any system

    Publication Year: 2003 , Page(s): 263 - 268
    PDF (488 KB) | HTML

    This paper describes a technique for focusing system-level improvement efforts on those areas that are contributing most to undesirable RAM performance. Monte Carlo simulation that incorporates localized performance reporting is used to identify the key culprits affecting a system's availability or reliability. This ranking technique accounts for issues such as distributional variability, logistical constraints, complex redundancy, internal dependencies, and dynamic components that are often overlooked using traditional mean life ranking methods. We applied this methodology to the V-22 Osprey tilt-rotor aircraft and produced some insightful results.

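    A toy version of the 'localized performance reporting' idea above, charging every simulated outage to the component that caused it and then ranking culprits by their share of system downtime, is sketched below with invented component data.

        import random

        # Hypothetical series system: component -> (MTBF hours, MTTR hours)
        COMPONENTS = {"pump": (800.0, 12.0), "valve": (2500.0, 4.0), "controller": (1500.0, 30.0)}

        def rank_culprits(T=50_000.0, runs=200, seed=7):
            random.seed(seed)
            downtime = {c: 0.0 for c in COMPONENTS}
            for _ in range(runs):
                t = 0.0
                while t < T:
                    # Next failure in the series system and the component that caused it
                    draws = {c: random.expovariate(1.0 / mtbf) for c, (mtbf, _) in COMPONENTS.items()}
                    culprit = min(draws, key=draws.get)
                    t += draws[culprit]
                    if t >= T:
                        break
                    repair = random.expovariate(1.0 / COMPONENTS[culprit][1])
                    downtime[culprit] += min(repair, T - t)   # localized reporting: charge the culprit
                    t += repair
            total = sum(downtime.values())
            for c, d in sorted(downtime.items(), key=lambda kv: -kv[1]):
                print(f"{c:12s} {100 * d / total:5.1f}% of system downtime")

        rank_culprits()
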
  • Time-to-failure tree

    Publication Year: 2003 , Page(s): 148 - 152
    Cited by:  Papers (7)
    PDF (285 KB) | HTML

    The reliability analysis of critical systems is often performed using fault tree analysis. Fault trees are analyzed using analytic approaches or Monte Carlo simulation. The applicability of the analytic approaches is limited to a few models and certain kinds of parameter distributions. In contrast, Monte Carlo simulation can be applied broadly, but it is time-consuming because of the intensive computation. In this paper, a new model called the time-to-failure tree is presented. Static and dynamic fault trees can be easily transformed into time-to-failure trees. Each time-to-failure tree is, in effect, a digital circuit, which can be synthesized to a field programmable gate array (FPGA); Monte Carlo simulation can therefore be significantly accelerated using FPGAs.

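    The transformation mentioned above is easiest to see on a static tree: in a time-to-failure tree the gates operate on failure times rather than probabilities, with OR becoming min and AND becoming max, so each Monte Carlo trial propagates times straight to the top event (and maps naturally onto comparator logic in an FPGA). A small software-only sketch with assumed exponential lifetimes:

        import random

        # Time-to-failure tree: OR takes the min of input failure times, AND takes the max.
        def t_or(*times):  return min(times)
        def t_and(*times): return max(times)

        def trial(rng):
            # Hypothetical basic events with exponential lifetimes (rates per hour)
            a = rng.expovariate(1e-4)
            b = rng.expovariate(2e-4)
            c = rng.expovariate(5e-5)
            # TOP = (A AND B) OR C, expressed on failure times instead of probabilities
            return t_or(t_and(a, b), c)

        rng = random.Random(0)
        mission = 5_000.0
        samples = [trial(rng) for _ in range(100_000)]
        print("P(top event within mission) ~", sum(t <= mission for t in samples) / len(samples))
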
  • Predicting reliability via neural networks

    Publication Year: 2003 , Page(s): 196 - 201
    Cited by:  Papers (1)
    PDF (351 KB) | HTML

    The objective of this work is to predict the reliability of automotive components and systems from experimental failure data using artificial neural networks. To construct the necessary neural models, the Neural Simulation Tool (NEST), developed by the Polytechnic of Milan, has been employed. An operative procedure based on the developed ANN models has been implemented to predict the trend of the unreliability index R100(t), the number of faults per 100 vehicles at time t (the number of months from production), starting from information on the number of vehicles produced and sold and the predicted number of faults up to the previous time t-1. The procedure has been applied to data from the Fiat Car Group, leading to satisfactory results.

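    NEST itself is not described here, so the sketch below only illustrates the general shape of the procedure summarized above: regress R100(t) on the month index and the previously observed value using a small neural network (scikit-learn assumed, data entirely synthetic).

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Synthetic stand-in for warranty data: R100(t) = faults per 100 vehicles at month t
        rng = np.random.default_rng(0)
        t = np.arange(1, 37)
        r100 = 8.0 * (1 - np.exp(-t / 12.0)) + rng.normal(0, 0.1, t.size)   # fabricated trend

        X = np.column_stack([t[1:], r100[:-1]])      # features: month, previous R100
        y = r100[1:]
        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

        print("predicted R100 at month 37:", model.predict(np.array([[37, r100[-1]]]))[0])
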
  • Importance analysis with Markov chains

    Publication Year: 2003 , Page(s): 89 - 95
    Cited by:  Papers (5)
    PDF (333 KB) | HTML

    In this paper, the authors introduce novel techniques for computing importance measures in state-space dependability models. Specifically, reward functions in a Markov reward model (MRM) are used for this purpose, in contrast to the common method of computing importance measures through combinatorial models and structure functions. The advantage of bringing these measures into the context of MRMs is that the mapping extends the applicability of these substantial results of reliability engineering, previously associated only with fault trees and other combinatorial modeling techniques. As a consequence, software packages that allow the automatic description of MRMs can easily compute importance measures in this new setting.

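    One way to read the abstract above is that an importance measure becomes the expected value of a suitably chosen reward function over the Markov model's state probabilities. The sketch below does this for the Birnbaum importance of component 1 in a two-component parallel system; the model and rates are my own toy example, not the paper's formulation (numpy assumed).

        import numpy as np

        # Two-component parallel system as a CTMC.  State = (c1_up, c2_up).
        lam = {1: 1e-3, 2: 5e-4}   # failure rates (per hour)
        mu  = {1: 1e-1, 2: 5e-2}   # repair rates  (per hour)
        states = [(u1, u2) for u1 in (1, 0) for u2 in (1, 0)]

        Q = np.zeros((4, 4))
        for i, (u1, u2) in enumerate(states):
            for j, (v1, v2) in enumerate(states):
                if (u1, u2) == (v1, v2):
                    continue
                if (v1, v2) == (1 - u1, u2):       # component 1 changes state
                    Q[i, j] = lam[1] if u1 else mu[1]
                elif (v1, v2) == (u1, 1 - u2):     # component 2 changes state
                    Q[i, j] = lam[2] if u2 else mu[2]
            Q[i, i] = -Q[i].sum()

        # Steady-state probabilities: pi Q = 0, sum(pi) = 1
        A = np.vstack([Q.T, np.ones(4)])
        pi = np.linalg.lstsq(A, np.append(np.zeros(4), 1.0), rcond=None)[0]

        # Reward function for the Birnbaum importance of component 1 (parallel system):
        # reward 1 in states where component 2 is down, i.e. where component 1 is critical.
        reward = np.array([0 if u2 else 1 for (_, u2) in states])
        print("Birnbaum importance of component 1:", float(pi @ reward))
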
  • Applying quality tools to reliability: a 12-step Six-sigma process to accelerate reliability growth in product design

    Publication Year: 2003 , Page(s): 562 - 567
    Cited by:  Papers (1)
    PDF (420 KB) | HTML

    This paper describes the application of a Six-sigma based 12-step quality process to the design and development of a new, highly complex commercial product. The process was targeted at an aggressive reliability growth rate that exceeded anything the company had achieved before. The process is described, along with some of the challenges and accommodations required to ensure a successful outcome.

  • A technique for characterization of hardware system behavior for product reliability predictions

    Publication Year: 2003 , Page(s): 500 - 506
    PDF (421 KB) | HTML

    This paper describes a cost-effective reliability estimation technique that systematically characterizes components of a system based on the system's behavior. The technique is an extension of statistical interference theory, in which the probability of failure is derived from the interference between the load and strength distributions. The material strength distribution is a physical property characterizing a material's failure; parameters such as stress, strain, or energy can be used to measure strength in hardware components, and strength distributions for most commonly used materials can be obtained from materials testing. The load distribution, derived from field return data, together with the strength distribution, aggregates the effects of numerous parameters to form a material behavioral response under field-use conditions.

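    For the statistical interference idea described above, the normal-normal case has a simple closed form, P(load > strength) = Φ((μ_L − μ_S)/√(σ_L² + σ_S²)); the numbers below are made up for illustration.

        from math import sqrt
        from statistics import NormalDist

        def interference_pof(mu_load, sd_load, mu_strength, sd_strength):
            # Load-strength interference with normal distributions:
            # P(failure) = P(load > strength)
            z = (mu_load - mu_strength) / sqrt(sd_load ** 2 + sd_strength ** 2)
            return NormalDist().cdf(z)

        # Hypothetical stress/strength values in MPa
        print(interference_pof(mu_load=300, sd_load=30, mu_strength=420, sd_strength=35))
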
  • A new approach to solve dynamic fault trees

    Publication Year: 2003 , Page(s): 374 - 379
    Cited by:  Papers (16)
    PDF (351 KB) | HTML

    Traditional static fault trees with AND, OR and voting gates cannot capture the dynamic behavior of system failure mechanisms such as sequence-dependent events, spares and dynamic redundancy management, and priorities of failure events. Researchers therefore introduced dynamic gates into fault trees to capture these sequence-dependent failure mechanisms. Dynamic fault trees are generally solved by automatic conversion to Markov models; however, this process generates a huge state space even for moderately sized problems. In this paper, the authors propose a new method for analyzing dynamic fault trees. In most cases, the proposed method solves the fault tree without converting it to a Markov model. The best methods applicable to static fault tree analysis are used in solving dynamic fault trees; the approach is straightforward for modular fault trees, and for the general case conditional probabilities are used. The authors concentrate only on exact methods. The proposed methodology solves dynamic fault trees quickly and accurately.

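    As a small example of solving a dynamic gate without building a Markov chain, the priority-AND of two exponential events has the closed form used below (checked against Monte Carlo); the gate, rates and mission time are illustrative, not taken from the paper.

        import math
        import random

        lam_a, lam_b, t = 2e-4, 1e-4, 5_000.0

        # PAND fires by t only if A fails before B and B fails by t:
        # P = (1 - exp(-lam_b t)) - lam_b/(lam_a + lam_b) * (1 - exp(-(lam_a + lam_b) t))
        exact = (1 - math.exp(-lam_b * t)) \
                - lam_b / (lam_a + lam_b) * (1 - math.exp(-(lam_a + lam_b) * t))

        rng = random.Random(1)
        hits, n = 0, 200_000
        for _ in range(n):
            a, b = rng.expovariate(lam_a), rng.expovariate(lam_b)
            if a < b <= t:
                hits += 1
        print(f"exact {exact:.5f}   Monte Carlo {hits / n:.5f}")
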
  • A new approach for evaluating the reliability of highly reliable systems

    Publication Year: 2003 , Page(s): 475 - 481
    Cited by:  Papers (1)
    PDF (413 KB) | HTML

    This paper outlines a new approach for accurately evaluating the reliability of a complex, highly reliable system for which neither analytical methods nor conventional simulations are feasible. The reliability is simulated in a feasible parameter region, and then rational interpolation (RI) is used to calculate the desired reliability. Numerical experiments are presented to validate the feasibility of this method. It has been demonstrated that the RI approach is very effective in providing a highly accurate result for the system reliability.

  • PREDICT: a case study, using fuzzy logic

    Publication Year: 2003 , Page(s): 188 - 195
    Cited by:  Papers (2)
    PDF (483 KB) | HTML

    Los Alamos National Laboratory, Design For Reliability, Inc., and others have worked together to develop PREDICT, a new methodology to characterize the reliability of a new product during its development program. Rather than conducting testing after hardware has been built and developing statistical confidence bands around the results, this updating approach starts with an early reliability estimate characterized by large uncertainty, and then proceeds to reduce the uncertainty by folding in fresh information in a Bayesian framework. A considerable amount of knowledge is available at the beginning of a program in the form of expert judgment that helps to provide the initial estimate. This estimate is then continually updated as substantial and varied information becomes available during the course of the development program. This paper presents a case study of the application of PREDICT, including an example of the use of fuzzy logic, with the objective of further describing the methodology.

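    The updating loop described above (start from an expert-judgment estimate with wide uncertainty, then narrow it as test evidence arrives) is easiest to see in a conjugate Bayesian toy example; the prior and the cumulative test counts below are invented, and the fuzzy-logic element of PREDICT is not reproduced (scipy assumed).

        from scipy import stats

        # Expert judgment encoded as a weak Beta prior on reliability (wide uncertainty)
        a, b = 4.0, 1.0   # prior mean 0.8, equivalent to only a few 'pseudo-tests'

        # Cumulative successes/failures folded in as the development program progresses
        for label, successes, failures in [("prior", 0, 0),
                                           ("after subsystem tests", 18, 1),
                                           ("after system tests", 45, 1)]:
            a_post, b_post = a + successes, b + failures
            lo, hi = stats.beta.ppf([0.05, 0.95], a_post, b_post)
            mean = a_post / (a_post + b_post)
            print(f"{label:22s} mean={mean:.3f}  90% interval=({lo:.3f}, {hi:.3f})")
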
  • Design of inspection and maintenance models based on the CCC-chart

    Publication Year: 2003 , Page(s): 74 - 81
    PDF (372 KB) | HTML

    In this research, six maintenance models are constructed based on whether minor inspection, major inspection, minor maintenance and major maintenance are performed on a system. The system studied is a production process in which items produced can be classified as either conforming or nonconforming, and a statistical process control chart called the CCC-chart (cumulative count control chart) can be applied to monitor the process. The maintenance models are analyzed quantitatively, and selection among the models can be based on economic considerations. The total cost can be broken down into inspection cost, maintenance cost, and the cost due to deterioration of the process. From the analytic results obtained, the choice of maintenance plan can be optimized from an economic point of view.

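    The paper's six maintenance models are not reproduced here, but the CCC-chart it builds on is standard: under an in-control fraction nonconforming p0 the count of conforming items between nonconforming ones is geometric, which gives the probability limits computed in this short sketch.

        import math

        def ccc_limits(p0, alpha=0.0027):
            # CCC chart: cumulative count of conforming items until a nonconforming one.
            # Under in-control p0 the count is geometric, giving probability limits:
            lcl = math.log(1 - alpha / 2) / math.log(1 - p0)
            ucl = math.log(alpha / 2) / math.log(1 - p0)
            return math.ceil(lcl), math.floor(ucl)

        lcl, ucl = ccc_limits(p0=0.001)
        print(f"signal if a nonconforming item occurs before {lcl} or after {ucl} conforming items")
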
  • A business model for reliability

    Publication Year: 2003 , Page(s): 459 - 463
    Cited by:  Papers (2)
    PDF (345 KB) | HTML

    The ASPIRE business model incorporates the essential business features that enable and foster exceptional reliability as a market differentiator. The key features are: (1) create a win-win scenario from the outset, fostering success in both manufacturer and customer objectives while working within technical, business and legislative limitations; (2) focus reliability on maximising the achievement of customer objectives and, through judicious choice of technical and customer metrics, create market-leading opportunities; (3) ensure that the methods and processes used during design and manufacturing gain maximum efficiency: speed of reliability achievement, effectiveness and cost. Together these provide the basis, but exceptional reliability remains dependent on design and manufacturing effort. The model is best implemented during design concept and pre-contractual discussions, using white-board 'brainstorming' techniques to identify new design opportunities and to avoid unnecessary business restrictions.

  • Sahinoglu-Libby (SL) probability density function-component reliability applications in integrated networks

    Publication Year: 2003 , Page(s): 280 - 287
    Cited by:  Papers (3)
    PDF (394 KB) | HTML

    The forced outage ratio (FOR) of a hardware (or software) component is defined as the failure rate divided by the sum of the failure and repair rates. The probability density function (PDF) of the FOR is a three-parameter beta distribution (G3B), renamed the Sahinoglu-Libby (SL) probability distribution, which was pioneered in 1981. The failure and repair rates are assumed to be generalized gamma variables whose corresponding shape and scale parameters, respectively, are unequal. The three-parameter beta or G3B PDF, equivalent to the FOR PDF and renamed the SL, is shown to default to an ordinary two-parameter beta PDF when the shape parameters are identical. Furthermore, the authors present a wide perspective on the usability and limitations of this PDF in theoretical and practical terms, also referring to work done by other authors in the area. In the new era of quality and reliability, use of the SL will assist studies in correctly formulating the PDF of the unavailability or availability random variable to estimate network reliability and quality indices for engineering and utility considerations. Bayesian methodology is employed to compute small-sample estimators using informative and noninformative priors for the component failure and repair rates in terms of loss functions, as opposed to the uncontested and erroneous use of the MLE regardless of the inadequacy of the historical data. Case studies illustrate a phenomenon of overestimation of the availability index in safety- and time-critical components, as well as in systems, when the MLE is conventionally employed. This work assists network planners and analysts, such as those of Internet service providers, by providing a targeted reliability measure of their integrated computer network in a quality-conscious environment under the pressure of ever-expanding demand and a risk that needs to be mitigated.

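    The construction behind the SL/G3B density described above, the ratio λ/(λ+μ) with independent gamma-distributed failure and repair rates, can be simulated directly. In the parameterization used below (gamma shape and scale), equal scale parameters recover an ordinary beta, while unequal ones do not; the parameters are arbitrary and numpy/scipy are assumed.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def sample_for(a, scale_lam, b, scale_mu, n=200_000):
            # FOR = lambda / (lambda + mu) with independent gamma-distributed rates
            lam = rng.gamma(a, scale_lam, n)
            mu = rng.gamma(b, scale_mu, n)
            return lam / (lam + mu)

        # Equal gamma scales: the ratio follows an ordinary Beta(a, b)
        x = sample_for(a=2.0, scale_lam=1.0, b=8.0, scale_mu=1.0)
        print("KS distance vs Beta(2, 8):", round(stats.kstest(x, "beta", args=(2.0, 8.0)).statistic, 4))

        # Unequal gamma scales: the ordinary beta no longer fits (the SL/G3B case)
        y = sample_for(a=2.0, scale_lam=3.0, b=8.0, scale_mu=1.0)
        print("KS distance vs Beta(2, 8):", round(stats.kstest(y, "beta", args=(2.0, 8.0)).statistic, 4))
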