
Reliability, IEEE Transactions on

Issue 1 • March 1995


24 articles in this issue
  • Comment on: "Component relevancy in multistate reliability models"

    Publication Year: 1995 , Page(s): 95 - 96

    The author comments on the paper by A.M. Abouammoh and M.A. Al-Kadi (see ibid., vol. 40, p. 370-4, 1991), which discusses various notions of component relevancy for multistate systems and suggests a unified form of relevancy. The paper contains many misprints, some wrong examples, and several results that need clarification; the most important of these are mentioned.

  • Comment on: Reliability of k-out-of-n:G systems with imperfect fault-coverage

    Publication Year: 1995 , Page(s): 137 - 138

    This note comments on the paper "Reliability of k-out-of-n:G systems with imperfect fault-coverage" by S. Akhtar (1994). An alternative probability argument can be used to obtain the MTBF (mean time between failures) and MTTF (mean time to failure) for such systems; this has the advantage that higher moments of the failure times can also be determined. (An illustrative MTTF computation is sketched after this entry.)

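    A minimal numerical sketch, not the note's probability argument: assuming i.i.d. exponential components and perfect fault coverage (the coverage effects discussed in the paper are omitted), MTTF is evaluated as the integral of the k-out-of-n:G reliability function. All parameter values are illustrative.

        # Hedged sketch: MTTF of a k-out-of-n:G system with i.i.d. exponential
        # components and perfect coverage assumed (coverage effects omitted).
        from math import comb, exp

        def kofn_reliability(k, n, p):
            """P(at least k of n components work), each working with probability p."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        def kofn_mttf(k, n, lam, dt=0.01, t_max=300.0):
            """MTTF = integral of R(t) dt, with component reliability exp(-lam*t)."""
            return sum(kofn_reliability(k, n, exp(-lam * (i + 0.5) * dt)) * dt
                       for i in range(int(t_max / dt)))

        # Example: 2-out-of-3:G system, component failure rate 0.1/hour.
        # Exact value is (1/3 + 1/2)/0.1 ≈ 8.33 hours.
        print(kofn_mttf(2, 3, 0.1))
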
  • Computation in faulty stars [hypercube networks]

    Publication Year: 1995 , Page(s): 114 - 119
    Cited by:  Papers (1)

    The question of simulating a completely healthy hypercube with a degraded one (one with some faulty processors) has been considered by several authors. We consider the question for the star-graph interconnection network. With suitable assumptions on the fault probability, there is, with high probability, a bounded-distance embedding of Kn×Sn-1 in a degraded Sn, of congestion O(n). By a different method, a congestion O(log(n)) embedding of Sn can be obtained. For the hypercube, O(1) congestion has been obtained, but this is open for the star graph. Other results presented include a guaranteed O(n)-slowdown simulation if there are sufficiently few faults, and upper and lower bounds for the minimal size of a system of faults that renders every m-substar faulty.

  • Effect of testing techniques on software reliability estimates obtained using a time-domain model

    Publication Year: 1995 , Page(s): 97 - 103
    Cited by:  Papers (8)

    Since the early 1970s, researchers have proposed several models to estimate software reliability as testing progresses. Among these, the time-domain models are the most common. We present empirical evidence to show that the testing method does affect the reliability estimates obtained using one of these models, viz., the Musa basic execution-time model. The evidence suggests that: (1) reliability models need to consider additional data generated during testing, such as some form of code coverage, to obtain accurate reliability estimates; and (2) further research is necessary to determine which testing method, or combination thereof, leads to higher reliability. (The standard form of the Musa model is sketched after this entry.)

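    A brief sketch of the Musa basic execution-time model named in the abstract, in its standard textbook form (mean-value function and failure intensity); the parameter values below are made up for illustration and are not taken from the paper.

        # Musa basic execution-time model: mu(tau) = nu0*(1 - exp(-lam0*tau/nu0)),
        # lambda(tau) = lam0*exp(-lam0*tau/nu0). Illustrative parameters only.
        from math import exp

        def expected_failures(tau, lam0, nu0):
            """Expected cumulative failures after execution time tau."""
            return nu0 * (1.0 - exp(-lam0 * tau / nu0))

        def failure_intensity(tau, lam0, nu0):
            """Current failure intensity after execution time tau."""
            return lam0 * exp(-lam0 * tau / nu0)

        lam0, nu0 = 0.05, 120.0   # initial intensity (failures/CPU-s), total expected failures
        for tau in (0, 1_000, 5_000, 20_000):
            print(tau, round(expected_failures(tau, lam0, nu0), 1),
                  failure_intensity(tau, lam0, nu0))
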
  • A new framework for part failure-rate prediction models

    Publication Year: 1995 , Page(s): 139 - 146
    Cited by:  Papers (2)

    This paper presents a framework for developing part failure-rate models. It is a partial result of an effort sponsored by the US Air Force for the development of reliability prediction models for military avionics. Published data show that the existing reliability prediction methods fall far short of providing the required accuracy. One of the problems with the existing methods is the exclusion of critical factors. The new framework is based on the premise that essentially all failures are caused by the interactions of built-in flaws, failure mechanisms, and stresses. These three ingredients combine to form failure distributions, which are functions of stress-application duration (e.g., aging time), number of thermal cycles, and vibration duration. The Weibull distribution has been selected as the general distribution. The distribution is then modified by critical factors, such as flaw quantities, effects of environmental stress screening, calendar-time reliability improvements, and vendor quality differences, to provide the part failure-rate functions. To provide credibility for the framework, only well-published data and information have been used.

  • An O(k³·log(n/k)) algorithm for the consecutive-k-out-of-n:F system

    Publication Year: 1995 , Page(s): 128 - 131
    Cited by:  Papers (5)

    The fastest generally-recognized algorithms for computing the reliability of consecutive-k-out-of-n:F systems require O(n) time, for both the linear and circular systems. The authors' new algorithm requires O(k³·log(n/k)) time. The algorithm can be extended to yield an O(n·max{k³·log(n/k), log(n)}) total-time procedure for solving the combinatorial problem of counting the number of working states with w working and n-w failed components, w = 1, 2, ..., n. (A baseline recursion for the standard approach is sketched after this entry.)

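    For contrast with the O(k³·log(n/k)) result, a sketch of the standard recursion for the linear consecutive-k-out-of-n:F system that the abstract uses as its baseline, written here in a simple O(n·k) form rather than the fully incremental O(n) form; this is not the authors' new algorithm.

        # Standard recursion for the LINEAR consecutive-k-out-of-n:F system:
        # R(i) = R(i-1) - p_{i-k} * q_{i-k+1}*...*q_i * R(i-k-1), with R(j) = 1 for j < k.
        def consec_kofn_F(k, q):
            """q[i] is the failure probability of component i+1; returns system reliability."""
            n = len(q)
            p = [1.0 - qi for qi in q]
            R = [1.0] * (n + 1)                          # R[i]: reliability of first i components
            for i in range(k, n + 1):
                run = 1.0
                for j in range(i - k + 1, i + 1):        # components i-k+1 .. i all fail
                    run *= q[j - 1]
                p_prev = 1.0 if i == k else p[i - k - 1]     # boundary convention p_0 = 1
                R_prev = 1.0 if i == k else R[i - k - 1]     # boundary convention R_{-1} = 1
                R[i] = R[i - 1] - p_prev * run * R_prev
            return R[n]

        print(consec_kofn_F(2, [0.1, 0.1, 0.1]))   # 1 - 2*(0.1)^2 + (0.1)^3 = 0.981
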
  • A combinatorial approach to modeling imperfect coverage

    Publication Year: 1995 , Page(s): 87 - 94
    Cited by:  Papers (25)

    A new algorithm combines a coverage model with a combinatorial model to compute system unreliability. Its advantage is that for a class of computer systems, it is simpler than current algorithms. The method applies to combinatorial models which can generate cutsets for the system. This set of cutsets is augmented with cutsets representing the uncovered failures of the system. The resulting set is manipulated by combining standard multi-state and sum-of-disjoint-products solution techniques. It is possible to compute the exact unreliability of the system using this algorithm. If the size of the system and the time required for the analysis become prohibitive, however, the solution can be truncated and bounds on the system unreliability computed. The authors' algorithm is important because it adapts standard combinatorial solution techniques to a problem that was previously thought to require a Markov solution. The ability to model a fault-tolerant computer system completely within a combinatorial model allows results to be calculated more quickly and accurately, and thus to impact system design. This new technology is easily integrated into existing design/analysis methodologies. Coverage provides a more accurate picture of system behavior, and gives more faith in reliability estimates.

  • Reliability analysis of complex models using SURE bounds

    Publication Year: 1995 , Page(s): 46 - 53
    Cited by:  Papers (2)

    As computer and communication systems become more complex, it becomes increasingly difficult to analyze their hardware reliability, because simple models can fail to adequately capture subtle but important features. This paper describes several ways the authors have addressed this problem for analyses based upon White's SURE theorem. They show: how reliability analysis based on SURE mathematics can attack very large problems by accepting recomputation in order to reduce memory usage; how such analysis can be parallelized, both on multiprocessors and on networks of ordinary workstations, with excellent performance gains; how the SURE theorem supports efficient Monte Carlo based estimation of reliability; and the advantages of the method. Empirical studies of large models solved using these methods show that they are effective in reducing the solution time of large, complex problems.

  • Time-varying failure rates in the availability and reliability analysis of repairable systems

    Publication Year: 1995 , Page(s): 155 - 160
    Cited by:  Papers (15)

    This paper combines time-varying failure rates and Markov chain analysis to obtain a hybrid reliability and availability analysis. However, combining these techniques can, depending on the size of the system, result in solutions of the Markov chain differential matrix equations that are intractable. This paper identifies solutions that are tractable; these form the analytical baseline for the reliability and availability analysis of systems with time-varying failure rates. Tractable solutions were found for the 1-component 2-state and the 2-component 4-state configurations. Time-varying failure rates were characterized by a general polynomial expression. Constant, linear, and Weibull failure-rate functions are special cases of this polynomial. The general polynomial failure rate provides flexibility in modeling the time-varying failure rates that occur in practice. (The simplest non-repairable case is sketched after this entry.)

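    An illustrative sketch of the simplest case related to the abstract, a single non-repairable component with a polynomial hazard rate, where R(t) = exp(-∫₀ᵗ λ(u) du); the paper's Markov-chain treatment of repairable configurations is not reproduced here.

        # Single component with polynomial hazard lambda(t) = c0 + c1*t + c2*t^2 + ...
        # Reliability R(t) = exp(-(c0*t + c1*t^2/2 + c2*t^3/3 + ...)). Illustrative only.
        from math import exp

        def reliability(t, coeffs):
            """coeffs = [c0, c1, c2, ...] of the polynomial hazard rate."""
            cumulative_hazard = sum(c * t**(i + 1) / (i + 1) for i, c in enumerate(coeffs))
            return exp(-cumulative_hazard)

        print(reliability(10.0, [0.01]))          # constant rate: exp(-0.1)
        print(reliability(10.0, [0.0, 0.002]))    # linearly increasing rate: exp(-0.1)
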
  • An O(n·(log₂(n))²) algorithm for computing the reliability of k-out-of-n:G and k-to-l-out-of-n:G systems

    Publication Year: 1995 , Page(s): 132 - 136
    Cited by:  Papers (1)

    This paper presents the RAFFT-GFP (Recursively Applied Fast Fourier Transform for Generator Function Products) algorithm as a computationally superior algorithm for expressing and computing the reliability of k-out-of-n:G and k-to-l-out-of-n:G systems using the fast Fourier transform. Originally suggested by Barlow and Heidtmann (1984), generating functions provide a clear, concise method for computing the reliabilities of such systems. By recursively applying the FFT to the computation of generator-function products, the RAFFT-GFP achieves an overall asymptotic computational complexity of O(n·(log₂(n))²) for computing system reliability. Algebraic manipulations suggested by Upadhyaya and Pham (1993) are reformulated in the context of generator functions to reduce the number of computations. The number of computations and the CPU time are used to compare the performance of the RAFFT-GFP algorithm to the best found in the literature. Because of its larger overhead, the RAFFT-GFP algorithm is superior, in terms of both computation count and CPU time, only for problem sizes larger than about 4000 components for the examples studied in this paper. Lastly, studies of very large systems with unequal reliabilities indicate that the binomial distribution gives a good approximation for the generating-function coefficients, allowing algebraic solutions for system reliability. (The underlying generating-function formulation is sketched after this entry.)

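    A minimal sketch of the Barlow-Heidtmann generating-function formulation that RAFFT-GFP accelerates: the coefficient of x^j in ∏(q_i + p_i·x) is the probability that exactly j components work. The plain O(n²) convolution below stands in for the paper's FFT-based product.

        # Generating-function reliability for k-to-l-out-of-n:G systems via plain
        # polynomial convolution (the FFT acceleration of RAFFT-GFP is not shown).
        def working_count_distribution(p):
            """c[j] = P(exactly j of the n independent components work)."""
            coeffs = [1.0]
            for pi in p:
                qi = 1.0 - pi
                new = [0.0] * (len(coeffs) + 1)
                for j, c in enumerate(coeffs):   # multiply the polynomial by (qi + pi*x)
                    new[j] += c * qi
                    new[j + 1] += c * pi
                coeffs = new
            return coeffs

        def k_to_l_out_of_n_G(k, l, p):
            """Reliability of a k-to-l-out-of-n:G system (k <= working components <= l)."""
            c = working_count_distribution(p)
            return sum(c[k:l + 1])

        print(k_to_l_out_of_n_G(2, 3, [0.9, 0.8, 0.95]))   # ordinary 2-out-of-3:G case
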
  • Assembly-level reliability: a methodology for effective manufacturing of IC packages

    Publication Year: 1995 , Page(s): 14 - 18
    Cited by:  Papers (1)

    This paper discusses the general methodology of assembly-level reliability (ALR) as part of a corporate effort at designing reliability into the whole assembly process of integrated-circuit (IC) packages. Semiconductor packages with assembly-induced defects sometimes escape detection for a variety of reasons. Trying to eliminate this problem piecemeal may result only in single-process optimization, and does not guarantee full assembly-line balancing for error-free production. ALR is a systematic 4-pronged approach which uses a combination of techniques for synergistic effects. (1) Problems of immediate need have to be addressed and contained. (2) The proper steps must then be taken to ensure that similar issues do not resurface. (3) Design-for-manufacturability principles must be applied; e.g., the design of the package can be simplified to reduce the number of assembly steps, increase throughput, and cut cost. (4) Qualification methodologies have to be revisited. Less expensive but well-characterized test chips can be introduced in lieu of actual devices. Accelerated testing with a good understanding of the failure mechanisms facilitates faster product qualification to ensure a time-to-market advantage. Together with these more cost-effective qualification techniques, the proper reliability-monitoring features must be installed. Only then can the true vision of ALR be accomplished, viz., ensuring recognition, by both customers and competitors, as a company that continuously manufactures defect-free parts.

  • Survey of reliability studies of consecutive-k-out-of-n:F and related systems

    Publication Year: 1995 , Page(s): 120 - 127
    Cited by:  Papers (25)

    The consecutive-k-out-of-n:F and related systems have caught the attention of many researchers since the early 1980s. Studies of these systems lead to a better understanding of the reliability of general series systems, in both computation and structure. This paper is mainly a chronological survey of methods for computing the reliability of these systems.

  • Fault-tree analysis: a knowledge-engineering approach

    Publication Year: 1995 , Page(s): 37 - 45
    Cited by:  Papers (8)

    This paper deals with the application of knowledge engineering and a methodology for the assessment and measurement of reliability, availability, maintainability, and safety of industrial systems using fault-tree representation. Object-oriented structures, production rules representing the expert's heuristics, algorithms, and database structures are the basic elements of the system. The blackboard architecture of the system supports qualitative and quantitative evaluation of the fault tree. A fuzzy-set approach analyzes problems with sparse failure data or considerable fuzziness or imprecision. Fault-tree analysis is a knowledge-acquisition structure that has been extensively explored by knowledge engineers. Reliability engineers can apply the techniques developed by this area of computer science to: (1) improve the data-acquisition process; (2) explore the benefits of object-oriented expert systems for reliability applications; (3) integrate the several sources of knowledge into a unique system; (4) explore approximate reasoning to handle uncertainty; and (5) develop hybrid solution strategies combining expert heuristics, conventional procedures, and available failure data.

  • Sensitivity and uncertainty analysis of Markov-reward models

    Publication Year: 1995 , Page(s): 147 - 154
    Cited by:  Papers (10)

    Markov-reward models are often used to analyze the reliability and performability of computer systems. One difficult problem therein is the quantification of the model parameters. Where such parameters are available, e.g., from measurement data collected by manufacturers, they are (a) generally regarded as confidential and (b) difficult to access. This paper addresses two ways of dealing with uncertain parameters: (1) sensitivity analysis, and (2) Monte Carlo uncertainty analysis. Sensitivity analysis is relatively fast and cheap, but it correctly describes only the local behavior of the model-outcome uncertainty resulting from the model-parameter uncertainties. When the uncertain parameters are dependent, sensitivity analysis is difficult. The authors extend classical sensitivity analysis so that its results conform better to those of Monte Carlo uncertainty analysis. Monte Carlo uncertainty analysis provides a global view; since it can include parameter dependencies, it is more accurate than sensitivity analysis. Two examples demonstrate both approaches and illustrate the effects that uncertainty and dependence can have. (A toy Monte Carlo uncertainty sketch follows this entry.)

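    A toy sketch of the Monte Carlo uncertainty idea on a 2-state Markov availability model with steady-state availability A = μ/(λ+μ); the lognormal parameter uncertainties are illustrative assumptions, not the paper's examples.

        # Monte Carlo uncertainty analysis on a toy 2-state availability model.
        # Parameter distributions below are assumed for illustration only.
        import random, statistics

        random.seed(1)
        samples = []
        for _ in range(10_000):
            lam = random.lognormvariate(mu=-7.0, sigma=0.3)   # uncertain failure rate
            mu_ = random.lognormvariate(mu=-2.0, sigma=0.3)   # uncertain repair rate
            samples.append(mu_ / (lam + mu_))                 # steady-state availability

        print("mean availability:", statistics.mean(samples))
        print("std deviation:    ", statistics.stdev(samples))
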
  • Dependent and multimode failures in reliability evaluation of extra-stage shuffle-exchange MINs

    Publication Year: 1995 , Page(s): 73 - 86
    Cited by:  Papers (1)

    Previous reliability evaluations for multistage interconnection networks (MINs) assumed that “all failures are statistically-independent and that no degraded operational modes exist for switches”, though these assumptions are not realistic. For example, researchers have described instances of statistically-dependent failures, or fault side-effects, in some MINs. This paper presents efficient algorithms for terminal, broadcast, and K-terminal reliability evaluation of the shuffle-exchange network with an extra stage (SENE), a redundant-path MIN, under assumptions that allow statistical dependence between failures and degraded operational modes for switches. A modified shock model incorporates failure statistical-dependency and multiple operational modes into the reliability evaluation. For an N×N SENE, the reliability algorithms and their run-times are: terminal and broadcast → O(log(N)); K-terminal → O(|K|·log(N)).

  • Weibull accelerated life testing with unreported early failures

    Publication Year: 1995 , Page(s): 31 - 36
    Cited by:  Papers (1)

    Situations arise in life testing where early failures go unreported, e.g., a technician believes an early failure is “his fault” or “premature” and must not be recorded. Consequently, the reported data come from a truncated distribution and the number of unreported early failures is unknown. Inferences are developed for a Weibull accelerated life-testing model in which transformed scale and shape parameters are expressed as linear combinations of functions of the environment (stress). Coefficients of these combinations are estimated by maximum-likelihood methods, which allow point, interval, and confidence-bound estimates to be computed, at a given stress level, for quantities of interest such as the shape parameter, the scale parameter, a selected quantile, the reliability at a particular time, and the number of unreported early failures. The methodology allows lifetimes to be reported as exact, right-censored, or interval-valued, and to be optionally subject to testing protocols which establish thresholds below which lifetimes go unreported. A broad spectrum of applicability is anticipated by virtue of the substantial generality accommodated in both stress modeling and data type. (The truncation idea is sketched after this entry.)

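    A sketch of the truncation idea only: if lifetimes below a reporting threshold τ go unreported, the reported failures follow the Weibull density conditioned on T > τ, and the likelihood is adjusted accordingly. A single stress level with exact failure times is fitted below (numpy and scipy assumed); the paper's stress-dependent model, censoring options, and interval data are not reproduced.

        # Maximum likelihood for a left-truncated Weibull sample (threshold tau known).
        # Illustrative sketch only; requires numpy and scipy.
        import numpy as np
        from scipy.optimize import minimize

        def neg_log_likelihood(params, t, tau):
            beta, eta = params                 # Weibull shape and scale
            if beta <= 0 or eta <= 0:
                return np.inf
            z = t / eta
            log_f = np.log(beta / eta) + (beta - 1) * np.log(z) - z**beta   # log density
            log_S_tau = -(tau / eta) ** beta                                # log S(tau)
            return -np.sum(log_f - log_S_tau)  # each term conditioned on T > tau

        rng = np.random.default_rng(0)
        eta_true, beta_true, tau = 100.0, 2.0, 30.0
        t = eta_true * rng.weibull(beta_true, size=2000)
        t = t[t > tau]                          # early failures go unreported
        fit = minimize(neg_log_likelihood, x0=[1.0, float(np.mean(t))],
                       args=(t, tau), method="Nelder-Mead")
        print(fit.x)                            # estimates of (beta, eta)
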
  • Mixture models for reliability of software with imperfect debugging: identifiability of parameters

    Publication Year: 1995 , Page(s): 104 - 113
    Cited by:  Papers (1)

    A class of software-reliability mixture-type models is introduced in which individual bugs come with i.i.d. random failure-causation rates λ and have conditional hazard function φ(t|λ) for software failure times. The models allow the possibility of imperfect debugging, in that at each failure a new bug (possibly with another rate-parameter λ) is introduced, statistically independently of the past, with probability p. For φ(t|λ) = λ, it is shown that the unknown parameters p, n0 (the initial number of bugs), and G (the CDF for λ) are uniquely determined from the probability law of the failure-count function (N(t), 0⩽t⩽δ), for arbitrary δ>0. The parameters (n0, G) are also uniquely determined by the mean failure-count function E{N(t)} when p is known (e.g., is assumed to be 0), but not when p is unknown. For special parametric classes of G, the parameters (n0, p, G) are uniquely determined by (E{N(t)}, 0⩽t⩽δ).

  • Genetic-algorithm-based reliability optimization for computer network expansion

    Publication Year: 1995 , Page(s): 63 - 72
    Cited by:  Papers (21)

    This paper explains the development and implementation of a new methodology for expanding existing computer networks. Expansion is achieved by adding new communication links and computer nodes such that reliability measures of the network are optimized within specified constraints. A genetic-algorithm-based computer network expansion methodology (GANE) is developed to optimize a specified objective function (reliability measure) under a given set of network constraints. This technique is very powerful because the same approach can be extended to solve different types of problems; the only modification required is the objective-function evaluation module. The versatility of the genetic algorithm is illustrated by applying it to various network expansion problems (optimizing diameter, average distance, and computer-network reliability for network expansion). The results are compared with the optimal solutions computed using an exhaustive search of the complete solution space. The results demonstrate that GANE is very effective (in both accuracy and computation time) and applies to a wide range of problems, but it does not guarantee optimal results for every problem. (A generic genetic-algorithm skeleton is sketched after this entry.)

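    A generic genetic-algorithm skeleton illustrating the loop the abstract describes (selection, crossover, and mutation over candidate link additions). The fitness function below is a stand-in toy objective under a budget constraint, not GANE's reliability measure.

        # Toy GA: candidate expansions are bit-strings over N_LINKS possible new links.
        # The fitness function is a placeholder, not a network-reliability measure.
        import random

        N_LINKS, POP, GENS, BUDGET = 20, 30, 100, 8
        random.seed(42)

        def fitness(chrom):
            cost = sum(chrom)                       # one unit of budget per added link
            return sum(i * b for i, b in enumerate(chrom)) if cost <= BUDGET else -1.0

        def crossover(a, b):
            cut = random.randrange(1, N_LINKS)      # single-point crossover
            return a[:cut] + b[cut:]

        def mutate(chrom, rate=0.05):
            return [bit ^ 1 if random.random() < rate else bit for bit in chrom]

        population = [[random.randint(0, 1) for _ in range(N_LINKS)] for _ in range(POP)]
        for _ in range(GENS):
            population.sort(key=fitness, reverse=True)
            parents = population[:POP // 2]          # truncation selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP - len(parents))]
            population = parents + children

        best = max(population, key=fitness)
        print(best, fitness(best))
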
  • Determining the duration of a demonstration life-test before all units fail

    Publication Year: 1995 , Page(s): 26 - 30

    Two small samples of electrodes insulated with the standard and modified designs were put on a high-stress voltage-endurance life-test. The objective was to compare the life distributions of these insulation designs. During the test, the design engineer suspected that the modified design was an improvement (longer lasting) over the standard design. There were still unfailed electrodes in the modified sample when all the electrodes in the standard sample had failed. However, a statistically significant difference between the α (scale) parameters of the assumed Weibull distributions for insulation life could not be shown at the accumulated testing time. The design engineer wished to know how much longer the unfailed electrodes in the modified sample needed to survive in order to provide convincing evidence for the difference between the two designs. This paper presents a procedure to address this question and discusses further aspects of determining test duration in similar practical situations.

  • Modeling and maximizing burn-in effectiveness

    Publication Year: 1995 , Page(s): 19 - 25
    Cited by:  Papers (3)  |  Patents (2)

    System burn-in can remove many residual defects left after component and subsystem burn-in, since incompatibility exists not only among components but also among different subsystems and at the system level. Even when system, subsystem, and component burn-in are performed, the system reliability often does not achieve the requirement. In this case, redundancy is a good way to increase system reliability when improving component reliability is expensive. This paper proposes a nonlinear model to estimate the optimal burn-in times for all levels and to determine the optimal amount of redundancy for each subsystem. For illustration, a bridge system configuration is considered; however, the model can easily be applied to other system configurations. Since there are few studies on system, subsystem, and component incompatibility, reasonable values are assigned for the compatibility factors at each level.

  • Demonstrated reliability of plastic-encapsulated microcircuits for missile applications

    Publication Year: 1995 , Page(s): 8 - 13
    Cited by:  Papers (5)

    For the past decade, overall reliability improvement and product availability have enabled plastic-encapsulated microcircuits (PEM) to move from consumer electronics, beyond the relatively large and reliability-conscious automotive market, into the military market. Based on analysis of the worst-case PEM scenario for military applications, demonstrating moisture reliability under long-term (20 years) dormant storage environments has become the last hurdle for PEM. Studies have demonstrated that PEM can meet typical missile environments in long-term storage. To further validate PEM reliability in missile applications, Texas Instruments (TI) conducted three separate studies involving 6 years of PEM moisture-life monitoring and assessment, testing of standard PEM electrical characteristics over the military temperature range (-55°C to +125°C), and assessment of their robustness in moisture environments after the assembly processes. These TI studies support the use of PEM in missile (or similar) applications. Effective focus on part and supplier selection, supplier teaming, and process monitoring is necessary to maintain PEM reliability over the required environments at the lowest cost. This paper assesses PEM reliability for a selected missile storage environment using industry-standard moisture testing, such as biased HAST or 85°C/85%RH (relative humidity), to demonstrate PEM moisture survivability. The moisture reliability (MTTF, or average moisture lifetime) of PEM is assessed to correlate PEM capability to anticipated field-performance environments.

  • Reliability comparisons for plastic-encapsulated microcircuits

    Publication Year: 1995 , Page(s): 6 - 7
    Cited by:  Papers (2)

    This paper briefly compares reliability test data obtained from plastic-encapsulated microcircuits (PEM) purchased from various manufacturers. Tests include biased humidity, temperature cycling, autoclave, and life tests. The results indicate differences in reliability associated with PEM from the various manufacturers. These data highlight the need for a thorough understanding of supplier quality and reliability.

  • A reliability model for real-time rule-based expert systems

    Publication Year: 1995 , Page(s): 54 - 62
    Cited by:  Papers (1)  |  Patents (1)

    This paper uses two modeling tools to analyze the reliability of real-time expert systems: (1) a stochastic Petri net (SPN) for computing the conditional response-time distribution given that a fixed number of expert-system match-select-act cycles are executed, and (2) a simulation search tree for computing the distribution of expert-system match-select-act cycles needed to formulate a control strategy in response to external events. By modeling the intrinsic match-select-act cycle of expert systems and associating reward rates with markings of the SPN, the response-time distribution for the expert system to reach a decision can be computed as a function of design parameters, thereby facilitating the assessment of the reliability of expert systems in the presence of real-time constraints. The utility of the reliability model is illustrated with an expert system characterized by a set of design conditions under a real-time constraint. This reliability model allows system designers to: (1) experiment with a range of selected parameter values; and (2) observe their effects on system reliability.

  • Electrical overstress and electrostatic discharge

    Publication Year: 1995 , Page(s): 2 - 5
    Cited by:  Papers (7)

    Semiconductor devices have a limited ability to sustain electrical overstress (EOS). Device susceptibility to EOS increases as devices are scaled down to submicron feature sizes. At present, EOS is a major cause of IC failures; published reports indicate that nearly 40% of IC failures can be attributed to EOS events. Hence, EOS threats must be considered early in the design process. For semiconductor devices, EOS embodies a broad range of electrical threats due to electromagnetic pulses, electrostatic discharge (ESD), system transients, and lightning. EOS-related failures in semiconductor devices can be classified according to their primary failure mechanisms into thermally-induced failures, electromigration failures, and electric-field-related failures. In general, thermally-induced failures are related to the doping level, junction depth, and device characteristic dimensions, whereas electric-field-induced failures are primarily related to the breakdown of thin oxides in MOS devices.


Aims & Scope

IEEE Transactions on Reliability is concerned with the problems involved in attaining reliability, maintaining it through the life of the system or device, and measuring it.


Meet Our Editors

Editor-in-Chief
Way Kuo
City University of Hong Kong