
Proceedings of the 19th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems (DFT 2004)

Date: 10-13 Oct. 2004


Displaying Results 1 - 25 of 65
  • Learning based on fault injection and weight restriction for fault-tolerant Hopfield neural networks

    Publication Year: 2004 , Page(s): 339 - 346
    Cited by:  Papers (2)

    Hopfield neural networks that tolerate weight faults are presented. Weight restriction and fault injection are adopted as the fault-tolerant approaches. For weight restriction, a range within which weight values must lie is determined during learning, and any weight falling outside this range is forced to its upper or lower limit. For fault injection, a fault condition is induced during learning, and the weights are computed under that condition. Learning based on both approaches surpasses learning based on either one alone in fault tolerance and/or learning time.

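
    The weight-restriction and fault-injection ideas above can be sketched in a toy Hopfield trainer. The clipping range, fault rate, learning rate and epoch count below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def recall(W, p, steps=20):
    """Synchronous Hopfield recall from pattern p."""
    s = p.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

def train(patterns, w_max=0.3, fault_rate=0.05, epochs=100, eta=0.05):
    """Hebbian learning with weight restriction and fault injection.
    All hyperparameters here are illustrative, not from the paper."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        # Fault injection: evaluate recall on a randomly faulted copy of
        # the weights (a fraction of weights stuck at zero).
        W_faulty = W.copy()
        W_faulty[rng.random(W.shape) < fault_rate] = 0.0
        for p in patterns:
            if not np.array_equal(recall(W_faulty, p), p):
                W += eta * np.outer(p, p)   # reinforce unrecalled patterns
        np.fill_diagonal(W, 0.0)
        # Weight restriction: force out-of-range weights to the limits.
        W = np.clip(W, -w_max, w_max)
    return W
```

    Training against a faulted copy of the weights pushes the network toward solutions that still recall correctly when some weights are lost.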
  • Modeling yield of carbon-nanotube/silicon-nanowire FET-based nanoarray architecture with h-hot addressing scheme

    Publication Year: 2004 , Page(s): 356 - 364
    Cited by:  Papers (4)

    With molecular-scale materials, devices and fabrication techniques recently being developed, high-density computing systems in the nanometer domain emerge. An array-based nanoarchitecture has been recently proposed based on nanowires such as carbon nanotubes (CNTs) and silicon nanowires (SiNWs). High-density nanoarray-based systems consisting of nanometer-scale elements are likely to have many imperfections; thus, defect-tolerance is considered one of the most significant challenges. In this paper we propose a probabilistic yield model for the array-based nanoarchitecture. The proposed yield model can be used (1) to accurately estimate the raw and net array densities, and (2) to design and optimize more defect and fault-tolerant systems based on the array-based nanoarchitecture. As a case study, the proposed yield model is applied to the defect-tolerant addressing scheme called h-hot addressing and simulation results are discussed.

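
    As a rough illustration of the kind of yield arithmetic such a model involves, the sketch below computes the probability that enough nanowires in a redundant array survive, assuming independent wire defects. The binomial form and the parameter names are ours, not the paper's h-hot model.

```python
import math

def array_yield(wires_needed, p_wire_good, spares=0):
    """Probability that at least `wires_needed` of the
    `wires_needed + spares` fabricated nanowires are defect-free,
    assuming each wire is independently good with probability
    `p_wire_good` (a generic binomial model, not the paper's)."""
    n = wires_needed + spares
    return sum(math.comb(n, k) * p_wire_good**k * (1 - p_wire_good)**(n - k)
               for k in range(wires_needed, n + 1))
```

    With no spares this reduces to `p_wire_good ** wires_needed`; adding spares raises the yield, which is the basic trade-off such models quantify.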
  • Annotated bit flip fault model

    Publication Year: 2004 , Page(s): 366 - 376

    Simulation-based fault injection is widely used to validate fault-tolerant digital circuits with respect to transient faults (TFs). The size of circuits often requires the use of cycle-based (cycle-accurate) register transfer level (RTL) simulation, which, however, does not account for the timing of the functional units. In this paper, TFs affecting memory elements are annotated using the timing of the driven combinational logic. This annotation can be used to increase the accuracy of cycle-based RTL fault simulation. The analysis is performed without the need for event-driven fault simulation, and its results show that significant errors can arise when the IC's timing is neglected. The accuracy of the proposed technique has been validated by comparing its results with those of event-driven simulation.

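
    The idea of annotating a bit flip with timing information can be caricatured as follows: `latch_prob` stands in for the annotation derived from the driven combinational logic's timing (the chance the flip is actually latched). The whole framework is illustrative, not the paper's simulator.

```python
import random

def simulate_with_tf(step, state, cycles, flip_cycle, flip_bit,
                     latch_prob, rng):
    """Cycle-based simulation of a register modeled as an int, with one
    annotated transient bit flip. `step` advances the RTL model by one
    clock cycle; the flip at `flip_cycle` takes effect only with
    probability `latch_prob` (the timing annotation)."""
    for cycle in range(cycles):
        if cycle == flip_cycle and rng.random() < latch_prob:
            state ^= (1 << flip_bit)   # transient fault on a memory element
        state = step(state)            # one clock cycle of the RTL model
    return state
```

    Sweeping `latch_prob` between 0 and 1 spans the gap between ignoring the fault and assuming it is always latched, which is the error range a pure cycle-based simulation cannot resolve.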
  • Coupling different methodologies to validate obsolete microprocessors

    Publication Year: 2004 , Page(s): 250 - 255

    The actual operating lifetime of many electronic systems has turned out to be much longer than originally foreseen, leading to the use of obsolete components in critical projects. To sidestep microprocessor obsolescence, companies must either buy large stocks of components while they are still available or find parts on secondary markets later. Alternatively, a suitable low-cost solution is to replace the obsolete component with a programmable logic device that emulates its functionality. However, design verification of microprocessors is a notoriously challenging task. This paper proposes a coupled methodology to generate test programs using two complementary techniques: one pseudo-exhaustive and one driven by an evolutionary optimizer. As a case study, the Motorola 6800 was targeted.

  • IC HTOL test stress condition optimization

    Publication Year: 2004 , Page(s): 272 - 279
    Cited by:  Papers (2)

    HTOL (high-temperature operating life) testing is used to determine the effects of bias and temperature stress on solid-state devices over time. It simulates the devices' operating conditions in an accelerated manner and is used primarily for device reliability evaluation. This paper presents a simulated annealing (SA) method for HTOL test stress-condition selection, which is formulated as an optimization problem. The goal is to reduce the resources (hardware or time) required for the HTOL test under reliability constraints. The underlying reliability statistical model and the SA algorithm are presented. The optimization must recompute the accurate HTOL stressed power at each loop, since the optimized stress voltage Vs affects not only the voltage acceleration factor Afv but also the thermal acceleration factor Aft. A curve-fitting algorithm is applied to obtain accurate acceleration factors and reliability calculations. The model selection process and statistical analysis of the data fitted by different models are also presented. Experimental results with different stress-condition priorities and different user settings demonstrate the effectiveness of the approach.

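
    The SA loop itself is the standard one; below is a minimal sketch with a stand-in cost function rather than the paper's reliability model, and with illustrative temperature and cooling parameters.

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=2000, seed=0):
    """Generic simulated-annealing loop of the kind used for stress-
    condition optimization. `cost` and `neighbor` are problem-specific;
    here they are stand-ins, not the paper's reliability model."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        # Accept improvements always; accept worse moves with the
        # Metropolis probability exp(-(cy - c) / t).
        if cy < c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling   # geometric cooling schedule
    return best, best_c
```

    In the paper's setting, `cost` would encode the HTOL resource budget under reliability constraints and `neighbor` would perturb the stress voltage and temperature.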
  • Dynamic input match correction in RF low noise amplifiers

    Publication Year: 2004 , Page(s): 211 - 219
    Cited by:  Papers (2)  |  Patents (1)

    The input match of a low noise amplifier can degrade significantly due to process faults and the parasitic package inductances at its input pad. These inductances have wide tolerances and are difficult to co-design for. This paper presents a self-correction methodology that goes beyond BIST by ascertaining the input match frequency and dynamically re-aligning it, thus rendering the input match fault- and package-tolerant. The proposed two-tone approach depends only on the difference of two signals that pass through the same sensing circuitry; consequently, it is inherently insensitive to process, power supply and temperature variations. Coupled with the fact that the majority of the signal processing occurs in the baseband/DC domain, complexity and precision demands are highly lenient. We present simple, low-precision circuitry designed in an IBM 0.25 μm CMOS RF process with low power and real-estate overheads, no DSP cores or processors, and fast correction times of less than 30 μs. To the authors' knowledge, this paper represents the first attempt at self-correction of integrated RF front-end circuitry.

  • Testing of inter-word coupling faults in word-oriented SRAMs

    Publication Year: 2004 , Page(s): 111 - 119

    A new algorithm to detect inter-word coupling faults in word-organized SRAMs (WOMs) is proposed in this paper. This algorithm (referred to as March-NU) relies on a new fault model which extends fault detection to three additional types of coupling faults: read destructive, deceptive read destructive and incorrect read coupling faults. These faults are related to well-known fault mechanisms, reported in the literature, which occur in the read operation of SRAMs. Previous algorithms cannot guarantee 100% detection of these coupling faults. March-NU sensitizes and detects with 100% coverage all coupling faults as well as traditional faults. A detailed analysis of its fault detection capabilities is presented. March-NU utilizes 8 March elements and its complexity is 30N, where N is the number of words in the WOM under test.

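
    March-NU itself is specified in the paper; for illustration, here is the classical 10N March C- algorithm written in the same style, together with a toy bit memory containing an idempotent coupling fault that the test detects. The class and function names are ours.

```python
def march_c_minus(mem, n):
    """March C-: {up(w0); up(r0,w1); up(r1,w0); down(r0,w1);
    down(r1,w0); up(r0)}. Returns True if a fault is detected."""
    def bad_read(a, exp):
        return mem.read(a) != exp
    for a in range(n):                       # up: w0
        mem.write(a, 0)
    for a in range(n):                       # up: r0, w1
        if bad_read(a, 0): return True
        mem.write(a, 1)
    for a in range(n):                       # up: r1, w0
        if bad_read(a, 1): return True
        mem.write(a, 0)
    for a in reversed(range(n)):             # down: r0, w1
        if bad_read(a, 0): return True
        mem.write(a, 1)
    for a in reversed(range(n)):             # down: r1, w0
        if bad_read(a, 1): return True
        mem.write(a, 0)
    for a in range(n):                       # up: r0
        if bad_read(a, 0): return True
    return False

class CouplingFaultMem:
    """Bit memory with an idempotent coupling fault: a 0->1 write on
    the aggressor cell forces the victim cell to 0."""
    def __init__(self, n, aggressor, victim):
        self.bits = [0] * n
        self.a, self.v = aggressor, victim
    def write(self, addr, val):
        if addr == self.a and self.bits[addr] == 0 and val == 1:
            self.bits[self.v] = 0
        self.bits[addr] = val
    def read(self, addr):
        return self.bits[addr]
```

    March-NU extends this style of test with read-exercising elements so that read destructive and deceptive read destructive faults are also sensitized, which March C- does not guarantee.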
  • At-speed functional verification of programmable devices

    Publication Year: 2004 , Page(s): 386 - 394

    In this paper we present a novel approach to the functional verification of programmable devices. The proposed methodology is suited to refining the results obtained by a functional automatic test pattern generator (ATPG). The hard-to-detect faults are examined by exploiting the controllability of a high-level ATPG in conjunction with the observability afforded by software instructions targeted to the programmable device. The generated test programs can be used for both functional verification and at-speed testing.

  • First level hold: a novel low-overhead delay fault testing technique

    Publication Year: 2004 , Page(s): 314 - 315
    Cited by:  Papers (4)

    This paper presents a novel delay fault testing technique, which can be used as an alternative to the enhanced scan based delay fault testing, with significantly less design overhead. Instead of using an extra latch as in the enhanced scan method, we propose using supply gating at the first level of logic gates to hold the state of the combinational circuit. Experimental results on a set of ISCAS89 benchmarks show an average reduction of 27% in area overhead with an average improvement of 62% in delay overhead and 87% in power overhead during normal mode of operation, compared to the enhanced scan implementation.

  • Transient current testing of dynamic CMOS circuits

    Publication Year: 2004 , Page(s): 264 - 271
    Cited by:  Papers (2)

    We propose methods for testing dynamic CMOS circuits using the transient power supply current, iDDT. The methods are based on setting the primary inputs of the circuit under test, switching the clock signal and monitoring iDDT. We target resistive open defects that can either cause the circuit to fail, or introduce unacceptable delay and hence result in degraded circuit performance. Results of fault simulation of domino CMOS circuits show a high rate of detection for resistive open faults that cannot be otherwise detected using traditional voltage or IDDQ testing. We also show that by using a normalization procedure, the defects can be detected with a single threshold setup in the presence of leakage and process variations.

  • Characteristics of fault-tolerant photodiode and photogate active pixel sensor (APS)

    Publication Year: 2004 , Page(s): 58 - 66
    Cited by:  Papers (2)

    A fault-tolerant APS has been designed by splitting the APS pixel into two halves operating in parallel, where the photo sensing element has been divided in two and the readout transistors have been duplicated while maintaining a common row select transistor. This split design allows for a self correcting pixel scheme such that if one half of the pixel is faulty, the other half can be used to recover the entire output signal. The fault tolerant APS design has been implemented in a 0.18 μm CMOS process for both a photodiode based and photogate based APS. Test results show that the fault tolerant pixels behave as expected where a non-faulty pixel behaves normally, and a half faulty pixel, where one half is either stuck low or high, produces roughly half the sensitivity. Preliminary results indicate that the sensitivity of a redundant pixel is approximately three times that of a traditional pixel for the photodiode APS and approximately twice that for the photogate APS.

  • Testing and defect tolerance: a Rent's rule based analysis and implications on nanoelectronics

    Publication Year: 2004 , Page(s): 280 - 288
    Cited by:  Papers (2)

    Defect tolerant architectures will be essential for building economical gigascale nanoelectronic computing systems to permit functionality in the presence of a significant number of defects. The central idea underlying a defect tolerant configurable system is to build the system out of partially perfect components, detect the defects and configure the available good resources using software. In this paper we discuss the implications of defect tolerance for power, area, delay and other relevant parameters of computing architectures. We present a Rent's rule based abstraction of testing for VLSI systems and evaluate the redundancy requirements for observability. It is shown that for a very high interconnect defect density, a prohibitively large number of redundant components is necessary for observability, and this has an adverse effect on system performance. Through a unified framework based on a priori wire length estimation and Rent's rule we illustrate the hidden cost of supporting such an architecture.

  • Robust low-cost analog signal acquisition with self-test capabilities

    Publication Year: 2004 , Page(s): 239 - 247
    Cited by:  Papers (1)

    An architecture based on a parallel statistical sampler with digital post-processing is proposed to provide graceful performance degradation of data acquisition under a multiple failure scenario. After a self-configuration stage, an adaptive procedure keeps track of parametric variations on the signal path during operation and corrects the digital modeling block. Parametric faults are thus detected and their impact on the system behavior is minimized. The low cost of the basic analog building block allows the inherent redundancy and the parallel reconstruction model of the architecture to tolerate faults at the cost of decreased dynamic performance. Analysis of the architecture and the digital modeling employed is presented, as well as measurement data to validate the feasibility of the approach.

  • Modeling and analysis of crosstalk coupling effect on the victim interconnect using the ABCD network model

    Publication Year: 2004 , Page(s): 174 - 182
    Cited by:  Papers (3)

    After order reduction, the crosstalk model is utilized for the analysis of crosstalk coupling effects on the victim's output signal. Various timing characteristics of the signal waveform, such as delay time and overshoot and undershoot occurrence times, which in effect help to ensure the desired signal integrity (SI) and performance reliability of SoCs, can be estimated analytically using the reduced-order crosstalk model. It has been observed that the crosstalk coupling effect introduces a delay in the victim's output signal which can be significant, or even unacceptable, if many aggressors simultaneously couple energy onto the victim line, if the line spacing between aggressor and victim is reduced due to under-etching, or if the length of the victim interconnect is increased because of improper layout/routing. The influence of other interconnect parasitics on the victim's output signal can also be examined using the same model. Simulation results obtained with our reduced-order model are in good agreement with PSPICE simulation and comparable to it in accuracy.

  • Designs for reducing test time of distributed small embedded SRAMs

    Publication Year: 2004 , Page(s): 120 - 128
    Cited by:  Papers (2)

    This paper proposes a test architecture aimed at reducing the test time of distributed small embedded SRAMs (e-SRAMs). This architecture improves the one proposed in (W. B. Jone et al., Proc. 17th IEEE VLSI Test Symp., p.246-251, 1999, and IEEE Trans. VLSI Syst., vol.10, no.4, p.512-515, 2002). The improvements are two-fold. On one hand, the time-consuming testing of data retention faults (DRFs), neglected by the previously proposed test architecture, is now considered and performed via a DFT technique referred to as the "no write recovery test mode (XWRTM)". On the other hand, a parallel local response analyzer (LRA), instead of a serial response analyzer, is used to reduce the test time of these distributed small e-SRAMs. Our evaluations show that the proposed test architecture achieves better defect coverage and test time than obtained previously, with negligible area cost.

  • Nonvolatile repair caches repair embedded SRAM and new nonvolatile memories

    Publication Year: 2004 , Page(s): 347 - 355
    Cited by:  Patents (3)

    Nonvolatile repair caches require less area than traditional row, column, or block redundancy schemes to repair random defective memory cells in deep submicron embedded SRAMs and in new nonvolatile memories such as FeRAM, MRAM, and OUM. Small memories with few defects can be repaired efficiently in real time by the direct-mapped nonvolatile repair cache, whereas large memories with many defects can be repaired more effectively, still in real time, by the N-way set-associative repair cache. An 8-way set-associative repair cache was implemented in the Texas Instruments-Agilent Technologies 64 Mbit FeRAM chip.

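
    A behavioral sketch of the direct-mapped variant: addresses known from test to be defective are redirected to cache entries, leaving the defective main-array cells unused. The class and its interface are invented for illustration, not taken from the TI/Agilent implementation.

```python
class DirectMappedRepairCache:
    """Direct-mapped repair cache: accesses to addresses programmed as
    defective are redirected to spare cache entries; all other accesses
    go to the main array."""
    def __init__(self, main_size, cache_lines, defects):
        self.main = [0] * main_size
        self.lines = cache_lines
        self.tags = [None] * cache_lines   # which address each line repairs
        self.data = [0] * cache_lines
        for addr in defects:               # defect map programmed at test time
            self.tags[addr % cache_lines] = addr

    def _line(self, addr):
        i = addr % self.lines
        return i if self.tags[i] == addr else None

    def write(self, addr, val):
        i = self._line(addr)
        if i is not None:
            self.data[i] = val             # repaired: spare entry absorbs write
        else:
            self.main[addr] = val

    def read(self, addr):
        i = self._line(addr)
        return self.data[i] if i is not None else self.main[addr]
```

    A direct-mapped cache cannot repair two defects that map to the same line, which is why the paper moves to an N-way set-associative organization for memories with many defects.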
  • Co-design and refinement for safety critical systems

    Publication Year: 2004 , Page(s): 78 - 86
    Cited by:  Papers (1)

    In this paper we focus on design entry for complex systems, that is, the highest abstraction tier of the global system, free of implementation choices tied to particular technologies. At this very first level, the use of a formal specification language is increasingly considered the foundation of a real validation process. Our point is that, starting from a formal design entry, project management can be formally controlled through formal refinement. We propose an architecture based upon stepwise refinement of a formal model to achieve controllable implementations. This leads to implementations that are highly effective but remain formally related to the initial formal specification. Partitioning, fault tolerance, and system management are treated as particular cases of refinement, so that systems can be conceived correct by proven construction. We present the basic principles of system methodologies and describe the methodology based on the refinement paradigm. To validate this approach, we have developed the B-HDL Tool, based on VHDL (digital circuits) and the B method (a formal language based on set theory and logic). The benefits of such tools are a significant productivity gain, better reuse through automation, and formal redundancy management.

  • Reliability modeling and assurance of clockless wave pipeline

    Publication Year: 2004 , Page(s): 442 - 450
    Cited by:  Papers (1)

    This paper presents theoretical yet practical methodologies to model, assure and optimize the reliability of the clockless wave pipeline. The clockless wave pipeline is a cutting-edge alternative to the traditional pipeline and a promising computing model for ultra-high throughput and speed. Its basic computational components are data waves in association with request signals and switches. The key to its success is coordinating and ensuring the processing of datawaves throughout the pipeline, in association with the request signals, without relying on any clock-controlled intermediate access points. Due to the complexity of clockless operation, an efficient and effective method to model and analyze its confidence level (referred to as reliability or yield) at an integrated level, i.e. datawaves in association with request signals, is urgently needed but has not yet been adequately addressed. In this regard, loss of orchestration between datawaves and request signals, referred to as datawave fault, is the major concern in assuring and optimizing the reliability of the system. This paper specifically addresses and resolves: extensive and practical clockless-induced datawave-fault modeling; assurance and optimization; and clockless-oriented fault-tolerant design methods. The proposed methods establish a sound theoretical foundation for the development of innovative yet practical test/diagnosis/fault-tolerant design methods in the early design stage of a clockless wave pipeline.

  • Arithmetic operators robust to multiple simultaneous upsets

    Publication Year: 2004 , Page(s): 289 - 297
    Cited by:  Papers (3)

    Future technologies below 90 nm will feature transistors so small that they are heavily influenced by electromagnetic noise and SEU-induced errors. Together with process variability, this means that design as known today is likely to change. Since many soft errors may appear at the same time, a different design approach must be taken. This study proposes the use of inherently robust operators as an alternative to conventional digital arithmetic operators. The behavior of the proposed operators is analyzed through simulated injection of single and multiple random faults, and is shown to be adequate for several classes of applications, withstanding multiple simultaneous upsets. The number of tolerated upsets grows with the number of extra bits appended to the operands and is limited only by area restrictions. For example, a multiplier with 2 extra bits per operand is robust against up to 15 simultaneous faults.

  • Reliability and yield: a joint defect-oriented approach

    Publication Year: 2004 , Page(s): 2 - 10

    We present a model for computing the probability of a parametric failure due to a spot defect. The analysis is based on electromigration in conductors under unidirectional current stress. An analytical solution is given for a simple layout, and simulations are used for a more complicated case. We then show that in some cases electromigration-dependent parametric defects can contribute significantly to the total yield estimate.

  • Concurrent error detection in sequential circuits implemented using embedded memory of LUT-based FPGAs

    Publication Year: 2004 , Page(s): 487 - 495
    Cited by:  Papers (2)

    We propose a concurrent error detection (CED) scheme for a sequential circuit implemented using both embedded memory blocks and LUT-based programmable logic blocks available in FPGAs. The proposed scheme is proven to detect each permanent or transient fault associated with a single input or output of any component of the circuit that results in an incorrect state or output of the circuit. Such faults are detected with no latency. The experimental results show that despite the heterogeneous structure of the proposed CED scheme, the overhead is very reasonable. For the examined benchmark circuits, the combined overhead, that accounts for both extra EMBs and extra logic cells, is in the range of 25.6% to 61.0%, with an average value of 38.6%.

  • Non-intrusive test compression for SOC using embedded FPGA core

    Publication Year: 2004 , Page(s): 413 - 421
    Cited by:  Papers (2)

    In this paper, a complete non-intrusive test compression solution is proposed for system-on-a-chip (SOC) designs using an embedded FPGA core. The solution achieves low-cost testing by employing not only selective Huffman vertical coding (SHVC) for test stimuli compression, but also a MISR-based time compactor for test response compaction. Moreover, the solution is non-intrusive: it can tolerate any number of unknown states in output responses, so the logic of the core need not be modified to eliminate or block the sources of unknown states. Furthermore, the solution obtains improved diagnostic capability over a conventional MISR by combining masking logic with a modified MISR. Experimental results for the ISCAS 89 benchmarks as well as a platform FPGA chip demonstrate the efficiency of the proposed test solution.

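
    The MISR-based time compactor mentioned above can be pictured with a textbook multiple-input signature register: an LFSR that, each clock, shifts with feedback and XORs in one response bit per stage. The tap positions and width below are arbitrary, and the paper's modified MISR with masking logic is more elaborate.

```python
def misr_step(state, inputs, taps, width):
    """One clock of a multiple-input signature register (MISR): shift the
    LFSR state with XOR feedback from `taps`, then XOR in `width` parallel
    response bits packed into the int `inputs`."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    state = ((state << 1) | fb) & ((1 << width) - 1)
    return state ^ inputs

def signature(responses, taps=(0, 2), width=8):
    """Compact a sequence of response words into one signature."""
    s = 0
    for r in responses:
        s = misr_step(s, r, taps, width)
    return s
```

    Because the compaction is linear, two error bits can cancel (aliasing); the paper's convolutional-code-based compactor in this proceedings targets exactly such cancellation guarantees.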
  • On-line analysis and perturbation of CAN networks

    Publication Year: 2004 , Page(s): 424 - 432
    Cited by:  Papers (6)

    The controller area network (CAN) is a well-known standard, and it is widely used in many safety-critical applications, spanning from automotive electronics to aircraft and aerospace electronics. Due to its widespread adoption in critical applications, the capability of accurately evaluating the dependability properties of CAN-based networks is becoming a major concern. In this paper we present a new environment that can be fruitfully exploited to assess the effects of faults in CAN-based networks, which is particularly suited for being exploited when a prototype of the network under analysis is available. The core of our new environment is a special-purpose board that plugs into an existing CAN network, and that is able to monitor and, when asked, to modify the information traveling over the bus. Observation and modification of CAN frames are done concurrently with normal CAN-bus operations without introducing any performance degradation. The obtained environment is thus suitable for being deployed in complex CAN-based networks to validate their dependability.

  • Response compaction for test time and test pins reduction based on advanced convolutional codes

    Publication Year: 2004 , Page(s): 298 - 305
    Cited by:  Papers (4)

    This paper addresses the problem of test response compaction. In order to maximize the compaction ratio, a single-output encoder based on the check matrix of an (n, n-1, m, 3) convolutional code is presented. When the four proposed theorems are satisfied, the encoder can avoid the cancellation of two and of any odd number of erroneous bits, handle one unknown bit (X bit), and diagnose one erroneous bit. Two types of encoders are proposed to implement the check matrix of the convolutional code. A large number of X bits can be tolerated by choosing a proper memory size and check-matrix weight, which can also be obtained by an optimized input assignment algorithm. Experimental results verify the efficiency of the proposed optimized algorithm.

  • Accurate estimation of soft error rate (SER) in VLSI circuits

    Publication Year: 2004 , Page(s): 377 - 385
    Cited by:  Papers (11)  |  Patents (1)

    Trends in CMOS technology have resulted in circuits with higher soft error rates (SER), making it imperative to accurately estimate the SER of VLSI circuits. This paper presents a comparative study of the Qcrit method and the simulation method for estimating circuit-level SER. It is shown that for small circuits with uniformly distributed output values (e.g. flip-flop, binary counter), both methods provide similar SER estimates. For other circuits, however, the Qcrit-based method provides SER estimates significantly different from the results of the simulation method; errors of up to 34% have been observed for a microprocessor scoreboard circuit. This is because the Qcrit method assumes that each node in the circuit is equally likely to be 0 or 1, and it can also fail to account for logical masking within the circuit. Finally, a feasible method based on Monte-Carlo simulation is presented to estimate chip-level SER in terms of failure-in-time (FIT) rate.

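
    The role of logical masking in such Monte-Carlo estimates can be shown on a toy gate-level example. The framework below (function names, the `and_or` circuit, and the struck-node convention) is invented for illustration, not the paper's flow.

```python
import random

def mc_ser(circuit, n_inputs, nodes, trials=20000, seed=0):
    """Monte-Carlo estimate of the fraction of particle strikes that
    propagate to the output, i.e. survive logical masking. `circuit`
    maps (input bits, struck node or None) to an output bit."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n_inputs)]
        node = rng.choice(nodes)
        if circuit(x, node) != circuit(x, None):
            errors += 1                       # strike visible at the output
    return errors / trials

def and_or(x, struck):
    """Toy circuit y = (a AND b) OR c, with one internal node n1 = a AND b."""
    n1 = x[0] & x[1]
    if struck == "n1":
        n1 ^= 1                               # transient flip on struck node
    return n1 | x[2]
```

    A strike on n1 propagates only when c = 0, so with uniform random inputs the estimate approaches 0.5; a Qcrit-style calculation that ignores masking would count every strike.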