
Proceedings of the 19th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems (DFT 2004)

Date: 10-13 Oct. 2004


Displaying Results 1 - 25 of 65
  • An efficient perfect algorithm for memory repair problems

    Page(s): 306 - 313

    Memory repair by using spare rows/columns to replace faulty rows/columns has been proved to be NP-complete. Traditional perfect algorithms are comparison-based exhaustive search algorithms and are not efficient enough for complex problems. To overcome this performance deficiency, a new algorithm is devised and presented in this paper. The algorithm transforms a memory repair problem into Boolean function operations. By using a BDD (binary decision diagram) to manipulate Boolean functions, a repair function that encodes all repair solutions of a memory repair problem can be constructed. The optimal solution, if it exists, can be found efficiently by traversing the BDD of the repair function only once. The algorithm is very efficient because BDDs remove redundant nodes, merge isomorphic subgraphs, and represent Boolean functions very compactly when a good variable ordering is chosen. The algorithm's remarkable performance is demonstrated by experimental results. Because a memory repair problem can be modeled as a bipartite graph, the algorithm may also be useful to researchers in other fields such as graph theory.
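    As a rough illustration (not the paper's BDD technique), the underlying repair problem amounts to covering all faulty cells with a limited budget of spare rows and columns. The sketch below is a naive exhaustive baseline of the kind the authors improve upon; the function and argument names are illustrative assumptions:

```python
from itertools import combinations

def repairable(faults, spare_rows, spare_cols):
    """faults: set of (row, col) coordinates of faulty memory cells.
    Try every choice of faulty rows to fix with the available spare rows;
    the remaining faults must then be covered by spare columns."""
    rows = sorted({r for r, _ in faults})
    for k in range(min(spare_rows, len(rows)) + 1):
        for repaired in combinations(rows, k):
            rest = [(r, c) for r, c in faults if r not in repaired]
            # Faults not covered by a repaired row need one spare column each
            # distinct column.
            if len({c for _, c in rest}) <= spare_cols:
                return True
    return False
```

    The BDD approach instead encodes all such row/column choices symbolically in one repair function, so an optimal repair can be read off in a single traversal rather than by enumeration.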

  • Concurrent error detection in sequential circuits implemented using embedded memory of LUT-based FPGAs

    Page(s): 487 - 495

    We propose a concurrent error detection (CED) scheme for a sequential circuit implemented using both the embedded memory blocks and the LUT-based programmable logic blocks available in FPGAs. The proposed scheme is proven to detect each permanent or transient fault associated with a single input or output of any component of the circuit that results in an incorrect state or output of the circuit. Such faults are detected with no latency. The experimental results show that despite the heterogeneous structure of the proposed CED scheme, the overhead is very reasonable. For the examined benchmark circuits, the combined overhead, which accounts for both extra EMBs and extra logic cells, is in the range of 25.6% to 61.0%, with an average value of 38.6%.

  • Data integrity evaluations of Reed Solomon codes for storage systems [solid state mass memories]

    Page(s): 158 - 164

    This paper introduces a very flexible approach for evaluating the bit error rates (BER) attainable on storage systems that use Reed-Solomon codes. These evaluations are based on a Markov model of the probability of having an uncorrectable codeword. Unlike previous literature, the reported approach can take into account the impact of both erasures and random errors, allowing a smaller degree of approximation and a better evaluation of the BER improvement brought by the introduction of scrubbing techniques. The flexibility of the proposed method is finally shown by applying it to different cases of interest.
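    The core quantity such an analysis produces can be sketched under a strong simplification (i.i.d. symbol errors, no erasures, no Markov channel dynamics; names are assumptions) as a binomial tail on the symbol error count:

```python
from math import comb

def p_uncorrectable(n, t, p):
    """Probability that a codeword of n symbols, correctable up to t symbol
    errors, suffers more than t errors, with i.i.d. symbol error probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))
```

    The paper's Markov model refines this kind of estimate by tracking erasures, error accumulation between scrubbing passes, and their interaction.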

  • Learning based on fault injection and weight restriction for fault-tolerant Hopfield neural networks

    Page(s): 339 - 346

    Hopfield neural networks that tolerate weight faults are presented. Weight restriction and fault injection are adopted as fault-tolerant approaches. For weight restriction, a range to which weight values should belong is determined during learning, and any weight falling outside this range is forced to its upper or lower limit. Fault injection then emulates the occurrence of a fault, and the weights are computed under this fault condition. Learning based on both of these approaches surpasses learning based on either one alone in fault tolerance and/or learning time.

  • Mixed loopback BiST for RF digital transceivers

    Page(s): 220 - 228

    In this paper we analyze the performance of a mixed built-in self-test (BiST) for RF IC digital transceivers, where a baseband processor can be used both as a test pattern generator and as a response analyzer. The test targets spot defects in a transceiver front-end. Estimates of noise, signal power and nonlinear distortions (EVM or SER, gain and IP3, respectively) are taken as the test responses. Limitations of these tests are investigated with respect to the test path properties, the strength of defects and circuit tolerances. The IP3 test complements the EVM (SER) and gain tests for some spot defects. The analysis is verified by simulation of a functional-level RF transceiver model implemented in Matlab™.

  • At-speed functional verification of programmable devices

    Page(s): 386 - 394

    In this paper we present a novel approach to the functional verification of programmable devices. The proposed methodology is suited to refining the results obtained by a functional automatic test pattern generator (ATPG). Hard-to-detect faults are examined by exploiting the controllability of a high-level ATPG in conjunction with the observability of software instructions targeted to the programmable device. The generated test programs can be used for both functional verification and at-speed testing.

  • Online testable reversible logic circuit design using NAND blocks

    Page(s): 324 - 331

    A technique for on-line testable reversible logic circuits is presented. Three new reversible logic gates are introduced in this paper. These gates can be used to implement reversible digital circuits of various levels of complexity. The major feature of these gates is that they provide on-line testability for the circuits implemented with them. The application of these gates to the implementation of a subset of the MCNC benchmark circuits is provided.

  • Error-resilient test data compression using Tunstall codes

    Page(s): 316 - 323

    This paper presents a novel technique for achieving error-resilience to bit-flips in compressed test data streams. Error-resilience refers to the capability of a test data stream (or sequence) to tolerate bit-flips that may occur in automatic test equipment (ATE), either in the electronic components of the loadboard or in the high-speed serial communication links between the user interface workstation and the test head. It is first shown that errors caused by bit-flips can seriously degrade test quality (as measured by the coverage); such degradation is very significant for variable-codeword techniques such as Huffman coding. To address this issue, a variable-to-constant compression technique (namely Tunstall coding) is proposed. Using Tunstall coding and bit-padding to preserve vector boundaries, an error-resilient compression technique is proposed. This technique requires only a simple compression algorithm, and its decompression hardware is very small, while achieving much higher error-resilience against bit-flips than previous techniques (albeit at a small reduction in compression). Simulation results on benchmark circuits are provided to substantiate the validity of this approach in an ATE environment.
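    Tunstall coding maps variable-length source strings to fixed-length codewords, which is what confines a bit-flip to a single decoded block. A minimal dictionary construction for a binary memoryless source (an illustrative sketch; the names and the binary-source restriction are assumptions):

```python
import heapq

def tunstall_dictionary(p1, codeword_bits):
    """Binary memoryless source with P('1') = p1. Repeatedly split the most
    probable leaf; each split adds one leaf, until 2**codeword_bits leaves
    exist, each of which is then assigned one fixed-length codeword."""
    p = {'0': 1.0 - p1, '1': p1}
    heap = [(-p['0'], '0'), (-p['1'], '1')]  # max-heap via negated probabilities
    heapq.heapify(heap)
    while len(heap) < 2 ** codeword_bits:
        neg_prob, s = heapq.heappop(heap)
        for bit in '01':
            heapq.heappush(heap, (neg_prob * p[bit], s + bit))
    return sorted(s for _, s in heap)
```

    For example, tunstall_dictionary(0.2, 2) yields ['000', '001', '01', '1']: four variable-length source strings, each encoded as one 2-bit codeword.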

  • An application-independent delay testing methodology for island-style FPGA

    Page(s): 478 - 486

    A novel fault model for detecting delay defects in FPGAs is proposed in this paper. Our fault model assumes that a target segment can be covered by a shortest path realizable in the FPGA, and sensitizing this path guarantees detection of delay defects that affect the performance of the segment. Given the proposed fault model, we also developed a framework to search for the target paths and find appropriate tests, which is independent of the size of the FPGA. Several methods are also proposed to minimize the number of test configurations (i.e., the test time). The tests achieve a high coverage of delay defects with reasonable test time.

  • Non-intrusive test compression for SOC using embedded FPGA core

    Page(s): 413 - 421

    In this paper, a complete non-intrusive test compression solution is proposed for systems-on-a-chip (SOCs) using an embedded FPGA core. The solution achieves low-cost testing by employing not only selective Huffman vertical coding (SHVC) for test stimulus compression, but also a MISR-based time compactor for test response compaction. Moreover, the solution is non-intrusive: it can tolerate any number of unknown states in the output responses, so it does not require modifying the logic of the core to eliminate or block the sources of unknown states. Furthermore, the solution obtains improved diagnostic capability over a conventional MISR by combining masking logic with a modified MISR. Experimental results for the ISCAS 89 benchmarks as well as a platform FPGA chip prove the efficiency of the proposed test solution.

  • Modeling yield of carbon-nanotube/silicon-nanowire FET-based nanoarray architecture with h-hot addressing scheme

    Page(s): 356 - 364

    With molecular-scale materials, devices and fabrication techniques recently being developed, high-density computing systems in the nanometer domain are emerging. An array-based nanoarchitecture has recently been proposed based on nanowires such as carbon nanotubes (CNTs) and silicon nanowires (SiNWs). High-density nanoarray-based systems consisting of nanometer-scale elements are likely to have many imperfections; thus, defect-tolerance is considered one of the most significant challenges. In this paper we propose a probabilistic yield model for the array-based nanoarchitecture. The proposed yield model can be used (1) to accurately estimate the raw and net array densities, and (2) to design and optimize more defect- and fault-tolerant systems based on the array-based nanoarchitecture. As a case study, the proposed yield model is applied to the defect-tolerant addressing scheme called h-hot addressing, and simulation results are discussed.

  • Modeling and analysis of crosstalk coupling effect on the victim interconnect using the ABCD network model

    Page(s): 174 - 182

    After order reduction, the crosstalk model is utilized for the analysis of crosstalk coupling effects on the victim's output signal. Various signal-waveform timing issues, such as delay time and overshoot and undershoot occurrence times, which in effect help to ensure the desired signal integrity (SI) and performance reliability of SoCs, can be estimated analytically using the reduced-order crosstalk model. It has been observed that the crosstalk coupling effect introduces a delay in the victim's output signal which can be significant, or even unacceptable, if many aggressors simultaneously couple energy to the victim line, if the line spacing between the aggressor and victim is reduced due to under-etching, or if the length of the victim interconnect is increased because of improper layout/routing. Influences of other interconnect parasitics on the victim's output signal can also be tested using the same model. Simulation results obtained with our reduced-order model are quite good and comparable in accuracy to PSPICE simulation.

  • First level hold: a novel low-overhead delay fault testing technique

    Page(s): 314 - 315

    This paper presents a novel delay fault testing technique that can be used as an alternative to enhanced-scan-based delay fault testing, with significantly less design overhead. Instead of using an extra latch as in the enhanced scan method, we propose using supply gating at the first level of logic gates to hold the state of the combinational circuit. Experimental results on a set of ISCAS89 benchmarks show an average reduction of 27% in area overhead, with an average improvement of 62% in delay overhead and 87% in power overhead during normal operation, compared to the enhanced scan implementation.

  • Reducing fault latency in concurrent on-line testing by using checking functions over internal lines

    Page(s): 183 - 190

    We describe a method to reduce the fault latency, i.e., the time it takes to detect a fault after it occurs, during concurrent on-line testing. A high fault latency can negatively affect the fault coverage in various ways. The fault latency is reduced by using what we call checking functions. A checking function cf_i expresses the function of a line g_i in the circuit as a function of one or more other lines. During concurrent on-line testing, the value of g_i is compared to the value of cf_i. A mismatch indicates the presence of a fault. The advantage of checking functions is that they use only lines that already exist in the circuit. We demonstrate that benchmark circuits have large numbers of checking functions to choose from. We also demonstrate the increase in fault coverage and the reductions in fault detection time that are possible by using checking functions.
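    The idea can be illustrated on a toy circuit: the Boolean identity a AND b == (a OR b) AND NOT (a XOR b) lets the line computing a AND b be checked using only two other lines that already exist. The circuit and names below are illustrative assumptions, not taken from the paper:

```python
def lines(a, b, fault_on_x=None):
    """Toy combinational circuit with three internal lines.
    fault_on_x forces line x to a constant, modeling a stuck-at fault."""
    x = (a & b) if fault_on_x is None else fault_on_x  # line x = a AND b
    y = a | b                                          # line y = a OR b
    z = a ^ b                                          # line z = a XOR b
    return x, y, z

def cf_x(y, z):
    # Checking function for line x built only from existing lines y and z:
    # a AND b == (a OR b) AND NOT (a XOR b).
    return y & (1 - z)
```

    In the fault-free circuit the check always holds; with a stuck-at-1 fault on x and inputs a = b = 0, the comparison mismatches and the fault is detected.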

  • Compression of VLSI test data by arithmetic coding

    Page(s): 150 - 157

    This work presents arithmetic coding and its application to data compression for VLSI testing. The use of arithmetic codes for compression results in a codeword whose length is close to the optimal value predicted by entropy in information theory. Previous techniques (such as those based on Huffman or Golomb coding) yield optimal codes only for test data sets in which the probability model of the symbols satisfies specific requirements. We show that Huffman and Golomb codes can result in large differences between the entropy bound and the sustained compression. We present compression results of arithmetic coding for circuits through a practical integer implementation of arithmetic coding/decoding and analyze its deviation from the entropy bound as well. A software implementation approach is proposed and studied in detail using industrial embedded DSP cores.
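    The principle of arithmetic coding (as distinct from the paper's practical integer implementation) can be sketched exactly with rational arithmetic: each symbol narrows an interval in proportion to its probability, and any number inside the final interval identifies the message. Names here are illustrative:

```python
from fractions import Fraction

def _cum(probs):
    # Cumulative probability at the start of each symbol's sub-interval.
    cum, c = {}, Fraction(0)
    for s, pr in probs.items():
        cum[s] = c
        c += pr
    return cum

def arith_encode(probs, msg):
    low, width = Fraction(0), Fraction(1)
    for s in msg:
        low += width * _cum(probs)[s]
        width *= probs[s]
    return low + width / 2  # any rational inside [low, low + width) works

def arith_decode(probs, code, length):
    cum = _cum(probs)
    out, low, width = [], Fraction(0), Fraction(1)
    for _ in range(length):
        target = (code - low) / width  # position within current interval
        for s, pr in probs.items():
            if cum[s] <= target < cum[s] + pr:
                out.append(s)
                low += width * cum[s]
                width *= pr
                break
    return ''.join(out)
```

    A real coder replaces the exact rationals with fixed-precision integer arithmetic and renormalization, which is the engineering problem the paper's integer implementation addresses.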

  • Accurate estimation of soft error rate (SER) in VLSI circuits

    Page(s): 377 - 385

    Trends in CMOS technology have resulted in circuits with higher soft error rates (SER), making it imperative to accurately estimate the SER of VLSI circuits. In this paper a comparative study is presented between the Qcrit method and the simulation method for estimating circuit-level SER. It is shown that for small circuits with uniformly distributed output values (e.g. a flip-flop or binary counter), both methods provide similar SER estimates. However, for other circuits the Qcrit-based method provides SER estimates significantly different from the results of the simulation method. Errors of up to 34% have been observed for a microprocessor scoreboard circuit. This is because the Qcrit method assumes that each node in the circuit is equally likely to be 0 or 1. The Qcrit method can also miss logical masking within the circuit. Finally, a feasible method based on Monte-Carlo simulation is presented to estimate chip-level SER in terms of the failure-in-time (FIT) rate.
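    The flavor of the Monte-Carlo simulation approach can be sketched on a toy circuit (the circuit, names and fault model below are assumptions, not the paper's): inject a flip at an internal node under random inputs and count how often it propagates to the output rather than being logically masked.

```python
import random

def propagation_probability(trials=20000, seed=1):
    """Toy circuit out = (a AND b) OR c. Flip the internal AND node and
    count how often the flip reaches the output. Here the flip is logically
    masked exactly when c == 1, so the true answer is 0.5."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b, c = rng.randint(0, 1), rng.randint(0, 1), rng.randint(0, 1)
        x = a & b
        hits += ((x | c) != ((1 - x) | c))  # did the flipped node change out?
    return hits / trials
```

    A chip-level estimator combines this propagation probability with node strike rates and state probabilities, which is precisely where the uniform 0/1 assumption of the Qcrit method can go wrong.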

  • System-level dependability analysis with RT-level fault injection accuracy

    Page(s): 451 - 458

    Fault injection techniques are increasingly used when designing a circuit, in order to analyze the potential cases in which a fault could lead to an application failure. In most experiments, such failures are simply defined as erroneous responses of the circuit. However, in many cases an erroneous response does not necessarily lead to a failure at the application level, even when the discrepancy with the nominal behavior has a long duration. An accurate but high-level model of the complete system is therefore required to discriminate real failure conditions from non-critical errors. Conversely, performing fault injection on a very high-level model of the circuit functions does not allow a designer to analyze the effect of real faults potentially occurring in the field, such as bit-flips in internal registers. Injections must therefore be performed on an RT-level (or lower-level) model of the circuit, connected to a system-level model of the environment. This paper presents an approach for such mixed-level dependability analyses and reports on a case study.

  • Fault diagnosis of analog circuits by operation-region model and X-Y zoning method

    Page(s): 230 - 238

    The X-Y zoning method can detect faults in analog circuits by using the relationship between circuit inputs and outputs. The operation-region (OR) model can analyze and model circuit behaviors by observing changes in the operation regions of the MOS transistors in a circuit. In this paper, we propose a method for diagnosing analog circuits by combining the OR model and the X-Y zoning method. The diagnosis procedure is realized in a way similar to the method used for digital circuits. To demonstrate the effectiveness of the proposed method, we apply it to the ITC'97 benchmark circuits with hard faults and soft faults. The results show that the diagnostic resolution is one for every fault.

  • Toggle-masking for test-per-scan VLSI circuits

    Page(s): 332 - 338

    This paper presents a novel toggle-masking technique that eliminates the switching activity in a circuit under test (CUT) during scan shifting in a test-per-scan environment. Conventional scannable D flip-flops (DFFs) are modified to ensure that the CUT inputs remain unchanged until an entire test vector is loaded, significantly reducing the power dissipation in the CUT. Our experiments on ISCAS85/89 benchmark circuits show that the proposed technique offers an average of 47% savings in average power compared to previous work (S. Gerstendorfer and H. Wunderlich, Proc. Int. Test Conf., pp. 77-84, 1999), and an average of 99% savings in average power and 8% savings in peak power with respect to test-per-scan circuits with conventional DFFs.

  • IC HTOL test stress condition optimization

    Page(s): 272 - 279

    The HTOL (high temperature operating life) test is used to determine the effects of bias and temperature stress conditions on solid-state devices over time. It simulates the devices' operating conditions in an accelerated manner, and is primarily used for device reliability evaluation. This paper addresses a simulated annealing (SA) method for deciding the HTOL test stress conditions, which is an optimization problem. The goal is to reduce the resources (hardware or time) required for the HTOL test under reliability constraints. The theory of the reliability statistical model and the SA algorithm are presented. In our optimization algorithm, the accurate HTOL stress power must be calculated for each optimization loop, since the optimized Vs (stress voltage) affects not only Afv (the voltage acceleration factor) but also Aft (the thermal acceleration factor). A curve-fitting algorithm is applied to obtain reasonable acceleration factors and reliability calculations. The model selection process and a statistical analysis of the data fitted by different models are also presented. Experimental results with different stress condition priorities and different user settings demonstrate the effectiveness of our approach.
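    For reference, commonly used JEDEC-style acceleration models (an assumption here; the paper's exact models may differ) express Aft via an Arrhenius law and Afv via an exponential voltage law:

```python
from math import exp

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def thermal_af(ea_ev, t_use_k, t_stress_k):
    """Arrhenius model: Aft = exp[(Ea/k) * (1/T_use - 1/T_stress)],
    with activation energy Ea in eV and temperatures in kelvin."""
    return exp(ea_ev / BOLTZMANN_EV * (1.0 / t_use_k - 1.0 / t_stress_k))

def voltage_af(gamma, v_use, v_stress):
    """Exponential-law model: Afv = exp[gamma * (V_stress - V_use)]."""
    return exp(gamma * (v_stress - v_use))
```

    Because raising Vs also raises stress power and hence junction temperature, the two factors are coupled, which is why the paper recomputes the stress power on each optimization loop.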

  • A fading algorithm for sequential fault diagnosis [logic IC testing]

    Page(s): 139 - 147

    Fault diagnosis algorithms for logic designs with only partial scan support remain inadequate so far because of the difficulties in dealing with sequential fault effects. In this paper, we enhance our previous symbolic techniques to address this challenge. Along with the baseline enhancement, we also propose a fading scheme that can effectively reduce the potentially huge memory requirement and long running time without sacrificing much accuracy. This fading algorithm incorporates a commonly used concept called the 'local fault effect' using symbolic techniques. Experimental results show that sequential fault diagnosis can be done effectively and accurately with reasonable CPU time.

  • Probabilistic balancing of fault coverage and test cost in combined built-in self-test/automated test equipment testing environment

    Page(s): 48 - 56

    As the design and test complexities of SoCs intensify, the balanced utilization of combined built-in self-test (BIST) and automated test equipment (ATE) testing becomes desirable to meet the required minimum fault coverage while maintaining an acceptable cost overhead. The cost associated with combined BIST/ATE testing of such systems mainly consists of (1) the cost induced by the BIST area overhead and (2) the cost induced by the overall testing time. In general, BIST is significantly faster than ATE but provides only limited fault coverage, and extracting higher fault coverage from BIST means additional area cost. On the other hand, higher fault coverage can easily be achieved with ATE, but excessive use of ATE results in additional test time. This paper proposes a novel probabilistic method to balance fault coverage and test overhead costs in a combined BIST/ATE test environment. The proposed technique is then applied to two BIST/ATE test scenarios to find the optimal fault-coverage/cost combinations.
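    The trade-off can be sketched with a hypothetical cost model (the coefficients, the quadratic area law and all names are assumptions, not the paper's probabilistic method): BIST area cost grows quickly with the coverage assigned to BIST, while ATE time cost is linear in the remaining coverage, so an interior optimum exists.

```python
def total_cost(f_bist, f_target, area_coeff=2.0, ate_time_coeff=1.0):
    """Hypothetical cost model: BIST area grows quadratically with the
    coverage assigned to BIST; ATE time is linear in what remains."""
    return area_coeff * f_bist ** 2 + ate_time_coeff * (f_target - f_bist)

def best_split(f_target, steps=100):
    # Grid search over the coverage fraction assigned to BIST.
    return min((total_cost(i / steps, f_target), i / steps)
               for i in range(int(f_target * steps) + 1))[1]
```

    With these toy coefficients the optimum assigns 25% coverage to BIST and leaves the rest to ATE; the paper replaces this deterministic model with a probabilistic one.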

  • Exploiting an I-IP for in-field SoC test

    Page(s): 404 - 412

    Today's complex system-on-chip integrated circuits include a wide variety of functional IPs whose correct manufacturing must be guaranteed by IC producers. Infrastructure IPs are increasingly often inserted for this purpose; such blocks, explicitly designed for test, are coupled with functional IPs both to obtain yield improvement during the manufacturing process and to perform volume production test. In this paper, a new test control scheme based on an infrastructure IP (I-IP) is proposed for the in-field test of SoCs. The proposed in-field test strategy is based on the ability of a single I-IP to periodically monitor the behavior of the system by reusing the test structures introduced for manufacturing test. The feasibility of this approach has been proved for SoCs including microprocessors and memories equipped with P1500-compliant solutions. Experimental results highlight the advantages in terms of reusability and scalability, low impact on system availability, and reduced area overhead.

  • Arithmetic operators robust to multiple simultaneous upsets

    Page(s): 289 - 297

    Future technologies, below 90 nm, will feature transistors so small that they will be heavily influenced by electromagnetic noise and SEU-induced errors. Because of this, together with process variability, design as we know it today is likely to change. Since many soft errors may appear at the same time, a different design approach must be taken. This study proposes the use of inherently robust operators as an alternative to conventional digital arithmetic operators. The behavior of the proposed operators is analyzed through simulated injection of single and multiple random faults, and it is shown to be adequate for several classes of applications, withstanding multiple simultaneous upsets. The number of tolerated upsets varies according to the number of extra bits appended to the operands, and is limited only by the area restriction. For example, a multiplier with 2 extra bits per operand is robust against up to 15 simultaneous faults.
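    One classic family of robust arithmetic operators is the AN code, in which operands are multiplied by a check constant A so that any result that is not a multiple of A reveals a fault. This is a generic sketch of arithmetic redundancy, not the operators proposed in the paper:

```python
A = 3  # check constant (hypothetical choice); valid code words are multiples of A

def an_encode(x):
    return A * x

def an_add(xc, yc):
    return xc + yc          # addition preserves the multiple-of-A invariant

def an_valid(zc):
    return zc % A == 0      # a non-multiple of A signals a fault

def an_decode(zc):
    return zc // A
```

    For instance, adding the encodings of 5 and 7 gives 36, which checks and decodes to 12, while any corruption that breaks divisibility by A is flagged.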

  • Transient current testing of dynamic CMOS circuits

    Page(s): 264 - 271

    We propose methods for testing dynamic CMOS circuits using the transient power supply current, iDDT. The methods are based on setting the primary inputs of the circuit under test, switching the clock signal and monitoring iDDT. We target resistive open defects that can either cause the circuit to fail or introduce unacceptable delay and hence degrade circuit performance. Results of fault simulation of domino CMOS circuits show a high rate of detection of resistive open faults that cannot otherwise be detected using traditional voltage or IDDQ testing. We also show that, by using a normalization procedure, the defects can be detected with a single threshold setup in the presence of leakage and process variations.
