Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT), 2012 IEEE International Symposium on

Date: 3-5 Oct. 2012


Displaying Results 1 - 25 of 42
  • [Front matter]

    Publication Year: 2012 , Page(s): i - ix
  • Modeling SRAM start-up behavior for Physical Unclonable Functions

    Publication Year: 2012 , Page(s): 1 - 6
    Cited by:  Papers (4)

    One of the emerging technologies for cryptographic key storage is hardware intrinsic security based on Physical Unclonable Functions (PUFs); a PUF is a physical structure of a device that is hard to clone due to its inherent, device-unique, deep-submicron process variations. The SRAM PUF is one example of such a technology that is becoming popular. So far, little has been published on modeling and analyzing its start-up values (SUVs). Reproducing the same start-up behavior every time the chip is powered on is crucial for reproducing the same cryptographic key. This paper presents an analytical model for the SUVs of an SRAM PUF based on the Static Noise Margin (SNM) and reports industrial measurements to validate the model. The impact of different sensitivity parameters (such as variations in power supply, temperature, and transistor geometry) has been simulated. The results show that, of all sensitivity parameters, variation in threshold voltage has the highest impact. Industrial measurements on real memory devices validate the simulation results.
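
    A minimal Monte Carlo sketch (not the paper's SNM-based analytical model) of the start-up behavior described above: each cell's SUV is driven by the static threshold-voltage mismatch between its cross-coupled inverters plus per-power-up noise, so cells with small mismatch are the ones that fail to reproduce. The cell count, mismatch and noise variances below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        N_CELLS = 4096       # cells in the PUF block (assumed)
        N_POWERUPS = 100     # repeated power-up measurements
        SIGMA_VTH = 30e-3    # std. dev. of static Vth mismatch per cell, in volts (assumed)
        SIGMA_NOISE = 5e-3   # std. dev. of per-power-up noise, in volts (assumed)

        # Device-unique, static mismatch between the two cross-coupled inverters of each cell.
        mismatch = rng.normal(0.0, SIGMA_VTH, N_CELLS)

        # Start-up value = sign of (static mismatch + transient noise) at each power-up.
        noise = rng.normal(0.0, SIGMA_NOISE, (N_POWERUPS, N_CELLS))
        suv = (mismatch[None, :] + noise) > 0

        reference = suv[0]                        # enrollment measurement
        flips = (suv != reference).mean(axis=0)   # per-cell instability across power-ups
        print(f"mean intra-device error rate: {flips.mean():.3%}")
        print(f"cells flipping in >10% of power-ups: {(flips > 0.10).mean():.3%}")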

  • Parametric counterfeit IC detection via Support Vector Machines

    Publication Year: 2012 , Page(s): 7 - 12
    Cited by:  Papers (6)

    We present a method to distinguish a common type of counterfeit Integrated Circuits (ICs), namely used ones, from their brand-new counterparts using Support Vector Machines (SVMs). In particular, we demonstrate that a one-class SVM classifier trained using only a distribution of process-variation-affected brand-new devices, without prior information regarding the impact of transistor aging on IC behavior, can accurately distinguish between the two classes based on simple parametric measurements. We demonstrate the effectiveness of the proposed method using a set of actual fabricated devices subjected to a burn-in test, in order to mimic the impact of aging degradation over time, and we discuss the limitations and potential extensions of this approach.
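
    A hedged sketch of the classification step using scikit-learn's one-class SVM: the classifier is fitted on parametric measurements from brand-new devices only and then flags outliers as suspected used parts. The synthetic feature matrices and the kernel/nu settings are illustrative assumptions, not the authors' measurement set-up.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(1)

        # Illustrative parametric measurements (e.g., quiescent current, Fmax, path delays).
        X_new = rng.normal(0.0, 1.0, (200, 5))    # brand-new devices: training data only
        X_used = rng.normal(0.6, 1.2, (50, 5))    # aged devices with drifted parameters (assumed shift)

        scaler = StandardScaler().fit(X_new)
        clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
        clf.fit(scaler.transform(X_new))          # no prior information about aging is used

        # predict() returns +1 for points consistent with the new-device distribution, -1 for outliers.
        print("false alarms on new parts:", (clf.predict(scaler.transform(X_new)) == -1).mean())
        print("flagged used parts:       ", (clf.predict(scaler.transform(X_used)) == -1).mean())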

  • Path-delay fingerprinting for identification of recovered ICs

    Publication Year: 2012 , Page(s): 13 - 18
    Cited by:  Papers (5)

    The counterfeiting of integrated circuits (ICs) has been on the rise over the past decade, impacting the security and reliability of electronic systems. Reports show that recovered ICs contribute to about 80% of all counterfeit ICs in the market today. Such ICs are recovered from scrapped boards of used devices. Identifying these counterfeit ICs is a great challenge since they have the same appearance, functionality, and package as fresh ICs. In this paper, a novel path-delay fingerprinting technique is proposed to distinguish recovered ICs from fresh ICs. Due to degradation in the field, the path-delay distribution of recovered ICs differs from that of fresh ICs. Statistical data analysis can effectively separate the impact of process variations from aging effects on path delay. Simulation results of benchmark circuits using 45 nm technology demonstrate the efficiency of this technique for recovered-IC identification.
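
    A simplified numeric sketch of the fingerprinting idea: aging shifts most path delays of a recovered part coherently, beyond the spread explained by process variation, so a statistic over per-path z-scores against the fresh-IC fingerprint can flag it. The delay model, shift magnitudes, and threshold below are assumptions for illustration, not the paper's statistical analysis.

        import numpy as np

        rng = np.random.default_rng(2)
        N_PATHS, N_FRESH = 20, 500

        nominal = rng.uniform(0.8, 1.5, N_PATHS)                           # nominal path delays in ns (assumed)
        fresh = nominal * (1 + rng.normal(0, 0.03, (N_FRESH, N_PATHS)))    # process variation only
        mu, sigma = fresh.mean(axis=0), fresh.std(axis=0)                  # fresh-IC fingerprint

        def looks_recovered(measured, k=3.0):
            """Flag a die whose path delays sit collectively outside the fresh fingerprint."""
            z = (measured - mu) / sigma
            stat = z.mean() * np.sqrt(len(z))    # aging slows most paths coherently; variation does not
            return stat > k

        aged = nominal * (1 + rng.normal(0.05, 0.03, N_PATHS))     # ~5% aging-induced slowdown (assumed)
        new_die = nominal * (1 + rng.normal(0.00, 0.03, N_PATHS))
        print("recovered die flagged:", looks_recovered(aged))
        print("fresh die flagged:    ", looks_recovered(new_die))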

  • Using partial masking in X-chains to increase output compaction for an X-canceling MISR

    Publication Year: 2012 , Page(s): 19 - 24
    Cited by:  Papers (1)

    An X-canceling MISR [Touba 07] provides the ability to tolerate unknowns (X's) in the output response with very little loss of observability of non-X values. When the density of X's is low, an X-canceling MISR is extremely efficient, as the number of control bits depends only on the total number of X's in the output response. However, for higher X-densities, an X-canceling MISR becomes less efficient. This paper describes a very effective approach for using an X-canceling MISR for designs with high X-density. It builds on the idea of stitching together the scan cells that capture the largest number of X's into "X-chains," as proposed in [Wohl 08]. In the proposed approach, partial X-masking is applied to the X-chains to eliminate the vast majority of the X's at very little cost in terms of control bits. Only the X's coming from scan cells not in the X-chains, plus the X's left unmasked in the X-chains, need to be handled by the X-canceling MISR, thereby significantly reducing the total number of control bits required. Experimental results show that an order-of-magnitude improvement in output compaction can be achieved.

  • On the development of Software-Based Self-Test methods for VLIW processors

    Publication Year: 2012 , Page(s): 25 - 30
    Cited by:  Papers (4)

    Software-Based Self-Test (SBST) approaches are an effective solution for detecting permanent faults; this technique has been widely used with good success on generic processors and processor-based architectures. However, when VLIW processors are addressed, traditional SBST techniques and algorithms must be adapted to each particular VLIW architecture. In this paper, we present a method that formalizes the development flow for writing effective SBST programs for VLIW processors, starting from known algorithms addressing traditional processors. In particular, the method addresses the parallel functional units, such as ALUs and MULs, embedded in a VLIW processor. Fault simulation campaigns confirm the validity of the proposed method.

  • Low pin count DfT technique for RFID ICs

    Publication Year: 2012 , Page(s): 31 - 36
    Cited by:  Papers (2)

    The need for uniquely identifiable objects in multiple applications has drawn great attention to RFID ICs over the years. The test challenges imposed by the nature of this type of IC include small die size, a reduced number of external pins, low-power mixed-signal design, and the need for a low-cost production test. In this work, a DfT technique for RFID ICs that deals with some of these limitations is presented. The method requires only 3 external test pins. Results show that the proposed method allows combining and managing functional tests (used for testing most of the analog parts of the chip) and structural (scan) tests, reaching high fault coverage. A test control unit and a test wrapper are added to the core. The architecture of the test control unit is presented, as well as area, test coverage, and test time results.

  • Generation and compaction of mixed broadside and skewed-load n-detection test sets for transition faults

    Publication Year: 2012 , Page(s): 37 - 42

    This paper describes an n-detection test generation strategy for mixed test sets, which consist of both broadside and skewed-load tests, targeting transition faults. The strategy consists of a test generation procedure without test compaction heuristics and a static test compaction procedure. The test generation procedure decides, every time a fault is targeted, whether to generate a broadside or a skewed-load test. The static test compaction procedure allows tests and test types to be modified in order to obtain more effective tests. Experimental results demonstrate the following. (1) The size of the test set produced by the test generation procedure grows more slowly than linearly with n. After static test compaction, the increase in test set size with n is closer to the linear increase that is typical of compacted n-detection test sets. (2) For an individual fault, the n-detection test set may contain a mix of broadside and skewed-load tests to reach the target of n detections. (3) For higher values of n, static test compaction typically improves the quality of the test set while reducing its size significantly.

  • A scan-based attack on Elliptic Curve Cryptosystems in presence of industrial Design-for-Testability structures

    Publication Year: 2012 , Page(s): 43 - 48
    Cited by:  Papers (1)

    This paper presents a scan-based attack on hardware implementations of Elliptic Curve Cryptosystems (ECC). Several up-to-date Design-for-Testability (DfT) features are considered, including response compaction, X-masking, and partial scan. Practical aspects of the proposed scan-based attack are described, namely the timing and leakage analysis that allows identifying data related to the secret key among the bits observed through the DfT structures. We use an experimental setup that allows full automation of the proposed scan attack on designs including these DfT configurations. The attack requires around 8 chosen points to retrieve a 192-bit scalar.

  • #SAT-based vulnerability analysis of security components — A case study

    Publication Year: 2012 , Page(s): 49 - 54

    In this paper we describe a new approach to assess a circuit's vulnerability to fault attacks. This is achieved through analysis of the circuit's design specification, making use of modern SAT solving techniques. For each injectable fault, a corresponding SAT instance is generated. Every satisfying solution for such an instance is equivalent to a circuit state and an input assignment for which the fault affects the circuit's outputs such that the error is not detected by the embedded fault detection. The number of solutions is precisely calculated by a #SAT solver and can be translated into an exact vulnerability measure. We demonstrate the applicability of this method for design space exploration by giving detailed results for various implementations of a deterministic random bit generator.
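
    A toy stand-in for this flow, with exhaustive enumeration playing the role of the #SAT solver: for an assumed parity-protected two-output circuit, it counts the assignments for which an injected stuck-at fault corrupts the outputs without tripping the embedded check, and normalizes the count into a vulnerability figure. The circuit and fault are hypothetical, not the paper's DRBG case study.

        from itertools import product

        def circuit(a, b, c, stuck_b=None):
            """Tiny parity-protected circuit; stuck_b injects a stuck-at value on input b's fan-out stem."""
            b_stem = b if stuck_b is None else stuck_b
            o1, o2 = a ^ b_stem, b_stem ^ c
            check_ok = (o1 ^ o2) == (a ^ c)       # parity predicted independently from a and c
            return (o1, o2), check_ok

        def vulnerability(stuck_b):
            """Fraction of assignments where the fault corrupts the outputs yet the check passes,
            i.e. the quantity a #SAT solver would count exactly on a real design."""
            undetected = 0
            for a, b, c in product((0, 1), repeat=3):
                good, _ = circuit(a, b, c)
                bad, check_ok = circuit(a, b, c, stuck_b)
                undetected += int(bad != good and check_ok)
            return undetected / 8

        # A fault on the shared stem corrupts both outputs, so the parity check never fires.
        print("vulnerability of stuck-at-0 on b:", vulnerability(0))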

  • Software exploitable hardware Trojans in embedded processor

    Publication Year: 2012 , Page(s): 55 - 58
    Cited by:  Papers (1)

    The growing threat of hardware Trojan attacks from untrusted foundries or design houses has motivated researchers around the world to analyze the threat and develop effective countermeasures. In this paper, we focus on analyzing a specific class of hardware Trojans in embedded processors that can be enabled by software or data to leak critical information. These Trojans pose a serious threat in pervasively deployed embedded systems. An attacker can trigger these Trojans to extract valuable information from a system during field deployment. We show that an adversary can design a low-overhead, hard-to-detect Trojan that can leak the secret keys stored in a processor, the code running on it, or the data being processed.

  • Minimization of Trojan footprint by reducing Delay/Area impact

    Publication Year: 2012 , Page(s): 59 - 62
    Cited by:  Papers (1)

    Due to globalization and the capital costs of building and maintaining fabrication facilities, the number of fabs is shrinking every day and more vendors outsource the fabrication process to offshore fabrication facilities. Using such facilities makes integrated circuits vulnerable to malicious alterations. These alterations are more commonly known as hardware Trojans and are usually created by the insertion of additional logic circuitry. In this paper, we present two types of Trojans that we practically generated, and we explain how we can potentially hide them from verification tools and trigger them automatically. The minimal performance/area impact of our Trojans is achieved by keeping the placement and routing of the existing logic unchanged and adding only a few gates to the design.

  • Designing and implementing a Malicious 8051 processor

    Publication Year: 2012 , Page(s): 63 - 66
    Cited by:  Papers (2)

    We report our experiences in designing and implementing several hardware Trojans within the framework of the Malicious Processor Design Challenge competition, held as part of Cyber Security Awareness Week (CSAW) at the Polytechnic Institute of New York University in November 2011. A malicious processor provides an attacker with the ability to bypass traditional defensive techniques, as it occupies a layer below the entire software stack. To show this, we present several attack techniques employing hardware Trojans to compromise the security of an 8051 processor performing the RC-5 encryption algorithm, implemented on a Digilent ATLYS Spartan-6 FPGA development board. We show three powerful attacks using extra hardware: a back-door Trojan that allows an attacker to dump all or part of the memory, a bomb-counter Trojan that disables/enables special or extra functions, and a power-sink Trojan that exposes the state of the carry flag by changing the power profile of the system.

  • On the design of two single event tolerant slave latches for scan delay testing

    Publication Year: 2012 , Page(s): 67 - 72
    Cited by:  Papers (2)

    This paper proposes two new slave latches for improving the Single Event Upset (SEU) tolerance of a flip-flop in scan delay testing. The two proposed slave latches utilize additional circuitry to increase the critical charge of the flip-flop compared to designs found in the technical literature. The first (second) latch design achieves a 5.6 (2.4) times larger critical charge with 11% (4%) delay and 16% (9%) power consumption overhead at a 32 nm feature size, as compared to the best design found in the technical literature. Moreover, it is shown that the proposed slave latches also have superior performance in the presence of a single event with a multiple-node upset.

  • Hardening a memory cell for low power operation by gate leakage reduction

    Publication Year: 2012 , Page(s): 73 - 78
    Cited by:  Papers (1)

    A single event causing multiple node upsets is a significant phenomenon for CMOS memories; its occurrence is due to the reduced feature sizes and lower power supply voltages at the nanoscale. A low-power memory cell that utilizes a positive ground-level voltage to reduce leakage power (requiring two transistors) is considered, and two schemes are proposed for hardening it. These designs require 4 additional transistors for hardening, thus they are 12T. The addition of two transistors to reduce the gate leakage is also applied to the DICE cell for comparison purposes (making it a 14T scheme for low-power operation). A comprehensive simulation-based assessment of the performance of these low-power cells is pursued under different feature sizes and values of the (virtual) ground-level voltage. Figures of merit for performance, such as power dissipation, write/read times, and static noise margin (SNM), are reported, as well as the charge plot of the critical node pair (for tolerance to a single event with a single/multiple node upset).

  • Single event upset tolerance in flip-flop based microprocessor cores

    Publication Year: 2012 , Page(s): 79 - 84

    Soft errors due to single event upsets (SEUs) in the flip-flops of a design are of increasing importance in nanometer-technology microprocessor cores. In this work, we present a flip-flop-oriented soft error detection and correction technique. It exploits a transition detector at the output of the flip-flop for error detection, along with an asynchronous local error correction scheme, to provide soft error tolerance. Alternatively, a low-cost soft error detection scheme is introduced, which shares a transition detector among multiple flip-flops, while error recovery relies on architectural replay. To validate the proposed approach, it has been applied in the design of a 32-bit MIPS microprocessor core using a 90 nm CMOS technology.

  • An on-line soft error mitigation technique for control logic of VLIW processors

    Publication Year: 2012 , Page(s): 85 - 91
    Cited by:  Papers (1)

    The soft error phenomenon is forecast to be a real threat for today's IC technologies. While error detection and correction codes have been used effectively to protect regular, structured memory arrays against the emerging soft error threat, a low-overhead approach for the complex and unstructured control logic of modern processors is still a challenge. This paper presents a low-overhead reliability enhancement scheme for the control logic of a Very Long Instruction Word (VLIW) processor. First, a soft error sensitivity analysis is carried out in order to identify the most vulnerable signals inside the control unit. Subsequently, these vulnerable control signals are classified as either opcode-dependent or instruction-dependent. The strategy for protecting opcode-dependent control signals utilizes a ROM, while instruction-dependent control signals are protected using a RAM as a cache to store a history of these control signals, combined with the Triple Modular Redundancy concept to mask single transient faults. The technique has been implemented on a high-performance processor, the Xentium processor, in order to validate its degree of fault tolerance as well as its performance overhead.

  • Exploring hardware transaction processing for reliable computing in chip-multiprocessors against soft errors

    Publication Year: 2012 , Page(s): 92 - 97

    With shrinking transistor feature sizes and lower nodal capacitances and supply voltages at new technology generations, microprocessors are becoming more vulnerable to single-event upsets and transients, a.k.a. soft errors. While the chip-multiprocessor (CMP) architecture has been adopted in mainstream microprocessors and the number of on-chip processor cores keeps increasing, the system-level reliability of chip-multiprocessors is degrading in inverse proportion to the core count. In this work, we propose to exploit the abundant on-chip processor cores for redundant hardware transaction processing, which provides native support for error detection and recovery in transactional chip-multiprocessors (TxCMPs) against soft errors. The proposed transactional processor cores execute everything as transactions, and TxCMPs execute redundant transactions on different cores. To alleviate the performance overhead due to transaction commits, we further propose two architectural optimizations, namely early partial-commit packet transmission and speculative transaction execution in reliable computing mode. Our experimental evaluation confirms the effectiveness of the optimized TxCMPs in achieving low-cost reliable computing against soft errors.

  • Amalgamated q-ary codes for multi-level flash memories

    Publication Year: 2012 , Page(s): 98 - 103

    A flash memory is a non-volatile memory based on an electron-storing mechanism. A multi-level flash memory cell can store one of q symbols (q > 2). As q increases, the data becomes less reliable and the probability that it is distorted by different types of errors increases. This paper presents an amalgamated q-ary code capable of correcting a mixture of t_s symmetric errors and an additional t_a asymmetric errors of limited magnitude l. In the proposed code, each q-ary codeword is composed of n multi-bit symbols, and each multi-bit (i.e., q-ary) symbol is viewed as two sub-symbols over two different alphabets. The new construction has a higher code rate than the conventional single-alphabet code.

  • Accurate calculation of SET propagation probability for hardening

    Publication Year: 2012 , Page(s): 104 - 108
    Cited by:  Papers (3)

    A novel method is proposed for accurately calculating the SET propagation probability, and it is shown how it can assist the hardening process. This paper provides a method to determine a set of patterns that must be applied at the inputs to obtain SET propagation characteristics that are meaningful for hardening purposes. The impact of the proposed method is experimentally verified on the ISCAS and ITC benchmarks.

  • Transient pulse propagation using the Weibull distribution function

    Publication Year: 2012 , Page(s): 109 - 114

    The proposed deterministic model aims to improve Soft Error Rate estimation by accurately approximating the generated pulse and all subsequent pulses. The generated pulse is approximated by a piecewise function consisting of two Weibull cumulative distribution functions. This method is an improvement over existing methods as it offers high accuracy while requiring less pre-characterization. This is accomplished by fitting a pulse to the Weibull function using actual gate parameters.
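
    A small numeric sketch of the pulse shape just described: the rising and falling edges are modeled by two Weibull cumulative distribution functions stitched into a piecewise function. The scale/shape parameters and peak amplitude below are assumed for illustration; the paper fits them from actual gate parameters.

        import numpy as np

        def weibull_cdf(t, scale, shape):
            """F(t) = 1 - exp(-(t/scale)**shape) for t >= 0, else 0."""
            t = np.maximum(t, 0.0)
            return 1.0 - np.exp(-(t / scale) ** shape)

        def set_pulse(t, t_peak, amp, rise=(20e-12, 2.0), fall=(60e-12, 1.5)):
            """Piecewise SET pulse: a rising Weibull CDF up to t_peak, a falling one after it.
            The (scale, shape) pairs and the amplitude are illustrative, not fitted gate parameters."""
            rising = amp * weibull_cdf(t, *rise)
            falling = amp * (1.0 - weibull_cdf(t - t_peak, *fall))
            return np.where(t < t_peak, rising, falling)

        t = np.linspace(0.0, 500e-12, 1000)            # 0 to 500 ps
        v = set_pulse(t, t_peak=80e-12, amp=0.9)       # 0.9 V peak transient (assumed)
        width = (v > 0.45).sum() * (t[1] - t[0])       # width at half amplitude
        print(f"pulse width at half amplitude: {width * 1e12:.1f} ps")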

  • Accurate simulation of SEUs in the configuration memory of SRAM-based FPGAs

    Publication Year: 2012 , Page(s): 115 - 120
    Cited by:  Papers (2)

    SRAM-based FPGAs are increasingly relevant in a growing number of applications, ranging from automotive to aerospace. Designers of safety-critical applications demand accurate methodologies to evaluate the Single Event Upset (SEU) sensitivity of their designs. In this paper, we present an accurate simulation method for evaluating the effects of SEUs in the configuration memory of SRAM-based FPGAs. The approach is able to simulate SEUs affecting the configuration memory of both logic and routing resources, since it accurately models the electrical behavior of SEUs in the configuration memory. Detailed experimental results on a large set of benchmark circuits are provided, and a comparison with fault injection experiments is shown in order to validate the accuracy of the proposed method. The results clearly demonstrate the benefits of our approach, since the simulation results almost completely predict the results obtained through fault injection.

  • High-reliability fault tolerant digital systems in nanometric technologies: Characterization and design methodologies

    Publication Year: 2012 , Page(s): 121 - 125
    Cited by:  Papers (3)

    This paper reports the main contributions of a project devoted to the definition of techniques to design and evaluate fault-tolerant systems implemented using the SoPC paradigm, suitable for mission- and safety-critical application environments. In particular, the effort of the five involved research units has been devoted to addressing some of the main issues related to the specific technological aspects introduced by these flexible platforms. The overall target of the research is the development of a design methodology for highly reliable systems realized on reconfigurable platforms based on a System-on-Programmable-Chip (SoPC), as discussed in the next section.

  • A systematic methodology to improve yield per area of highly-parallel CMPs

    Publication Year: 2012 , Page(s): 126 - 133
    Cited by:  Papers (1)

    The manufacturing yield of chip multi-processors (CMPs) has become a significant problem as more transistors are integrated onto a single die and the defect rate keeps increasing for "end-of-Moore" nano-scale CMOS technologies. Since such CMP designs usually have significant structural symmetry, adding spares to them should be an effective method for increasing yield per area, as is the case for memories. However, a systematic approach to adding spares that optimizes CMP yield per area has never been developed, primarily due to the lack of (i) a general model of CMP architectures and (ii) a practically usable model for computing the areas of chip versions with different numbers of spares. This paper develops such models and, in conjunction with a systematic approach for enumerating a wide range of spare configurations, uses them to compute the area overhead and yield of each configuration. In particular, this paper proposes a k-way spare-sharing technique to obtain optimal spare configurations that maximize the yield per area of any CMP by efficiently traversing the design space for adding spares. Experimental results show significant yield-per-area improvements over previous approaches and show that these benefits will continue to grow with increasing levels of parallelism in CMPs as well as with continued technology scaling.
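
    A back-of-the-envelope sketch of the yield-per-area trade-off being optimized, using a standard Poisson defect model rather than the paper's CMP area model: adding spare cores raises the probability that enough cores are functional but also grows the die area, and the best spare count maximizes the ratio. The defect density, core area, and core count are assumed values.

        from math import comb, exp

        D0 = 0.5          # defect density in defects/cm^2 (assumed)
        A_CORE = 0.10     # area of one core in cm^2 (assumed)
        A_FIXED = 0.50    # uncore/interconnect area in cm^2 (assumed)
        M_NEEDED = 16     # cores that must be functional

        core_yield = exp(-A_CORE * D0)     # Poisson yield of one core
        uncore_yield = exp(-A_FIXED * D0)

        def chip_yield(spares):
            """P(at least M_NEEDED of M_NEEDED + spares cores are good) times the uncore yield."""
            n = M_NEEDED + spares
            good_enough = sum(comb(n, k) * core_yield**k * (1 - core_yield)**(n - k)
                              for k in range(M_NEEDED, n + 1))
            return good_enough * uncore_yield

        def yield_per_area(spares):
            return chip_yield(spares) / (A_FIXED + (M_NEEDED + spares) * A_CORE)

        for s in range(9):
            print(f"spares={s}: yield={chip_yield(s):.3f}  yield/area={yield_per_area(s):.3f}")
        print("best spare count under these assumptions:", max(range(9), key=yield_per_area))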

  • On the multiple fault detection of a nano crossbar

    Publication Year: 2012 , Page(s): 134 - 139

    This paper proposes an approach for testing a nano crossbar switch; fault detection is considered in the presence of faulty switches and nets in the crossbar. To ensure detection, a one-to-one (onto) relationship in the setting (programming) of the switches is established in each of the configurations of the crossbar. This is accomplished using a constant-sum transformation of the characteristic matrix of the crossbar by utilizing different graph algorithms in O(N^4.5) time, where N is the matrix dimension. Matrix properties are related to graph algorithms to generate permutation matrices corresponding to the configurations (phases) of the crossbar. Simulation results are provided to further substantiate the validity of the proposed approach for testing nano crossbars of very large dimension and with different switch distributions.
