2011 12th Latin American Test Workshop (LATW)

Date: 27-30 March 2011

Results 1-25 of 57
  • Author index

    Publication Year: 2011, Page(s): 1 - 10
  • Committees

    Publication Year: 2011, Page(s): 1 - 2
  • [Copyright notice]

    Publication Year: 2011, Page(s): 1
  • [Front cover]

    Publication Year: 2011, Page(s): c1
  • Analysis of SEU parameters for the study of SRAM cells reliability under radiation

    Publication Year: 2011, Page(s): 1 - 5
    Cited by: Papers (6)

    A simplified RC circuit is used to simulate the effects of ionizing particles in a 90 nm SRAM. The main characteristics of the memory-cell bit flip are discussed and compared in terms of their characteristic parameters. The effect of the surrounding circuitry on the struck transistor is also analyzed in order to extract parameters that characterize the SEU occurrence.

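    The abstract does not give the exact circuit, so the Python fragment below only illustrates the kind of charge-injection stimulus commonly used in such SRAM SEU studies: a double-exponential current pulse whose integral equals the collected charge. The time constants and the 50 fC charge are placeholder values, not figures from the paper.

        import numpy as np

        def seu_current_pulse(t, q_coll=50e-15, tau_rise=5e-12, tau_fall=200e-12):
            # Double-exponential pulse: integrates to q_coll over [0, inf).
            norm = q_coll / (tau_fall - tau_rise)
            return norm * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

        t = np.linspace(0.0, 2e-9, 2001)
        injected_charge = np.trapz(seu_current_pulse(t), t)  # close to 50 fC
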
  • Configurable Thru-Silicon-Via interconnect Built-In Self-Test and diagnosis

    Publication Year: 2011, Page(s): 1 - 6
    Cited by: Papers (3)

    Three-dimensional integration is a key technology for systems whose performance and power requirements cannot be met by traditional silicon technologies, and testing is one of its major challenges. This paper proposes a configurable interconnect Built-In Self-Test (BIST) technique for inter-die interconnects (Thru-Silicon Vias, TSVs). The technique accounts for faults such as opens and shorts as well as delay faults due to crosstalk. In the underlying fault model, signal transitions on victim TSVs are affected by transitions on aggressor TSVs; the Kth Aggressor Fault (KAF) model assumes that the aggressors of each victim TSV are its K-order neighbors. Test time is reduced because more victim TSVs can be tested concurrently. The neighboring order K is technology dependent and is chosen so that test time is minimal without loss of fault coverage. The proposed BIST has lower area than existing interconnect BIST solutions, and although its configuration capabilities increase the area by up to 80%, the relatively large TSV pitch (tens of μm) keeps the overall area overhead small.

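    As an illustration of the scheduling idea behind the Kth Aggressor Fault model described above, the sketch below groups TSVs so that no two concurrently tested victims lie within each other's K-neighborhood. A one-dimensional TSV arrangement and a round-robin grouping are assumptions made here for clarity; they are not the paper's actual BIST hardware.

        def kaf_test_sessions(num_tsvs, k):
            # Victims spaced more than k apart never aggress each other under
            # KAF, so they can be tested in the same session.
            sessions = [[] for _ in range(k + 1)]
            for tsv in range(num_tsvs):
                sessions[tsv % (k + 1)].append(tsv)
            return sessions

        # Example: 12 TSVs with K = 2 need only 3 sessions instead of 12.
        for idx, victims in enumerate(kaf_test_sessions(12, 2)):
            print("session", idx, "victims", victims)
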
  • Testing in an agile product development environment: An industry experience report

    Publication Year: 2011, Page(s): 1 - 6

    Product development nowadays requires a strong focus on time to market as well as on quality in order to meet customer expectations. Several agile methods and methodologies have been proposed to enable early release of software products and meet these stringent deadlines. In an agile team, however, the quality assurance usually performed by tester groups is directly affected by the substantial changes in how and when tasks are performed during a project; the traditional tester role usually does not adjust well to these new scenarios. This paper presents empirical observations on test practices in agile projects. These projects were developed at the Nokia Institute of Technology (INdT) in the Network Technologies group, where protocol compliance, performance, low-level details, and other requirements have to be guaranteed. We provide an experience report on agile testing, identify some important issues in dealing with it, and discuss how the tester role can be adapted to this kind of environment.

  • Functional test generation for the pLRU replacement mechanism of embedded cache memories

    Publication Year: 2011, Page(s): 1 - 6

    Testing cache memories is a challenging task, especially when targeting complex and high-frequency devices such as modern processors. While the memory array in a cache is usually tested with BIST circuits that implement March-based solutions, there is no established methodology for the cache controller logic, mainly due to its limited accessibility. One possible approach is Software-Based Self-Test (SBST); however, devising test programs able to thoroughly exercise the replacement logic and make the results observable is not trivial. This paper proposes a test program generation approach based on a Finite State Machine (FSM) model of the replacement mechanism. The effectiveness of the method is assessed on a case study considering a data cache implementing the pLRU replacement policy.

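    For readers unfamiliar with tree-pLRU, the sketch below models the replacement state of a single 4-way set, the kind of FSM a functional test program has to drive through all of its transitions. The bit encoding is one common convention assumed here for illustration; the paper's FSM model may differ.

        class PLRU4Way:
            # Tree-pLRU state for one 4-way set: b0 picks the pair to replace
            # from, b1/b2 pick the way inside each pair.
            def __init__(self):
                self.b0 = self.b1 = self.b2 = 0

            def victim(self):
                # Way that pLRU would evict next.
                if self.b0 == 0:
                    return 0 if self.b1 == 0 else 1
                return 2 if self.b2 == 0 else 3

            def access(self, way):
                # Point the tree bits away from the way just used.
                if way in (0, 1):
                    self.b0 = 1
                    self.b1 = 1 if way == 0 else 0
                else:
                    self.b0 = 0
                    self.b2 = 1 if way == 2 else 0

    A test program in the spirit of the abstract would generate hit/miss sequences that walk this state machine through every state and transition, while making the chosen victims observable through produced data or measured miss penalties.
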
  • Communication fault injection for multi-protocol Java applications testing

    Publication Year: 2011, Page(s): 1 - 6

    Network applications with high dependability requirements must be carefully tested under communication faults to build confidence in their proper behavior, and fault injection is very useful for such tests. When an application uses more than one protocol, such as UDP, TCP, and RMI, a tool that properly handles all of them is required; however, existing tools are either unable to test this kind of application or impose significant drawbacks. This paper presents Comform, a communication fault injector for multi-protocol Java applications. It intercepts protocol messages at the JVM level and uses firewall rules for fault emulation. The approach is useful for both white-box and black-box testing and requires no changes to the target's source code.

  • Formally verifying an RTOS scheduling monitor IP core in embedded systems

    Publication Year: 2011, Page(s): 1 - 6

    The implementation of complex, high-performance functionalities in nano-CMOS technologies faces significant design and test challenges related to the need to adopt robust and efficient system validation methodologies. This is particularly true for embedded systems devoted to critical applications, where human life or large economic resources are at stake. In this work, we propose a new approach to obtain formal models suitable for formal equivalence checking methodologies, which are used to verify the functionality of embedded systems devoted to critical applications. The main advantage of this approach is that it makes it possible to apply a formal algorithm to the model in order to guarantee the absence of design bugs. A case study is presented to demonstrate the various aspects of the proposed approach.

  • Reliability enhancement via Sleep Transistors

    Publication Year: 2011, Page(s): 1 - 6

    CMOS is still the predominant technology for digital designs, with no identifiable competitor in the near future. The driving forces of this leadership are the high miniaturization capability and the reliability of CMOS. The latter, though, is decreasing at an alarming pace as technologies scale down to the nanometer range. The consequence is a rising demand for solutions that improve the lifetime reliability and yield of today's integrated systems. A common solution is the redundant implementation of components; however, redundancy collides with another major issue of integrated circuits, namely power dissipation. The main contribution of this work is an approach that increases lifetime reliability at only a small delay and power penalty: the well-known "sleep transistor" standby-leakage reduction technique is combined with the idea of redundancy. In addition, we propose an extended flow for reliability verification at the transistor level. Simulation results indicate that the new approach can increase lifetime reliability by more than a factor of two compared to the initial designs.

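    The factor-of-two lifetime improvement quoted above is consistent with textbook standby redundancy, which is the scenario the sleep transistors enable: with one identical cold spare, an ideal switch, and exponentially distributed lifetimes with failure rate λ,

        R_sys(t) = e^(-λ t) * (1 + λ t),    MTTF_sys = 2 / λ,

    i.e., twice the mean lifetime of a single instance. This is a generic illustration only; the paper's own analysis also accounts for the delay and leakage cost of the sleep transistors themselves.
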
  • An improved OBT strategy for untuned continuous-time analog filters

    Publication Year: 2011, Page(s): 1 - 5
    Cited by: Papers (1)

    This paper presents an enhanced Oscillation Based Test (OBT) scheme that performs a relative comparison between two oscillation frequencies, eliminating the need for a fixed reference and thereby allowing the OBT technique to be applied to untuned circuits. The traditional OBT cannot be applied directly to untuned filters because, as a consequence of process-parameter dispersion, using a single fixed reference oscillation frequency to evaluate the test results is no longer practical. To overcome this problem, a new approach based on the relative comparison of two oscillation frequencies is proposed. To evaluate the effectiveness of the idea, catastrophic fault models at both device and circuit level have been adopted. The results are encouraging, despite the circuit complexity introduced to adapt the classical OBT to untuned filters. Although a Gm-C filter was used for the proof of concept presented here, the strategy can clearly be applied to other kinds of continuous-time analog filters, such as MOSFET-C or R-AO-C filters.

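    A minimal sketch of the relative-comparison idea: rather than checking one oscillation frequency against a fixed reference, the test compares the ratio of two oscillation frequencies measured on the same untuned filter against a tolerance band, since process dispersion shifts both frequencies together. The function name, the nominal ratio, and the 5% tolerance are assumptions for illustration only.

        def obt_relative_pass(f_osc_a, f_osc_b, nominal_ratio, tol=0.05):
            # Pass if the measured frequency ratio deviates from the nominal
            # ratio by less than the allowed tolerance.
            measured_ratio = f_osc_a / f_osc_b
            return abs(measured_ratio - nominal_ratio) <= tol * nominal_ratio
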
  • A TLM-based approach to functional verification of hardware components at different abstraction levels

    Publication Year: 2011, Page(s): 1 - 6
    Cited by: Papers (2)

    Verification has long been recognized as an integral part of the hardware design process. When designing a system, engineers usually use various design representations and concretize them step by step down to a physical layout. At the beginning of the process, when much is still undetermined, only abstract reference models are applicable to verification; as the process nears its end, more concrete ones can be used. This article concerns the problem of developing reusable verification systems (testbenches) that can be used to analyze different versions of the same component at different abstraction levels. We suggest an approach for constructing reusable reaction checkers based on the concept of Transaction Level Modeling (TLM). The paper includes a general description of the approach, considers several particular cases, and outlines our experience.

  • A fault-tolerant service discovery protocol for emergency search and rescue missions

    Publication Year: 2011, Page(s): 1 - 6

    In service discovery protocols for mobile ad hoc networks (MANETs) assisting emergency search and rescue missions, a key problem is keeping the system operating despite the presence of faults. In this paper, we propose a location-based fault-tolerant service discovery mechanism that considers the requesting node's location, the request response time, and the maximum node speed to guarantee the delivery of packets in adversarial environments. This work also addresses service selection, with a mechanism that applies information fusion in intermediate nodes to reduce the number of message replies and dynamically selects the best resource providers during the reply transmissions. Results show that both mechanisms make it possible to balance the discovery success rate against the reduction in message replies, thus minimizing network traffic and increasing network lifetime.

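    The abstract names three quantities used to guarantee delivery: the requester's location, the request response time, and the maximum node speed. One plausible way to combine them, shown below purely as an illustration and not as the paper's criterion, is a worst-case reachability guard on each reply; all names and the radio-range parameter are assumptions.

        import math

        def reply_still_deliverable(requester_xy, provider_xy,
                                    response_time_s, max_speed_mps, radio_range_m):
            # Worst case: both nodes move apart at maximum speed while the
            # request and its reply are in flight.
            distance_now = math.dist(requester_xy, provider_xy)
            worst_case = distance_now + 2 * max_speed_mps * response_time_s
            return worst_case <= radio_range_m
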
  • Neutron detection in atmospheric environment through static and dynamic SRAM-based test bench

    Publication Year: 2011, Page(s): 1 - 6
    Cited by: Papers (2)

    In this paper, we propose a technique for the detection of neutrons in the atmospheric environment, developed in the framework of the HAMLET project. The technique relies on the sensitivity of SRAM cells to particle radiation. In particular, we introduce a system based on a memory test bench that records neutron reactions in memory devices. The system is highly flexible: it is conceived to be modular, programmable, low power, and portable. The main novelty of the proposed test bench is its ability to run detection tests in both static (hold) and dynamic (operating) modes. The test bench is independent of the type of memory, allowing the study of the interaction between particles and electronic devices built with different technologies.

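    The static (hold-mode) test described above boils down to writing a known pattern, letting the array sit under the neutron flux, and counting the bit flips on read-back. The sketch shows that skeleton only; write_word/read_word stand in for the real test-bench access routines, and the checkerboard pattern and exposure handling are assumptions.

        import time

        PATTERN = 0x55555555  # assumed background pattern

        def static_hold_test(num_words, exposure_s, write_word, read_word):
            for addr in range(num_words):
                write_word(addr, PATTERN)
            time.sleep(exposure_s)                 # hold under irradiation
            upsets = 0
            for addr in range(num_words):
                upsets += bin(read_word(addr) ^ PATTERN).count("1")
            return upsets
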
  • Methodology and platform for fault co-emulation

    Publication Year: 2011, Page(s): 1 - 6

    A platform and a technique to improve stuck-at fault grading efficiency through hardware co-emulation are presented. IC manufacturers are always seeking new ways to test their devices in order to deliver parts with zero defects to their customers. Scan is a well-known technique that attains high fault coverage efficiently. However, demands for new features motivate the creation of highly complex systems mixing analog and digital blocks, with communication interfaces that are difficult to cover with scan patterns. In addition, the logic that configures the chip for the different test modes, some BIST memory interfaces, and asynchronous clock dividers or generators, among others, are examples of circuits that are blocked or have few observation/control points during scan. An FPGA-based platform that uses heterogeneous models to emulate digital, analog, and memory blocks for fault grading patterns on complex systems is described. We also introduce four types of models that can be used with FPGAs, and report the results of applying our fault co-emulation technique to benchmark circuits including ISCAS89, an ADC, I/O pads, and memory controllers.

  • Impact of RF-based fault injection in Pierce-type crystal oscillators under EMC standard tests in microcontrollers

    Publication Year: 2011, Page(s): 1 - 8

    Crystal oscillators are usually implemented in Pierce's configuration because of its high stability, small number of components, and easy adjustment. With technology development and device shrinking, modern microcontrollers embed oscillators in which all network components are integrated on chip, supporting both crystals and ceramic resonators in cost-effective designs. This makes the oscillator more sensitive to the feedback network load and to the stray capacitances introduced by the ESD protections required at the external crystal I/O pins. Robust applications such as industrial, automotive, biomedical, and aerospace products require aggressive EMC qualification tests in which high-power RF interference is injected, causing jitter, frequency deviation, or even clock corruption, which translates into severe faults at the system level. This work discusses the impact of RF interference on crystal oscillators. A theoretical load-factor analysis is proposed and compared with experimental results obtained from a 0.35 μm CMOS silicon test vehicle. Finally, a test strategy for microcontrollers and complex SoCs is presented.

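    The load-factor sensitivity discussed above follows from standard crystal theory rather than anything specific to this paper: the oscillation frequency is pulled away from the crystal's series resonance f_s by the load capacitance C_L seen across it,

        f_L ≈ f_s * (1 + C_m / (2 * (C_0 + C_L))),    C_L = (C_1 * C_2) / (C_1 + C_2) + C_stray,

    where C_m and C_0 are the crystal's motional and shunt capacitances and C_1, C_2 are the feedback capacitors. Any ESD- or interference-related change in C_stray therefore appears directly as a frequency deviation; these are textbook relations, not the load-factor model proposed in the paper.
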
  • Test and calibration of MEMS convective accelerometers with a fully electrical setup

    Publication Year: 2011, Page(s): 1 - 6
    Cited by: Papers (1)

    MEMS devices are expected to be used in a growing number of high-volume, low-cost applications. However, because they typically require physical test stimuli to verify their specifications, test and calibration costs are currently a bottleneck in reducing the overall production cost of MEMS sensors. This paper presents an alternative, electrical-only solution for testing and calibrating the sensitivity of MEMS convective accelerometers. It is based on simple impedance measurements performed both at ambient temperature and under nominal biasing conditions. The method is evaluated through Monte Carlo simulations considering typical process fluctuations and mismatch. Results show the potential of the technique, which reduces the sensitivity dispersion by a factor of more than 11 after calibration. As a consequence, a production yield of more than 99.8% can be expected for low-cost products using only electrical measurements in the calibration scheme.

  • Behavioral-level thermal- and aging-estimation flow

    Publication Year: 2011, Page(s): 1 - 6
    Cited by: Papers (1)

    In recent transistor technologies, design metrics are highly interdependent and cannot be considered in isolation. For example, temperature analysis requires detailed knowledge of the power consumption, while leakage currents depend exponentially on temperature. Additionally, long-term aging and degradation effects such as electromigration and NBTI occur in recent designs and need to be considered as well. For these reasons, we propose a flow that applies run-time-efficient and accurate methods and tools from the power-, thermal-, and aging-estimation domains, in combination with a model describing the physical properties of the IC package design. The flow iterates the parameter estimation to handle all interdependencies and reaches a steady state after a few runs and only seconds of execution time.

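    The iteration described above is essentially a fixed-point loop over the coupled estimates. The sketch below shows only that control flow, with placeholder estimator objects and a scalar temperature; the actual tools, models, and spatial temperature maps of the paper are not represented.

        def estimate_until_steady(design, power_est, thermal_est, aging_est,
                                  max_iters=20, tol_kelvin=0.1):
            temperature = thermal_est.ambient(design)   # placeholder interfaces
            aging = aging_est.fresh(design)
            for _ in range(max_iters):
                power = power_est.estimate(design, temperature, aging)
                new_temperature = thermal_est.estimate(design, power)
                aging = aging_est.estimate(design, new_temperature, power)
                converged = abs(new_temperature - temperature) < tol_kelvin
                temperature = new_temperature
                if converged:
                    break                               # interdependent metrics settled
            return temperature, aging
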
  • On the functional test of MESI controllers

    Publication Year: 2011, Page(s): 1 - 6

    This paper proposes a method to identify a functional sequence able to test the circuitry implementing the MESI protocol in a multi-processor or multi-core system. The method is purely functional and does not require any knowledge of the actual implementation of the circuitry: it is simply based on forcing the processors to execute a program while observing the results, both in terms of produced data and of performance behavior. The method is therefore particularly suitable for at-speed manufacturing test, incoming inspection, and on-line test. Experimental results have been gathered on a MIPS64-compatible multi-processor system.

  • Error-resilient design of branch predictors for effective yield improvement

    Publication Year: 2011, Page(s): 1 - 6
    Cited by: Papers (1)

    Speculative execution methods have long been employed in microprocessors to boost their performance. Being speculative, their implementation is functionally self-correcting: the speculation always needs to be verified and, if incorrect, its effect nullified. Hence, the actions of a faulty speculative component self-correct, albeit at the cost of some performance degradation. As speculation techniques are aggressively employed to enhance microprocessor performance, however, such performance faults may result in frequent violations of the expected speculation accuracy, significantly degrading the overall performance of the system. Microprocessors with defective speculative components are therefore discarded, resulting in yield loss. In this work, we propose several error-resilient design strategies for branch predictors, a representative example of speculative processor subsystems. The proposed methods support indexing mechanisms that can effectively re-map the history information used for predicting branches to fault-free entries, mitigating the impact of faults in heavily used entries. Experimental results indicate that the proposed error-resilient design methods significantly reduce the impact of performance faults, effectively improving yield.

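    The key mechanism in the abstract is an indexing scheme that steers accesses away from faulty predictor entries. The sketch below shows one deterministic way such a re-mapping could work for a gshare-style table; the XOR re-probe sequence and the fault map are illustrative assumptions, not the paper's design.

        def remap_index(pc, history, faulty_entries, table_size):
            # Baseline gshare-style index; table_size is assumed to be a power
            # of two so the XOR walk below visits every entry exactly once.
            index = (pc ^ history) % table_size
            for step in range(table_size):
                probe = index ^ step          # deterministic re-probe sequence
                if probe not in faulty_entries:
                    return probe
            return index                      # every entry faulty: give up
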
  • Test power reduction via deterministic alignment of stimulus and response bits

    Publication Year: 2011, Page(s): 1 - 6
    Cited by: Papers (1)

    Toggling of scan cells during the shift of consecutive complementary values translates into unnecessary switching activity in the combinational logic under test. The result is an elevated level of power dissipation during test, endangering the reliability of the chip. The test power problem may be alleviated by a proper specification of don't-care bits to create transition-less runs of bit values; however, in order to reduce Test Data Volume (TDV), these don't-care bits are typically exploited to encode patterns through the on-chip decompressor, and such an approach would not address scan-out and/or capture power. In this paper, we propose a DfT-based approach for reducing test power in any scan architecture. The proposed on-chip mechanism enables the alignment of transition-wise costly stimulus/response bits in scan slices, absorbing these transitions and reducing power. The proposed solution is test-set independent and reduces power without resorting to X-filling, enabling orthogonal X-filling techniques to be applied in conjunction. Experimental results confirm the efficacy of the proposed method in attaining test power reductions.

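    For context, the quantity such techniques aim to reduce is often estimated with the weighted transition metric (WTM), which charges each adjacent-bit transition in a scan vector by the number of shift cycles it toggles the chain. The snippet below computes that standard metric; it is background material, not the DfT alignment mechanism proposed in the paper.

        def weighted_transition_metric(scan_vector):
            # scan_vector is a string of '0'/'1' bits in shift order.
            length = len(scan_vector)
            return sum(length - pos
                       for pos in range(1, length)
                       if scan_vector[pos] != scan_vector[pos - 1])

        # Example: '0101' (score 6) costs more shift power than '0011' (score 2).
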
  • IC immunity modeling process validation using on-chip measurements

    Publication Year: 2011, Page(s): 1 - 6
    Cited by: Papers (3)

    Developing integrated circuit (IC) immunity models and simulation flows has become one of the major concerns of IC suppliers, who want to predict whether a chip will pass susceptibility tests before fabrication and thus avoid redesign costs. This paper presents an IC immunity modeling process, including the standard immunity test applied to a dedicated test chip. An on-chip voltage sensor is used to characterize the propagation of radio-frequency interference inside the chip and thereby validate the immunity modeling process.

  • Scan chain configuration method for broadcast decompressor architecture

    Publication Year: 2011, Page(s): 1 - 5
    Cited by: Papers (3)

    High test data volume and long test application time are two major concerns when testing scan-based circuits. Broadcast-based test compression techniques can reduce both, and the broadcast rate is a major issue in these techniques. This paper describes a novel broadcast-based test decompressor architecture and a new method for configuring the scan chains for this architecture based on an analysis of the test set. Several similar heuristic algorithms are presented and compared that, according to the test-set analysis, produce the scan chain configuration with the maximum broadcast rate for the given test set.

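    The configuration problem sketched in the abstract rests on a simple compatibility notion: two scan chains can share one broadcast input if their specified bits never disagree in any pattern of the test set (don't-cares match anything). The greedy grouping below illustrates that analysis only; the paper's heuristics are more elaborate and aim at the maximum broadcast rate.

        def columns_compatible(col_a, col_b):
            # Bits are '0', '1' or 'X' (don't care); two columns are compatible
            # when no pattern specifies conflicting values.
            return all(a == 'X' or b == 'X' or a == b for a, b in zip(col_a, col_b))

        def greedy_broadcast_groups(columns):
            # columns[i] lists, pattern by pattern, the bits feeding scan chain i.
            groups = []
            for idx, col in enumerate(columns):
                for group in groups:
                    if all(columns_compatible(col, columns[m]) for m in group):
                        group.append(idx)
                        break
                else:
                    groups.append([idx])
            return groups
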
  • Evaluating the efficiency of data-flow software-based techniques to detect SEEs in microprocessors

    Publication Year: 2011, Page(s): 1 - 6
    Cited by: Papers (6)

    There is a large set of software-based techniques that can be used to detect transient faults. This paper presents a detailed analysis of the efficiency of data-flow software-based techniques in detecting SEUs and SETs in microprocessors. A set of well-known rules is presented and applied automatically to transform an unprotected program into a hardened one. SEUs and SETs are injected in all sensitive areas of a MIPS-based microprocessor architecture, and the efficiency of each rule and of combinations of them is tested. The experimental results are used to analyze the overhead of the data-flow techniques, allowing us to compare them with respect to time, resources, and efficiency in detecting these types of faults. This analysis allows us to implement an efficient fault-tolerance method that combines the presented techniques in such a way as to minimize memory area and performance overhead. The conclusions can guide designers in developing more efficient techniques to detect these types of faults.

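    The data-flow rules evaluated in the paper follow the familiar duplicate-and-compare pattern. The fragment below shows that pattern at source level purely for illustration; the paper applies such transformations automatically, and at the instruction level of a MIPS-based processor.

        def hardened_add(a, b, on_error):
            # Rule: keep a replica of every variable.
            a_dup, b_dup = a, b
            # Rule: replicate every operation on the replicas.
            result = a + b
            result_dup = a_dup + b_dup
            # Rule: compare the replicas before the value is used or stored.
            if result != result_dup:
                on_error()
            return result
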