2010 19th IEEE Asian Test Symposium (ATS)

Date: 1-4 Dec. 2010

Displaying Results 1 - 25 of 86
  • [Front cover]

    Page(s): C1
    PDF (6645 KB) | Freely Available from IEEE
  • [Title page i]

    Page(s): i
    PDF (18 KB) | Freely Available from IEEE
  • [Title page iii]

    Page(s): iii
    PDF (62 KB) | Freely Available from IEEE
  • [Copyright notice]

    Page(s): iv
    PDF (113 KB) | Freely Available from IEEE
  • Table of contents

    Page(s): v - xi
    PDF (151 KB) | Freely Available from IEEE
  • Message from the General Chair

    Page(s): xii
    PDF (68 KB) | Freely Available from IEEE
  • Message from the Program Co-chairs

    Page(s): xiii
    PDF (57 KB) | Freely Available from IEEE
  • Organizing Committee

    Page(s): xiv - xvi
    PDF (331 KB) | Freely Available from IEEE
  • List of Reviewers

    Page(s): xvii
    PDF (65 KB) | Freely Available from IEEE
  • Special Panel Session

    Page(s): xviii
    PDF (60 KB) | Freely Available from IEEE
  • Efficient Simulation of Structural Faults for the Reliability Evaluation at System-Level

    Page(s): 3 - 8
    PDF (307 KB) | HTML

    In recent technology nodes, reliability is considered part of the standard design flow at all levels of embedded system design. While techniques that use only low-level models at gate and register-transfer level offer high accuracy, they are too inefficient to consider the overall application of the embedded system. Multi-level models with high abstraction are essential to efficiently evaluate the impact of physical defects on the system. This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This makes it possible to accurately evaluate the impact of the faults on the entire hardware/software system. A case study of a system consisting of hardware and software for image compression and data encryption is presented, and the method is compared to a standard gate/RT mixed-level approach.
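
    A minimal Python sketch of the mixed-level idea, with one block simulated at gate level under an injected stuck-at fault while the rest of the system calls it as a transaction (the 1-bit netlist, the net names, and the 8-bit adder wrapper are hypothetical stand-ins; the paper targets full hardware/software systems):

    ```python
    def full_adder(a, b, cin, fault=None):
        """Gate-level full adder; `fault` = (net, stuck_value) injects a stuck-at."""
        nets = {"a": a, "b": b, "cin": cin}
        def drive(name, value):
            nets[name] = fault[1] if fault and fault[0] == name else value
        drive("x1", nets["a"] ^ nets["b"])
        drive("s", nets["x1"] ^ nets["cin"])
        drive("a1", nets["a"] & nets["b"])
        drive("a2", nets["x1"] & nets["cin"])
        drive("cout", nets["a1"] | nets["a2"])
        return nets["s"], nets["cout"]

    def tlm_add(x, y, fault=None):
        """Transaction-level view: the system calls an 8-bit 'add', unaware of
        gates. For simplicity the fault is injected in every bit slice."""
        result, carry = 0, 0
        for i in range(8):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry, fault)
            result |= s << i
        return result & 0xFF

    # System-level effect of a structural fault on the application data path:
    assert tlm_add(3, 5) == 8
    print("faulty sum:", tlm_add(3, 5, fault=("x1", 0)))   # deviates from 8
    ```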
  • Jitter Characterization of Pseudo-random Bit Sequences Using Incoherent Sub-sampling

    Page(s): 9 - 14
    PDF (2196 KB) | HTML

    In this paper, jitter analysis algorithms for characterizing the timing jitter of multi-Gbps pseudo-random bit sequences (PRBSs) are presented. For signal acquisition, incoherent sub-sampling is employed to increase the effective sampling rate of a digitizer and to simplify its signal acquisition architecture by removing the need for timing synchronization circuits. As a substitute for these circuits, algorithms for signal clock recovery (CR) and waveform reconstruction from the acquired data are developed. The algorithms use peak identification in the sampled signal spectrum and the sparsity of the reconstructed waveform in the frequency domain as decision-making criteria for accurate signal reconstruction. The jitter of the reconstructed waveform is quantified using a wavelet-based denoising method to generate a self-reference signal against which zero-crossing times are compared, yielding jitter statistics. In addition, data-dependent jitter components can be separated from the overall jitter by analyzing zero-crossing discrepancies of the self-reference signal.
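
    A NumPy sketch of the last step of this flow, assuming the waveform has already been reconstructed: zero-crossing times are compared against a denoised self-reference to obtain jitter statistics. A moving average stands in for the paper's wavelet-based denoising, and all signal parameters are illustrative:

    ```python
    import numpy as np

    fs, f0, n = 40e9, 1e9, 4000                # sample rate, tone, #samples
    t = np.arange(n) / fs
    wave = np.sin(2 * np.pi * f0 * t) + 0.02 * np.random.randn(n)

    def zero_crossings(x):
        """Linearly interpolated zero-crossing times of waveform x."""
        idx = np.where(np.diff(np.signbit(x)))[0]
        frac = x[idx] / (x[idx] - x[idx + 1])
        return t[idx] + frac / fs

    ref = np.convolve(wave, np.ones(9) / 9, mode="same")   # self-reference
    tj, tr = zero_crossings(wave), zero_crossings(ref)
    m = min(len(tj), len(tr))                  # crude pairing; fine for a sketch
    jitter = tj[:m] - tr[:m]
    print("RMS jitter %.2f ps, pk-pk %.2f ps"
          % (jitter.std() * 1e12, np.ptp(jitter) * 1e12))
    ```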
  • FSimGP^2: An Efficient Fault Simulator with GPGPU

    Page(s): 15 - 20
    PDF (282 KB) | HTML

    General-purpose computing on graphics processing units (GPGPU) is a paradigm shift in computing that promises a dramatic increase in performance, but it also brings an unprecedented level of complexity in algorithmic design and software development. In this paper, we present an efficient parallel fault simulator, FSimGP2, that exploits the high degree of parallelism supported by a state-of-the-art graphics processing unit (GPU) with the NVIDIA Compute Unified Device Architecture (CUDA). A novel three-dimensional parallel fault simulation technique is proposed to achieve extremely high computational efficiency on the GPU. The experimental results demonstrate speedups of up to 42× compared to another GPU-based fault simulator and up to 53× over a state-of-the-art algorithm on conventional processor architectures.
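
    FSimGP2 itself is a CUDA implementation; the pattern-parallel core of any such simulator can be sketched in plain Python, where each bit of a machine word carries one test pattern, so a single pass evaluates 64 patterns (the toy netlist and fault list are assumptions for illustration):

    ```python
    import random

    W = 64                       # word width = patterns simulated per pass
    MASK = (1 << W) - 1

    def simulate(a, b, d, stuck=None):
        """Toy netlist out = (a AND b) OR (NOT a AND d); `stuck` = (net, value)."""
        nets = {"a": a, "b": b, "d": d}
        if stuck and stuck[0] in nets:
            nets[stuck[0]] = MASK if stuck[1] else 0
        n1 = nets["a"] & nets["b"]
        if stuck and stuck[0] == "n1":
            n1 = MASK if stuck[1] else 0
        return n1 | ((~nets["a"] & MASK) & nets["d"])

    a, b, d = (random.getrandbits(W) for _ in range(3))
    good = simulate(a, b, d)
    for fault in [("a", 0), ("a", 1), ("n1", 0), ("n1", 1)]:
        detect = simulate(a, b, d, fault) ^ good   # 1-bits = detecting patterns
        print(fault, "detected by", bin(detect).count("1"), "of", W, "patterns")
    ```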
  • A Quasi-best Random Testing

    Page(s): 21 - 26
    PDF (146 KB) | HTML

    Random testing has long been employed in both hardware and software testing and is well known for its simplicity and straightforwardness: each test is selected at random, regardless of the tests generated before it. This purely random selection, however, tends to make it inefficient. This paper therefore proposes quasi-best-distance random testing, a new concept intended to make random testing more effective. The idea is based on the observation that the distance between two adjacent test vectors in a test sequence strongly influences the efficiency of fault detection. Procedures for constructing such a test sequence are presented and discussed in detail. The new approach adapts well to most circuits. Experimental results and a mathematical analysis of efficiency are also given to assess the performance of the proposed approach.
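
    The published procedure is more elaborate, but the driving idea, picking each next vector so its Hamming distance to the previous one lands near a target, can be sketched as follows (vector width, target distance, and candidate count are illustrative assumptions):

    ```python
    import random

    N_BITS = 16            # test vector width (illustrative)
    TARGET = N_BITS // 2   # desired distance between adjacent tests

    def next_vector(prev, candidates=32):
        """Of `candidates` random vectors, keep the one whose Hamming distance
        to the previous vector is closest to the target ('quasi-best')."""
        best = None
        for _ in range(candidates):
            cand = random.getrandbits(N_BITS)
            dist = bin(cand ^ prev).count("1")
            if best is None or abs(dist - TARGET) < abs(best[1] - TARGET):
                best = (cand, dist)
        return best[0]

    seq = [random.getrandbits(N_BITS)]
    for _ in range(9):
        seq.append(next_vector(seq[-1]))
    print([f"{v:04x}" for v in seq])
    ```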
  • Testing of Low-Cost Digital Microfluidic Biochips with Non-regular Array Layouts

    Page(s): 27 - 32
    PDF (331 KB) | HTML

    Digital microfluidic biochips with non-regular arrays are of interest for clinical diagnostic applications in a cost-sensitive market segment. Previous techniques for biochip testing are limited to regular microfluidic arrays. We present an automatic test pattern generation (ATPG) method for non-regular digital microfluidic chips. The method can generate test patterns to detect catastrophic defects in non-regular arrays where the full reconfigurability of the digital microfluidic platform is not utilized. It automates test-stimulus design and test-resource selection in order to minimize test application time. We also present an integer linear programming model for the compaction of test patterns while maintaining the desired fault coverage. We use a fabricated biochip with non-regular microfluidic arrays to evaluate the proposed ATPG method.
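
    The test-stimulus part of the idea can be sketched as droplet routing over a non-regular electrode graph: the test droplet must visit every electrode and reach the sink, so a missing arrival flags a catastrophic defect. The array shape, source, and sink are hypothetical; the paper's ATPG additionally selects test resources and compacts patterns with an ILP model:

    ```python
    from collections import deque

    # 3x3 array with electrode (1,1) absent -> non-regular layout
    cells = {(0,0),(0,1),(0,2),(1,0),(1,2),(2,0),(2,1),(2,2)}
    source, sink = (0, 0), (2, 2)

    def neighbors(cell):
        r, c = cell
        return [p for p in [(r-1,c),(r+1,c),(r,c-1),(r,c+1)] if p in cells]

    def shortest(a, b):
        """BFS path from a to b over valid electrodes."""
        q, seen = deque([[a]]), {a}
        while q:
            path = q.popleft()
            if path[-1] == b:
                return path
            for nxt in neighbors(path[-1]):
                if nxt not in seen:
                    seen.add(nxt)
                    q.append(path + [nxt])

    route, pos = [source], source
    for cell in sorted(cells):              # drag the droplet over every cell
        if cell not in route:
            route += shortest(pos, cell)[1:]
            pos = cell
    route += shortest(pos, sink)[1:]        # finish at the sink
    print("route covers all electrodes:", set(route) == cells)
    ```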
  • Derivation of Optimal Test Set for Detection of Multiple Missing-Gate Faults in Reversible Circuits

    Page(s): 33 - 38
    PDF (333 KB) | HTML

    Logic synthesis of reversible circuits has received considerable attention in light of recent advances in quantum computation. A reversible circuit is typically implemented by deploying several special types of quantum gates, such as k-CNOT. Although the classical stuck-at fault model is widely used for testing conventional CMOS circuits, new fault models, namely the single missing-gate fault (SMGF), repeated-gate fault (RGF), partial missing-gate fault (PMGF), and multiple missing-gate fault (MMGF), have been found to be more suitable for modeling defects in quantum k-CNOT gates. This article presents an efficient algorithm to derive an optimal test set (OTS) for the detection of multiple missing-gate faults in a reversible circuit implemented with k-CNOT gates. It is shown that the OTS is also sufficient to detect all single missing-gate faults (SMGFs) and all detectable repeated-gate faults (RGFs). Experimental results on several benchmark circuits are also reported.
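
    The detection condition is easy to sketch by simulation: a test detects the SMGF of a gate exactly when dropping that gate changes the circuit output for that input. The 3-line cascade and the greedy covering below are illustrative stand-ins; the paper derives a provably optimal test set instead:

    ```python
    from itertools import product

    circuit = [((0, 1), 2), ((2,), 0), ((0, 2), 1)]   # k-CNOT: (controls, target)

    def run(vec, skip=None):
        """Simulate the cascade; `skip` drops one gate (single missing-gate fault)."""
        bits = list(vec)
        for i, (controls, target) in enumerate(circuit):
            if i != skip and all(bits[c] for c in controls):
                bits[target] ^= 1
        return tuple(bits)

    tests, undetected = [], set(range(len(circuit)))
    for vec in product([0, 1], repeat=3):             # greedy cover over all inputs
        caught = {g for g in undetected if run(vec) != run(vec, skip=g)}
        if caught:
            tests.append(vec)
            undetected -= caught
    print("test set:", tests, "undetected SMGFs:", sorted(undetected))
    ```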
  • On Determining the Real Output Xs by SAT-Based Reasoning

    Page(s): 39 - 44
    PDF (338 KB) | HTML

    Embedded testing, built-in self-test, and methods for test compression rely on efficient test response compaction. Often, a circuit under test contains sources of unknown values (Xs), for instance uninitialized memories. These X values propagate through the circuit and may spoil the response signatures. The standard way to overcome this problem is X-masking. Outputs that carry an X value are usually determined by logic simulation. In this paper, we show that the number of Xs is significantly overestimated by logic simulation and, in consequence, outputs are over-masked as well. An efficient method for the exact computation of output Xs is presented for the first time. The resulting X-masking promises significant gains with respect to test time, test volume, and fault coverage.
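
    The exact check behind the SAT formulation can be shown on a toy circuit with a reconvergent X: an output is a *real* X only if assignments to the X sources can produce both 0 and 1. Here the check is brute-forced by enumeration, which the paper replaces with SAT reasoning for real designs; the circuit is a hypothetical example:

    ```python
    from itertools import product

    def out(d, x1):
        """out = (x1 AND d) OR (NOT x1 AND d) -- logically equal to d."""
        return (x1 and d) or ((not x1) and d)

    def exact_is_x(circuit, known_inputs, n_x_sources):
        vals = {circuit(*known_inputs, *xs)
                for xs in product((0, 1), repeat=n_x_sources)}
        return len(vals) > 1            # both values reachable -> real X

    # 3-valued simulation sees X on both OR inputs and pessimistically reports X;
    # the exact analysis proves the output is constant for each value of d.
    for d in (0, 1):
        print(f"d={d}: simulation says X, really X? {exact_is_x(out, (d,), 1)}")
    ```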
  • On Selection of Testable Paths with Specified Lengths for Faster-Than-At-Speed Testing

    Page(s): 45 - 48
    PDF (161 KB) | HTML

    Faster-than-at-speed testing provides an effective way to detect small delay defects (SDDs). It requires test patterns to be carefully classified into groups according to the delay of the sensitized paths, and each group of patterns is applied at a corresponding frequency. In this paper, we propose to generate tests for faster-than-at-speed testing using the path delay fault (PDF) model and a single-path sensitization criterion. An effective path selection and grouping method is introduced that can quickly and accurately identify paths whose delay falls into a given delay span. Several techniques are used to improve the efficiency of the testable path selection procedure. Experimental results on ISCAS'89 benchmark circuits show that the proposed method achieves high transition fault coverage and high test quality for SDDs with low CPU time.
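
    Delay-window path selection can be sketched as a depth-first enumeration with min/max remaining-delay pruning (the graph and gate delays are illustrative; the paper additionally checks single-path sensitizability and groups the results by frequency):

    ```python
    graph = {"in": [("g1", 2), ("g2", 5)], "g1": [("g2", 3), ("out", 9)],
             "g2": [("out", 4)], "out": []}

    # min/max delay from each node to "out", used to prune hopeless branches
    min_to, max_to = {"out": 0}, {"out": 0}
    for node in ["g2", "g1", "in"]:                 # reverse topological order
        min_to[node] = min(d + min_to[s] for s, d in graph[node])
        max_to[node] = max(d + max_to[s] for s, d in graph[node])

    LO, HI = 9, 12                                  # target delay window

    def paths(node, delay, prefix, found):
        if node == "out":
            if LO <= delay <= HI:
                found.append((prefix, delay))
            return found
        for succ, d in graph[node]:
            # prune if even the extreme continuations cannot hit the window
            if delay + d + min_to[succ] <= HI and delay + d + max_to[succ] >= LO:
                paths(succ, delay + d, prefix + [succ], found)
        return found

    print(paths("in", 0, ["in"], []))
    ```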
  • Test Pattern Selection and Compaction for Sequential Circuits in an HDL Environment

    Page(s): 53 - 56
    PDF (356 KB) | HTML

    In this paper we revisit sequential circuit test generation and use a selective random-pattern test generation method implemented in an HDL environment. The method uses a statistical expectation graph and the states of the sequential circuit to select appropriate test vectors, achieving better fault coverage and a more compact test set. To further reduce the size of the generated test set, a static compaction method, also implemented in an HDL environment, is applied after test generation. The experimental results show that selecting good test patterns from among random patterns not only can be implemented dynamically in an HDL design environment, but also yields better fault coverage and shorter test sequences than some traditional deterministic methods. In addition, we show that static test set compaction can considerably reduce the length of the test sequences obtained by the proposed method for sequential designs.
  • Tackling the Path Explosion Problem in Symbolic Execution-Driven Test Generation for Programs

    Page(s): 59 - 64
    PDF (296 KB) | HTML

    Symbolic techniques have been shown to be very effective in path-based test generation; however, they fail to scale to large programs due to the exponential number of paths to be explored. In this paper, we focus on tackling this path explosion problem and propose search strategies to achieve quick branch coverage under symbolic execution while exploring only a fraction of the paths in the program. We present a reachability-guided strategy that makes use of the reachability graph of the program to explore unvisited portions of the program, and a conflict-driven backtracking strategy that utilizes conflict analysis to perform non-chronological backtracking. We present experimental evidence that these strategies can significantly reduce the search space and improve the speed of test generation for programs.
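
    The reachability-guided strategy can be sketched abstractly: among the pending symbolic states, always expand the one closest in the control-flow graph to a branch that is still uncovered. The CFG and the 'states' below are toy stand-ins for a real symbolic executor, and path-feasibility checks are omitted:

    ```python
    import heapq
    from collections import deque

    cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E", "F"],
           "E": [], "F": []}
    uncovered = {("A", "C"), ("D", "F")}        # branch edges not yet taken

    def dist_to_uncovered(node):
        """BFS distance from node to the source of any uncovered branch."""
        q, seen = deque([(node, 0)]), {node}
        while q:
            n, d = q.popleft()
            if any(src == n for src, _ in uncovered):
                return d
            for s in cfg[n]:
                if s not in seen:
                    seen.add(s)
                    q.append((s, d + 1))
        return float("inf")

    worklist = [(dist_to_uncovered("A"), "A")]
    while worklist and uncovered:
        _, node = heapq.heappop(worklist)       # state nearest uncovered code
        for succ in cfg[node]:                  # 'execute' both branch sides
            uncovered.discard((node, succ))
            worklist.append((dist_to_uncovered(succ), succ))
        heapq.heapify(worklist)
    print("all target branches covered:", not uncovered)
    ```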
  • A Reliability Model for Object-Oriented Software

    Page(s): 65 - 70
    PDF (159 KB) | HTML

    Software reliability is one of the important attributes of dependable systems. A number of software reliability models have been developed to date, but few of them take object-oriented features into account. Nowadays, more and more software systems are developed with object-oriented technology, and object-oriented programming languages contain language features, most notably inheritance, polymorphism, and dynamic binding, that have a strong impact on software reliability. In this paper, we propose a model for object-oriented software reliability estimation. The central idea is to incorporate the complexity of the software under test and the test effectiveness as influence factors in the reliability model, so as to make it more adequate and accurate for object-oriented software. Results from substantial experiments show the rationality and usefulness of the new model.
  • A New Approach to Generating High Quality Test Cases

    Page(s): 71 - 76
    PDF (351 KB) | HTML

    High-quality test cases can effectively detect software errors and ensure software quality. However, apart from regular-expression-based test generation, test cases generated by other model-based methods do not retain the complete information of the model, resulting in test inadequacy. Test cases derived directly from a regular expression, on the other hand, have prohibitive lengths that steadily increase test cost. To obtain high-quality test cases, we propose a new test generation method based on regular expression decomposition. Unlike previous model decomposition techniques, our method emphasizes information completeness after the regular expression is decomposed. Based on two empirical assumptions, we propose two decomposition processes and three decomposition rules. We then perform a case study to demonstrate our approach. The results show that our approach generates high-quality test cases while avoiding the problem of test complexity.
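
    One decomposition step can be sketched directly: split the regular expression at top-level alternations (respecting parentheses) so each sub-expression yields shorter test cases. The example expression is illustrative, and the paper's two decomposition processes and three rules go considerably further:

    ```python
    import re

    def split_top_level_union(expr):
        """Split expr at '|' characters that are not nested in parentheses."""
        parts, depth, start = [], 0, 0
        for i, ch in enumerate(expr):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch == "|" and depth == 0:
                parts.append(expr[start:i])
                start = i + 1
        parts.append(expr[start:])
        return parts

    expr = "ab(c|d)e|fg+h|ij"
    for sub in split_top_level_union(expr):
        print(f"sub-expression {sub!r} compiles:", bool(re.compile(sub)))
    ```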
  • A Study on Software Reliability Prediction Based on Transduction Inference

    Page(s): 77 - 80
    PDF (148 KB) | HTML

    Non-parametric statistical methods are applied to show that the early failure behavior of the testing process may have little impact on the later failure process; thus, in software failure time prediction, one may not have enough information to estimate the entire software failure process well, yet still have enough information to estimate the failure data at a given instant. The prediction accuracy of software reliability models based on recurrent neural networks, feed-forward neural networks, relevance vector machines, support vector machines, and several non-homogeneous Poisson process models is compared. Experimental results show that software failure time prediction models based on transduction inference can achieve higher prediction accuracy.
  • Formula-Oriented Compositional Minimization in Model Checking

    Page(s): 81 - 84
    PDF (307 KB) | HTML

    This paper presents a new approach to reducing finite state machines with respect to a CTL formula in order to alleviate the state explosion problem. Reduction is achieved by removing the parts of the original machines that are useless to the formula. The main contribution of this paper is to exploit relations among the subformulas of the CTL formula to gain further reduction, and to extend the traditional pruning method, which handles only existential formulas, to universal formulas. Based on this reduction, a large system consisting of several components can be verified by evaluating properties on a reduced version of the system, built by composing the components one by one and reducing after each composition. Experimental results show the effectiveness of the approach; it has particularly great potential when a property is written in a detailed way, that is, when it describes the system part by part.
  • Variation-Aware Fault Modeling

    Page(s): 87 - 93
    PDF (369 KB) | HTML

    To achieve high product quality for nano-scale systems, both realistic defect mechanisms and process variations must be taken into account. While existing approaches for variation-aware digital testing either restrict themselves to special classes of defects or assume given probability distributions to model variability, the proposed approach combines defect-oriented testing with statistical library characterization. It uses Monte Carlo simulations at the electrical level to extract the delay distributions of cells in the presence of defects as well as for the defect-free case. This makes it possible to distinguish the effects of process variations on the cell delay from defect-induced cell delays under process variations. To provide a suitable interface for test algorithms at higher levels of abstraction, the distributions are represented as histograms and stored in a histogram database (HDB). Thus, the computationally expensive defect analysis needs to be performed only once, as a preprocessing step for library characterization, and statistical test algorithms do not require any low-level information beyond the HDB. The generation of the HDB is demonstrated for primitive cells in a 45 nm technology.
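
    Building such a histogram database can be sketched as follows; the Gaussian delay model and the resistive-open defect penalty are hypothetical stand-ins for the paper's electrical-level Monte Carlo characterization:

    ```python
    import random
    from collections import Counter

    NOMINAL_PS, SIGMA_PS, BIN_PS, SAMPLES = 50.0, 4.0, 2.0, 10_000

    def cell_delay(defect_extra_ps=0.0):
        """One Monte Carlo sample: process variation plus optional defect delay."""
        return random.gauss(NOMINAL_PS, SIGMA_PS) + defect_extra_ps

    def histogram(samples):
        """Bin delay samples into BIN_PS-wide buckets."""
        return Counter(round(d / BIN_PS) * BIN_PS for d in samples)

    hdb = {   # histogram database, keyed by (cell, fault condition)
        ("NAND2", "fault-free"): histogram(cell_delay() for _ in range(SAMPLES)),
        ("NAND2", "res-open"):  histogram(cell_delay(12.0) for _ in range(SAMPLES)),
    }
    # Downstream statistical test algorithms read only the HDB:
    for key, hist in hdb.items():
        mean = sum(b * cnt for b, cnt in hist.items()) / SAMPLES
        print(key, "mean delay %.1f ps" % mean)
    ```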