
Eighth IEEE International High-Level Design Validation and Test Workshop, 2003

Date: 12-14 Nov. 2003


Displaying Results 1 - 25 of 29
  • Eighth IEEE International High-Level Design Validation and Test Workshop

    Freely Available from IEEE
  • Nano, quantum, and molecular computing: are we ready for the validation and test challenges?

    Page(s): 3 - 7

    In recent years, substantial research effort has been devoted to nanotechnology, quantum computation, and biologically inspired computing. While we face many challenges regarding their implementability, architectural visions, and design automation, little has been done in high-level design and validation to look further into the future and consider the state of the art in design validation and test in such a minuscule-technology era. Very few research results have been reported on the design and validation challenges for these technologies. This is a matter of concern, because the technology of the small will be ridden with random faults; architectural design strategies therefore need to change to take these stochastic failure models into account in order to build robust designs. Validation of such designs also has to capture the stochastic behavioral models of the technology, so traditional validation and testing techniques will not work directly. Are we getting ready with our theory, technology, and tools to address these challenges? This futuristic panel asks technology and computer-aided design experts, as well as funding-agency program managers, about the technological barriers to be surpassed and how funding agencies such as NSF are ramping up for this technological future.

  • Software-based self-test methodology for crosstalk faults in processors

    Page(s): 11 - 16

    Because signal integrity problems are inherently sensitive to timing, power supply voltage, and temperature, it is desirable to test AC failures such as crosstalk-induced errors at operational speed and in the circuit's natural operational environment. To overcome the daunting cost and increasing performance hindrance of high-speed external testers, Software-Based Self-Test (SBST) is proposed as a high-quality, low-cost, at-speed testing solution for AC failures in programmable processors and Systems-on-Chip (SoCs). SBST utilizes low-cost testers and both applies tests and captures test responses in the natural operational environment; it thus avoids an artificial testing environment and the inaccuracies induced by external testers. Unlike testing for stuck-at faults, testing for crosstalk faults requires a sequence of test vectors delivered at operational speed. SBST applies tests in functional mode using instructions. Different instructions impose different controllability and observability constraints on a module-under-test (MUT), so the complexity of searching for an appropriate sequence of instructions and operands becomes prohibitively high. In this paper, we propose a novel methodology that conquers this complexity by efficiently combining structural test generation techniques with instruction-level constraints. The MUT over several time frames is automatically flattened and augmented with Super Virtual Constraint Circuits (SuperVCCs), which guide an automatic test pattern generation (ATPG) tool to select appropriate test instructions and operands. The proposed methodology enables automatic test-program generation and a high-fidelity test solution for AC failures. Experimental results are shown on a commercial embedded processor (Xtensa™ from Tensilica Inc.).

  • FPgen - a test generation framework for datapath floating-point verification

    Page(s): 17 - 22

    FPgen is a new test generation framework targeted toward the verification of the floating point (FP) datapath, through the generation of test cases. This framework provides the capacity to define virtually any architectural FP coverage model, consisting of verification tasks. The tool supplies strong constraint solving capabilities, allowing the generation of random tests that target these tasks. We present an overview of FPgen's functionality, describe the results of its use for the verification of several FP units, and compare its efficiency with existing test generators.

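FPgen's constraint-driven generation can be imitated on a toy scale. The sketch below is illustrative Python, not FPgen's solver: the coverage task ("addition overflows to infinity") and the biased sampler are invented here to show the shape of coverage-task-driven operand generation.

```python
import random

def generate_for_task(is_covered, sample, max_tries=100000, seed=0):
    """Hypothetical sketch: draw random operands until a coverage task is hit."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        a, b = sample(rng)
        if is_covered(a, b):
            return a, b
    return None

# Coverage task: "double-precision addition overflows to infinity".
task = lambda a, b: a + b == float("inf")

# Constraint-aware sampling: bias operands toward the top of the double range,
# so the task is actually reachable within the try budget.
near_max = lambda rng: (rng.uniform(1e308, 1.7e308), rng.uniform(1e308, 1.7e308))

pair = generate_for_task(task, near_max)
```

A real generator replaces the rejection loop with constraint solving, so that even tasks with astronomically small random-hit probability are reached directly.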
  • Piparazzi: a test program generator for micro-architecture flow verification

    Page(s): 23 - 28

    Because of their complexity, modern microprocessors need new tools that generate tests for micro-architectural events. Piparazzi is a test generator, developed at IBM, that generates (architectural) test programs for micro-architectural events. Piparazzi uses a declarative model of the micro-architecture and the user's definition of the required event to create an instance of a Constraint Satisfaction Problem (CSP). It then uses a dedicated CSP solver to generate a test program that covers the specific event. We show how Piparazzi yields significant improvements in covering micro-architectural events, by describing its technology and by exhibiting experimental results. Piparazzi has already been successful in finding both functional and performance bugs that could only be discovered using an exact micro-architectural model of the processor.

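The CSP formulation can be pictured with a deliberately tiny example: a generic backtracking solver (not Piparazzi's dedicated engine) searching for operand values that trigger one micro-architectural event. The event (carry-out of an 8-bit ADD) and the variable domains are invented for illustration.

```python
def solve(vars_domains, constraint, assignment=None):
    """Minimal backtracking CSP solver (illustrative only)."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(vars_domains):
        return dict(assignment)
    var = next(v for v in vars_domains if v not in assignment)
    for val in vars_domains[var]:
        assignment[var] = val
        if constraint(assignment):
            result = solve(vars_domains, constraint, assignment)
            if result:
                return result
        del assignment[var]
    return None

# Event to cover: an 8-bit ADD produces a carry-out.
domains = {"r1": range(256), "r2": range(256)}

def carry_out(partial):
    if len(partial) < 2:
        return True               # partial assignments stay open
    return partial["r1"] + partial["r2"] > 0xFF

operands = solve(domains, carry_out)
```

Declaring the event as a constraint and letting a solver search for satisfying operands is the essence of the approach; the real tool does this against a full declarative pipeline model.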
  • Automatic functional verification of memory oriented global source code transformations

    Page(s): 31 - 36

    In this paper, we present a fully automatic technique to verify an important class of optimizing program transformations applied to reduce accesses to the data memory. These are prevalent while developing software for power and performance-efficient embedded multimedia systems. The verification of the transformations relies on an automatic proof of functional equivalence of the initial and the transformed program functions. It is based on extracting and reasoning on the polyhedral models representing the dependencies between the elements of the output and the input variables, which are preserved under the transformations considered. If the verification reports failure, the technique also identifies the errors and their location in the function, hence providing an effective means to debug the transformed program function.

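The class of transformation being verified can be pictured with a minimal example: a loop rewritten to cut data-memory accesses, together with an equivalence check. Here equivalence is checked by exhaustive simulation on small inputs purely for illustration; the paper instead proves it symbolically from polyhedral dependence models.

```python
import itertools

def original(a):
    # reads a[i] from memory three times per iteration
    out = [0] * len(a)
    for i in range(len(a)):
        out[i] = a[i] * a[i] + a[i]
    return out

def transformed(a):
    # transformation: cache a[i] in a scalar to reduce data-memory accesses
    out = [0] * len(a)
    for i in range(len(a)):
        t = a[i]                  # single memory read
        out[i] = t * t + t
    return out

# Toy equivalence check by exhaustive simulation over small input vectors.
equivalent = all(original(list(v)) == transformed(list(v))
                 for v in itertools.product(range(-2, 3), repeat=3))
```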
  • Refactoring digital hardware designs with assertion libraries

    Page(s): 37 - 42

    Refactoring is the concept of restructuring software to increase its readability and maintainability without changing its observable behavior. To the best of our knowledge, the concept of refactoring has only been applied to software development. In this paper, we describe a methodology that extends this concept to the digital hardware design process using the Open Verification Library. We present a case study of a network-protocol bus functional model in which we want to increase the design's readability so that maintenance and bug fixes are less costly.

  • High-level optimization of pipeline design

    Page(s): 43 - 48

    We describe an automatic method for synthesizing pipelined processors that optimizes throughput and automatically resolves control and data hazards. We present rules that describe how to resolve hazards based on the data dependencies between functional units. We demonstrate our method by showing optimal pipeline configurations of the DLX.

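A data-hazard rule of the kind such methods automate can be sketched as a simple RAW (read-after-write) check over an instruction window. This is illustrative Python; the instruction encoding and the latency parameter are invented here, not taken from the paper.

```python
def raw_hazards(instrs, latency=1):
    """Flag read-after-write hazards between instructions fewer than
    `latency` slots apart. Each instruction: (dest_reg, [src_regs])."""
    hazards = []
    for i, (dest, _) in enumerate(instrs):
        for j in range(i + 1, min(i + 1 + latency, len(instrs))):
            _, srcs = instrs[j]
            if dest in srcs:
                hazards.append((i, j, dest))
    return hazards

# DLX-style sequence: ADD r1,r2,r3 ; SUB r4,r1,r5 ; OR r6,r2,r7
# The second instruction reads r1 right after it is written: a RAW hazard.
prog = [("r1", ["r2", "r3"]), ("r4", ["r1", "r5"]), ("r6", ["r2", "r7"])]
```

A synthesis tool would resolve each flagged hazard by forwarding or by inserting a stall, picking the configuration that maximizes throughput.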
  • Integrating CNF and BDD based SAT solvers

    Page(s): 51 - 56

    This paper presents an integrated infrastructure of CNF and BDD based tools to solve the Boolean Satisfiability problem. We use both CNF and BDDs not only as a means of representation, but also to efficiently analyze, prune and guide the search. We describe a method to successfully re-orient the decision making strategies of contemporary CNF tools in a manner that enables an efficient integration with BDDs. Keeping in mind that BDDs suffer from memory explosion problems, we describe learning-based search space pruning techniques that augment the already employed conflict analysis procedures of CNF tools. Our infrastructure is targeted towards solving those hard-to-solve instances where contemporary CNF tools invest significant search times. Experiments conducted over a wide range of benchmarks demonstrate the promise of our approach.

  • Logic transformation and coding theory-based frameworks for Boolean satisfiability

    Page(s): 57 - 62

    This paper proposes a new framework for solving Boolean satisfiability. The first approach is based on structural analysis using a circuit representation: we convert the given CNF into a multilevel circuit through testability-driven transformation and optimization, and then apply a test technique developed by the authors to decide SAT. This technique builds on concepts from the earlier verification tool VERILAT. We then derive algebraic coding-theory results that provide a lower bound on the number of solutions to SAT problems. These frameworks have real potential for providing new theoretical insights.

  • Enhancing SAT-based equivalence checking with static logic implications

    Page(s): 63 - 68

    We propose a novel technique to improve SAT-based Combinational Equivalence Checking (CEC) by statically adding meaningful clauses to the CNF formula of the miter circuit. A fast preprocessing step builds up the implication graph for the miter circuit under verification, resulting in a large set of direct, indirect and extended backward implications. The non-trivial implications are converted into two-literal clauses and added to the miter CNF database. These added clauses constrain the search space and provide correlation among the different variables, which enhances Boolean Constraint Propagation (BCP). Experimental results on ISCAS'85 CEC instances show that with the added clauses, an average speedup of more than 950x was achieved.

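The miter construction underlying SAT-based CEC is easy to sketch: two implementations of the same function feed an XOR, and the designs are equivalent exactly when no input drives the miter output to 1. The toy enumeration below stands in for the SAT search; the paper's actual contribution (implication-derived two-literal clauses added to the miter CNF to speed up BCP) is not reproduced here.

```python
def impl1(a, b):
    # XOR built as (a & ~b) | (~a & b)
    return (a and not b) or (not a and b)

def impl2(a, b):
    # XOR built as (a | b) & ~(a & b)
    return (a or b) and not (a and b)

def miter(a, b):
    # miter output: 1 exactly when the two circuits disagree
    return impl1(a, b) != impl2(a, b)

# SAT question: is there an input making the miter output 1?
# A SAT solver searches the CNF encoding; two inputs let us enumerate.
counterexample = next(((a, b) for a in (False, True) for b in (False, True)
                       if miter(a, b)), None)
equivalent = counterexample is None
```

An unsatisfiable miter means equivalence; a satisfying assignment is a distinguishing input vector.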
  • Relating vehicle-level and network-level reliability through high-level fault injection

    Page(s): 71 - 76

    This paper presents recent results on improving the evaluation of the reliability of network connections in automotive environments. The evaluation is based on performance thresholds that aim to detect performance loss under particular types of fault occurrence. We modeled the vehicle network at the functional level and then integrated it into a complete vehicle model describing both electronic and mechanical behavior; in this way it is possible to build an automated fault-injection environment to forecast the effects of network-level faults on the vehicle dynamics. Furthermore, an on-line threshold manager makes it possible to interrupt a simulation as soon as a fault activates an error threshold, reducing the overall campaign simulation time.

  • Testing ThumbPod: Softcore bugs are hard to find

    Page(s): 77 - 82

    We present the debug and test strategies used in the ThumbPod system for Embedded Fingerprint Authentication. ThumbPod uses multiple levels of programming (Java, C and hardware) with a hierarchy of programmable architectures (KVM on top of a SPARC core on top of an FPGA). The ThumbPod project teamed up seven graduate students in the concurrent development and verification of all these programming layers. We pay special attention to the strengths and weaknesses of our bottom-up testing approach.

  • Verifying LOC based functional and performance constraints

    Page(s): 83 - 88

    In the era of billion-transistor design, it is critical to establish effective verification methodologies from the system level all the way down to the implementations. Assertion languages (e.g. IBM's Sugar2.0, Synopsys's OpenVera) have gained wide acceptance for specifying functional properties for automatic validation. They are, however, based on linear temporal logic (LTL), and hence have certain limitations. Logic of constraints (LOC) was introduced for specifying quantitative performance constraints, and is particularly suitable for automatic transaction level analysis. We analyze LTL and LOC, and show that they have different domains of expressiveness. Using both LTL and LOC can make the verification process more effective in the context of simulation assertion checking as well as formal verification. Through industrial case studies, we demonstrate the usefulness of this verification methodology.

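A quantitative LOC-style constraint is checked over a simulation trace by pairing event instances by index. The sketch below is a minimal illustration (the trace format and checker are invented here, not the paper's tool): it checks a latency constraint of the form t(out[i]) - t(in[i]) <= bound for every transaction i.

```python
def check_loc_latency(trace, bound):
    """Report indices i violating t(out[i]) - t(in[i]) <= bound.
    Event instances are paired by index, as in LOC (illustrative sketch)."""
    ins, outs = trace["in"], trace["out"]
    return [i for i, (ti, to) in enumerate(zip(ins, outs))
            if to - ti > bound]

# Timestamps of four transactions entering and leaving a component.
trace = {"in": [0, 10, 20, 30], "out": [4, 13, 29, 33]}
bad = check_loc_latency(trace, bound=5)
```

Such index-based quantitative constraints are exactly what plain LTL cannot express directly, which is the complementarity the paper argues for.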
  • Comparison of Bayesian networks and data mining for coverage directed verification [simulation-based verification]

    Page(s): 91 - 95

    Directed random simulation is today one of the most commonly used verification techniques. Because this technique provides no proof of correctness, it is important to test the design as completely as possible. This is a hard-to-reach goal that requires substantial computing power and much human interaction. Bayesian networks have been proposed for implementing an automatic feedback loop (Shai Fine et al., 40th Design Automation Conference, 2003). This paper introduces another implementation of an automatic feedback loop using data mining techniques. Both approaches are applied to the same design and the results are compared.

  • Enhancing the control and efficiency of the covering process [logic verification]

    Page(s): 96 - 101

    Coverage directed test generation (CDG) is a technique for providing feedback from the coverage domain back to a generator that produces new stimuli to the tested design. In this paper, we describe two algorithms that act in a CDG framework. The first algorithm controls the coverage events distribution using a "water-filling" approach. The second algorithm improves the efficiency of the covering process using clustering techniques.

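"Water-filling" the coverage distribution can be pictured as repeatedly steering the next stimulus toward the currently least-covered event, topping up the lowest counts first. The sketch below is an invented illustration of that idea, not the paper's algorithm.

```python
def water_fill(counts, budget):
    """Distribute `budget` new stimuli so the lowest coverage counts are
    raised first (water-filling policy; illustrative sketch only)."""
    alloc = [0] * len(counts)
    for _ in range(budget):
        # pick the event whose effective count (observed + planned) is lowest
        i = min(range(len(counts)), key=lambda j: counts[j] + alloc[j])
        alloc[i] += 1
    return alloc

# Three coverage events hit 5, 1 and 3 times so far; 6 more cases to place.
allocation = water_fill([5, 1, 3], 6)
```

The effect is that effective counts level out (here all three events end at 5), which is the "flat water surface" the name suggests.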
  • Functional vector generation for assertion-based verification at behavioral level using interval analysis

    Page(s): 102 - 107

    The 2001 International Technology Roadmap for Semiconductors (ITRS) predicts that verification is unlikely to be manageable for designs envisioned beyond 2007 without design-for-verifiability. Some CAD vendors have promoted assertion-based verification (ABV) as one of the first commercial design-for-verification techniques. In order to handle complex designs, this methodology has to be complemented with tools that automatically generate vectors or counterexamples that violate or verify proposed assertions or constraints. This paper presents an assertion-checking technique for behavioral models that combines a non-linear solver with state-exploration techniques and avoids expanding behavior into logic equations. The kernel of the technique is a modified interval analysis (MODIA) that avoids most of the problems of classical interval analysis (IA) and improves reuse during vector generation. The results show that the proposed technique handles data-dominated designs very efficiently, where research and commercial assertion/property checkers either fail or need considerably more CPU effort.

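Classical interval analysis, which MODIA refines, bounds a behavioral expression by propagating input ranges through interval operators; the asserted output range is then compared against the computed bound. A minimal sketch with an invented example expression (out = x*x + 2*x):

```python
def iadd(x, y):
    # interval addition: [a,b] + [c,d] = [a+c, b+d]
    return (x[0] + y[0], x[1] + y[1])

def imul(x, y):
    # interval multiplication: min/max over the four endpoint products
    ps = [a * b for a in x for b in y]
    return (min(ps), max(ps))

def output_bounds(in_range):
    """Bound out = x*x + 2*x over the input interval by classical IA.
    (MODIA refines this to curb over-approximation; illustrative only.)"""
    sq = imul(in_range, in_range)
    return iadd(sq, imul((2, 2), in_range))

bounds = output_bounds((0, 3))    # x in [0, 3]
```

An assertion such as "out <= 15" is then verified if the computed upper bound does not exceed 15; classical IA can over-approximate (flagging spurious violations), which is the weakness MODIA targets.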
  • Redundant functional faults reduction by saboteurs synthesis [logic verification]

    Page(s): 108 - 113

    High-level descriptions of digital systems are perturbed by using high-level fault models in order to perform functional verification. Fault lists should be created accurately in order to avoid wasting time during ATPG and fault simulation. However, automatic fault-injection tools can insert redundant faults, which are not symptoms of design errors. Such redundant faults should be removed from the fault list before starting the verification session. This paper proposes an automatic strategy for high-level fault injection that removes redundant bit-coverage faults. An efficient implementation of a bit-coverage saboteur is proposed, which allows synthesis to be used for redundant-fault removal. Experimental results highlight the effectiveness of the methodology: with the proposed injection strategy, functional ATPG time is reduced and fault coverage is increased.

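The notion of a redundant bit-coverage fault can be shown in a few lines: a saboteur flips one bit of a signal, and the fault is redundant when no input distinguishes the faulty design from the good one. The toy model below is invented for illustration; the paper's contribution is identifying and removing such faults via synthesis rather than by enumeration.

```python
def good(a, b):
    # reference behavior: AND of two 2-bit values, reduced to bit 0
    return (a & b) & 0b01

def saboteur(value, flip_bit):
    """Bit-coverage saboteur: XOR-flips one bit of a signal (sketch)."""
    return value ^ (1 << flip_bit)

def fault_is_redundant(flip_bit):
    # a fault is redundant if no input pair distinguishes faulty from good
    return all(good(saboteur(a, flip_bit), b) == good(a, b)
               for a in range(4) for b in range(4))

# Flipping bit 1 of `a` never reaches the output (masked by & 0b01),
# so that fault is redundant; flipping bit 0 is detectable.
redundant = fault_is_redundant(1)
detectable = not fault_is_redundant(0)
```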
  • ATPG-based preimage computation: efficient search space pruning with ZBDD

    Page(s): 117 - 122

    Computing the image/preimage is a fundamental step in formal verification of hardware systems. Conventional OBDD-based methods suffer from spatial explosion, since OBDDs can grow exponentially in large designs. SAT/ATPG-based methods, on the other hand, are less demanding on memory, but their run-time can be huge, since they must explore an exponential search space. In order to reduce this temporal explosion, efficient learning techniques are needed. In this paper, we present a new ZBDD-based method to compactly store and efficiently search previously explored search-states for ATPG-based preimage computation. We learn from these search-states and avoid searching their subsets or supersets. Both solution and conflict subspaces are pruned based on simple set operations using ZBDDs. We integrate our techniques into an ATPG engine and demonstrate their efficiency on ISCAS'89 benchmark circuits. Experimental results show that significant search-space pruning for preimage computation is achieved, compared to previous methods.

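What a preimage is can be stated in one line for an explicit toy model: all states whose successor lies in the target set. The enumeration below is purely illustrative; the paper's point is computing this via ATPG branch-and-bound while storing and pruning explored search-states in ZBDDs, which this sketch does not reproduce.

```python
def preimage(next_state, targets, state_bits=3):
    """Preimage by explicit enumeration over a small state space
    (illustrative stand-in for the ATPG-based computation)."""
    return {s for s in range(2 ** state_bits) if next_state(s) in targets}

# Toy transition function: a modulo-8 counter.
step = lambda s: (s + 1) % 8
pre = preimage(step, {0, 4})
```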
  • BDD-based verification of scalable designs

    Page(s): 123 - 128

    Many formal verification techniques make use of Binary Decision Diagrams (BDDs). In most applications the choice of the variable ordering is crucial for the performance of the verification algorithm. Usually BDDs operate on the Boolean level, i.e. BDDs are a bit-level data structure. In this paper we present a method to speed up BDD-based verification of scalable designs that makes use of a learning process for word-level information. In a preprocessing step, a scalable ordering is extracted from the RTL and then used as a static ordering for large designs. Experimental results show that significant improvements can be achieved.

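The sensitivity of BDD size to variable ordering, which motivates extracting good static orderings, can be reproduced with a toy ROBDD builder. The sketch below (exponential-time, hash-consing by Shannon expansion; illustrative only) counts internal nodes of f = a1&b1 | a2&b2 | a3&b3 under an interleaved and a separated ordering.

```python
def bdd_size(f, order):
    """Count internal nodes of the ROBDD of f under `order`, built by
    Shannon expansion with hash-consing (toy, exponential-time builder)."""
    table = {}
    def build(assign, i):
        if i == len(order):
            return f(assign)                  # terminal: True or False
        v = order[i]
        lo = build({**assign, v: False}, i + 1)
        hi = build({**assign, v: True},  i + 1)
        if lo == hi:                          # reduction: skip useless node
            return lo
        key = (v, lo, hi)                     # hash-consing: share equal nodes
        return table.setdefault(key, key)
    build({}, 0)
    return len(table)

f = lambda s: ((s["a1"] and s["b1"]) or (s["a2"] and s["b2"])
               or (s["a3"] and s["b3"]))
interleaved = bdd_size(f, ["a1", "b1", "a2", "b2", "a3", "b3"])
separated   = bdd_size(f, ["a1", "a2", "a3", "b1", "b2", "b3"])
```

The interleaved ordering yields a linear-size BDD while the separated one is exponential in the number of product terms, which is why a good static ordering learned from small instances pays off on large ones.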
  • Matching in the presence of don't cares and redundant sequential elements for sequential equivalence checking

    Page(s): 129 - 134

    Full sequential equivalence checking by state-space traversal has been shown to be impractical for large designs. To address state-space explosion, new approaches have been proposed that exploit structural characteristics of a design and make use of multiple analysis engines (e.g. BDDs, simulation, SAT) to transform the sequential equivalence checking problem into a combinational one. While these induction-based approaches have generally been successful, they cannot reach a proof of equivalence in the presence of complex transformations between the reference design and its implementation. One such transformation is redundant flip-flop (FF) removal: FFs may be removed by redundancy-removal or don't-care optimization techniques applied by synthesis tools, so some FFs in the reference design may have no equivalent FFs in the implementation netlist. Recent research in this area has proposed specific solutions for particular cases: matching in the presence of redundant constant-input FFs has been addressed, and identification of sequential redundancy is performed. This paper presents an in-depth study of some possible causes of unmatched FFs due to redundancy removal, and proposes a generic approach to achieve proof of equivalence in the presence of redundant FFs. Our approach is independent of specific synthesis transformations; it achieves matching in the presence of complex redundancies and performs formal equivalence checking in the presence of don't cares. The experimental results show a significant improvement in FF matching rates compared to industrial equivalence-checking tools. This higher matching translates directly into a higher success rate in proving equivalence.

  • Mathematical framework for representing discrete functions as word-level polynomials

    Page(s): 135 - 139

    This paper presents a mathematical framework for modeling arithmetic operators and other RTL design modules as discrete word-level functions and proposes a polynomial representation of those functions. The proposed representation attempts to bridge the gap between bit-level BDD representations and word-level representations, such as *BMDs and TEDs.

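A word-level function on a finite domain always admits a polynomial representation; the sketch below recovers one by Lagrange interpolation over exact rationals. This is illustrative background only (*BMDs, TEDs and the paper's framework use far more structured canonical forms); the example function is invented.

```python
from fractions import Fraction

def mul_linear(p, c):
    """Multiply coefficient list p (p[k] = coeff of x**k) by (x - c)."""
    q = [Fraction(0)] * (len(p) + 1)
    for k, a in enumerate(p):
        q[k + 1] += a
        q[k] -= c * a
    return q

def interpolate(points):
    """Lagrange interpolation: the unique polynomial of degree
    < len(points) through the (x, y) samples, in exact arithmetic."""
    coeffs = [Fraction(0)] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [Fraction(1)], Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = mul_linear(basis, xj)   # multiply by (x - xj)
                denom *= xi - xj
        for k in range(len(basis)):
            coeffs[k] += yi * basis[k] / denom
    return coeffs

def evaluate(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

# Word-level view of a discrete function: f(x) = x*x mod 4 on a 2-bit input.
samples = [(x, x * x % 4) for x in range(4)]
poly = interpolate(samples)
```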
  • High-level test generation for hardware testing and software validation

    Page(s): 143 - 148

    It is now common for design teams to develop systems in which hardware and software components cooperate; they thus face the challenging task of validating and testing systems containing both hardware and software parts. In this paper a high-level test generation approach is presented that is able to produce input stimuli that can be fruitfully exploited for test and validation of both hardware and software components. Experimental results show that the proposed approach produces high-quality vectors in terms of the adopted metrics for hardware and software faults.

  • Scheduling of transactions for system-level test-case generation

    Page(s): 149 - 154

    We present a methodology for scheduling system-level transactions generated by a test-case generator. A system, in this context, may be composed of multiple processors, busses, bus-bridges, memories, etc. The methodology is based on an exploration of scheduling abilities in a hardware system. At its core is a language for specifying transactions and their ordering; through the use of hierarchy, the language makes it possible to apply high-level scheduling requests. The methodology is realized in X-Gen, a system-level test-case generator used at IBM. The model and algorithm used by this tool are also discussed.

  • A comparison of BDDs, BMC, and sequential SAT for model checking

    Page(s): 157 - 162

    BDD-based model checking and bounded model checking (BMC) are the main techniques currently used in formal verification. In general, there are robustness issues in SAT-based versus BDD-based model checking. The research reported in this paper attempts to analyze the asymptotic run-time behavior of modern BDD-based and SAT-based model checking techniques to determine the circuit characteristics that lead to worst-case behavior in each approach. We show evidence for a run-time characterization based on sequential correlation and clause density, and demonstrate that it is possible to predict the worst-case behavior of BMC based on these characterizations. This leads to some interesting insights into the behavior of these techniques on a variety of example circuits.

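BMC's unrolling can be mimicked explicitly on a toy design: search all input sequences up to a bound for one that drives the system into a bad state. Real BMC encodes this unrolling as a single SAT instance per bound; the design and property below are invented for illustration.

```python
import itertools

def bmc(init, step, bad, bound, input_vals=(0, 1)):
    """Bounded model checking by explicit unrolling: look for an input
    sequence of length <= bound reaching a `bad` state (illustrative;
    real BMC hands the unrolled transition relation to a SAT solver)."""
    for k in range(bound + 1):
        for inputs in itertools.product(input_vals, repeat=k):
            s = init
            for u in inputs:
                s = step(s, u)
            if bad(s):
                return inputs            # counterexample trace of length k
    return None

# Toy design: a saturating counter that increments when the input is 1.
step = lambda s, u: min(s + u, 7)
trace = bmc(init=0, step=step, bad=lambda s: s == 5, bound=6)
```

The cost of this enumeration grows exponentially with the bound, which is exactly the behavior whose circuit-level predictors (sequential correlation, clause density) the paper characterizes.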