
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Issue 8 • August 2001


Displaying Results 1 - 9 of 9
  • OCCOM: efficient computation of observability-based code coverage metrics for functional verification

    Page(s): 1003 - 1015
    PDF (204 KB) | HTML

    Functional simulation is still the primary workhorse for verifying the functional correctness of hardware designs. Functional verification is necessarily incomplete because it is not computationally feasible to exhaustively simulate designs. It is important, therefore, to quantitatively measure the degree of verification coverage of the design. Coverage metrics proposed for measuring the extent of design verification provided by a set of functional simulation vectors should compute statement execution counts (controllability information) and check to see whether effects of possible errors activated by program stimuli can be observed at the circuit outputs (observability information). Unfortunately, the metrics proposed thus far either do not compute both types of information or are inefficient, i.e., the overhead of computing the metric is very large. In this paper, we provide the details of an efficient method to compute an observability-based code coverage metric that can be used while simulating complex hardware description language (HDL) designs. This method offers a more accurate assessment of design verification coverage than line coverage and is significantly more computationally efficient than prior efforts to assess observability information because it breaks up the computation into two phases: functional simulation of a modified HDL model followed by analysis of a flowgraph extracted from the HDL model. Commercial HDL simulators can be directly used for the time-consuming first phase and the second phase can be performed efficiently using concurrent evaluation techniques.
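    The controllability/observability distinction the abstract draws can be seen in a toy model (not OCCOM itself, and with an invented three-statement design): a "tag" perturbation is injected at each assignment, and the statement counts as covered only if the perturbation is visible at the output.

    ```python
    # Toy illustration of line coverage vs. observability-based coverage.
    # A "tag" (small perturbation) is injected at each assignment; the
    # statement is observably covered only if the output changes.

    def model(a, b, inject=None):
        """Three-statement model; `inject` perturbs one intermediate value."""
        t1 = a + b
        if inject == "t1":
            t1 += 1
        t2 = a - b
        if inject == "t2":
            t2 += 1
        # t2 feeds the output only when a > b, so a tag on t2 may be masked
        out = t1 if a <= b else t1 + t2
        if inject == "out":
            out += 1
        return out

    def observability_coverage(vectors, tags=("t1", "t2", "out")):
        observed = set()
        for a, b in vectors:
            ref = model(a, b)
            for tag in tags:
                if model(a, b, inject=tag) != ref:
                    observed.add(tag)
        return len(observed) / len(tags)

    # Every statement executes for (1, 2), so line coverage is 100%,
    # yet the tag on t2 never reaches the output (a <= b masks it).
    print(observability_coverage([(1, 2)]))          # 2/3
    print(observability_coverage([(1, 2), (3, 1)]))  # 1.0
    ```

    The gap between the two prints is exactly the gap the metric is designed to expose: an executed but unobserved statement.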

  • RAGS: real-analysis ALAP-guided synthesis

    Page(s): 931 - 941
    PDF (212 KB) | HTML

    A new technique for single-bus heterogeneous system scheduling and hardware-software codesign is presented. This technique addresses challenging real-time problem domains (e.g., multirate periodic dependent tasks) for preemptive target systems including real-time operating system overheads and communication contention along with proper protocol handling. Such realistic system attributes are often ignored in scheduling and codesign efforts, but are addressed here using a novel “real-analysis” approach. This technique foregoes the usual list- or cluster-oriented scheduling techniques for an as-late-as-possible (ALAP) guided iterative improvement procedure. A detailed system simulation is used at the scheduling level, which in turn is used as the core of the overall cosynthesis. Since an accurate simulation is the basis for the schedule feasibility check, separate verification steps become unnecessary. In addition to using real analysis as part of the scheduling, several other unique or unusual features are employed, including the use of ALAP in heterogeneous scheduling, recursive search one level at a time, and allowing backtracking while maintaining polynomial execution time. This framework appears to be unique in its ability to address the allocation/scheduling and codesign of heterogeneous systems, which, in particular, employ arbitrated buses for intertask communication.
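    The ALAP baseline the technique guides itself with can be sketched for a unit task graph: each task starts as late as possible while still letting all successors finish by the deadline. The task graph, latencies, and deadline below are illustrative, not from the paper.

    ```python
    # Minimal ALAP (as-late-as-possible) schedule for a task DAG, the kind
    # of starting point an ALAP-guided iterative-improvement flow refines.

    def alap(succ, latency, deadline):
        """Latest start time for each task so everything finishes by `deadline`."""
        start = {}
        def visit(t):
            if t in start:
                return start[t]
            if succ[t]:
                # must finish before the earliest successor start
                finish = min(visit(s) for s in succ[t])
            else:
                finish = deadline
            start[t] = finish - latency[t]
            return start[t]
        for t in succ:
            visit(t)
        return start

    succ = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
    latency = {"A": 2, "B": 1, "C": 3, "D": 1}
    print(alap(succ, latency, deadline=10))  # A:4, B:5, C:6, D:9
    ```

    The slack between an ALAP start and the corresponding as-soon-as-possible start is what an iterative scheduler can spend on bus contention and RTOS overhead.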

  • Statistical method for the analysis of interconnect delay in submicrometer layouts

    Page(s): 957 - 966
    PDF (180 KB) | HTML

    In deep-submicrometer layouts, the determination of the signal delay due to interconnects is a main aspect of the design. Usually, on-chip interconnects are modeled by a distributed resistance-capacitance (RC) line. Key aspects of the interconnect modeling are the extraction of parasitic capacitances and the determination of reduced lumped models suited for electrical simulation. This paper addresses both these aspects. The parasitic capacitance extraction problem of layouts is efficiently carried out by means of the floating random walk (FRW) algorithm. It is shown how the use of Monte Carlo integration, jointly with an extended version of the FRW algorithm, makes it possible to directly synthesize an accurate reduced-order RC equivalent net. The new method can deal with very complex geometries efficiently and requires neither fracturing of the original layout into subregions nor discretization of interconnects.
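    The random-walk idea behind FRW extraction can be illustrated with a walk-on-spheres estimator for a harmonic potential: each walk jumps to a random point on the largest circle that fits inside the domain and records the boundary potential it hits. The unit-square geometry and boundary values below are illustrative stand-ins for a conductor layout, not the paper's extended FRW.

    ```python
    # Walk-on-spheres Monte Carlo estimate of the electrostatic potential
    # in a unit square whose left wall is held at 1 V and the rest at 0 V.
    import math
    import random

    def potential(x, y, samples=20000, eps=1e-3, seed=0):
        rng = random.Random(seed)
        total = 0.0
        for _ in range(samples):
            px, py = x, y
            while True:
                r = min(px, 1 - px, py, 1 - py)  # distance to nearest wall
                if r < eps:                       # walk reached the boundary
                    total += 1.0 if px < eps else 0.0  # left wall is at 1 V
                    break
                theta = rng.uniform(0, 2 * math.pi)
                px += r * math.cos(theta)
                py += r * math.sin(theta)
        return total / samples

    # By symmetry the four walls are equally likely to be hit from the
    # center, so the exact answer there is 0.25.
    print(potential(0.5, 0.5))
    ```

    Note what the estimator never needed: no meshing of the domain and no fracturing into subregions, which is exactly the property the abstract highlights.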

  • Estimation of peak power dissipation in VLSI circuits using the limiting distributions of extreme order statistics

    Page(s): 942 - 956
    PDF (316 KB)

    In this paper, we present a statistical method for estimating the peak power dissipation in very large scale integrated (VLSI) circuits. The method is based on the theory of extreme order statistics applied to the probabilistic distributions of the cycle-by-cycle power consumption, together with maximum-likelihood estimation and Monte Carlo simulation. It enables us to predict the maximum power of a VLSI circuit over a set of constrained input vector pairs as well as over the complete set of all possible input vector pairs. The simulation-based nature of the proposed method allows us to avoid the limitations of a gate-level delay model and a gate-level circuit structure. Most significantly, the proposed method produces maximum power estimates that satisfy user-specified error and confidence levels. Experimental results show that this method typically produces maximum power estimates within 5% of the actual value, with a 90% confidence level, while simulating fewer than 2500 input vectors.
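    The limiting-distribution idea can be sketched end to end: sample cycle powers (synthetic data below, standing in for circuit simulation), form block maxima, fit the Gumbel limiting distribution, and read off a high-confidence peak estimate. The method-of-moments fit and all numbers are illustrative simplifications of the paper's maximum-likelihood approach.

    ```python
    # Peak-power estimate via the limiting distribution of block maxima.
    import math
    import random
    import statistics

    def gumbel_fit(maxima):
        """Method-of-moments Gumbel fit: scale from variance, location from mean."""
        beta = statistics.stdev(maxima) * math.sqrt(6) / math.pi
        mu = statistics.mean(maxima) - 0.5772156649 * beta  # Euler-Mascheroni
        return mu, beta

    def peak_estimate(samples, block=100, confidence=0.90):
        maxima = [max(samples[i:i + block]) for i in range(0, len(samples), block)]
        mu, beta = gumbel_fit(maxima)
        # Gumbel quantile q with P(max <= q) = confidence
        return mu - beta * math.log(-math.log(confidence))

    # Synthetic cycle-by-cycle power: sums of 50 uniforms, roughly Gaussian.
    rng = random.Random(1)
    cycle_power = [sum(rng.random() for _ in range(50)) for _ in range(2500)]
    print(peak_estimate(cycle_power))
    ```

    The 2500-sample budget mirrors the vector count quoted in the abstract; the estimate sits well above the mean power (about 25 here) because the quantile targets the tail, not the bulk, of the distribution.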

  • Testing of scan circuits containing nonisolated random-logic legacy cores

    Page(s): 980 - 993
    PDF (396 KB) | HTML

    We consider issues related to the testing of a random-logic legacy core embedded in user-defined logic. We assume that the only information available about the core is its test set. We develop a model for the core and the surrounding logic and provide procedures for testing the core and its surrounding logic under this model without adding design-for-testability (DFT) logic (such as a test wrapper). The procedures maximize the information extracted from the test set given for the core in order to maximize the fault coverage achieved without DFT. This maximizes the ability to test the circuit at-speed through its functional paths that go through cores and user-defined logic. We also describe DFT insertion procedures. The core and the surrounding logic are considered simultaneously during DFT insertion to minimize the amount of DFT logic required. We consider combinational logic (corresponding to full-scan) as well as sequential logic.

  • Combined word-length optimization and high-level synthesis of digital signal processing systems

    Page(s): 921 - 930
    PDF (200 KB) | HTML

    Conventional approaches for fixed-point implementation of digital signal processing algorithms require scaling and word-length (WL) optimization at the algorithm level and high-level synthesis for functional unit sharing at the architecture level. However, algorithm-level WL optimization has a few limitations because it can neither utilize the functional unit sharing information for signal grouping nor estimate the hardware cost for each operation accurately. In this study, we develop a combined WL optimization and high-level synthesis algorithm not only to minimize the hardware implementation cost, but also to reduce the optimization time significantly. This software initially finds the WL sensitivity or minimum WL of each signal through fixed-point simulations of a signal flow graph, performs WL-conscious high-level synthesis where signals having similar WL sensitivity are assigned to the same functional unit, and then conducts the final WL optimization by iteratively modifying the WLs of the synthesized hardware model. A list-scheduling-based algorithm and an integer linear-programming-based algorithm are developed for the WL-conscious high-level synthesis. The hardware cost function to minimize is generated by using a synthesized hardware model. Since fixed-point simulation is used to measure the performance, this method can be applied to general digital signal processing systems, including nonlinear and time-varying ones. A fourth-order infinite-impulse response filter, a fifth-order elliptic filter, and a 12th-order adaptive least mean square filter are implemented using this software.
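    The first phase the abstract describes, finding a signal's minimum WL by fixed-point simulation, can be sketched as follows. The signal, the 40 dB quality target, and the split between integer and fraction bits are all illustrative assumptions, not the paper's cost model.

    ```python
    # Toy word-length sensitivity search: simulate at each candidate WL and
    # keep the smallest WL whose quantization SNR meets a target.
    import math

    def quantize(x, wl, frac):
        """Round x onto a two's-complement fixed-point grid with `frac` fraction bits."""
        step = 2.0 ** -frac
        lo = -2.0 ** (wl - 1 - frac)
        hi = 2.0 ** (wl - 1 - frac) - step
        return min(max(round(x / step) * step, lo), hi)

    def min_wordlength(signal, target_snr_db=40.0, frac_of=lambda wl: wl - 2):
        for wl in range(4, 25):
            q = [quantize(x, wl, frac_of(wl)) for x in signal]
            noise = sum((a - b) ** 2 for a, b in zip(signal, q))
            power = sum(a * a for a in signal)
            if noise == 0 or 10 * math.log10(power / noise) >= target_snr_db:
                return wl
        return None

    sig = [math.sin(2 * math.pi * k / 64) for k in range(256)]
    print(min_wordlength(sig))
    ```

    Grouping signals whose minimum WLs land close together onto the same functional unit is then what lets the synthesis phase share hardware without violating the quality target.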

  • Functional vector generation for HDL models using linear programming and Boolean satisfiability

    Page(s): 994 - 1002
    PDF (152 KB)

    Our strategy for automatic generation of functional vectors is based on exercising selected paths in the given hardware description language (HDL) model. The HDL model describes interconnections of arithmetic, logic, and memory modules. Given a path in the HDL model, the search for input stimuli that exercise the path can be converted into a standard satisfiability (SAT) checking problem by expanding the arithmetic modules into logic gates. However, this approach is not very efficient. We present a new HDL-SAT checking algorithm that works directly on the HDL model. The primary feature of our algorithm is a seamless integration of linear-programming techniques for feasibility checking of arithmetic equations that govern the behavior of data-path modules and SAT checking for logic equations that govern the behavior of control modules. This feature is critically important to efficiency, since it avoids module expansion and allows us to work with logic and arithmetic equations whose cardinality tracks the size of the HDL model. We describe the details of the HDL-SAT checking algorithm in this paper. Experimental results that show significant speedups over state-of-the-art gate-level SAT checking methods are included.
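    The division of labor the abstract describes, Boolean search over control and real-valued feasibility over data path, can be shown on a toy path query. Brute-force enumeration stands in for SAT, Fourier-Motzkin elimination stands in for the LP feasibility check, and the little multiplexer model is invented for illustration.

    ```python
    # Toy control/data-path split: enumerate Boolean control assignments and
    # check each induced linear system for real feasibility, with no
    # bit-level expansion of the arithmetic.
    from itertools import product

    def feasible(ineqs, nvars):
        """Fourier-Motzkin: each ineq is (coeffs, b), meaning sum(c*x) <= b."""
        for v in range(nvars):
            pos = [(c, b) for c, b in ineqs if c[v] > 0]
            neg = [(c, b) for c, b in ineqs if c[v] < 0]
            rest = [(c, b) for c, b in ineqs if c[v] == 0]
            ineqs = rest + [
                ([cp[i] / cp[v] + cn[i] / -cn[v] for i in range(nvars)],
                 bp / cp[v] + bn / -cn[v])
                for cp, bp in pos for cn, bn in neg
            ]
        return all(b >= 0 for _, b in ineqs)  # only constant rows remain

    # Model: y = a + b if sel else a - b, with 0 <= a, b <= 7; the path
    # condition asks for y >= 12. Only sel = True can satisfy it.
    def path_feasible(sel):
        ineqs = [([1, 0], 7.0), ([-1, 0], 0.0), ([0, 1], 7.0), ([0, -1], 0.0)]
        if sel:   # want a + b >= 12, i.e. -a - b <= -12
            ineqs.append(([-1, -1], -12.0))
        else:     # want a - b >= 12, i.e. -a + b <= -12
            ineqs.append(([-1, 1], -12.0))
        return feasible(ineqs, 2)

    for (sel,) in product([False, True], repeat=1):
        print(sel, path_feasible(sel))
    ```

    The point of the split is visible in the constraint counts: the arithmetic stays as two-variable inequalities instead of the hundreds of clauses a bit-blasted 3-bit adder would produce.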

  • Synchronous approach to the functional equivalence of embedded system implementations

    Page(s): 1016 - 1033
    PDF (316 KB) | HTML

    Design space exploration is the process of analyzing several functionally equivalent alternatives to determine the most suitable one. A fundamental question is whether an implementation is consistent with the high-level specification or whether two implementations are “equivalent.” The synchronous assumption has made it possible to develop efficient procedures for establishing functional equivalence between different implementations in the domains of synchronous circuits and synchronous reactive systems. We extend this notion to embedded systems that do not satisfy the synchronous assumption inside their boundaries but only at the interface with the environment. Leveraging this property, we define synchronous equivalence for embedded systems that strongly resembles the concept of functional equivalence for sequential circuits. We develop efficient synchronous equivalence analysis algorithms for embedded system designs. The efficiency comes from analyzing the behavior statically on abstract representations, at the cost that some negative results may be false, i.e., the analysis is conservative. We develop primitives for making the representation more or less abstract, trading off the complexity of the algorithms against the conservativeness of the results. We apply our analysis algorithms to an ATM switch and demonstrate that synchronous equivalence opens previously uncharted design-exploration avenues.

  • Graph-theory-based simplex algorithm for VLSI layout spacing problems with multiple variable constraints

    Page(s): 967 - 979
    PDF (296 KB) | HTML

    An efficient algorithm is provided for solving a class of linear programming problems containing a large set of distance constraints of the form xi-xj⩾k and a small set of multivariable constraints of forms other than xi-xj⩾k. This class of linear programming formulation is applicable to very large scale integration (VLSI) layout spacing problems, including hierarchy-preserving hierarchical layout compaction, layout compaction with symmetric constraints, layout compaction with attractive and repulsive constraints, performance-driven layout compaction, etc. The longest path algorithm is efficient for solving spacing problems containing only distance constraints. However, it fails to solve problems that involve multiple-variable constraints. The linear programming formulation of a spacing problem requires use of the simplex method, which involves many matrix operations. This can be very time consuming when handling huge constraint systems derived from VLSI layouts. Herein it is found that most of the matrix operations can be replaced with fewer and faster graph operations, creating a more efficient graph-theory-based algorithm. Theoretical analysis shows that the proposed algorithm reduces the computation complexity of the simplex method.
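    The longest-path view the paper builds on can be sketched directly: each distance constraint xi - xj >= k becomes an edge j -> i of weight k, and the minimal feasible placement is the longest path from a source, computable with Bellman-Ford relaxation. The three-feature constraint set below is an invented layout example.

    ```python
    # Distance constraints as a longest-path problem on a constraint graph.

    def solve_spacing(n, constraints):
        """constraints: (i, j, k) meaning x_i - x_j >= k; x_0 is pinned at 0."""
        dist = [0.0] + [float("-inf")] * (n - 1)  # longest path from node 0
        for _ in range(n - 1):                    # Bellman-Ford relaxation
            for i, j, k in constraints:
                if dist[j] + k > dist[i]:
                    dist[i] = dist[j] + k
        for i, j, k in constraints:               # extra pass: positive cycle
            if dist[j] + k > dist[i]:
                return None                        # constraints inconsistent
        return dist

    # x1 at least 3 past x0; x2 at least 2 past x1 and 6 past x0.
    print(solve_spacing(3, [(1, 0, 3), (2, 1, 2), (2, 0, 6)]))  # [0.0, 3.0, 6.0]
    ```

    This is the easy case the paper starts from; its contribution is keeping this graph machinery while folding in the few multivariable constraints that break the pure longest-path formulation.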


Aims & Scope

The purpose of this Transactions is to publish papers of interest to individuals in the areas of computer-aided design of integrated circuits and systems.


Meet Our Editors

Editor-in-Chief

VIJAYKRISHNAN NARAYANAN
Pennsylvania State University
Dept. of Computer Science and Engineering
354D IST Building
University Park, PA 16802, USA
vijay@cse.psu.edu