
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Issue 1 • January 2000

  • Efficient design exploration based on module utility selection

    Page(s): 19 - 29

    In this paper, we present a design exploration framework, called WIZARD, which aims at finding module selections that lead to superior designs while considering scheduling and resource binding under latency and power constraints. The framework contains two phases: choosing the resource configuration and determining a module binding for each resource. We introduce a model called an acceptability function, which captures design objectives based on tradeoffs among design constraints as well as a user's willingness to accept a design. A module utility measure, used in cooperation with inclusion scheduling, is the key to the method. The utility of a module reflects its usefulness with respect to the acceptability function; inclusion scheduling is an algorithm that provides the information needed to determine the number of functional units as well as module usefulness. We also present a heuristic that adjusts module utility values according to the given acceptability function until they lead to superior selections. Experiments on well-known benchmarks show the effectiveness of the approach when the obtained module selections are compared with the results of enumerating all module selections, as well as with other schemes such as MSSR and PSGA.
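    The role of an acceptability function can be illustrated with a toy sketch: enumerate candidate module selections, estimate latency and power for each, and keep the selection that scores highest under a user-defined acceptability in [0, 1]. The library, cost model, and acceptability formula below are all invented for illustration; this is not WIZARD's actual model.

```python
# Toy illustration of scoring module selections with an acceptability function.
# The library, cost model, and acceptability formula are made-up examples,
# not WIZARD's actual formulation.
from itertools import product

# Hypothetical module library: for each operation type, candidate modules
# described by (name, latency in cycles, power in mW).
library = {
    "add": [("add_fast", 1, 4.0), ("add_slow", 2, 1.5)],
    "mul": [("mul_fast", 2, 9.0), ("mul_slow", 4, 3.0)],
}

LATENCY_MAX, POWER_MAX = 6, 10.0   # assumed design constraints

def acceptability(latency, power):
    """1.0 well inside the constraints, falling linearly to 0 at or beyond them."""
    if latency > LATENCY_MAX or power > POWER_MAX:
        return 0.0
    return (1 - latency / LATENCY_MAX) * (1 - power / POWER_MAX)

best = None
for add_mod, mul_mod in product(library["add"], library["mul"]):
    # Crude cost estimate: one add chained with one mul; powers simply add.
    latency = add_mod[1] + mul_mod[1]
    power = add_mod[2] + mul_mod[2]
    score = acceptability(latency, power)
    if best is None or score > best[0]:
        best = (score, add_mod[0], mul_mod[0], latency, power)

print("best selection:", best)
```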

  • Estimation of signal transition activity in FIR filters implemented by a MAC architecture

    Page(s): 164 - 169

    A novel method for the accurate calculation of the transition activity at the nodes of a multiplier-accumulator (MAC) architecture implementing finite impulse response filters is proposed in this paper. The method is developed for input signals that can be described by a stationary Gaussian process. The transition activity per bit of a signal word is modeled according to the dual-bit-type (DBT) model and is described as a function of the signal statistics. An efficient analytical method has been developed for determining the signal statistics at each node of the MAC architecture, based on a mathematical formulation of the multiplexing in time of signal sequences with known statistics. The effect of the multiplexing mechanism on the breakpoints of the DBT model, which significantly influences the accuracy of the method, is also determined. Several experiments with both synthetic and real data have been conducted; the numerical results produced by the proposed models are in very good agreement with the measured values of the transition activity.
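    The quantity being modeled, per-bit transition activity of a two's-complement Gaussian signal, can also be estimated empirically. The sketch below simply counts bit toggles for an assumed AR(1) source and 12-bit word; the paper instead derives these activities analytically through the DBT model.

```python
# Empirical per-bit transition activity of a two's-complement Gaussian signal.
# The AR(1) source and 12-bit word length are illustrative assumptions; the
# paper derives these activities analytically via the dual-bit-type model.
import numpy as np

rng = np.random.default_rng(0)
N, BITS, RHO, SIGMA = 100_000, 12, 0.9, 300.0

# Correlated Gaussian samples (AR(1) with lag-1 correlation RHO), then rounding.
x = np.empty(N)
x[0] = rng.normal(0, SIGMA)
for n in range(1, N):
    x[n] = RHO * x[n - 1] + rng.normal(0, SIGMA * np.sqrt(1 - RHO**2))
q = np.clip(np.round(x).astype(np.int64), -(2**(BITS - 1)), 2**(BITS - 1) - 1)

# Two's-complement bit planes and toggles between consecutive samples.
bits = ((q[:, None] >> np.arange(BITS)) & 1).astype(np.uint8)
activity = np.mean(bits[1:] != bits[:-1], axis=0)

for b, a in enumerate(activity):
    print(f"bit {b:2d}: transition activity {a:.3f}")
# Low-order bits toggle roughly half the time; sign-region bits toggle far less.
```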

  • Transient sensitivity computation in controlled explicit piecewise linear simulation

    Page(s): 98 - 110

    This paper presents a general method for computing transient sensitivities using both direct and adjoint methods in controlled explicit event-driven simulation algorithms that employ piecewise linear device models. Sensitivity information provides a first-order assessment of circuit variability with respect to design variables and parasitics, and is particularly useful for noise analysis, timing rule generation, and circuit optimization. Techniques for incorporating transient sensitivity into adaptively controlled explicit simulation, a general piecewise linear simulator, are presented. Sensitivity computation includes algorithms to handle instantaneous charge redistribution due to the discontinuous conductance models of the piecewise linear elements, and the loss of simulation accuracy due to nonmonotonic responses in autonomous adjoint circuits with nonzero initial conditions. Results demonstrate the efficiency and accuracy of the proposed techniques.
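    The direct method amounts to integrating a sensitivity system alongside the original circuit equations. A minimal sketch for a single RC charging stage (a generic toy example, not the paper's piecewise linear formulation): differentiate C dv/dt = (Vdd - v)/R with respect to R and integrate both equations together, then compare with the analytic derivative.

```python
# Direct-method transient sensitivity for a toy RC charging circuit:
#   C dv/dt = (Vdd - v)/R,              v(0) = 0
#   C ds/dt = -(Vdd - v)/R**2 - s/R,    s = dv/dR, s(0) = 0
# Generic illustration only, not the paper's piecewise linear algorithm.
import math

R, C, VDD = 1e3, 1e-9, 1.8           # ohms, farads, volts (assumed values)
dt, T = 1e-12, 5e-9                  # forward-Euler step and end time

v, s, t = 0.0, 0.0, 0.0
while t < T:
    dv = (VDD - v) / R / C
    ds = (-(VDD - v) / R**2 - s / R) / C
    v, s, t = v + dt * dv, s + dt * ds, t + dt

# Analytic check: v(t) = VDD*(1 - exp(-t/(RC))), so dv/dR = -VDD*t/(R^2 C)*exp(-t/(RC)).
exact_s = -VDD * T / (R**2 * C) * math.exp(-T / (R * C))
print(f"dv/dR at t={T:.1e}s: direct method {s:.4e}, analytic {exact_s:.4e}")
```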

  • IC test using the energy consumption ratio

    Page(s): 129 - 141

    Dynamic current-based test techniques can potentially address the drawbacks of traditional and Iddq test methodologies, but their quality is degraded by process variations in integrated circuit (IC) manufacture. The energy consumption ratio (ECR) is a new metric that improves the effectiveness of dynamic current test by reducing the impact of process variations by an order of magnitude. In this paper, we address issues of practical importance to an ECR-based test methodology. We use the ECR to test a low-voltage submicron IC with a microprocessor core; the ECR more than doubles the effectiveness of the dynamic current test already used for the IC, and its defect coverage is greater than that offered by any other test, including Iddq. We develop a logic-level fault simulation tool for the ECR and show that statistical techniques can be used to set thresholds for an ECR-based test process. Our results demonstrate that the ECR offers several advantages relative to other transient current-based test methods and to Iddq test, and has the potential to be a high-quality, low-cost test methodology.
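    The reason a ratio of two dynamic-current measurements on the same die suppresses process variation can be seen in a few lines: both measured energies scale with the same per-chip process factor, so the factor cancels in the ratio. The numbers and fault model below are invented, and the sketch illustrates only this normalization idea, not the exact ECR definition.

```python
# Why normalizing one energy measurement by another on the same die cancels
# process variation: both scale with the same per-chip factor. The fault model
# and numbers are invented; this is not the paper's exact ECR definition.
import numpy as np

rng = np.random.default_rng(7)
CHIPS = 2000
process = rng.normal(1.0, 0.15, CHIPS)          # per-chip process scaling (+/-15%)
defect = rng.random(CHIPS) < 0.05               # 5% of chips carry a defect

# Energies drawn under two different test conditions, both scaled by process;
# the defect adds extra switching energy to the test measurement only.
e_ref = process * 1.00
e_test = process * (1.20 + 0.40 * defect)

ratio = e_test / e_ref                          # process factor divides out
good, bad = ratio[~defect], ratio[defect]
print(f"raw energy overlap: good max {e_test[~defect].max():.2f}, "
      f"bad min {e_test[defect].min():.2f}")
print(f"ratio separation:   good max {good.max():.3f}, bad min {bad.min():.3f}")
```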

  • Global optimization for digital MOS circuits performance

    Page(s): 161 - 164

    Apart from maximization of parametric yield, minimization of the spread in performance functions due to process variation is of extreme importance in very large scale integrated circuit design. To achieve efficient minimization of this spread, a novel algorithm based on the genetic algorithm and global approximation methods is proposed. The algorithm operates in two stages, designated coarse and fine optimization, and adjusts the design parameter set to simultaneously achieve the target performance and reduce the performance spread. Its distinctive features include globally optimal design, subexponential complexity for the NP-complete problem of global optimization, and simultaneous optimization of many functions. The algorithm is demonstrated on four design examples.
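    As a rough illustration of the two goals, hitting a target performance while shrinking its spread under parameter variation, the sketch below runs a bare-bones genetic algorithm on a made-up performance model; the paper's two-stage coarse/fine algorithm and approximation methods are not reproduced.

```python
# Bare-bones genetic algorithm minimizing |mean - target| plus performance
# spread under random process perturbations. Toy performance model only;
# the paper's coarse/fine two-stage algorithm is not reproduced here.
import random
random.seed(1)

TARGET = 10.0
def performance(w, l, dw, dl):
    # Made-up "delay" model of two design parameters with process offsets.
    return (l + dl) / (w + dw) * 20.0

def fitness(ind):
    # Noisy Monte Carlo objective: distance to target plus weighted spread.
    w, l = ind
    samples = [performance(w, l, random.gauss(0, 0.05), random.gauss(0, 0.05))
               for _ in range(64)]
    mean = sum(samples) / len(samples)
    spread = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    return abs(mean - TARGET) + 5.0 * spread

pop = [(random.uniform(0.5, 5.0), random.uniform(0.5, 5.0)) for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness)
    parents = pop[:10]                             # truncation selection
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        child = tuple(random.choice(p) + random.gauss(0, 0.1)  # crossover + mutation
                      for p in zip(a, b))
        children.append(child)
    pop = parents + children

best = min(pop, key=fitness)
print("best (W, L):", best, "objective:", round(fitness(best), 3))
```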

  • Heterogeneous technology mapping for area reduction in FPGAs with embedded memory arrays

    Page(s): 56 - 68

    It has become clear that large embedded configurable memory arrays will be essential in future field programmable gate arrays (FPGAs). Embedded arrays provide high-density, high-speed implementations of the storage parts of circuits. Unfortunately, they require the FPGA vendor to partition the device into memory and logic resources at manufacture time, which wastes chip area for customers that do not use all of the storage provided. This chip area need not be wasted, and can in fact be used very efficiently, if the arrays are configured as multioutput ROMs and used to implement logic. In this paper, we describe two versions of a new technology mapping algorithm that identifies parts of circuits that can be efficiently mapped to an embedded array and performs this mapping. The first version places no constraints on the depth of the final circuit; on a set of 29 sequential and combinational benchmarks, the tool maps, on average, 59.7 4-LUTs into a single 2-kbit memory array while increasing the critical path by 7%. The second version constrains the depth of the final circuit; it maps, on average, 56.7 4-LUTs into the same memory array while increasing the critical path by only 2.3%. This paper also considers the effect of the memory array architecture on the ability of the algorithm to pack logic into memory. The algorithm performs best when each array has between 512 and 2048 bits and a word width configurable as 1, 2, 4, or 8 bits.
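    The underlying trick of using an otherwise idle embedded array as a multioutput ROM can be shown directly: pick functions that share the same inputs, precompute one truth-table column per output, and store the result as ROM contents. The 512 x 8 array and the example functions below are assumptions; the paper's contribution, the algorithm that chooses which LUT cones to pack, is not reproduced.

```python
# Implementing logic in an embedded memory array configured as a multioutput
# ROM: one address bit per shared input, one data bit per logic function.
# The 9-input, 8-output array (512 x 8) and the functions are illustrative;
# the paper's cone-selection algorithm is not shown.
K = 9            # shared inputs -> 2**K = 512 words
OUTPUTS = [      # eight arbitrary example functions of inputs x0..x8
    lambda x: x[0] & x[1],
    lambda x: x[2] | x[3],
    lambda x: x[4] ^ x[5],
    lambda x: (x[0] & x[6]) | x[7],
    lambda x: x[8] ^ (x[1] & x[2]),
    lambda x: int(sum(x) % 2 == 0),          # parity-style function
    lambda x: int(sum(x) > 4),               # majority-style function
    lambda x: (x[3] | x[5]) & ~x[7] & 1,     # (x3 OR x5) AND NOT x7
]

def build_rom():
    rom = []
    for addr in range(2 ** K):
        x = [(addr >> i) & 1 for i in range(K)]
        word = 0
        for bit, f in enumerate(OUTPUTS):
            word |= (f(x) & 1) << bit
        rom.append(word)
    return rom

rom = build_rom()

def lookup(inputs):
    """Evaluate all eight functions with a single ROM read."""
    addr = sum(b << i for i, b in enumerate(inputs))
    return rom[addr]

print(f"{lookup([1, 1, 0, 0, 1, 1, 0, 0, 1]):08b}")
```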

  • Applying a robust heteroscedastic probabilistic neural network to analog fault detection and classification

    Page(s): 142 - 151

    The problem of distinguishing and classifying the responses of analog integrated circuits containing catastrophic faults has aroused recent interest, and it becomes more difficult when parametric variations are taken into account. Statistical methods and techniques such as neural networks have therefore been employed to automate classification. The major drawback of such techniques has been the implicit assumption that the variances of the responses of faulty circuits are the same as each other and the same as that of the fault-free circuit; this assumption can be shown to be false. Neural networks, moreover, have proved to be slow. This paper describes a new neural network structure that clusters responses assuming different means and variances. Statistical techniques are employed to handle situations where the variance tends to zero, as happens with a fault that causes a response to be stuck at a supply rail. Two example circuits are used to show that this technique is significantly more accurate than other classification methods; a set of responses can be classified in roughly 1 s.
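    For context, a probabilistic neural network is a Gaussian-kernel density classifier, and "heteroscedastic" refers to allowing each class its own variance. The sketch below is a plain PNN with per-class bandwidths on synthetic data; the paper's robust handling of near-zero variances is not reproduced.

```python
# Plain probabilistic neural network with a separate kernel bandwidth per
# class ("heteroscedastic"). Synthetic 2-D data is an assumption; the paper's
# robust handling of near-zero variances is not reproduced.
import numpy as np

rng = np.random.default_rng(0)
# Two classes with very different spreads, like fault-free vs. faulty responses.
train = {
    "fault_free":   rng.normal([0.0, 0.0], 0.50, size=(50, 2)),
    "bridge_fault": rng.normal([3.0, 1.0], 0.05, size=(50, 2)),
}
sigma = {c: pts.std(axis=0).mean() for c, pts in train.items()}  # per-class bandwidth

def pnn_classify(x):
    scores = {}
    for c, pts in train.items():
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma[c] ** 2)))   # Parzen estimate
    return max(scores, key=scores.get)

for sample in ([0.2, -0.1], [2.95, 1.02], [1.5, 0.5]):
    print(sample, "->", pnn_classify(np.array(sample)))
```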

  • Equivalent Elmore delay for RLC trees

    Page(s): 83 - 97

    Closed-form solutions for the 50% delay, rise time, overshoots, and settling time of signals in an RLC tree are presented. These solutions have the same accuracy characteristics as the Elmore delay for RC trees and preserve its simplicity and recursive nature: the complexity of calculating the time-domain responses at all the nodes of an RLC tree is linearly proportional to the number of branches in the tree, and the solutions are always stable. The closed-form expressions introduced here consider all damping conditions of an RLC circuit, including the underdamped response, which the Elmore delay cannot capture because of the nonmonotone nature of the response. The continuous analytical nature of the solutions makes these expressions suitable for design methodologies and optimization techniques, and the solutions are significantly more accurate than the Elmore delay for overdamped responses. The solutions introduced here for RLC trees can be used for the same purposes that the Elmore delay serves for RC trees.
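    For reference, the RC-tree Elmore delay that these closed-form RLC expressions generalize is the standard two-pass, linear-time first-moment computation sketched below (textbook formulation; the paper's RLC extensions are not reproduced).

```python
# Classic Elmore delay for an RC tree (the baseline these RLC expressions
# generalize): two linear-time tree traversals. Standard textbook formulation
# with an invented example tree; the paper's RLC closed forms are not shown.

# Tree node: (name, branch resistance to parent, node capacitance, children)
tree = ("root", 10.0, 1e-12, [
    ("a", 20.0, 2e-12, [("leaf1", 30.0, 1e-12, [])]),
    ("b", 15.0, 3e-12, []),
])

def downstream_cap(node):
    """Total capacitance at and below a node (bottom-up pass)."""
    name, r, c, kids = node
    return c + sum(downstream_cap(k) for k in kids)

def elmore(node, delay_so_far=0.0, delays=None):
    """Top-down pass: accumulate R_branch * downstream C along each root-to-node path."""
    if delays is None:
        delays = {}
    name, r, c, kids = node
    d = delay_so_far + r * downstream_cap(node)
    delays[name] = d
    for k in kids:
        elmore(k, d, delays)
    return delays

for name, d in elmore(tree).items():
    print(f"{name:6s} Elmore delay = {d * 1e12:.2f} ps")
```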

  • TAIR: testability analysis by implication reasoning

    Page(s): 152 - 160

    To predict the difficulty of testing a wire stuck-at fault, testability analysis algorithms provide an estimated testability value by computing controllability and observability. In the most common previous work, such as COP and SCOAP, signal correlation between controllability and observability is not well handled, so the estimated values can be quite inaccurate. Other previous work takes signal correlation into account but may require more CPU time. This paper discusses an efficient method for improving testability analysis. Our algorithm starts with results obtained from a conventional testability analysis such as COP; for each stuck-at fault, it gradually refines these results by recursively applying simple signal-correlation rules. Experimental results show that, with reasonable run-time overhead, significant improvement in testability analysis can be achieved.
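    COP, the usual starting point, estimates controllability as a signal's probability of being 1 under random inputs and propagates observability back from the outputs while ignoring signal correlation, which is exactly what TAIR refines. A minimal COP sketch for an assumed netlist of two-input gates, written from the standard COP definitions rather than from the paper:

```python
# Minimal COP testability estimate for a tiny netlist of 2-input gates, using
# the standard COP definitions (signal correlation is ignored, which is the
# inaccuracy TAIR refines). The netlist itself is illustrative.
netlist = [            # (output, gate_type, [in0, in1]), in topological order
    ("n1", "AND", ["a", "b"]),
    ("n2", "OR",  ["b", "c"]),
    ("y",  "AND", ["n1", "n2"]),
]
primary_inputs, primary_outputs = ["a", "b", "c"], ["y"]

# Controllability C1[s]: probability that signal s is 1 under random inputs.
C1 = {pi: 0.5 for pi in primary_inputs}
for out, gate, (i0, i1) in netlist:
    p0, p1 = C1[i0], C1[i1]
    C1[out] = p0 * p1 if gate == "AND" else 1 - (1 - p0) * (1 - p1)

# Observability O[s]: probability a change on s is visible at a primary output.
O = {s: 0.0 for s in C1}
for po in primary_outputs:
    O[po] = 1.0
for out, gate, ins in reversed(netlist):
    for idx, sig in enumerate(ins):
        other = C1[ins[1 - idx]]                      # the other input's C1
        side = other if gate == "AND" else 1 - other  # prob. it is non-controlling
        O[sig] = max(O[sig], O[out] * side)           # max over fanout branches (simplification)

for s in sorted(C1):
    print(f"{s}: C1={C1[s]:.3f}  O={O[s]:.3f}  "
          f"detect s-a-0 ~ {C1[s]*O[s]:.3f}  detect s-a-1 ~ {(1-C1[s])*O[s]:.3f}")
```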

  • A behavioral model of a 1.8-V flash A/D converter based on device parameters

    Page(s): 69 - 82

    A behavioral model of a 1.8-V, 6-bit flash analog-to-digital converter has been developed based on device parameters using the gm/Id methodology. This approach eliminates the need to recharacterize blocks when device sizes are changed. Furthermore, the performance can be predicted with input only from device and process simulators, eliminating the need for a circuit simulator and associated model parameters. The signal-to-noise-plus-distortion ratio and the differential and integral nonlinearity are predicted and verified at lower resolution with a circuit simulator.
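    A behavioral converter model of this kind reduces to a bank of comparator thresholds perturbed by device-dependent offsets, from which static metrics such as DNL and INL follow directly. The sketch below uses an assumed offset spread rather than gm/Id-derived parameters.

```python
# Behavioral model of a 6-bit flash ADC: ideal thresholds plus random
# comparator offsets, then DNL/INL from the perturbed transition levels.
# The offset sigma is an assumed value, not derived from gm/Id as in the paper.
import numpy as np

rng = np.random.default_rng(3)
BITS, VREF = 6, 1.8
LSB = VREF / 2 ** BITS
ideal_thresholds = LSB * np.arange(1, 2 ** BITS)         # 63 comparator trip points
offsets = rng.normal(0.0, 0.15 * LSB, ideal_thresholds.size)
thresholds = np.sort(ideal_thresholds + offsets)          # guard against code swaps

def convert(vin):
    """Thermometer-to-binary: output code = number of comparators tripped."""
    return int(np.sum(vin > thresholds))

# Static linearity from the transition levels (cumulative-DNL approximation for INL).
dnl = np.diff(thresholds) / LSB - 1.0                     # per-code width error
inl = np.cumsum(dnl)                                      # deviation from ideal line
print(f"code for 0.9 V: {convert(0.9)}")
print(f"DNL: max {dnl.max():+.3f} LSB   INL: max {np.abs(inl).max():.3f} LSB")
```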

  • Maximally and arbitrarily fast implementation of linear and feedback linear computations

    Page(s): 30 - 43

    By establishing a relationship between the basic properties of linear computations and eight optimizing transformations (distributivity, associativity, commutativity, the inverse and zero element laws, common subexpression replication and elimination, and constant propagation), a computer-aided design platform is developed to optimally speed up an arbitrary instance of this large class of computations with respect to those transformations. Furthermore, arbitrarily fast implementation of an arbitrary linear computation is obtained by adding loop unrolling to the transformation set. During this process, a novel Horner pipelining scheme is used so that the area-time (AT) product remains constant regardless of the achieved speedup. We also present a generalization of the new approach so that an important subclass of nonlinear computations, named feedback linear computations, is efficiently, maximally, and arbitrarily sped up.
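    The flavor of the speedup, restructuring a feedback linear computation so that independent work is exposed, can be seen in the classical look-ahead/unrolling transformation of a first-order recurrence. The sketch below applies it to y[n] = a*y[n-1] + x[n] and checks that the transformed form matches the original; it is a textbook transformation, not the paper's general framework.

```python
# Look-ahead / loop-unrolling on the feedback linear recurrence
#   y[n] = a*y[n-1] + x[n]
# Substituting once gives  y[n+1] = a**2 * y[n-1] + a*x[n] + x[n+1],
# so two output samples can be produced per (longer) feedback step.
# Textbook transformation, used here only to illustrate the idea.
a = 0.8
x = [1.0, 0.5, -0.25, 2.0, 0.0, 1.5, -1.0, 0.75]    # assumes even-length input

def direct(x, a):
    y, out = 0.0, []
    for xn in x:
        y = a * y + xn
        out.append(y)
    return out

def unrolled_by_2(x, a):
    y_prev, out = 0.0, []            # last output of the previous pair (y[n-1])
    for n in range(0, len(x), 2):
        y_n  = a * y_prev + x[n]                          # y[n]
        y_n1 = a * a * y_prev + a * x[n] + x[n + 1]       # y[n+1], independent of y[n]
        out += [y_n, y_n1]
        y_prev = y_n1
    return out

print(direct(x, a))
print(unrolled_by_2(x, a))   # identical results, but half as many feedback steps
```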

  • Canonical symbolic analysis of large analog circuits with determinant decision diagrams

    Page(s): 1 - 18

    Symbolic analysis has many applications in the design of analog circuits. Existing approaches rely on two forms of symbolic-expression representation: expanded sum-of-product form and arbitrarily nested form. Expanded form suffers from the problem that the number of product terms grows exponentially with the size of a circuit, while nested form is neither canonical nor amenable to symbolic manipulation. In this paper, we present a new approach to exact and canonical symbolic analysis that exploits the sparsity and sharing of product terms. It consists of representing the symbolic determinant of a circuit matrix by a graph, called a determinant decision diagram (DDD), and performing symbolic analysis by graph manipulations. We show that DDD construction, as well as many symbolic analysis algorithms, takes time almost linear in the number of DDD vertices. We describe an efficient DDD-vertex ordering heuristic and prove that it is optimal for ladder-structured circuits. For practical analog circuits, the number of DDD vertices is several orders of magnitude smaller than the number of product terms. The algorithms have been implemented and compared to the symbolic analyzers ISAAC and Maple-V in generating expanded sum-of-product expressions, and to SCAPP in generating nested sequences of expressions.
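    The key idea, sharing subdeterminants instead of expanding into an exponential sum of products, can be imitated at small scale with a memoized Laplace expansion: each distinct (row, remaining-columns) subproblem is computed once, loosely analogous to a shared DDD vertex. The sympy-based toy below uses an invented 3 x 3 matrix and is not the paper's DDD data structure or vertex ordering.

```python
# Memoized Laplace expansion of a sparse symbolic circuit matrix: every
# distinct (row, remaining-columns) subdeterminant is computed once, loosely
# mimicking the sharing a determinant decision diagram exploits. Toy code with
# an invented matrix, not the paper's DDD structure or ordering heuristic.
import sympy as sp

g1, g2, g3, C, s = sp.symbols("g1 g2 g3 C s")
M = [                       # small nodal-analysis-style matrix (illustrative)
    [g1 + g2,  -g2,        0        ],
    [-g2,       g2 + s*C,  -s*C     ],
    [0,        -s*C,        s*C + g3],
]

cache = {}
def det(row, cols):
    """Expand along `row` over the still-unused columns `cols` (a frozenset)."""
    if not cols:
        return sp.Integer(1)
    if (row, cols) in cache:
        return cache[(row, cols)]
    total = sp.Integer(0)
    for sign_idx, col in enumerate(sorted(cols)):
        entry = M[row][col]
        if entry != 0:                       # exploit sparsity: skip zero entries
            total += (-1) ** sign_idx * entry * det(row + 1, cols - {col})
    cache[(row, cols)] = total
    return total

result = sp.expand(det(0, frozenset(range(3))))
print(result)
print("distinct subproblems computed:", len(cache))
```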

  • A BIST scheme for RTL circuits based on symbolic testability analysis

    Page(s): 111 - 128

    This paper introduces a novel scheme for testing register-transfer level (RTL) controller/data paths using built-in self-test (BIST). The scheme uses the controller netlist and the data path of a circuit to extract a test control/data flow (TCDF) graph, which is used to derive a set of symbolic justification and propagation paths (known as the test environment) to test some of the operations and variables present in it. If it proves difficult to generate such test environments from the derived TCDFs, a few test multiplexers are added at suitable points in the circuit to increase its controllability and observability. The test environment of an operation (variable) guarantees the existence of a path from the primary inputs of the circuit to the inputs of the module (register) to which the operation (variable) is mapped, and a path from the output of the module (register) to a primary output of the circuit. Since the search for a test environment is done symbolically, it is very fast and needs to be done only once for each module or register in the circuit. The test environment can then be used to exercise a module or register with pseudorandom pattern generators placed only at the primary inputs of the circuit, and the test responses can be analyzed with signature analyzers placed only at the primary outputs. Unlike many RTL BIST schemes, an increase in the data path bit-width does not adversely affect the complexity of our testability analysis, since the analysis is symbolic. Every module in the module library is made random-pattern testable, whenever possible, using gate-level testability insertion techniques; this is a one-time cost. Finally, a BIST controller is synthesized to provide the control signals that form the different test environments during testing, and a BIST architecture is superimposed on the circuit. Experimental results on a number of industrial and university benchmarks show that high fault coverage (>99%) can be obtained with our scheme. The average area overhead of the scheme is 6.9%, much lower than that of many existing logic-level BIST schemes, and the average delay overhead is only 2.5%. The test application time needed to achieve this fault coverage for the whole circuit is also quite low.
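    Two standard BIST building blocks named above, a pseudorandom pattern generator at the primary inputs and a signature analyzer at the primary outputs, are easy to sketch as shift-register structures. The polynomial, widths, and toy circuit below are illustrative assumptions and have nothing to do with the paper's symbolic testability analysis.

```python
# Generic BIST plumbing: an LFSR pseudorandom pattern generator feeding a toy
# combinational circuit, with a simplified serial signature compactor (not a
# true MISR). Polynomial, widths, and circuit are illustrative assumptions.
WIDTH = 8
TAPS = [7, 5, 4, 3]          # feedback taps of an 8-bit Fibonacci LFSR

def lfsr_patterns(seed, count):
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in TAPS:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << WIDTH) - 1)

def toy_circuit(pattern, stuck_at_fault=False):
    """Some combinational logic under test; optionally inject a stuck-at-0 bit."""
    out = (pattern ^ (pattern >> 3)) + (pattern & 0x0F)
    if stuck_at_fault:
        out &= ~0x04            # output bit 2 stuck at 0
    return out & 0xFF

def signature(responses):
    """Shift each response into a 16-bit register, XOR-folding the state."""
    sig = 0
    for r in responses:
        sig = ((sig << 1) ^ r) & 0xFFFF
    return sig

patterns = list(lfsr_patterns(seed=0xA5, count=200))
good = signature(toy_circuit(p) for p in patterns)
bad = signature(toy_circuit(p, stuck_at_fault=True) for p in patterns)
print(f"good signature 0x{good:04X}, faulty signature 0x{bad:04X}")
```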

  • Sibling-substitution-based BDD minimization using don't cares

    Page(s): 44 - 55

    In many computer-aided design tools, binary decision diagrams (BDDs) are used to represent Boolean functions, and many algorithms have been developed to reduce BDD size in order to increase the efficiency and capability of these tools. This paper presents heuristic algorithms that minimize the size of BDDs representing incompletely specified functions by intelligently assigning don't cares to binary values. Experimental results show that the new algorithms yield significantly smaller BDDs than existing algorithms while still requiring manageable run-times. These algorithms are particularly useful for synthesis applications in which the structure of the hardware or software is derived from the BDD representation of the function to be implemented, because minimization quality matters more than minimization speed in such applications.
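    The effect being exploited is visible even at truth-table level: assigning don't-care minterms so that more cofactors coincide reduces the number of distinct subfunctions, which is what bounds BDD size for a fixed variable order. The sketch below copies each don't-care leaf from its sibling and compares a rough size proxy against the naive all-zeros assignment; it is a toy analogy, not the paper's sibling-substitution algorithm.

```python
# Toy illustration of don't-care-driven BDD minimization at truth-table level.
# The "size" counted here, distinct non-constant cofactors per level for a
# fixed variable order, is only a rough proxy for ROBDD node count, and the
# leaf-level sibling copy is only an analogy to the paper's algorithm.
N = 4                                    # variables x0 (MSB) .. x3 (LSB)

# Incompletely specified function: value per minterm is 0, 1, or None (don't care).
f = [1, None, 1, None, 0, None, 1, None,
     0, None, 0, None, 1, None, 0, None]

def bdd_size_proxy(table):
    """Count distinct non-constant cofactors over all levels of the cofactor tree."""
    total, level = 0, [tuple(table)]
    while len(level[0]) > 1:
        total += len({sub for sub in level if len(set(sub)) > 1})
        nxt = []
        for sub in level:
            half = len(sub) // 2
            nxt += [sub[:half], sub[half:]]   # cofactors w.r.t. the next variable
        level = nxt
    return total

baseline = [v if v is not None else 0 for v in f]     # naive: don't cares -> 0
sibling = [v if v is not None else (f[i ^ 1] if f[i ^ 1] is not None else 0)
           for i, v in enumerate(f)]                  # copy each DC leaf from its sibling
print("size proxy, don't cares set to 0 :", bdd_size_proxy(baseline))
print("size proxy, sibling-copied leaves:", bdd_size_proxy(sibling))
```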


Aims & Scope

The purpose of this Transactions is to publish papers of interest to individuals in the areas of computer-aided design of integrated circuits and systems.


Meet Our Editors

Editor-in-Chief

VIJAYKRISHNAN NARAYANAN
Pennsylvania State University
Dept. of Computer Science and Engineering
354D IST Building
University Park, PA 16802, USA
vijay@cse.psu.edu