IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Issue 10 • October 1997

  • Efficient approximation of symbolic network functions using matroid intersection algorithms

    Page(s): 1073 - 1081

    An efficient and effective approximation strategy is crucial to the success of symbolic analysis of large analog circuits. In this paper, we propose a new approximation strategy for the symbolic analysis of linear circuits in the complex frequency domain. The strategy directly generates common spanning trees of a two-graph in decreasing order of tree admittance product, using matroid intersection algorithms. It reduces the total time for computing an approximate symbolic expression in expanded format to polynomial in the circuit size, under the assumption that the number of product terms retained in the final expression is polynomial. Experimental results are clearly superior to those reported in previous works.

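    A brute-force sketch of the enumeration objective above: list the spanning trees of a small graph in decreasing order of edge-admittance product. The graph and admittance values are made up, and the exhaustive search below does not implement the paper's polynomial-time matroid intersection method; it only makes the ordering concrete.

        # Exhaustive enumeration of spanning trees ordered by admittance
        # product; the paper achieves this ordering in polynomial time.
        from itertools import combinations

        def spanning_trees_by_admittance(nodes, edges, k=3):
            """edges: (u, v, admittance) triples; returns top-k trees."""
            trees = []
            for subset in combinations(edges, len(nodes) - 1):
                parent = {v: v for v in nodes}

                def find(x):
                    while parent[x] != x:
                        parent[x] = parent[parent[x]]
                        x = parent[x]
                    return x

                acyclic = True
                for u, v, _ in subset:
                    ru, rv = find(u), find(v)
                    if ru == rv:
                        acyclic = False
                        break
                    parent[ru] = rv
                if acyclic:  # n-1 acyclic edges on n nodes form a spanning tree
                    weight = 1.0
                    for _, _, y in subset:
                        weight *= y
                    trees.append((weight, subset))
            trees.sort(key=lambda t: -t[0])
            return trees[:k]

        # Hypothetical 4-node graph; y values stand in for branch admittances.
        nodes = [0, 1, 2, 3]
        edges = [(0, 1, 2.0), (1, 2, 0.5), (2, 3, 1.0), (3, 0, 4.0), (0, 2, 0.1)]
        for w, tree in spanning_trees_by_admittance(nodes, edges):
            print(round(w, 3), [(u, v) for u, v, _ in tree])
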
  • Delay abstraction in combinational logic circuits

    Page(s): 1205 - 1212

    In this paper we propose a data structure for abstracting the delay information of a combinational circuit. The particular abstraction we are interested in is one that preserves the delays between all pairs of inputs and outputs in the circuit. Such abstractions are useful when considering the delay of cascaded circuits in high-level synthesis and other synthesis applications. The proposed graphical data structure is called the concise delay network; in the best case its size is proportional to (m+n), where m and n are the numbers of inputs and outputs of the circuit. In comparison, a delay matrix that stores the maximum delay between each input-output pair has size proportional to m×n. For circuits with hundreds of inputs and outputs, this storage and the associated computations become quite expensive, especially when they must be repeated during synthesis. We present heuristic algorithms for deriving these concise delay networks. Experimental results show that, in practice, we can obtain concise delay networks whose number of edges is a small multiple of (m+n).

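    A toy illustration of why a concise delay network can beat the m×n delay matrix (a minimal construction of my own, not the paper's heuristic): when the matrix separates as D[i][j] = a[i] + b[j], a single internal node with m + n weighted edges reproduces every input-to-output delay.

        # If D[i][j] = a[i] + b[j], route every input through one hub node:
        # input_i --a[i]--> hub --b[j]--> output_j gives m + n edges.
        def concise_from_separable(D):
            m, n = len(D), len(D[0])
            b = list(D[0])                              # hub -> output delays
            a = [D[i][0] - D[0][0] for i in range(m)]   # input -> hub delays
            ok = all(D[i][j] == a[i] + b[j]
                     for i in range(m) for j in range(n))
            return (a, b) if ok else None               # None: needs more hubs

        D = [[2, 4, 3],          # hypothetical max input-to-output delays
             [6, 8, 7],
             [3, 5, 4]]
        res = concise_from_separable(D)
        if res:
            a, b = res
            print("edges:", len(a) + len(b), "matrix entries:", len(D) * len(D[0]))
            print("input->hub:", a, "hub->output:", b)
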
  • On implementation choices for iterative improvement partitioning algorithms

    Page(s): 1199 - 1205

    Iterative improvement partitioning algorithms such as the FM algorithm of Fiduccia and Mattheyses (1982), the algorithm of Krishnamurthy (1984), and Sanchis's extensions of these algorithms to multiway partitioning (1989) all rely on efficient data structures to select the modules to be moved from one partition to the other. The implementation choices for one of these data structures, the gain bucket, are investigated. Surprisingly, selection from gain buckets maintained as last-in-first-out (LIFO) stacks leads to significantly better results than gain buckets maintained randomly (as in previous studies of the FM algorithm) or as first-in-first-out (FIFO) queues. In particular, LIFO buckets result in a 36% improvement over random buckets and a 43% improvement over FIFO buckets for minimum-cut bisection. Eliminating randomization from the bucket selection not only improves the solution quality, but has a greater impact on FM performance than adding the Krishnamurthy gain vector. The LIFO selection scheme also yields improvements over random schemes for multiway partitioning and for more sophisticated partitioning strategies such as the two-phase FM methodology. Finally, by combining insights from the LIFO gain buckets with the Krishnamurthy higher-level gain formulation, a new higher-level gain formulation is proposed. This alternative formulation results in a further 22% reduction in the average cut cost when compared directly to the Krishnamurthy formulation for higher-level gains, assuming LIFO organization for the gain buckets.

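    A minimal sketch of the gain-bucket structure under the LIFO discipline studied above (assuming integer gains bounded by maxg; move execution and gain updates are omitted). The study's observation is that, among modules tied at the best gain, popping the most recently inserted one outperforms FIFO or random tie-breaking.

        # One stack of modules per gain value; pop_best scans down from the
        # highest nonempty gain and breaks ties in LIFO order.
        class LIFOGainBuckets:
            def __init__(self, maxg):
                self.maxg = maxg
                self.bucket = [[] for _ in range(2 * maxg + 1)]
                self.best = -maxg - 1            # highest possibly nonempty gain

            def insert(self, module, gain):
                self.bucket[gain + self.maxg].append(module)   # push: LIFO
                self.best = max(self.best, gain)

            def pop_best(self):
                g = self.best
                while g >= -self.maxg and not self.bucket[g + self.maxg]:
                    g -= 1
                if g < -self.maxg:
                    return None                  # all buckets empty
                module = self.bucket[g + self.maxg].pop()      # LIFO tie-break
                self.best = g
                return module, g

        b = LIFOGainBuckets(maxg=3)
        for mod, g in [("m1", 2), ("m2", 2), ("m3", -1)]:
            b.insert(mod, g)
        print(b.pop_best())   # ('m2', 2): last inserted among the gain-2 ties
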
  • A method for increasing the IDDQ testability

    Page(s): 1186 - 1188

    Testability is becoming ever more important at all design levels, since many applications require high reliability. In this work, a novel approach to the mapping between signal lines and gate inputs is proposed, targeting the IDDQ testability of internal faults. By suggesting an additional cost function for the routing process, the method provides significant testability enhancements without affecting either the gate-level structure of the circuit or the internal layout of the gates, as proved with regard to bridging faults.

  • A hierarchical functional structuring and partitioning approach for multiple-FPGA implementations

    Page(s): 1188 - 1195

    In this short paper, we present a new integrated synthesis and partitioning approach for multiple-FPGA implementations from register-transfer (RT) netlists. Our approach bridges the gap between RTL/logic synthesis and physical partitioning by fine-tuning logic implementations to suit multiple-FPGA systems. We propose a hierarchical functional structuring and partitioning method that fully exploits the design's structural hierarchy by decomposing RTL components into sets of logic subfunctions. This allows the partitioner to place portions of components into FPGA partitions. Experimental results on a number of benchmarks and industrial designs show that our approach achieves significant improvements in FPGA configurable logic block (CLB) and I/O-pin utilization compared with those produced by a traditional multiple-FPGA partitioning method.

  • Formal verification of digital systems by automatic reduction of data paths

    Page(s): 1136 - 1156

    Verification of properties (tasks) on a system P containing data paths may require too many resources (memory space and/or computation time) because such systems have very large and deep state spaces. As pointed out by Kurshan, what is needed is a reduced system P' that behaves exactly as P with respect to the properties to be proved but is more compact than P, so that the verification can be performed easily. The process of finding P' from P is called reduction. P is specified by a network of interacting finite-state machines for data paths and controllers, and tasks are specified by finite-state automata. The verification of a task T on P is performed by the language containment check L(P)⊆L(T), where L(P) is the language generated by P and L(T) is the language accepted by T. It has been shown that, under appropriate conditions, the system P can be reduced to P' and the task T to T' such that L(P')⊆L(T')⇔L(P)⊆L(T). The direct language containment check L(P)⊆L(T) is then no longer needed; it is replaced by L(P')⊆L(T'), which is less expensive. More specifically, to simplify the verification of some properties, the system implementation is abstracted locally with respect to the behavior under observation (i.e., bottom-up reduction), in the context of an integrated top-down design/verification technique. The tasks one may want to verify can express both safety and fairness constraints. In this paper, we prove that the reduction of some data paths to four-state, nondeterministic finite-state machines, together with the redundancy removal performed on the controllers, is a homomorphic transformation, so that the simplified language containment check can be applied automatically without testing the validity of the homomorphism. This homomorphism correctness verification, required when a formal proof is not available, can be executed using a tool like Cospan, but it may not complete when the state space to be traversed is too large and deep. The redundancy removal performed on the controllers is important because it eliminates the spurious behaviors introduced into the system by the nondeterminism of the reduced data paths. Redundancy, in fact, may induce a failure in the verification of L(P')⊆L(T') while L(P)⊆L(T) actually holds. To show the effectiveness of the proposed methodology, we verify properties on an extended version of the Mead-Conway Traffic Light Controller, on a modified IRQ communication protocol, and on a relatively-prime-integers checker and generator.

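    The containment check L(P)⊆L(T) made concrete for ordinary finite automata on finite words (Cospan works with ω-automata and fairness constraints, which this sketch does not model). Containment fails exactly when the reachable product has a state where P accepts and T does not; the machines below are hypothetical.

        # Finite-word analogue of L(P) ⊆ L(T): breadth-first search over the
        # product of two complete DFAs given as (start, accepting, delta).
        from collections import deque

        def contained(P, T, alphabet):
            (p0, pacc, pd), (t0, tacc, td) = P, T
            seen, todo = {(p0, t0)}, deque([(p0, t0)])
            while todo:
                p, t = todo.popleft()
                if p in pacc and t not in tacc:
                    return False     # a word P generates that T rejects
                for sym in alphabet:
                    nxt = (pd[(p, sym)], td[(t, sym)])
                    if nxt not in seen:
                        seen.add(nxt)
                        todo.append(nxt)
            return True

        # P accepts strings ending in 1; T accepts strings containing a 1.
        P = ("e", {"o"}, {("e", 0): "e", ("e", 1): "o",
                          ("o", 0): "e", ("o", 1): "o"})
        T = ("n", {"y"}, {("n", 0): "n", ("n", 1): "y",
                          ("y", 0): "y", ("y", 1): "y"})
        print(contained(P, T, [0, 1]))   # True: every such word contains a 1
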
  • Minimum replication min-cut partitioning

    Page(s): 1221 - 1227

    Logic replication has been shown to be very effective in reducing the number of cut nets in partitioned circuits. Liu et al. (see IEEE Trans. Computer-Aided Design, vol. 14, pp. 623-630, May 1995) considered the circuit partitioning problem with logic replication for separating two given nodes and presented an algorithm to determine a partitioning of the minimum possible cut size. In general, there are many partitioning solutions with the minimum cut size, and the difference in the amount of replication they require can be significant. Since, in practice, there is a size constraint on each component of the partitioning, it is desirable to also minimize the amount of replication. In this paper, we present a network-flow-based algorithm to determine an optimum replication min-cut partitioning that requires minimum replication. We show that the algorithm can be generalized to separate two given subsets of nodes, giving an optimum partitioning of the minimum possible cut size using the least possible amount of replication. We also show that our algorithm can be used to improve the solutions produced by any existing size-constrained replication min-cut partitioning algorithm by reducing the cut size and shrinking the replication set.

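    The network-flow primitive the algorithm above builds on (max-flow equals min-cut); the paper's specific graph construction that also minimizes replication is not reproduced here. A plain Edmonds-Karp sketch on a made-up capacitated digraph:

        # Shortest augmenting paths until no s-t path remains in the residual
        # graph; the max-flow value equals the min-cut capacity.
        from collections import deque, defaultdict

        def max_flow(cap, s, t):
            flow = defaultdict(int)

            def bfs():
                parent = {s: None}
                queue = deque([s])
                while queue:
                    u = queue.popleft()
                    if u == t:
                        return parent
                    for v in cap[u]:
                        if v not in parent and cap[u][v] - flow[(u, v)] > 0:
                            parent[v] = u
                            queue.append(v)
                return None

            total = 0
            while True:
                parent = bfs()
                if parent is None:
                    return total
                path, v = [], t
                while parent[v] is not None:      # walk back to the source
                    path.append((parent[v], v))
                    v = parent[v]
                push = min(cap[u][w] - flow[(u, w)] for u, w in path)
                for u, w in path:
                    flow[(u, w)] += push          # forward edge
                    flow[(w, u)] -= push          # residual (reverse) edge
                total += push

        cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2},
               "b": {"a": 1, "t": 3}, "t": {}}
        for u in list(cap):                       # make reverse edges visible
            for v in list(cap[u]):
                cap[v].setdefault(u, 0)
        print(max_flow(cap, "s", "t"))            # 5
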
  • Bottleneck removal algorithm for dynamic compaction in sequential circuits

    Page(s): 1157 - 1172

    We present a dynamic algorithm for test sequence compaction and test application time (TAT) reduction in combinational and sequential circuits. Several dynamic test compaction algorithms for combinational circuits have been proposed; however, few dynamic methods have been reported in the literature for sequential circuits. Our algorithm is based on two key ideas: (1) at any point during the test generation process, we identify bottlenecks that prevent vector compaction and TAT reduction for the test sequences generated thus far, and (2) future test sequences are generated with the aim of eliminating the bottlenecks of earlier test sequences. If all bottlenecks of a test sequence are eliminated, the sequence is dropped from the test set. Our algorithm can also target TAT reduction under the recently proposed partial scan-in/scan-out model by identifying and eliminating scan bottlenecks. If only the scan bottlenecks of a test sequence are eliminated, the test sequence can be trimmed to reduce the scan-in/scan-out cycles required to apply it. For sequential circuits, we propose a sliding anchor frame technique to specify the unspecified inputs in a test sequence. The anchor frame is the first frame processed by a sequential test generator based on an iterative array model of the circuit, and the vector corresponding to the anchor frame is called the anchor vector. Under the sliding anchor frame technique, every vector in the test sequence being extended is considered as an anchor vector. This has the same effect as allowing observation of fault effects at every vector in the sequence, leading to higher quality compaction. The final test set generated by our algorithm cannot be further compacted by many known static vector compaction or TAT reduction techniques; for example, reverse (or any other) order of fault simulation, along with any specification of unspecified values in test sequences, cannot further reduce the number of vectors or the TAT. Experimental results on combinational and sequential benchmark circuits and large production VLSI circuits are reported to demonstrate the effectiveness of our approach.

  • On the reconfiguration of degradable VLSI/WSI arrays

    Page(s): 1213 - 1221

    This paper considers the problem of reconfiguring two-dimensional very large scale integration/wafer-scale integration (VLSI/WSI) arrays via the degradation approach. In this approach, all elements are treated uniformly and no elements are dedicated as spares. The goal is to derive a fault-free subarray T from the defective host array such that the dimensions of T are larger than some specified minimum. This problem has been shown to be NP-complete under various switching and routing constraints. However, we show that a special case of the reconfiguration problem with row bypass and column rerouting capabilities is optimally solvable in linear time. Using this result, a new fast and efficient reconfiguration algorithm is proposed. An empirical study shows that the new algorithm indeed produces good results in terms of the harvest and degradation percentages of VLSI/WSI arrays.

  • Pseudorandom testing for mixed-signal circuits

    Page(s): 1173 - 1185

    In this paper, we propose a pseudorandom testing scheme for mixed-signal circuits. We first describe the pseudorandom testing technique for linear analog components and converters in mixed-signal circuits. With proper arithmetic operations on the responses to the random patterns, the impulse response of the device under test (DUT) can be constructed and used as the signature. By checking the constructed signatures against the derived tolerance ranges, we can infer the correctness of the DUT without explicitly measuring the original performance parameters. We also describe a technique for mapping the tolerance ranges in the performance space to the associated tolerance ranges in the signature space. The major advantages of our pseudorandom testing scheme are: (1) a universal input stimulus (white noise) is used, so test generation can be avoided; (2) signatures for high-quality testing can be easily constructed, so testing cost can be minimized; and (3) the scheme can be used for built-in self-test (BIST) implementation of DSP-based mixed-signal designs. We present simulation results to illustrate the effectiveness of the scheme.

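    A sketch of the signature construction for a linear DUT: drive it with pseudorandom white noise and recover the impulse response by input-output cross-correlation, since for white input E[x(n)x(n−k)] is nonzero only at k = 0. The three-tap FIR "device" and the tolerance band below are made up.

        # Estimate a linear DUT's impulse response from its response to a
        # pseudorandom stimulus, then apply a signature-space tolerance check.
        import numpy as np

        rng = np.random.default_rng(0)
        h_true = np.array([0.5, 0.3, 0.2])   # hypothetical DUT impulse response
        N = 200_000
        x = rng.standard_normal(N)           # pseudorandom (white) stimulus
        y = np.convolve(x, h_true)[:N]       # DUT output

        # signature: leading taps of the input/output cross-correlation
        h_est = np.array([np.dot(y[k:], x[:N - k])
                          for k in range(len(h_true))]) / N
        print(np.round(h_est, 3))            # ~ [0.5, 0.3, 0.2]

        # tolerance check in signature space (band width is made up)
        print(bool(np.all(np.abs(h_est - h_true) < 0.01)))
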
  • On error correction in macro-based circuits

    Page(s): 1088 - 1100

    We consider the problem of correcting errors in a macro-based circuit. Our formulation of the problem allows the correction of errors that arise both in the context of design error correction before the circuit is realized, and when a physical circuit needs to be corrected or diagnosed. Two error classes are defined, namely, component errors and line errors. Both single and multiple errors are considered. Accurate correction procedures are given for single errors. Heuristics are used for correcting multiple errors. Experimental results are given to demonstrate the correction procedures presented.

  • A mapped Scharfetter-Gummel formulation for the efficient simulation of semiconductor device models

    Page(s): 1227 - 1233

    An efficient numerical solution scheme based on a new mapped finite difference discretization and iterative strategies is developed for submicron semiconductor devices. As a representative model, we consider a nonparabolic hydrodynamic system. The discretization is formulated in a mapped reference domain and incorporates a transformed Scharfetter-Gummel treatment for the current density and energy flux. This permits the use of graded, nonuniform curvilinear grids in the physical domain of interest, which has advantages when gridding irregular domain shapes or grading meshes for steep solution profiles. The solution of the discrete system is carried out in a fully coupled, implicit form, and nonsymmetric gradient-type iterative strategies are investigated. Numerical results demonstrating the performance and reliability of the scheme are presented for test problems.

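    For reference, the classical one-dimensional Scharfetter-Gummel flux that the paper transports to a mapped reference domain (the curvilinear mapping and the hydrodynamic energy flux are not reproduced). B is the Bernoulli function; the series branch avoids 0/0 at small arguments. All numbers are hypothetical.

        # Classical 1-D Scharfetter-Gummel electron flux between two mesh
        # nodes (q taken as 1); B(x) = x / (e^x - 1).
        import math

        def bernoulli(x):
            if abs(x) < 1e-8:
                return 1.0 - x / 2.0        # series expansion near x = 0
            return x / math.expm1(x)

        def sg_flux(n_i, n_ip1, psi_i, psi_ip1, h, D, Vt=0.025852):
            d = (psi_ip1 - psi_i) / Vt      # normalized potential difference
            return (D / h) * (n_ip1 * bernoulli(d) - n_i * bernoulli(-d))

        # Sanity checks on made-up numbers:
        print(sg_flux(1e16, 1e16, 0.0, 0.0, 1e-7, 1e-3))  # no drive -> 0.0
        print(sg_flux(1e16, 1e15, 0.0, 0.0, 1e-7, 1e-3))  # diffusion only
        print(sg_flux(1e16, 1e16, 0.0, 0.1, 1e-7, 1e-3))  # drift dominated
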
  • Algorithm-driven synthesis of data conversion architectures

    Page(s): 1116 - 1135

    A new, algorithm-driven methodology is introduced for the synthesis of data conversion systems. It combines symbolic signal flow graph techniques, which generate a canonical representation of the algorithm description, with pattern recognition techniques, which determine the appropriate functional building blocks for the converter architecture, and with knowledge-based rules, which instantiate those building blocks as electrical subcircuits. By allowing the synthesis process to move to higher levels of abstraction, the proposed methodology provides a computer-based framework for the systematic and uniform treatment of various types of conversion systems, including the search for new conversion algorithms and/or new implementation architectures.

  • Symbolic analysis of switched-capacitor networks using compacted nodal analysis in the s-domain

    Page(s): 1196 - 1199

    This short paper presents a method that makes it possible to analyze switched-capacitor (SC) networks in discrete time using compacted nodal analysis in continuous time. Our objective is to perform discrete-time analysis in the z-domain using any symbolic analysis tool (e.g., CASCA) intended for the analysis of continuous-time networks in the s-domain. A new equivalent analog circuit describes the switched capacitors in the s-domain with z = s instead of z = e^(sT), which gives a simple translation between the two domains. Two SC examples are considered: an inverting amplifier and a biquad.

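    A small numeric illustration of the domain bridge mentioned above: the exact relation evaluates a sampled network's H(z) on the unit circle via z = e^(sT) with s = jω. The biquad coefficients and clock period are made up; the paper's tool-level shortcut of substituting z = s is not reproduced.

        # Frequency response of a z-domain biquad via z = e^(sT), s = j*2*pi*f.
        import cmath

        def H(z, b, a):
            num = b[0] + b[1] / z + b[2] / z ** 2
            den = 1.0 + a[1] / z + a[2] / z ** 2
            return num / den

        T = 1e-5                        # assumed 100 kHz switching period
        b = [0.02, 0.04, 0.02]          # hypothetical low-pass SC biquad
        a = [1.0, -1.6, 0.68]           # poles inside the unit circle
        for f in (100.0, 1e3, 1e4):
            z = cmath.exp(2j * cmath.pi * f * T)
            print(f, abs(H(z, b, a)))   # magnitude response samples
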
  • Optimized terminal current calculation for Monte Carlo device simulation

    Page(s): 1082 - 1087

    We present a generalized Ramo-Shockley theorem (GRST) for the calculation of time-dependent terminal currents in multidimensional charge transport calculations and simulations. While analytically equivalent to existing boundary integration methods, this new domain integration technique is less sensitive to numerical error introduced by calculations of finite precision. Most significantly, we derive entirely new optimized formulas for the ensemble Monte Carlo estimation of steady-state terminal currents from the time-independent form of our GRST; these are in general not equivalent to the time average of the true time-dependent terminal currents. We then demonstrate, both analytically and by means of example, how our new variance-minimizing terminal current estimators may be exploited to improve estimator accuracy in comparison to existing methods.

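    The classical Ramo-Shockley statement that the paper generalizes, worked in the one setting where the weighting field is trivial: a parallel-plate gap of width d, for which E_w = 1/d. The geometry and carrier velocity are hypothetical; the paper's domain-integration form and variance-optimized Monte Carlo estimators are not reproduced.

        # Induced terminal current i = q * v . E_w for a carrier drifting
        # across a parallel-plate gap (anode weighting field E_w = 1/d).
        Q = 1.602e-19      # carrier charge magnitude, C
        d = 1e-6           # electrode gap, m
        v = 1e5            # carrier drift velocity, m/s

        i_induced = Q * v * (1.0 / d)
        print(i_induced)                # 1.602e-08 A during transit

        # integrating i over the transit time returns the full charge
        print(i_induced * (d / v))      # ~1.602e-19 C = Q
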
  • Symbolic timing analysis and resynthesis for low power of combinational circuits containing false paths

    Page(s): 1101 - 1115

    This paper presents applications of algebraic decision diagrams (ADDs) to timing analysis and to resynthesis for low power of combinational CMOS circuits. We first propose a symbolic algorithm to perform true delay calculation of a technology-mapped network. The procedure, implemented as an extension of the SIS synthesis system, provides more accurate timing information than any other method presented so far; in particular, it computes and stores the arrival times of all the gates of the circuit for all possible input vectors, as opposed to traditional methods, which consider only the worst-case combination of primary inputs. Furthermore, the approach does not require any explicit false path elimination. We then extend our timing analysis tool to the symbolic calculation of required times and slacks, and we use this information to resynthesize the circuit for low power by gate resizing. Our approach naturally takes false paths into account; in fact, it guarantees that resizing the gates does not increase the true delay of the circuit, even in the presence of false paths. Our experiments have shown that many circuits, originally free of false paths, exhibit a large number of them when optimized for area; the ability to handle circuits containing false paths is therefore of primary importance. We present experimental results for ADD-based and static-timing-analysis-based resynthesis, which clearly show that our tool is superior for circuits containing false paths while providing competitive results for circuits free of false paths.

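    A toy demonstration of the phenomenon the paper addresses: per-input-vector ("true") delay versus static topological delay on a small netlist whose statically longest path is false. The netlist, unit gate delays, and the simple settling rule below are all assumptions for illustration; the paper's ADD-based symbolic machinery stores such per-vector arrival times for all inputs at once.

        # Settling rule per gate: with a controlling input, the output settles
        # with the earliest controlling input; otherwise with the latest input.
        from itertools import product

        NETLIST = {  # gate: (type, fanins); unit delay; inputs arrive at t=0
            "ns": ("NOT", ["s"]),
            "a1": ("BUF", ["a"]), "a2": ("BUF", ["a1"]), "a3": ("BUF", ["a2"]),
            "ga": ("AND", ["a3", "ns"]), "gb": ("AND", ["b", "s"]),
            "m1": ("OR", ["ga", "gb"]),
            "gm": ("AND", ["m1", "s"]), "gc": ("AND", ["c", "ns"]),
            "out": ("OR", ["gm", "gc"]),
        }
        ORDER = list(NETLIST)  # insertion order is already topological
        CTRL = {"AND": 0, "OR": 1}

        def true_arrival(vec):
            val, arr = dict(vec), {k: 0 for k in vec}
            for g in ORDER:
                typ, fanins = NETLIST[g]
                vs = [val[f] for f in fanins]
                ts = [arr[f] for f in fanins]
                if typ in ("NOT", "BUF"):
                    val[g] = vs[0] if typ == "BUF" else 1 - vs[0]
                    arr[g] = ts[0] + 1
                else:
                    c = CTRL[typ]
                    val[g] = c if c in vs else 1 - c
                    ctrl = [t for v, t in zip(vs, ts) if v == c]
                    arr[g] = (min(ctrl) if ctrl else max(ts)) + 1
            return arr["out"]

        true_delay = max(true_arrival(dict(zip("abcs", v)))
                         for v in product([0, 1], repeat=4))
        static = {k: 0 for k in "abcs"}
        for g in ORDER:  # value-blind worst case over all paths
            static[g] = max(static[f] for f in NETLIST[g][1]) + 1
        print(true_delay, static["out"])  # 5 7: the 7-unit path is false
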
  • A precorrected-FFT method for electrostatic analysis of complicated 3-D structures

    Page(s): 1059 - 1072

    In this paper, we present a new algorithm for accelerating the potential calculation that occurs in the inner loop of iterative algorithms for solving electromagnetic boundary integral equations. Such integral equations arise, for example, in the extraction of coupling capacitances in three-dimensional (3-D) geometries. We present extensive experimental comparisons with the capacitance extraction code FASTCAP and demonstrate that, for a wide variety of geometries commonly encountered in integrated circuit packaging, on-chip interconnect, and microelectromechanical systems, the new “precorrected-FFT” algorithm is superior to the fast multipole algorithm used in FASTCAP in terms of execution time and memory use. At engineering accuracies, in terms of a speed-memory product, the new algorithm can be superior to fast-multipole-based schemes by more than an order of magnitude.

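    The grid-convolution step that the precorrected-FFT method accelerates, sketched with NumPy: potentials of charges on a regular grid are a convolution with the 1/r kernel, which FFTs evaluate in O(N log N). The projection of panel charges onto the grid and the precorrection of nearby interactions, which are the paper's actual contributions, are not reproduced; grid size and charges are made up.

        # FFT convolution of grid charges with a tabulated 1/r kernel,
        # checked against a direct O(N^2) sum at one observation point.
        import numpy as np

        n, h = 16, 1.0                        # 16x16 grid, unit spacing
        rng = np.random.default_rng(1)
        q = rng.standard_normal((n, n))       # grid charges

        # 1/r kernel on all grid offsets; self term set to zero
        off = np.arange(-(n - 1), n) * h
        dx, dy = np.meshgrid(off, off, indexing="ij")
        r = np.hypot(dx, dy)
        K = np.divide(1.0, r, out=np.zeros_like(r), where=r > 0)

        # zero-padded FFT convolution (linear, not circular)
        L = 3 * n - 2
        conv = np.fft.irfft2(np.fft.rfft2(q, (L, L)) *
                             np.fft.rfft2(K, (L, L)), (L, L))
        phi = conv[n - 1:2 * n - 1, n - 1:2 * n - 1]

        i, j = 5, 9
        direct = sum(q[a, b] / np.hypot((i - a) * h, (j - b) * h)
                     for a in range(n) for b in range(n) if (a, b) != (i, j))
        print(np.allclose(phi[i, j], direct))  # True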
