
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Issue 8 • Aug. 2005


Displaying Results 1 - 20 of 20
  • Table of contents

    Publication Year: 2005 , Page(s): c1
  • IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems publication information

    Publication Year: 2005 , Page(s): c2
  • Engineering change protocols for behavioral and system synthesis

    Publication Year: 2005 , Page(s): 1145 - 1155
    Cited by:  Papers (1)  |  Patents (5)

    Rapid prototyping and the development of in-circuit and FPGA-based emulators as key accelerators for fast time-to-market have resulted in a need for efficient error correction mechanisms. Upon error diagnosis, fabricated or emulated prototypes require an effective engineering change (EC). We introduce a novel design methodology which consists of pre- and post-processing techniques that enable EC with minimal perturbation. Initially, in a synthesis preprocessing step, the original design specification is augmented with additional design constraints which ensure flexibility for future correction. Upon alteration of the initial design, a new post-processing technique achieves the desired functionality with near-minimal perturbation of the initially optimized design. The key contribution is a constraint manipulation technique which reduces an arbitrary EC problem to its corresponding classical synthesis problem. As a result, in both pre- and post-processing for EC, classical synthesis algorithms can be used to enable flexibility and perform the correction process. We demonstrate the developed EC methodology on a set of behavioral and system synthesis tasks.

  • Weibull-based analytical waveform model

    Publication Year: 2005 , Page(s): 1156 - 1168
    Cited by:  Papers (1)

    Current complementary metal-oxide-semiconductor technologies are characterized by interconnect lines with increased relative resistance with respect to driver output resistance. Such designs generate signal waveshapes that are very difficult to model using a single-parameter model such as the transition time. In this paper, we present a simple and robust two-parameter analytical expression for waveform modeling based on the Weibull cumulative distribution function. The Weibull model accurately captures the variety of waveshapes without introducing significant runtime overhead and produces results with less than 5% error. We also present a fast and simple algorithm to convert waveforms obtained by circuit simulation to the Weibull model, along with a methodology for characterizing gates for the new model. Simulation results for many single- and multiple-input gates show errors well below 5%. Our model can be used in a mixed environment where some signals may still be characterized by a single parameter.

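    The two-parameter model above lends itself to a short sketch. The following illustration (not the paper's characterization flow) models a rising edge with the Weibull CDF and recovers both parameters in closed form from two threshold-crossing times; the 20%/80% crossing points and all names are illustrative assumptions.

```python
import math

def weibull_waveform(t, scale, shape, vdd=1.0):
    """Rising edge modeled by the Weibull CDF: v(t) = Vdd*(1 - exp(-(t/scale)^shape))."""
    if t <= 0.0:
        return 0.0
    return vdd * (1.0 - math.exp(-((t / scale) ** shape)))

def fit_from_crossings(t_lo, t_hi, p_lo=0.2, p_hi=0.8):
    """Recover (scale, shape) from the times at which the edge crosses
    fractions p_lo and p_hi of Vdd. Closed form: at each crossing
    (t/scale)^shape = -ln(1 - p), so the ratio of the two equations
    eliminates `scale`."""
    a_lo = -math.log(1.0 - p_lo)   # -ln(0.8)
    a_hi = -math.log(1.0 - p_hi)   # -ln(0.2)
    shape = math.log(a_hi / a_lo) / math.log(t_hi / t_lo)
    scale = t_hi / a_hi ** (1.0 / shape)
    return scale, shape

# Round-trip check: generate crossing times from known parameters, refit.
scale0, shape0 = 1.0, 2.0
t20 = scale0 * (-math.log(0.8)) ** (1.0 / shape0)
t80 = scale0 * (-math.log(0.2)) ** (1.0 / shape0)
scale1, shape1 = fit_from_crossings(t20, t80)
```

    The closed-form conversion is one plausible way to map simulated waveforms onto the two Weibull parameters; the paper's own conversion algorithm may differ.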
  • Compressible area fill synthesis

    Publication Year: 2005 , Page(s): 1169 - 1187
    Cited by:  Papers (3)  |  Patents (1)

    Control of variability and performance in the back end of the VLSI manufacturing line has become extremely difficult with the introduction of new materials such as copper and low-k dielectrics. To improve manufacturability, and in particular to enable more uniform chemical-mechanical planarization (CMP), it is necessary to insert area fill features into low-density layout regions. Because area fill feature sizes are very small compared to the large empty layout areas that need to be filled, the filling process can increase the size of the resulting layout data file by an order of magnitude or more. To reduce file transfer times, and to accommodate future maskless lithography regimes, data compression becomes a significant requirement for fill synthesis. In this paper, we make the following contributions. First, we define two complementary strategies for fill data volume reduction corresponding to two different points in the design-to-manufacturing flow: compressible filling and post-fill compression. Second, we compare compressible filling methods in the fixed-dissection regime when two different sets of compression operators are used: the traditional GDSII array reference (AREF) construct, and the new Open Artwork System Interchange Standard (OASIS) repetitions. We apply greedy techniques to find practical compressible filling solutions and compare them with optimal integer linear programming solutions. Third, for the post-fill data compression problem, we propose two greedy heuristics, an exhaustive search-based method, and a smart spatial regularity search technique. We utilize an optimal bipartite matching algorithm to apply OASIS repetition operators to irregular fill patterns. Our experimental results indicate that both fill data compression methodologies achieve significant data compression ratios, and that they outperform industry tools such as Calibre V8.8 from Mentor Graphics. Our experiments also highlight the advantages of the new OASIS compression operators over the GDSII AREF construct.

  • Multilevel fixed-point-addition-based VLSI placement

    Publication Year: 2005 , Page(s): 1188 - 1203
    Cited by:  Papers (3)  |  Patents (1)

    A placement problem can be formulated as a quadratic program with nonlinear constraints. Those constraints make the problem hard. Omitting the constraints and solving the unconstrained problem results in a placement with substantial cell overlaps. To remove the overlaps, we introduce fixed points into the unconstrained quadratic-programming formulation. Acting as pseudocells at fixed locations, they can be used to pull cells away from dense regions to reduce overlapping. We present an in-depth study of the placement technique based on fixed-point addition and prove that fixed points are generalizations of the constant additional forces used previously to eliminate cell overlaps. Experimental results on public-domain benchmarks show that the fixed-point-addition-based placer produces better results than the placer based on constant additional forces. We present an efficient multilevel placer based upon the fixed-point technique and demonstrate that it produces results competitive with existing state-of-the-art placers.

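    The fixed-point idea admits a tiny worked example. The sketch below is an illustration of the general mechanism, not the paper's placer: the one-dimensional net topology, pad positions, and weights are all invented. It solves the unconstrained quadratic placement of two movable cells in closed form and shows how a fixed pseudocell pulls a cell toward a target location.

```python
def place_two_cells(fixed_point=None, weight=0.0):
    """1-D quadratic placement of two movable cells.
    Nets (unit weight): pad at 0 -- cell1 -- cell2 -- pad at 1.
    Optionally, a fixed point (pseudocell) at `fixed_point` attracts
    cell1 with quadratic weight `weight`. Setting the gradient of the
    objective to zero gives the 2x2 linear system
        (2 + w) x1 -       x2 = w*f
            -x1   + 2       x2 = 1
    which is solved here by Cramer's rule."""
    w, f = (weight, fixed_point) if fixed_point is not None else (0.0, 0.0)
    det = (2.0 + w) * 2.0 - 1.0
    x1 = (2.0 * w * f + 1.0) / det
    x2 = ((2.0 + w) + w * f) / det
    return x1, x2

# Without fixed points the cells settle at 1/3 and 2/3; a fixed point
# at 0.9 on cell1 pulls it (and, through the net, cell2) rightward.
base = place_two_cells()
pulled = place_two_cells(fixed_point=0.9, weight=4.0)
```

    In a real placer the same idea is applied to millions of cells, with fixed-point locations and weights chosen iteratively from the density of the current placement.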
  • Power grid analysis using random walks

    Publication Year: 2005 , Page(s): 1204 - 1224
    Cited by:  Papers (51)  |  Patents (4)

    This paper presents a class of power grid analyzers based on a random-walk technique. A generic algorithm is first demonstrated for dc analysis, with linear runtime and the desirable property of localizing computation. Next, by combining this generic analyzer with a divide-and-conquer strategy, a single-level hierarchical method is built and extended to multilevel and "virtual-layer" hierarchy. Experimental results show that these algorithms not only achieve speedups over the generic random-walk method, but also are more robust in solving various types of industrial circuits. Finally, capacitors and inductors are incorporated into the framework, and it is shown that transient analysis can be carried out efficiently. For example, dc analysis of a 71 K-node power grid with C4 pads takes 4.16 s; a 348 K-node wire-bond dc power grid is solved in 93.64 s; transient analysis of a 642 K-node power grid takes 2.1 s per timestep.

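    The generic dc analyzer described above can be sketched in a few lines. The following Monte Carlo toy (not the paper's implementation; the 4-node chain and its element values are invented) uses the standard random-walk interpretation of the nodal equations: a walker at a node steps to a neighbor with probability proportional to the connecting conductance, pays I/G at every node it visits, and collects the pad voltage where it is absorbed; the expected total equals the node voltage.

```python
import random

def walk_voltage(node, neighbors, sources, loads, rng, n_walks=20000):
    """Estimate the dc voltage at `node` of a resistive power grid by
    random walks. neighbors[x] = list of (neighbor, conductance);
    sources maps pad nodes to fixed voltages; loads[x] is the current
    drawn at node x."""
    total = 0.0
    for _ in range(n_walks):
        x, gain = node, 0.0
        while x not in sources:
            conns = neighbors[x]
            g_sum = sum(g for _, g in conns)
            gain -= loads[x] / g_sum          # "motel cost" at this node
            r = rng.random() * g_sum          # pick a neighbor with prob g_i/G
            for nxt, g in conns:
                r -= g
                if r <= 0.0:
                    break
            x = nxt
        total += gain + sources[x]            # collect the pad voltage
    return total / n_walks

# 4-node chain: 1.0 V pads at both ends, 1-ohm segments, 0.1 A drawn
# at each internal node; the exact nodal solution gives 0.9 V.
neighbors = {1: [(0, 1.0), (2, 1.0)], 2: [(1, 1.0), (3, 1.0)]}
v1 = walk_voltage(1, neighbors, sources={0: 1.0, 3: 1.0},
                  loads={1: 0.1, 2: 0.1}, rng=random.Random(0))
```

    The linear-runtime and locality properties mentioned in the abstract follow from the fact that each estimate touches only nodes near the query node, without assembling or factoring the full conductance matrix.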
  • An efficient and robust method for ring-oscillator simulation using the harmonic-balance method

    Publication Year: 2005 , Page(s): 1225 - 1233
    Cited by:  Papers (4)

    A novel approach for simulating the periodic steady state of ring oscillators with the harmonic-balance method is described. This approach is efficient and robust compared with traditional approaches. The key idea exploited is the structure of the ring oscillator, in which the properties of a single delay cell are used to simulate the response of the overall oscillator. The proposed method yields an algorithm that is computationally efficient and readily converges for a wide variety of single-ended and differential ring-oscillator circuits.

  • Fine-grained transaction-level verification: using a variable transactor for improved coverage at the signal level

    Publication Year: 2005 , Page(s): 1234 - 1240
    Cited by:  Papers (2)

    Maintaining coverage with increasing circuit scale has become a critical problem for logic-verification processes. While transaction-level verification (TLV) is an important step forward, fine-grained (FG) TLV provides better signal-level coverage by reactively changing transactors instead of transaction-level scenarios. Evaluations with a microprocessor design show the effectiveness of FGTLV; all design bugs at the signal level were detected, though many of them were missed by plain TLV.

  • Hierarchical approach to exact symbolic analysis of large analog circuits

    Publication Year: 2005 , Page(s): 1241 - 1250
    Cited by:  Papers (7)

    This paper proposes a novel approach to the exact symbolic analysis of very large analog circuits. The new method is based on determinant decision diagrams (DDDs) representing symbolic product terms. But instead of constructing DDD graphs directly from a flat circuit matrix, the new method constructs DDD graphs hierarchically, based on hierarchically defined circuit structures. The resulting algorithm can exactly analyze much larger analog circuits than was previously possible. The authors show that the exact symbolic expressions of a circuit are cancellation-free when the circuit is analyzed hierarchically. Building on this, the authors propose a novel symbolic decancellation process, which essentially leads to the hierarchical DDD graph constructions. The new algorithm partially avoids the exponential DDD construction time by employing more efficient DDD graph operations during the hierarchical construction. The experimental results show that very large analog circuits which could not previously be analyzed exactly, such as the μA725 and other unstructured circuits with up to 100 nodes, can be analyzed by the new approach for the first time. The new approach significantly improves exact symbolic capacity and shows great potential for applications of exact symbolic analysis.

  • A study of a hybrid phase-pole macromodel for transient simulation of complex interconnect structures

    Publication Year: 2005 , Page(s): 1250 - 1261
    Cited by:  Papers (7)  |  Patents (1)

    An overview of standard macromodeling techniques (i.e., techniques that employ poles but no phase shifts) for transient simulation of high-speed interconnects is first presented. Then, the limitations of these standard techniques (e.g., high model order and slow convergence) are discussed. To overcome these limitations, generalized method of characteristics (MoC) techniques include the physical phenomenology of phase shift (time delay) in addition to the system poles, making it possible to model single and coupled transmission lines using far fewer terms than standard macromodeling techniques require. Since MoC techniques incorporate the time delay into the model, causality is also guaranteed. In this paper, the MoC idea is extended by developing a hybrid phase-pole macromodel (HPPM) for modeling more complex interconnects with embedded discontinuities. Unlike other generalized MoC techniques, which have only been applied to single and coupled transmission lines, the HPPM can be applied to larger portions of the system that contain multiple cascaded transmission lines and discontinuities. The HPPM parameters can be extracted from either measured or simulated transient data. Comparisons between a standard macromodel and the HPPM show that the HPPM has significant advantages in terms of reduced macromodel order.

  • Towards a heterogeneous simulation kernel for system-level models: a SystemC kernel for synchronous data flow models

    Publication Year: 2005 , Page(s): 1261 - 1271
    Cited by:  Papers (19)

    As SystemC gains popularity as a modeling language of choice for system-on-chip (SoC) designs, heterogeneous modeling in SystemC and efficient simulation become increasingly important. In the current reference implementation, however, all SystemC models are simulated through a nondeterministic discrete-event (DE) simulation kernel that schedules events at run time. Mimicking other models of computation (MoCs) with DE can be cumbersome, and sometimes results in too many delta cycles that hinder the simulation performance of the model. SystemC also uses this simulation kernel as the target simulation engine, which makes it difficult to express different MoCs naturally in SystemC. In an SoC model, different components may need to be naturally expressible in different MoCs, and these components may be amenable to static scheduling-based simulation or other presimulation optimization techniques. The goal is to create a simulation framework for heterogeneous SystemC models and to gain efficiency and ease of use within the framework of the SystemC reference implementation. In this paper, a synchronous data flow (SDF) kernel extension for SystemC is introduced, and experimental results showing improvement in simulation time are presented.

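    Static scheduling of SDF models rests on solving the balance equations for a repetition vector. The sketch below (plain Python rather than SystemC, with an invented three-actor graph) computes the smallest integer firing counts per scheduling period for a consistent, connected SDF graph, which is the precomputation that lets an SDF kernel avoid run-time event scheduling.

```python
from fractions import Fraction
from math import lcm

def repetition_vector(edges, actors):
    """Solve the SDF balance equations q[src]*prod = q[dst]*cons for
    every edge (src, dst, prod, cons), returning the smallest integer
    firing counts per actor. Assumes a connected, consistent graph."""
    q = {actors[0]: Fraction(1)}      # seed one actor with rate 1
    pending = list(edges)
    while pending:                     # propagate rates over edges
        remaining = []
        for src, dst, prod, cons in pending:
            if src in q:
                q[dst] = q[src] * prod / cons
            elif dst in q:
                q[src] = q[dst] * cons / prod
            else:
                remaining.append((src, dst, prod, cons))
        pending = remaining
    # Scale the rational solution to the smallest all-integer one.
    scale = lcm(*(r.denominator for r in q.values()))
    return {a: int(r * scale) for a, r in q.items()}

# A produces 2 tokens per firing, B consumes 3; B produces 1, C consumes 2.
# Balance: 2*q[A] = 3*q[B] and 1*q[B] = 2*q[C].
q = repetition_vector([("A", "B", 2, 3), ("B", "C", 1, 2)], ["A", "B", "C"])
```

    Once the repetition vector is known, a static schedule (e.g., fire A three times, B twice, C once) can be executed repeatedly without any delta-cycle bookkeeping, which is the source of the simulation-time improvement the abstract reports.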
  • On fault equivalence, fault dominance, and incompletely specified test sets

    Publication Year: 2005 , Page(s): 1271 - 1274

    It is shown that fault equivalence and fault dominance relations defined based on the sets of completely specified test vectors that detect each fault may not hold when incompletely specified test vectors are used together with three-value simulation. Experimental results are presented to demonstrate the extent of this phenomenon. Its effects are discussed in general and in the context of a specific application. Possible solutions are also discussed.

  • Worst case crosstalk noise for nonswitching victims in high-speed buses

    Publication Year: 2005 , Page(s): 1275 - 1283
    Cited by:  Papers (5)

    Considering an RLC interconnect model, we determine the switching patterns and switching times of multiple aggressors that generate the worst case crosstalk noise (WCN) for a quiet or a noisy victim. We consider the routing direction, as it has a significant impact under the RLC model. When there are no timing window constraints, we show that the commonly used superposition algorithm results in 15% underestimation on average, and we propose a new SS + AS algorithm that has virtually the same complexity as the superposition algorithm but much improved accuracy. On average, the SS + AS algorithm underestimates WCN by only 3% compared to time-consuming simulated annealing and genetic algorithms. We also show that applying an RC model to the high-speed interconnects in the International Technology Roadmap for Semiconductors 0.10 μm technology virtually always underestimates WCN, with underestimation of up to 80%. Furthermore, we extend our algorithm to consider aggressor switching and victim sampling windows. We show that the extended SS + AS algorithm approximates WCN well, with 2% underestimation on average. Although the RC model usually severely underestimates WCN with timing window constraints, it does overestimate when both the aggressor switching and the victim sampling windows are small enough. We conclude that the RLC model is needed for accurate modeling of WCN in designs in the multigigahertz region.

  • A provably passive and cost-efficient model for inductive interconnects

    Publication Year: 2005 , Page(s): 1283 - 1294
    Cited by:  Papers (3)

    To reduce the model complexity for inductive interconnects, the vector potential equivalent circuit (VPEC) model was introduced recently, and a localized VPEC model was developed based on geometry integration. In this paper, the authors show that the localized VPEC model is not accurate for interconnects of nontrivial size. They derive an accurate VPEC model by inverting the inductance matrix under the partial element equivalent circuit (PEEC) model and prove that the effective resistance matrix under the resulting full VPEC model is passive and strictly diagonally dominant. This diagonal dominance enables truncating small-valued off-diagonal elements to obtain a sparsified VPEC model, named the truncated VPEC (tVPEC) model, with guaranteed passivity. To avoid inverting the entire inductance matrix, the authors further present another sparsified VPEC model with preserved passivity, the windowed VPEC (wVPEC) model, based on inverting a number of inductance submatrices. Both the full and sparsified VPEC models are SPICE compatible. Experiments show that the full VPEC model is as accurate as the full PEEC model but consumes less simulation time. Moreover, the sparsified VPEC models are orders of magnitude (1000×) faster and produce waveforms with small errors (3%) compared to the full PEEC model, and wVPEC uses up to 90× less model-building time yet is more accurate than the tVPEC model.

  • Sequential circuit ATPG using combinational algorithms

    Publication Year: 2005 , Page(s): 1294 - 1310
    Cited by:  Papers (2)

    In this paper, we introduce two design-for-testability (DFT) techniques based on clock partitioning and clock freezing to ease the test generation process for sequential circuits. In the first DFT technique, a circuit is mapped into overlapping pipelines by selectively freezing different sets of registers so that all feedback loops are temporarily cut. An opportunistic algorithm takes advantage of the pipeline structures and detects most faults using combinational techniques. This technique is applicable to circuits with no or only a few self-loops. In the second DFT technique, we use selective clock freezing to temporarily cut only the global feedback loops. The resulting circuit, called a loopy pipe, may have any number of self-loops. We present a new clocking technique that generates clock waves to test the loopy pipe, and another opportunistic algorithm is proposed for its test generation. Experimental results show that, compared to conventional sequential circuit test generators, the fault coverage obtained is significantly higher and the test generation time is one order of magnitude shorter for many circuits. The DFT techniques introduce no delay penalty into the data path, have small area overhead, allow for at-speed application of tests, and have low power consumption.

  • 2006 IEEE International Symposium on Circuits and Systems (ISCAS 2006)

    Publication Year: 2005 , Page(s): 1311
  • Explore IEL: IEEE's most comprehensive resource

    Publication Year: 2005 , Page(s): 1312
  • IEEE Circuits and Systems Society Information

    Publication Year: 2005 , Page(s): c3
  • IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Information for authors

    Publication Year: 2005 , Page(s): c4

Aims & Scope

The purpose of this Transactions is to publish papers of interest to individuals in the areas of computer-aided design of integrated circuits and systems.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief

VIJAYKRISHNAN NARAYANAN
Pennsylvania State University
Dept. of Computer Science and Engineering
354D IST Building
University Park, PA 16802, USA
vijay@cse.psu.edu