IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Issue 3 • March 2003

Displaying Results 1 - 14 of 14
  • The physical design of on-chip interconnections

    Page(s): 254 - 276

    Because of the complexity of the routing problem in ultra large scale integrated (ULSI) designs, multiple routing solutions are possible, some more efficient than others, and there is a need for statistical tools to determine whether a designer or preroute algorithm is following an efficient path. In a ULSI environment, the problem of routing is best addressed with the combination of a customized preroute algorithm and a routing system. One of the key issues in this context is how to divide the routing task between the preroute algorithm and the routing system; to address this issue, it is necessary to develop criteria to assign certain signals to the preroute algorithm and other signals to the routing system. Another key issue is how to evaluate the interactions of the combination of the algorithm and the routing system in order to decide whether intervention with the preroute algorithm is effective in improving the physical properties of routes for select signals without adversely affecting the physical properties of routes generated with the routing system. In a practical implementation, it is also important to predict when the combined effort is likely to improve the existing solution and to establish a point of diminishing returns beyond which further interactions are no longer effective. This paper presents a self-consistent formalism for intervention with preroute algorithms in ULSI designs. A framework is presented to quantify the physical properties of routes prepared with a preroute algorithm. The paper also presents statistical frameworks to assess the effectiveness of a preroute algorithm and to decide when to stop its use. The main emphasis is on incorporating intervention with custom algorithms into the design process in a seamless manner. The frameworks presented here are applied to an analysis of the POWER4 Instruction Fetch Unit; in this example, the preroute algorithm is custom interconnection design.

  • Exact path delay fault coverage with fundamental ZBDD operations

    Page(s): 305 - 316

    We formulate the path delay fault (PDF) coverage problem as a combinatorial problem that amounts to storing and manipulating sets using a special type of binary decision diagram, called the zero-suppressed binary decision diagram (ZBDD). The ZBDD is a canonical data structure that inherently represents sets of combinations very compactly. A simple modification of the proposed basic scheme allows us to increase significantly the storage capability of the data structure with minimal loss in fault coverage accuracy. Experimental results on the ISCAS85 benchmarks show considerable improvement over all existing techniques for exact PDF grading. The proposed methodology is simple: it consists of a polynomial number of increasingly efficient ZBDD-based operations and can handle very large test sets that grade very large numbers of faults.
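    As a hedged illustration of the data structure the abstract relies on, the following minimal Python sketch builds zero-suppressed BDDs for small families of sets (here standing in for path-fault signatures); the integer variable ordering, node store, and operations are simplified assumptions, not the authors' implementation:

```python
# Terminals: EMPTY is the empty family {}; BASE is {{}}, the family holding
# only the empty combination.
EMPTY, BASE = "empty", "base"

_nodes = {}  # (var, lo, hi) -> shared tuple, so equal subgraphs are one object

def node(var, lo, hi):
    # Zero-suppression rule: a node whose hi-branch is EMPTY adds no
    # combinations and collapses to its lo-child.
    if hi is EMPTY:
        return lo
    return _nodes.setdefault((var, lo, hi), (var, lo, hi))

def union(a, b):
    """Family union; variables are ints, smaller index = nearer the root."""
    if a is EMPTY: return b
    if b is EMPTY: return a
    if a == b: return a
    if a is BASE: a, b = b, a          # ensure a is an internal node
    if b is BASE:
        var, lo, hi = a
        return node(var, union(lo, BASE), hi)
    if a[0] == b[0]:
        return node(a[0], union(a[1], b[1]), union(a[2], b[2]))
    if a[0] < b[0]:
        return node(a[0], union(a[1], b), a[2])
    return node(b[0], union(b[1], a), b[2])

def count(z):
    """Number of combinations in the family."""
    if z is EMPTY: return 0
    if z is BASE: return 1
    return count(z[1]) + count(z[2])

def family(*sets):
    """Build a ZBDD from explicit sets of ints (e.g., edge ids along a path)."""
    f = EMPTY
    for s in sets:
        z = BASE
        for v in sorted(s, reverse=True):
            z = node(v, EMPTY, z)
        f = union(f, z)
    return f

paths = family({1, 2}, {1, 3}, {1, 2})  # the duplicate path is absorbed
print(count(paths))  # -> 2
```

    The zero-suppression rule (dropping any node whose hi-branch is empty) is what makes ZBDDs compact for sparse families such as sets of tested paths.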

  • Efficient very large scale integration power/ground network sizing based on equivalent circuit modeling

    Page(s): 277 - 284

    We present an efficient method of minimizing the area of power/ground (P/G) networks in integrated circuit layouts subject to reliability constraints. Instead of directly sizing the original P/G network extracted from a circuit layout, as done previously, the new method first constructs a reduced but electrically equivalent P/G network. A sequence-of-linear-programs method is then applied to optimize the reduced network. The solution for the original network is then backsolved from the optimized reduced network. The new method exploits the regularities in P/G networks to reduce their complexity. Experimental results show that the reduced networks are typically significantly smaller than the original networks. The resulting algorithm is fast enough that P/G networks with more than one million branches can be sized in a few minutes on modern Sun workstations.

  • Layout driven synthesis of multiple scan chains

    Page(s): 317 - 326

    In this paper, we investigate the problem of assigning scan cells to multiple scan chains based on physical placement. We formulate a new multichain assignment (MA) problem and discuss its complexity and relationship to the partitioning and traveling salesman problems. We show that MA is not adequately solved by approaches based on traveling salesman methods alone and present two new algorithms for MA based on the partitioning and stable marriage (SM) problems. We demonstrate that even though partitioning-based methods offer significant improvements over previous methods, the SM-based algorithm offers the best results. We compare our algorithms with previous work based on preplacement and greedy assignments in terms of total scan net length and maximum load capacitance of scan cells. Experiments with the ISCAS89 benchmarks indicate that our algorithms improve the chain lengths by 50%-90%, while reducing the maximum load capacitance by 30%-50%.
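    The stable marriage component can be illustrated with the classic Gale-Shapley proposal algorithm; the cell and chain names and preference lists below are hypothetical (in the paper, preferences would derive from physical placement), and this one-to-one sketch ignores chain capacities:

```python
def stable_match(cell_pref, chain_pref):
    """Classic Gale-Shapley: cells propose to chains in preference order."""
    # rank[chain][cell] = position of cell in that chain's preference list
    rank = {ch: {c: r for r, c in enumerate(p)} for ch, p in chain_pref.items()}
    free = list(cell_pref)              # cells not yet assigned
    nxt = {c: 0 for c in cell_pref}     # next chain each cell will propose to
    holds = {}                          # chain -> cell it currently holds
    while free:
        cell = free.pop()
        chain = cell_pref[cell][nxt[cell]]
        nxt[cell] += 1
        if chain not in holds:
            holds[chain] = cell
        elif rank[chain][cell] < rank[chain][holds[chain]]:
            free.append(holds[chain])   # chain trades up; old cell re-enters
            holds[chain] = cell
        else:
            free.append(cell)           # rejected; will try its next choice
    return {cell: chain for chain, cell in holds.items()}

# Hypothetical preferences for three scan cells and three chains.
cell_pref = {"A": ["X", "Y", "Z"], "B": ["X", "Z", "Y"], "C": ["Y", "X", "Z"]}
chain_pref = {"X": ["B", "A", "C"], "Y": ["A", "C", "B"], "Z": ["C", "B", "A"]}
match = stable_match(cell_pref, chain_pref)
print(sorted(match.items()))  # -> [('A', 'Y'), ('B', 'X'), ('C', 'Z')]
```

    The resulting matching is stable: no cell and chain both prefer each other over their assigned partners.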

  • Simulation of arsenic in situ doping with polysilicon CVD and its application to high aspect ratio trenches

    Page(s): 285 - 292

    Filling high aspect ratio trenches is an essential manufacturing step for state-of-the-art memory cells. Understanding and simulating the transport and surface processes enables one to achieve void-free filling of deep trenches, to predict the resulting profiles, and thus to optimize the process parameters and the resulting memory cells. Experiments on arsenic-doped polysilicon deposition show that, under certain process conditions, step coverages greater than unity can be achieved. We developed a new model for the simulation of arsenic-doped polysilicon deposition, which takes into account surface-coverage-dependent sticking coefficients as well as surface-coverage-dependent arsenic incorporation and desorption rates. The additional introduction of a Langmuir-Hinshelwood-type time-dependent surface coverage enabled the reproduction of the bottom-up filling of the trenches in simulations. Additionally, the rigorous treatment of the time-dependent surface coverage allows the in situ doping of the deposited film to be traced. The model presented was implemented, and simulations were carried out for different process parameters. Very good agreement with experimental data was achieved with theoretically deduced parameters. Simulation results are shown and discussed for polysilicon deposition into 0.1 μm wide and 7 μm deep, high aspect ratio trenches.

  • Power supply transient signal analysis for defect-oriented test

    Page(s): 370 - 374

    Transient signal analysis (TSA) is a testing method that is based on the analysis of a set of VDD transient waveforms measured simultaneously at each supply port. Defect detection is performed by applying linear regression analysis to the time or frequency domain representations of these signals. Chip-wide process variation effects introduce signal variations that are correlated across the individual power port measurements. In contrast, defects introduce uncorrelated local variations across these measurements that can be detected as anomalies in the cross-correlation profile derived (using regression analysis) from the power port measurements of defect-free chips. This paper focuses on the application of TSA to the detection of delay faults.
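    A minimal sketch of the regression-based detection idea, with hypothetical single-feature measurements (the paper works with full time- or frequency-domain waveform representations):

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Hypothetical defect-free training chips: a waveform feature at two supply
# ports. Process variation moves both readings together (correlated).
good_p1 = [1.00, 1.05, 0.95, 1.10, 0.90]   # port-1 feature
good_p2 = [2.01, 2.09, 1.92, 2.18, 1.83]   # port-2 feature, same chips
a, b = fit_line(good_p1, good_p2)

residuals = [y - (a + b * x) for x, y in zip(good_p1, good_p2)]
sigma = (sum(r * r for r in residuals) / len(residuals)) ** 0.5

def looks_defective(p1, p2, k=3.0):
    """Flag a chip whose port-2 reading falls outside the k-sigma band
    around the regression line learned from defect-free chips."""
    return abs(p2 - (a + b * p1)) > k * sigma

print(looks_defective(1.00, 2.20))  # -> True  (correlation broken)
print(looks_defective(1.00, 2.01))  # -> False (on the learned line)
```

    A defect perturbs one port's measurement without the matching shift at the other port, so it shows up as a large regression residual.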

  • Reverse-order-restoration-based static test compaction for synchronous sequential circuits

    Page(s): 293 - 304

    We present a new static test sequence compaction procedure called reverse-order-restoration (ROR) for synchronous sequential circuits. It improves the efficiency of the basic vector restoration-based compaction procedure by reversing the order of the vectors in the original test sequence. This reduces the number of faults to be resimulated after every restoration step. We extend the ROR procedure to a class of radix reverse order vector restoration procedures. These procedures dynamically increase the number of vectors to be restored in each step and, thus, speed up the vector restoration process. We also investigate techniques to improve the compaction levels achieved by the ROR-based compaction procedure. By combining reverse order vector restoration and vector omission, higher compaction levels are achieved. Experimental results on test sequences generated by several test generators show the effectiveness of the proposed techniques.

  • Partial BIST insertion to eliminate data correlation

    Page(s): 374 - 379

    A new partial built-in self-test (BIST) insertion approach based on eliminating data correlation to improve pseudorandom testability is presented. Data correlation causes the circuit to occupy some states more frequently than others, which leads to low fault coverage in pseudorandom test. One important cause of correlation is reconvergent fanout. Incorporating BIST test flip-flops into reconvergent paths will break correlation; however, breaking all reconvergent fanout is unnecessary, since some reconvergent fanout results in negligible correlation. We introduce a metric to determine the degree of correlation caused by a set of reconvergent fanout paths and use it to identify problematic reconvergent fanout that must be broken through partial BIST insertion. Based on this metric, we provide an exact method and a heuristic method to measure the data correlation, along with an algorithm to break highly correlated reconvergent paths. Our algorithm provides high fault coverage while selecting fewer BIST flip-flops than loop-breaking techniques require. Solutions produced using our exact algorithm rank, on average, among the top 11.6% of all possible solutions with the same number of flip-flops.

  • Early probabilistic noise estimation for capacitively coupled interconnects

    Page(s): 337 - 345

    One of the critical challenges in today's high-performance IC design is to take noise into account as early as possible in the design cycle. Current noise analysis tools are effective at analyzing and identifying noise in the postroute design stage, when detailed parasitic information is available. However, noise problems identified at this stage of the design cycle are very difficult to correct due to the limited flexibility in the design, and they may cause additional iterations of routing and placement, which add costly delays to time to market. In this paper, we introduce a probabilistic preroute noise analysis approach to identify postroute noise failures before the actual detailed route is completed. We introduce new methods to estimate the RC characteristics of victim and aggressor lines, their coupling capacitances, and the aggressor transition times before routing is performed. The approach is based on congestion information obtained from a global router. Since the exact locations and relative positions of wires in the design are not yet available, we propose a novel probabilistic method for capacitance extraction. We present results on two high-performance microprocessors in 0.18 μm technology that demonstrate the effectiveness of the proposed approach.

  • Testing ASICs with multiple identical cores

    Page(s): 327 - 336

    Predesigned cores and reusable modules are widely used in the design of large and complex application specific integrated circuits (ASICs). As the size and complexity of ASICs increase, the test effort, including test development effort, test data volume, and test application time, has also significantly increased. This paper shows that this increase in test effort can be minimized for ASICs that consist of multiple identical cores. A novel design for testability (DFT) technique is proposed to test ASICs with identical embedded cores. The proposed technique significantly reduces test application time, test data volume, and test generation effort.

  • Minimum buffered routing with bounded capacitive load for slew rate and reliability control

    Page(s): 241 - 253

    In high-speed digital VLSI design, bounding the load capacitance at gate outputs is a well-known methodology to improve coupling noise immunity, reduce degradation of signal transition edges, and reduce delay uncertainty due to coupling noise. Bounding load capacitance also improves reliability with respect to hot-carrier oxide breakdown and AC self-heating in interconnects, and guarantees bounded input rise/fall times at buffers and sinks. This paper introduces a new minimum-buffer routing problem (MBRP) formulation which requires that the capacitive load of each buffer, and of the source driver, be upper-bounded by a given constant. Our contributions are as follows. We give linear-time algorithms for optimal buffering of a given routing tree with a single (inverting or noninverting) buffer type. For simultaneous routing and buffering with a single noninverting buffer type, we prove that no algorithm can guarantee an approximation factor smaller than 2 unless P=NP, and we give an algorithm with an approximation factor slightly larger than 2 for typical buffers. For the case of a single inverting buffer type, we give an algorithm with an approximation factor slightly larger than 4. We give local-improvement and clustering-based MBRP heuristics with improved practical performance, and present a comprehensive experimental study comparing the runtime/quality tradeoffs of the proposed MBRP heuristics on test cases extracted from recent industrial designs.
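    The load-bounding idea for a fixed routing tree can be sketched with a greedy bottom-up pass; this is not the paper's optimal linear-time algorithm, and the capacitance values, the load bound, and the restriction of buffer sites to tree nodes are all assumptions for illustration:

```python
C_MAX = 10.0    # allowed load per driver/buffer output (hypothetical units)
C_BUF = 1.0     # input capacitance a buffer presents to its upstream net

def buffer_tree(children, wire_cap, sink_cap, node, buffers):
    """Return the capacitive load seen looking into `node`, inserting a
    buffer at `node` whenever the accumulated downstream load exceeds C_MAX.
    A buffer decouples everything below it, exposing only C_BUF upstream."""
    load = sink_cap.get(node, 0.0)
    for child in children.get(node, []):
        load += wire_cap[(node, child)]
        load += buffer_tree(children, wire_cap, sink_cap, child, buffers)
    if load > C_MAX:
        buffers.append(node)
        return C_BUF
    return load

# A tiny routing tree: source drives node 'a', which fans out to sinks s1, s2.
children = {"src": ["a"], "a": ["s1", "s2"]}
wire_cap = {("src", "a"): 3.0, ("a", "s1"): 4.0, ("a", "s2"): 4.0}
sink_cap = {"s1": 2.0, "s2": 2.0}

buffers = []
root_load = buffer_tree(children, wire_cap, sink_cap, "src", buffers)
print(buffers, root_load)  # -> ['a'] 4.0
```

    The 12.0 units of load below node 'a' exceed the bound, so a buffer goes there; the source then sees only the src-to-a wire plus the buffer's input capacitance.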

  • Test pattern generation and clock disabling for simultaneous test time and power reduction

    Page(s): 363 - 370

    Scan-based design has been widely used to transport test patterns in a system-on-a-chip (SOC) test architecture. Two problems that are becoming quite critical for scan-based testing are long test application time and high test power consumption. Previously, many efficient methods have been developed to address these two problems separately. In this paper, we propose a novel method called the multiple clock disabling (MCD) technique to reduce test application time and test power dissipation simultaneously. Our method is made possible by modifying and integrating a number of existing techniques to generate a special set of test patterns suited to a scan architecture based on the MCD technique. Experimental results for the International Symposium on Circuits and Systems (ISCAS) '85 and '89 benchmark circuits show that significant reductions in both test application time and power dissipation can be achieved compared to the conventional scan method.

  • Methods for minimizing dynamic power consumption in synchronous designs with multiple supply voltages

    Page(s): 346 - 351

    We address the problem of minimizing dynamic power consumption under performance constraints by scaling down the supply voltage of computational elements off the critical paths. We assume that the number of possible supply voltages and their values are known for each computational element. We focus on solving this problem on cyclic and acyclic graphs corresponding to synchronous designs, and we consider multiphase clocked sequential circuits derived using software pipelining techniques. In this paper, we present exact and heuristic methods to solve the problem. The proposed methods take the form of mathematical programming formulations and their associated solution algorithms. The exact methods are based on a mixed integer linear programming formulation of the problem; the heuristic methods are based on linear programming formulations derived from the exact formulation. The solution methods are analyzed experimentally in terms of their run time and their effectiveness in finding designs with lower dynamic power, using circuits from the ISCAS89 benchmark suite. Power reductions as high as 69.75% were obtained compared to designs using the highest supply voltages. One of the heuristic methods leads to solutions that are near optimal, typically within 5% of the optimal solution. Low dynamic-power designs with few or no level converters are also obtained.
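    A toy stand-in for the optimization the abstract describes, with a hypothetical two-gate path, a crude delay-versus-voltage model (delay proportional to 1/V), and exhaustive search in place of the paper's MILP formulation:

```python
from itertools import product

VOLTS = [1.8, 1.2]          # allowed supply voltages (hypothetical)
caps = [1.0, 2.0]           # switched capacitance of two gates on one path
kdel = [1.0, 1.0]           # per-gate delay coefficient; delay modeled as k/V
T_MAX = 1.4                 # delay budget for the path

best = None
for vs in product(VOLTS, repeat=len(caps)):
    delay = sum(k / v for k, v in zip(kdel, vs))
    if delay > T_MAX:                 # violates the performance constraint
        continue
    power = sum(c * v * v for c, v in zip(caps, vs))  # dynamic power ~ C*V^2
    if best is None or power < best[0]:
        best = (power, vs)

print(best)  # the higher-capacitance gate gets the lower voltage
```

    Running all four gates at 1.2 V would violate the delay budget, so the search trades off: the lower voltage goes where it saves the most C*V^2 power while the path still meets timing.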

  • A unified approach to reduce SOC test data volume, scan power and testing time

    Page(s): 352 - 363

    We present a test resource partitioning (TRP) technique that simultaneously reduces test data volume, test application time, and scan power. The proposed approach is based on the use of alternating run-length codes for test data compression. We present a formal analysis of the amount of data compression obtained using alternating run-length codes. We show that a careful mapping of the don't-cares in precomputed test sets to 1s and 0s leads to significant savings in peak and average power, without requiring either a slower scan clock or blocking logic in the scan cells. We present a rigorous analysis to show that the proposed TRP technique reduces testing time compared to a conventional scan-based scheme. We also improve upon prior work on run-length coding by showing that test sets that minimize switching activity during scan shifting can be compressed more efficiently using alternating run-length codes. Experimental results for the larger ISCAS89 benchmarks and an IBM production circuit show that reduced test data volume, reduced test application time, and low-power scan testing can indeed be achieved in all cases.
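    The intuition behind alternating run-length coding can be sketched with a deliberately simplified encoder. The paper's actual codes map run lengths to variable-length codewords; this sketch only lists the run lengths, showing why low-transition scan vectors compress well (long alternating runs mean few lengths to store, and the bit value of each run is implied):

```python
def encode(bits):
    """Store the first bit plus the lengths of the alternating runs;
    because runs alternate, the bit value of every run is implied."""
    runs, cur = [], 1
    for prev, b in zip(bits, bits[1:]):
        if b == prev:
            cur += 1
        else:
            runs.append(cur)
            cur = 1
    runs.append(cur)
    return bits[0], runs

def decode(first, runs):
    """Rebuild the bit string by emitting each run and flipping the bit."""
    out, b = [], first
    for r in runs:
        out.append(b * r)
        b = "1" if b == "0" else "0"
    return "".join(out)

vector = "0000001111100000"          # a low-transition scan vector
first, runs = encode(vector)
print(first, runs)                    # -> 0 [6, 5, 5]
assert decode(first, runs) == vector  # lossless round trip
```

    Mapping don't-cares to extend existing runs, as the abstract describes, lengthens the runs and therefore shortens this representation while also reducing scan-shift switching.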


Aims & Scope

The purpose of this Transactions is to publish papers of interest to individuals in the areas of computer-aided design of integrated circuits and systems.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief

VIJAYKRISHNAN NARAYANAN
Pennsylvania State University
Dept. of Computer Science and Engineering
354D IST Building
University Park, PA 16802, USA
vijay@cse.psu.edu