IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Issue 8 • August 2002

11 articles in this issue
  • Bridging fault modeling and simulation for deep submicron CMOS ICs

    Page(s): 941 - 953

    Testing bridging faults in deep submicron CMOS digital ICs faces new problems as technology limits are pushed. The growing dispersion of process parameters makes it hard to use conventional bridging fault models for high-quality testing. A new fault model is proposed that accounts for bridging faults independently of electrical parameters and provides a meaningful coverage metric. Conditions are defined to ensure that (under steady-state conditions) either a fault is detected by a test sequence or it will not give rise to errors for any other input, independently of the actual values of IC parameters. The fault model has been implemented in a simulator and validated on combinational benchmarks.

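    The parameter-independence requirement can be illustrated with a small sketch (an illustration of the general idea only, not the fault model proposed in the paper): a vector guarantees detection of a two-net bridge regardless of drive strengths only if the circuit outputs differ from the fault-free response under both extreme resolutions of the short, wired-AND and wired-OR. The netlist format, gate library, and example circuit below are hypothetical, and the sketch assumes a combinational circuit with a non-feedback bridge.

```python
# Illustrative sketch only: parameter-independent detection of a two-net bridge
# by requiring detection under both extreme resolutions (wired-AND and wired-OR).
# The netlist format and example circuit are hypothetical, not from the paper.

GATES = {
    "AND":  lambda ins: int(all(ins)),
    "OR":   lambda ins: int(any(ins)),
    "NAND": lambda ins: int(not all(ins)),
    "NOR":  lambda ins: int(not any(ins)),
    "NOT":  lambda ins: int(not ins[0]),
}

def simulate(netlist, inputs, forced=None):
    """netlist: list of (net, gate, fanin-nets) in topological order."""
    values = dict(inputs)
    if forced:
        values.update(forced)
    for net, gate, fanin in netlist:
        if forced and net in forced:
            continue                      # net overridden by the bridge
        values[net] = GATES[gate]([values[f] for f in fanin])
    return values

def bridge_detected_param_free(netlist, outputs, vector, net_a, net_b):
    """True if 'vector' detects the a-b bridge for any drive-strength outcome
    (assumes neither bridged net lies in the other's fanout cone)."""
    good = simulate(netlist, vector)
    va, vb = good[net_a], good[net_b]
    if va == vb:
        return False                      # bridge not excited
    detected = []
    for resolved in (va & vb, va | vb):   # wired-AND and wired-OR extremes
        faulty = simulate(netlist, vector,
                          forced={net_a: resolved, net_b: resolved})
        detected.append(any(faulty[o] != good[o] for o in outputs))
    return all(detected)

# Hypothetical example: n1 = NAND(a, b), n2 = NOR(b, c), bridge between n1 and
# n2, both observed directly as outputs.
netlist = [("n1", "NAND", ["a", "b"]), ("n2", "NOR", ["b", "c"])]
print(bridge_detected_param_free(netlist, ["n1", "n2"],
                                 {"a": 0, "b": 1, "c": 0}, "n1", "n2"))
```
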
  • Phantom redundancy: a register transfer level technique for gracefully degradable data path synthesis

    Page(s): 877 - 888

    Presents an area-efficient register transfer level technique for gracefully degradable data path synthesis called phantom redundancy. In contrast to spare-based approaches, phantom redundancy is a recovery technique that does not use any standby spares. Instead, it uses extra interconnect to make the resulting data path reconfigurable in the presence of any single functional unit failure. When phantom redundancy is combined with a concurrent error detection technique, error detection followed by reconfiguration is automatic. Because reconfiguration of a (faulty) data path interacts tightly with scheduling and operation-to-operator binding during register transfer level synthesis, the authors developed a genetic-algorithm-based register transfer level synthesis approach that incorporates phantom redundancy constraints. The algorithm minimizes the performance degradation of the synthesized data path in the presence of any single faulty functional unit. The effectiveness of the technique and the algorithm is illustrated using high-level synthesis benchmarks.

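    As a rough illustration of spare-free graceful degradation (not the authors' synthesis algorithm), the sketch below remaps the operations bound to a failed functional unit onto surviving units of the same type and reports the resulting schedule stretch. The binding, unit library, and unit-latency timing model are hypothetical.

```python
# Illustrative sketch: graceful degradation by remapping the operations of a
# failed functional unit (FU) onto surviving compatible FUs. The data path,
# binding, and unit-latency timing model below are hypothetical.

from collections import defaultdict

# op -> (operation type, FU it is bound to); each op takes one cycle.
binding = {"m1": ("mul", "MUL0"), "m2": ("mul", "MUL1"),
           "a1": ("add", "ADD0"), "a2": ("add", "ADD0"),
           "m3": ("mul", "MUL1")}
fu_types = {"MUL0": "mul", "MUL1": "mul", "ADD0": "add"}

def rebind_after_failure(binding, fu_types, failed_fu):
    """Move ops off the failed FU onto the least-loaded surviving FU of the
    same type; return the new binding (None if no compatible FU survives)."""
    load = defaultdict(int)
    new_binding = {}
    for op, (kind, fu) in binding.items():
        if fu != failed_fu:
            new_binding[op] = (kind, fu)
            load[fu] += 1
    for op, (kind, fu) in binding.items():
        if fu != failed_fu:
            continue
        candidates = [u for u, t in fu_types.items()
                      if t == kind and u != failed_fu]
        if not candidates:
            return None                   # no graceful degradation possible
        target = min(candidates, key=lambda u: load[u])
        new_binding[op] = (kind, target)
        load[target] += 1
    return new_binding

def schedule_length(binding):
    """With unit-latency ops and no data dependences, length = max FU load."""
    load = defaultdict(int)
    for _, (_, fu) in binding.items():
        load[fu] += 1
    return max(load.values())

print(schedule_length(binding))                          # fault-free length
print(schedule_length(rebind_after_failure(binding, fu_types, "MUL0")))
```
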
  • Analysis of on-chip inductance effects for distributed RLC interconnects

    Page(s): 904 - 915

    This paper introduces an accurate analysis of on-chip inductance effects for distributed RLC interconnects that takes into account both the series resistance and the output parasitic capacitance of the driver. Using rigorous first-principles calculations, accurate expressions for the transfer function of these lines and their time-domain response are presented for the first time. Building on these expressions, a new and computationally efficient performance optimization technique for distributed RLC interconnects is introduced. The technique is employed to analyze the impact of line inductance on circuit behavior and to illustrate the implications of technology scaling on wire inductance. It is shown that the reduction in driver output resistance and input capacitance with scaling can make deep submicron designs increasingly susceptible to inductance effects if global interconnects are not scaled. For scaled global interconnects with increasing line resistance per unit length, as prescribed by the International Technology Roadmap for Semiconductors, the effect of inductance on interconnect performance actually diminishes.

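    As a back-of-the-envelope companion to the abstract above (a standard lumped second-order approximation, not the distributed expressions derived in the paper), the damping factor of the driver-plus-wire RLC section indicates when inductive ringing matters; all parameter values below are invented.

```python
# Rough, lumped illustration (not the paper's distributed analysis): treat the
# driver output resistance plus the total wire resistance, the total wire
# inductance, and the total wire capacitance (plus load) as one series RLC
# section and check its damping factor. zeta < 1 means an underdamped,
# inductance-dominated response. All numbers below are made up.

from math import sqrt

def damping_factor(r_drv, c_load, length_mm, r_per_mm, l_per_mm, c_per_mm):
    R = r_drv + r_per_mm * length_mm          # total series resistance (ohm)
    L = l_per_mm * length_mm                  # total inductance (H)
    C = c_per_mm * length_mm + c_load         # total capacitance (F)
    return (R / 2.0) * sqrt(C / L)            # zeta of the lumped RLC section

# Hypothetical 5 mm global wire driven by a strong (low-resistance) driver.
zeta = damping_factor(r_drv=15.0, c_load=20e-15, length_mm=5.0,
                      r_per_mm=10.0, l_per_mm=0.4e-9, c_per_mm=0.2e-12)
print(f"zeta = {zeta:.2f} -> {'inductance matters' if zeta < 1 else 'RC-like'}")
```
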
  • A constructive genetic algorithm for gate matrix layout problems

    Page(s): 969 - 974

    This paper describes an application of a constructive genetic algorithm (CGA) to the gate matrix layout problem (GMLP). The GMLP arises in very-large-scale integration design and consists of assigning a set of circuit nodes (gates) to an optimal sequence such that the layout area is minimized, i.e., such that the number of tracks needed to cover the gate interconnections is minimized. The CGA has a number of new features compared to a traditional genetic algorithm, including a population of dynamic size composed of schemata and structures and the possibility of using heuristics in the structure representation and in the fitness function definitions. The application of the CGA to the GMLP uses a 2-opt-like heuristic to define the fitness functions and the mutation operator. Computational tests are presented on instances taken from the literature.

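    The cost being optimized can be made concrete with a short sketch of the standard GMLP track-count evaluation (the evaluation only, not the constructive GA): for a left-to-right gate ordering, each net spans from its leftmost to its rightmost gate, and the tracks required equal the maximum number of nets crossing any gate column. The small instance below is made up.

```python
# Illustrative GMLP cost evaluation (not the constructive GA itself): given a
# left-to-right ordering of the gates, each net occupies one horizontal track
# from its leftmost to its rightmost gate, so the number of tracks required is
# the maximum number of nets overlapping any gate column. Example data are
# made up.

def track_count(gate_order, nets):
    """nets: dict net -> set of gates it connects; returns tracks needed."""
    pos = {g: i for i, g in enumerate(gate_order)}
    spans = [(min(pos[g] for g in gates), max(pos[g] for g in gates))
             for gates in nets.values()]
    return max(sum(1 for lo, hi in spans if lo <= col <= hi)
               for col in range(len(gate_order)))

nets = {"n1": {"g1", "g3"}, "n2": {"g2", "g4"},
        "n3": {"g1", "g4"}, "n4": {"g3", "g5"}}

print(track_count(["g1", "g2", "g3", "g4", "g5"], nets))  # one ordering: 4 tracks
print(track_count(["g2", "g4", "g1", "g3", "g5"], nets))  # better ordering: 2 tracks
```
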
  • Static noise analysis for digital integrated circuits in partially depleted silicon-on-insulator technology

    Page(s): 916 - 927

    This paper extends transistor-level static noise analysis to consider the unique features of partially depleted silicon-on-insulator (PD-SOI) technology: floating-body-induced threshold voltage variations and parasitic bipolar leakage currents. This involves a unique state-diagram abstraction of the device physics determining the body potential of PD-SOI FETs. Based on this picture, a simple model of the body voltage is derived which takes into account modest knowledge of which nets have dependable regular switching activity. Results are presented using a commercial static noise analysis tool incorporating these extensions and comparisons are made with SPICE.

  • Automatic generation of synthetic sequential benchmark circuits

    Page(s): 928 - 940

    The design of programmable logic architectures and supporting computer-aided design tools fundamentally requires both a good understanding of the combinatorial nature of netlist graphs and sufficient quantities of realistic examples to evaluate or benchmark the results. In this paper, the authors investigate these two issues. They introduce an abstract model for describing sequential circuits and a collection of statistical parameters for better understanding the nature of circuits. Based upon this model, they introduce and formally define the signature of a circuit netlist and the signature equivalence of netlists. They give an algorithm (GEN) for generating sequential benchmark netlists, significantly expanding previous work (Hutton et al., 1998) which generated purely combinational circuits. By comparing synthetic circuits to existing benchmarks and random graphs, they show that GEN circuits are significantly more realistic than random graphs. The authors further illustrate the viability of the methodology by applying GEN to a case study comparing two partitioning algorithms.

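    As a loose illustration of characterizing a netlist by statistics (the profile below is hypothetical, far simpler than the signature defined in the paper, and ignores sequential elements), one can collect node counts, combinational depth, and a fanout histogram from a gate-level graph and compare generated circuits with real ones on those numbers.

```python
# Hypothetical netlist profile, far simpler than the signature defined in the
# paper: node counts, combinational depth, and fanout distribution collected
# from a DAG of gates. Example graph is made up.

from collections import Counter

def profile(fanins, primary_inputs):
    """fanins: node -> list of fanin nodes (combinational DAG)."""
    depth = {}
    def d(n):                               # longest path from a primary input
        if n in primary_inputs:
            return 0
        if n not in depth:
            depth[n] = 1 + max(d(f) for f in fanins[n])
        return depth[n]
    fanout = Counter(f for ins in fanins.values() for f in ins)
    return {
        "nodes": len(fanins),
        "inputs": len(primary_inputs),
        "depth": max(d(n) for n in fanins),
        "fanout_histogram": Counter(fanout[n]
                                    for n in list(fanins) + list(primary_inputs)),
    }

fanins = {"g1": ["a", "b"], "g2": ["b", "c"], "g3": ["g1", "g2"],
          "g4": ["g3", "c"]}
print(profile(fanins, {"a", "b", "c"}))
```
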
  • Design of reconfigurable composite microsystems based on hardware/software codesign principles

    Page(s): 987 - 995

    Composite microsystems that integrate mechanical and fluidic components with electronics are emerging as the next generation of system-on-a-chip. Custom microsystems are expensive, inflexible, and unsuitable for high-volume production. The authors address this problem by leveraging hardware/software codesign principles to design reconfigurable composite microsystems. They partition the system design parameters into nonreconfigurable and reconfigurable categories. In this way, operational flexibility is enhanced and the microsystems are designed for a wider range of applications. In addition, the Taguchi robust design method is used to make the system robust, and response surface methodologies are used to explore the widest performance range for the system. A case study is presented for a microvalve, which serves as a representative microelectrofluidic device.

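    To make the robust-design step concrete, the sketch below computes the textbook Taguchi signal-to-noise ratios from repeated measurements taken under noise; these are the standard formulas, not the paper's specific objectives, and the sample data are invented.

```python
# Standard Taguchi signal-to-noise (S/N) ratios used in robust design; larger
# S/N means less sensitivity to noise. These are the textbook formulas, not
# the paper's specific objective functions, and the sample data are invented.

from math import log10
from statistics import mean, variance

def sn_smaller_the_better(y):          # e.g. leakage, response time
    return -10 * log10(mean(v * v for v in y))

def sn_larger_the_better(y):           # e.g. flow rate, actuation force
    return -10 * log10(mean(1.0 / (v * v) for v in y))

def sn_nominal_the_best(y):            # e.g. hitting a target deflection
    return 10 * log10(mean(y) ** 2 / variance(y))

# Hypothetical microvalve response times (ms) measured under noise conditions.
design_a = [1.9, 2.1, 2.0, 2.2]
design_b = [1.2, 3.1, 1.6, 2.9]
print(sn_smaller_the_better(design_a), sn_smaller_the_better(design_b))
```
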
  • Application-specific clustered VLIW datapaths: early exploration on a parameterized design space

    Page(s): 889 - 903

    Specialized clustered very large instruction word (VLIW) processors combined with effective compilation techniques enable aggressive exploitation of the high instruction-level parallelism inherent in many embedded media applications, while unlocking a variety of possible performance/cost tradeoffs. In this work, the authors propose a methodology to support early design space exploration of clustered VLIW datapaths, in the context of a specific target application. They argue that, due to the large size and complexity of the design space, the early design space exploration phase should consider only design space parameters that have a first-order impact on two key physical figures of merit: clock rate and power dissipation. These parameters were found to be maximum cluster capacity, number of clusters, and bus (interconnect) capacity. Experimental validation of their design space exploration algorithm shows that a thorough exploration of the complex design space can be performed very efficiently in this abstract parameterized design space.

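    A toy version of such an early exploration loop is sketched below: it enumerates the three first-order parameters named in the abstract and keeps the Pareto-optimal points under abstract cycle-count and cost estimates. The estimate functions and parameter ranges are hypothetical placeholders for the application-specific models an actual exploration would plug in.

```python
# Toy early design-space exploration over the three first-order parameters the
# paper identifies: number of clusters, cluster capacity, and bus capacity.
# The cost/performance estimates below are hypothetical placeholders.

from itertools import product

def est_cycles(n_clusters, capacity, bus):
    # Hypothetical: more issue slots help, inter-cluster copies hurt.
    issue_slots = n_clusters * capacity
    copy_penalty = max(0, n_clusters - 1) * (4.0 / bus)
    return 1000.0 / issue_slots + 10.0 * copy_penalty

def est_cost(n_clusters, capacity, bus):
    # Hypothetical area/energy proxy: capacity hurts superlinearly per cluster.
    return n_clusters * capacity ** 1.5 + 2.0 * bus

def explore(clusters=(1, 2, 4), capacities=(2, 4, 8), buses=(1, 2, 4)):
    points = [((c, k, b), est_cycles(c, k, b), est_cost(c, k, b))
              for c, k, b in product(clusters, capacities, buses)]
    # Keep the Pareto front: no other point is faster *and* cheaper.
    return [p for p in points
            if not any(q[1] <= p[1] and q[2] <= p[2] and q != p for q in points)]

for cfg, cycles, cost in sorted(explore(), key=lambda p: p[2]):
    print(cfg, round(cycles, 1), round(cost, 1))
```
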
  • An automatic test pattern generator for minimizing switching activity during scan testing

    Page(s): 954 - 968

    An automatic test pattern generation (ATPG) technique is proposed that reduces switching activity during testing of sequential circuits that have full scan. The objective is to permit safe and inexpensive testing of low-power circuits and bare dies that would otherwise require expensive heat removal equipment for testing at high speed. The approach works with standard scan designs that are commonly used and typically have significantly lower overhead than enhanced scan designs. The proposed ATPG exploits all possible "don't cares" that occur during scan shifting, test application, and response capture to minimize switching activity in the circuit under test. An ATPG that minimizes the number of state inputs that are assigned specific binary values has been developed. Don't cares at state inputs are assigned binary values that cause the minimum number of transitions during scan shifting, and don't cares at primary inputs during scan shifting and capture are used to block gates that may have transitions during scan shifting. The proposed technique has been implemented and the generated tests are compared with those generated by a simple PODEM implementation for full scan versions of ISCAS89 benchmark circuits.

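    One widely used way to assign scan don't-cares for low shift power is adjacent (repeat) fill, sketched below together with a simple transition count of the scan-in vector. This is a common heuristic consistent with the goal described above, not necessarily the paper's exact assignment procedure; the test cube is invented.

```python
# Adjacent (repeat) fill: each don't-care bit in the scan-in vector copies the
# last specified value, so no new transitions are introduced during shifting.
# A common low-power X-filling heuristic, not necessarily the paper's exact
# assignment procedure; the test cube is invented.

def adjacent_fill(cube):
    """cube: string over {'0','1','X'} ordered as shifted into the chain."""
    filled, last = [], '0'            # assume '0' if the cube starts with X
    for bit in cube:
        last = bit if bit != 'X' else last
        filled.append(last)
    return "".join(filled)

def scan_in_transitions(vector):
    """Number of adjacent bit transitions: a simple proxy for shift power."""
    return sum(a != b for a, b in zip(vector, vector[1:]))

cube = "1XX0XXX1XX"
for fill in (adjacent_fill(cube),                              # repeat fill
             cube.replace('X', '0'),                           # naive 0-fill
             "".join('10'[i % 2] for i in range(len(cube)))):  # worst case
    print(fill, scan_in_transitions(fill))
```
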
  • n-pass n-detection fault simulation and its applications

    Page(s): 980 - 986

    An n-detection fault simulation process called n-pass n-detection fault simulation is described. n-pass n-detection fault simulation can be implemented such that it has the same computational complexity (and run time) as the conventional n-detection fault simulation process; however, it is more effective for applications where it is necessary to identify tests that detect large numbers of faults. One such application considered in this work is that of ordering a given test set so as to steepen its fault coverage curve. Experimental results are presented to demonstrate that improved test ordering is obtained by using the proposed n-pass n-detection fault simulation process using approximately the same run time as when conventional n-detection fault simulation is used. n-pass n-detection fault simulation is also effective in cases where the value of n is required to change dynamically during the fault simulation process. This is useful in order to accommodate a limit on the run time of n-detection fault simulation, or when it is not possible to specify a value for n in advance.

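    The test-ordering application can be illustrated with a simple greedy reordering by incremental fault coverage (an illustrative baseline, not the n-pass n-detection procedure itself); the per-test detection sets below are invented and would come from fault simulation in practice.

```python
# Greedy reordering of a fixed test set so that each next test adds the most
# not-yet-detected faults, which steepens the fault-coverage curve. This is an
# illustrative baseline for the application discussed above, not the n-pass
# n-detection procedure itself; the detection data are invented.

def steepest_order(detects):
    """detects: dict test -> set of faults it detects (from fault simulation)."""
    remaining, covered, order = dict(detects), set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

detects = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {4, 5, 6, 7}, "t4": {1, 7}}

order = steepest_order(detects)
covered, curve = set(), []
for t in order:
    covered |= detects[t]
    curve.append((t, len(covered)))
print(curve)   # cumulative number of detected faults after each test
```
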
  • BDD minimization by scatter search

    Page(s): 974 - 979

    Reduced ordered binary decision diagrams (BDDs) are a data structure for the representation and manipulation of Boolean functions. The variable ordering strongly influences the size of a BDD, which can range from linear to exponential in the number of variables. In this paper, the authors study the BDD minimization problem based on scatter search optimization. Scatter search offers a reasonable compromise between quality (BDD reduction) and run time. On smaller benchmarks it delivers nearly optimal BDD sizes in less time than the exact algorithm; for larger benchmarks it delivers smaller BDD sizes than a genetic algorithm or simulated annealing at the expense of longer run time.

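    The optimization loop itself can be sketched generically: a bare-bones scatter-search skeleton over variable orderings with a stand-in size estimator. The reference-set rules, combination operator, local improvement, and the bdd_size() placeholder below are simplifications; a real implementation would rebuild the BDD with a package to evaluate each candidate ordering.

```python
# Bare-bones scatter-search skeleton over BDD variable orderings. The
# reference-set rules, combination operator, and local improvement below are
# simplified placeholders, and bdd_size() is a stand-in cost (in practice a
# BDD package would rebuild the diagram under the candidate ordering).

import random

VARS = list("abcdefgh")
PAIRS = [("a", "b"), ("c", "d"), ("e", "f"), ("g", "h"), ("a", "e")]

def bdd_size(order):
    """Stand-in cost: related variables far apart tend to blow up BDDs."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in PAIRS)

def improve(order):
    """2-swap hill climbing as a simple local improvement step."""
    best = list(order)
    for i in range(len(best)):
        for j in range(i + 1, len(best)):
            cand = list(best)
            cand[i], cand[j] = cand[j], cand[i]
            if bdd_size(cand) < bdd_size(best):
                best = cand
    return best

def combine(a, b):
    """Order crossover: keep a prefix of a, append b's remaining variables."""
    head = a[:len(a) // 2]
    return head + [v for v in b if v not in head]

def scatter_search(ref_size=5, iterations=20, seed=1):
    random.seed(seed)
    refset = sorted((improve(random.sample(VARS, len(VARS)))
                     for _ in range(ref_size)), key=bdd_size)
    for _ in range(iterations):
        a, b = random.sample(refset, 2)
        child = improve(combine(a, b))
        worst = max(refset, key=bdd_size)
        if bdd_size(child) < bdd_size(worst) and child not in refset:
            refset[refset.index(worst)] = child
    return min(refset, key=bdd_size)

best = scatter_search()
print(best, bdd_size(best))
```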

Aims & Scope

The purpose of this Transactions is to publish papers of interest to individuals in the areas of computer-aided design of integrated circuits and systems.

Meet Our Editors

Editor-in-Chief

Vijaykrishnan Narayanan
Pennsylvania State University
Dept. of Computer Science and Engineering
354D IST Building
University Park, PA 16802, USA
vijay@cse.psu.edu