
IEEE Transactions on Very Large Scale Integration (VLSI) Systems

Issue 4 • April 2008

  • Table of contents

    Page(s): C1
    PDF (43 KB)
    Freely Available from IEEE
  • IEEE Transactions on Very Large Scale Integration (VLSI) Systems publication information

    Page(s): C2
    PDF (36 KB)
    Freely Available from IEEE
  • Guest Editorial Special Section on Design Verification and Validation

    Page(s): 337 - 338
    PDF (349 KB)
    Freely Available from IEEE
  • MMV: A Metamodeling Based Microprocessor Validation Environment

    Page(s): 339 - 352
    PDF (1774 KB) | HTML

    With increasing levels of integration of multiple processing cores and new features to support software functionality, recent generations of microprocessors face difficult validation challenges. The systematic validation approach starts with defining the correct behaviors of the hardware and software components and their interactions. This requires new modeling paradigms that support multiple levels of abstraction. Mutual consistency of models at adjacent levels of abstraction is crucial for manual refinement of models from the full chip level to production register transfer level, which is likely to remain the dominant design methodology of complex microprocessors in the near future. In this paper, we present microprocessor modeling and validation environment (MMV), a validation environment based on metamodeling, that can be used to create models at various abstraction levels and to generate most of the important validation collaterals, viz., simulators, checkers, coverage, and test generation tools. We illustrate the functionalities in MMV by modeling a 32-bit reduced instruction set computer processor at the system, instruction set architecture, and microarchitecture levels. We show by examples how consistency across levels is enforced during modeling and also how to generate constraints for automatic test generation.

  • A Refinement-Based Compositional Reasoning Framework for Pipelined Machine Verification

    Page(s): 353 - 364
    PDF (595 KB) | HTML

    We present a refinement-based compositional framework for showing that pipelined machines satisfy the same safety and liveness properties as their non-pipelined specifications. Our framework consists of a set of convenient, easily applicable, and complete compositional proof rules. We show how to apply our compositional framework in the context of microprocessor verification to verify both abstract, term-level models and executable, bit-level models. Our framework enables us to verify machine models that are significantly more complex than the kinds of models that can be verified using current state-of-the-art automated decision procedures. For example, using our framework, we can verify a 32-bit, 10-stage, executable pipelined machine model. In addition, our compositional framework offers drastic improvements in the context of design debugging over monolithic approaches, in part because bugs are isolated to particular steps in the compositional proof and because the counterexamples generated are much smaller.

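    As a rough intuition for the refinement obligation described above (an illustrative toy in Python, not the authors' framework), the sketch below checks that a 2-stage pipelined accumulator commits exactly the architectural states its non-pipelined instruction-set specification produces. The machine, the program, and the commit-based comparison are all assumptions made for illustration.

        # Toy 2-stage pipelined accumulator checked against its ISA-level
        # specification (illustrative only; not the paper's models or proof rules).

        PROGRAM = [("add", 3), ("add", 5), ("sub", 2), ("add", 7)]

        def isa_step(acc, instr):
            """Non-pipelined specification: execute one instruction atomically."""
            op, val = instr
            return acc + val if op == "add" else acc - val

        def run_pipeline(program):
            """2-stage pipeline (fetch -> execute); yields acc after each commit."""
            acc, fetched = 0, None
            for nxt in program + [None]:       # trailing None drains the pipeline
                if fetched is not None:        # execute stage commits an instruction
                    acc = isa_step(acc, fetched)
                    yield acc
                fetched = nxt                  # fetch stage latches the next instruction

        def check_refinement(program):
            """Committed pipeline state must match the specification step for step."""
            spec_acc = 0
            for i, impl_acc in enumerate(run_pipeline(program)):
                spec_acc = isa_step(spec_acc, program[i])
                assert impl_acc == spec_acc, f"divergence after instruction {i}"
            return True

        print("refinement holds on this program:", check_refinement(PROGRAM))
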
  • Novel Probabilistic Combinational Equivalence Checking

    Page(s): 365 - 375
    PDF (684 KB) | HTML

    Exact approaches to combinational equivalence checking, such as automatic test pattern generation-based, binary decision diagram (BDD)-based, satisfiability-based, and hybrid approaches, have been proposed over the last two decades. Recently, we proposed another exact approach using signal probability. This probability-based approach assigns probability values to the primary inputs and compares the corresponding output probabilities of two networks via a probability calculation process to assert whether they are equivalent. The shortcoming of all these exact approaches is that if two networks are too complex to be handled, their equivalence cannot be determined, even with tolerance. An approximate approach, named the probabilistic approach, is a suitable way to give such an answer for those large circuits. However, despite generally being more efficient than exact approaches, the probabilistic approach faces a major concern: a nonzero aliasing rate, i.e., the possibility that two different networks have the same output probabilities/signatures. Thus, minimizing the aliasing rate is essential in this area. In this paper, we propose a novel probabilistic approach based on the exact probability-based approach. Our approach exploits the proposed probabilistic equivalence checking architecture to efficiently calculate the signature of a network with a virtually zero aliasing rate. We conduct experiments on a set of benchmark circuits, including large and complex circuits, with our probabilistic approach. Experimental results show that the aliasing rate is virtually zero, e.g., 10^-6013. Also, to demonstrate the effectiveness of our approach for error detection, we randomly inject errors into networks for comparison. As a result, our approach detects the errors more efficiently than a commercial tool, Cadence LEC, does. Although our approach is not exact, it is practically useful. Thus, it can effectively complement exact methods to improve the efficiency and effectiveness of combinational equivalence checking algorithms.

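    The core idea can be illustrated with a toy calculation (this is not the paper's checking architecture): assign probabilities to the primary inputs, compute each network's exact output probability, and declare the networks probably equivalent when the signatures match. Aliasing is the residual risk that two different functions share a signature.

        # Toy signal-probability equivalence check (not the paper's architecture).
        from itertools import product

        def output_probability(net, input_probs):
            """Exact output probability of `net` under independent input probabilities
            (computed by enumeration, which is fine for this toy)."""
            names = list(input_probs)
            prob = 0.0
            for bits in product((0, 1), repeat=len(names)):
                weight = 1.0
                for name, bit in zip(names, bits):
                    weight *= input_probs[name] if bit else 1.0 - input_probs[name]
                if net(dict(zip(names, bits))):
                    prob += weight
            return prob

        # Two structurally different networks implementing the same function
        # (De Morgan): f1 = NOT(a AND b), f2 = (NOT a) OR (NOT b).
        f1 = lambda v: 1 - (v["a"] & v["b"])
        f2 = lambda v: (1 - v["a"]) | (1 - v["b"])

        probs = {"a": 0.37, "b": 0.61}         # arbitrary non-trivial input probabilities
        p1 = output_probability(f1, probs)
        p2 = output_probability(f2, probs)
        print(p1, p2, "signatures match" if abs(p1 - p2) < 1e-12 else "networks differ")
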
  • Simulation Bounds for Equivalence Verification of Polynomial Datapaths Using Finite Ring Algebra

    Page(s): 376 - 387
    PDF (721 KB) | HTML

    This paper addresses simulation-based verification of high-level [algorithmic, behavioral, or register-transfer level (RTL)] descriptions of arithmetic datapaths that perform polynomial computations over finite word-length operands. Such designs are typically found in digital signal processing (DSP) for audio/video and multimedia applications, where the word-lengths of input and output signals (bit-vectors) are predetermined and fixed according to the desired precision. Initial descriptions of such systems are usually specified as Matlab/C code. These are then automatically translated into behavioral/RTL descriptions for subsequent hardware synthesis. In order to verify that the initial Matlab/C model is bit-true equivalent to the translated RTL, how many simulation vectors need to be applied? This paper derives some important results showing that exhaustive simulation is not necessary to prove or disprove their equivalence. To derive these results, we model the datapath computations as polynomial functions over finite integer rings of the form Z_{2^m}, where m corresponds to the bit-vector word-length. Subsequently, by exploring some number-theoretic and algebraic properties of these rings, we derive an upper bound on the number of simulation vectors required to prove equivalence or to identify bugs. Moreover, these vectors cannot be arbitrarily generated; we identify exactly those vectors that need to be simulated. Experiments are performed within practical computer-aided design (CAD) settings to demonstrate the validity and applicability of these results.

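    The flavor of the result can be seen in the univariate case (a sketch under simplifying assumptions, not the paper's general multivariate derivation): a polynomial with integer coefficients that vanishes modulo 2^m on the inputs 0, 1, ..., lambda-1, where lambda is the smallest n such that 2^m divides n!, vanishes on every input, so those lambda vectors decide equivalence of two m-bit polynomial datapaths.

        # Univariate sketch of the bounded-simulation idea (see the assumption above;
        # not the paper's multivariate derivation).

        def simulation_bound(m):
            """Smallest n such that 2**m divides n! -- the number of vectors needed."""
            n, twos = 0, 0
            while twos < m:
                n += 1
                k = n
                while k % 2 == 0:              # count the factors of 2 contributed by n
                    twos += 1
                    k //= 2
            return n

        def equivalent(f, g, m):
            """Decide f == g as functions over Z_{2^m} from the bounded vector set."""
            return all((f(x) - g(x)) % (1 << m) == 0 for x in range(simulation_bound(m)))

        m = 8                                   # 8-bit datapath, i.e., the ring Z_256
        f = lambda x: x * x + 34 * x            # one RTL-level polynomial form
        g = lambda x: (x + 17) ** 2 - 289       # an algebraically identical refactoring

        print("vectors needed:", simulation_bound(m))                     # 10 instead of 256
        print("f == g over Z_256:", equivalent(f, g, m))                  # True
        print("buggy variant:", equivalent(f, lambda x: x * x + 35 * x, m))  # False
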
  • Validating Power Architecture™ Technology-Based MPSoCs Through Executable Specifications

    Page(s): 388 - 396
    PDF (454 KB) | HTML

    Multiprocessor systems-on-chip (MPSoC) pose a considerable validation challenge due to their size and complexity. We approach the problem of MPSoC validation through a tool that employs a reusable abstract executable specification written in C++. The tool effectively leverages a simulation-based, trace-driven mechanism. Traces are computed by simulating a system level register-transfer level (RTL) implementation of an MPSoC. The tool then analyzes the traces for correctness by checking them across executions of the abstract executable specification. We have effectively used the tool on various live MPSoC design projects based on the Power Architecture technology (The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.). We demonstrate the effectiveness of the technique through results from these projects, where we uncovered a number of design errors not found by any other technique.

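    The trace-driven mechanism can be pictured with a generic sketch (the event format and the tiny reference model below are hypothetical, not the paper's specification or tool): an RTL simulation emits memory transactions, and the executable specification replays them, flagging any observed value the specification could not have produced.

        # Generic trace-driven checking sketch (hypothetical event format and
        # reference model).

        def check_trace(events):
            memory = {}                         # abstract specification state
            for i, (core, op, addr, data) in enumerate(events):
                if op == "write":
                    memory[addr] = data
                elif op == "read":
                    expected = memory.get(addr, 0)
                    if data != expected:
                        return (f"event {i}: core {core} read {data} at {addr:#x}, "
                                f"but the specification expects {expected}")
                else:
                    return f"event {i}: unknown operation {op!r}"
            return "trace is consistent with the specification"

        rtl_trace = [
            (0, "write", 0x1000, 7),
            (1, "read",  0x1000, 7),
            (1, "write", 0x1000, 9),
            (0, "read",  0x1000, 7),            # stale value: the checker flags this
        ]
        print(check_trace(rtl_trace))
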
  • IEEE Standard 1500 Compliance Verification for Embedded Cores

    Page(s): 397 - 407
    PDF (2138 KB) | HTML

    Core-based design and reuse are the two key elements for efficient system-on-chip (SoC) development. Unfortunately, they also introduce new challenges in SoC testing, such as core test reuse and the need for a common test infrastructure working with cores originating from different vendors. The IEEE 1500 Standard for Embedded Core Testing addresses these issues by proposing a flexible hardware test wrapper architecture for embedded cores, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several intellectual property providers have already announced IEEE Standard 1500 compliance in both existing and future design blocks. In this paper, we address the problem of guaranteeing the compliance of a wrapper architecture and its CTL description to the IEEE Standard 1500. This step is mandatory to fully trust the wrapper functionalities in applying the test sequences to the core. We present a systematic methodology to build a verification framework for IEEE Standard 1500 compliant cores, allowing core providers and/or integrators to verify the compliance of their products (sold or purchased) to the standard.

  • Automatic Constraint Based Test Generation for Behavioral HDL Models

    Page(s): 408 - 421
    PDF (789 KB) | HTML

    With the emergence of complex high-performance microprocessors, functional test generation has become a crucial design step. Constraint-based test generation is a well-studied directed behavioral-level functional test generation paradigm. The paradigm involves conversion of a given circuit model into a set of constraints and employing constraint solvers to generate tests for it. However, automatic extraction of constraints from a given behavioral hardware description language (HDL) model has remained a challenging open problem. This paper proposes an approach for automatic extraction of word-level model constraints from a behavioral Verilog HDL description. The scenarios to be tested are also expressed as constraints. The model and the scenario constraints are solved together using an integer solver to arrive at the necessary functional test. The effectiveness of the approach is demonstrated by automatically generating the constraint models for 1) an exclusive-shared-invalid multiprocessor cache coherency model and 2) the 16-bit DLX architecture, from their respective Verilog-based behavioral models. Experimental results are presented in which test vectors are generated for high-level scenarios, such as pipeline hazards and cache misses, spanning multiple time-frames.

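    Only the final solving step is sketched below (the paper's contribution is the automatic extraction of such constraints from behavioral Verilog; the toy hazard model and the choice of the z3 integer solver here are illustrative assumptions): model constraints and a scenario constraint are handed to a solver, and any satisfying assignment is a directed test.

        # Solving model + scenario constraints with an integer solver (z3 is one
        # possible choice; the tiny hazard model is an illustrative assumption).
        from z3 import Ints, Solver, If, Or, sat

        d1, s2a, s2b, hazard = Ints("d1 s2a s2b hazard")

        s = Solver()
        # Model constraints: 8 architectural registers; the hazard flag mirrors how
        # the design would detect instruction 2 reading instruction 1's destination.
        s.add(0 <= d1, d1 <= 7, 0 <= s2a, s2a <= 7, 0 <= s2b, s2b <= 7)
        s.add(hazard == If(Or(s2a == d1, s2b == d1), 1, 0))
        # Scenario constraint: "exercise a read-after-write hazard".
        s.add(hazard == 1)

        if s.check() == sat:
            m = s.model()
            print("directed test: d1 =", m[d1], " s2a =", m[s2a], " s2b =", m[s2b])
        else:
            print("scenario unreachable under the model constraints")
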
  • Fault Emulation for Dependability Evaluation of VLSI Systems

    Page(s): 422 - 431
    PDF (1039 KB) | HTML

    Advances in semiconductor technologies are greatly increasing the likelihood of fault occurrence in deep-submicrometer manufactured VLSI systems. The dependability assessment of VLSI critical systems is a hot topic that requires further research. Field-programmable gate arrays (FPGAs) have recently been proposed as a means for speeding up the fault injection process in VLSI system models (fault emulation) and for reducing the cost of fixing any error, due to their applicability in the first steps of the development cycle. However, only a reduced set of fault models, mainly stuck-at and bit-flip, has been considered in fault emulation approaches. This paper describes the procedures to inject a wide set of faults representative of deep-submicrometer technology, such as stuck-at, bit-flip, pulse, indetermination, stuck-open, delay, short, open-line, and bridging, using the best suited FPGA-based technique. This paper also sets some basic guidelines for comparing VLSI systems in terms of their availability and safety, which is mandatory in mission- and safety-critical application contexts. This represents a step forward in the dependability benchmarking of VLSI systems and towards the definition of a framework for their evaluation and comparison in terms of performance, power consumption, and dependability.

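    The fault-injection idea can be illustrated in software (the paper emulates faults in FPGA hardware; the toy gate-level adder and saboteur below are illustrative assumptions): a fault forces one internal net, and a faulty run is compared against the golden run to see which input vectors detect it.

        # Software sketch of saboteur-style fault injection on a 1-bit full adder
        # (illustrative only; the paper injects a much wider set of fault models
        # into FPGA-emulated circuit models).
        from itertools import product

        def full_adder(a, b, cin, fault=None):
            """Gate-level full adder; `fault = (net_name, stuck_value)` forces one net."""
            def drive(name, value):
                return fault[1] if fault and fault[0] == name else value
            s1   = drive("s1",   a ^ b)
            c1   = drive("c1",   a & b)
            c2   = drive("c2",   s1 & cin)
            summ = drive("sum",  s1 ^ cin)
            cout = drive("cout", c1 | c2)
            return summ, cout

        for net in ("s1", "c1", "c2", "sum", "cout"):
            for stuck in (0, 1):
                detected = sum(
                    full_adder(a, b, c) != full_adder(a, b, c, fault=(net, stuck))
                    for a, b, c in product((0, 1), repeat=3)
                )
                print(f"stuck-at-{stuck} on {net}: detected by {detected}/8 input vectors")
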
  • Adaptive Cooling of Integrated Circuits Using Digital Microfluidics

    Page(s): 432 - 443
    PDF (2545 KB) | HTML

    Thermal management is critical for integrated circuit (IC) design. With each new IC technology generation, feature sizes decrease, while operating speeds and package densities increase. These factors contribute to elevated die temperatures detrimental to circuit performance and reliability. Furthermore, hot spots due to spatially nonuniform heat flux in ICs can cause physical stress that further reduces reliability. While a number of chip cooling techniques have been proposed in the literature, most are still unable to address the varying thermal profiles of an IC, and their capability to remove a large amount of heat is undermined by their lack of flow reconfigurability. We present an alternative cooling technique based on a recently invented "digital microfluidic" platform. This novel digital fluid handling platform uses a phenomenon known as electrowetting, and allows a vast array of discrete droplets of liquid, ranging from microliters to nanoliters, and potentially picoliters, to be independently moved along a substrate. While this technology was originally developed for a biological and chemical lab-on-a-chip, we show how it can be adapted for use as a fully reconfigurable, adaptive cooling platform.

  • Design Space Exploration for 3-D Cache

    Page(s): 444 - 455
    PDF (1995 KB) | HTML

    As technology scales, interconnects have become a major performance bottleneck and a major source of power consumption for sub-micron integrated circuit (IC) chips. One promising option to mitigate the interconnect challenges is 3-D ICs, in which multiple device layers are stacked on the same chip. In this paper, we explore the architectural design of cache memories using 3-D circuits. We present 3D-Cacti, a delay and energy estimation tool, to explore different 3-D design options for partitioning a cache. The tool allows partitioning of the cache across different device layers at various levels of granularity. The tool has been validated by comparing its results with those obtained from circuit simulation of custom 3-D layouts. We also explore the effects of various cache partitioning parameters and 3-D technology parameters on delay and energy to demonstrate the utility of the tool.

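    The kind of trade-off such a tool explores can be sketched with a toy sweep (the delay and energy expressions and every coefficient below are made-up placeholders, not 3D-Cacti's validated circuit models): partitioning an array across more device layers shortens the wires that dominate access time but adds inter-layer via overhead.

        # Toy design-space sweep over the number of device layers (placeholder
        # models only; shows the shape of the exploration loop, not real numbers).

        def toy_delay(rows, cols, layers, via_delay=0.15):
            wordline = cols / layers            # wordline shortened by layer partitioning
            bitline  = rows                     # bitlines left intact in this sweep
            return 0.01 * wordline + 0.02 * bitline + (layers - 1) * via_delay

        def toy_energy(rows, cols, layers, via_energy=0.05):
            return 0.002 * rows * (cols / layers) + (layers - 1) * via_energy

        rows, cols = 256, 512                   # one cache subarray of 1-bit cells
        for layers in (1, 2, 4, 8):
            print(f"{layers} layer(s): delay ~ {toy_delay(rows, cols, layers):7.2f}, "
                  f"energy ~ {toy_energy(rows, cols, layers):7.2f} (arbitrary units)")
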
  • A High-Speed Variation-Tolerant Interconnect Technique for Sub-Threshold Circuits Using Capacitive Boosting

    Page(s): 456 - 465
    PDF (2626 KB) | HTML

    This paper describes an interconnect technique for subthreshold circuits to improve global wire delay and reduce the delay variation due to process-voltage-temperature (PVT) fluctuations. By internally boosting the gate voltage of the driver transistors, the operating region is shifted from the subthreshold region to the super-threshold region, enhancing performance and improving tolerance to PVT variations. Simulations of a clock distribution network using the proposed driver show a 66%-76% reduction in the 3-sigma clock skew and an 84%-88% reduction in clock tree delay compared with conventional drivers. A 0.4-V test chip has been fabricated in a 0.18-μm 6-metal CMOS process to demonstrate the effectiveness of the proposed scheme. Measurement results show 2.6× faster switching speed and 2.4× less delay sensitivity under temperature variations.

  • An Interactive Design Environment for C-Based High-Level Synthesis of RTL Processors

    Page(s): 466 - 475
    PDF (2854 KB) | HTML

    Much effort in register transfer level (RTL) design has been devoted to developing "push-button" types of tools. However, given the highly complex nature of RTL design and the lack of designer control, push-button synthesis is not accepted by many designers. Interactive design with the assistance of algorithms and tools can be more effective if it gives the designer control over the steps of synthesis. In this paper, we propose an interactive RTL design environment which enables designers to control the design steps and to integrate hardware components into a system. Our design environment targets a generic RTL processor architecture and supports pipelining, multicycling, and chaining. Tasks in the RTL design process include clock definition, component allocation, scheduling, binding, and validation. In our interactive environment, the user can control the design process at every stage, observe the effects of design decisions, and manually override synthesis decisions at will. We present a set of experimental results that demonstrate the benefits of our approach. Our combination of automated tools and interactive control by the designer quickly produces RTL designs with better performance than fully automatic results and comparable to fully manually optimized designs.

  • Multi-Mechanism Reliability Modeling and Management in Dynamic Systems

    Page(s): 476 - 487
    PDF (1381 KB) | HTML

    Reliability failure mechanisms, such as time-dependent dielectric breakdown (TDDB), electromigration, and negative bias temperature instability (NBTI), have become a key concern in integrated circuit (IC) design. The traditional approach to reliability qualification assumes that the system will operate continuously at maximum performance under worst-case voltage and temperature conditions. In reality, due to widely varying environmental conditions and an increased use of dynamic control techniques, such as dynamic voltage scaling and sleep modes, the typical system spends a very small fraction of its operational time at maximum voltage and temperature. In this paper, we show how this results in a reliability "slack" that can be leveraged to provide increased performance during periods of peak processing demand. We develop a novel, real-time reliability model based on workload-driven conditions. Based on this model, we then propose a new dynamic reliability management (DRM) scheme that results in a 20%-35% performance improvement during periods of peak computational demand while ensuring the required reliability lifetime.

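    The "reliability slack" bookkeeping can be sketched generically (the acceleration model and all constants below are placeholders, not the paper's per-mechanism TDDB/electromigration/NBTI models): wear accrues faster at elevated voltage and temperature, so time spent below the qualification corner banks slack that a DRM policy can spend on short sprints above it.

        # Generic reliability-slack accounting (placeholder acceleration model and
        # constants; illustrative only).
        import math

        def wear_rate(voltage, temp_c, v_ref=1.0, t_ref_c=85.0, gamma=8.0, ea_over_k=5000.0):
            """Wear rate relative to the worst-case qualification corner (placeholder)."""
            t_k, t_ref_k = temp_c + 273.15, t_ref_c + 273.15
            voltage_term = math.exp(gamma * (voltage - v_ref))
            thermal_term = math.exp(-ea_over_k * (1.0 / t_k - 1.0 / t_ref_k))
            return voltage_term * thermal_term

        def lifetime_consumed(history, rated_hours=10 * 365 * 24):
            """Fraction of the rated lifetime used up by (hours, volts, degC) intervals."""
            return sum(hours * wear_rate(v, t) for hours, v, t in history) / rated_hours

        history = [
            (6000, 0.85, 55.0),    # low-voltage, cool operation banks slack
            (2000, 1.00, 85.0),    # nominal operation at the qualification corner
            (100,  1.10, 95.0),    # short sprint above the corner, paid for by the slack
        ]
        print(f"lifetime budget consumed so far: {lifetime_consumed(history):.1%}")
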
  • Characterization of a Novel Nine-Transistor SRAM Cell

    Page(s): 488 - 492
    PDF (685 KB) | HTML

    Data stability of SRAM cells has become an important issue with the scaling of CMOS technology. Memory banks are also important sources of leakage, since the majority of transistors in today's high-performance microprocessors are used for on-chip caches. A new nine-transistor (9T) SRAM cell is proposed in this paper for simultaneously reducing leakage power and enhancing data stability. The proposed 9T SRAM cell completely isolates the data from the bit lines during a read operation. The read static noise margin of the proposed circuit is thereby enhanced by 2× as compared to a conventional six-transistor (6T) SRAM cell. Idle 9T SRAM cells are placed into a super cutoff sleep mode, thereby reducing the leakage power consumption by 22.9% as compared to standard 6T SRAM cells in a 65-nm CMOS technology. The leakage power reduction and read stability enhancement provided by the new circuit technique are also verified under process parameter variations.

  • IEEE Transactions on Very Large Scale Integration (VLSI) Systems society information

    Page(s): C3
    PDF (25 KB)
    Freely Available from IEEE
  • IEEE Transactions on Very Large Scale Integration (VLSI) Systems Information for authors

    Page(s): C4
    PDF (27 KB)
    Freely Available from IEEE

Aims & Scope

Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chips and wafer fabrication, packaging, testing, and systems applications. Generation of specifications, design, and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor, and process levels.

To address this critical area through a common forum, the IEEE Transactions on VLSI Systems was founded. The editorial board, consisting of international experts, invites original papers which emphasize the novel system integration aspects of microelectronic systems, including interactions among system design and partitioning, logic and memory design, digital and analog circuit design, layout synthesis, CAD tools, chips and wafer fabrication, testing and packaging, and system level qualification. Thus, the coverage of this Transactions focuses on VLSI/ULSI microelectronic system integration.

Topics of special interest include, but are not strictly limited to, the following:
  • System Specification, Design and Partitioning
  • System-level Test
  • Reliable VLSI/ULSI Systems
  • High Performance Computing and Communication Systems
  • Wafer Scale Integration and Multichip Modules (MCMs)
  • High-Speed Interconnects in Microelectronic Systems
  • VLSI/ULSI Neural Networks and Their Applications
  • Adaptive Computing Systems with FPGA components
  • Mixed Analog/Digital Systems
  • Cost, Performance Tradeoffs of VLSI/ULSI Systems
  • Adaptive Computing Using Reconfigurable Components (FPGAs)


Meet Our Editors

Editor-in-Chief
Yehea Ismail
CND Director
American University in Cairo and Zewail City of Science and Technology
New Cairo, Egypt
y.ismail@aucegypt.edu