
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Issue 2 • February 2007

  • Table of contents

    Page(s): C1 - C4
  • IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems publication information

    Page(s): C2
  • Guest Editorial

    Page(s): 201 - 202

    The nine papers in this special section are expanded versions of papers first presented at the Fourteenth International Symposium on Field-Programmable Gate Arrays in 2006. The editorial briefly summarizes the articles included in the section.

  • Measuring the Gap Between FPGAs and ASICs

    Page(s): 203 - 215

    This paper presents experimental measurements of the differences between a 90-nm CMOS field programmable gate array (FPGA) and 90-nm CMOS standard-cell application-specific integrated circuits (ASICs) in terms of logic density, circuit speed, and power consumption for core logic. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and, thereby, improve FPGAs. We describe the methodology by which the measurements were obtained and show that, for circuits containing only look-up table-based logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 35. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories. We find that these blocks reduce this average area gap significantly to as little as 18 for our benchmarks, and we estimate that extensive use of these hard blocks could potentially lower the gap to below five. The ratio of critical-path delay, from FPGA to ASIC, is roughly three to four with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 14 times and, with hard blocks, this gap generally becomes smaller.

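    The gaps quoted above are averages of per-benchmark FPGA-to-ASIC ratios. As a rough illustration of how such an average is formed (the benchmark names and ratios below are invented, not data from the paper), studies of this kind typically report the geometric mean of the per-circuit ratios:

    ```python
    from statistics import geometric_mean

    # Hypothetical per-benchmark FPGA-to-ASIC core-area ratios (illustration only).
    area_ratio = {"ckt_a": 28.0, "ckt_b": 41.5, "ckt_c": 36.2}

    # The geometric mean is the usual summary statistic for ratio data,
    # since it is less distorted by a few extreme circuits than the arithmetic mean.
    print(f"average area gap: {geometric_mean(area_ratio.values()):.1f}x")
    ```
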
  • Performance Benefits of Monolithically Stacked 3-D FPGA

    Page(s): 216 - 229

    The performance benefits of a monolithically stacked three-dimensional (3-D) field-programmable gate array (FPGA), whereby the programming overhead of an FPGA is stacked on top of a standard CMOS layer containing logic blocks (LBs) and interconnects, are investigated. A Virtex-II-style two-dimensional (2-D) FPGA fabric is used as a baseline architecture to quantify the relative improvements in logic density, delay, and power consumption achieved by such a 3-D FPGA. It is assumed that only the switch transistors and configuration memory cells can be moved to the top layers and that the 3-D FPGA employs the same LB and programmable interconnect architecture as the baseline 2-D FPGA. The static random-access memory cells and switch transistors moved to the top layers are assumed to occupy no more than 0.7 times the area of their CMOS-layer counterparts and to have the same characteristics as the n-channel metal-oxide-semiconductor devices in the CMOS layer. It is shown that a monolithically stacked 3-D FPGA can achieve 3.2 times higher logic density, 1.7 times lower critical-path delay, and 1.7 times lower total dynamic power consumption than the baseline 2-D FPGA fabricated in the same 65-nm technology node.

  • Optimality Study of Logic Synthesis for LUT-Based FPGAs

    Page(s): 230 - 239

    Field-programmable gate-array (FPGA) logic synthesis and technology mapping have been studied extensively over the past 15 years. However, progress within the last few years has slowed considerably (with some notable exceptions). It seems natural to question whether the current logic-synthesis and technology-mapping algorithms for FPGA designs are producing near-optimal solutions. Although there are many empirical studies that compare different FPGA synthesis/mapping algorithms, little is known about how far these algorithms are from optimal (recall that both the logic-optimization and technology-mapping problems are NP-hard if we consider area optimization in addition to delay/depth optimization). In this paper, we present a novel method for constructing arbitrarily large circuits that have known optimal solutions after technology mapping. Using these circuits and their derivatives (called Logic synthesis Examples with Known Optimal (LEKO) and Logic synthesis Examples with Known Upper bounds (LEKU), respectively), we show that although leading FPGA technology-mapping algorithms can produce close to optimal solutions, the results from the entire logic-synthesis flow (logic optimization + mapping) are far from optimal. The LEKU circuits were constructed to show where the logic-synthesis flow can be improved, while the LEKO circuits specifically address the performance of technology mapping. The results of the best industrial and academic FPGA synthesis flows are, on average, around 70 times larger in terms of area and, in some cases, as much as 500 times larger on the LEKU examples. These results clearly indicate that there is much room for further research and improvement in FPGA synthesis.

  • Improvements to Technology Mapping for LUT-Based FPGAs

    Page(s): 240 - 253

    This paper presents several orthogonal improvements to state-of-the-art lookup-table (LUT)-based field-programmable gate array (FPGA) technology mapping. The improvements target the delay and area of technology mapping as well as its runtime and memory requirements. 1) Improved cut enumeration computes all K-feasible cuts, without pruning, for up to seven inputs for the largest Microelectronics Center of North Carolina benchmarks. A new technique for on-the-fly cut dropping reduces, by orders of magnitude, the memory needed to represent cuts for large designs. 2) The notion of cut factorization is introduced, in which one computes a subset of cuts for a node and generates other cuts from that subset as needed. Two cut-factorization schemes are presented, and a new algorithm that uses cut factorization for delay-oriented mapping for FPGAs with large LUTs is proposed. 3) Improved area recovery leads to mappings with area that is, on average, 6% smaller than the previous best work while preserving delay optimality when starting from the same optimized netlists. 4) Lossless synthesis accumulates alternative circuit structures seen during logic optimization. Extending the mapper to use structural choices reduces the delay, on average, by 6% and the area by 12%, compared with the previous work, while increasing the runtime 1.6 times. Performing five iterations of mapping with choices reduces the delay by 10% and the area by 19% while increasing the runtime eight times. These improvements, on top of state-of-the-art methods for LUT mapping, are available in the package ABC.

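    The K-feasible cut enumeration mentioned in item 1) of the abstract follows a standard bottom-up recurrence: a node's cuts are its trivial cut plus every union of one cut per fanin that stays within K leaves. A minimal generic sketch of that recurrence (the netlist and node names are made up; this is not the ABC implementation):

    ```python
    from itertools import product

    def enumerate_cuts(fanins, K):
        """K-feasible cut enumeration over a DAG whose nodes are listed in topological order.
        fanins maps each node to its tuple of fanin nodes (empty for primary inputs)."""
        cuts = {}
        for node, ins in fanins.items():
            node_cuts = {frozenset([node])}                     # the trivial cut {node}
            if ins:
                for combo in product(*(cuts[i] for i in ins)):  # one cut per fanin
                    merged = frozenset().union(*combo)
                    if len(merged) <= K:                        # keep only K-feasible merges
                        node_cuts.add(merged)
            cuts[node] = node_cuts
        return cuts

    # Tiny hypothetical netlist: inputs a, b, c feed g1 and g2, which feed g3.
    fanins = {"a": (), "b": (), "c": (),
              "g1": ("a", "b"), "g2": ("b", "c"), "g3": ("g1", "g2")}
    print(enumerate_cuts(fanins, K=3)["g3"])
    ```

    Like the paper's first improvement, this naive version enumerates all cuts without pruning; the cut-dropping and factorization techniques described above exist precisely to keep such enumeration tractable on large designs.
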
  • FPGA Pipeline Synthesis Design Exploration Using Module Selection and Resource Sharing

    Page(s): 254 - 265

    The primary goal during synthesis of digital signal processing (DSP) circuits is to minimize the hardware area while meeting a minimum throughput constraint. In field-programmable gate array (FPGA) implementations, significant area savings can be achieved by using slower, more area-efficient circuit modules and/or by time-multiplexing faster, larger circuit modules. Unfortunately, manual exploration of this design space is impractical. In this paper, we introduce a design-exploration methodology that identifies the lowest cost FPGA pipelined implementation of an untimed synchronous data-flow graph by combining module selection with resource sharing in the context of pipeline scheduling. These techniques are applied together to minimize the area cost of the FPGA implementation while meeting a user-specified minimum throughput constraint. Two different algorithms are introduced for exploring the large design space. We show that, even for small DSP algorithms, combining these techniques can offer significant area savings relative to applying any of them alone.

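    The tradeoff the abstract describes — a small, slow module that is time-multiplexed versus a large, fully pipelined one — can be made concrete with a toy cost model. The module areas, rates, and throughput constraint below are invented for illustration and are unrelated to the paper's benchmarks or algorithms:

    ```python
    import math

    # Hypothetical module library: name -> (area units, ops completed per cycle per instance).
    modules = {"mult_serial": (120, 0.25),   # small, but needs 4 cycles per operation
               "mult_pipe":   (480, 1.0)}    # large, one operation per cycle

    required_rate = 0.5                       # ops per cycle demanded by the pipeline schedule

    best = None
    for name, (area, rate) in modules.items():
        instances = math.ceil(required_rate / rate)   # share each instance across operations
        cost = instances * area
        if best is None or cost < best[0]:
            best = (cost, name, instances)

    print(best)   # lowest-area (cost, module, instance count) that meets the throughput
    ```

    Real DSP graphs couple many such choices through the pipeline schedule, which is why the paper needs dedicated exploration algorithms rather than the per-operation greedy choice shown here.
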
  • Exploration and Customization of FPGA-Based Soft Processors

    Page(s): 266 - 277

    As embedded systems designers increasingly use field-programmable gate arrays (FPGAs) while pursuing single-chip designs, they are motivated to have their designs also include soft processors, processors built using FPGA programmable logic. In this paper, we provide: 1) an exploration of the microarchitectural tradeoffs for soft processors and 2) a set of customization techniques that capitalizes on these tradeoffs to improve the efficiency of soft processors for specific applications. Using our infrastructure for automatically generating soft-processor implementations (which span a large area/speed design space while remaining competitive with Altera's Nios II variations), we quantify tradeoffs within soft-processor microarchitecture and explore the impact of tuning the microarchitecture to the application. In addition, we apply a technique of subsetting the instruction set to use only the portion utilized by the application. Through these two techniques, we can improve the performance-per-area of a soft processor for a specific application by an average of 25%.

  • Power-Efficient RAM Mapping Algorithms for FPGA Embedded Memory Blocks

    Page(s): 278 - 290

    Contemporary field-programmable gate array (FPGA) design requires a spectrum of available physical resources. As FPGA logic capacity has grown, locally accessed FPGA embedded memory blocks have increased in importance. When targeting FPGAs, application designers often specify high-level memory functions, which exhibit a range of sizes and control structures. These logical memories must be mapped to FPGA embedded memory resources such that physical design objectives are met. In this paper, a set of power-efficient logical-to-physical RAM mapping algorithms is described, which converts user-defined memory specifications to on-chip FPGA memory block resources. These algorithms minimize RAM dynamic power by evaluating a range of possible embedded memory block mappings and selecting the most power-efficient choice. Our automated approach has been validated with both simulation of power dissipation and measurements of power dissipation on FPGA hardware. A comparison of measured power reductions to values determined via simulation confirms the accuracy of our simulation approach. Our power-aware RAM mapping algorithms have been integrated into a commercial FPGA compiler and tested with 34 large FPGA benchmarks. Through experimentation, we show that, on average, embedded memory dynamic power can be reduced by 26% and overall core dynamic power can be reduced by 6% with a minimal loss (1%) in design performance. In addition, it is shown that the availability of multiple embedded memory block sizes in an FPGA reduces embedded memory dynamic power by an additional 9.6% by giving more choices to the computer-aided design algorithms.

  • Automatic Creation of Domain-Specific Reconfigurable CPLDs for SoC

    Page(s): 291 - 295

    This paper presents tools that automate the creation of domain-specific complex programmable logic devices (CPLDs), targeted for systems-on-a-chip. By tailoring full-crossbar-based CPLDs to the domains that they support, we provide results that beat fixed reconfigurable architectures by 5.5 to 11.8 times on average in terms of area-delay product. We also create sparse-crossbar-based CPLD architectures, using a novel switch-smoothing algorithm that makes the crossbars amenable to layout. This algorithm reduced the wire jog pitch of our largest layout from 48 to just 3, allowing for a compact very-large-scale-integration layout. These sparse-crossbar-based CPLDs require just 0.37 times the area and 0.30 times the delay of our full-crossbar-based CPLDs. We also address the question of how best to add resources to a CPLD in order to support future, unknown circuits, concluding that the best strategy is to add 5% to the crossbar switch density and to provide additional programmable logic arrays of the same size found in the base architecture.

  • A 90-nm Low-Power FPGA for Battery-Powered Applications

    Page(s): 296 - 300

    Programmable logic devices such as field-programmable gate arrays (FPGAs) are useful for a wide range of applications. However, FPGAs are not commonly used in battery-powered applications because they consume more power than application-specific integrated circuits and lack power management features. In this paper, we describe the design and implementation of Pika, a low-power FPGA core targeting battery-powered applications. Our design is based on a commercial low-cost FPGA and achieves substantial power savings through a series of power optimizations. The resulting architecture is compatible with existing commercial design tools. The implementation is done in a 90-nm triple-oxide CMOS process. Compared to the baseline design, Pika consumes 46% less active power and 99% less standby power. Furthermore, it retains circuit and configuration state during standby mode and wakes up from standby mode in approximately 100 ns.

  • Enhanced Design Flow and Optimizations for Multiproject Wafers

    Page(s): 301 - 311

    The aggressive scaling of very large-scale integration feature size and the pervasive use of advanced reticle enhancement technologies lead to dramatic increases in mask costs, pushing prototype and low-volume production designs to the limit of economic feasibility. Multiproject wafers (MPWs), or "shuttle" runs, provide an attractive solution for such designs by providing a mechanism to share the cost of mask tooling among up to tens of designs. However, MPW reticle floorplanning and wafer dicing introduce complexities that are not encountered in typical single-project wafers. Recent works on wafer dicing adopt one or more of the following assumptions to reduce problem complexity: 1) equal production volume requirement for all designs; 2) same dicing plan used for all wafers or for all rows/columns of reticle images on a wafer; 3) unrealistic wafer models such as a rectangular array of projections; and 4) fixed wafer shot-map. Although using one or more of the aforementioned assumptions makes the problem solvable, the performance of the solutions is degraded. In this paper, a comprehensive MPW flow aimed at minimizing the number of wafers needed to fulfill given die production volumes is proposed. The proposed flow includes two main steps: 1) multiproject reticle floorplanning and 2) wafer shot-map and dicing plan definition. For each of these steps, improved algorithms are proposed as follows. The proposed reticle floorplanner uses hierarchical quadrisection combined with simulated annealing to generate "diceable" floorplans, observing given maximum reticle sizes. The proposed dicing planner allows multiple side-to-side dicing plans for different wafers and different reticle projection rows/columns within a wafer and further improves the dicing yield by partitioning each wafer into a small number of parts before individual die extraction. A wafer shot-map definition heuristic is also proposed in order to fully utilize round wafer real estate by extracting the maximum number of functional dies from both fully and partially printed reticle images. Experiments on industry test cases show that the proposed methods significantly outperform not only previous methods in the literature but also reticle floorplans manually designed by experienced engineers.

  • Reducing Data TLB Power via Compiler-Directed Address Generation

    Page(s): 312 - 324

    Address translation using the translation lookaside buffer (TLB) consumes as much as 16% of the chip power on some processors because of its high associativity and access frequency. While prior work has looked into optimizing this structure at the circuit and architectural levels, this paper takes a different approach to optimizing its power by reducing the number of data TLB (dTLB) lookups for data references. The main idea is to keep translations in a set of translation registers (TRs) and intelligently use them in software to directly generate the physical addresses without going through the dTLB. The software has to work within the confines of the TRs provided by the hardware and has to maximize the reuse of such translations to be effective. The authors propose strategies and code transformations for achieving this in array-based and pointer-based codes, with the goal of optimizing data accesses. Results with a suite of Spec95 array-based and pointer-based codes show dTLB energy savings of up to 73% and 88%, respectively, compared to directly using the dTLB for all references. Despite the small increase in the number of instructions executed with these mechanisms, the approach can, in fact, provide performance benefits under certain cache-addressing strategies.

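    The mechanism relies on spatial reuse: once a page's translation has been loaded into a translation register, subsequent accesses to that page need no dTLB lookup. A behavioural sketch of that reuse (the page size, register count, and access stream are assumptions for illustration, not the authors' compiler algorithm):

    ```python
    PAGE_SIZE = 4096   # assumed page size
    NUM_TRS   = 2      # assumed number of software-visible translation registers

    def count_lookups(addresses):
        """Count dTLB lookups when recently used page translations are kept in TRs."""
        trs = []                       # pages whose translation currently sits in a TR
        dtlb_lookups = tr_hits = 0
        for addr in addresses:
            page = addr // PAGE_SIZE
            if page in trs:
                tr_hits += 1           # physical address formed directly from the TR
            else:
                dtlb_lookups += 1      # fall back to the dTLB, then install the translation
                trs = ([page] + trs)[:NUM_TRS]
        return dtlb_lookups, tr_hits

    # Sequential sweep over a 32-KB array: almost every access reuses the current page.
    print(count_lookups(range(0, 32 * 1024, 4)))   # -> (8, 8184)
    ```
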
  • Accelerated Chip-Level Thermal Analysis Using Multilayer Green's Function

    Page(s): 325 - 344

    Continual scaling of transistors and interconnects has exacerbated the power and thermal management problems in the design of ultralarge-scale integrated (ULSI) circuits. This paper presents an efficient thermal-analysis method of O(N log N) complexity, where N is the number of blocks that discretize the heat-source or temperature-observation regions. The method, named LOTAGre, is formulated using the Green's function for heat conduction through multiple layers of material, which account for the structure of ULSI chips and the accompanying heat sinks and mounting accessories. In addition to analyzing the thermal effects of the distributed heat sources, LOTAGre also considers ambient-temperature effects, which are generally excluded in conventional Green's function-based thermal-analysis tools in order to avoid the concomitant analytical complexity. By employing the well-known eigen-expansion technique and classical transmission-line theory, fully analytical and explicit formulas are derived for the multilayer Green's function, including its s-domain version and the homogeneous and inhomogeneous solutions to the heat-conduction equation. The discrete cosine transform and its inverse are then employed to accelerate the numerical computation of the homogeneous and inhomogeneous solutions. Extensive experimental results demonstrate that LOTAGre can be as accurate as FLUENT, a sophisticated computational fluid dynamics tool, while speeding up the simulation run time by two to three orders of magnitude in comparison to FLUENT as well as conventional Green's function-based thermal-analysis methods. This paper also discusses the limitations of using the traditional single-layer thermal model to approximate a multilayer chip structure in thermal analysis.

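    For context, the Green's-function formulation that such tools build on starts from the steady-state heat-conduction equation and expresses the temperature field as a superposition over the heat sources; the generic textbook form (not LOTAGre's multilayer-specific formulas) is:

    ```latex
    \nabla \cdot \bigl( k(\mathbf{r})\, \nabla T(\mathbf{r}) \bigr) + p(\mathbf{r}) = 0,
    \qquad
    T(\mathbf{r}) = T_{\mathrm{amb}} + \int_{\Omega} G(\mathbf{r},\mathbf{r}')\, p(\mathbf{r}')\, \mathrm{d}\mathbf{r}',
    ```

    where k is the (layer-dependent) thermal conductivity, p the power density of the heat sources, and G the Green's function satisfying the chip and heat-sink boundary conditions.
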
  • An Efficient Tile-Based ECO Router Using Routing Graph Reduction and Enhanced Global Routing Flow

    Page(s): 345 - 358

    Engineering change order (ECO) routing is frequently requested in the later design stages for delay and noise optimization. ECO routing is complicated by the large number of existing obstacles and by requests for various design rules. The tile-based routing model results in fewer routing-graph nodes than grid-based and connection-based routers; however, the number of nodes in the tile-based routing graph has grown to over a billion for system-on-chip designs, while no notable progress has been made in the routing speed of the tile-based router since it was proposed. This paper first proposes a novel routing graph reduction (RGR) method for improving tile-propagation speed and then describes a new ECO routing design flow with RGR and an enhanced global routing flow (EGRF). RGR removes redundant tiles and aligns and merges neighboring tiles to diminish tile fragmentation, so that the tile-based ECO router can run twice as fast while still producing an optimal path. Compared with a commercial placement and routing tool, the proposed tile-based router with RGR obtains better routing performance and routing quality on three ECO routings. EGRF incorporates ECO global routing that considers a via-resource congestion metric, together with extended routing and global cell (GCell) restructuring, to prevent routing failure in routable designs. The ECO router with the proposed design flow can perform up to 20 times faster than the original tile-based router at the cost of only a slight decline in routing quality. Experimental results also demonstrate that a more congested layout tends to have a higher graph-reduction rate. Further refinements, based on dynamic weighting of via and wire resources according to the vacancy density of the routed design, and the application of RGR to multiple-net routing are also discussed.

  • Fast Identification of Custom Instructions for Extensible Processors

    Page(s): 359 - 368

    This paper proposes a fast algorithm to enumerate all convex subgraphs of the dataflow graph (DFG) of a basic block that satisfy given I/O constraints. The algorithm can be tuned to determine all such subgraphs or only the connected ones, allowing a choice between better instruction-set extensions (ISEs) and faster design-space exploration. The algorithm uses a grading method to identify the next node for inclusion in a subgraph. If the selected node is included, other related nodes are included as well, ensuring that the resultant subgraph is always convex while reducing the problem size by a block of nodes. If the selected node is not included, the DFG is split into smaller DFGs, which also reduces the problem size. On this basis, the algorithm employs a simple but efficient method to prune invalid subgraphs that violate the I/O constraints. Results show that, for relatively small DFGs with a small exploration space, the new algorithm has runtimes similar to those of existing algorithms. However, for larger DFGs with a much larger exploration space and with multiple input and output constraints, the runtime improvement can be orders of magnitude over existing algorithms. The new algorithm can be used to quickly identify custom instructions for ISE of embedded processors.

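    The two checks the enumeration must respect — convexity and the I/O port limits — follow directly from their definitions. A generic sketch written from those definitions (the tiny DFG below is made up; this is not the paper's grading or pruning method):

    ```python
    def io_counts(succ, pred, sub):
        """Number of input and output ports of a candidate subgraph 'sub' of a DFG."""
        inputs  = {p for n in sub for p in pred[n] if p not in sub}
        outputs = {n for n in sub if any(s not in sub for s in succ[n])}
        return len(inputs), len(outputs)

    def is_convex(succ, sub):
        """True if no path leaves 'sub' and later re-enters it (the convexity condition)."""
        stack = [s for n in sub for s in succ[n] if s not in sub]
        seen = set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            if n in sub:              # a path left the subgraph and came back in
                return False
            stack.extend(succ[n])
        return True

    # Hypothetical DFG: a feeds b and c, both feed d, d feeds e.
    succ = {"a": {"b", "c"}, "b": {"d"}, "c": {"d"}, "d": {"e"}, "e": set()}
    pred = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}, "e": {"d"}}
    print(is_convex(succ, {"a", "b", "d"}))        # False: a -> c -> d re-enters via c
    print(io_counts(succ, pred, {"b", "c", "d"}))  # (1, 1): input a, output d
    ```
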
  • Optimizing Intratask Voltage Scheduling Using Profile and Data-Flow Information

    Page(s): 369 - 385

    Intratask dynamic-voltage scheduling (IntraDVS), which adjusts the supply voltage within an individual task boundary, has been introduced as an effective technique for developing low-power single-task applications or low-power multitask applications in which a small number of tasks dominate the total execution time. The original IntraDVS technique used the remaining worst-case execution cycles and control-flow information to identify the voltage-scaling points (VSPs) of a program. In this paper, two kinds of improvement techniques that enhance the energy performance of IntraDVS are proposed. One uses profile information to optimize the voltage schedule for the remaining average-case execution path (RAEP-IntraDVS). The other uses data-flow information to optimize the locations of VSPs [look-ahead IntraDVS (LaIntraDVS)]. Experimental results show that RAEP-IntraDVS can reduce the energy consumption by 20% on average and that LaIntraDVS can reduce the energy consumption by 40%-45% compared with the original IntraDVS.

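    At each voltage-scaling point, IntraDVS-style techniques recompute the clock frequency (and hence the supply voltage) that just meets the deadline given the execution cycles still to come. A generic one-formula sketch with made-up numbers (not the paper's RAEP or look-ahead analysis):

    ```python
    def scaled_frequency(remaining_cycles, time_to_deadline, f_max):
        """Lowest frequency that still finishes the remaining cycles before the deadline."""
        return min(f_max, remaining_cycles / time_to_deadline)

    # Hypothetical values: 2e6 cycles left, 10 ms until the deadline, 400-MHz ceiling.
    f = scaled_frequency(2e6, 10e-3, 400e6)
    print(f / 1e6, "MHz")   # 200.0 MHz: half speed, which permits a lower supply voltage
    ```
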
  • Relationship Between Entropy and Test Data Compression

    Page(s): 386 - 395

    The entropy of a set of data is a measure of the amount of information contained in it. Entropy calculations for fully specified data have been used to obtain a theoretical bound on how much that data can be compressed. This paper extends the concept of entropy to incompletely specified test data (i.e., data that have unspecified or don't-care bits) and explores the use of entropy to show how bounds on the maximum amount of compression for a particular symbol partitioning can be calculated. The impact on entropy of different ways of partitioning the test data into symbols is studied. For a class of partitions that use fixed-length symbols, a greedy algorithm for specifying the don't cares to reduce entropy is described. It is shown to be equivalent to the minimum-entropy set cover problem and thus is within an additive constant error of the minimum entropy possible among all ways of specifying the don't cares. A polynomial-time algorithm that can be used to approximate the calculation of entropy is described. Different test-data compression techniques proposed in the literature are analyzed with respect to the entropy bounds. The limitations and advantages of certain types of test-data encoding strategies are studied using entropy theory.

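    For fixed-length symbols, the entropy bound the paper works with can be computed in a few lines. In this sketch the don't-care bits are simply filled with 0s before the calculation; the paper's greedy procedure instead chooses the filling so as to minimize the entropy (the test vector and symbol length below are arbitrary):

    ```python
    from collections import Counter
    from math import log2

    def entropy_bound(bits, sym_len):
        """Lower bound, in bits, on losslessly compressing 'bits' when it is
        partitioned into fixed-length symbols of sym_len bits each."""
        symbols = [bits[i:i + sym_len] for i in range(0, len(bits), sym_len)]
        counts = Counter(symbols)
        n = len(symbols)
        entropy_per_symbol = -sum((c / n) * log2(c / n) for c in counts.values())
        return n * entropy_per_symbol

    # Don't-care bits ('X') naively filled with '0'; a smarter filling lowers the entropy.
    test_cube = "0X010X010X011101".replace("X", "0")
    print(entropy_bound(test_cube, sym_len=4))   # about 3.2 bits for these 16 bits
    ```
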
  • LFSR-Reseeding Scheme Achieving Low-Power Dissipation During Test

    Page(s): 396 - 401

    This paper presents a new low-power test-data-compression scheme based on linear feedback shift register (LFSR) reseeding. A drawback of compression schemes based on LFSR reseeding is that the unspecified bits are filled with random values, which results in a large number of transitions during scan-in and thereby causes high power dissipation. A new encoding scheme that can be used in conjunction with any LFSR-reseeding scheme to significantly reduce test power and even further reduce test storage is presented. The proposed encoding scheme acts as a second stage of compression after LFSR reseeding. It accomplishes two goals. First, it reduces the number of transitions in the scan chains (by filling the unspecified bits in a different manner). Second, it reduces the number of specified bits that need to be generated via LFSR reseeding. Experimental results indicate that the proposed method significantly reduces test power and, in most cases, provides greater test-data compression than LFSR reseeding alone.

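    For readers unfamiliar with reseeding, the decompressor side is just an LFSR that expands a short seed into a long scan pattern; the seed is chosen (by solving linear equations) so that the expanded stream matches the specified bits of the test cube. A minimal sketch of the expansion step only, with an arbitrary feedback polynomial and seed (the paper's second-stage encoding is not shown):

    ```python
    def lfsr_expand(seed, taps, length):
        """Expand an LFSR seed (list of 0/1) into 'length' output bits.
        'taps' lists the state positions XORed to form the feedback bit."""
        state = list(seed)
        out = []
        for _ in range(length):
            out.append(state[-1])               # bit shifted into the scan chain
            feedback = 0
            for t in taps:
                feedback ^= state[t]
            state = [feedback] + state[:-1]     # shift the register
        return out

    # A 4-bit seed expanded into 12 scan bits; in reseeding, an r-bit seed can encode
    # a test cube with at most r specified (care) bits.
    print(lfsr_expand([1, 0, 0, 1], taps=[0, 3], length=12))
    ```
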
  • In this issue

    Page(s): 402
  • 2007 IEEE International Symposium on Circuits and Systems (ISCAS 2007)

    Page(s): 403
  • IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems Information for authors

    Page(s): 404
  • IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems society information

    Page(s): C3

Aims & Scope

The purpose of this Transactions is to publish papers of interest to individuals in the areas of computer-aided design of integrated circuits and systems.


Meet Our Editors

Editor-in-Chief

VIJAYKRISHNAN NARAYANAN
Pennsylvania State University
Dept. of Computer Science and Engineering
354D IST Building
University Park, PA 16802, USA
vijay@cse.psu.edu