
1993 IEEE/ACM International Conference on Computer-Aided Design (ICCAD-93), Digest of Technical Papers

Date: 7-11 Nov. 1993


Displaying Results 1 - 25 of 131
  • IEEE/ACM International Conference on Computer-Aided Design Digest of Technical Papers [Front Matter and Table of Contents]

    Publication Year: 1993 , Page(s): i - xxviii
  • New methods for parallel pattern fast fault simulation for synchronous sequential circuits

    Publication Year: 1993 , Page(s): 2 - 5
    Cited by:  Papers (2)

    The paper describes COMBINED, a very fast fault simulator for synchronous sequential circuits. COMBINED couples a parallel-pattern simulator with a nonparallel simulator, both based on single fault propagation. Circuit partitioning and the removal of all feedback loops, implemented in the parallel part of COMBINED, reduce the number of events. In addition, the nonparallel part of COMBINED has been extended either to detect more faults by restricted symbolic fault simulation or to reduce the number of events using the PStar algorithm, both of which are also presented. COMBINED runs substantially faster on the ISCAS-89 benchmark circuits than a state-of-the-art single-fault-propagation simulator.

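The parallel-pattern technique that COMBINED's parallel part relies on can be sketched in a few lines of Python: pack one test pattern per bit of a machine word and evaluate the circuit with bitwise operations, so a single pass simulates 32 patterns at once. The toy circuit and the stuck-at fault below are illustrative, not taken from the paper.

```python
import random

# Bit-parallel ("parallel pattern") fault simulation sketch.
# Each integer holds one bit per test pattern, so W patterns are
# evaluated in a single pass of bitwise operations.
W = 32
MASK = (1 << W) - 1

def good_circuit(a, b, c):
    # toy circuit: y = (a AND b) OR (NOT c)
    return ((a & b) | (~c & MASK)) & MASK

def faulty_circuit(a, b, c):
    # same circuit with the AND gate's output stuck-at-0
    return (0 | (~c & MASK)) & MASK

random.seed(0)
a, b, c = (random.getrandbits(W) for _ in range(3))
diff = good_circuit(a, b, c) ^ faulty_circuit(a, b, c)
detecting = [i for i in range(W) if (diff >> i) & 1]
print(f"fault detected by {len(detecting)} of {W} patterns")
```

A real simulator iterates this over the whole fault list, dropping detected faults as it goes, which is where single fault propagation and event reduction pay off.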
  • Fault behavior dictionary for simulation of device-level transients

    Publication Year: 1993 , Page(s): 6 - 9
    Cited by:  Papers (9)

    The paper presents a methodology for the simulation of a massive number of device-level transient faults. Fault-injection locations and the gates around those locations are extracted and evaluated with SPICE. The extracted sub-circuits are exercised exhaustively while fault injections are performed. The faulty behavior at the outputs of each sub-circuit is recorded in a dictionary, along with the associated input vector, fault-injection time, and location. A concurrent transient simulator is developed that injects the recorded logical errors, allowing simultaneous evaluation of a massive number of fault injections in a single simulation pass. The methodology is illustrated by a case study of the MC68000 microprocessor.

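The dictionary mechanism described above can be sketched as a table keyed by sub-circuit, input vector, injection time, and location; the entries below are invented illustrations, not SPICE characterization data.

```python
# Transient-fault behavior dictionary sketch.  During SPICE-level
# characterization each sub-circuit is exercised exhaustively and the
# observable output errors are recorded; a logic-level concurrent
# simulator can then replay errors by lookup instead of re-simulating.
fault_dictionary = {}

def record(subckt, vector, t_inject, location, output_error):
    fault_dictionary[(subckt, vector, t_inject, location)] = output_error

def replay(subckt, vector, t_inject, location):
    # returns the recorded logical error, or None if no visible error
    # was recorded for this combination
    return fault_dictionary.get((subckt, vector, t_inject, location))

# illustrative entries (hypothetical values)
record("nand2", (1, 1), 2.0e-9, "n1", {"out": 1})  # output flips to 1
record("nand2", (0, 1), 2.0e-9, "n1", None)        # transient masked

print(replay("nand2", (1, 1), 2.0e-9, "n1"))
```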
  • New methods of improving parallel fault simulation in synchronous sequential circuits

    Publication Year: 1993 , Page(s): 10 - 17
    Cited by:  Papers (24)

    A highly successful parallel fault simulator for synchronous sequential circuits, called PROOFS, has been reported, and its performance has been substantially improved in HOPE, which systematically screens out faults with short propagation zones. We propose several new techniques that further reduce the fault simulation time of HOPE: functional fault injection, static fault ordering by fanout-free regions, and dynamic fault ordering of potentially detected faults. The three methods are incorporated into HOPE and called HOPE1.1. HOPE1.1 shows a significant performance improvement over HOPE for all the benchmark circuits experimented with, and is especially effective for large circuits. For s35932, the largest circuit experimented with, the number of events is reduced by 24% and the CPU time by 53% compared to HOPE.

  • Exploiting hardware sharing in high-level synthesis for partial scan optimization

    Publication Year: 1993 , Page(s): 20 - 25
    Cited by:  Papers (26)  |  Patents (2)

    A new approach to high-level synthesis, which simultaneously addresses testability and resource utilization, is presented. We explore the relationship between hardware sharing, loops in the synthesized data path, and partial scan overhead. Since loops make a circuit hard to test, a comprehensive analysis of the sources of loops in data paths created during high-level synthesis is provided. The paper introduces the problem of breaking CDFG loops with a minimal number of scan registers. Subsequent scheduling and assignment avoid formation of loops in the data path by sharing the scan registers, while ensuring high resource utilization. Experimental results demonstrate the effectiveness of the technique in synthesizing easily testable data paths at significantly less partial scan cost than a gate-level partial scan approach.

  • High level synthesis for reconfigurable datapath structures

    Publication Year: 1993 , Page(s): 26 - 29
    Cited by:  Papers (21)

    High-level synthesis techniques for the synthesis of restructurable datapaths are introduced. The techniques can be used in applications such as design for fault tolerance against permanent faults, design for yield improvement, and design of application-specific programmable processors. The paper focuses on design techniques for built-in self-repair (BISR), which addresses the first two of these applications. The new BISR methodology consists of two approaches that exploit the design-space exploration abilities of high-level synthesis: the first uses resource allocation, assignment, and scheduling, and the second uses transformations. The effectiveness of the approaches is verified on a set of benchmark examples.

  • An improved method for RTL synthesis with testability tradeoffs

    Publication Year: 1993 , Page(s): 30 - 35
    Cited by:  Papers (27)  |  Patents (3)

    A method for high-level synthesis with testability is presented, with the objective of generating self-testable RTL datapath structures. We base our approach on a new, improved testability model that generates various testable design styles while reducing the circuit's sequential depth from controllable to observable registers. We follow the allocation method with an automatic test-point selection algorithm and with an interactive tradeoff scheme that trades design area and delay against test quality. The method has been implemented, and design comparisons are reported.

  • Interleaving based variable ordering methods for ordered binary decision diagrams

    Publication Year: 1993 , Page(s): 38 - 41
    Cited by:  Papers (38)  |  Patents (2)

    Ordered binary decision diagrams (OBDDs) are efficient representations of Boolean functions and have been widely used in various computer-aided design tools. Since the size of an OBDD depends on the variable ordering, it is important to find a good variable order for the efficient manipulation of OBDDs. In particular, it is important to find a single good variable order for multiple functions, since multiple functions are handled at the same time in most computer-aided design tools. The paper describes new variable ordering algorithms for multiple-output circuits. The new algorithms use variable interleaving, whereas conventional algorithms use variable appending. For some benchmark circuits, OBDDs have been successfully generated using the new algorithms where conventional algorithms fail. The new variable ordering algorithms are therefore effective and allow OBDD-based CAD tools to be applied to wider classes of circuits.

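The difference between appending and interleaving per-output variable orders can be sketched as follows. The merge heuristics are simplified stand-ins for the paper's algorithms, and the sample orders are invented.

```python
# Merging per-output variable orders: appending vs. interleaving.
def append_orders(orders):
    merged = []
    for order in orders:
        for v in order:
            if v not in merged:
                merged.append(v)
    return merged

def interleave_orders(orders):
    # round-robin over the individual orders, skipping seen variables
    merged, idx = [], [0] * len(orders)
    while any(i < len(o) for i, o in zip(idx, orders)):
        for k, order in enumerate(orders):
            while idx[k] < len(order) and order[idx[k]] in merged:
                idx[k] += 1
            if idx[k] < len(order):
                merged.append(order[idx[k]])
                idx[k] += 1
    return merged

orders = [["a", "b", "x"], ["a", "c", "y"]]
print(append_orders(orders))      # ['a', 'b', 'x', 'c', 'y']
print(interleave_orders(orders))  # ['a', 'c', 'b', 'y', 'x']
```

Interleaving lets every output contribute its important variables early in the merged order, instead of the later outputs' variables all being pushed to the end, as appending does.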
  • Dynamic variable ordering for ordered binary decision diagrams

    Publication Year: 1993 , Page(s): 42 - 47
    Cited by:  Papers (246)  |  Patents (33)

    The ordered binary decision diagram (OBDD) has proven useful in many applications as an efficient data structure for representing and manipulating Boolean functions. A serious drawback of OBDDs is the need for application-specific heuristic algorithms to order the variables before processing. Further, for many problem instances in logic synthesis, the heuristic ordering algorithms that have been proposed are insufficient to allow OBDD operations to complete within a limited amount of memory. The paper proposes a solution to these problems based on having the OBDD package itself determine and maintain the variable order, by periodically applying a minimization algorithm that reorders the variables of the OBDD to reduce its size. A new OBDD minimization algorithm, called the sifting algorithm, is proposed and appears especially effective in reducing OBDD size. Experiments with dynamic variable ordering on the problem of forming the OBDDs for the primary outputs of a combinational circuit show that many computations complete under dynamic variable ordering that fail otherwise.

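A minimal sketch of the sifting loop, assuming an abstract size oracle in place of a real BDD package's node count (real implementations move a variable through the order by adjacent swaps, which is what makes sifting cheap; the cost function here is a toy):

```python
# Sifting sketch: move each variable through every position in the
# order and keep the position that gives the smallest "OBDD size".
def sift(order, size):
    order = list(order)
    for var in list(order):
        best_order, best_size = list(order), size(order)
        order.remove(var)
        for pos in range(len(order) + 1):
            trial = order[:pos] + [var] + order[pos:]
            s = size(trial)
            if s < best_size:
                best_order, best_size = trial, s
        order = best_order
    return order

# toy size oracle: keeping "paired" variables adjacent keeps it small
def toy_size(order):
    return sum(abs(order.index(v) - order.index(p))
               for v, p in [("a0", "b0"), ("a1", "b1")])

print(sift(["a0", "a1", "b0", "b1"], toy_size))  # ['a0', 'b0', 'a1', 'b1']
```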
  • Breadth-first manipulation of very large binary-decision diagrams

    Publication Year: 1993 , Page(s): 48 - 55
    Cited by:  Papers (20)  |  Patents (5)

    The paper presents an efficient method for manipulating very large shared binary decision diagrams (SBDDs) that are too large to be stored in main memory. In contrast to the conventional depth-first algorithm, which causes random memory accesses, the proposed method is designed to access memory sequentially. The main idea is level-by-level manipulation of shared quasi-reduced BDDs (SQBDDs) with a breadth-first algorithm. A garbage-collection algorithm based on sliding compaction is also introduced to reduce page faults in subsequent manipulation. We implemented and evaluated the proposed method on a Sun SPARCstation 10 with 64 Mbytes of main memory and a 1-Gbyte hard disk drive. It took only 5.6 h to obtain an SQBDD of more than 12 million nodes, representing all the primary outputs of a 15-bit multiplier, from its circuit description; with the conventional SBDD manipulator, the same computation is estimated to take about 1900 h.

  • An efficient methodology for extraction and simulation of transmission lines for application specific electronic modules

    Publication Year: 1993 , Page(s): 58 - 65
    Cited by:  Papers (7)

    Physical interconnect introduces new challenges for parameter extraction and delay calculation for application specific electronic module (ASEM) design automation. Efficiency dictates the precharacterization of extracted electrical parameters in the same manner as application specific integrated circuits (ASICs). However, ASEM interconnect is dominated by frequency dependent LC propagation which makes precharacterization difficult for all possible configurations. Moreover, simulating the transient behavior of the ASEM interconnect for noise and delay analysis requires the combined use of a variety of models and techniques for efficiently handling lossy, low-loss, frequency dependent, and coupled transmission lines together with lumped parasitic elements. We propose to use conformal mapping to generate "abstracted" models for the electrical parameters of various RLC interconnect cross-sections, including the frequency dependence caused by ground plane proximity and skin effects. Along with precharacterized lumped parasitic elements and nonlinear driver and load models, these models are simulated using a generalized time-domain macromodeling approach that can combine different types of transmission line analysis in one simulation environment. An automatic selection mechanism is derived for determination of the best time-domain macromodel for a particular distributed segment.

  • Simulating 3-D retarded interconnect models using complex frequency hopping (CFH)

    Publication Year: 1993 , Page(s): 66 - 72
    Cited by:  Papers (18)

    With ever-increasing clock frequencies, accurate 3-D interconnect analysis in chips and packages is becoming a necessity. The retarded partial element equivalent circuit (rPEEC) method has been successfully applied to 3-D analysis, but for large problems it becomes expensive in CPU time and memory usage, and in the time domain it sometimes has numerical problems. Complex frequency hopping (CFH), a new multi-point moment-matching technique, is extended to handle retarded networks. CFH significantly speeds up rPEEC frequency-domain analysis over large frequency bands. It also allows the efficient calculation of resonances and, most importantly, enables time-domain modeling of rPEEC networks that have so far resisted analysis by any other method.

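The moment-matching machinery that CFH generalizes to multiple expansion points can be illustrated at a single point: take the moments of a known one-pole transfer function and recover its pole and residue. This is a textbook AWE-style example, not the paper's retarded formulation.

```python
# Single-point moment matching on H(s) = 1/(1 + s*tau).
# The moments about s = 0 are m_k = (-tau)**k; matching a one-pole
# model r/(s - p) to the first two moments recovers the pole exactly.
tau = 2.0
m0, m1 = 1.0, -tau               # moments of 1/(1 + s*tau)

# r/(s - p) expands as -r/p - (r/p**2)*s - ...
# matching m0 = -r/p and m1 = -r/p**2 gives:
p = m0 / m1                      # pole: -1/tau
r = -m0 * p                      # residue
print(f"recovered pole {p}, residue {r}")
```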
  • Bounds on net lengths for high-speed PCBs

    Publication Year: 1993 , Page(s): 73 - 76
    Cited by:  Papers (5)

    In this paper, a methodology is introduced for computing PCB and MCM net-length bounds consistent with timing and noise constraints. The lengths of lines and their segments are first derived from equations based on semiempirical formulas, and these values are used as initial values for AWE-based simulation. For the simulated line length, the delay at the receivers of multi-pin nets is represented by a multivariable Taylor series with respect to the segment lengths, whose partial derivatives are computed numerically during the AWE simulation step. The resulting linear delay functions are used in a linear programming formulation to find the maximal lengths consistent with the timing bounds. This work helps reduce the number of iterations in PCB and MCM design and thus shortens the design cycle and improves the quality of solutions.

  • A net-oriented method for realistic fault analysis

    Publication Year: 1993 , Page(s): 78 - 83
    Cited by:  Papers (15)  |  Patents (2)

    In this paper, a net-oriented method to analyze realistic faults is presented. The key point of the method is to analyze the faults caused by a spot defect net by net: first the possible faults related to a net are extracted, and then all faults in a layout are extracted by enumerating all nets on the layout. An approach to calculate the critical area with respect to each fault is also described. A formula is proposed to compute the fault weight theoretically, instead of weighting a fault by counting the number of its appearances. The proposed method has been implemented on an HP 750 workstation. To demonstrate its practical performance, all layouts in the ISCAS-85 benchmarks, as well as some other layouts ranging from 450 to 28,000 transistors, have been analyzed. The results show that our method is much faster than other approaches published in the literature.

  • On-chip test generation for combinational circuits by LFSR modification

    Publication Year: 1993 , Page(s): 84 - 87
    Cited by:  Papers (8)

    A new on-chip test generation technique based on the built-in self-test (BIST) and deterministic test generation concepts is proposed. Given a test set, the test patterns can be regenerated on the chip and applied to the circuit under test without any external test equipment. A systematic procedure is given for modifying a basic linear feedback shift register (LFSR) to realize the on-chip test generation hardware. Since the modification of the LFSR introduces only two gate delays, at-speed testing of circuits is feasible. Experiments are conducted, and test-application time and hardware overhead are compared with a known test technique under the same fault-coverage conditions. It is shown that both test cost and test-application time can be decreased significantly by the proposed technique.

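The base structure that such LFSR-modification schemes start from is easy to sketch in software. The taps below realize the primitive polynomial x^4 + x^3 + 1, giving a maximal-length sequence of 15 states; the paper's modification hardware is not modeled here.

```python
# Fibonacci-style LFSR: feedback is the XOR of the tapped bits,
# shifted into the low end of the register each clock.
def lfsr_patterns(seed, taps, width, count):
    state, patterns = seed, []
    for _ in range(count):
        patterns.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return patterns

# taps (3, 2) on a 4-bit register correspond to x^4 + x^3 + 1
pats = lfsr_patterns(seed=0b0001, taps=(3, 2), width=4, count=15)
print(pats)  # 15 distinct nonzero states before the sequence repeats
```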
  • Fault-based automatic test generator for linear analog circuits

    Publication Year: 1993 , Page(s): 88 - 91
    Cited by:  Papers (54)  |  Patents (3)

    Recognizing that specification testing of analog circuits involves high cost and lacks any quantitative measure of the testing process, we adopt a fault-based technique. With the help of hierarchical fault models for parametric and catastrophic faults, and a very efficient fault simulator, our simulation-assisted technique automatically determines the test frequencies needed to detect AC faults in linear analog circuits. By a suitable choice of parameters in the test generator, we can either determine the best test for every fault, maximizing the error between the good and faulty responses at the cost of a large test set, or generate the smallest test set covering all the faults. Finally, fault-coverage values provide a quantitative evaluation of the final test set.

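The "best test" criterion, maximizing the gap between the good and faulty responses, can be sketched on a first-order RC low-pass filter. The component values, the +50% parametric fault, and the frequency grid are all illustrative.

```python
import math

# Good vs. faulty frequency response of an RC low-pass filter.
def lowpass(f, R, C):
    return 1.0 / (1.0 + 1j * 2 * math.pi * f * R * C)

R, C = 1e3, 1e-6               # nominal values (corner ~ 159 Hz)
R_fault = 1.5 * R              # parametric fault: R drifted +50%

def gap(f):
    return abs(lowpass(f, R, C) - lowpass(f, R_fault, C))

freqs = [10 ** (k / 10) for k in range(51)]   # 1 Hz .. 100 kHz
best = max(freqs, key=gap)
print(f"best test frequency ~ {best:.0f} Hz, response gap {gap(best):.3f}")
```

The gap vanishes at DC (both responses are 1) and at high frequency (both roll off), so the most discriminating test frequency sits near the corner, which is exactly why a frequency-selecting test generator is needed.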
  • A grid-based approach for connectivity binding with geometric costs

    Publication Year: 1993 , Page(s): 94 - 99
    Cited by:  Papers (13)  |  Patents (1)

    This paper discusses the problem of connectivity binding with geometric costs and a connectivity binder (GB) built to solve it. The goal of GB is to produce bindings with short interconnection lengths. This is important because routing can account for a significant portion of the layout area and long communication lines tend to lead to longer cycle times due to increased capacitance. Long lines also tend to increase power consumption, so it is important to decrease the interconnection lengths for low-power designs. This issue becomes even more critical as feature sizes are reduced. GB uses a new grid-based connectivity binding approach that incorporates these layout issues into the binding process. The usefulness of this grid-based approach is discussed and demonstrated.

  • Layout-driven module selection for register-transfer synthesis of sub-micron ASIC's

    Publication Year: 1993 , Page(s): 100 - 103
    Cited by:  Papers (5)

    As sub-micron design rules are adopted for IC fabrication, wiring is becoming an important issue in the register-transfer synthesis of high-speed application-specific integrated circuits. This paper proposes a new algorithm that incorporates performance-driven placement into the module selection phase of synthesis. The algorithm not only efficiently exploits multiple module implementations in the design library, but also finds the module placement that minimizes wiring delay. Experimental results on a practical-size example show that, by considering both module and wiring issues, the algorithm is able to improve design performance by more than 20%.

  • Design tool integration using object-oriented database views

    Publication Year: 1993 , Page(s): 104 - 107
    Cited by:  Papers (5)  |  Patents (5)

    CAD systems utilize a central design database (DDB) to achieve the integration of diverse design information into one model. A global DDB represents a serious bottleneck for the CAD system; it prevents the extensibility of the CAD system over time and forces all design tools to work on the same comprehensive data model. As a solution, we propose to utilize the object-oriented view mechanism, called MultiView, for declaratively specifying customized tool interfaces (design views) on the CAD database. A design view contains a subset of relevant information from the DDB organized in a fashion most suitable to the needs of a particular tool. MultiView automatically maintains the mapping between the global data model and local design views, thus freeing individual design tools from this burden. The resulting CAD environment assures the consistent integration of design data from different tools, while providing each tool with a customized view of the integrated data. This paper gives numerous examples that demonstrate MultiView and its advantages for tool integration in behavioral synthesis.

  • Beyond the combinatorial limit in depth minimization for LUT-based FPGA designs

    Publication Year: 1993 , Page(s): 110 - 114
    Cited by:  Papers (16)  |  Patents (6)

    We present an integrated approach to synthesis and mapping that goes beyond the combinatorial limit set by the depth-optimal FlowMap algorithm. The new algorithm, named FlowSYN, uses global combinatorial optimization techniques to guide the Boolean synthesis process during depth minimization. The combinatorial optimization is achieved by computing a series of minimum cuts of fixed heights in a network, based on fast network-flow computation, and the Boolean optimization is achieved by an efficient OBDD-based implementation of functional decomposition. The experimental results show that FlowSYN improves on FlowMap in terms of both the depth and the number of LUTs in the mapping solutions, and also outperforms existing FPGA synthesis algorithms for depth minimization.

  • Cube-packing and two-level minimization

    Publication Year: 1993 , Page(s): 115 - 122
    Cited by:  Papers (1)  |  Patents (1)

    Almost all mapping tools for programmable gate arrays (PGAs) start from a network optimized for the number of literals in the factored form. However, PGA architectures impose different kinds of constraints on the synthesis process. For example, table-lookup (TLU) architectures restrict each function to at most m inputs (for a fixed m), unlike any constraint in PLA or standard-cell synthesis. Thus, standard cost functions such as the number of cubes or factored-form literals are not necessarily good complexity measures for TLU architectures. Decomposition and block-count minimization are two steps in PGA mapping that are applied to an optimized design. In decomposition, a feasible representation of the network is obtained, which can be mapped directly onto the target architecture. Block-count minimization then tries to group the functions of the decomposed network into basic blocks such that the total number of blocks used is minimized. We address the problem of modeling the decomposition step during optimization; in particular, we look at cube-packing, which has proved quite effective for TLU architectures. We propose a technique for deriving a two-level representation of a logic function that yields better results after cube-packing. The technique rests on the idea of using the support of a set of primes, rather than a single prime, as the basic object in two-level minimization. Experiments indicate an average improvement of 12.5% over standard two-level methods.

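The cube-packing step can be sketched as a bin-packing heuristic over cube supports: group product terms so each group's combined support fits an m-input block. First-fit-decreasing below is a simplified stand-in for the actual algorithm, and the example function is invented.

```python
# Pack cubes (product terms) into m-input table-lookup blocks.
def cube_pack(cubes, m):
    blocks = []                      # each block: [support set, cubes]
    for cube in sorted(cubes, key=len, reverse=True):
        support = set(cube)
        for blk in blocks:
            if len(blk[0] | support) <= m:   # still fits this block
                blk[0] |= support
                blk[1].append(cube)
                break
        else:
            blocks.append([support, [cube]])
    return blocks

# f = a*b*c + a*b*d + e*f + e*g packed into 5-input blocks
cubes = [("a", "b", "c"), ("a", "b", "d"), ("e", "f"), ("e", "g")]
for support, members in cube_pack(cubes, m=5):
    print(sorted(support), members)
```

The paper's point is that the two-level cover handed to this step matters: covers whose primes share support pack into fewer blocks.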
  • Combining technology mapping and placement for delay-optimization in FPGA designs

    Publication Year: 1993 , Page(s): 123 - 127
    Cited by:  Papers (10)  |  Patents (3)

    We combine technology mapping and placement into a single procedure, M.map, for the design of RAM-based FPGAs. Iteratively, M.map maps several subnetworks of the Boolean network into a number of CLBs on the layout plane simultaneously. For every output node of the unmapped portion of the Boolean network, many mappings are possible; the choice depends on the location of the CLB into which the output node will be mapped as well as the interconnection with the already-mapped CLBs. To deal with such a complicated interaction among multiple output nodes, multiple possible mappings, and multiple CLBs, a greedy algorithm is insufficient. Instead, we use a bipartite weighted matching algorithm to find a globally optimal solution. With the availability of partial placement information, M.map is able to minimize the routing delay in addition to the number of CLBs. Experimental results on a set of benchmarks demonstrate that M.map is indeed very effective in minimizing the real (post-routing) delay as well as the number of CLBs.

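The matching step can be illustrated on a tiny instance: assign output nodes to CLB sites so the total cost is minimized. Exhaustive search over assignments stands in for a proper bipartite weighted matching algorithm, and the cost values are invented.

```python
from itertools import permutations

# Brute-force minimum-cost assignment of nodes to CLB sites.
def best_assignment(cost):
    nodes = list(cost)
    sites = list(next(iter(cost.values())))
    best, best_total = None, float("inf")
    for perm in permutations(sites, len(nodes)):
        total = sum(cost[n][s] for n, s in zip(nodes, perm))
        if total < best_total:
            best, best_total = dict(zip(nodes, perm)), total
    return best, best_total

cost = {                      # cost[node][site]: mapping + wiring cost
    "o1": {"clb_A": 2, "clb_B": 5, "clb_C": 4},
    "o2": {"clb_A": 3, "clb_B": 1, "clb_C": 6},
    "o3": {"clb_A": 4, "clb_B": 7, "clb_C": 2},
}
assign, total = best_assignment(cost)
print(assign, total)  # optimal total cost is 5
```

A real implementation uses a polynomial-time algorithm (e.g. the Hungarian method) so that many nodes and sites can be matched per iteration.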
  • Parallel timing simulation on a distributed memory multiprocessor

    Publication Year: 1993 , Page(s): 130 - 135
    Cited by:  Papers (6)

    We present a parallel timing simulator, PARSWEC, that exploits speculative parallelism and runs on a distributed memory multiprocessor. It is based on an event-driven timing simulator called SWEC. Our approach uses optimistic scheduling to take advantage of the latency of digital signals. Using data from trace-driven analysis, we demonstrate that optimistic scheduling exploits more parallelism than conservative scheduling for circuits with feedback signal paths. We then describe the PARSWEC implementation and discuss several design trade-offs. Speedups over SWEC on large circuits are as high as 55 on a 64-node CM5 multiprocessor. These results indicate the feasibility of using distributed memory multiprocessors for large-scale circuit simulation.

  • Event driven adaptively controlled explicit simulation of integrated circuits

    Publication Year: 1993 , Page(s): 136 - 140
    Cited by:  Papers (10)

    Adaptively Controlled Explicit Simulation (ACES) is a timing simulation methodology for integrated circuits and systems. The paper presents the use of event driven simulation and partitioning to enhance the ACES simulation environment by exploiting the latency present in integrated circuits. ACES also uses an improved, adaptively controlled explicit integration approximation which overcomes the stability problems encountered in earlier explicit techniques. Piecewise linear models are used for nonlinear devices allowing efficient simulation of MOS, BiMOS and bipolar circuits.

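The adaptive control of an explicit method can be sketched on a single RC node, v' = (vin - v)/(R*C): shrink the step when the node is active and grow it when the node is latent. The thresholds and step-size rules below are illustrative, not the ACES formulas.

```python
# Adaptively controlled explicit (forward-Euler) integration sketch.
def simulate(vin, R, C, t_end, dv_max=0.05, h0=1e-6):
    tau = R * C
    t, v, h, steps = 0.0, 0.0, h0, 0
    while t < t_end:
        dv = h * (vin - v) / tau
        if abs(dv) > dv_max:          # too big a jump: halve the step
            h *= 0.5
            continue
        v += dv
        t += h
        steps += 1
        if abs(dv) < 0.25 * dv_max:   # node nearly latent: grow step
            h *= 2.0
    return v, steps

v, steps = simulate(vin=1.0, R=1e3, C=1e-9, t_end=10e-6)
print(f"v(10us) = {v:.4f} V after {steps} steps")
```

Clamping the per-step voltage change is one simple way to sidestep the stability limit of explicit integration, and the growing step during latency is where event-driven explicit simulators win their speed.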
  • Simulating sigma-delta modulators in AWEswit

    Publication Year: 1993 , Page(s): 141 - 144

    The paper describes the modeling and simulation of switched-capacitor sigma-delta modulators in AWEswit. AWEswit is a mixed-signal simulator for switched-capacitor circuits that merges electrical circuit simulation with event-driven logic simulation, employing asymptotic waveform evaluation (AWE) as its core simulation engine. Mixed-level modeling of the circuit components is facilitated by AWEswit's ability to combine formulations in the current-voltage and charge-voltage regimes, and AWEswit naturally handles the linear bandwidth limitations associated with switched-capacitor circuits. Here, AWEswit's approach to modeling the clock feedthrough and signal-dependent charge dump that characterize MOSFET switches is described. In addition, a postprocessing technique to superimpose the effect of component nonlinearity on the linear AWEswit solution is presented.
