IEEE/ACM International Conference on Computer Aided Design (ICCAD-2000)

5-9 November 2000

Displaying results 1-25 of 95
  • IEEE/ACM International Conference on Computer Aided Design. ICCAD - 2000. IEEE/ACM Digest of Technical Papers (Cat. No.00CH37140)

    Publication Year: 2000
    Freely Available from IEEE
  • Physical planning with retiming

    Publication Year: 2000, Page(s): 2-7
    Cited by: Papers (21) | Patents (17)

    In this paper, we propose a unified approach to partitioning, floorplanning, and retiming for effective and efficient performance optimization. The integration enables the partitioner to exploit the more realistic geometric delay model provided by the underlying floorplan. Simultaneous consideration of partitioning and retiming under the geometric delay model enables us to hide global interconnect latency effectively by repositioning flip-flops along long wires. Under the proposed geometric-embedding-based, performance-driven partitioning formulation, our GEO algorithm performs multi-level top-down partitioning while determining the location of the partitions. We adopt the concept of sequential arrival time and develop sequential required time in our retiming-based timing analysis engine. GEO performs cluster-move-based iterative improvement on top of a multi-level cluster hierarchy, where the gain function obtained from the timing analysis is based on the minimization of cutsize, wirelength, and sequential slack. In comparison to (i) the state-of-the-art partitioner hMetis followed by retiming and simulated-annealing-based slicing floorplanning, and (ii) the state-of-the-art simultaneous partitioning-with-retiming approach HPM followed by floorplanning, GEO obtains 35% and 23% better delay results while maintaining comparable cutsize, wirelength, and runtime.

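    The "sequential arrival time" concept the abstract builds on can be illustrated on a toy acyclic example (a sketch in the standard retiming view, not the authors' engine; the graph and numbers are invented): crossing a flip-flop subtracts one clock period, so the propagated times reflect what retiming could achieve rather than the current register placement.

        # Sequential arrival time over a tiny invented DAG: each edge carries a
        # combinational delay d and a flip-flop count w; each FF "buys back" one
        # clock period T.
        T = 10.0
        edges = [("in", "a", 4, 0), ("a", "b", 9, 1), ("b", "out", 5, 0)]
        arr = {"in": 0.0}
        for u, v, d, w in edges:              # edges listed in topological order
            arr[v] = max(arr.get(v, float("-inf")), arr[u] + d - w * T)
        print(arr)  # arr["out"] <= T suggests period T is reachable by moving FFs
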
  • Corner block list: an effective and efficient topological representation of non-slicing floorplan

    Publication Year: 2000, Page(s): 8-12
    Cited by: Papers (88)

    In this paper, a corner block list, a new efficient topological representation of non-slicing floorplans, is proposed with applications to VLSI floorplan and building-block placement. Given a corner block list, it takes only linear time to construct the floorplan. Unlike the O-tree structure, which determines the exact floorplan based on given block sizes, the corner block list defines the floorplan independently of the block sizes. Thus, the structure is better suited for floorplan optimization with various size configurations of each block. Based on this new structure and the simulated annealing technique, an efficient floorplan algorithm is given. Soft blocks and the aspect ratio of the chip are taken into account in the simulated annealing process. The experimental results demonstrate that the algorithm is quite promising.

  • Modeling non-slicing floorplans with binary trees

    Publication Year: 2000, Page(s): 13-16
    Cited by: Papers (7) | Patents (5)

    Several novel topological representations of non-slicing floorplans have recently been proposed, providing new ideas and techniques for solving block placement problems and other related layout applications. Among these topological representations, ordered trees exhibit a lower redundancy and, therefore, a provably smaller search space, which makes them the best topological candidate for solving general block placement problems. Since the early eighties, binary trees have been widely used to represent slicing floorplans. This paper shows that binary trees can efficiently model non-slicing floorplans as well, as there is a one-to-one mapping between the sets of binary and ordered trees representing the floorplan. Moreover, this paper shows that binary trees exhibiting a certain property can be used to represent block placement configurations with symmetry constraints, which is very useful when dealing with device-level placement problems for analog layout. As the number of these trees is proven to be smaller than the number of symmetric-feasible sequence-pairs, using binary trees is better than using either sequence-pairs or O-trees when solving analog placement problems. A comparative evaluation, substantiating these theoretical results, has been carried out by providing alternative optimization engines to a placement tool operating in an industrial environment.

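    The one-to-one mapping between ordered trees and binary trees that the abstract relies on is the classic first-child/next-sibling correspondence; a minimal sketch of it (my rendering, not the paper's code):

        class Ordered:
            def __init__(self, label, children=None):
                self.label, self.children = label, children or []

        class Binary:
            def __init__(self, label, left=None, right=None):
                self.label, self.left, self.right = label, left, right

        def to_binary(node):
            """Left child = first child; right child = next sibling."""
            if node is None:
                return None
            b = Binary(node.label)
            if node.children:
                b.left = to_binary(node.children[0])
                cur = b.left
                for sib in node.children[1:]:
                    cur.right = to_binary(sib)
                    cur = cur.right
            return b

        def to_ordered(b):
            """Inverse: recover the ordered tree from its binary encoding."""
            if b is None:
                return None
            node = Ordered(b.label)
            child = b.left
            while child is not None:        # walk the sibling chain
                node.children.append(to_ordered(child))
                child = child.right
            return node

        t = Ordered("root", [Ordered("a"), Ordered("b", [Ordered("c")]), Ordered("d")])
        assert to_ordered(to_binary(t)).children[1].children[0].label == "c"
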
  • On mismatches between incremental optimizers and instance perturbations in physical design tools

    Publication Year: 2000, Page(s): 17-21
    Cited by: Papers (6)

    The incremental, "construct by correction" design methodology has become widespread in constraint-dominated DSM design. We study the problem of ECO for physical design domains in the general context of incremental optimization. We observe that an incremental design methodology is typically built from a full optimizer that generates a solution for an initial instance, and an incremental optimizer that generates a sequence of solutions corresponding to a sequence of perturbed instances. Our hypothesis is that in practice, there can be a mismatch between the strength of the incremental optimizer and the magnitude of the perturbation between successive instances. When such a mismatch occurs, the solution quality will degrade, perhaps to the point where the incremental optimizer should be replaced by the full optimizer. We document this phenomenon for three distinct domains (partitioning, placement, and routing) using leading industry and academic tools. Our experiments show that current CAD tools may not be correctly designed for ECO-dominated design processes. Thus, compatibility between optimizer and instance perturbation merits attention both as a research question and as a matter of industry design practice.

  • Event driven simulation without loops or conditionals

    Publication Year: 2000, Page(s): 23-26
    Cited by: Papers (5)

    The past several years have seen much research in event-driven logic simulation. Various logic and delay models have been explored. Most simulation research has focused on improving simulation performance. New approaches to both compiled and event-driven simulation have been explored. The internal operations of event-driven simulators can be divided into two categories: scheduling and gate simulation. Much effort has been focused on reducing the cost of scheduling. There has also been effort to reduce the cost of gate simulation. It has also been shown that explicit computation of gate outputs is unnecessary, as long as event propagation is computed correctly. Even though research has reduced the complexity of both scheduling and gate simulation, it is still necessary to test for event propagation and cancellation, and it is necessary to perform some computations during gate simulation. This paper shows that none of these computations are necessary. Most computations are devoted to testing internal states and computing new internal states. In our technique, subroutine addresses are used to maintain states. This permits the elimination of all state-testing and state-computation code. Our technique is significantly faster than conventional event-driven simulation. Unlike earlier methods, our approach can easily be extended to any logic model or any delay model.

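    The "subroutine addresses maintain state" idea can be sketched in a few lines (a toy of my own, not the paper's simulator): a 2-input AND gate whose internal state lives entirely in which handler is currently installed, so processing an event is a single indirect call with no state tests.

        import heapq, itertools

        _tick = itertools.count()
        events = []                              # a real simulator uses a timing wheel

        def schedule(t, fn):
            heapq.heappush(events, (t, next(_tick), fn))

        class And2:
            def __init__(self, name):
                self.name = name
                # State "0 inputs high": the installed handlers ARE the state.
                # Events impossible in a state are left unset in this toy.
                self.on_rise, self.on_fall = self._rise_0, None

            def _rise_0(self, t):                # 0 -> 1 input high, output stays 0
                self.on_rise, self.on_fall = self._rise_1, self._fall_1
            def _rise_1(self, t):                # 1 -> 2 inputs high, output rises
                self.on_rise, self.on_fall = None, self._fall_2
                schedule(t + 1, lambda tt: print(tt, self.name, "-> 1"))
            def _fall_2(self, t):                # 2 -> 1 input high, output falls
                self.on_rise, self.on_fall = self._rise_1, self._fall_1
                schedule(t + 1, lambda tt: print(tt, self.name, "-> 0"))
            def _fall_1(self, t):                # 1 -> 0 inputs high, output stays 0
                self.on_rise, self.on_fall = self._rise_0, None

        g = And2("y")
        schedule(0, lambda t: g.on_rise(t))      # input A rises
        schedule(2, lambda t: g.on_rise(t))      # input B rises -> y rises at t=3
        schedule(5, lambda t: g.on_fall(t))      # input A falls -> y falls at t=6
        while events:
            t, _, fn = heapq.heappop(events)
            fn(t)
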
  • Observability analysis of embedded software for coverage-directed validation

    Publication Year: 2000, Page(s): 27-32
    Cited by: Papers (5) | Patents (1)

    The most common approach to checking the correctness of a hardware or software design is to verify that a description of the design has the proper behavior as elicited by a series of input stimuli. In the case of software, the program is simply run with the appropriate inputs, and in the case of hardware, its description written in a hardware description language (HDL) is simulated with the appropriate input vectors. In coverage-directed validation, coverage metrics are defined that quantitatively measure the degree of verification coverage of the design. Motivated by recent work on observability-based coverage metrics for models described in a hardware description language, we develop a method that computes an observability-based code coverage metric for embedded software written in a high-level programming language. Given a set of input vectors, our metric indicates the instructions that had no effect on the output: an assignment that was not relevant to generating the output value cannot be considered covered. Results show that our method offers a significantly more accurate assessment of design verification coverage than statement coverage. Existing coverage methods for hardware can be used with our method to build a verification methodology for mixed hardware/software or embedded systems.

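    A toy illustration of the distinction (my own, not the paper's metric): tag every assignment, let the tags flow with the data, and count an assignment as covered only if its tag reaches an output.

        class V:
            """A value carrying the set of assignment tags that influenced it."""
            def __init__(self, val, tags=frozenset()):
                self.val, self.tags = val, frozenset(tags)
            def __add__(self, o):
                return V(self.val + o.val, self.tags | o.tags)
            def __mul__(self, o):
                return V(self.val * o.val, self.tags | o.tags)

        def assign(stmt, v):               # record which statement produced the value
            return V(v.val, v.tags | {stmt})

        def program(a, b):
            t1 = assign("s1", a + b)
            dead = assign("s2", a * V(2))  # executed, but never observed at the output
            return assign("s3", t1 * V(3))

        out = program(V(2), V(5))
        print(out.val, sorted(out.tags))   # 21 ['s1', 's3']: s2 runs yet is not covered

    Statement coverage would report s1, s2, and s3 as covered; the observability-style tally reports only s1 and s3.
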
  • A methodology for verifying memory access protocols in behavioral synthesis

    Publication Year: 2000, Page(s): 33-38

    Memory is one of the most important components to be optimized in several phases of the synthesis process. In behavioral synthesis, a memory is viewed as an abstract construct that hides the implementation details of the memory. Consequently, for a vendor's memory, behavioral synthesis should create a clean model of the memory wrapper that abstracts the properties of the memory required to interface to the rest of the circuit. However, this wrapping process demands verification of the memory access protocols before the wrapper can be safely used in a behavioral synthesis environment. In this paper, we propose a systematic methodology for verifying the correctness of the memory wrapper. Specifically, we analyze the complexity of the problem and derive an effective solution that is not only practically efficient but also highly reliable. For designers who use memories as design components in behavioral synthesis, automating our solution shortens the verification time significantly, in contrast to simulating memory accesses in the context of the full design, which is a complex and time-consuming process, especially for designs with many memory access operations.

  • Symbolic debugging scheme for optimized hardware and software

    Publication Year: 2000, Page(s): 40-43
    Cited by: Papers (1) | Patents (1)

    Symbolic debuggers are system development tools that can accelerate the validation speed of behavioral specifications by allowing a user to interact with an executing code at the source level. In response to a user query, the debugger retrieves the value of a source variable in a manner consistent with the source statement where execution has halted. However, when a behavioral specification has been optimized using transformations, values of variables may be inaccessible in the run-time state. We have developed a set of techniques that, given a behavioral specification CDFG, enforce computation of a selected subset V_cut of user variables such that (i) all other variables v ∈ CDFG can be computed from V_cut and (ii) this enforcement has minimal impact on the optimization potential of the computation. The implementation of the new debugging approach poses several optimization tasks. We have formulated the optimization tasks and developed heuristics to solve them. The effectiveness of the approach has been demonstrated on a set of benchmark designs.

  • Automated data dependency size estimation with a partially fixed execution ordering

    Publication Year: 2000, Page(s): 44-50
    Cited by: Papers (5)

    For data-dominated applications, the system-level design trajectory should first focus on finding a good data transfer and storage solution. Since no realization details are available at this level, estimates are needed to guide the designer. This paper presents an algorithm for automated estimation of strict upper and lower bounds on the individual data dependency sizes in high-level application code, given a partially fixed execution ordering. Previous work has either not taken execution ordering into account at all, resulting in large overestimates, or required a fully specified ordering, which is usually not available at this high level. The usefulness of the methodology is illustrated on representative application demonstrators.

  • FIR filter synthesis algorithms for minimizing the delay and the number of adders

    Publication Year: 2000, Page(s): 51-54
    Cited by: Papers (3)

    As the complexity of digital filters is dominated by the number of multiplications, many works have focused on minimizing the complexity of multiplier blocks that compute the constant-coefficient multiplications required in filters. Although the complexity of multiplier blocks is significantly reduced by efficient techniques such as decomposing multiplications into simple operations and sharing common subexpressions, previous works have not considered the delay of multiplier blocks, which is a critical factor in the design of complex filters. In this paper, we present new algorithms to minimize the complexity of multiplier blocks under given delay constraints. By analyzing multiplier blocks in view of delay, three delay reduction methods are proposed and combined with previous algorithms. Since the proposed algorithms can generate multiplier blocks that meet the specified delay, a trade-off between delay and hardware complexity is enabled by changing the delay constraints. Experimental results show that the proposed algorithms can reduce the delay of multiplier blocks at the cost of a small increase in complexity.

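    The shift-and-add decomposition the abstract refers to is easy to make concrete (a back-of-the-envelope sketch, not the paper's algorithm): writing a coefficient in canonical signed digit (CSD) form turns each nonzero digit into one shifted operand, so the adder count bounds hardware cost and the adder-tree depth bounds delay.

        def csd(c):
            """Signed digits of c, LSB first, each in {-1, 0, +1} (non-adjacent form)."""
            digits = []
            while c:
                if c & 1:
                    d = 2 - (c & 3)        # +1 if c % 4 == 1, -1 if c % 4 == 3
                    c -= d
                else:
                    d = 0
                digits.append(d)
                c >>= 1
            return digits

        d = csd(93)                        # 93 = +128 - 32 - 4 + 1
        terms = sum(1 for x in d if x)     # shifted operands to be summed
        adders = terms - 1                 # adders/subtractors in the multiplier block
        depth = (terms - 1).bit_length()   # balanced adder-tree depth ~ delay
        print(terms, adders, depth)        # 4 operands, 3 adders, depth 2
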
  • Effects of global interconnect optimizations on performance estimation of deep submicron design

    Publication Year: 2000, Page(s): 56-61
    Cited by: Papers (15) | Patents (2)

    In this paper, we quantify the impact of global interconnect optimization techniques that address such design objectives as delay, peak noise, delay uncertainty due to noise, power, and cost. In doing so, we develop a new system-performance simulation model as a set of studies within the MARCO GSRC Technology Extrapolation (GTX) system. We model a typical point-to-point global interconnect and focus on accurate assessment of both circuit and design technology with respect to such issues as inductance, signal line shielding, dynamic delay, buffer placement uncertainty, and repeater staggering. We demonstrate, for example, that optimal wire sizing models need to consider inductive effects, and that use of more accurate (-1,3) worst-case capacitive coupling noise switch factors substantially increases peak noise estimates compared to traditional (0,2) bounds. We also find that optimal repeater sizes are significantly smaller than conventional models would suggest, especially when considering energy-delay issues.

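    For a flavor of the conventional models the study re-examines, the textbook Bakoglu-style repeater insertion formulas can be computed in a few lines (parameter values invented; this is background, not the paper's GTX model):

        import math

        R0, C0 = 1000.0, 50e-15   # driver resistance / input cap of a unit repeater
        Rw, Cw = 5000.0, 1e-12    # total wire resistance / capacitance

        k = math.sqrt(0.4 * Rw * Cw / (0.7 * R0 * C0))  # optimal number of repeaters
        h = math.sqrt((R0 * Cw) / (Rw * C0))            # optimal repeater size
        print(f"~{k:.1f} repeaters, each ~{h:.1f}x unit size")
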
  • Impact of systematic spatial intra-chip gate length variability on performance of high-speed digital circuits

    Publication Year: 2000, Page(s): 62-67
    Cited by: Papers (22) | Patents (9)

    Using data collected from an actual state-of-the-art fabrication facility, we conducted a comprehensive characterization of an advanced 0.18 µm CMOS process. The measured data revealed significant systematic, rather than random, spatial intra-chip variability of MOS gate length, leading to large circuit path delay variation. The critical path value of a combinational logic block varies by as much as 17%, and the global skew is increased by 8%. Thus, a significant timing error (~25%) and performance loss take place if variability is not properly addressed. We derive a model that allows estimating performance degradation for the given circuit and process parameters. Analysis shows that the spatial, rather than proximity-dependent, systematic Lgate variability is the main cause of large circuit speed degradation. The degradation is worse for circuits with a larger number of critical paths and shorter average logic depth. We propose a location-dependent timing analysis methodology that mitigates the detrimental effects of Lgate variability, and we developed a tool linking the layout-dependent spatial information to circuit analysis. We discuss the details of the practical implementation of the methodology and provide guidelines for managing the design complexity.

  • Miller factor for gate-level coupling delay calculation

    Publication Year: 2000, Page(s): 68-74
    Cited by: Papers (31) | Patents (4)

    In coupling delay computation, a Miller factor of more than 2× may be necessary to account for active coupling capacitance when modeling the delay of deep submicron circuitry. We propose an efficient method to estimate this factor such that the delay response of a decoupled circuit model can emulate the original coupled circuit. Under the assumptions of zero initial voltage, equal charge transfer, and 0.5 VDD as the switching threshold voltage, an upper bound of 3× for maximum delay and a lower bound of -1× for minimum delay can be proven. Efficient Newton-Raphson iteration is also proposed as a technique for computing the Miller factor or effective capacitance. This result is highly applicable to crosstalk coupling delay calculation in deep submicron gate-level static timing analysis. Detailed analysis and approximation are presented. SPICE simulations show high correlation with these approximations.

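    The Newton-Raphson idea can be sketched generically (my stand-in delay model and numbers, not the paper's equations): pick the factor m so that a decoupled model with C_eff = Cg + m*Cc reproduces a coupled-case delay obtained elsewhere, e.g. from simulation.

        import math

        R, Cg, Cc = 2000.0, 40e-15, 25e-15    # invented driver/net parameters

        def delay(m):                         # decoupled 50% delay: R * C_eff * ln 2
            return R * (Cg + m * Cc) * math.log(2)

        target = 1.42e-10                     # pretend measured coupled-case delay
        m = 1.0                               # start inside the proven [-1, 3] range
        for _ in range(20):
            f = delay(m) - target
            df = (delay(m + 1e-3) - delay(m)) / 1e-3   # numerical derivative
            m -= f / df
            if abs(f) < 1e-16:
                break
        print(f"Miller factor ~ {m:.2f}")     # ~2.50 for these numbers
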
  • Challenges and opportunities in broadband and wireless communication designs

    Publication Year: 2000, Page(s): 76-82
    Cited by: Papers (2) | Patents (2)

    Communication designs form the fastest growing segment of the semiconductor market. Both network processors and wireless chipsets have been attracting a great deal of research attention, financial resources, and design effort. However, further progress is limited by the lack of adequate system methodologies and tools. Our goal in this paper is to provide impetus for the development of communication design techniques and tools. The first part addresses network processors (NPs), which we study from three viewpoints: application, architecture, and system software and compilation tools. In addition to a summary of the main issues and representative case studies, we identify the main system design issues. The second part of the tutorial focuses on wireless design. The main emphasis is on a platform-based design methodology that leverages functional profiling, architecture exploration, and orthogonalization of concerns to facilitate low-power wireless communication systems. The highlight of the paper, an in-depth study of state-of-the-art wireless design, PicoRadio, is used as an explanatory design example.

  • Challenges in physical chip design

    Publication Year: 2000, Page(s): 84-91
    Cited by: Papers (3)

    The chip industry obeys a number of laws of various kinds: mathematical laws where accurate models can be formulated; physical laws, especially those of solid-state physics, obtained by observation and induction; chemical laws pertinent to the manufacturing processes; and economic and judicial laws that concern such industries. These laws still hold true, although technology has come a long way since they were formulated. Obviously, modern technologies require a completely different design flow. Homogeneous processors do not benefit much from parts of a traditional flow. The emphasis should be more on modeling applications as networks of communicating processes in a suitable specification language. Equally important is reuse of specification software, considering the short life spans of integrated circuits and the demand for short paths to market. General multilayer designs require completely new layout synthesis tools. Placement is obsolete, and even floorplan design for each layer is not adequate because of the strong geometrical constraints. Wire planning will be more of a must, but it has to acquire a more precise meaning in this application. The challenges posed by the unavoidable escape routes, to break free from the confinements of conventional large-scale integration methodologies, are the topic of this paper.

  • General models for optimum arbitrary-dimension FPGA switch box designs

    Publication Year: 2000, Page(s): 93-98
    Cited by: Papers (3)

    An FPGA switch box is said to be hyper-universal if it is routable for all possible surrounding multi-pin net topologies satisfying the routing resource constraints. It is desirable to design hyper-universal switch boxes with the minimum number of switches. A previous work, the Universal Switch Module, considered such a design problem for 2-pin net routings around a single FPGA switch box. However, as most nets are multi-pin nets in practice, it is imperative to study the problem involving multi-pin nets. In this paper, we provide a new view of global routings and formulate the most general k-sided switch box design problem as an optimum k-partite graph design problem. Applying a powerful decomposition theorem for global routings, we prove that, for a fixed k, the number of switches in an optimum k-sided switch box with W terminals on each side is O(W), by constructing hyper-universal switch boxes with O(W) switches. Furthermore, we obtain optimum hyper-universal 2-sided and 3-sided switch boxes, and we propose hyper-universal 4-sided switch boxes with fewer than 6.7W switches, which is very close to the lower bound of 6W obtained for pure 2-pin net models.

  • A timing-constrained algorithm for simultaneous global routing of multiple nets

    Publication Year: 2000, Page(s): 99-103
    Cited by: Papers (14) | Patents (41)

    In this paper we propose a new approach for VLSI interconnect global routing that can optimize both congestion and delay, which are often competing objectives. Our approach provides a general framework that may use any single-net routing algorithm and any delay model in global routing. It is based on the observation that there are several routing topology flexibilities under timing constraints. These flexibilities are exploited for congestion reduction through a network-flow-based hierarchical bisection and assignment process. Experimental results on benchmark circuits are quite promising.

  • Provably good global buffering using an available buffer block plan

    Publication Year: 2000, Page(s): 104-109
    Cited by: Papers (11) | Patents (1)

    To implement high-performance global interconnect without impacting the performance of existing blocks, the use of buffer blocks is increasingly popular in structured-custom and block-based ASIC/SOC methodologies. Recent works by Cong et al. (1999) and Tang and Wong (2000) give algorithms to solve the buffer block planning problem. In this paper we address the problem of how to perform buffering of global nets given an existing buffer block plan. Assuming that global nets have already been decomposed into two-pin connections, we give a provably good algorithm based on a recent approach of Garg and Konemann (1998) and Fleischer (1999). Our method routes connections using available buffer blocks, such that required upper and lower bounds on buffer intervals, as well as wirelength upper bounds per connection, are satisfied. Our model allows more than one buffer to be inserted into any given connection. In addition, our algorithm observes buffer parity constraints, i.e., it will choose to use an inverter or a buffer (a co-located pair of inverters) according to source and destination signal parity. The algorithm outperforms previous approaches and has been validated on top-level layouts extracted from a recent high-end microprocessor design.

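    The Garg-Konemann-style length-scaling idea the algorithm builds on can be sketched compactly (a fractional toy of my own, ignoring the buffer parity, interval, and wirelength constraints): route each connection along its cheapest path under current edge lengths, then inflate the lengths of the edges just used so congested buffer resources are avoided by later connections.

        import heapq
        from collections import defaultdict

        graph = {                                   # node -> [(neighbor, capacity)]
            "s1": [("b1", 1), ("b2", 1)],
            "s2": [("b1", 1), ("b2", 1)],
            "b1": [("t", 2)], "b2": [("t", 2)],
        }
        length = defaultdict(lambda: 1.0)           # multiplicative edge weights
        eps = 0.1

        def cheapest_path(src, dst):
            dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, float("inf")):
                    continue
                for v, cap in graph.get(u, []):
                    nd = d + length[(u, v)] / cap
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(pq, (nd, v))
            path, u = [], dst
            while u != src:
                path.append((prev[u], u))
                u = prev[u]
            return path[::-1]

        for conn in [("s1", "t"), ("s2", "t")] * 3: # round-robin the connections
            for e in cheapest_path(*conn):
                length[e] *= (1 + eps)              # penalize just-used edges
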
  • Predictable routing

    Publication Year: 2000, Page(s): 110-113
    Cited by: Papers (26) | Patents (48)

    Predictable routing is the concept of using prespecified patterns to route a net. By doing this, we enable a more accurate prediction mechanism for metrics such as congestion and wirelength earlier in the design flow. Additionally, we can better plan the routes, insert buffers, and perform wire sizing earlier. With comparable routing quality, we show that we can predictably route up to 30% of a selected subset of nets. Also, we introduce methods for finding a group of nets that can be predictably routed.

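    A toy of the pattern idea (my illustration, not the paper's method): restrict each two-pin net to its two L-shaped routes on a grid and take the one whose worst edge is least congested, so every net's route is predictable from its pins alone.

        from collections import defaultdict

        use = defaultdict(int)                       # demand per routing-grid edge

        def l_route(src, dst, bend_at_dst_x):
            """Grid edges of one of the two L-shaped patterns from src to dst."""
            (x1, y1), (x2, y2) = src, dst
            bend = (x2, y1) if bend_at_dst_x else (x1, y2)
            es = []
            for (ax, ay), (bx, by) in ((src, bend), (bend, dst)):
                es += [("h", x, ay) for x in range(min(ax, bx), max(ax, bx))]
                es += [("v", ax, y) for y in range(min(ay, by), max(ay, by))]
            return es

        def route(src, dst):
            best = min((l_route(src, dst, b) for b in (True, False)),
                       key=lambda es: max((use[e] for e in es), default=0))
            for e in best:
                use[e] += 1
            return best

        route((0, 0), (3, 2))
        route((1, 0), (2, 2))   # second net picks the bend avoiding the first
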
  • Counterexample-guided choice of projections in approximate symbolic model checking

    Publication Year: 2000, Page(s): 115-119
    Cited by: Papers (3)

    BDD-based symbolic techniques for approximate reachability analysis, based on decomposing the circuit into a collection of overlapping sub-machines (also referred to as overlapping projections), have recently been proposed. Computing a superset of the reachable states in this fashion is susceptible to false negatives, and searching for real counterexamples in such an approximate space is liable to failure. In this paper, the "hybridization effect" induced by the choice of projections is identified as the cause of the failure. A heuristic based on Hamming distance is proposed to improve the choice of projections, reducing the hybridization effect and facilitating either a genuine counterexample or a proof of the property. The ideas are evaluated on a large real design example from the PCI interface unit in the MAGIC chip of the Stanford FLASH multiprocessor.

  • Smart simulation using collaborative formal and simulation engines

    Publication Year: 2000, Page(s): 120-126
    Cited by: Papers (33) | Patents (2)

    We present Ketchum, a tool that was developed to improve the productivity of simulation-based functional verification by providing two capabilities: (1) automatic test generation and (2) unreachability analysis. Given a set of "interesting" signals in the design under test (DUT), automatic test generation creates input stimuli that drive the DUT through as many different combinations (called coverage states) of these signals as possible to thoroughly exercise the DUT. Unreachability analysis identifies as many unreachable coverage states as possible.

  • Simulation coverage enhancement using test stimulus transformation

    Publication Year: 2000, Page(s): 127-133
    Cited by: Papers (2) | Patents (3)

    This paper introduces the concept of abstract state exploration histories to a simulation environment and presents a test stimulus transformation (TST) technique to improve simulation coverage. State exploration histories are adapted from reachability analysis in formal verification. In TST, an aggressively abstracted state exploration history is maintained during simulation. While this history is being collected, test stimuli from an existing test bench are transformed on-the-fly to explore new scenarios that are not in the history. The results show that a 3-fold increase in transition coverage for a cache coherence controller and 10-times-faster coverage convergence for an MPEG2 decoder can be achieved.

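    The mechanism is easy to caricature (a toy of my own, not the authors' tool): keep a set of abstracted states already visited; when a stimulus from the test bench would only revisit one, perturb it on the fly.

        import random

        def abstract(state):              # aggressive abstraction: keep two bits
            return state & 0b11

        def step(state, stim):            # stand-in for simulating the design
            return (state * 5 + stim) & 0xFF

        random.seed(0)
        history, state = set(), 0
        for stim in [1, 2, 3, 4, 5, 6]:   # stimuli from an existing test bench
            nxt = step(state, stim)
            if abstract(nxt) in history:  # would revisit: transform the stimulus
                stim = random.randrange(256)
                nxt = step(state, stim)
            history.add(abstract(nxt))
            state = nxt
        print(sorted(history))
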
  • Dynamic response time optimization for SDF graphs

    Publication Year: 2000, Page(s): 135-140
    Cited by: Papers (3)

    Synchronous Data Flow (SDF) is a well-known model of computation that is widely used in the control engineering and digital signal processing domains. Existing scheduling methods are mainly static approaches that assume full knowledge of the environment, e.g., data arrival times. In a growing number of practical cases, such as Internet multimedia applications, there is only partial knowledge of the environment, e.g., average data rates. Here, only dynamic scheduling can yield optimal results. In this paper we propose a new dynamic scheduling method that minimizes the maximal response time of the system. It is a generalization of a deadline revision method that allows treatment of data-dependent tasks using EDF scheduling. The applicability and benefit of the new approach are shown using a real-world example.

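    The EDF (earliest deadline first) policy underlying the method fits in a few lines (a minimal preemptive sketch with invented tasks; the paper's deadline-revision generalization is not shown):

        import heapq

        tasks = [(0, 3, 7), (1, 2, 4), (2, 1, 9)]   # (release, runtime, deadline)
        pending = sorted(tasks)                      # ordered by release time
        ready, t = [], 0
        while ready or pending:
            while pending and pending[0][0] <= t:    # admit newly released tasks
                r, c, d = pending.pop(0)
                heapq.heappush(ready, (d, c))
            if not ready:
                t = pending[0][0]                    # idle until the next release
                continue
            d, c = heapq.heappop(ready)              # earliest deadline runs one tick
            print(f"t={t}: run task with deadline {d}")
            t += 1
            if c > 1:
                heapq.heappush(ready, (d, c - 1))
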
  • Full-chip, three-dimensional, shapes-based RLC extraction

    Publication Year: 2000, Page(s): 142-149
    Cited by: Papers (7) | Patents (1)

    In this paper, we report the development of the first commercial full-chip, three-dimensional, shapes-based RLCK extraction tool, developed as part of a university-industry collaboration. The technique of return-limited inductances is used to provide a sparse, frequency-independent inductance and resistance network with self-inductances that represent sensible "nominal" values in the absence of mutual coupling. Mutual inductances are extracted for accurate noise analysis. The tool, Assura RLCX, exploits high-capacity scan-band techniques and disk caching for inductance extraction as an extension of Cadence's existing Assura RCX extractor.
