Computer Aided Design, 2004. ICCAD-2004. IEEE/ACM International Conference on

Date 7-11 Nov. 2004

Displaying Results 1 - 25 of 200
  • Dynamic transition relation simplification for bounded property checking

    Publication Year: 2004 , Page(s): 50 - 57
    Cited by:  Papers (24)  |  Patents (2)

    Bounded model checking (BMC) is an incomplete property checking method that is based on a finite unfolding of the transition relation to disprove the correctness of a set of properties or to prove them for a limited execution length from the initial states. Current BMC techniques repeatedly concatenate the original transition relation to unfold the circuit with increasing depths. In this paper we present a method that is based on a dual unfolding scheme. The first unfolding is non-initialized and progressively simplifies concatenated frames of the transition relation. The tail of the simplified frames is then applied in the second unfolding, which starts from the initial state and checks the properties. We use a circuit graph representation for all functions and perform simplification by merging vertices that are functionally equivalent under given input constraints. In the non-initialized unfolding, previous time frames progressively tighten these constraints, thus leading to an asymptotic simplification of the transition relation. As a side benefit, our method can find inductive invariants constructively by detecting when vertices are functionally equivalent across time frames. This information is then used to further simplify the transition relation and, in some cases, prove unbounded correctness of properties. Our experiments using industrial property checking problems demonstrate that the presented method significantly improves the efficiency of BMC.
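The bounded unfolding that BMC performs can be illustrated on a toy deterministic transition system. This is an illustrative sketch only (the counter, property, and function names are assumptions), not the paper's dual-unfolding simplification:

```python
# Toy transition system: a 2-bit counter that wraps at 3. The state is
# an int in {0,1,2,3}; the property under check is "state != 3".
INIT = 0

def step(state):
    # Deterministic transition relation, kept trivial for illustration.
    return (state + 1) % 4

def bmc(prop, bound):
    """Unfold the transition relation `bound` times from INIT, checking
    `prop` at every depth; return the depth of the first violation, or
    None if no counterexample exists within the bound."""
    state = INIT
    for depth in range(bound + 1):
        if not prop(state):
            return depth
        state = step(state)
    return None

assert bmc(lambda s: s != 3, 2) is None  # no counterexample up to depth 2
assert bmc(lambda s: s != 3, 5) == 3     # disproved at unfolding depth 3
```

The incompleteness is visible directly: with bound 2 the property appears to hold, and only a deep enough unfolding exposes the violation.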

  • Checking consistency of C and Verilog using predicate abstraction and induction

    Publication Year: 2004 , Page(s): 66 - 72
    Cited by:  Papers (6)

    It is common practice to write C models of circuits due to their greater simulation efficiency. Once the C program satisfies the requirements, the circuit is designed in a hardware description language (HDL) such as Verilog. It is therefore highly desirable to automatically perform a correspondence check between the C model and a circuit given in HDL. We present an algorithm that checks consistency between an ANSI-C program and a circuit given in Verilog using predicate abstraction. The algorithm exploits the fact that the C program and the circuit share many basic predicates. In contrast to existing tools that perform predicate abstraction, our approach is SAT-based and allows all ANSI-C and Verilog operators in the predicates. We report experimental results on an out-of-order RISC processor. We compare the performance of the new technique to bounded model checking (BMC).

  • Unification of partitioning, placement and floorplanning

    Publication Year: 2004 , Page(s): 550 - 557
    Cited by:  Papers (35)  |  Patents (5)

    Large macro blocks, pre-designed datapaths, embedded memories and analog blocks are increasingly used in ASIC designs. However, robust algorithms for large-scale placement of such designs have only recently been considered in the literature, and improvements by over 10% per paper are still common. Large macros can be handled by traditional floorplanning, but are harder to account for in min-cut and analytical placement. On the other hand, traditional floorplanning techniques do not scale to large numbers of objects, especially in terms of solution quality. We propose to integrate min-cut placement with fixed-outline floorplanning to solve the more general placement problem, which includes cell placement, floorplanning, mixed-size placement and achieving routability. At every step of min-cut placement, either partitioning or wirelength-driven, fixed-outline floorplanning is invoked. If the latter fails, we undo an earlier partitioning decision, merge adjacent placement regions and re-floorplan the larger region to find a legal placement for the macros. Empirically, this framework improves the scalability and quality of results for traditional wirelength-driven floorplanning. It has been validated on recent designs with embedded memories and accounts for routability. Additionally, we propose that free-shape rectilinear floorplanning can be used with rough module-area estimates before synthesis.

  • Backend CAD flows for "restrictive design rules"

    Publication Year: 2004 , Page(s): 739 - 746
    Cited by:  Papers (13)  |  Patents (3)

    To meet challenges of deep-subwavelength technologies (particularly 130 nm and following), lithography has come to rely increasingly on data processes such as shape fill, optical proximity correction, and RETs like altPSM. For emerging technologies (65 nm and following) the computation cost and complexity of these techniques are themselves becoming bottlenecks in the design-silicon flow. This has motivated the recent calls for restrictive design rules such as fixed width/pitch/orientation of gate-forming polysilicon features. We have been exploring how design might take advantage of these restrictions, and present some preliminary ideas for how we might reduce the computational cost throughout the back end of the design flow through the post-tapeout data processes while improving quality of results: the reliability of OPC/RET algorithms and the accuracy of models of manufactured products. We also believe that the underlying technology, including simulation and analysis, may be applicable to a variety of approaches to design for manufacturability (DFM).

  • Optimal wire retiming without binary search

    Publication Year: 2004 , Page(s): 452 - 458
    Cited by:  Papers (2)

    The problem of retiming over a netlist of macro-blocks to achieve the minimal clock period, where the block internal structures may not be changed and flip-flops may not be inserted on some wire segments, is called the optimal wire retiming problem. To the best of our knowledge, there is no polynomial-time approach to solve it and the existence of such an approach is still an open question. We present a brand new algorithm that solves the optimal wire retiming problem with polynomial-time worst case complexity. Since the new algorithm avoids binary search and is essentially incremental, it has the potential of being combined with other optimization techniques. Experimental results show that the new algorithm is very efficient in practice.

  • DAOmap: a depth-optimal area optimization mapping algorithm for FPGA designs

    Publication Year: 2004 , Page(s): 752 - 759
    Cited by:  Papers (27)  |  Patents (7)

    In this work we study the technology mapping problem for FPGA architectures to minimize chip area, or the total number of lookup tables (LUTs) of the mapped design, under the chip performance constraint. This is a well-studied topic and a very difficult task (NP-hard). The contributions of this work are as follows: (i) we consider the potential node duplications during the cut enumeration/generation procedure so the mapping costs encoded in the cuts drive the area-optimization objective more effectively; (ii) after the timing constraint is determined, we relax the non-critical paths by searching the solution space considering both local and global optimality information to minimize mapping area; (iii) an iterative cut selection procedure is carried out that further explores and perturbs the solution space to improve solution quality. We guarantee optimal mapping depth under the unit delay model. Experimental results show that our mapping algorithm, named DAOmap, produces significant quality and runtime improvements. Compared to the state-of-the-art depth-optimal, area minimization mapping algorithm CutMap (Cong and Hwan, 1995), DAOmap is 16.02% better on area and runs 24.2X faster on average when both algorithms are mapping to FPGAs using LUTs of five inputs. LUTs of other inputs are also used for comparisons.
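The cut enumeration at the heart of such LUT mappers can be sketched on a tiny hypothetical netlist. The netlist, the K value, and all names here are assumptions; DAOmap's cost encoding and iterative cut selection are not modeled:

```python
from itertools import product

K = 3  # LUT input count (assumed)

def enumerate_cuts(netlist, node, memo=None):
    """Enumerate the K-feasible cuts of `node` as a set of frozensets.
    `netlist` maps node -> list of fanins ([] for primary inputs)."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    fanins = netlist[node]
    if not fanins:
        # A primary input has exactly one cut: itself.
        memo[node] = {frozenset([node])}
        return memo[node]
    cuts = {frozenset([node])}  # the trivial cut
    fanin_cuts = [enumerate_cuts(netlist, f, memo) for f in fanins]
    # Merge one cut per fanin; keep the union only if it stays K-feasible.
    for combo in product(*fanin_cuts):
        merged = frozenset().union(*combo)
        if len(merged) <= K:
            cuts.add(merged)
    memo[node] = cuts
    return cuts

# Tiny hypothetical netlist: f = AND(a, b), g = OR(f, c).
net = {"a": [], "b": [], "c": [], "f": ["a", "b"], "g": ["f", "c"]}
cuts_g = enumerate_cuts(net, "g")
assert frozenset({"a", "b", "c"}) in cuts_g  # g fits in a single 3-LUT
```

A mapper then chooses one cut per node to cover the network, which is where depth and area costs come in.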

  • A path-based methodology for post-silicon timing validation

    Publication Year: 2004 , Page(s): 713 - 720
    Cited by:  Papers (3)  |  Patents (1)

    This work presents a novel path-based methodology for post-silicon timing validation. In timing validation, the objective is to decide if the timing behavior observed from the silicon is consistent with that predicted by the timing model. At the core of our path-based methodology, we propose a framework to obtain the post-silicon path ranking from observing silicon timing behavior. Then, the consistency is determined by comparing the post-silicon path ranking and the pre-silicon path ranking calculated based on the timing model. Our post-silicon ranking methodology consists of two approaches: ranking optimization and path filtering. We discuss the applications of both approaches and their impacts on the path ranking results. For experiments, we utilize a statistical timing simulator that was developed in the past to derive chip samples and we demonstrate the feasibility of our methodology using benchmark circuits.

  • Efficient computation of small abstraction refinements

    Publication Year: 2004 , Page(s): 518 - 525
    Cited by:  Papers (3)

    In the abstraction refinement approach to model checking, the discovery of spurious counterexamples in the current abstract model triggers its refinement. The proof - produced by a SAT solver - that the abstract counterexamples cannot be concretized can be used to identify the circuit elements or predicates that should be added to the model. It is common, however, for the refinements thus computed to be highly redundant. A costly minimization phase is therefore often needed to prevent excessive growth of the abstract model. In this work we show how to modify the search strategy of a SAT solver so that it generates refinements that are close to minimal, thus greatly reducing the time required for their minimization.

  • Frugal linear network-based test decompression for drastic test cost reductions

    Publication Year: 2004 , Page(s): 721 - 725
    Cited by:  Papers (4)

    In this work we investigate an effective approach to construct a linear decompression network in the multiple scan chain architecture. A minimal pin architecture, complemented by negligible hardware overhead, is constructed by mathematically analysing test data relationships, delivering in turn drastic test cost reductions. The proposed network drives a large number of internal scan chains with a short input vector, thus allowing significant reductions in both test time and test volume. The proposed method constructs an inverter-interconnect based network by exploring the pairwise linear dependencies of the internal scan chain vectors, resulting in a very low cost network that is nonetheless capable of outperforming much costlier compression schemes. We propose an iterative algorithm to construct the network from an initial set of test cubes. The experimental data shows significant reductions in test time and test volume with no loss of fault coverage.

  • A stochastic integral equation method for modeling the rough surface effect on interconnect capacitance

    Publication Year: 2004 , Page(s): 887 - 891
    Cited by:  Papers (5)

    In this work we describe a stochastic integral equation method for computing the mean value and the variance of capacitance of interconnects with random surface roughness. An ensemble average Green's function is combined with a matrix Neumann expansion to compute nominal capacitance and its variance. This method avoids time-consuming Monte Carlo simulations and the discretization of rough surfaces. Numerical experiments show that the results of the new method agree very well with Monte Carlo simulation results.
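For scale, the Monte Carlo baseline that such stochastic methods avoid looks like this for a toy one-dimensional parallel-plate model with Gaussian gap fluctuation. All geometry and roughness numbers are assumptions, not values from the paper:

```python
import random
import statistics

EPS = 8.854e-12   # vacuum permittivity, F/m
AREA = 1e-12      # 1 um^2 plate area, m^2 (assumed)
GAP = 100e-9      # nominal plate gap, m (assumed)
SIGMA = 5e-9      # roughness standard deviation, m (assumed)

def sample_capacitance(rng):
    # One rough-surface sample: perturb the gap, clamp it positive.
    d = max(rng.gauss(GAP, SIGMA), 1e-9)
    return EPS * AREA / d

def mc_capacitance(n=20000, seed=0):
    """Monte Carlo estimate of the mean and variance of C."""
    rng = random.Random(seed)
    samples = [sample_capacitance(rng) for _ in range(n)]
    return statistics.mean(samples), statistics.variance(samples)

mean_c, var_c = mc_capacitance()
nominal = EPS * AREA / GAP
# Because C = eps*A/d is convex in d, roughness pushes the mean
# capacitance slightly above the nominal (smooth-surface) value.
assert mean_c > nominal and var_c > 0
```

The paper's contribution is obtaining the same two moments analytically, without drawing thousands of rough-surface samples.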

  • Debugging sequential circuits using Boolean satisfiability

    Publication Year: 2004 , Page(s): 204 - 209
    Cited by:  Papers (11)

    Logic debugging of today's complex sequential circuits is an important problem. In this paper, a logic debugging methodology for multiple errors in sequential circuits with no state equivalence is developed. The proposed approach reduces the problem of debugging to an instance of Boolean satisfiability. This formulation takes advantage of modern Boolean satisfiability solvers that handle large circuits in a computationally efficient manner. An extensive suite of experiments with large sequential circuits confirms the robustness and efficiency of the proposed approach. The results further suggest that Boolean satisfiability provides an effective platform for sequential logic debugging.

  • Static statistical timing analysis for latch-based pipeline designs

    Publication Year: 2004 , Page(s): 468 - 472
    Cited by:  Papers (8)  |  Patents (1)

    A latch-based timing analyzer is an essential tool for developing high-speed pipeline designs. As process variations increasingly influence the timing characteristics of DSM designs, a timing analyzer capable of handling process-induced timing variations for latch-based pipeline designs is increasingly in demand. In this work, we present a static statistical timing analyzer, STAP, for latch-based pipeline designs. Our analyzer propagates statistical worst-case delays as well as critical probabilities across the pipeline stages. We present an efficient method to handle correlations due to re-convergent fanouts. We also demonstrate the impact of not including the analysis of re-convergent fanouts in latch-based pipeline designs. Compared to a Monte-Carlo based timing analyzer, our experiments show that STAP can accurately evaluate the critical probability that a design violates the timing constraints under a given statistical timing model. The runtime comparison further demonstrates the efficiency of STAP.
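The critical probability in question can be illustrated with a two-path Monte Carlo toy model, the kind of sampling baseline STAP is compared against. The path delays, the correlation structure, and every number here are assumptions; STAP propagates these statistics analytically rather than by sampling:

```python
import random

def critical_probability(period, n=20000, seed=1):
    """Monte Carlo estimate of the probability that the max of two
    correlated Gaussian path delays exceeds the clock period."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        shared = rng.gauss(0.0, 0.3)             # correlated process part
        d1 = 5.0 + shared + rng.gauss(0.0, 0.4)  # path 1 delay (assumed)
        d2 = 4.8 + shared + rng.gauss(0.0, 0.4)  # path 2 delay (assumed)
        if max(d1, d2) > period:
            fails += 1
    return fails / n

# Loosening the clock period lowers the violation probability.
p_tight = critical_probability(5.0)
p_loose = critical_probability(6.5)
assert 0.0 <= p_loose < p_tight <= 1.0
```

The shared term is what makes the two paths correlated, which is exactly the re-convergent-fanout effect the abstract says must be handled.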

  • Voltage-drop-constrained optimization of power distribution network based on reliable maximum current estimates

    Publication Year: 2004 , Page(s): 479 - 484
    Cited by:  Papers (4)

    The problem of optimum design of tree-shaped power distribution networks with respect to the voltage drop effect is addressed in this paper. An approach for the width adjustment of the power lines supplying the circuit's major functional blocks is formulated, so that the network occupies the minimum possible area under specific voltage drop constraints at all blocks. The optimization approach is based on precise maximum current estimates derived by statistical means from recent advances in the field of extreme value theory. Experimental tests include the design of the power grid for a choice of different topologies and voltage drop tolerances in a typical benchmark circuit.
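A minimal sketch of the extreme-value idea is fitting a Gumbel distribution to per-cycle current maxima by the method of moments and reading off a high quantile as the "reliable maximum". The synthetic waveform statistics and the quantile choice are assumptions, not the paper's exact estimator:

```python
import math
import random
import statistics

EULER_GAMMA = 0.5772156649

def gumbel_fit(maxima):
    """Method-of-moments Gumbel fit: scale from the std dev, location
    from the mean minus Euler-Mascheroni times the scale."""
    beta = statistics.stdev(maxima) * math.sqrt(6) / math.pi
    mu = statistics.mean(maxima) - EULER_GAMMA * beta
    return mu, beta

def gumbel_quantile(mu, beta, p):
    # Inverse Gumbel CDF: the level exceeded with probability 1 - p.
    return mu - beta * math.log(-math.log(p))

rng = random.Random(42)
# Synthetic per-cycle current maxima (amps): max over 50 draws per cycle.
maxima = [max(rng.gauss(0.1, 0.02) for _ in range(50)) for _ in range(500)]
mu, beta = gumbel_fit(maxima)
i_max_99 = gumbel_quantile(mu, beta, 0.99)  # 99th-percentile maximum
assert i_max_99 > statistics.mean(maxima)
```

The appeal is that maxima of many weakly dependent draws converge to an extreme-value law, so a short simulation can be extrapolated to a rare worst case instead of simulating it directly.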

  • The impact of device parameter variations on the frequency and performance of VLSI chips

    Publication Year: 2004 , Page(s): 343 - 346
    Cited by:  Papers (20)  |  Patents (1)

    The distance-correlated (continuous) within-die (WID) process variations of transistor parameters appear to scale approximately with process generations. Furthermore, shrinking clock cycles and the scaling of functional block dimensions in complex chips (e.g. CPUs) cause a shortening of interconnect distances. These effects mitigate correlated variations' impact on delay changes across a die. Temperature has a small effect, and supply distribution can be well understood and designed. Furthermore, uncorrelated (random) variations (e.g. RDF and LER) currently have a small impact on speed-setting paths, and even multiplying their effect (as processes shrink) would not make them very significant. Coupled with methods for estimating the shift in the maximum operating frequency (Fmax) of a die due to variations, it is shown that variations will continue to have a small effect on product speeds through the mid-term future.

  • Fast flip-chip power grid analysis via locality and grid shells

    Publication Year: 2004 , Page(s): 485 - 488
    Cited by:  Papers (47)  |  Patents (3)

    Full-chip power grid analysis is time consuming. Several techniques have been proposed to tackle the problem but typically they deal with the power grid as a whole or partition at unnatural boundaries. Using a locality effect under flip-chip packaging, we propose a natural partitioning approach based on overlapping power grid "shells". The technique makes more efficient any previous simulation techniques that are polynomial in grid size. It is also parallelizable and therefore extremely fast. Using complete partitions gives no loss of accuracy compared to a full matrix solution, while lesser partitions are conservative for droop and current. Results on a recent Pentium® microprocessor design show excellent speed and accuracy.

  • Minimizing the number of test configurations for FPGAs

    Publication Year: 2004 , Page(s): 899 - 902
    Cited by:  Papers (1)

    FPGA test cost can be greatly reduced by minimizing the number of test configurations. A test technique is presented for FPGAs with multiplexer-based routing architectures in which multiple logical paths through each multiplexer are enabled instead of only one path. It is shown that for Xilinx Virtex-II and Spartan-3 FPGAs only 8 test configurations are required to achieve 100% stuck-at, PIP stuck-on, and PIP stuck-off fault coverage.

  • Best practices in low power design. 1. Power reduction techniques [Tutorial 1]

    Publication Year: 2004 , Page(s): xi
  • HiSIM: hierarchical interconnect-centric circuit simulator

    Publication Year: 2004 , Page(s): 489 - 496
    Cited by:  Papers (2)  |  Patents (3)

    To ensure the power and signal integrity of modern VLSI circuits, it is crucial to analyze a huge number of nonlinear devices together with enormous interconnect and even substrate parasitics to achieve the required accuracy. Neither traditional circuit simulation engines such as SPICE nor switch-level timing analysis algorithms are equipped to handle such a tremendous challenge in both efficiency and accuracy. We establish a solid framework that simultaneously takes advantage of a hierarchical nonlinear circuit simulation algorithm and an advanced large-scale linear circuit simulation method using a new predictor-corrector algorithm. Under solid convergence and stability guarantees, our simulator, HiSIM, a hierarchical interconnect-centric circuit simulator, is capable of handling the post-layout RLKC power and signal integrity analysis task efficiently and accurately. Experimental results demonstrate over 180X speedup over the conventional flat simulation method with SPICE-level accuracy.

  • DAG-aware circuit compression for formal verification

    Publication Year: 2004 , Page(s): 42 - 49
    Cited by:  Papers (10)  |  Patents (3)

    The choice of representation for circuits and Boolean formulae in a formal verification tool is important for two reasons. First of all, representation compactness is necessary in order to keep the memory consumption low. This is witnessed by the importance of maximum processable design size for equivalence checkers. Second, many formal verification algorithms are sensitive to redundancies in the design that is processed. To address these concerns, three different auto-compressing representations for Boolean circuit networks and formulas have been suggested in the literature. We attempt to find a blend of features from these alternatives that allows us to remove as much redundancy as possible while not sacrificing runtime. By studying how the network representation size varies when we change parameters, we show that the use of only one operator node is suboptimal, and demonstrate that the most powerful of the proposed reduction rules, two-level minimization, actually can be harmful. We correct the bad behavior of two-level optimization by devising a simple linear simplification algorithm that can remove tens of thousands of nodes on examples where all obvious redundancies already have been removed. The combination of our compactor with the simplest representation outperforms all of the alternatives we have studied, with a theoretical runtime bound that is at least as good as the three studied representations.
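The kind of auto-compressing representation discussed here can be sketched as a structurally hashed AND-inverter graph, a generic textbook construction rather than the specific blend the paper proposes: an AND node is created only once per normalized fanin pair, so structurally identical subcircuits share one node, and inverters live on edges as literal flags:

```python
# Minimal structural hashing for an AND-inverter graph (AIG). A literal
# is a (node_id, negated) pair, so inverters cost no nodes at all.
class AIG:
    def __init__(self):
        self.strash = {}   # (lit_a, lit_b) -> node id
        self.nodes = []    # node id -> ("input", name) or (lit_a, lit_b)

    def input(self, name):
        self.nodes.append(("input", name))
        return (len(self.nodes) - 1, False)  # positive literal

    def and_(self, a, b):
        key = (min(a, b), max(a, b))  # normalize operand order
        if key not in self.strash:
            self.nodes.append(key)
            self.strash[key] = len(self.nodes) - 1
        return (self.strash[key], False)

    def not_(self, a):
        return (a[0], not a[1])

g = AIG()
x, y = g.input("x"), g.input("y")
f1 = g.and_(x, y)
f2 = g.and_(y, x)          # same AND with operands swapped: one node
assert f1 == f2
f3 = g.and_(g.not_(x), y)  # inverted fanin: genuinely new node
assert f3 != f1
```

Structural hashing like this is the baseline compaction; the reduction rules the abstract debates (one vs several operator nodes, two-level minimization) act on top of it.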

  • Towards formal verification of analog designs

    Publication Year: 2004 , Page(s): 210 - 217
    Cited by:  Papers (20)

    We show how model checking methods developed for hybrid dynamic systems may be usefully applied for analog circuit verification. Finite-state abstractions of the continuous analog behavior are automatically constructed using polyhedral outer approximations to the flows of the underlying continuous differential and difference equations. In contrast to previous approaches, we do not discretize the entire continuous state space, and our abstraction captures the relevant behaviors for verification in terms of the transitions between "states" (regions of the continuous state space) as a finite state machine in the hybrid system model. The approach is illustrated for two circuits, a standard oscillator benchmark, and a much larger and more realistic delta-sigma (ΔΣ) modulator.

  • Hermes: LUT FPGA technology mapping algorithm for area minimization with optimum depth

    Publication Year: 2004 , Page(s): 748 - 751
    Cited by:  Papers (2)

    This work presents Hermes, a depth-optimal LUT-based FPGA mapping algorithm. The presented algorithm is based on a new strategy for finding LUTs that yields a good LUT in a significantly shorter time than previous methods. The quality of results is improved by enabling LUT re-implementation and by introducing a cost function which encourages input sharing among LUTs. The experimental results show that, on average, the presented algorithm computes 15.5% and 3.5% smaller LUT mappings compared to the ones obtained by FlowMap and CutMap, respectively, using two orders of magnitude less CPU time. The speed of Hermes makes it suitable for running in an incremental manner during logic synthesis.

  • A general framework for probabilistic low-power design space exploration considering process variation

    Publication Year: 2004 , Page(s): 808 - 813
    Cited by:  Papers (4)

    Increasing levels of process variation in current process technologies make it extremely important that design and process decisions be made while considering their impact. This work presents a convex optimization based approach to select supply and threshold voltages to minimize power dissipation in generic multi-Vdd/Vth CMOS designs while considering process variation. We use this probabilistic approach to compare the optimization of different statistical parameters of power dissipation (e.g., mean or high percentile points), and quantify the impact of rising process variations on these power minimization techniques.

  • Analyzing software influences on substrate noise: an ADC perspective

    Publication Year: 2004 , Page(s): 916 - 922

    Substrate noise affects the performance of mixed signal integrated circuits. Power supply (di/dt) noise is the dominant source of substrate noise. There have been various attempts at the circuit and software levels to estimate this noise. Software-level noise estimation is especially important, as designing noise tolerant circuits for all circumstances may be prohibitively expensive. In this paper, we propose a new software approach for estimating di/dt noise and incorporate it into a power simulator in order to investigate the influence of software on substrate noise. As a case study, we investigate how an analog-to-digital converter (ADC) can be designed to adapt its resolution in the presence of substrate noise generated by an embedded processor core. The proposed strategies prevent unexpected ADC performance degradations.

  • Challenges and solutions in the design of high-frequency global clock distributions [Tutorial 5]

    Publication Year: 2004 , Page(s): xv
    Cited by:  Papers (1)
  • Efficient statistical timing analysis through error budgeting

    Publication Year: 2004 , Page(s): 473 - 477
    Cited by:  Papers (3)  |  Patents (1)

    We propose a technique for optimizing the runtime of statistical timing analysis. Given a global acceptable error budget at the primary output, which signifies the difference in the area of the accurate and approximate timing CDFs, we propose a formulation of budgeting this global error across all nodes in the circuit. This node error budget is used to simplify the computation of arrival time CDFs at each node using approximations. This simplification reduces the runtime of statistical timing analysis. We investigate two ways of exploiting this node error budget, first through piecewise linear approximation (Devgan and Kashyap, 2003) and second through hierarchical quadratic approximation. Experimental results on ISCAS/MCNC benchmarks show that our approach is up to 3 times faster than accurate statistical timing analysis with very small error. We also found quadratic approximation to be more accurate than linear approximation but at lesser gains in runtime.
