
Design & Test, IEEE

Early Access Articles

Early Access articles are new content made available in advance of the final electronic or print versions and result from IEEE's Preprint or Rapid Post processes. Preprint articles are peer-reviewed but not fully edited. Rapid Post articles are peer-reviewed and edited but not paginated. Both these types of Early Access articles are fully citable from the moment they appear in IEEE Xplore.


  • Crosstalk Mitigation for High-Radix and Low-Diameter Photonic NoC Architectures

    Publication Year: 2015 , Page(s): 1

    Photonic network-on-chip (PNoC) architectures have shown the potential to replace electrical networks-on-chip, as they can attain higher bandwidth with lower power dissipation for on-chip communication. But microring resonators, which are the basic building blocks of PNoCs, are highly susceptible to crosstalk, which can notably degrade the optical signal-to-crosstalk ratio (SXR) and reduce reliability in PNoCs. We propose two novel encoding mechanisms to improve worst-case SXR by reducing crosstalk noise in microring resonators used within high-radix and low-diameter crossbar-based PNoCs. Our evaluation results indicate that the encoding schemes improve worst-case SXR in Corona and Firefly PNoCs by up to 18%.
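As a back-of-the-envelope illustration of the metric involved, the signal-to-crosstalk ratio at a photodetector can be computed as the ratio of signal power to accumulated crosstalk power, expressed in dB. This is a generic sketch, not the paper's model; the power values are made up:

```python
import math

def sxr_db(signal_power_mw, crosstalk_powers_mw):
    """Signal-to-crosstalk ratio (SXR) in dB: ratio of the optical
    signal power to the total crosstalk noise power at a detector."""
    total_noise = sum(crosstalk_powers_mw)
    return 10 * math.log10(signal_power_mw / total_noise)

# One waveguide signal vs. leakage from three neighboring microrings
# (illustrative numbers, not from the article).
print(round(sxr_db(1.0, [0.01, 0.02, 0.02]), 2))
```

Reducing any crosstalk term (as the proposed encodings do) raises the SXR directly.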

  • Asynchronous Design (Part 1): Overview and Recent Advances

    Publication Year: 2015 , Page(s): 1

    There has been a continuous growth of interest in asynchronous design over the last two decades, as engineers grapple with a host of challenging trends in the current late-Moore era. As highlighted in the International Technology Roadmap for Semiconductors (ITRS), these include dealing with the impact of increased variability, power and thermal bottlenecks, high fault rates (including those due to soft errors), aging, and scalability issues, as individual chips head to the multibillion-transistor range and many-core architectures are targeted. While the synchronous, i.e., centralized-clock, paradigm has prevailed in industry for several decades, asynchronous design, or the use of a hybrid mix of asynchronous and synchronous components, provides the potential for "object-oriented" distributed hardware systems, which naturally support modular and extensible composition, on-demand operation without extensive instrumented power management, and variability-tolerant design. As highlighted by the ITRS report, it is therefore increasingly viewed as a critical component for addressing the above challenges. This article aims to provide both a short historical and technical overview of asynchronous design and a snapshot of the state of the art, with highlights of some recent exciting technical advances and commercial inroads. It also covers some of the remaining challenges, as well as opportunities, of the field. Asynchronous design is not new: some of the earliest processors used clockless techniques. Overall, its history can be divided into roughly four eras. The early years, from the 1950s to the early 1970s, included the development of classical theory (Huffman, Unger, McCluskey, Muller), as well as the use of asynchronous design in a number of leading commercial processors (Illiac, Illiac II, Atlas, MU5) and graphics systems (LDS-1). The middle years, from the early 1970s to the early 1980s, were largely an era of retrenchment, with reduced activity, corresponding to the advent of the synchronous VLSI era. The mid-1980s to late 1990s represented a revival or "coming of age" era, with the beginning of modernized methodologies for asynchronous controller and pipeline design, initial computer-aided design (CAD) tools and optimization techniques, the first academic microprocessors (Caltech, Univ. of Manchester, Tokyo Institute of Technology), and initial commercial uptake for use in low-power consumer products (Philips Semiconductors) and high-performance interconnection networks (Myricom). The modern era, from the early 2000s to the present, includes a surge of activity, with modernization of design approaches, CAD tool development and systematic optimization techniques, migration into on-chip interconnection networks, several large-scale demonstrations of cost benefits, industrial uptake at several leading companies (IBM, Intel) as well as startups, and application to emerging technologies (sub/near-threshold circuits, sensor networks, energy harvesting, cellular automata). The approaches in the modern era bear little resemblance to some of the simple asynchronous examples found in older textbooks. This article is divided into two parts. Part 1 begins with a chronicle of past and recent commercial advances, and highlights the enabling role of asynchronous design in several emerging application areas. Two promising application domains, GALS systems and networks-on-chip, are covered in more detail, given their importance in facilitating the integration of large-scale heterogeneous systems-on-chip. Finally, several foundational techniques are introduced: handshaking protocols and data encoding, pipelining, and synchronization and arbitration. Part 2 focuses on methodologies for the design of asynchronous systems, including logic- and high-level synthesis; tool flows for design, analysis, verification, and test; as well as examples of asynchronous processors and architectures.
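The handshaking protocols mentioned among the foundational techniques can be illustrated with a toy model. The sketch below traces a four-phase (return-to-zero) bundled-data handshake, one of the classic asynchronous signaling styles; it is an illustrative simulation written for this overview, not code from the article:

```python
def four_phase_handshake(data_items):
    """Toy event trace of a four-phase (return-to-zero) bundled-data
    handshake: the sender raises req with data valid, the receiver
    raises ack and latches the data, then both wires return to zero
    before the next transfer can begin."""
    trace, received = [], []
    for d in data_items:
        trace += [("req", 1), ("ack", 1)]   # data latched on ack rise
        received.append(d)
        trace += [("req", 0), ("ack", 0)]   # return-to-zero phase
    return trace, received

trace, got = four_phase_handshake(["a", "b"])
print(got)          # data delivered in order, no clock involved
print(len(trace))   # four signal transitions per transfer
```

Two-phase (transition-signaling) protocols halve the number of transitions per transfer at the cost of more complex latch control.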

  • Asynchronous Design (Part 2): Systems and Methodologies

    Publication Year: 2015 , Page(s): 1

    This two-part article aims to provide both a short historical and technical overview of asynchronous design, as well as a snapshot of the state of the art. Part 1 covered the foundations of asynchronous design and highlighted recent applications, including commercial advances and use in emerging application areas. Part 2 focuses on methodologies for designing asynchronous systems, including the basics of hazards, synthesis and optimization methods for both logic-level and high-level synthesis, and the development of specification languages and CAD tool flows. Finally, two sidebars provide a summary of asynchronous processors and architectures, as well as testing.

  • Sensor Driven Reliability and Wearout Management

    Publication Year: 2013 , Page(s): 1

    Gate oxide degradation has become a serious concern for sub-32nm technology node designs. The inherent statistical nature of the degradation mechanism, combined with PVT variations, creates a lot of uncertainty in the time-to-failure (TTF) of a chip. Traditionally, designers use margining as a solution to this problem, which limits the performance of the IC. We studied the statistics of oxide degradation and propose the use of gate-oxide thickness sensors to give additional information to dynamic reliability management (DRM) systems, allowing for less pessimistic reliability budgeting. We also propose an oxide degradation sensor that can act as a canary circuit and provide DRM systems with the degradation state of the chip. The small size of the sensor (3.3 times the size of a minimum-sized DFF) enables it to be deployed in large numbers, hundreds or thousands on a chip, significantly improving the reliability estimate of a die, thereby reducing the reliability margins and improving performance.
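A canary-style DRM policy of the kind described can be sketched as a simple decision rule over many distributed sensor readings; the thresholds and action names below are hypothetical, chosen only to illustrate the idea of reacting to measured wear instead of a fixed design margin:

```python
def drm_action(sensor_readings, warn, fail):
    """Canary-style dynamic reliability management sketch: many small
    oxide-degradation sensors report a normalized wear level in [0, 1];
    the DRM system reacts to the worst reading on the die rather than
    applying a pessimistic one-size-fits-all margin."""
    worst = max(sensor_readings)
    if worst >= fail:
        return "migrate-and-retire"        # hypothetical drastic action
    if worst >= warn:
        return "reduce-voltage-frequency"  # hypothetical throttling step
    return "nominal"

# Three sensor readings on a die; one has crossed the warning level.
print(drm_action([0.10, 0.35, 0.20], warn=0.3, fail=0.8))
```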

  • Balancing new reliability challenges and system performance at the architecture level

    Publication Year: 2013 , Page(s): 1

    Late-CMOS-era technology trends imply an increasing degree of concern regarding device- and component-level reliability within a microprocessor chip. This includes both transient and permanent failure modes. Depending on the target market, varying levels of protection (in terms of error or fault detection and recovery) have to be included in future chips and associated systems. In this paper, we: (a) examine the most important sources of failure at the component level and describe how they manifest at the system architecture level; and (b) present new-generation protection mechanisms at the system (architecture) level that maintain traditional reliability levels. With traditional solutions like dual- and triple-modular redundancy becoming impractical in today's power- and area-constrained design era, we discuss modern techniques that provide more area-efficient solutions while giving up some coverage. In this high-level survey article, we demonstrate how performance, reliability, and power consumption may be traded off against each other when designing future robust systems.

  • A Novel Simulation Fault Injection using Electronic Systems Level Simulation Models

    Publication Year: 2013 , Page(s): 1

    In this paper, we propose a novel simulation fault injection method for the dependability analysis of complex SoCs using 32 nm technology. In previous simulation fault injection approaches, the original simulation model is modified to implement a saboteur module or many mutants. This creates a problem, since the architectural complexity of current SoCs is expected to increase rapidly in the 32 nm era. Furthermore, the modification process may incur additional tasks, such as verification and validation of the modified simulation model. Our simulation fault injection environment uses a modified SystemC simulation kernel augmented for fault injection experiments. The proposed methodology offers the following advantages over previous simulation fault injection methods. First, it does not require changes in the target simulation design model. Second, it minimizes the simulation hardware resource requirements and simulation time. Third, it allows mixed simulation of the ESL model and the register-transfer-level model using wrappers. To demonstrate the effectiveness of the proposed methodology, we designed SystemC models of MIPS and TMR MIPS processors and ran benchmark software from MiBench to compare the failure rates of the two processors.
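The kernel-level injection idea can be illustrated with a toy event-driven simulator: faults are scheduled inside the simulation kernel itself, so the design model needs no saboteurs or mutants. This is a minimal conceptual sketch in Python, not the authors' SystemC implementation:

```python
import heapq

class Kernel:
    """Minimal event-driven simulation kernel with built-in fault
    injection. Because the fault is an event owned by the kernel,
    the design model (the scheduled signal updates) is unmodified."""
    def __init__(self):
        self.signals, self.events, self._seq = {}, [], 0
    def schedule(self, t, sig, val):
        heapq.heappush(self.events, (t, self._seq, sig, val))
        self._seq += 1
    def inject_fault(self, t, sig):
        # transient bit-flip applied by the kernel at time t
        self.schedule(t, sig, None)
    def run(self):
        while self.events:
            _, _, sig, val = heapq.heappop(self.events)
            if val is None:                 # fault event: flip the bit
                self.signals[sig] ^= 1
            else:                           # normal model behavior
                self.signals[sig] = val

k = Kernel()
k.schedule(0, "q", 0)       # design model drives q
k.schedule(10, "q", 1)
k.inject_fault(15, "q")     # flip q at t=15 without touching the model
k.run()
print(k.signals["q"])
```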

  • Design of 3D DRAM and Its Application in 3D Integrated Multi-Core Computing Systems

    Publication Year: 2013 , Page(s): 1
    Cited by:  Papers (4)

    This paper concerns appropriate 3D DRAM architecture design and the potential of using 3D DRAM to implement both the L2 cache and main memory in 3D multi-core processor-DRAM integrated computing systems. We first present a coarse-grained 3D partitioning strategy for 3D DRAM design that can well exploit the benefits provided by 3D integration without incurring stringent constraints on through-silicon via (TSV) fabrication. Targeting multi-core processors, we further present design techniques that can effectively reduce the access latency of a 3D DRAM L2 cache and hence improve overall 3D integrated computing system performance. The effectiveness of these design techniques has been successfully evaluated based on CACTI-based memory modeling and full-system simulations over a wide spectrum of multi-programmed workloads. Simulation results show that the proposed heterogeneous 3D DRAM design can improve the harmonic mean IPC by 23.9% on average compared with a baseline scenario using 3D DRAM only as the main memory.

  • Overcoming Early-Life Failure and Aging Challenges for Robust System Design

    Publication Year: 2013 , Page(s): 1
    Cited by:  Papers (1)

    The biggest challenge in designing robust systems is to minimize the costs of error detection. Most existing error detection techniques suffer from high power and performance costs and/or additional design complexity. Circuit failure prediction, together with CASP on-line diagnostics, enables the design of robust systems that can effectively overcome reliability challenges associated with early-life failures and aging. The key attractive feature of such an approach is its significantly reduced power cost compared to traditional error detection. It also opens up new research opportunities across multiple abstraction layers (circuit, architecture, virtualization/OS, and applications) for designing robust systems optimized with respect to reliability requirements while balancing power, performance, area, and design complexity constraints. Such global optimization is essential for the robust systems of the future.

  • Integrated Systems In The More-Than-Moore Era: Designing Low-Cost Energy-Efficient Systems Using Heterogeneous Components

    Publication Year: 2013 , Page(s): 1

    Moore’s law has provided a metronome for semiconductor technology over the past four decades. However, when CMOS transistor feature size and interconnect dimensions approach their fundamental limits, aggressive scaling will no longer play a significant role in performance improvement. How should the semiconductor industry provide new value in each generation of products in such a scenario? While Moore’s law driven scaling has traditionally focused on improving computation performance (through faster clock frequencies and recently, more parallelism) and memory capacity, electronic systems of the future will provide value by being multi-functional. We envision that integrated systems of the future will perform diverse functions (in addition to traditional computation, storage and communication) such as real-time sensing, energy harvesting, and on-chip testing, to name a few. Enabling such diverse functionality with high performance, high reliability and a low energy budget in a single system requires a radical shift in the principles of system design and integration. Instead of focusing on improving the performance of traditional digital CMOS circuits or exploring nanotechnologies for Silicon and CMOS replacements, we espouse cohesive design and integration of multiple device technologies and diverse components in a single heterogeneous system that is high-performance, energy-efficient and reliable.

  • Attacks and Defenses for JTAG

    Publication Year: 2013 , Page(s): 1

    This article addresses some security issues surrounding JTAG. We look at the threat of a malicious chip in a JTAG chain. We outline attack scenarios where trust in a digital system is downgraded by the presence of such a chip. To defend against this, we propose a protection scheme that hardens JTAG by making use of lightweight cryptographic primitives, namely stream ciphers and incremental message authentication codes. The scheme defines four levels of protection. For each of the attack scenarios, we determine which protection level is needed to prevent it. Finally, we discuss the practical aspects of implementing these security enhancements such as area, test time and operational overheads.
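The stream-cipher-plus-MAC idea can be sketched in a few lines. This toy uses SHA-256 as a stand-in keystream generator and HMAC for authentication; both are illustrative substitutes for the lightweight hardware primitives the article proposes, and the key and data are made up:

```python
import hmac
import hashlib

def keystream(key, n):
    """Toy stream cipher: expand a key into n pseudo-random bytes by
    repeated hashing (a software stand-in for a lightweight cipher)."""
    out, state = b"", key
    while len(out) < n:
        state = hashlib.sha256(state).digest()
        out += state
    return out[:n]

def protect_scan(key, scan_bytes):
    """Encrypt scan-chain data and append a truncated MAC, sketching
    the idea of hardening JTAG traffic against a malicious chip."""
    ct = bytes(a ^ b for a, b in zip(scan_bytes, keystream(key, len(scan_bytes))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()[:8]
    return ct + tag

def unprotect_scan(key, blob):
    ct, tag = blob[:-8], blob[-8:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()[:8]):
        raise ValueError("tampered scan chain data")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

key = b"jtag-session-key"   # hypothetical per-session key
print(unprotect_scan(key, protect_scan(key, b"TDI bits")))
```

Any modification of the ciphertext by a malicious device in the chain fails the MAC check at the receiver.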

  • Test Challenges for 3D Integrated Circuits

    Publication Year: 2013 , Page(s): 1

    Three-dimensional (3D) integration can potentially overcome barriers in interconnect scaling, thereby providing an opportunity for continued higher performance using CMOS technology. However, one of the obstacles to 3D technology adoption is the insufficient understanding of 3D testing issues and the lack of design-for-testability (DFT) solutions. This paper describes testing and DFT-related challenges for 3D ICs, including problems that are unique to 3D integration, and describes early research results that have been reported in this area.

  • Scan-based Speed-path Debug for a Microprocessor

    Publication Year: 2013 , Page(s): 1

    Speed-path debug is a critical step in improving the clock frequency of a design to meet performance requirements. However, speed-path debug based on functional patterns can be very expensive. In this paper, we explore speed-path debug techniques based on at-speed scan test patterns. Enhancements are implemented to improve on an earlier proposed scan-based speed-path diagnosis algorithm. We further report the results of applying the improved algorithm to a leading-edge high-performance microprocessor design.

  • VPOS: A Specific Operating System for the FPGA Verification of Microprocessor System-level Functions

    Publication Year: 2013 , Page(s): 1

    System-Level Functions (SLF) of microprocessors, such as memory management and interrupt handling, which provide hardware support to the software, are hard to verify on an FPGA prototype, where the behavioral testbench that builds the complex software contexts cannot be mapped. Traditionally, the FPGA verification task is performed by running a General-Purpose Operating System (GPOS) like Linux, which is inefficient to debug and hard to control. In this paper, we propose a Verification-Purpose Operating System (VPOS) on the FPGA to initialize machine resources and build software contexts for directed or random tests. This framework greatly reduces debugging complexity by seamlessly interacting with SW simulation, and considerably increases coverage over the Linux-based method by providing high flexibility. We assess the feasibility of our approach by applying it to a microprocessor designed by our institute.

  • Diagnosis of design-silicon timing mismatch with feature encoding and importance ranking - the methodology explained

    Publication Year: 2014 , Page(s): 1

    For sub-65nm designs, there can be many timing effects that are not explicitly and/or accurately modeled and simulated, resulting in unexpected mismatch between simulated timing behavior and the timing behavior observed on silicon chips. For diagnosing timing mismatch, this paper describes a diagnosis approach that analyzes and ranks potential design-related issues. We explain in detail how one should use diverse "features" to encode the potential design issues and how features can be interpreted properly by various kernel functions in a data-learning algorithm for analyzing the mismatch. Then, we explain how kernel-based learning can be used to rank the importance of features such that the feature contributing the most to the timing mismatch is ranked highest. We conclude the paper by showing simulated experimental results based on an industrial ASIC design.

  • Modeling Low-K Dielectric Breakdown in the Presence of Multiple Feature Geometries and Die-to-Die Linewidth Variation

    Publication Year: 2013 , Page(s): 1

    Backend geometries on chips contain a wide variety of features. This paper analyzes data from test structures implemented on a 45nm technology test chip to relate geometry to failure rate statistics for low-k dielectric breakdown. An area scaling model is constructed which accounts for the presence of die-to-die linewidth variation, and a methodology is proposed to determine if low-k materials satisfy lifetime requirements in the presence of die-to-die linewidth variation.
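Breakdown statistics of this kind are conventionally described by a Weibull distribution with weakest-link area scaling. The sketch below shows the textbook scaling relation F(t; A) = 1 − (1 − F₀(t))^(A/A₀); it illustrates the general modeling idea, not the paper's specific model, and the parameter values are made up:

```python
import math

def weibull_cdf(t, eta, beta):
    """Weibull time-to-breakdown CDF for a reference test-structure
    area A0 (eta: characteristic life, beta: shape parameter)."""
    return 1 - math.exp(-((t / eta) ** beta))

def area_scaled_cdf(t, eta, beta, area_ratio):
    """Weakest-link area scaling: failure probability of a structure
    that is area_ratio times larger than the reference structure."""
    return 1 - (1 - weibull_cdf(t, eta, beta)) ** area_ratio

# Illustrative parameters: eta=10 (a.u.), beta=1.5, product area 100x
# the test-structure area.
f0 = weibull_cdf(5.0, eta=10.0, beta=1.5)
f_big = area_scaled_cdf(5.0, eta=10.0, beta=1.5, area_ratio=100)
print(f_big > f0)   # a larger area fails earlier (weakest link)
```

Die-to-die linewidth variation can then be folded in by averaging this CDF over the linewidth distribution.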

  • At the Beginning

    Publication Year: 2013 , Page(s): 1

    This special issue of Design & Test makes the excellent point that designing any complex system will require the use of design automation tools, in this case Bio-Design Automation (BDA). However, I think that biologists and bio-engineers new to this area might be in for some surprises when they start using these tools. The following might have happened a long time ago.

  • Secure and Robust Error Correction for Physical Unclonable Functions

    Publication Year: 2013 , Page(s): 1

    One area of research that has not received much attention is the amount of information leakage due to error correction in practical PUF-based key generation systems. In this paper, we propose a new syndrome coding scheme, called Index-Based Syndrome (IBS) coding. It differs from conventional syndrome coding methods in that it leaks less information than conventional methods or other variants that use bitwise XOR masking. Under the assumption that PUF outputs are i.i.d., IBS can be shown to be information-theoretically secure. The assumptions required to prove this result have been affirmed using NIST randomness tests. Further, IBS coding has coding gains associated with the soft-decision encoding and decoding that are native to IBS, resulting in robust error correction. A Xilinx Virtex-5 implementation had no error correction failures over millions of trials when provisioned at 25°C and 1.0 V and regenerated at 120°C and 0.90 V and at −55°C and 1.1 V.
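The core IBS idea, publishing an index rather than an XOR mask, can be sketched as follows, assuming real-valued ("soft") PUF outputs. The block size and values are illustrative, and this omits the outer error-correcting code used in a real system:

```python
def ibs_encode(puf_block, key_bit):
    """Index-Based Syndrome sketch: the public syndrome is the index of
    the PUF output in the block that most confidently matches the key
    bit (largest soft value for 1, smallest for 0). The indexed value
    itself, and hence the key bit, stays secret."""
    if key_bit == 1:
        return max(range(len(puf_block)), key=lambda i: puf_block[i])
    return min(range(len(puf_block)), key=lambda i: puf_block[i])

def ibs_decode(puf_block, index):
    # Regenerate the key bit by thresholding the indexed (noisy) output.
    return 1 if puf_block[index] > 0 else 0

provision = [0.4, -1.2, 2.1, -0.3]    # soft PUF outputs at enrollment
regen     = [0.6, -0.9, 1.7, -0.5]    # noisy re-read of the same PUF
for b in (0, 1):
    idx = ibs_encode(provision, b)
    assert ibs_decode(regen, idx) == b
print("ok")
```

Because the scheme picks the most extreme (most confident) output, moderate regeneration noise does not flip the recovered bit, which is the soft-decision coding gain the abstract refers to.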

  • Statistics in Semiconductor Test, Going Beyond Yield

    Publication Year: 2013 , Page(s): 1

    Semiconductor test has evolved from simply screening individual units to a data-intensive manufacturing operation that enables decisions going far beyond “pass versus fail.” The quantity and complexity of the data generated at each test manufacturing step, and indeed in the entire test manufacturing flow, make processing the data into a useful form a daunting task. Many fields have experienced similar explosive growth in data volume and also use statistical methods to understand and predict outcomes. In addition to developing new techniques, test should exploit applicable statistical methods from any field, such as agriculture or genetics, to make test decisions, optimize test flows, and guide what test data should be acquired. Although statistics is a big subject, the small set of methods outlined in this paper is a solid start to the foundation of statistical test.
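As one concrete example of the kind of statistical test decision the article advocates (chosen here for illustration, not taken from the paper), a robust outlier screen flags units whose parametric reading deviates from the lot median by several MAD-based sigma:

```python
import statistics

def flag_outliers(measurements, k=3.0):
    """Robust parametric outlier screen: flag units whose reading lies
    more than k robust-sigma from the lot median. Uses the median
    absolute deviation (MAD) so grossly bad units do not inflate the
    spread estimate the way a plain standard deviation would."""
    med = statistics.median(measurements)
    mad = statistics.median(abs(m - med) for m in measurements)
    sigma = 1.4826 * mad          # MAD-to-sigma factor for normal data
    return [i for i, m in enumerate(measurements)
            if sigma and abs(m - med) > k * sigma]

# Five units' readings of one parametric test; unit 4 is an outlier.
print(flag_outliers([10.1, 10.0, 9.9, 10.2, 14.5]))
```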

  • Reliability Implications of NBTI in Digital Integrated Circuits

    Publication Year: 2014 , Page(s): 1

    Bias temperature instability (BTI) in MOSFETs is one of the major reliability challenges in nanoscale technology. This paper evaluates the severity of negative BTI (NBTI) degradation in two major circuit applications: random logic and memory arrays. Simulation results obtained from the 65nm PTM node show that NBTI-induced degradation in random logic is considerably lower than that of a single transistor. Simple delay guard-banding can efficiently mitigate the impact of NBTI in random logic. On the other hand, NBTI degradation in memories results in severe READ stability degradation, especially when combined with random process variation. Moreover, in scaled technology nodes, the finite number of Si-H bonds in the channel can induce a statistical random variation in the degradation process. Simulations using the 32nm/22nm Predictive Technology Model (PTM) show that statistical random variation of NBTI, on top of random dopant fluctuation (RDF), results in significant random Vt variation in PMOS transistors and considerable degradation in the static noise margin (SNM) of memory cells.
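Delay guard-banding, as mentioned for random logic, amounts to budgeting the clock period for the projected end-of-life slowdown. A trivial sketch with made-up numbers:

```python
def guard_banded_period(nominal_delay_ns, nbti_degradation):
    """Delay guard-banding sketch: choose the clock period so the
    critical path still meets timing after the projected fractional
    NBTI delay degradation at end of life."""
    return nominal_delay_ns * (1 + nbti_degradation)

# 2 ns critical path, 8% projected lifetime delay degradation
# (both numbers hypothetical).
print(guard_banded_period(2.0, 0.08))
```

The cost of the margin is a permanently slower clock, which is why the paper's finding that logic-level degradation is smaller than single-transistor degradation matters: it permits a tighter guard band.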

  • xMAS: Quick Formal Modeling of Communication Fabrics to Enable Verification

    Publication Year: 2013 , Page(s): 1

    Although communication fabrics at the microarchitectural level are mainly composed of standard primitives such as queues and arbiters, to get an executable model one has to connect these primitives with glue logic to complete the description. In this paper we identify a richer set of microarchitectural primitives that allows us to describe complete systems by composition alone. This enables us to build models faster (since models are now simply wiring diagrams at an appropriate level of abstraction) and to avoid common modeling errors such as inadvertent loss of data due to incorrect timing assumptions. Our models are formal and they are used for model checking as well as dynamic validation and performance modeling. However, unlike other formalisms this approach leads to a precise yet intuitive graphical notation for microarchitecture that captures timing and functionality in sufficient detail to be useful for reasoning about correctness and for communicating microarchitectural ideas to RTL and circuit designers and validators.
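The composition-by-wiring idea can be illustrated with a toy queue primitive with ready/valid semantics. This is an informal sketch inspired by the xMAS style, not the paper's formal primitives:

```python
from collections import deque

class Queue:
    """Toy xMAS-style queue primitive: a bounded FIFO with ready/valid
    semantics. Systems are built by wiring outputs to inputs, with no
    ad hoc glue logic, so data cannot be silently dropped."""
    def __init__(self, capacity):
        self.buf, self.capacity = deque(), capacity
    def ready(self):
        # Can this queue accept a flit in the current cycle?
        return len(self.buf) < self.capacity
    def push(self, flit):
        assert self.ready(), "push on a full queue would lose data"
        self.buf.append(flit)
    def pop(self):
        return self.buf.popleft() if self.buf else None

# Two queues wired in series model a two-stage channel.
q1, q2 = Queue(2), Queue(2)
for flit in ("a", "b"):
    q1.push(flit)
while q1.buf and q2.ready():   # transfer only when the sink is ready
    q2.push(q1.pop())
print(list(q2.buf))
```

The assertion inside `push` plays the role of the modeling discipline the paper describes: a transfer is only legal when the sink is ready, so lost-data bugs become explicit failures instead of silent errors.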

  • Accelerating Emulation and Providing Full Chip Observability and Controllability at Run-Time

    Publication Year: 2013 , Page(s): 1

    Performing hardware emulation on FPGAs is a significantly faster and more accurate approach to the verification of complex designs than software simulation. Therefore, hardware simulation-accelerator and emulator co-processor units are used to offload calculation-intensive tasks from the software simulator. However, the communication overhead between the software simulator and the hardware emulator is becoming a new critical bottleneck. Moreover, in a hardware emulation environment it is impossible to bring a large number of internal signals off-chip for verification purposes, so on-chip observability has become a significant issue. In our work we tackle both of the aforementioned problems. First, we deploy a novel emulation framework that automatically transforms certain HDL parts of the testbench into synthesizable code, in order to offload them from the software simulator and, more importantly, minimize the communication overhead. Next, we extend this architecture by adding multiple fast scan-chain paths in the design in order to provide full circuit observability and controllability on the fly. In this paper, we briefly describe our approach to reducing the communication overhead problem and present, for the first time, our complete innovative system, which offers extensive observability and controllability in complex designs under test (DUTs).

  • Hardware IP Protection During Evaluation Using Embedded Sequential Trojan

    Publication Year: 2013 , Page(s): 1

    Evaluation of hardware Intellectual Property (IP) cores is an important step in an IP-based system-on-chip (SoC) design flow. From the perspective of both IP vendors and Integrated Circuit (IC) designers, it is desirable that hardware IPs can be freely evaluated before purchase, similar to their software counterparts. However, protection of these IPs against piracy during evaluation is a major concern for the IP vendors. Existing solutions typically use encryption and vendor-specific toolsets, which may be unacceptable due to a lack of flexibility to use in-house or third-party design tools. We propose a novel low-cost solution for hardware IP protection during evaluation, by embedding a hardware Trojan inside an IP in the form of a finite state machine (FSM) with a special structure. The Trojan disrupts the normal functional behavior of the IP on occurrence of a sequence of rare events, thereby effectively putting an “expiry date” on the usage of the IP. The Trojan is structurally and functionally obfuscated, thus protecting against potential reverse engineering efforts that target isolation of the Trojan circuit.

  • The Embedded Object Concept: a Lego-like Approach to Making Embedded Systems

    Publication Year: 2013 , Page(s): 1

    The Embedded Object Concept (EOC) applies common object-oriented methods used in software to combined Lego-like software-hardware entities. These modular entities represent objects in object-oriented design methods, and they function as building blocks of embedded systems. The concept enables new embedded systems to be built from electronic Lego-like building blocks. The goal of the EOC is to make the design of embedded systems faster and easier while preserving the commercial applicability of the resulting device. The EOC enables people without comprehensive knowledge of electronics design to create new embedded systems; for experts, it shortens the design time of new embedded systems. This article presents the concept and two realized embedded systems: a telerobot and Painmeter.

  • An interconnect strategy for a heterogeneous processor

    Publication Year: 2013 , Page(s): 1

    This work focuses on the interconnect infrastructure, functionality, and capability of a heterogeneous reconfigurable SoC. The SoC integrates reconfigurable units of various granularities used as stream processing elements. The NoC approach demonstrates benefits in scalability, flexibility, and run-time adaptivity for current and future SoC design. In a reference CMOS090 implementation, the described interconnect system works at a system frequency of 200 MHz, sustaining the required run-time bandwidth for several application domains.

  • An Overview of Mixed-Signal Production Test from a Measurement Principle Perspective

    Publication Year: 2013 , Page(s): 1

    In this article, a tutorial on the techniques and procedures used in a production test environment is presented. The overview is structured so that a less experienced test engineer can learn about the common methods used in mixed-signal test. Various aspects of test and its role in the manufacturing process of ICs are discussed. The paper starts by motivating the need for testing and then describes the different methods: DC, AC, and dynamic testing, as well as clock, SerDes, and RF testing. Design-for-test (DFT) techniques are also described.


Aims & Scope

IEEE Design & Test offers original works describing the models, methods, and tools used to design and test microelectronic systems, from devices and circuits to complete systems-on-chip and embedded software. The magazine focuses on current and near-future practice, and includes tutorials, how-to articles, and real-world case studies. The magazine seeks to bring its readers not only important technology advances but also the perspectives of technology leaders, through its columns, interviews, and roundtable discussions. Topics include semiconductor IC design; semiconductor intellectual property blocks; design, verification, and test technology; design for manufacturing and yield; embedded software and systems; low-power and energy-efficient design; electronic design automation tools; practical technology; and standards.

It was published as IEEE Design & Test of Computers between 1984 and 2012.


Meet Our Editors

Editor-in-Chief
Andre Ivanov
Department of Electrical and Computer Engineering, UBC