Rapid System Prototyping (RSP), 2010 21st IEEE International Symposium on

Date: 8-11 June 2010

  • [Front cover]

    Page(s): c1
  • RSP 2010 chairs and committees

    Page(s): 1 - 3
  • [Title page]

    Page(s): 1
  • Table of contents

    Page(s): 1 - 2
  • Validating quality attribute requirements via execution-based model checking

    Page(s): 1 - 7

    This paper is concerned with the correct specification and validation of quality attribute requirements (QARs) that cut across a diverse set of complex system functions. These requirements act as modifiers of the system-level functional requirements and thereby have substantial influence on the eventual architectural selection. Because system designers traditionally address these requirements one quality attribute at a time, the process frequently results in QARs that contain subtly conflicting behaviors. This paper presents an approach to QAR-induced behavior validation and conflict detection via execution-based model checking early in the software development process.

  • Embedded systems' virtualization: The next challenge?

    Page(s): 1 - 7

    Traditionally, virtualization has been adopted by the enterprise industry to make better use of general-purpose multi-core processors, and its use in embedded systems (ES) seemed to be both a distant and an unnecessary prospect. However, with the rise of ever more powerful multiprocessor ESs, virtualization brings an opportunity to run several operating systems (OSs) simultaneously, besides offering more secure systems and even an easier way to reuse legacy software. Although ESs have increasingly greater computational power, they are still far more constrained than general-purpose computers, especially in terms of area, memory and power consumption. Therefore, is it possible to use virtualization - a technique that typically demands robust systems - in powerful yet constrained current embedded systems? In this paper we show why the answer should be yes.

  • Combining memory optimization with mapping of multimedia applications for multi-processors system-on-chip

    Page(s): 1 - 9

    Multiprocessor systems-on-chip (MPSoCs) are regarded as one of the main drivers of the semiconductor industry's evolution. They are good candidates for systems and applications such as multimedia. Memory is becoming a key player for significant improvements in these applications (power, performance and area). With the emergence of more embedded multimedia applications in industry, this issue becomes increasingly vital. The large amount of data manipulated by these applications requires high computation and memory capacity, which leads to the need for new optimization and mapping techniques. This paper presents a novel approach that combines memory optimization with the mapping of data-driven applications. The approach consists of a task graph transformation and its integration into existing mapping algorithms. Significant improvements are obtained in memory gain, communication load and physical links.

  • An efficient hierarchical router for large 3D NoCs

    Page(s): 1 - 5

    Three-dimensional Networks-on-Chip (3D NoCs) are emerging as a promising solution for efficiently handling interconnect complexity in 3D Systems-on-Chip (SoCs). This paper presents a new router that achieves gains in throughput and latency compared to the classic 3D mesh in the case of large NoCs. The proposed router is hierarchical, being composed of two fully decoupled modules: one for inter-layer communication and one for intra-layer communication. Throughput and latency are evaluated using a SystemC-TLM NoC simulator. Synthesis and extrapolation results show that the hierarchical router is competitive with the classic 3D mesh in terms of area and power. Simulation results show that the proposed hierarchical router can outperform the 3D mesh by more than 30% in throughput and latency under transpose traffic.

  • Rapid prototyping for digital signal processing systems using Parameterized Synchronous Dataflow graphs

    Page(s): 1 - 7

    Parameterized Synchronous Dataflow (PSDF) has been used previously for abstract scheduling and as a model for architecting embedded software and FPGA implementations. PSDF has been shown to be attractive for these purposes due to its support for flexible dynamic reconfiguration and efficient quasi-static scheduling. To apply PSDF techniques more deeply in the design flow, support for comprehensive functional simulation and efficient hardware mapping is important. Building on the DIF (Dataflow Interchange Format), a design language and associated software package for developing and experimenting with dataflow-based design techniques for signal processing systems, we have developed a tool for functional simulation of PSDF specifications. This simulation tool allows designers to model applications in PSDF and simulate their functionality, including use of the dynamic parameter reconfiguration capabilities offered by PSDF. Based on this simulation tool, we also present a systematic design methodology for applying PSDF to the design and implementation of digital signal processing systems, with emphasis on FPGA-based systems for signal processing. We demonstrate the rapid and accurate prototyping capabilities offered by our proposed design methodology, along with its novel support for PSDF-based FPGA system implementation.

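
    The quasi-static scheduling idea underlying PSDF can be illustrated with a plain SDF firing loop. This is a minimal Python sketch, not the DIF tool's API; actor names and rates below are invented, and PSDF additionally lets the produce/consume rates be runtime parameters, which is what the simulated dynamic reconfiguration exercises.

```python
def sdf_run(edges, reps):
    """Fire actors of a synchronous dataflow graph until every actor
    reaches its repetition count; edges are (src, dst, produce, consume)."""
    tokens = {(s, d): 0 for s, d, _, _ in edges}
    fired = {a: 0 for a in reps}
    order = []
    progress = True
    while progress and any(fired[a] < reps[a] for a in reps):
        progress = False
        for a in reps:
            if fired[a] >= reps[a]:
                continue
            ins = [(s, d, p, c) for s, d, p, c in edges if d == a]
            if all(tokens[(s, d)] >= c for s, d, p, c in ins):
                for s, d, p, c in ins:
                    tokens[(s, d)] -= c          # consume input tokens
                for s, d, p, c in edges:
                    if s == a:
                        tokens[(s, d)] += p      # produce output tokens
                fired[a] += 1
                order.append(a)
                progress = True
    return order

# src produces 2 tokens per firing, snk consumes 3, so the balanced
# repetition counts for one iteration are src: 3, snk: 2.
print(sdf_run([("src", "snk", 2, 3)], {"src": 3, "snk": 2}))
# → ['src', 'src', 'snk', 'src', 'snk']
```

    The returned firing order is exactly the kind of quasi-static schedule a PSDF tool would compute symbolically once and reuse across parameter changes.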
  • Failure mode and effect analysis based on electric and electronic architectures of vehicles to support the safety lifecycle ISO/DIS 26262

    Page(s): 1 - 7

    The draft international standard ISO 26262 (Road Vehicles - Functional Safety), currently under development, describes a safety lifecycle for road vehicles and thereby influences all parts of development, production, operation and decommissioning. Starting from 2011, all developments of new cars should be aligned with this standard. Rapid application and adoption of ISO 26262 is mandatory to develop safe, advanced and competitive automotive systems and systems of systems. Failure mode and effect analysis (FMEA) is a well-established engineering quality method in the automotive industry and is proposed by ISO 26262 for several analyses. The communication structure of the automotive control system is specified by the electric and electronic architecture (EEA). Only recently has it become possible to process all this information in one tool, which can make an important contribution to determining input data for safety assessments. With the FMEA flow embedded in EEA modeling, analyses can be rapidly re-run with altered input data resulting from architecture modifications. This paper presents a formalized tool flow for rapid determination and accumulation of input data for failure mode and effect analysis based on an EEA model, the execution of the analysis within an EEA modeling tool, and the automated generation of reports documenting the FMEA results according to a predefined form.

  • Fine grain analysis of simulators accuracy for calibrating performance models

    Page(s): 1 - 7

    In embedded system design, tuning and validating a cycle-accurate simulator is a difficult task. The designer has to ensure that the estimation error of the simulator meets the design constraints on every application. If an application is not correctly estimated, the designer has to identify on which parts of the application the simulator introduces an estimation error and fix the simulator accordingly. However, detecting the mispredicted parts of a very large application can be a difficult and time-consuming process. In this paper we propose a methodology that helps the designer quickly and automatically isolate the portions of an application mispredicted by a simulator. This is accomplished by recursively analyzing the application source code trace and highlighting the mispredicted sections of source code. Results obtained by applying the methodology to the TSIM simulator show that it can quickly analyze large applications, isolating small portions of mispredicted code.

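
    The recursive isolation idea can be sketched roughly as follows. This is a hypothetical Python illustration: region names, cycle counts and the bisection strategy are invented for the example, and the paper's methodology works on source code traces rather than flat dictionaries.

```python
def isolate_mispredicted(regions, measured, estimated, tol=0.05):
    """Recursively narrow down which code regions a simulator mispredicts
    by comparing estimated vs. measured cycle counts over shrinking spans."""
    bad = []
    def visit(span):
        m = sum(measured[r] for r in span)
        e = sum(estimated[r] for r in span)
        if m == 0 or abs(e - m) / m <= tol:
            return                      # span as a whole is within tolerance
        if len(span) == 1:
            bad.append(span[0])         # cannot subdivide further: flag it
            return
        mid = len(span) // 2
        visit(span[:mid])
        visit(span[mid:])
    visit(list(regions))
    return bad

# The simulator badly underestimates the "filter" region only:
measured  = {"init": 100, "filter": 400, "io": 50}
estimated = {"init": 102, "filter": 250, "io": 51}
print(isolate_mispredicted(measured, measured, estimated))  # → ['filter']
```

    One caveat of bisection over aggregate counts: errors of opposite sign in sibling spans can cancel, so a real tool has to descend past spans that merely look accurate in aggregate.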
  • Counter Embedded Memory architecture for trusted computing platform

    Page(s): 1 - 7

    Due to various hacker attacks, trusted computing platforms have received a lot of attention recently. Encryption is introduced to maintain the confidentiality of data stored on such platforms, while Message Authentication Codes (MACs) and authentication trees are employed to verify data memory integrity. These encryption and authentication architectures suffer from several potential vulnerabilities that have been overlooked by previous work. In this paper, we first address our concern about a type of cryptanalysis: a ciphertext stored in memory can be decrypted and attacked by an adversary, and the MACs and authentication trees can become the victims of cryptanalytic attacks. In addition, we show that such an attack can be extended to multi-core systems by simply corrupting other unprotected cores and performing malicious behaviors. To handle these scenarios, we propose a Counter Embedded Memory (CEM) design that employs embedded counters to record every data fetch and trace malicious operations. The proposed platform with CEM allows the system to trace unexpected memory accesses and can thus indicate a potential attack in progress. We present both qualitative discussion and quantitative analysis to show the effectiveness of the proposed architecture. Our FPGA rapid prototype shows that the additional memory overhead is only 0.10% and the latency is negligible.

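
    The core mechanism can be illustrated with a toy model: each memory block carries a counter that advances on every update, so a consumer holding the last counter value it saw can detect replayed stale data. This is an illustrative Python sketch under assumed semantics, not the paper's RTL design.

```python
class CounterEmbeddedMemory:
    """Toy counter-embedded memory: every block carries a write counter."""
    def __init__(self, blocks):
        self.data = [0] * blocks
        self.ctr  = [0] * blocks
    def write(self, addr, value):
        self.data[addr] = value
        self.ctr[addr] += 1            # the embedded counter tracks every update
    def read(self, addr, expected_ctr):
        ok = self.ctr[addr] == expected_ctr
        return self.data[addr], self.ctr[addr], ok

mem = CounterEmbeddedMemory(4)
mem.write(2, 0xAB)
_, seen_ctr, _ = mem.read(2, 1)
# An attacker replaying the old (value, counter) pair after a later
# legitimate write is caught, because the embedded counter has moved on:
mem.write(2, 0xCD)
_, _, ok = mem.read(2, seen_ctr)
print(ok)  # → False
```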
  • An FPGA based semi-parallel architecture for higher order Moving Target Indication (MTI) processing

    Page(s): 1 - 7

    The design and implementation of a higher order Moving Target Indication (MTI) engine is presented. It is part of a single-chip radar signal processor that also incorporates the subsequent algorithms. The bottleneck in the use of higher order filters for MTI is not algorithmic but implementation-related; the challenge is thus to minimize area utilization while achieving the required speed. The proposed architecture employs multiple off-chip memory banks to achieve the required memory bandwidth, and dedicated FPGA resources for area minimization. The requirement of stacking a large number of radar returns in memory and then reading them all for filtering within a single return time demands a parallel memory-reading and data-processing approach, but this demand has to be balanced against the requirement to consume as little area as possible to leave room for the following algorithms. Considering these constraints, a semi-parallel architecture is used, employing multiple filters, each built around a DSP48 slice configured as a Multiply-Accumulate (MACC) unit in a time-shared manner. An analysis of the various factors that affect speed and area is also presented. The architecture is implemented on a Virtex-4 SX35 FPGA using the Xilinx XtremeDSP Kit, and the design is tested using unprocessed baseband data from a TA-10K air traffic control radar. Results show a marked improvement in the clutter suppression capability of the radar. The design achieves the required speed using only 7% of the available FPGA slices; thus, not only can the other algorithms be implemented on the same chip, but there is room for enhancements as well.

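
    At its core, an N-th order MTI canceller is the standard textbook FIR with alternating binomial weights applied across successive pulse returns for each range bin; the paper's contribution is the semi-parallel MACC hardware organization, not the filter itself. A plain Python sketch of that textbook filter (the pulse data below is invented):

```python
from math import comb

def mti_filter(returns, order):
    """N-th order MTI canceller: FIR with alternating binomial weights
    applied across successive pulses, independently per range bin.
    `returns` is a list of pulses, each a list of range-bin samples."""
    taps = [(-1) ** k * comb(order, k) for k in range(order + 1)]
    out = []
    for p in range(order, len(returns)):
        out.append([sum(t * returns[p - k][b] for k, t in enumerate(taps))
                    for b in range(len(returns[0]))])
    return out

# Stationary clutter (identical samples every pulse) is cancelled exactly,
# because the alternating binomial taps sum to zero:
clutter = [[5.0, 5.0, 5.0]] * 6
print(mti_filter(clutter, 3))  # → three all-zero range profiles
```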
  • Scenario path identification for distributed systems: A graph based approach

    Page(s): 1 - 8

    With the increasing complexity of software systems under development, analysis of use case scenarios is gaining importance, as it leads to effective test case identification early in the life cycle. Existing approaches provide various methods for analyzing UML activity diagrams and identifying scenario paths based on graph models of activity diagrams. In most cases these methods consider a single activity diagram. However, use case scenarios may span multiple activity diagrams, which has become quite common with the distributed development of software systems. In this paper we propose an Activity Relationship Graph (ARG) model that depicts the interrelationship of the activity diagrams modeling a use case. The ARG is a hierarchical graph in which each node represents an activity diagram modeled as an activity diagram graph (AG). We also define a metric named Use case Scenario Paths (USP) that measures the minimum number of independent paths in the ARG, and propose an algorithm to analyze the ARG and derive the number of Use case Scenario Paths. This gives a measure of the number of test paths for a requirement, based on analysis models, early in the life cycle.

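
    The classic per-graph form of "number of linearly independent paths" is McCabe's cyclomatic number, V(G) = E - N + 2P. The paper's USP metric is defined over its hierarchical ARG, so the sketch below is the related textbook formula rather than the paper's exact definition:

```python
def independent_paths(edges, nodes, components=1):
    """McCabe cyclomatic number of a control graph: V(G) = E - N + 2P,
    an upper bound on the number of linearly independent paths to test."""
    return len(edges) - len(nodes) + 2 * components

# A diamond (one branch that re-joins) has two independent paths:
nodes = ["start", "a", "b", "end"]
edges = [("start", "a"), ("start", "b"), ("a", "end"), ("b", "end")]
print(independent_paths(edges, nodes))  # → 2
```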
  • MpAssign: A framework for solving the many-core platform mapping problem

    Page(s): 1 - 7

    Many-core platforms, providing large numbers of parallel execution resources, are emerging as a response to the increasing computation needs of embedded applications. A major challenge raised by this trend is the efficient mapping of applications onto parallel resources. This is a nontrivial problem because of the number of parameters to be considered when characterizing both the applications and the underlying platform architectures. Recently, several authors have proposed using Multi-Objective Evolutionary Algorithms (MOEAs) to solve this problem in the context of mapping applications onto Networks-on-Chip (NoCs). However, these proposals have several limitations: (1) only a few meta-heuristics are explored (mainly NSGA-II and SPEA2), (2) only a few cost functions are provided, and (3) they deal with only a small number of application and architecture constraints. In this paper, we propose a new framework that avoids all of the problems cited above. Our framework allows designers to (1) explore several new meta-heuristics, (2) easily add a new cost function (or use an existing one) and (3) take into account any number of architecture and application constraints. The paper also presents experiments illustrating how our framework is applied to the problem of mapping streaming applications onto a NoC-based many-core platform.

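
    A typical cost function handed to such a meta-heuristic scores a task-to-core mapping by traffic volume weighted by hop distance on the mesh. The sketch below is an assumed, simplified objective with invented task names, paired with plain random search in place of a real MOEA such as NSGA-II, just to show the shape of the problem:

```python
import random

def comm_cost(mapping, traffic, mesh_w):
    """Communication cost of one task-to-core mapping on a mesh NoC:
    traffic volume weighted by Manhattan hop distance between cores."""
    def xy(core):
        return core % mesh_w, core // mesh_w
    total = 0
    for (src, dst), vol in traffic.items():
        x1, y1 = xy(mapping[src])
        x2, y2 = xy(mapping[dst])
        total += vol * (abs(x1 - x2) + abs(y1 - y2))
    return total

random.seed(0)
traffic = {("a", "b"): 10, ("b", "c"): 4}   # token volumes between tasks
# Random search over mappings of 3 tasks onto a 2x2 mesh (4 cores):
best = min((dict(zip("abc", random.sample(range(4), 3))) for _ in range(200)),
           key=lambda m: comm_cost(m, traffic, 2))
print(comm_cost(best, traffic, 2))  # → 14 (both flows on adjacent cores)
```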
  • Automated synthesis of Time-Triggered Architecture-based TrueTime models for platform effects simulation and analysis

    Page(s): 1 - 7

    The TrueTime toolbox simulates real-time control systems, including platform-specific details like process scheduling, task execution and network communications. Analysis using these models provides insight into platform-induced timing effects, such as jitter and delay. For safety-critical applications, the Time-Triggered Architecture (TTA) has been shown to provide the services necessary to create robust, fault-tolerant control systems. Communication-induced timing effects still need to be simulated and analyzed even for TTA-compliant models. The process of adapting time-invariant control system models, through the inclusion of platform specifics, into TTA-based TrueTime models requires significant manual effort and detailed knowledge of the target platform's execution semantics. In this paper, we present an extension of the Embedded Systems Modeling Language (ESMoL) tool chain that automatically synthesizes TTA-based TrueTime models. In our tools, time-invariant Simulink models are imported into the ESMoL modeling environment, where they are annotated with details of the desired deployment platforms. A constraint-based offline scheduler then generates the static TTA execution schedules. Finally, we synthesize new TrueTime models that encapsulate all of the TTA execution semantics. Using this approach it is possible to rapidly prototype, evaluate, and modify controller designs and their hardware platforms to better understand deployment-induced performance and timing effects.

  • Highly efficient forward two-dimensional DCT module architecture for H.264/SVC

    Page(s): 1 - 7

    The emerging H.264 SVC (Scalable Video Coding) standard specifies an encoder solution responsible for generating a multi-layer stream, which provides extra flexibility for modern multimedia applications. The increased data dependency among different layers demands significant overall encoder performance. Aiming at the use of an SVC solution in real-time applications, we propose an optimized hardware implementation of the forward two-dimensional discrete cosine transform module. The proposed solution introduces a fast pipelined architecture specially designed to exploit hardware parallelism and to overcome memory access delays, efficiently processing up to eight pixel samples per clock cycle. Practical results confirm the proposal as a highly efficient solution able to speed up SVC encoder performance with reduced impact on complexity.

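
    For reference, the 4x4 forward integer transform at the heart of H.264 is the two-sided matrix product Y = C X Cᵀ with a small integer kernel. A plain Python version of that well-known core computation (the paper's architecture pipelines and parallelizes it in hardware, which this sketch does not attempt to model):

```python
# The H.264 4x4 forward integer transform kernel (an integer
# approximation of the DCT, exact in integer arithmetic):
C = [[1,  1,  1,  1],
     [2,  1, -1, -2],
     [1, -1, -1,  1],
     [1, -2,  2, -1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_transform(block):
    """Y = C X C^T for one 4x4 residual block."""
    ct = [list(row) for row in zip(*C)]   # C transposed
    return matmul(matmul(C, block), ct)

# A flat (DC-only) block concentrates all energy in coefficient Y[0][0]:
flat = [[10] * 4 for _ in range(4)]
y = forward_transform(flat)
print(y[0][0], y[1][1])  # → 160 0
```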
  • Umple: Towards combining model driven with prototype driven system development

    Page(s): 1 - 7

    The emergence of model-driven methodologies is bringing new challenges for software prototyping. Models tend to focus on the design of the system, and are less concerned with, or less able to support, prototype qualities like reuse, evolution, or weaving together independently designed parts. This paper presents a model-oriented prototyping tool called Umple that supports model-driven engineering and overcomes some of the challenges related to prototyping in a modeling environment. Umple allows end users to quickly model class and state machine models and to incrementally embed implementation artifacts. At any point in the modeling process, users can quickly generate a fully functional prototype that exposes modeling implications on the user interface and allows stakeholders to quickly get a feel for how the full system will behave.

  • Performance evaluation for passive-type Optical network-on-chip

    Page(s): 1 - 7

    Optical networks-on-chip (ONoCs) are an emerging technology for use as a communication platform for systems-on-chip (SoCs). An ONoC is a novel on-chip communication system in which information is transmitted in the form of light, as opposed to the conventional electrical network-on-chip (ENoC). This work studies the performance of a class of ONoCs that employ a single central passive-type optical router using wavelength division multiplexing (WDM) as the routing mechanism. The ONoC performance analysis is carried out both at the system level (network latency and throughput) and at the physical level. In the physical-level (optical) performance analysis, we study the communication reliability of the ONoC as expressed by the signal-to-noise ratio (SNR) and the bit error rate (BER). The optical performance of the ONoC is evaluated based on the system parameters, component characteristics and technology. The system-level analysis is carried out through simulation using a flit-level-accurate SystemC model. Experimental results demonstrate the scalability of the ONoC and show that it is able to deliver bandwidth comparable to, or for large network sizes even better than, the ENoC.

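
    The physical-level reliability analysis the abstract mentions rests on the standard optical-link relation between the Q-factor and the bit error rate for on-off keying, BER = (1/2)·erfc(Q/√2). The sketch below is that textbook relation, not the paper's full noise model:

```python
from math import erfc, sqrt

def ber_from_q(q):
    """Standard optical-receiver relation for on-off keying:
    BER = (1/2) * erfc(Q / sqrt(2))."""
    return 0.5 * erfc(q / sqrt(2))

# Q = 6 is the classic design point for optical links, giving BER ≈ 1e-9:
print(f"{ber_from_q(6):.2e}")
```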
  • Reconfigurable router for dynamic Networks-on-Chip

    Page(s): 1 - 6

    A reconfigurable router architecture for dynamic Networks-on-Chip (DyNoC) is presented. Dynamically placed modules cover several processing elements and routers of the DyNoC. These processing elements communicate over a second communication level using direct links between neighbouring elements; routers covered by modules are therefore unused. In this paper, several possibilities for using such a router as an additional resource to enhance the complexity of modules are presented. The reconfigurable router is evaluated in terms of area, speed and latency. A case study in which the router is used as a lookup table demonstrates the feasibility of this approach.

  • An approach for rapidly adapting the demands of ISO/DIS 26262 to electric/electronic architecture modeling

    Page(s): 1 - 7

    The draft international standard ISO 26262 describes a safety lifecycle for road vehicles and thereby influences all parts of development (specification, prototyping, implementation, integration, test, etc.). All functionalities affected by the standard contain hierarchical electric and electronic systems. Starting from 2011, these should be designed, analyzed, assessed and documented strictly according to the demands of ISO 26262. Adapting the standard to an OEM's (original equipment manufacturer's) existing development lifecycle brings numerous additional challenges and time-consuming activities. Rapid application and adoption of ISO 26262 is imperative for OEMs and tier-one suppliers to stay competitive and avoid the risk of delayed development kick-offs. The electric and electronic architecture (EEA) of a vehicle comprises the distributed automotive control system (electronic control units (ECUs), sensors, actuators, etc.) and the computed functions. The EEA is designed and evaluated during the concept phase of vehicle development, and its design has a groundbreaking impact on succeeding development phases. Conformity of the EEA to the demands of ISO 26262, together with well-considered design decisions, enables fast and safe progress in succeeding phases of the development lifecycle and thereby rapid development of intelligent and future-oriented vehicular systems. This article discusses the impact of ISO 26262 on EEA development and the handling of the demanded safety requirements during the early phases of EEA development.

  • Rapid specification of hardware-in-the-loop test systems in the automotive domain based on the electric / electronic architecture description of vehicles

    Page(s): 1 - 6

    The complexity of modern cars, motorbikes and commercial vehicles continues to grow rapidly. Although the number of deployed Electronic Control Units (ECUs) is decreasing, they have to fulfill more and more functions concerning performance, comfort and safety. The electric and electronic architecture (EEA) of a vehicle forms the basis for those features and functionalities. An elaborated and evaluated EEA is developed in the concept phases of the vehicle development lifecycle. Recently, the tool PREEvision has offered the possibility to model EEAs considering different views of the architecture (requirements, software, hardware, wiring harness, topology, etc.). For test and evaluation of a vehicle's functionalities, Hardware-in-the-Loop (HiL) technology is utilized to cover the integration phase of hardware and software. The specification and design of HiL test systems (HiL-TSs) is a complex and time-consuming procedure that can be supported by information about electric and electronic artifacts and their relationships, both available in the EEA model. This paper presents an approach for the rapid specification, development and application of HiL-TSs as well as rapid prototyping systems.

  • Rapid prototyping and compact testing of CPU emulators

    Page(s): 1 - 7

    In this paper, we propose a novel rapid prototyping technique to produce a high-quality CPU emulator at reduced development cost. Specification mining from published CPU manuals, automated code generation of both the emulator and its test vectors from the mined CPU specifications, and a hardware-oracle-based test strategy work together to close the gaps between specification analysis, development and testing. The hardware oracle is a program that allows controlled execution of one or more instructions on the CPU, so that its outputs can be compared to those of the emulator. The hardware oracle eliminates any guesswork about the true behavior of an actual CPU, and it helped identify several discrepancies between the published specifications and the actual processor behavior that would have been very hard to detect otherwise.

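
    The hardware-oracle strategy is a form of differential testing: run each instruction on the emulator and on a reference, and report the first divergence. The sketch below uses a hypothetical one-register ISA with both sides as plain Python functions; in the paper, the oracle executes instructions on the real CPU.

```python
def differential_test(instructions, emulator_step, oracle_step, init_state):
    """Run each instruction on both implementations from the same state
    and return the first divergence (index, instruction, emu, oracle)."""
    for i, ins in enumerate(instructions):
        emu = emulator_step(init_state.copy(), ins)
        ref = oracle_step(init_state.copy(), ins)
        if emu != ref:
            return i, ins, emu, ref
        init_state = ref                 # continue from the agreed-upon state
    return None

# Hypothetical 1-register ISA; the emulator implements "shl" incorrectly:
def oracle(s, ins):
    op, n = ins
    s["acc"] = {"add": s["acc"] + n, "shl": s["acc"] << n}[op]
    return s

def buggy_emulator(s, ins):
    op, n = ins
    s["acc"] = {"add": s["acc"] + n, "shl": s["acc"] * n}[op]  # bug: * not <<
    return s

prog = [("add", 3), ("shl", 1), ("shl", 2)]
print(differential_test(prog, buggy_emulator, oracle, {"acc": 0}))
# → (1, ('shl', 1), {'acc': 3}, {'acc': 6})
```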
  • Performance-cost analyses software for H.264 Forward/Inverse Integer Transform

    Page(s): 1 - 7

    In the literature, Data Throughput rate per Unit Area (DTUA) has been used as the sole metric to evaluate the effectiveness of H.264 Forward/Inverse Integer Transform (FIT/IIT) designs. However, beyond the throughput and circuit area captured by DTUA, interconnection, power and delay are not considered. In this paper, we first summarize the Performance-Cost Metric (PCM) technique, where PCM is defined as the ratio of data throughput to design cost, the latter including power, area, delay, and issues associated with interconnections in sub-micron designs. Compared to DTUA, PCM facilitates more comprehensive comparisons of VLSI designs, including FIT/IIT. The contribution of this paper is software that facilitates the use of the PCM technique. Users enter some preliminary parameters of their design; based on the given parameters and the reference designs managed by the software, it then analyzes and reports the bounds the users' design must meet in order to achieve a better PCM than the reference designs. It can also export comparison results among different designs. The software is designed flexibly so as to support not only our PCM technique for FIT/IIT designs, but also different metrics for other architectures.

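
    The abstract defines PCM only as throughput over a design cost combining power, area, delay and interconnect. One plausible form is sketched below; the weighted-product cost and the parameter values are assumptions for illustration, not the paper's exact definition.

```python
def pcm(throughput, power, area, delay, interconnect, weights=(1, 1, 1, 1)):
    """Sketch of a Performance-Cost Metric: throughput divided by a
    weighted product of cost factors (weighting assumed, not from the paper)."""
    wp, wa, wd, wi = weights
    cost = (power ** wp) * (area ** wa) * (delay ** wd) * (interconnect ** wi)
    return throughput / cost

# Higher PCM means more throughput per unit of combined cost:
print(pcm(100, 2, 4, 5, 1))  # → 2.5
```

    Unlike DTUA (throughput / area alone), a design that gains throughput by burning power or lengthening wires is penalized here, which is the comparison the paper argues for.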