2013 NASA/ESA Conference on Adaptive Hardware and Systems (AHS)

Date: 24-27 June 2013

  • [Front cover]

    Page(s): c1
  • [Copyright notice]

    Page(s): i
  • AHS 2013 - Table of contents

    Page(s): iii - v
  • Conference organizers

    Page(s): vii
  • Program committee

    Page(s): viii
  • Keynote address I: The Space reference scenario: From the present solutions to the future challenges for Thales Alenia Space

    Page(s): ix

    In the current Space reference scenario, different mission targets and needs are driving the capabilities required of the space actors (agencies, industries and suppliers) to provide the right solutions and meet customer requirements in terms of performance, reliability, safety and cost. Missions are increasingly challenging and call for extended on-board capabilities (as well as ground facilities) to manage high and secure data transmission rates, accurate pointing, fast reconfiguration, autonomous management, and failure detection, isolation and recovery. Moreover, the on-board hardware is required to work in a hostile environment, considering for instance the wide range of temperatures and radiation, and must also cope with physical constraints such as mass, dimensions, launch vibrations and limited power resources (relying mainly on solar arrays and batteries). In the future, exploration missions to remote planets and celestial bodies will be even more demanding, with the need to cope with more stringent constraints such as visibility, link budget, communication delays and energy. Spacecraft will therefore have to be more autonomous, implementing capabilities not only to manage planned operations but also to react safely to unplanned situations or to move in an unknown environment such as planetary terrain. This presentation focuses on the main Space missions that are currently the reference for Thales Alenia Space activity. Thales Alenia Space, one of the major players in the space business, covers a wide range of applications: from telecommunications to navigation, from observation to science and exploration. The activity is carried out in 10 industrial sites in Europe (France, Italy, Spain, Belgium and Germany) with more than 7,200 employees. The major successful achievements are presented, such as space systems, instruments and spacecraft, focusing in particular on the activities carried out in Turin, where the AHS conference takes place and where Thales Alenia Space Italy has for years operated a plant in which many modules of the International Space Station have been built, as well as national and international scientific satellites, and where the TAS leading activity in exploration and science is performed today. An outlook to the future is also presented, addressing the main reference European missions in terms of exploration and identifying the major challenges for on-board autonomy and HW/SW features and trends.

  • Keynote address II: Adaptive distributed systems for space exploration: Present and future

    Page(s): x

    Summary form only given. In this talk, Dr. Quadrelli will emphasize the need for adaptivity in four key categories of distributed autonomous systems for space exploration (multiple assets, multi-physics, mission-level, multi-scale). One category concerns adaptivity and system reconfiguration in robotic exploration of extreme environments with multiple assets. Another deals with adaptivity that exploits multi-physics material interactions in the physical implementation of robotic manipulation tasks. A third deals with mission-level adaptivity, the best example of which is a complex space system interacting with the atmosphere until it lands autonomously on the surface. Finally, the last category deals with multi-scale system adaptivity that enables space science through an innovative rethinking of the way space science missions are done today.

  • Worst case error rate predictions and mitigation schemes for Virtex-4 FPGAs on solar orbiter

    Page(s): 1 - 8

    The Data Processing Unit (DPU) for the Polarimetric and Helioseismic Imager (PHI) instrument on the ESA Solar Orbiter mission will use Xilinx Virtex-4 XQR4VSX55 FPGAs for high-data-rate acquisition and processing tasks. This paper discusses the feasibility of using this type of SRAM-based FPGA for such tasks. First, scenarios of the radiation environment are derived from the orbit description; then relevant radiation effects on microelectronics are recapitulated; and finally the resulting error rates under various conditions are estimated. The prediction of error rates is based on an estimation of upset rates, which in turn is used to predict system error rates. The mitigation techniques of Triple Modular Redundancy (TMR) and configuration-memory scrubbing reduce the system error rate to predicted levels that make the construction of this system feasible. Furthermore, the DPU is set up with a radiation-hardened control processor and a fixed antifuse FPGA such that only tasks for which sporadic failures can be detected and corrected are loaded onto the Virtex-4 FPGAs.
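
    As a rough illustration of the error-rate arithmetic sketched above, the following Python snippet models configuration upsets as a Poisson process and compares unmitigated and mitigated rates. All figures (device size, per-bit upset rate, scrub interval) are hypothetical placeholders, not the paper's values, and the TMR model is a crude bound, not the authors' method:

        import math

        # Hypothetical figures for illustration -- NOT values from the paper.
        bits = 22_000_000            # configuration bits in the device
        upset_rate_per_bit = 1e-10   # upsets / bit / second (orbit-dependent)
        scrub_interval = 10.0        # seconds between configuration scrubs

        lam = bits * upset_rate_per_bit * scrub_interval  # expected upsets per interval

        def p_at_least(k, lam):
            """P[N >= k] for N ~ Poisson(lam)."""
            return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                             for i in range(k))

        intervals_per_day = 86400.0 / scrub_interval
        # Unmitigated: any upset during an interval may corrupt the system.
        print("unmitigated errors/day ~", p_at_least(1, lam) * intervals_per_day)
        # TMR + scrubbing (crude bound): >= 2 upsets must land in the same
        # interval before the scrubber repairs the first; a single upset is
        # masked by the majority vote.
        print("mitigated errors/day   ~", p_at_least(2, lam) * intervals_per_day)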

  • On the optimal reconfiguration times for TMR circuits on SRAM based FPGAs

    Page(s): 9 - 14

    Unreliable and harsh environmental conditions in avionics and space applications demand run-time adaptation capabilities to withstand environmental changes and radiation-induced faults. Modern SRAM-based FPGAs, integrating high computational power with partial and dynamic reconfiguration abilities, are a natural candidate for such systems. However, due to the vulnerability of these devices to Single Event Upsets (SEUs), designs need proper fault-handling mechanisms. In this work we propose a novel circuit instrumentation method that probes Triple Modular Redundancy (TMR) circuits for error detection at the granularity of individual domains and then uses selective run-time dynamic reconfiguration for recovery. Error detection logic is inserted into the physical netlist to identify and localize faults, and selective domain reconfiguration is achieved through careful placement decisions in the FPGA's reconfigurable area. The proposed technique is suitable for systems with hard real-time constraints. Our results demonstrate an area overhead of 2 LUTs per majority voter in internal partitions compared to standard TMR circuits. In addition, the approach brings the reconfiguration time of a TMR circuit down to that of a single domain and ensures 100% availability of the device under the Single Event Upset fault model.
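
    To make the domain-level error detection concrete, here is a minimal behavioral sketch of a TMR majority voter that also reports which domain disagreed, so that only that domain needs reconfiguring. This is an illustrative Python model with made-up values; the paper inserts the equivalent logic (about 2 LUTs per voter) into the physical netlist:

        def tmr_vote(a, b, c):
            """Majority-vote three redundant domain outputs and report which
            domain, if any, disagrees -- the disagreeing domain is the one
            to selectively reconfigure."""
            voted = (a & b) | (b & c) | (a & c)   # bitwise majority
            mismatch = [a != voted, b != voted, c != voted]
            faulty = mismatch.index(True) if any(mismatch) else None
            return voted, faulty

        out, faulty_domain = tmr_vote(0b1011, 0b1011, 0b1111)
        print(bin(out), faulty_domain)   # 0b1011 voted; domain 2 flagged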

  • Adaptive FDIR framework for payload data processing systems using reconfigurable FPGAs

    Page(s): 15 - 22

    In this paper, a Fault Detection, Isolation and Recovery (FDIR) approach for SRAM-based FPGAs in payload data processing applications on board spacecraft is presented. The approach supports different reliability requirements through on-line configuration of the target system, adapting it to new constraints in terms of reliability and power consumption. The core of the approach is a novel Distributed Failure Detection technique, aimed at Network-on-Chip implementations, which embeds failure detection mechanisms into the routing switches of the network.
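
    One plausible reading of "failure detection embedded in the routing switches" is a per-flit integrity check at every switch input. The sketch below is a toy Python model under that assumption (parity as the check, a callback as the recovery hook); the paper's actual mechanism may differ:

        def parity(word: int) -> int:
            """Even parity bit over a 32-bit flit payload."""
            word &= 0xFFFFFFFF
            p = 0
            while word:
                p ^= word & 1
                word >>= 1
            return p

        class Switch:
            """Toy routing switch: checks each incoming flit's parity and
            reports the failing input port to a recovery manager."""
            def __init__(self, name, on_failure):
                self.name = name
                self.on_failure = on_failure

            def receive(self, port, payload, parity_bit):
                if parity(payload) != parity_bit:
                    self.on_failure(self.name, port)   # isolate, then recover
                    return None
                return payload                          # forward as usual

        failures = []
        sw = Switch("R01", lambda s, p: failures.append((s, p)))
        sw.receive(0, 0b1010, parity(0b1010))       # clean flit passes
        sw.receive(1, 0b1010, parity(0b1010) ^ 1)   # corrupted flit flagged
        print(failures)                              # [('R01', 1)]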

  • On self-adaptive resource allocation through reinforcement learning

    Page(s): 23 - 30

    Autonomic computing has been proposed as a promising solution to the complexity of modern systems, which is making management operations increasingly difficult for human beings. This work proposes the Adaptation Manager, a comprehensive framework for implementing autonomic managers capable of pursuing some of the objectives of autonomic computing (i.e., self-optimization and self-healing). The Adaptation Manager features an active performance monitoring infrastructure and two dynamic knobs for tuning the scheduling decisions of an operating system and the working frequency of the cores. It exploits artificial intelligence and reinforcement learning to close the Monitor-Analyze-Plan-Execute with Knowledge (MAPE-K) adaptation loop at the base of every autonomic manager. We evaluate the Adaptation Manager, and especially the adaptation policies it learns through reinforcement learning, using a set of representative applications for multicore processors, and show the effectiveness of our prototype on commodity computing systems.
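
    A minimal sketch of the kind of reinforcement-learning loop described here, assuming tabular Q-learning over a core-frequency knob; the state encoding, reward, and stubbed measurements are hypothetical, not the Adaptation Manager's actual design:

        import random

        # Hypothetical knob and state space: three core frequencies, and a
        # coarse state = (load_level, meeting_perf_goal).
        FREQS = [1.0, 1.5, 2.0]           # GHz
        ACTIONS = range(len(FREQS))
        Q = {}                            # (state, action) -> value

        def reward(perf, power):
            # Self-optimization objective: hit the performance goal cheaply.
            return (1.0 if perf >= 1.0 else -1.0) - 0.1 * power

        def choose(state, eps=0.1):
            if random.random() < eps:
                return random.choice(list(ACTIONS))
            return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

        def learn(state, action, r, nxt, alpha=0.5, gamma=0.9):
            best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
            q = Q.get((state, action), 0.0)
            Q[(state, action)] = q + alpha * (r + gamma * best_next - q)

        # One Monitor -> Analyze -> Plan -> Execute iteration (stubbed):
        state = ("high_load", False)
        a = choose(state)
        perf, power = FREQS[a], FREQS[a] ** 2   # stub models, not measurements
        learn(state, a, reward(perf, power), ("high_load", perf >= 1.0))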

  • Evolutionary algorithms that use runtime migration of detector processes to reduce latency in event-based systems

    Page(s): 31 - 38

    Event-based systems (EBS) are widely used to efficiently process massively parallel data streams. In distributed event processing, the allocation of event detectors to machines is crucial for both latency and efficiency, and a naive allocation may even cause system failure. But since data streams, network traffic, and event loads cannot be predicted sufficiently well, the optimal detector allocation cannot be found a priori and must instead be determined at runtime. This paper describes how evolutionary algorithms (EAs) can be used to minimize both network and processing latency by means of runtime migration of event detectors. The paper qualitatively evaluates the algorithms on synthetic data streams in a distributed event-based system. We show that some EAs work efficiently even with large numbers of event detectors and machines, and that a hybrid of Cuckoo Search and Particle Swarm Optimization outperforms the others.
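
    A minimal sketch of an evolutionary search over detector-to-machine allocations, where mutation plays the role of migrating one detector; the topology, sizes, and fitness model are illustrative assumptions, not the paper's setup:

        import random

        MACHINES, DETECTORS = 4, 12
        # Hypothetical inter-machine latency and event flow: detector k
        # feeds detector k+1 in a pipeline.
        link_latency = [[abs(i - j) for j in range(MACHINES)]
                        for i in range(MACHINES)]

        def fitness(alloc):                    # total network latency (lower is better)
            return sum(link_latency[alloc[k]][alloc[k + 1]]
                       for k in range(DETECTORS - 1))

        def mutate(alloc):
            child = alloc[:]
            child[random.randrange(DETECTORS)] = random.randrange(MACHINES)  # migrate one
            return child

        pop = [[random.randrange(MACHINES) for _ in range(DETECTORS)]
               for _ in range(20)]
        for _ in range(200):                   # generations
            pop.sort(key=fitness)
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
        print(fitness(min(pop, key=fitness)))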

  • Hardware-based parallel firefly algorithm for embedded applications

    Page(s): 39 - 46

    The firefly algorithm (FA) is a population-based metaheuristic inspired by the flashing behavior of fireflies. As a population-based algorithm, the FA suffers from long execution times, particularly for embedded optimization problems with computational limitations. To reduce execution times we propose a parallel hardware architecture of the FA that facilitates implementation on Field Programmable Gate Arrays (FPGAs). In addition, this work applies the opposition-based learning (OBL) approach to the FA. The resulting hardware implementation (HPOFA) was mapped onto a Virtex-5 FPGA device, and numerical experiments using four well-known benchmark problems demonstrate that the opposition-based approach improves the algorithm's behavior, preserving swarm diversity and avoiding premature convergence. Synthesis results show that the HPOFA architecture maps efficiently to hardware and is suitable for embedded applications.
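
    For reference, a compact Python sketch of the standard firefly update combined with opposition-based initialization (one common way to apply OBL); the parameters and sphere benchmark are illustrative, and the paper's parallel hardware mapping is not modeled here:

        import math, random

        DIM, N, LO, HI = 2, 10, -5.0, 5.0
        def sphere(x):                          # benchmark objective (minimize)
            return sum(v * v for v in x)

        # Opposition-based learning: evaluate each random firefly and its
        # "opposite" point, keep the better of the two.
        def obl_init():
            pop = []
            for _ in range(N):
                x = [random.uniform(LO, HI) for _ in range(DIM)]
                x_opp = [LO + HI - v for v in x]
                pop.append(min(x, x_opp, key=sphere))
            return pop

        def firefly_step(pop, beta0=1.0, gamma=1.0, alpha=0.2):
            for i in range(N):
                for j in range(N):
                    if sphere(pop[j]) < sphere(pop[i]):      # j is brighter
                        r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                        beta = beta0 * math.exp(-gamma * r2)  # attractiveness
                        pop[i] = [a + beta * (b - a)
                                  + alpha * (random.random() - 0.5)
                                  for a, b in zip(pop[i], pop[j])]
            return pop

        pop = obl_init()
        for _ in range(50):
            pop = firefly_step(pop)
        print(min(map(sphere, pop)))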

  • Ant Colony Optimization for mapping, scheduling and placing in reconfigurable systems

    Page(s): 47 - 54

    Modern heterogeneous embedded platforms, composed of several digital signal, application-specific and general-purpose processors, also include reconfigurable devices supporting partial dynamic reconfiguration. These devices can change the behavior of some of their parts during execution, allowing hardware acceleration of more sections of an application. Nevertheless, partial dynamic reconfiguration imposes severe latency overheads. For such systems, a critical part of the design phase is deciding on which processing element (mapping) and when (scheduling) each task executes, but also how to place tasks on the reconfigurable device to guarantee the most efficient reuse of the programmable logic. In this paper we propose an algorithm based on Ant Colony Optimization (ACO) that simultaneously performs the scheduling, mapping and linear placement of tasks, hiding reconfiguration overheads through prefetching. Our heuristic gradually constructs solutions and then searches around the best ones, cutting non-promising areas out of the design space. We show how to account for partial dynamic reconfiguration constraints in the scheduling, placement and mapping problems and compare our formulation to other heuristics that address the same problems. We demonstrate that our proposal is more general and robust, and finds better solutions (by 16.5% on average) than competing approaches.
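
    A stripped-down illustration of ACO applied to joint mapping/scheduling (placement and reconfiguration prefetching omitted); the instance data and pheromone parameters are invented for the example and do not reflect the paper's formulation:

        import random

        TASKS, PES = 6, 3
        exec_time = [[random.randint(2, 9) for _ in range(PES)]
                     for _ in range(TASKS)]
        tau = [[1.0] * PES for _ in range(TASKS)]    # pheromone: task -> PE

        def build_solution():
            alloc = []
            for t in range(TASKS):
                # probability ~ pheromone x heuristic (1 / execution time)
                w = [tau[t][p] / exec_time[t][p] for p in range(PES)]
                alloc.append(random.choices(range(PES), weights=w)[0])
            return alloc

        def makespan(alloc):                 # tasks on one PE run back-to-back
            load = [0] * PES
            for t, p in enumerate(alloc):
                load[p] += exec_time[t][p]
            return max(load)

        best = None
        for _ in range(100):                 # ant iterations
            ants = [build_solution() for _ in range(10)]
            it_best = min(ants, key=makespan)
            if best is None or makespan(it_best) < makespan(best):
                best = it_best
            for t in range(TASKS):           # evaporate, then reinforce best ant
                for p in range(PES):
                    tau[t][p] *= 0.9
                tau[t][it_best[t]] += 1.0 / makespan(it_best)
        print(best, makespan(best))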

  • Implementation of an initial-configuration based on self-reconfiguration for an on-board processor

    Page(s): 55 - 62

    Modern flexible space applications rely on an on-board processor to increase overall system performance. Additionally, FPGA-based systems provide the advantage of reconfiguration during the satellite mission. The Fraunhofer On-Board Processor realizes a multi-FPGA communication platform with four Virtex-5QV FPGAs. For this platform, high system reliability and the avoidance of single points of failure are required. To meet these requirements, a new FPGA reconfiguration concept is necessary to guarantee fail-safe reconfiguration. The focus of this paper is the implementation of the FPGA design, which is based on an initial-configuration system and self-reconfiguration of the FPGAs. The implementation uses the hardware platform of the Elegant Bread Board. The initial-configuration method is used in parallel with a regular one and increases system reliability.

  • Reconfigurable platforms for Data Processing on scientific space instruments

    Page(s): 63 - 70

    The demand for increasing on-board processing power and reconfigurability is the driver for new approaches in the development of Data Processing Units (DPUs) for scientific instruments on upcoming and future space missions. The central part of a DPU is the actual processing element, which has to reduce the raw data generated by the sensors to a down-linkable size while retaining scientifically meaningful content. With increasing raw data rates, more powerful, energy-efficient and adaptable processing cores are required. This cannot be achieved by simple data compression; it requires the execution of complex scientific algorithms, formerly a task for powerful commercial workstations on Earth. Space-qualified General Purpose Processors (GPPs) are not sufficient for such tasks, but space-qualified SRAM-based FPGAs (Field Programmable Gate Arrays) provide a technology that is available and used in today's and upcoming processing platforms. The development challenge is to exploit the reconfigurability of such devices in flight, considering the harsh space environment. A feasible architecture is demonstrated with the Dynamically Reconfigurable Processing Module (DRPM), developed in the frame of an ESA study and currently being adapted for the DPU of the Solar Orbiter PHI instrument. The demand for more computational power also drives recent development of newly qualified many-core processors, which will naturally provide new options for future DPU architectures.

  • A framework to model self-adaptive Computing Systems

    Page(s): 71 - 78

    This paper proposes a model for the specification and characterization of self-aware/adaptive systems. The model offers rigorous support for identifying and managing the aspects that characterize context- and self-awareness, in terms of the relevant elements that determine changes of context and the related actions the system should consequently take. These elements include the application, the architecture and the environment as perceived, and the model also captures the actions the system can perform to adapt to changes in the context. The proposal specifically targets the field of computing systems; however, it is not limited to it. The proposed model is applied to a few self-adaptive scenarios to illustrate its potential.

  • A systematic generation of optimized heterogeneous 3D Networks-on-Chip architecture

    Page(s): 79 - 83

    In this paper, we present a novel algorithm that systematically generates heterogeneous three-dimensional Network-on-Chip (3D NoC) topologies for a given application, such that the number of vertical connections as well as the communication energy is reduced while NoC performance is maintained. The proposed algorithm analyzes the target application and generates heterogeneous architectures by efficiently redistributing the vertical links and buffer spaces based on vertical-link and buffer utilization. The algorithm has been evaluated with synthetic and various real-world traffic patterns. Experimental results show that it generates optimized architectures with lower energy consumption and a significant reduction in packet delays compared to existing 3D NoC architectures.
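
    The core redistribution idea can be illustrated by a greedy sketch: given per-link utilization measured from application traffic, keep only the vertical links that carry the most traffic within a fixed TSV budget. The numbers below are invented, and the paper's algorithm is more elaborate (it also redistributes buffer space):

        # Hypothetical utilization of candidate vertical links (keyed by
        # router coordinates), e.g. extracted from traffic traces.
        utilization = {(0, 0): 0.91, (0, 1): 0.07, (1, 0): 0.55,
                       (1, 1): 0.02, (2, 0): 0.34, (2, 1): 0.88}
        BUDGET = 3   # vertical links (TSVs) the floorplan can afford

        # Keep only the most-used links; traffic on dropped links is rerouted.
        kept = sorted(utilization, key=utilization.get, reverse=True)[:BUDGET]
        print(sorted(kept))   # [(0, 0), (1, 0), (2, 1)]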

  • Runtime adaptation on dataflow HPC platforms

    Page(s): 84 - 91

    We are facing an ever-growing quest for performance in High Performance Computing (HPC) systems. Growing concern about the power budgets and overall deployment costs required to run these systems is opening the way to novel HPC platforms, and new paradigms and architectures are being developed to tackle these challenges. In this context, FPGA-based HPC platforms used to accelerate algorithms expressed as dataflow programs are a promising approach. One traditional limiting factor of FPGA technology is that the ever-increasing complexity of applications might require the designer to switch to a bigger device or, conversely, the same device might be underutilized due to difficulties in sharing the available logic. Partial Reconfiguration (PR) is the standard technique for overcoming such limitations. This paper presents the research work done during a technology transfer to extend the Maxeler design flow, a commercially successful FPGA-based HPC platform, to efficiently support PR. We focus on the design and development of a methodology to support the PR feature in the Maxeler design flow, showing the advantages of such an approach on the resulting platform.

  • Self-adaptation techniques for mixed-signal SiGe BiCMOS ICs

    Page(s): 92 - 98

    Self-adaptation and user-controlled adaptation are powerful techniques that can be used to stabilize and improve the performance of analog and mixed-signal ICs. These techniques are particularly useful for silicon-germanium BiCMOS circuits, which offer advanced operational characteristics desirable for ground-based and space-oriented electronics. This paper presents a set of self-adaptation techniques based on generating and combining currents that represent temperature-related and process-related parameter deviations of bipolar transistors and resistors in BiCMOS ICs. In combination with several well-known compensation methods, such as beta compensation and duty-cycle stabilization, the proposed techniques lead to more stable characteristics of high-performance ICs with higher tolerance to variable external conditions. This is especially important for space-related products that operate in harsh environments. The efficiency of the proposed self-adaptive design approach has been demonstrated through the development and successful testing of a high-speed delay-line IC for clock and binary data signals. Application of the proposed approach is not limited to the presented example: it will be efficient for stabilizing any characteristic of silicon-germanium BiCMOS circuits that can be electrically manipulated.

  • AIDI: An adaptive image denoising FPGA-based IP-core for real-time applications

    Page(s): 99 - 106

    The presence of noise in images can significantly impact the performance of digital image processing and computer vision algorithms, so it should be removed to improve the robustness of the entire processing flow. Noise estimation is also a key factor, since, to be most effective, denoising algorithms and filters should be tuned to the actual level of noise. Moreover, the complexity of these algorithms poses a challenge for real-time image processing applications, which require high computing capacity. In this context hardware acceleration is crucial, and Field Programmable Gate Arrays (FPGAs) best fit the growing demand for computational capability. This paper presents an Adaptive Image Denoising IP-core (AIDI) for real-time applications. The core first estimates the level of noise in the input image, then applies an adaptive Gaussian smoothing filter to remove it. The filtering parameters are computed on the fly, adapted both to the level of noise in the image and pixel by pixel, to preserve image information (e.g., edges or corners). The FPGA-based architecture is presented, highlighting its improvements over a standard static filtering approach.
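
    A possible software model of the estimate-then-filter flow, using Immerkaer's fast noise estimator (one common choice; the core's actual estimator and its per-pixel parameter adaptation are not specified here) followed by a Gaussian filter whose strength is derived from the estimate. The noise-to-sigma mapping is illustrative only:

        import numpy as np
        from scipy.ndimage import convolve, gaussian_filter

        def estimate_noise_sigma(img):
            """Immerkaer's fast noise estimate: the 3x3 Laplacian-like mask
            annihilates smooth image content, leaving mostly noise."""
            m = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=float)
            h, w = img.shape
            s = np.abs(convolve(img.astype(float), m)).sum()
            return s * np.sqrt(np.pi / 2.0) / (6.0 * (w - 2) * (h - 2))

        def adaptive_denoise(img):
            sigma_n = estimate_noise_sigma(img)
            # Map estimated noise to filter strength (illustrative mapping).
            return gaussian_filter(img.astype(float),
                                   sigma=max(0.5, sigma_n / 10.0))

        rng = np.random.default_rng(0)
        clean = np.tile(np.linspace(0, 255, 64), (64, 1))      # smooth ramp
        noisy = clean + rng.normal(0, 12.0, clean.shape)
        print(round(estimate_noise_sigma(noisy), 1))  # roughly recovers 12
        out = adaptive_denoise(noisy)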

  • FPGA implementation of a lossy compression algorithm for hyperspectral images with a high-level synthesis tool

    Page(s): 107 - 114

    In this paper, we present an FPGA implementation of a novel adaptive and predictive algorithm for lossy hyperspectral image compression. This algorithm was specifically designed for on-board compression, where FPGAs are the most attractive and popular option, featuring low power and high performance. However, the traditional RTL design flow is rather time-consuming; high-level synthesis (HLS) tools, like the well-known CatapultC, can help to shorten it. Using CatapultC, we obtain an FPGA implementation of the lossy compression algorithm directly from source code written in C, with a double motivation: demonstrating how well the lossy compression algorithm performs on an FPGA in terms of throughput and area, and at the same time showing how HLS is applied, in terms of source code preparation and CatapultC settings, to obtain an efficient hardware implementation in a relatively short time. Place-and-route results on a Virtex-5 5VFX130 show effective performance in terms of area (maximum device utilization of 14%) and frequency (80 MHz). A comparison with a previous FPGA implementation of a lossless-to-near-lossless algorithm is also provided: results on a Virtex-4 4VLX200 show lower memory requirements and higher frequency for the LCE algorithm.

  • Parallelised fault-tolerant Integer KLT implementation for lossless hyperspectral image compression on board satellites

    Page(s): 115 - 122

    The Karhunen-Loève Transform (KLT) has been used as a spectral decorrelator in multi-component image compression. The Integer KLT is an integer approximation of the KLT that enables lossless hyperspectral image compression. In this paper, the effect of single-bit errors on the performance of the Integer KLT is investigated. An error detection and correction (EDAC) method based on Freivalds' simple checker is proposed for the matrix factorization part of the algorithm. A technique to reduce computational complexity, based on fixed sampling of the covariance matrix calculation, is also proposed. The low-complexity fault-tolerant Integer KLT is implemented on an 8-core Texas Instruments DSP (TMS320C6678) using the OpenMP® environment. Experimental results on compression performance, latency and power consumption are reported: the parallelized Integer KLT implementation processes a Hyperion spaceborne hyperspectral image in 3 seconds with a throughput of 66.9 Mbps.
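
    Freivalds' checker, the basis of the proposed EDAC method, verifies a matrix product A = B·C in O(n²) per trial instead of recomputing it in O(n³). A minimal Python version follows (the paper applies the checker to the factor matrices of the Integer KLT; the matrices here are toy examples):

        import random

        def freivalds_check(A, B, C, trials=16):
            """Probabilistically verify A == B * C for n x n matrices:
            compute B*(C*r) and A*r for a random 0/1 vector r. Any
            mismatch proves an error; each trial misses a real error
            with probability <= 1/2, so 16 trials miss with <= 2^-16."""
            n = len(A)
            for _ in range(trials):
                r = [random.randint(0, 1) for _ in range(n)]
                Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
                BCr = [sum(B[i][j] * Cr[j] for j in range(n)) for i in range(n)]
                Ar = [sum(A[i][j] * r[j] for j in range(n)) for i in range(n)]
                if any(x != y for x, y in zip(BCr, Ar)):
                    return False          # error detected -> recompute
            return True

        B = [[1, 2], [3, 4]]; C = [[5, 6], [7, 8]]
        A_ok = [[19, 22], [43, 50]]
        A_bad = [[19, 22], [43, 51]]      # single faulty entry (e.g. bit flip)
        print(freivalds_check(A_ok, B, C), freivalds_check(A_bad, B, C))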

  • Applying the adaptive Hybrid Flow-Shop scheduling method to schedule a 3GPP LTE physical layer algorithm onto many-core digital signal processors

    Page(s): 123 - 129

    Multicore Digital Signal Processor (DSP) platforms are commonly used in telecommunications baseband processing, and in the next few years high-performance DSPs are likely to combine many more DSP cores for signal processing with some General-Purpose Processor (GPP) cores for application control. As the number of cores increases in new DSP platform designs, scheduling of applications is becoming a complex operation. Meanwhile, the variability of the scheduled applications also tends to increase as applications become more sophisticated; such variation requires runtime adaptivity of application scheduling. This paper extends previous work on adaptive scheduling by using the Hybrid Flow-Shop (HFS) scheduling method, which models the device architecture as a pipeline of Processing Elements (PEs) with multiple alternate PEs for each pipeline stage. HFS scheduling is applied to the Uplink Physical Layer data processing (PUSCH) of the 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) telecommunication standard. Experiments conducted on an ARM Cortex-A9 GPP show that the HFS scheduling algorithm has an overhead that increases very slowly with the number of PEs, making the method suitable for executing adaptive scheduling in less than 1 ms for the 501 actors of an LTE PUSCH dataflow description executed on a 256-core architecture.
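
    The HFS model can be illustrated with a tiny greedy list scheduler: each actor traverses the pipeline stages in order, and at every stage it takes whichever alternate PE becomes free first. Stage counts and durations below are invented, not the LTE PUSCH figures:

        # Hybrid Flow-Shop sketch: every actor passes through the pipeline
        # stages in order; each stage offers several alternate PEs.
        stages = [2, 3, 2]                   # alternate PEs per pipeline stage
        duration = [5, 8, 3]                 # processing time per stage
        free_at = [[0] * n for n in stages]  # when each PE becomes available

        def schedule(actors):
            finish = []
            for _ in range(actors):
                t = 0                        # actor enters the pipeline
                for s, pes in enumerate(free_at):
                    # pick the alternate PE that can start this actor earliest
                    p = min(range(len(pes)), key=lambda i: max(pes[i], t))
                    start = max(pes[p], t)
                    pes[p] = start + duration[s]
                    t = pes[p]               # next stage starts after this one
                finish.append(t)
            return max(finish)               # makespan

        print(schedule(actors=8))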
