
2007 15th IEEE-NPSS Real-Time Conference

Date: April 29 - May 4, 2007


Displaying Results 1 - 25 of 138
  • [Copyright notice]

    Publication Year: 2007
  • A Versatile Sampling ADC System for On-Detector Applications and the AdvancedTCA Crate Standard

    Publication Year: 2007 , Page(s): 1 - 5
    Cited by:  Papers (1)

    A data acquisition system based on sampling analog-to-digital converters (ADCs) with data processing in field-programmable gate arrays (FPGAs) is presented. Up to 32 ADC channels are combined with reconfigurable logic on a small mezzanine form-factor card, yielding a compact module for analog data acquisition. This module can be mounted either on a dedicated detector front-end or in an Advanced Telecom Computing Architecture (ATCA) crate system. For on-detector usage the system provides several features for fail-safe and fail-tolerant operation and data readout, which opens applications in high-energy physics detectors where access to the electronics is difficult. The crate system, on the other hand, offers a high channel density at moderate cost per channel. Additionally, the ATCA standard provides enough data bandwidth for online data processing within the crate system. Thanks to the unified interface of the ADC mezzanine cards, the system cost can be reduced further: the ADC sampling frequency or the channel count can be up- or downgraded by exchanging just the mezzanine card while keeping the whole surrounding infrastructure. This makes it possible to serve different detector requirements with a limited number of mezzanine card types, and it also eases future enhancements with new ADC or FPGA devices.
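    The cost argument above hinges on the unified mezzanine interface: the carrier infrastructure stays fixed while cards with different sampling rates or channel counts are swapped in. Below is a minimal sketch of that design idea in Python; every class and attribute name is hypothetical, invented for illustration, and not taken from the paper.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class MezzanineCard:
            """Hypothetical descriptor of an ADC mezzanine card. Any card
            exposing this same interface can be plugged into the same
            carrier, so an upgrade touches only this object."""
            name: str
            channels: int           # ADC channels on the card
            sample_rate_msps: int   # sampling frequency in MS/s

        class Carrier:
            """Fixed surrounding infrastructure (front-end or ATCA blade)."""
            def __init__(self) -> None:
                self.card: Optional[MezzanineCard] = None

            def mount(self, card: MezzanineCard) -> None:
                # The carrier relies only on the unified interface,
                # never on card-specific details.
                self.card = card
                print(f"mounted {card.name}: "
                      f"{card.channels} ch @ {card.sample_rate_msps} MS/s")

        carrier = Carrier()
        carrier.mount(MezzanineCard("fast-8ch", channels=8, sample_rate_msps=500))
        # Upgrade path: exchange only the mezzanine, keep the carrier.
        carrier.mount(MezzanineCard("dense-32ch", channels=32, sample_rate_msps=80))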

  • FPGA-Based Compute Nodes for the PANDA Experiment at FAIR

    Publication Year: 2007 , Page(s): 1 - 2
    Cited by:  Papers (7)

    PANDA is a new universal detector for antiproton physics at the HESR facility at FAIR/GSI. The PANDA data acquisition system has to handle interaction rates of the order of 10^7/s and data rates of several 100 Gb/s. FPGA-based compute nodes with multi-Gbit/s bandwidth capability, using the ATCA architecture, are designed to handle tasks such as event building, feature extraction and high-level trigger processing. Each board is equipped with 5 Virtex-4 FX60 FPGAs. High-bandwidth connectivity is provided by four Gbit Ethernet links and 8 additional optical links connected to RocketIO ports. A single ATCA crate can host up to 14 boards, which are interconnected via a full-mesh backplane.

  • The ALICE-LHC Online Data Quality Monitoring Framework: Present and Future

    Publication Year: 2007 , Page(s): 1 - 6

    ALICE is one of the experiments under installation at the CERN Large Hadron Collider, dedicated to the study of heavy-ion collisions. The final ALICE data acquisition system has been installed and is being used for the testing and commissioning of detectors. Online data quality monitoring is an important part of the DAQ software framework (DATE). We review the implementation and usage experience of the interactive tool MOOD used during the ALICE commissioning period, and we present the architecture of the automatic data quality monitoring framework, a distributed application aimed at producing, collecting, analyzing, visualizing and storing monitoring data on a large, experiment-wide scale.

  • The LHCb Farm Monitoring and Control System

    Publication Year: 2007 , Page(s): 1 - 8

    The LHCb experiment at CERN will have an online trigger farm composed of up to 2000 PCs. In order to monitor and control each PC and to supervise the overall status of the farm, a Farm Monitoring and Control System (FMC) was developed. The FMC uses DIM (Distributed Information Management System) as its network communication layer; it is accessible both through a command-line interface and through the PVSS (Prozessvisualisierung und Steuerung) graphical interface, and it is interfaced to the Finite State Machine (FSM) of the LHCb Experiment Control System (ECS) in order to manage anomalous farm conditions. The FMC is an integral part of the ECS, which is in charge of monitoring and controlling all online components; it uses the same tools (DIM, PVSS, FSM, etc.) to guarantee complete integration and a coherent look and feel throughout the whole control system.
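    DIM follows a publish/subscribe pattern: servers publish named services, and clients subscribe to them and receive updates when values change. The sketch below mimics that pattern in plain Python to show how per-PC metrics could flow to a farm supervisor; it does not use the real DIM API, and the service names and threshold are invented for illustration.

        from collections import defaultdict

        class MiniNameServer:
            """Toy stand-in for DIM's name server: routes named services
            from publishers to subscribers (illustration only)."""
            def __init__(self) -> None:
                self.subscribers = defaultdict(list)

            def subscribe(self, service, callback):
                self.subscribers[service].append(callback)

            def publish(self, service, value):
                for callback in self.subscribers[service]:
                    callback(service, value)

        dns = MiniNameServer()

        # Supervisor side: watch the CPU temperature of every farm node.
        def on_update(service, value):
            if value > 70.0:                       # hypothetical alarm threshold
                print(f"ALARM {service} = {value} degC")

        for node in ("FARM/NODE001", "FARM/NODE002"):
            dns.subscribe(f"{node}/CPU_TEMP", on_update)

        # Node side: each PC publishes its monitored quantities.
        dns.publish("FARM/NODE001/CPU_TEMP", 45.2)   # fine, no alarm
        dns.publish("FARM/NODE002/CPU_TEMP", 78.9)   # triggers the alarm path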

  • LHC Collimators Low Level Control System

    Publication Year: 2007 , Page(s): 1 - 8

    The low-level control system (LLCS) of the LHC collimators is responsible for the accurate synchronization of ~500 axes of motion at the microsecond level. Stepping motors are used in open loop, ensuring a high level of repeatability of the position. In addition, a position survey system based on resolver and LVDT sensors verifies in real time the position of each axis, at a frequency close to 100 Hz, to within some tens of micrometers of the expected position at a given time. The LLCS is characterized by several challenging requirements, such as high reliability, redundancy, strict timing constraints and compactness of the low-level hardware because of the limited space available in the underground racks. The National Instruments PXI platform has been proposed and evaluated as the real-time low-level hardware. In this paper the architecture of the LHC collimators low-level control system is presented, and the solutions adopted for motion control and position-sensor readout on the PXI platform are detailed.
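    The position survey described above reduces to a fixed-rate loop: read each axis, compare against the time-dependent setpoint, and raise an interlock if the deviation exceeds the tolerance. A minimal sketch under assumed numbers (a 100 Hz survey rate and a 50 um tolerance; the measurement and setpoint functions are hypothetical stand-ins for the resolver/LVDT readout and the motion profile):

        import time

        TOLERANCE_UM = 50.0   # assumed: "some tens of micrometers"
        PERIOD_S = 0.01       # survey frequency close to 100 Hz

        def expected_um(axis: int, t: float) -> float:
            """Hypothetical setpoint from the collimator motion profile."""
            return 10.0 * t                # placeholder linear move

        def measured_um(axis: int, t: float) -> float:
            """Hypothetical resolver/LVDT readout for one axis."""
            return 10.0 * t + 8.0          # placeholder 8 um tracking error

        def survey(n_axes: int, cycles: int) -> None:
            t0 = time.monotonic()
            for _ in range(cycles):
                t = time.monotonic() - t0
                for axis in range(n_axes):
                    dev = measured_um(axis, t) - expected_um(axis, t)
                    if abs(dev) > TOLERANCE_UM:
                        raise RuntimeError(
                            f"axis {axis}: {dev:+.1f} um off profile -> interlock")
                time.sleep(PERIOD_S)       # real system: hardware-timed loop

        survey(n_axes=4, cycles=10)        # passes: 8 um is within tolerance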

  • The OPERA Spectrometer Slow Control System

    Publication Year: 2007 , Page(s): 1 - 7

    OPERA is a long-baseline neutrino experiment at the Gran Sasso underground laboratories (LNGS), designed to observe nu-tau appearance in a nu-mu beam shot from CERN. The detector has a modular structure and is composed of two identical supermodules, each consisting of a massive lead/nuclear-emulsion target complemented by electronic detectors and a magnetic spectrometer. The two magnets are instrumented with around 1000 resistive plate chamber (RPC) detectors covering a surface of about 3200 m^2. The slow-control system has been designed to monitor and control all the parameters critical for the proper functioning of the spectrometer. The different hardware (high-voltage power supplies, RPC current meters, RPC and magnet temperature sensors, timing boards) is read out via CANbus connections by several distributed clients. The clients write the data to a relational database (PostgreSQL), which is the heart of the system: it gives persistency to the data and makes it possible to perform correlations useful for debugging possible system malfunctions. Among the various tools (histogramming and XML configuration managers), a controller process checks for possible failures of the system using data from the database and generates warnings/alarms for the shifters.
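    Because the database is the heart of the system, the alarm logic amounts to periodic queries over the latest readings. The sketch below shows what one controller pass might look like; the table layout, channel names and threshold are invented (the abstract does not describe the real schema), and Python's built-in sqlite3 stands in for PostgreSQL to keep the example self-contained.

        import sqlite3

        # Stand-in for the PostgreSQL slow-control database.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE readings (channel TEXT, value REAL, stamp REAL)")
        db.executemany("INSERT INTO readings VALUES (?, ?, ?)", [
            ("HV/RPC/SM1/CH07", 5680.0, 1000.0),   # hypothetical HV reading, volts
            ("HV/RPC/SM1/CH12", 4100.0, 1000.5),   # sagging channel
        ])

        HV_MIN_V = 5400.0   # assumed alarm threshold, not from the paper

        def controller_pass(conn):
            """One controller pass: flag channels whose latest HV is low."""
            rows = conn.execute(
                "SELECT channel, MAX(stamp), value FROM readings "
                "GROUP BY channel").fetchall()
            return [f"WARNING {ch}: HV {v:.0f} V below {HV_MIN_V:.0f} V"
                    for ch, _, v in rows if v < HV_MIN_V]

        for alarm in controller_pass(db):
            print(alarm)   # -> warning for HV/RPC/SM1/CH12 goes to the shifters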

  • Standardized Communication in the Control System of the Experiment WENDELSTEIN 7-X

    Publication Year: 2007 , Page(s): 1 - 6

    The superconducting stellarator experiment W7-X has a control system designed for steady-state as well as pulsed operation. Each technical component and each "diagnostic" system, including its data acquisition, will have its own control system permitting autonomous operation for commissioning and testing. During experimental sessions the activity of these components is coordinated by a Central Control System, and the machine runs more or less automatically with predefined programs. A local control component has a number of internal and external communication interfaces which are necessary for data exchange with the operational management system, the segment control system and the safety system. These interfaces are used to send and receive messages of different types (commands, status information, raw data, and analyzed data). The paper presents the structure of a local control component and discusses its communication interfaces.

  • Remote Operations for LHC and CMS

    Publication Year: 2007 , Page(s): 1 - 6

    Commissioning the Large Hadron Collider (LHC) and its experiments will be a vital part of the worldwide high-energy physics program beginning in 2007. A remote operations center has been built at Fermilab to contribute to commissioning and operations of the LHC and the Compact Muon Solenoid (CMS) experiment, and to develop new capabilities for real-time data analysis and monitoring for LHC, CMS, and grid computing. Remote operations will also be essential to a future International Linear Collider with its multiple, internationally distributed control rooms. In this paper we present an overview of Fermilab's LHC@FNAL remote operations center for LHC and CMS, recount the developments that led to the center, and describe its noteworthy features.

  • Remote Control and Monitoring of Accelerators and Detectors in a Global Facility (GAN/GDN)

    Publication Year: 2007 , Page(s): 1 - 4

    Future accelerators and experiments are operated by large international collaborations, typically by dispersed teams of experts, and the facilities will be operated over many years and even decades. Maintaining a complete staff of experts at the site will be prohibitive, and the burden of regular site visits is greatly eased when experts can take action remotely; the demand for reliable remote monitoring and control is thus paramount. This paper describes a comprehensive solution being provided by the GANMVL collaboration within EuroTeV. An important feature of the tool is that the implementation details are well hidden from the user, in order to increase user acceptance. The tools are based on web browsers granting access to VNC, to VRVS (and in the future EVO), to facility-internal web pages and to cameras, and they allow users to control instruments. They include Single Sign-On authentication so that security requirements can be met. Thanks to standard interfaces, the user need not be concerned with the technology of the instruments. The paper includes a demonstration of the current status of development.

  • CDF Event Monitoring System and Operation

    Publication Year: 2007 , Page(s): 1 - 7

    The foundation of the CDF Run II online event monitoring framework was implemented well before the start of the physics runs, allowing the development of coherent monitoring software across all the subsystems and consequently making maintenance and operation simple and efficient. Only one shift person is needed to monitor the entire CDF detector, including the trigger system. High data quality is assured in real time, and well-defined monitoring results are propagated coherently to offline data sets used for physics analysis. We describe the CDF Run II online event monitoring system and its operation, including the remote monitoring shift operation started in November 2006.

  • Role Based Access Control in the ATLAS Experiment

    Publication Year: 2007 , Page(s): 1 - 6

    The ATLAS experiment relies on a significant number of hardware and software resources. Protecting them against misuse is an essential task to ensure safe and optimal operation. To achieve this goal, the role-based access control (RBAC) model has been chosen for its scalability, flexibility, ease of administration and usability from the lowest operating-system level to the highest software-application level. This paper presents the overall design of the RBAC implementation in the ATLAS experiment and the enforcement solutions in different areas such as system administration, control-room desktops and the data acquisition software. The users and the roles are centrally managed using a directory service based on the Lightweight Directory Access Protocol (LDAP), which is kept in synchronization with the human resources and IT databases.
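    At its core, RBAC separates the user-to-role assignment from the role-to-permission assignment, so administration happens on roles rather than on individuals. A minimal sketch of that decision logic follows; the role and permission names are invented, and in ATLAS the mappings would come from the LDAP directory rather than from in-memory dictionaries.

        # user -> roles and role -> permissions; in the real system these
        # mappings live in an LDAP directory synchronized with HR/IT databases.
        USER_ROLES = {
            "alice": {"shifter"},
            "bob":   {"shifter", "daq-expert"},      # hypothetical role names
        }
        ROLE_PERMISSIONS = {
            "shifter":    {"daq:view"},
            "daq-expert": {"daq:view", "daq:configure", "daq:restart"},
        }

        def is_authorized(user: str, permission: str) -> bool:
            """RBAC decision: does any of the user's roles grant the permission?"""
            return any(permission in ROLE_PERMISSIONS.get(role, set())
                       for role in USER_ROLES.get(user, set()))

        assert is_authorized("bob", "daq:restart")        # expert role grants it
        assert not is_authorized("alice", "daq:restart")  # shifter role does not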

  • Beam Condition Monitoring with Diamonds at CDF

    Publication Year: 2007 , Page(s): 1 - 4

    Particle physics collider experiments at the high-energy frontier are being performed today, and will be in the next decade, in increasingly harsh radiation environments. While designing detector systems adequate for these conditions represents a challenge in itself, their safe operation relies heavily on fast, radiation-hard beam condition monitoring (BCM) systems to protect these expensive devices from beam accidents. We present such a BCM system, based on polycrystalline chemical vapor deposition (pCVD) diamond sensors, designed for the Collider Detector at Fermilab (CDF) experiment operating at Fermilab's Tevatron proton-antiproton synchrotron. We report our operational experience with this system, which was commissioned in spring 2006 and currently represents the largest of its kind operated at a hadron collider. It is similar to designs being pursued by the next generation of hadron collider experiments at the Large Hadron Collider (LHC).

  • A Configurable Interlock System for RF-Stations at XFEL

    Publication Year: 2007 , Page(s): 1 - 4

    The main task of the interlock system is to prevent damage to the costly components of the RF station. The implementation should guarantee maximum uninterrupted operation time, which includes self-diagnostic and repair strategies on a per-module basis. Additional tasks include the collection and temporary storage of status information from individual channels, the transfer of this information to a higher-level control system, and the execution of slow-control functions. The implementation is based on a 4U 19"-crate which houses a controller and different slave modules implementing the interfaces to the components of the RF station. A dedicated, user-defined backplane connects the controller to all slave modules. The controller incorporates a 32-bit NIOS-II RISC processor inside a Cyclone-II FPGA device from Altera. The program running on this processor performs all necessary control and monitoring functions for the slave modules in the crate, but not the interlock function itself. The interlock function is implemented as hardwired logic and keeps working even if the processor stops or the program hangs. On power-up, the software performs a system test to check the hardware functionality and the crate configuration. On success, the interlock hardware is configured for operation and the crate is put into the working state. After initialization, higher-level applications are loaded, covering the communication interface to the control system and a diagnostic interface used during installation and troubleshooting; for the latter, LabVIEW tools are used to present information. In addition, an HTTP server on the interlock controller makes it possible to change the configuration and view current status information, and provides tools to reconfigure the whole FPGA design or upload a new software version via Ethernet.

  • Building Integrated Remote Control Systems for Electronics Boards

    Publication Year: 2007 , Page(s): 1 - 6

    This paper addresses several aspects of implementing a remote control system for a large number of electronics boards in order to perform remote field-programmable gate array (FPGA) programming, hardware configuration, data register access and monitoring, as well as interfacing it to a configuration database and an expert system. The paper presents a common strategy for the representation of the boards in the abstraction layer of the control system, and generic communication protocols for access to the board resources. In addition, an implementation is proposed in which the mapping between the functional parameters and the physical registers of the different boards is represented by descriptors in the board representation, such that the translation can be handled automatically by a generic translation manager. Using the Distributed Information Management (DIM) package for the control communication with the boards and the industrial SCADA system PVSS II from ETM, a complete control system has been built for the Timing and Fast Control (TFC) system of the LHCb experiment at CERN. It has been in use during the entire prototyping of the TFC system and the development of the LHCb sub-detector electronics, and is now installed in the online system of the final experiment.
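    The key idea is that each board type publishes descriptors mapping functional parameters onto physical registers, so a single generic translation manager can serve every board. A sketch of that mechanism (descriptor fields, register addresses and the bus access functions are all hypothetical):

        from dataclasses import dataclass

        @dataclass
        class RegisterDescriptor:
            """Maps one functional parameter onto a physical register field."""
            address: int   # register address on the board (hypothetical)
            shift: int     # bit offset of the field
            mask: int      # field-width mask

        # Per-board-type descriptor table; in the real system this would come
        # from the board representation in the control-system abstraction layer.
        TFC_BOARD = {
            "trigger_rate_limit": RegisterDescriptor(address=0x0040, shift=0, mask=0xFFFF),
            "output_enable":      RegisterDescriptor(address=0x0044, shift=3, mask=0x1),
        }

        class TranslationManager:
            """Generic functional-parameter <-> register translation."""
            def __init__(self, descriptors, bus_read, bus_write):
                self.desc, self.read, self.write = descriptors, bus_read, bus_write

            def set(self, param: str, value: int) -> None:
                d = self.desc[param]
                word = self.read(d.address) & ~(d.mask << d.shift)   # clear field
                self.write(d.address, word | ((value & d.mask) << d.shift))

            def get(self, param: str) -> int:
                d = self.desc[param]
                return (self.read(d.address) >> d.shift) & d.mask

        # A dict stands in for the DIM-based register access to a real board.
        regs = {}
        tm = TranslationManager(TFC_BOARD,
                                bus_read=lambda addr: regs.get(addr, 0),
                                bus_write=regs.__setitem__)
        tm.set("output_enable", 1)
        assert tm.get("output_enable") == 1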

  • TRACE - A System Wide Diagnostic Tool

    Publication Year: 2007 , Page(s): 1 - 3

    TRACE is a system-wide diagnostic tool that allows one to gather timing information with minimal impact on application performance. TRACE supports a variety of architectures under Linux and VxWorks. This utility instruments code easily and is controllable through the /proc file system. It has hooks to be built into a larger monitoring/alarming/debugging framework, and it supports architecture-dependent features such as performance-measurement counters or registers.
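    The low overhead of tools like TRACE typically comes from appending fixed-size records to a preallocated ring buffer on the hot path and deferring all formatting to readout time. The toy below illustrates that pattern only; TRACE itself is C code controlled through the /proc file system, and nothing here is its real interface.

        import time
        from collections import deque

        class RingTrace:
            """Fixed-capacity trace buffer: the oldest entries are overwritten
            and no string formatting happens on the instrumented path."""
            def __init__(self, capacity: int = 1024) -> None:
                self.buf = deque(maxlen=capacity)
                self.enabled = True    # TRACE toggles tracing via /proc instead

            def trace(self, tag: int, arg: int = 0) -> None:
                if self.enabled:                    # cheap check when disabled
                    self.buf.append((time.monotonic_ns(), tag, arg))

            def dump(self):
                # Formatting is deferred until someone reads the trace out.
                return [f"{ts} ns  tag={tag} arg={arg}" for ts, tag, arg in self.buf]

        t = RingTrace()
        t.trace(tag=1)           # e.g. "entered readout loop"
        t.trace(tag=2, arg=42)   # e.g. "processed event", event number
        print("\n".join(t.dump()))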

  • Dynamic Error Recovery in The ATLAS TDAQ System

    Publication Year: 2007 , Page(s): 1 - 5

    This paper describes the new dynamic recovery mechanisms in the ATLAS Trigger and Data Acquisition (TDAQ) system. Their purpose is to minimize the impact that certain errors and failures have on the system. The new recovery mechanisms are capable of analysing and recovering from a variety of errors, both software and hardware, without stopping data-gathering operations. The system incorporates an expert system to analyse the errors and decide what measures are needed. Because of the wide array of sub-systems, there is also a need to optimize the way similar errors are handled across the different sub-systems. The main focus of the paper is the design and implementation of the new recovery mechanisms and how expert knowledge is gathered from the different sub-systems and encoded in the recovery procedures.
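    An expert-system-driven recovery boils down to matching error reports against encoded rules and executing the associated action while the run keeps going. A deliberately small sketch of that shape follows; the error types and actions are invented examples, and the real knowledge base is far richer.

        def restart_app(source: str) -> str:
            return f"restarted application on {source}"

        def disable_channel(source: str) -> str:
            return f"removed {source} from readout, run continues"

        # Expert knowledge gathered from the sub-systems, encoded as rules
        # mapping an error type to a recovery action (hypothetical examples).
        RULES = {
            "app_crash": restart_app,
            "link_down": disable_channel,
        }

        def recover(error_type: str, source: str) -> str:
            action = RULES.get(error_type)
            if action is None:
                return f"no rule for {error_type}: escalate to the operator"
            return action(source)   # recovery without stopping data-taking

        print(recover("app_crash", "HLT-node-0042"))
        print(recover("link_down", "ROS-pixel-07"))
        print(recover("disk_full", "SFO-03"))   # unknown -> operator escalation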

  • The Process Manager in the ATLAS DAQ System

    Publication Year: 2007 , Page(s): 1 - 6
    Cited by:  Papers (1)

    This paper describes the process manager in the ATLAS DAQ system. The purpose of the process manager is to perform basic process control on behalf of the software components of the DAQ system. It is able to create, destroy and monitor the basic status (e.g., running, exited, killed) of software components on the DAQ workstations and front-end processors. Section I gives a brief overview of the process manager functionality. Section II focuses on the requirements the process manager has to fulfil to be fully integrated in the DAQ system. Section III shows how the requirements are met by the current implementation. The communication schema between the different parts of the process manager system, the procedure to launch a process, and the possible states a process can be in are described in Sections IV, V and VI. Section VII deals with some considerations on process manager performance, and conclusions are given in Section VIII.
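    The basic-status model listed above (running, exited, killed) maps directly onto what an operating system reports for a child process. A minimal single-host sketch with Python's standard subprocess module follows; the real process manager is a distributed DAQ service, so this only illustrates the create/monitor/destroy contract (POSIX assumed).

        import subprocess

        class MiniProcessManager:
            """Create, monitor and destroy processes on one host."""
            def __init__(self) -> None:
                self.procs = {}   # name -> subprocess.Popen

            def create(self, name, argv):
                self.procs[name] = subprocess.Popen(argv)

            def status(self, name):
                rc = self.procs[name].poll()
                if rc is None:
                    return "running"
                if rc < 0:                      # terminated by a signal (POSIX)
                    return f"killed (signal {-rc})"
                return f"exited (code {rc})"

            def destroy(self, name):
                p = self.procs[name]
                p.terminate()                   # polite SIGTERM first
                try:
                    p.wait(timeout=5)
                except subprocess.TimeoutExpired:
                    p.kill()                    # escalate to SIGKILL
                    p.wait()

        pm = MiniProcessManager()
        pm.create("worker", ["sleep", "60"])
        print(pm.status("worker"))   # -> running
        pm.destroy("worker")
        print(pm.status("worker"))   # -> killed (signal 15)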

  • Using FPGAs to Generate Gigabit Ethernet Data Transfers and Studies of the Network Performance of DAQ Protocols

    Publication Year: 2007 , Page(s): 1 - 6

    FPGA devices have become common in data acquisition (DAQ) systems for high-energy particle physics experiments. The next generation of DAQ systems will use FPGAs with commercial networking components. The use of FPGAs to generate gigabit Ethernet data streams has been investigated with a Virtex-4 development system sending raw Ethernet packets over both fibre and copper links. Details of the firmware developed to drive the links are presented. Throughput and packet loss over standard Ethernet networks to PCs have been measured using different request-response protocols. Sequential-request and group-request DAQ data collection protocols have been implemented, and initial scaling tests are reported.
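    On the PC side of such tests, raw Ethernet frames can be exchanged directly, below the IP layer. A sketch of sending one raw frame on Linux follows (root privileges are required; the interface name, MAC addresses and EtherType are placeholders, and the paper's own traffic generator runs in FPGA firmware, not on a PC):

        import socket

        IFACE = "eth0"                            # placeholder interface name
        DST = bytes.fromhex("0002c9000001")       # placeholder FPGA MAC address
        SRC = bytes.fromhex("000102030405")       # placeholder host MAC address
        ETHERTYPE = (0x88B5).to_bytes(2, "big")   # local experimental EtherType

        # AF_PACKET/SOCK_RAW sends complete Ethernet frames (Linux only).
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
        s.bind((IFACE, 0))

        payload = b"\x00" * 46                    # pad to the 60-byte minimum
        frame = DST + SRC + ETHERTYPE + payload
        s.send(frame)                             # one raw request on the wire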

  • The Terabit/s Super-Fragment Builder and Trigger Throttling System for the Compact Muon Solenoid Experiment at CERN

    Publication Year: 2007 , Page(s): 1 - 6

    The data acquisition system of the Compact Muon Solenoid experiment at the Large Hadron Collider reads out event fragments of an average size of 2 kilobytes from around 650 detector front-ends at a rate of up to 100 kHz. The first stage of event building is performed by the Super-Fragment Builder, employing custom-built electronics and a Myrinet optical network. It reduces the number of fragments by one order of magnitude, thereby greatly decreasing the requirements for the subsequent event-assembly stage. By providing fast feedback from any of the front-ends to the trigger, the trigger throttling system prevents buffer overflows in the front-end electronics due to variations in the size and rate of events or due to backpressure from the downstream event building and processing. This paper reports on the recent successful integration of a scaled-down setup of the described system with the trigger and with front-ends of all major sub-detectors, and discusses the ongoing commissioning of the full-scale system.
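    Trigger throttling is essentially flow control: front-ends report their buffer occupancy, and the trigger is inhibited before any buffer can overflow. A toy sketch of watermark logic with hysteresis follows; the buffer depth and thresholds are invented numbers, not CMS parameters.

        BUFFER_DEPTH = 1024     # hypothetical front-end buffer, in fragments
        HIGH_WATERMARK = 0.80   # throttle above 80% occupancy
        LOW_WATERMARK = 0.50    # re-enable only once drained below 50%

        def trigger_enabled(occupancies, currently_enabled):
            """Hysteresis on the fullest buffer in the system."""
            worst = max(occupancies) / BUFFER_DEPTH
            if currently_enabled:
                return worst < HIGH_WATERMARK
            return worst < LOW_WATERMARK

        state = True
        for occ in ([100, 200, 300], [400, 850, 300], [400, 600, 300], [100, 90, 80]):
            state = trigger_enabled(occ, state)
            print(max(occ), "->", "triggering" if state else "throttled")
        # 850/1024 = 83% trips the throttle; it stays off until below 50%.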

  • Performance of the final Event Builder for the ATLAS Experiment

    Publication Year: 2007 , Page(s): 1 - 6

    Event data from proton-proton collisions at the LHC will be selected by the ATLAS experiment in a three-level trigger system, which reduces the initial bunch-crossing rate of 40 MHz at its first two trigger levels (LVL1+LVL2) to ~3 kHz. At this rate the Event Builder collects the data from all read-out system PCs (ROSs) and provides fully assembled events to the event filter (EF), the third trigger level, which achieves a further rate reduction to ~200 Hz for permanent storage. The Event Builder is based on a farm of O(100) PCs, interconnected via gigabit Ethernet to O(150) ROSs. These PCs run Linux and multi-threaded software applications implemented in C++. All the ROSs and one third of the Event Builder PCs are already installed and commissioned. We report on performance tests of this initial system, which show promising results for reaching the final data throughput required for the ATLAS experiment.

  • Mobile-Host-Centric Transport Protocol for EAST Experiment

    Publication Year: 2007 , Page(s): 1 - 8

    Some physics researchers retrieve EAST experiment data using TCP in a wireless local area network (WLAN). TCP is the most commonly used transport control protocol. It assumes that every packet loss is caused by network congestion and invokes congestion control and avoidance. TCP's blind congestion control results in degraded performance in lossy wireless networks. In a wireless network, mobile hosts have first-hand knowledge of the lossy wireless links; mobile stations can therefore make better transmission-control decisions based on the known status of the wireless link. In this paper, we propose a new mobile-host-centric transport protocol (MCP) that integrates the characteristics of sender-centric and receiver-centric transport control schemes. MCP shifts most control policies to the mobile-host side. The general behavior of MCP is similar to TCP, but by utilizing local information collected at the mobile node in the WLAN, MCP allows for better congestion control and loss recovery. Specifically, we designed a cross-layer implementation of MCP and ran it in NS-2. With MAC (Medium Access Control) layer feedback, MCP is able to distinguish packet loss caused by wireless random errors from loss caused by network congestion, and can perform more accurate congestion control. We ran extensive simulations of MCP in various WLAN scenarios, and the results show that the throughput of MCP with cross-layer feedback is higher than that of TCP Reno and Westwood.
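    The heart of the scheme is the loss-classification step: only congestion losses should shrink the congestion window, while losses the MAC layer attributes to the radio link are simply retransmitted. A schematic of that decision follows; the abstract does not specify MCP's actual state machine or MAC feedback interface, so this is a plausible reconstruction.

        def on_packet_loss(cwnd: float, mac_link_errors: bool) -> float:
            """React to a detected loss using cross-layer MAC feedback.

            Standard TCP always treats a loss as congestion and halves
            cwnd; MCP-style control consults the MAC layer first.
            """
            if mac_link_errors:
                # Wireless random error: the path is not congested, so keep
                # the sending rate and just retransmit the lost segment.
                return cwnd
            # No link-error indication: assume real congestion and back off.
            return max(cwnd / 2.0, 1.0)

        cwnd = 32.0
        cwnd = on_packet_loss(cwnd, mac_link_errors=True)    # stays 32.0
        cwnd = on_packet_loss(cwnd, mac_link_errors=False)   # halves to 16.0
        print(cwnd)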

  • Effects of Adaptive Wormhole Routing in Event Builder Networks

    Publication Year: 2007 , Page(s): 1 - 7

    The data acquisition system of the CMS experiment at the Large Hadron Collider features a two-stage event builder, which combines data from about 500 sources into full events at an aggregate throughput of 100 GByte/s. To meet the requirements, several architectures and interconnect technologies have been quantitatively evaluated. Both gigabit Ethernet and Myrinet networks will be employed during the first run. Nearly full bisection throughput can be obtained using a custom software driver for Myrinet based on barrel-shifter traffic shaping. This paper discusses the use of Myrinet dual-port network interface cards supporting channel bonding to achieve virtual 5 Gbit/s links with adaptive routing, in order to alleviate the throughput limitations associated with wormhole routing. Adaptive routing is not expected to be suitable for high-throughput event-builder applications in high-energy physics. To corroborate this claim, results from the CMS event builder pre-series installation at CERN are presented, and the problems of wormhole-routing networks are discussed.
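    Barrel-shifter traffic shaping avoids output contention by construction: in time slot t, source i sends only to destination (i + t) mod N, so every destination receives from exactly one source per slot. A tiny demonstration of the schedule:

        N = 4   # sources and destinations in one event-builder stage

        def barrel_shifter_schedule(n):
            """Yield, per time slot, the contention-free source->destination pairing."""
            for t in range(n):
                yield t, [(src, (src + t) % n) for src in range(n)]

        for slot, pairs in barrel_shifter_schedule(N):
            dests = [d for _, d in pairs]
            assert len(set(dests)) == N   # no two sources share a destination
            print(f"slot {slot}: " + ", ".join(f"{s}->{d}" for s, d in pairs))
        # After N slots every source has sent one fragment to every destination:
        # exactly the all-to-all pattern event building requires.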

  • CMS DAQ Event Builder Based on Gigabit Ethernet

    Publication Year: 2007 , Page(s): 1 - 5
    Cited by:  Papers (1)

    The CMS Data Acquisition System is designed to build and filter events originating from 476 detector data sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage will be a set of event builders called FED Builders. These will be based on Myrinet technology and will pre-assemble groups of about 8 data sources. The second stage will be a set of event builders called Readout Builders. These will perform the building of full events. A single Readout Builder will build events from 72 sources of 16 kB fragments at a rate of 12.5 kHz. In this paper we present the design of a Readout Builder based on TCP/IP over gigabit Ethernet and the optimization that was required to achieve the design throughput. This optimization includes the architecture of the Readout Builder, the tuning of TCP/IP, and hardware selection.
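    The quoted figures fix the sustained throughput one Readout Builder must move, a useful sanity check before choosing the fabric. A quick back-of-the-envelope calculation (taking 1 kB as 10^3 bytes; with 2^10 the numbers shift by a few percent):

        sources = 72            # inputs per Readout Builder
        fragment_bytes = 16e3   # 16 kB fragments
        rate_hz = 12.5e3        # events per second per Readout Builder

        event_bytes = sources * fragment_bytes   # bytes per assembled event
        throughput = event_bytes * rate_hz       # bytes per second, sustained
        print(f"{event_bytes/1e6:.2f} MB/event, {throughput/1e9:.1f} GB/s")
        # -> 1.15 MB/event and 14.4 GB/s, far beyond one gigabit Ethernet link
        #    (~0.125 GB/s), hence many parallel links and careful TCP/IP tuning.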

  • A framework for constructing adaptive and reconfigurable systems

    Publication Year: 2007 , Page(s): 1 - 3

    This paper presents a software approach to augmenting existing real-time systems with self-adaptation capabilities. In this approach, based on the control loop paradigm commonly used in industrial control, self-adaptation is decomposed into observing system events, inferring necessary changes based on a functional model, and activating appropriate adaptation procedures. The solution adopts an architectural decomposition that emphasizes independence and separation of concerns. It encapsulates observation, analysis and correction into separate modules to allow for easier customization of the adaptive behaviors and flexibility in selecting implementation technologies.
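    The decomposition described here is the classic observe-analyze-act control loop with each stage behind its own interface. A skeletal sketch of that architecture follows; the module names, the metric and the example rule are all invented.

        from typing import Optional

        class Observer:
            """Collects system events (stub: yields one fixed sample)."""
            def events(self):
                yield {"metric": "queue_depth", "value": 950}

        class Analyzer:
            """Infers necessary changes from a functional model of the system."""
            def plan(self, event) -> Optional[str]:
                if event["metric"] == "queue_depth" and event["value"] > 900:
                    return "add_consumer"     # hypothetical adaptation
                return None

        class Actuator:
            """Activates adaptation procedures, isolated from the analysis."""
            def apply(self, action: str) -> None:
                print(f"adaptation: {action}")

        def control_loop(obs, ana, act):
            # Observation, analysis and correction live in separate modules,
            # so each can be replaced without touching the other two.
            for event in obs.events():
                action = ana.plan(event)
                if action:
                    act.apply(action)

        control_loop(Observer(), Analyzer(), Actuator())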
