
16th IEEE-NPSS Real Time Conference (RT '09), 2009

Date: 10-15 May 2009


Displaying Results 1 - 25 of 120
  • [Title page]

    Publication Year: 2009, Page(s): 1
    Freely Available from IEEE
  • [Copyright notice]

    Publication Year: 2009, Page(s): 1
    Freely Available from IEEE
  • Conference overview

    Publication Year: 2009, Page(s): i - v
    Freely Available from IEEE
  • [Front cover]

    Publication Year: 2009, Page(s): c1
    Freely Available from IEEE
  • Table of contents

    Publication Year: 2009, Page(s): 1 - 8
    Freely Available from IEEE
  • ITER CODAC status and implementation plan

    Publication Year: 2009, Page(s): 1 - 6
    Cited by: Papers (1)

    CODAC (Control, Data Access and Communication) is the central control system responsible for operating the ITER device. CODAC interfaces to more than 160 plant systems containing actuators, sensors and local control. CODAC is responsible for coordinating and orchestrating the operation of these plant systems, including plasma feedback control. CODAC is developed by the ITER Organization, while the plant systems are developed by the seven ITER parties (China, Europe, India, Japan, Korea, Russia and the United States). This procurement model poses enormous challenges, has a big impact on the architecture design and requires strong standardization for better integration and future maintenance. In this paper we briefly describe the CODAC conceptual design, elaborate on the actions taken by the CODAC team to move from conceptual to engineering design during the last year, and outline the plans ahead.

  • Advances in developing next-generation electronics standards for physics

    Publication Year: 2009, Page(s): 7 - 15
    Cited by: Papers (5)

    The Advanced Telecom Computing Architecture (ATCA) open standard, developed by an industry consortium, is beginning to find new applications in non-telecom fields including accelerators, HEP detectors, medical physics, astrophysics, fusion and similar areas. At the same time, the broad physics community needs to modernize the capabilities of its standard platforms for the future. This paper describes the formation of a lab-industry collaborative effort to extend the ATCA specifications into the physics field for greater reliability and availability of next-generation machines and detectors, to improve the interoperability of instruments developed at different laboratories, and to take advantage of the potentially broad base of ATCA industry support for physics products.

  • Machine protection system (MPS) for the XFEL

    Publication Year: 2009, Page(s): 16 - 21
    Cited by: Papers (2)

    For the operation of a machine like the 3 km long linear accelerator XFEL at DESY Hamburg, a safety system keeping the beam from damaging components is obligatory. This machine protection system (MPS) must detect failures of the RF system, magnets and other critical components in various sections of the XFEL, as well as monitor beam and dark-current losses, and react in an appropriate way by limiting the average beam power, dumping parts of the macropulse or, in the worst case, shutting down the whole accelerator. It has to consider the influence of the various machine modes selected by the timing system. The MPS provides the operators with clear indications of error sources and offers the possibility to mask any input channel to facilitate the operation of the machine. In addition, redundant installation of critical MPS components will help to avoid unnecessary downtime. This document summarizes the requirements on the machine protection system and includes plans for its architecture and for the needed hardware components.

  • MicroTCA implementation of synchronous Ethernet-Based DAQ systems for large scale experiments

    Publication Year: 2009, Page(s): 22 - 27
    Cited by: Papers (1)

    Large LAr TPCs are among the most powerful detectors to address open problems in particle and astro-particle physics, such as CP violation in the leptonic sector, neutrino properties and their astrophysical implications, proton decay searches, etc. The scale of such detectors implies severe constraints on their readout and DAQ systems. In this article we describe a data acquisition scheme for this new generation of large detectors. The main challenge is to propose a scalable and easy-to-use solution able to manage a large number of channels at the lowest cost. It is interesting to note that these constraints are very similar to those existing in the network telecommunication industry. We propose to study how emerging technologies like ATCA and MicroTCA could be used in neutrino experiments. We describe the design of an Advanced Mezzanine Card (AMC) including 32 ADC channels. This board receives 32 analog channels at the front panel and sends the formatted data through the MicroTCA backplane using a Gigabit Ethernet link. The gigabit switch of the MCH is used to centralize the data and send them to the event-building computer. The core of this card is an FPGA (Arria GX from Altera) containing the whole system except the memories. A hardware accelerator has been implemented using a Nios II soft processor and a Gigabit MAC IP. Obviously, in order to be able to reconstruct the tracks from the events, a time synchronisation system is mandatory. We decided to implement the IEEE 1588 standard, also called Precision Time Protocol (PTP), another emerging and promising technology in the telecommunication industry. In this article we describe a Gigabit PTP implementation using the recovered clock of the gigabit link. By doing so the clock drift is directly cancelled and PTP is used only to evaluate and correct the offset.

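    For reference, the residual correction that PTP still has to provide once the node clocks are syntonized by the recovered gigabit clock is the clock offset obtained from the standard IEEE 1588 delay-request/response exchange. With the usual four timestamps (t1: Sync sent by the master, t2: Sync received by the slave, t3: Delay_Req sent by the slave, t4: Delay_Req received by the master) and assuming a symmetric link delay:

        \text{offset} = \frac{(t_2 - t_1) - (t_4 - t_3)}{2},
        \qquad
        \text{delay} = \frac{(t_2 - t_1) + (t_4 - t_3)}{2}

    Because the slave is frequency-locked to the recovered link clock, the drift term vanishes and this offset is the only quantity PTP needs to evaluate and subtract, which matches the scheme outlined in the abstract above.
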
  • ATCA advanced control and data acquisition systems for fusion experiments

    Publication Year: 2009, Page(s): 28 - 34
    Cited by: Papers (6)

    The next generation of large-scale physics experiments will raise new challenges in the field of control and automation systems and will demand a well-integrated, interoperable set of tools with a high degree of automation. Fusion experiments will face similar needs and challenges. In nuclear fusion experiments, e.g. JET and other devices, the demand has been to develop front-end electronics with large output bandwidth and data processing capability, multiple-input-multiple-output (MIMO) controllers with efficient resource sharing between control tasks on the same unit, and massive parallel computing capabilities. Future systems, such as ITER, are envisioned to be more than an order of magnitude larger than those of today. Fast-control plant systems based on embedded technology, with higher sampling rates and more stringent real-time requirements (feedback loops with sampling rates > 1 kHz), will be demanded. Furthermore, in ITER it is essential to ensure that loss of control is a very unlikely event; it will thus be even more challenging to provide robust, fault-tolerant, reliable, maintainable, secure and operable control systems. ATCA is the most promising architecture to substantially enhance the performance and capability of existing standard systems, providing high throughput as well as high availability. Leveraging ongoing activities at European fusion facilities, e.g. JET and COMPASS, this contribution details the control and data acquisition needs and challenges of the fusion community, justifies the option for the ATCA standard and, in the process, builds up the case for establishing ATCA as an instrumentation standard.

  • Intelligent Platform Management Controller for ATCA Compute Nodes

    Publication Year: 2009, Page(s): 35 - 37
    Cited by: Papers (7)

    A main building block of the trigger and data acquisition system of the PANDA experiment at FAIR will be FPGA-based Compute Nodes in the ATCA (Advanced Telecom Computing Architecture) standard. As ATCA relies on the Intelligent Platform Management Interface (IPMI), a dedicated controller is required on each board. A custom implementation of such a controller, a microcontroller-based add-on card, is presented.

  • Overview of the T2K Fine Grained Detector data acquisition at J-Parc

    Publication Year: 2009, Page(s): 38 - 42

    The Fine Grained Detector (FGD) is one of the sub-systems of the T2K experiment at J-PARC. The trajectories of charged particles produced by neutrinos interacting within the FGD will be reconstructed and used, together with other detectors, to determine the neutrino interaction point and the neutrino energy. The FGD also provides some particle identification capability by measuring energy loss. The FGD consists of 8448 scintillator bars read out by wavelength-shifting fibers coupled to Multi-Pixel Photon Counters (MPPCs). The active volume of the FGD is surrounded by three types of boards: Front-end Boards (FEB), which use the AFTER switched capacitor array (SCA) ASIC for waveform recording; Crate Master Boards (CMB) for local data collection; and Light Pulser Boards for calibration. The data are sent over optical links to the Data Concentrator Cards (DCC) and then over an Ethernet link to the back-end computer. The FGD can generate a trigger, which is sent through LVDS links to the main trigger module. The monitoring of all the on-detector electronics is managed by means of an independent data path using the MIDAS Slow-Control Bus (MSCB).

  • Architecture and implementation of the front-end electronics of the time projection chambers in the T2K experiment

    Publication Year: 2009, Page(s): 43 - 48

    The tracker of the near detector in the T2K neutrino oscillation experiment comprises three time projection chambers based on micro-pattern gaseous detectors. A new readout system is being developed to amplify, condition and acquire in real time the data produced by the 124,000 detector channels. The cornerstone of the system is a 72-channel application-specific integrated circuit based on a switched capacitor array. Using analog memories combined with digitization deferred in time reduces the initial burstiness of the traffic from 50 Tbps to 400 Gbps in a practical manner and with a very low power budget. Modern field programmable gate arrays coupled to commercial digital memories are the next elements in the chain. Multi-gigabit optical links provide 140 Gbps of aggregate bandwidth to carry data out of the magnet surrounding the detector to concentrator cards that pack the data and provide the interface to commercial PCs via a standard Gigabit Ethernet network. We describe the requirements and constraints for this application and justify our technical choices. We detail the design and the performance of several key elements and show the deployment of the front-end electronics on the first time projection chamber, where the final tests before installation on site are being conducted.

  • A design for large-area fast photo-detectors with transmission-line readout and waveform sampling

    Publication Year: 2009, Page(s): 49 - 61
    Cited by: Papers (1)

    We present a preliminary design and simulation results for a photo-detector module to be used in applications requiring the coverage of areas of many square meters with time resolutions below 10 picoseconds and position resolutions below a millimeter for charged particles. The source of light is Cherenkov light in a radiator/window; the amplification is provided by panels of micro-pores functionalized to act as microchannel plates (MCPs). The good time and position resolution stems from the use of an array of parallel 50 Ω transmission lines (strips) as the collecting anodes. The anode strips feed multi-GS/s sampling chips which digitize the pulse waveform at each end of the strip, allowing a measurement of the time from the average of the two ends and a two-dimensional position measurement from the difference of the times on a strip and, in the orthogonal direction, the strip number or a centroid of the charges deposited on adjacent strips. The module is designed so that large areas can be 'tiled' by an array of modules.

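    To make the two-end readout explicit: for a hit at position x along a strip of length L with signal propagation speed v, the pulse reaches the two ends at t_1 = t_0 + x/v and t_2 = t_0 + (L - x)/v, so, under these idealized assumptions (ours, not the authors'),

        t_0 = \frac{t_1 + t_2}{2} - \frac{L}{2v},
        \qquad
        x = \frac{L}{2} + \frac{v\,(t_1 - t_2)}{2}

    The hit time therefore depends only on the sum of the two arrival times and the position along the strip only on their difference, which is the property the abstract exploits; the second coordinate comes from the strip number or a charge centroid across neighbouring strips.
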
  • Position sensing using pico-second timing with Micro-Channel Plate devices and waveform sampling

    Publication Year: 2009, Page(s): 62 - 69

    Micro-Channel Plate devices (MCPs) provide fast signals with rise times in the 100 ps range, allowing a measurement of the time of arrival of light pulses with an accuracy of a few picoseconds using either constant-fraction discrimination followed by time-to-digital conversion, or waveform sampling processed after digitization. Coupling the MCP anode plane to 50 Ω transmission lines allows both a precise time measurement and, in addition, a position measurement with an accuracy of a few hundred microns for large-area devices. We present position measurement results obtained with MCPs from Photonis and 10-cm transmission lines, using a calibrated laser source followed by sampling, digitization and waveform analysis. A detailed simulation of the transmission lines is presented. We compare the measured position resolution for two identical setups with MCPs of different pore diameters: 25 and 10 microns. Simulation results are also presented for transmission lines up to 1 m in length.

  • The design and initial testing of the beam phase and energy measurement system for DTL in the proton accelerator of CSNS

    Publication Year: 2009, Page(s): 70 - 75

    The China Spallation Neutron Source (CSNS) is currently in its research and design phase, and the proton accelerator is an important part of it. The beam phase and energy measurement system takes the signal from the Drift Tube Linac (DTL) and calculates its phase and energy, which are fed back to tune the beam. The signals received from the fast current transformers (FCT) are modulated pulses of high frequency (the repetition rate is 352.2 MHz for ADS and 324 MHz for CSNS, while the leading edge is only hundreds of ps), and the dynamic range of the signal amplitude varies from 20 mV to 900 mV (peak-to-peak, before transmission through cables); therefore, special techniques are required to obtain the phase information. Corresponding simulations and initial tests have been conducted, with the result of a phase resolution better than 0.1 degree over a 57 dB input signal amplitude range (with a bandwidth of 367 kHz).

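    As background to how a measured phase relates to beam energy in a linac, the standard two-pickup time-of-flight method (the paper's exact procedure may differ) converts the phase difference Δφ between two FCTs separated by a distance L into a flight time and hence into the relativistic velocity and kinetic energy:

        \Delta t = \frac{1}{f_{\mathrm{RF}}}\left(\frac{\Delta\phi}{2\pi} + n\right),
        \qquad
        \beta = \frac{L}{c\,\Delta t},
        \qquad
        E_k = \left(\frac{1}{\sqrt{1-\beta^{2}}} - 1\right) m_p c^{2}

    Here f_RF is the bunch repetition frequency, n the integer number of whole RF periods of flight (resolved from the nominal energy), and m_p c^2 = 938.272 MeV the proton rest energy; the quoted 0.1 degree phase resolution then sets the achievable energy resolution through this chain.
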
  • Analysis of the ATLAS RPCs ROD timing performance with an embedded microprocessor

    Publication Year: 2009, Page(s): 76 - 83

    The readout data of the ATLAS RPC Muon Spectrometer are collected by the front-end electronics and transferred via optical fibres to the Read Out Driver (ROD) boards in the counting room. Each ROD arranges all the data fragments of one sector of the spectrometer into a single event. This is done by the Event Builder logic, a cluster of finite state machines that parses the fragments, checks their syntax and builds an event containing all the sector data. In this paper we describe the Builder Monitor, developed to analyze the timing performance of the Event Builder. It is designed around a 32-bit soft-core microprocessor embedded in the same FPGA hosting the Builder logic. This approach makes it possible to track the algorithm execution in the field. The Monitor performs real-time and statistical analysis of the state machine dynamics. The microprocessor is interfaced with custom peripherals which read out the state registers, fill histograms and transfer them via DMA to the processor memory. The Builder Monitor also measures the elapsed time and length of each event and keeps track of status and error words. We describe the hardware-software co-design of the Builder Monitor and the role played by the custom peripherals.

  • Emulating the GLink chip-set with FPGA serial transceivers in the ATLAS Level-1 Muon trigger

    Publication Year: 2009, Page(s): 84 - 88

    Many High Energy Physics experiments have based their serial links on the Agilent HDMP-1032/34A serializer/deserializer chip-set (GLink). This success was mainly due to the fact that this pair of chips transfers data at ~1 Gb/s with a deterministic latency, fixed after each power-up or reset of the link. Despite this unique timing feature, Agilent discontinued production, and no compatible commercial off-the-shelf chip-sets are available. The ATLAS Level-1 Muon trigger includes serial links based on GLink to transfer data from the detector to the counting room. The transmission side of the links will not be upgraded; however, a replacement for the receivers in the counting room is needed in case of failures. In this paper we present a solution to replace GLink transmitters and receivers. Our design is based on the gigabit serial transceivers (GTP) embedded in a Xilinx Virtex-5 Field Programmable Gate Array (FPGA). We present the architecture and discuss implementation parameters such as latency and resource occupation. We compare the GLink chip-set and the GTP-based emulator in terms of latency, eye diagram and power dissipation.

  • Parallel computer with 10GBit data links

    Publication Year: 2009, Page(s): 89 - 90

    A massively parallel computer is being built in a collaboration between IBM, German and Italian universities and FZ Juelich. Each computing node consists of an enhanced Cell processor and a network processor which provides six bidirectional communication links. The communication links are based on 10 Gbit Ethernet technology. The computer will be used for QCD calculations, but other applications are being considered.

  • Receiver assistant congestion control in high speed and lossy networks

    Publication Year: 2009, Page(s): 91 - 95
    Cited by: Papers (1)

    Many applications require fast data transfer over high-speed wireless networks. A representative example is that EAST experiment data are retrieved by physics researchers using TCP (transmission control protocol). However, due to the limitations of its conservative congestion control algorithm, TCP cannot effectively utilize the network capacity. Furthermore, TCP assumes that every packet loss is caused by network congestion and invokes congestion control and avoidance. TCP's blind congestion control aggravates the performance degradation in high-speed and lossy wireless networks. In this paper, we propose a receiver-assistant congestion control mechanism (RACC), in which the sender still performs loss-based control, while the receiver performs delay-based control. The receiver measures the network bandwidth based on the inter-packet delay gaps, computes an appropriate congestion window size from the measured bandwidth, and feeds the value back to the sender. The sender adjusts the congestion window size based on the value reported by the receiver and the AIMD (additive-increase multiplicative-decrease) mechanism. By integrating loss-based and delay-based congestion control, our mechanism can mitigate the effect of wireless losses, alleviate the timeout effect, and therefore make better use of the network bandwidth. Simulation results in various scenarios show that our mechanism performs better than conventional TCP in high-speed and lossy wireless environments.

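    A minimal Python sketch of the receiver-assisted idea described in this abstract; the EWMA smoothing, the bandwidth-delay-product window and the way the receiver's value caps the AIMD window are illustrative assumptions, not the authors' algorithm:

        # Receiver side: delay-based bandwidth estimate from inter-packet gaps.
        class ReceiverEstimator:
            def __init__(self, alpha=0.9):
                self.alpha = alpha            # EWMA smoothing factor (assumed)
                self.bw_estimate = 0.0        # bytes per second
                self.last_arrival = None

            def on_packet(self, arrival_time, packet_bytes):
                if self.last_arrival is not None:
                    gap = arrival_time - self.last_arrival
                    if gap > 0:
                        sample = packet_bytes / gap
                        self.bw_estimate = (self.alpha * self.bw_estimate
                                            + (1 - self.alpha) * sample)
                self.last_arrival = arrival_time

            def advertised_cwnd(self, rtt, mss=1460):
                # Suggested window: bandwidth-delay product, in segments.
                return max(1, int(self.bw_estimate * rtt / mss))

        # Sender side: classic loss-based AIMD, combined with the receiver's
        # delay-based suggestion (here used as an upper bound on the window).
        def sender_update(cwnd, advertised, loss_detected):
            if loss_detected:
                cwnd = max(1, cwnd // 2)      # multiplicative decrease
            else:
                cwnd += 1                     # additive increase (per RTT)
            return min(cwnd, advertised)

    Using the receiver's bandwidth estimate to bound the window is one possible way to combine the two controls; the paper's simulations evaluate its own specific combination.
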
  • Characterizing jitter performance of multi gigabit FPGA-embedded serial transceivers

    Publication Year: 2009, Page(s): 96 - 101

    High-speed serial links are a key component of data acquisition systems for High Energy Physics. They carry physics event data and often also clock, trigger and fast control signals. For the latter applications, the jitter on the clock recovered from the serial stream is a critical parameter, since it directly affects the timing performance of data acquisition and trigger systems. The latest Field Programmable Gate Arrays (FPGAs) include multi-gigabit serial transceivers, which are configurable with various options and support many data encodings. However, an in-depth jitter characterization of those devices is not yet available. In this paper we present measurements of the jitter on the clock recovered by a GTP transceiver (embedded in a Xilinx Virtex-5 FPGA) as a function of the data pattern, the coding and the logic activity on the transmitter and receiver FPGAs.

  • Online equilibrium reconstruction for EAST plasma discharge

    Publication Year: 2009, Page(s): 102 - 105
    Cited by: Papers (1)

    The online equilibrium reconstruction, based on the offline version of the EFIT code and the MPI library, can finish the calculation between two pulses. It combines online data acquisition, parallel calculation and data storage. The program on the master node of the cluster actively detects the end of the shot, reads the diagnostic data from the EAST MDSplus server once data storage ends, and writes the results to the EFIT MDSplus server. These processes run automatically on an IBM blade center with 9 nodes. The total time ranges from about 1 second to several minutes, depending on the length of the plasma discharge. With the results stored in the MDSplus server, it is convenient for operators and physicists to analyze the status of the plasma discharge using visualization tools.

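    A minimal Python sketch of this shot-driven workflow, assuming the standard MDSplus thin-client Python API; the host names, tree names, node paths, the end-of-shot heuristic and the run_parallel_efit() helper are hypothetical placeholders:

        import time
        from MDSplus import Connection

        def run_parallel_efit(diagnostics):
            # Placeholder for the MPI-parallel EFIT calculation (not shown here).
            return {}

        def reconstruct(shot):
            east = Connection('mds.east.example')             # hypothetical host
            east.openTree('east', shot)
            diagnostics = {
                'ip': east.get('\\PCS_EAST::TOP:IP').data(),  # hypothetical node path
            }
            results = run_parallel_efit(diagnostics)

            efit = Connection('mds.efit.example')             # hypothetical host
            efit.openTree('efit_east', shot)
            for node, value in results.items():
                efit.put(node, '$', value)                    # store reconstruction outputs

        def main(poll_seconds=10):
            last_shot = None
            while True:
                conn = Connection('mds.east.example')
                shot = int(conn.get('current_shot("east")'))  # TDI call for the latest shot
                # Naive end-of-shot heuristic: when the shot counter advances,
                # the previous shot has finished storing and can be processed.
                if last_shot is not None and shot != last_shot:
                    reconstruct(last_shot)
                last_shot = shot
                time.sleep(poll_seconds)
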
  • Handling online information in the LHCb experiment

    Publication Year: 2009, Page(s): 106 - 109

    The LHCb experiment is a complex particle physics detector; a large amount of information is needed for its run-time configuration, control and monitoring. All these data are stored in the three main logical databases of the online system: the configuration database, the archiving database and the conditions database. The configuration database contains the information needed to configure the online hardware and software components, such as electronics boards, high-voltage and low-voltage power supplies and trigger algorithms, according to the partitioning mode (which components are needed) and the running mode (which data are being produced: physics, cosmic, test, etc.). The archiving database contains all data read from hardware that are used for monitoring and debugging the experiment, for example temperature readings. The third online database, the conditions database, contains a subset of the monitoring data read from hardware that are needed for physics processing, as well as some configuration data, for example the trigger settings. The interfaces to these databases have been developed as a component of the LHCb control framework and are based on a SCADA (supervisory control and data acquisition) system called PVSS II. The implemented solution is explained in detail, from the motivation of all choices, in terms of design and implementation, to the integration of the databases in the online system.

  • Slow Control System for a NEXT-TPC prototype

    Publication Year: 2009, Page(s): 110 - 112

    NEXT is a double beta decay experiment that will be installed in the Canfranc Underground Laboratory. The precise monitoring of a number of variables, such as temperatures, relative and absolute pressures, gas flows, high voltage, etc., is required for the proper operation of the detector and to perform the adequate data corrections. For this purpose a complete Slow Control System based on LabVIEW has been developed and is now in operation for TPC studies and characterization. Each sensor can be connected to a main PLC (Programmable Logic Controller) via Ethernet. The PLC provides modularity and allows the number of sensors to be increased if needed, while the Ethernet port offers a flexible interface for remote control and monitoring.

  • Comparing the performance of EPICS Channel Access with a new implementation based on ICE (the Internet Communications Engine)

    Publication Year: 2009, Page(s): 113 - 116
    Cited by: Papers (1)

    The current Experimental Physics and Industrial Control System (EPICS) Channel Access (CA) protocol has been developed and used in the EPICS community for nearly 20 years. It has the advantages of stability and high performance. However, it is hard to maintain and extend: despite being open source, in practice only the original author makes changes to it. The argument for a replacement based on modern communication technology is compelling. The Internet Communications Engine (ICE) is a widely used communications middleware platform. This paper presents the implementation of a replacement for EPICS Channel Access based on ICE, together with a performance comparison with the original.
