
IEEE Transactions on Nuclear Science

Issue 2 • Date April 2002


Displaying Results 1 - 25 of 38
  • An online neural network triggering system for the Tile Calorimeter

    Publication Year: 2002 , Page(s): 369 - 376

    For the hadronic calorimeter of ATLAS, TileCal, neural processing is used to establish an efficient methodology for online particle identification in beam tests of calorimeter prototypes. Although beam purity is usually very good for a selected particle type, background from wrong-type particles cannot be avoided and is routinely identified in the offline analysis. The proposed neural system is trained online to identify electrons, pions, and muons at different energy levels, and it achieves more than 90% particle-identification efficiency. The neural system is being implemented by integrating it into the readout driver (ROD) of the TileCal.

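The online identification scheme described above can be illustrated with a toy feed-forward classifier. This is a sketch only, not the TileCal implementation: the network shape, the eight input features, and the random weights are all assumptions; the real system is trained online on beam-test data.

```python
# Illustrative sketch: a minimal feed-forward network that maps
# calorimeter energy-deposit features to class probabilities for
# {electron, pion, muon}. Weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class MLP:
    """One-hidden-layer classifier: features -> three particle classes."""
    def __init__(self, n_in, n_hidden, n_out):
        self.w1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.w2 = rng.normal(0.0, 0.1, (n_out, n_hidden))

    def classify(self, x):
        h = np.tanh(self.w1 @ x)          # hidden layer
        return softmax(self.w2 @ h)       # class probabilities

net = MLP(n_in=8, n_hidden=6, n_out=3)
probs = net.classify(np.ones(8))
print(probs)   # three probabilities summing to 1
```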
  • Conference author index

    Publication Year: 2002 , Page(s): 537 - 538
  • A remote control system for FPGA-embedded modules in radiation environments

    Publication Year: 2002 , Page(s): 501 - 506
    Cited by:  Papers (4)

    A remote control system has been developed for Versa Module Eurocard (VME) modules located in a radiation environment. Two new VME modules, the remote controller (RC) and the local interface module, are introduced to mediate between the local host and remote slave modules. The two modules are connected by optical links, and the local host can master the remote VME bus to access the slave modules through them. The control system can also perform watchdog monitoring of field programmable gate array (FPGA)-embedded modules whose configuration data are susceptible to single event upsets (SEUs). The architectural study and first prototyping of the system are discussed.

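The watchdog idea for SEU-susceptible configuration data can be sketched as a checksum readback loop. The function names, the golden bitstream, and the CRC check are all illustrative assumptions, not the paper's actual protocol.

```python
# Hypothetical watchdog cycle: read back the FPGA configuration,
# compare its CRC against a stored "golden" reference, and trigger
# reconfiguration when an SEU has corrupted it.
import zlib

GOLDEN_CONFIG = b"\x01\x02\x03\x04" * 64          # reference bitstream (invented)
GOLDEN_CRC = zlib.crc32(GOLDEN_CONFIG)

def check_and_repair(read_config, reconfigure):
    """One watchdog cycle: repair the module if the readback CRC mismatches."""
    if zlib.crc32(read_config()) != GOLDEN_CRC:
        reconfigure(GOLDEN_CONFIG)
        return "reconfigured"
    return "ok"

# Simulate a single event upset flipping one bit of the stored config.
stored = bytearray(GOLDEN_CONFIG)
stored[10] ^= 0x04
status = check_and_repair(lambda: bytes(stored),
                          lambda cfg: stored.__setitem__(slice(None), cfg))
print(status)   # the upset is detected and the module is reloaded
```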
  • Comparison of parallel versus hierarchical systems for data processing in distributed sensor networks

    Publication Year: 2002 , Page(s): 394 - 400
    Cited by:  Papers (1)

    Distributed sensor networks (DSNs) often produce high volumes of data to acquire, and the implementation of the data acquisition is highly dependent on the application. In this paper, we define a merit factor (MF) that allows quantitative comparison of different possible implementations of a data acquisition system. Results of applying this factor to high-energy physics experiments are presented.

  • A general-purpose Java tool for action dispatching and supervision in nuclear fusion experiments

    Publication Year: 2002 , Page(s): 469 - 473
    Cited by:  Papers (1)

    In nuclear fusion experiments, the plasma discharge requires a preparation sequence followed by a data acquisition phase. During these phases, the control and data acquisition system must carry out a sequence of operations for setting up the various devices, reading out data, and performing online computation. An action dispatcher tool must satisfy several requirements, such as support for a distributed and heterogeneous environment, a comprehensive user interface for supervising the whole sequence, and web-based access. This paper describes the architecture of a general-purpose Java-based tool for action dispatching. The Java framework was chosen for the implementation because of its platform independence and its network and multithreading support, and these properties, combined with the generic approach taken in the architecture definition, satisfy the above requirements. The architecture of the tool has been kept quite generic, making it adaptable to a variety of operating environments with minimal changes in the application code.

  • MUSE: an integrated trigger and readout control system for CHIMERA

    Publication Year: 2002 , Page(s): 334 - 338
    Cited by:  Papers (2)

    The trigger system of the CHIMERA 4π detector is described. Trigger decisions are based on a combination of the geometrical multiplicity of detected particles and other logic signals. The trigger can manage the buffer memory of the analog-to-digital converters used, which allows data conversion and readout to proceed in parallel and substantially improves the dead-time performance of the acquisition system. The trigger module generates all the gate signals needed by the converters and the control signals needed to synchronize the readout. It also allows remote control of the whole system.

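A multiplicity-based trigger decision of the kind described above can be sketched in a few lines. The threshold and the auxiliary conditions (beam gate, busy veto) are assumptions for illustration, not CHIMERA's actual trigger logic.

```python
# Illustrative trigger decision: accept an event when enough detector
# cells fired and the auxiliary logic conditions hold.
def trigger_decision(fired_cells, beam_gate, busy, threshold=2):
    """Return True when the event passes the multiplicity trigger."""
    multiplicity = sum(1 for hit in fired_cells if hit)
    return beam_gate and not busy and multiplicity >= threshold

hits = [True, False, True, True, False]
accepted = trigger_decision(hits, beam_gate=True, busy=False)
vetoed = trigger_decision(hits, beam_gate=True, busy=True)
print(accepted, vetoed)   # accepted by multiplicity, then vetoed by busy
```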
  • Advanced digital processing for amplitude and range determination in optical RADAR systems [fusion reactor inspection]

    Publication Year: 2002 , Page(s): 417 - 422
    Cited by:  Papers (9)

    An amplitude-modulated laser radar has been developed by the Italian Agency for New Technologies, Energy and the Environment (ENEA) for periodic in-vessel inspection of large fusion machines. The viewing system is based on a transceiving optical radar using a radio frequency (RF) modulated single-mode 840-nm wavelength laser beam. The sounding beam is transmitted through a coherent optical fiber to a probe; on the tip of the probe, focusing optics and a scanning system based on a silica prism steer the laser beam to obtain a complete 3-D mapping of the in-vessel surface. This paper describes the digital signal processing system used to modulate the laser beam and to measure both the amplitude of the backscattered laser beam and the phase difference between it and the modulation signal. This information, together with the scanning system position, is acquired and then used by the visualization system to produce both 2-D and 3-D images. The system is based on VME boards and directly acquires and processes three 79.5-MHz RF signals in real time using a digital receiver and four digital signal processors. The system principles, the mathematical algorithm, and the system architecture are described.

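The range determination described above rests on a standard relation for amplitude-modulated radar: for modulation frequency f, the round-trip phase shift is phi = 4·pi·f·d/c, so d = phi·c/(4·pi·f). The 79.5 MHz figure comes from the abstract; the 1.5 m target distance is an invented example.

```python
# Worked example: recover range from the measured modulation phase shift.
import math

C = 299_792_458.0          # speed of light, m/s
F_MOD = 79.5e6             # RF modulation frequency from the abstract, Hz

def range_from_phase(phi_rad):
    """Distance to target (m) from the round-trip modulation phase (rad)."""
    return phi_rad * C / (4 * math.pi * F_MOD)

def phase_from_range(d_m):
    """Inverse relation, used here to generate a test phase."""
    return 4 * math.pi * F_MOD * d_m / C

d = 1.5                                  # metres, within the unambiguous range c/(2f)
phi = phase_from_range(d)
recovered = range_from_phase(phi)
print(round(recovered, 9))               # recovers 1.5 m
```

Note that the unambiguous range at 79.5 MHz is c/(2f), roughly 1.9 m, so larger distances wrap in phase; resolving that ambiguity is part of what the processing system must handle.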
  • A parallel optical link architecture using FPGAs

    Publication Year: 2002 , Page(s): 507 - 512
    Cited by:  Papers (1)

    Novel chipsets allow the design of parallel optical links for short-haul applications in the range of several gigabytes per second. This aggregate bandwidth is shared across multiple fibers, and the system clock required to cope with such a data rate falls within the capabilities of the latest generation of FPGAs. In this paper, we present a point-to-point parallel optical link architecture based on Infineon's PAROLI-DC devices and Xilinx field programmable gate arrays (FPGAs). The system works at 160 MHz and sustains a payload transfer rate higher than 240 Mbyte/s. The link exhibits low and deterministic latency so that it can be used in critical real-time environments such as trigger and data acquisition (DAQ) systems in high-energy physics experiments. We examine the transport and data-link layers, focusing on both the system performance and the FPGAs' layout.

  • Ethernet-based real-time control data bus

    Publication Year: 2002 , Page(s): 478 - 482
    Cited by:  Papers (3)

    Wendelstein 7-X is designed as a steady-state experiment to demonstrate the fusion reactor relevance of the advanced stellarator concept. The experiment's control and data acquisition will be performed by a distributed system of computers and programmable logic controllers (PLCs). Data from several systems have to be combined flexibly to control the machine, requiring data exchange on a millisecond time scale between the connected units. A discharge can last up to half an hour; thus, the connections may vary during a discharge. Hence, it is desirable to provide control-relevant data, e.g., measurands, set points, and interlock signals, cyclically via a bus system. The paper analyzes the special quality of control data streams and deduces the basic requirements for a real-time data bus. Ethernet is a candidate for the data bus since it is a widely used broad-band bus with foreseeable potential for development. Because of its nondeterministic arbitration algorithm, however, Ethernet is generally considered unsuited for hard real-time applications. This disadvantage can be circumvented either by using switching techniques or by using a software token to obtain a reliable basis for hard real-time data transport.

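The software-token approach mentioned above makes a shared Ethernet deterministic by allowing only the current token holder to transmit. The sketch below models just that scheduling rule; node names are invented, and real nodes would pass the token in network frames rather than a shared object.

```python
# Minimal model of software-token medium access: collisions are avoided
# because only the token holder may put a frame on the wire, and the
# token circulates round-robin, bounding each node's waiting time.
class TokenRing:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.holder = 0                      # index of the token holder

    def try_send(self, node, frame, wire):
        """A node transmits only while holding the token."""
        if self.nodes[self.holder] != node:
            return False                     # must wait for the token
        wire.append((node, frame))
        return True

    def pass_token(self):
        self.holder = (self.holder + 1) % len(self.nodes)

ring = TokenRing(["PLC-A", "PLC-B", "PLC-C"])
wire = []
ring.try_send("PLC-B", "setpoint", wire)     # rejected: no token yet
ring.try_send("PLC-A", "interlock", wire)    # accepted
ring.pass_token()
ring.try_send("PLC-B", "setpoint", wire)     # now accepted
print(wire)
```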
  • Hardware preprocessing for the H1-Level 2 neural network trigger upgrade

    Publication Year: 2002 , Page(s): 362 - 368
    Cited by:  Papers (1)

    The H1-Level 2 neural network trigger has been running successfully at Deutsches Elektronen Synchrotron (DESY) for four years. In order to provide increased selectivity at the higher luminosity planned for the HERA upgrade, an improved "intelligent" preprocessing has been devised. This system extracts complementary physics information from the Level 1 trigger stream and furnishes it to the L2 neural network in order to improve its decision. A new preprocessing board (the Data Distribution Board Version 2, DDB2) is currently being designed at the Max Planck Institute for Physics, Munich, Germany, to implement the necessary algorithms in fast field programmable gate arrays (FPGAs), taking advantage of parallelism and pipelined structures in order to meet the timing requirement of 8 μs. We present the different algorithmic steps and report on the current status of the DDB2 hardware upgrade.

  • A parallel systolic array ASIC for real-time execution of the Hough transform

    Publication Year: 2002 , Page(s): 339 - 346
    Cited by:  Papers (2)

    Many pattern recognition problems can be solved by mapping the input data into an n-dimensional feature space in which a vector indicates a set of attributes. One powerful pattern recognition method is the Hough transform. In reducing the n-dimensional feature space to two dimensions, the coordinate transform can be executed by a systolic array consisting of time-delay processing elements and adders. The application-specific integrated circuit (ASIC) implementation of the Hough transform as a systolic array for real-time recognition of curved tracks in multiwire drift chambers is presented. The array can handle 32 parallel input data streams. It mainly consists of 512 identical programmable processing elements. Sixteen histogram pixels in the feature space are produced in parallel per clock cycle. The ASIC is implemented in 0.6 μm CMOS, two-metal layer technology (CUB) from Austria Micro Systems (AMS) and operates with a clock frequency of 100 MHz. The interconnectivity pattern of the processing elements required to initialize the chip according to the pattern recognition task is computed on the host computer using the Hough-transform equations. This pattern is then downloaded to the chip via the data input lines. The Hough-transform ASIC is suitable for a wide range of pattern recognition applications. The integrated circuit is a powerful building block for systems requiring real-time execution of the Hough transform.

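The voting principle the ASIC executes in hardware can be illustrated in software. For simplicity this sketch votes for straight lines in a (theta, rho) accumulator; the chip itself handles curved tracks through its programmable processing elements, and the grid sizes here are arbitrary.

```python
# Illustrative Hough transform: each hit votes into a 2-D accumulator,
# and a peak identifies a track candidate shared by many hits.
import math

def hough_accumulate(hits, n_theta=16, n_rho=16, rho_max=10.0):
    """Fill a (theta, rho) accumulator with one vote per hit per theta bin."""
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in hits:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ri = int((rho + rho_max) / (2 * rho_max) * n_rho)
            if 0 <= ri < n_rho:
                acc[ti][ri] += 1
    return acc

# Three collinear hits on the line y = x all vote into the same cell.
hits = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
acc = hough_accumulate(hits)
peak = max(max(row) for row in acc)
print(peak)   # all three hits agree on one (theta, rho) cell
```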
  • Controlling front-end electronics boards using commercial solutions

    Publication Year: 2002 , Page(s): 474 - 477
    Cited by:  Papers (3)

    LHCb is a dedicated B-physics experiment under construction at CERN's large hadron collider (LHC) accelerator. This paper describes the novel approach LHCb is taking toward controlling and monitoring electronics boards. Instead of using the bus in a crate to exercise control over the boards, we use credit-card-sized personal computers (CCPCs) connected via Ethernet to cheap control PCs. The CCPCs provide simple parallel, I2C, and JTAG buses toward the electronics board. Each board will be equipped with a CCPC and will therefore be controlled completely independently. The advantages of this scheme over the traditional bus-based scheme are described, and the integration of the controls of the electronics boards into a commercial supervisory control and data acquisition (SCADA) system is shown.

  • Performance analysis of the ATLAS Second-Level Trigger software

    Publication Year: 2002 , Page(s): 383 - 388

    In this paper, we analyze the performance of the prototype software developed for the ATLAS Second-Level Trigger. An OO framework written in C++ has been used to implement a distributed system that collects (simulated) detector data on which it executes event selection algorithms. The software has been used on testbeds of up to 100 nodes with various interconnect technologies. The final system will have to sustain traffic of ~40 Gb/s and will require an estimated ~750 processors. Timing measurements are crucial for issues such as trigger decision latency, assessment of required CPU and network capacity, scalability, and load balancing. In addition, final architectural and technological choices, code optimization, and system tuning require a detailed understanding of both CPU utilization and trigger decision latency. We describe the instrumentation used to disentangle effects due to factors such as OS intervention, blocking on interlocks (the applications are multithreaded), multiple CPUs, and I/O. This is followed by an analysis of the measurements, and we conclude with suggestions for improvements to the ATLAS Trigger/DAQ dataflow components in the next phase of the project.

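The core of such instrumentation is measuring wall-clock latency and CPU time separately, so that OS intervention and blocking show up as the gap between the two. The sketch below is a generic illustration of that idea, not the ATLAS instrumentation; the algorithm being timed is a stand-in.

```python
# Wrap a processing step to record both wall-clock and CPU time:
# a large wall/CPU gap indicates blocking, scheduling, or I/O effects.
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, wall_seconds, cpu_seconds)."""
    w0, c0 = time.perf_counter(), time.process_time()
    result = fn(*args)
    return result, time.perf_counter() - w0, time.process_time() - c0

def selection_algorithm(n):        # stand-in for a trigger algorithm
    return sum(i * i for i in range(n))

res, wall, cpu = timed(selection_algorithm, 100_000)
print(res, wall >= 0.0, cpu >= 0.0)
```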
  • An object-oriented network-transparent data transportation framework

    Publication Year: 2002 , Page(s): 455 - 459
    Cited by:  Papers (10)

    An object-oriented data transportation framework based upon the publisher-subscriber (producer-consumer) principle has been developed that transparently incorporates a network transport mechanism, independently of the underlying network technology and protocol.

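The publisher-subscriber principle underlying the framework can be sketched in a few lines. Class and method names below are illustrative, not the framework's API, and a real implementation would route `publish` over the network transport rather than direct callbacks.

```python
# Minimal publisher-subscriber broker: consumers register for a named
# channel, and every published item is delivered to all current
# subscribers, independent of the transport underneath.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, channel, callback):
        self.subs[channel].append(callback)

    def publish(self, channel, data):
        for cb in self.subs[channel]:
            cb(data)

broker = Broker()
received = []
broker.subscribe("raw-events", received.append)
broker.subscribe("raw-events", lambda d: received.append(d.upper()))
broker.publish("raw-events", "fragment-42")
print(received)   # both subscribers saw the published item
```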
  • Designing an S-LINK to PCI interface using an IP core

    Publication Year: 2002 , Page(s): 513 - 515
    Cited by:  Papers (2)

    The S-LINK is a standard that defines the source and destination interfaces of a point-to-point data link. This standard has been chosen for data transmission between front-end electronics and readout systems of some ongoing and future experiments at CERN, Geneva, Switzerland. This work presents the S32PCI64 interface, which can move data from a 32-bit S-LINK destination card to any 32-bit or 64-bit PCI bus running at 33 MHz.

  • Data monitoring in high-performance clusters for computing applications

    Publication Year: 2002 , Page(s): 525 - 531

    The shared memory in a LAN-like environment (SMiLE) project at the Lehrstuhl für Rechnertechnik und Rechnerorganisation, Technical University of Munich (LRR-TUM), investigates high-performance cluster computing using system area networks. In the context of this project, a hardware monitor is being developed to observe system area network (SAN) traffic. This hardware monitor is therefore capable of delivering detailed information about the run-time communication behavior of applications running on SMiLE clusters. The central part of the monitor is a content-addressable counter array managing a small working set of the most recently referenced memory regions.

  • Testing ethernet networks for the ATLAS data collection system

    Publication Year: 2002 , Page(s): 516 - 520
    Cited by:  Papers (3)  |  Patents (1)

    This paper reports recent work on Ethernet traffic generation and analysis. We use gigabit Ethernet network interface cards (NICs) running customized embedded software, together with custom-built 32-port fast Ethernet boards based on field programmable gate arrays (FPGAs), to study the behavior of large Ethernet networks. The traffic generation software is able to accommodate many traffic distributions, with the ultimate goal of generating traffic that resembles the data collection system of the ATLAS experiment at CERN, Geneva, Switzerland. Each packet is time-stamped with a global clock value, and we are therefore able to compute an accurate measure of the network latency. Various other information collected from the boards is displayed in real time on a graphical interface. This work provides the tools to study a test bed representing a fraction of the 1600 ATLAS detector readout buffers and 600 Level 2 trigger central processing units (CPUs), using a combination of the fast Ethernet boards and the gigabit Ethernet NICs.

  • The distributed control and data system in HT-7 tokamak

    Publication Year: 2002 , Page(s): 496 - 500
    Cited by:  Papers (12)

    A control and data processing system for HT-7, the first superconducting tokamak in China, has been developed both to control the experimental system and to process the large amount of experimental data (600 MB/shot). A fully distributed structure is adopted. The distributed control system (DCS) includes several subsystems, such as those for main control, synchronization, safety and interlock, data acquisition and data analysis, physical data management, and remote control via networks. The basic element of the DCS is the personal computer (PC), connected over a fast network based on the Fiber Distributed Data Interface (FDDI). The system uses multiple data-transfer paths in parallel and partitions the computing functions among servers. The subsystems for main control, communication, and data management are described in detail.

  • Track finding at 10-MHz hadronic event rate

    Publication Year: 2002 , Page(s): 347 - 356

    HERA-B is a fixed-target experiment using a halo target inside the HERA proton ring to generate B mesons in 920-GeV proton-nucleus interactions. The first-level trigger (FLT) of the experiment has to reduce the primary 10-MHz input rate by a factor of 200 in less than 10 μs to make it acceptable to the second-level trigger (SLT). The trigger strategy is based on the tracking of charged particles and on the reconstruction of their kinematic parameters. The combination of track pairs can also be used for the final decision. A parallel and pipelined set of approximately 60 dedicated boards was designed and built to perform this job. In this paper, the working principle of the system and some results obtained by analyzing the data collected during the run in the year 2000 are described.

  • Evaluation of SCI as a fabric for a computer-based pattern recognition trigger running at 1.17 MHz

    Publication Year: 2002 , Page(s): 389 - 393

    The future CERN experiment LHCb needs a high-speed first-level vertex trigger. The silicon vertex trigger of LHCb processes events at a rate of 1.17 MHz for a total of 3 GB/s of data. To handle this amount of data at the given rate, the plan is to build a cluster of approximately 250 nodes connected by a low-latency network. We have evaluated SCI as a possible candidate for this network; it is especially appropriate because of its low latency and overhead. The memory-mapped character of the connection makes it well suited for applications relying on device-to-device copy mechanisms. We present the results obtained with the Dolphin 66/64 PCI cards built around the PSB66 and LC3 chips. The behavior of two different topologies, a two-dimensional (2-D) torus and a ring, has been studied in detail. Results for DAQ/real-time relevant scenarios have been obtained using transport methods such as programmed I/O, DMA, and device-to-device copy. Analysis of both hardware and software has been used to obtain a detailed picture of the traffic patterns on the buses involved.

  • The VINCI instrument software in the very large telescope environment

    Publication Year: 2002 , Page(s): 483 - 490
    Cited by:  Papers (1)

    The European Southern Observatory (ESO) very large telescope interferometer (VLTI) got first fringes on March 17, 2001 at Mount Paranal in Chile. The VINCI instrument has played a key role in the achievement of this important milestone and is a fundamental component for the current VLTI operations. This paper, after a brief introduction of the VLTI and the instrument itself, will focus mainly on control software aspects. It describes the VINCI hardware and software architecture in the context of the whole VLT control concept. Particular emphasis is given to real-time control aspects, data acquisition, distribution of control over several hardware platforms, networks, standardization of hardware and software components, and software configuration control management.

  • The COMPASS data acquisition system

    Publication Year: 2002 , Page(s): 443 - 447
    Cited by:  Papers (5)

    A fully pipelined and massively parallel data acquisition system has been developed for the COMPASS experiment at CERN. The main requirements are to read 250,000 detector channels at a trigger rate of up to 100 kHz. Such high rates are only possible when using a hit-selection mechanism on the front end combined with dead-time-free readout. For this purpose, a time-to-digital converter (TDC) chip has been developed and is used for all time measurement applications in COMPASS. Distributed, field programmable gate array (FPGA)-based readout-driver modules handle parallel front-end initialization, synchronous trigger and control-signal distribution, and local event building at a processing speed of 160 Mbyte/s. Each of the 160 readout-driver modules connects to 16 front-end boards through independent twisted-pair cables (CAT 7, 600 MHz) or optical fibers using an industrial (ESCON), self-synchronizing link at 40 Mbyte/s. Automatic configuration through unique module and link identification ensures the flexibility and scalability to very large detector systems. The preprocessed data are transmitted through optical fibers at 160 Mbyte/s to the master event building system. Here, global event building is realized on high-performance personal computers (PCs) and Gigabit Ethernet network components. The complete events are sent to the central data recording at the CERN main site at an average rate of 40 Mbyte/s and stored in an object-oriented database. A reduced system was set up for the commissioning run of COMPASS in 2000. Operation of the full system starts in July 2001.

  • High-level triggers in ATLAS

    Publication Year: 2002 , Page(s): 377 - 382
    Cited by:  Papers (2)

    The trigger and data-acquisition system of ATLAS, a general-purpose experiment at the Large Hadron Collider (LHC), will be based on three levels of online selection. Starting from the bunch-crossing rate of 40 MHz (an interaction rate of ~1 GHz at the design luminosity of ~10^34 cm^-2 s^-1), the first-level trigger (LVL1) will reduce the rate to about 75 kHz using purpose-built hardware. An additional factor of about 10^3 in rate reduction is to be provided by the high-level trigger (HLT) system, with two main functional components: the second-level trigger (LVL2) and the event filter (EF). LVL2 has to provide a fast decision (guided by information from LVL1) using only a fraction of the full event, although already at full granularity, and can combine all subdetectors. At the EF, a refined selection is made, with the capability of full event reconstruction and the use of detailed calibration and alignment parameters. The HLT software architecture will provide a common and rather "lightweight" framework able to execute the various selection algorithms and to control the sequence of execution according to the event properties and configuration parameters. System flexibility is a strong requirement in order to adapt to changes, e.g., in luminosity and background conditions. This paper presents the approach chosen for the software design of the HLT selection framework and of the algorithm interface, giving examples of selection sequences and algorithms. Based on currently existing prototypes, results for both the expected physics performance (signal efficiency, background rejection) and system performance (execution time) are also shown.

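The rate-reduction chain quoted above works out as simple arithmetic, using only the figures given in the abstract:

```python
# LVL1 takes the 40 MHz bunch-crossing rate down to about 75 kHz,
# and the HLT (LVL2 plus event filter) contributes a further ~10^3.
bunch_crossing_rate = 40e6          # Hz
lvl1_output = 75e3                  # Hz
hlt_reduction = 1e3

lvl1_factor = bunch_crossing_rate / lvl1_output
final_rate = lvl1_output / hlt_reduction
print(round(lvl1_factor), final_rate)   # ~533-fold at LVL1, ~75 Hz after the HLT
```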
  • A readout unit for high rate applications

    Publication Year: 2002 , Page(s): 448 - 454
    Cited by:  Papers (3)

    The LHCb readout unit (RU) is a custom entry stage to the readout network of a data-acquisition or trigger system. It performs subevent building from multiple link inputs toward a readout network via a PCI network interface or, alternatively, toward a high-speed link via an S-link interface. Incoming event fragments are derandomized, buffered, and assembled into single subevents. This process is based on a low-overhead framing convention and matching of equal event numbers. Programmable logic is used both in the input and output stages of the RU module, which may be configured either as a data-link multiplexer or as an entry stage to a readout or trigger network. All FPGAs are interconnected via the PCI bus, which is hosted by a networked microprocessor card. Its main tasks are remote FPGA configuration and initialization of the PCI cards. The RU hardware architecture has been optimized for a throughput of up to 200 MB/s at a 1-MHz trigger rate, as required by the most demanding application, the LHCb level-1 trigger network. A custom traffic-scheduling link is available for applications like pipelined destination address allocation.

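The subevent-building step, assembling fragments with equal event numbers from several input links, can be sketched as follows. The class and framing details are invented for illustration; the RU does this in programmable logic with its own low-overhead framing convention.

```python
# Fragments arriving on several links are buffered per event number;
# a subevent is emitted once every link has delivered its fragment.
from collections import defaultdict

class SubeventBuilder:
    def __init__(self, n_links):
        self.n_links = n_links
        self.pending = defaultdict(dict)   # event_no -> {link: payload}

    def add_fragment(self, link, event_no, payload):
        """Buffer a fragment; return the assembled subevent when complete."""
        frags = self.pending[event_no]
        frags[link] = payload
        if len(frags) == self.n_links:
            del self.pending[event_no]
            return [frags[l] for l in sorted(frags)]   # link order
        return None

rb = SubeventBuilder(n_links=3)
first = rb.add_fragment(0, event_no=7, payload=b"a")   # incomplete: None
second = rb.add_fragment(2, event_no=7, payload=b"c")  # incomplete: None
sub = rb.add_fragment(1, event_no=7, payload=b"b")     # complete subevent
print(sub)
```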
  • Go4 multitasking class library with ROOT

    Publication Year: 2002 , Page(s): 521 - 524
    Cited by:  Papers (2)

    In the situation of monitoring an experiment, it is often necessary to control several independently running tasks from one graphical user interface (GUI). Such a GUI must be able to execute commands in the tasks even if they are busy, i.e., getting data, analyzing data, or waiting for data. Moreover, the tasks, being controlled by data streams (i.e., event data samples or slow control data), must be able to send data asynchronously to the GUI for visualization. A multitasking package (C++ class library) that meets these demands has been developed at the Gesellschaft für Schwerionenforschung (GSI), Darmstadt, Germany, in the framework of a new analysis system, Go4, which is based on the ROOT system [CERN, R. Brun et al]. The package provides a thread manager, a task handler, and asynchronous intertask communication between threads through sockets. Hence, objects can be sent at any time from a task to the GUI or vice versa. At the GUI side, an incoming object is accepted by a thread and processed. In a task, an incoming command is queued by the accepting thread and executed in the execution thread. Utilizing the package, one can implement nonblocking GUIs to control one or several tasks processing data in parallel and updating graphical elements in the GUI. The package could also be useful in building data dispatchers or in slow control applications. All components have been tested with Go4 analysis tasks and a very preliminary GUI.

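The queue-and-execute pattern described above (commands enqueued by the accepting thread, executed in the execution thread) can be sketched with a thread-safe queue. Go4 itself communicates between processes over sockets and is written in C++; this Python sketch shows only the nonblocking control idea.

```python
# A GUI side enqueues commands at any time; the task's execution thread
# drains the queue between data-processing steps, so neither side blocks.
import queue
import threading

commands = queue.Queue()
log = []

def execution_thread():
    # The analysis loop: process a data step, then run any queued commands.
    for step in range(3):
        log.append(f"processed-{step}")
        try:
            while True:
                log.append("cmd:" + commands.get_nowait())
        except queue.Empty:
            pass

commands.put("zoom-histogram")        # GUI side: enqueue without blocking
t = threading.Thread(target=execution_thread)
t.start()
t.join()
print(log)
```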

Aims & Scope

IEEE Transactions on Nuclear Science focuses on all aspects of the theory and applications of nuclear science and engineering, including instrumentation for the detection and measurement of ionizing radiation; particle accelerators and their controls; nuclear medicine and its application; effects of radiation on materials, components, and systems; reactor instrumentation and controls; and measurement of radiation in space.


Meet Our Editors

Editor-in-Chief
Paul Dressendorfer
11509 Paseo del Oso NE
Albuquerque, NM  87111  USA