The Data-Logging System of the Trigger and Data Acquisition for the ATLAS Experiment at CERN

6 Author(s): Andreas Battaglia (Lab. for High Energy Phys., Univ. of Bern, Bern); H.P. Beck; M. Dobson; Szymon Gadomski; and others

The ATLAS experiment is getting ready to observe collisions between protons at a centre-of-mass energy of 14 TeV. These will be the highest-energy collisions in a controlled environment to date, to be provided by the Large Hadron Collider at CERN by mid-2008. The ATLAS Trigger and Data Acquisition (TDAQ) system selects events online in a three-level trigger system in order to keep those events that promise to unveil new physics, at a budgeted rate of ~200 Hz for an event size of ~1.5 MB. This paper focuses on the data-logging system on the TDAQ side, the so-called "Sub-Farm Output" (SFO) system. It takes data from the third-level trigger and streams and indexes the events into different files according to each event's trigger path. The data files are then moved to CASTOR, the central mass-storage facility at CERN. The final TDAQ data-logging system has been installed on 6 Linux PCs, holding in total 144 disks of 500 GB each, managed by three RAID controllers per PC. Data writing proceeds in a controlled round-robin fashion among three independent filesystems, each associated with a distinct set of disks managed by a distinct RAID controller. This novel design allows fast I/O, which, together with a high-speed network, minimizes the number of SFO nodes required. We report here on the functionality and performance requirements on the system, on our experience with commissioning it, and on the performance achieved.

Published in:

IEEE Transactions on Nuclear Science (Volume 55, Issue 5)