
ATLAS Slides
Report number ATL-DAQ-SLIDE-2015-126
Title ATLAS TDAQ System Administration:
Author(s) Lee, Christopher Jon (University of Johannesburg, Department of Physics) ; Brasolin, Franco (INFN Bologna and Università di Bologna, Dipartimento di Fisica e Astronomia) ; Bogdanchikov, Alexander (Budker Institute of Nuclear Physics, Siberian Branch of Russian Academy of Sciences) ; Ballestrero, Sergio (University of Johannesburg, Department of Physics) ; Contescu, Alexandru Cristian (University Politehnica Bucharest) ; Dubrov, Sergei (Budker Institute of Nuclear Physics, Siberian Branch of Russian Academy of Sciences) ; Fazio, Daniel (CERN) ; Korol, Aleksandr (Budker Institute of Nuclear Physics, Siberian Branch of Russian Academy of Sciences) ; Scannicchio, Diana (University of California, Irvine) ; Twomey, Matthew Shaun (Department of Physics, University of Washington, Seattle) ; Voronkov, Artem (Budker Institute of Nuclear Physics, Siberian Branch of Russian Academy of Sciences)
Corporate author(s) The ATLAS collaboration
Submitted to 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015
Submitted by [email protected] on 01 Apr 2015
Subject category Particle Physics - Experiment
Accelerator/Facility, Experiment CERN LHC ; ATLAS
Free keywords TDAQ ; System Administration ; SysAdmin
Abstract The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data read out from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) the ATLAS TDAQ System Administrators carried out a tremendous amount of work: implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger (HLT) farm for different purposes. During data taking only critical security updates are applied and broken hardware is replaced, in order to ensure a stable operational environment. LS1 therefore provided an excellent opportunity to look into new technologies and applications that would help to improve and streamline the daily tasks not only of the System Administrators, but also of the scientists who will be working during the upcoming data-taking period (Run-II). The OS version has been upgraded to SLC6; for the largest part of the farm, which is composed of netbooted nodes, this required a completely new design of the netbooting system. In parallel, the migration of the configuration management systems to Puppet has been completed for both netbooted and localbooted hosts; the Post-Boot Scripts system and Quattor have consequently been retired. Various new ATCA-based readout systems, with specific network requirements, have also been integrated into the overall system. Virtual Machine (VM) usage has been investigated and tested, and many of our core servers are now running on VMs; this allows us to rapidly replace them in case of failure and to increase the number of servers when needed. Virtualization has also been used to adapt the HLT farm as a batch system, which has been used for running Monte Carlo production jobs that are mostly CPU-bound rather than I/O-bound; in Run-II this feature could be exploited during LHC downtimes. A new Satellite Control Room (SCR) has been commissioned, and in the ATLAS Control Room (ACR) the PC-over-IP connections have been upgraded to a fully redundant network. The migration to SLC6 has also had an impact on the Control Room Desktop (CRD), the in-house KDE-based desktop environment designed to enforce access policies while fulfilling the needs of the people working in the ACR and the SCR. Finally, monitoring the health and status of ~3000 machines in the experimental area is of the utmost importance, so the obsolete Nagios v2 has been replaced with Icinga, complemented by Ganglia for performance data. This paper reports on what we did, why we did it and how, in order to deliver a system capable of performing for the next three years of ATLAS data taking.
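The farm-wide health monitoring mentioned above (Icinga for alerting, Ganglia for performance data) can be illustrated with a short sketch. Ganglia's gmond daemon publishes cluster state as XML over a TCP port (8649 by default); the Python script below connects to a collector, parses the dump and flags hosts whose last report is stale. This is a minimal illustrative sketch, not ATLAS TDAQ code: the collector host name and the staleness threshold are assumptions made for the example.

    #!/usr/bin/env python3
    # Minimal sketch: poll a Ganglia gmond collector and flag hosts whose
    # last report is stale. Illustrative only, not ATLAS TDAQ code; the
    # host name and threshold below are assumptions for the example.
    import socket
    import time
    import xml.etree.ElementTree as ET

    GMOND_HOST = "ganglia-collector.example.cern.ch"  # assumed collector
    GMOND_PORT = 8649    # gmond's default XML dump port
    STALE_AFTER = 120    # seconds without a report before a host is flagged

    def fetch_cluster_xml(host, port):
        """gmond dumps its full XML state on connect, then closes."""
        chunks = []
        with socket.create_connection((host, port), timeout=10) as sock:
            while True:
                data = sock.recv(65536)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)

    def stale_hosts(xml_blob, now=None):
        """Return (host name, seconds since last report) for stale hosts."""
        now = time.time() if now is None else now
        stale = []
        for host in ET.fromstring(xml_blob).iter("HOST"):
            age = now - int(host.get("REPORTED", "0"))
            if age > STALE_AFTER:
                stale.append((host.get("NAME"), age))
        return stale

    if __name__ == "__main__":
        blob = fetch_cluster_xml(GMOND_HOST, GMOND_PORT)
        for name, age in stale_hosts(blob):
            print("%s: no report for %.0f s" % (name, age))

In a production setting a check of this kind would typically run as an Icinga plugin, returning the standard OK/WARNING/CRITICAL exit codes rather than printing to stdout.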



Record created 2015-04-01, last modified 2016-12-20


Fulltext:
ATLAS_TDAQ_SysAdmins_-_CHEP2015v4 - Download fulltext (PPTX)
ATL-DAQ-SLIDE-2015-126 - Download fulltext (PDF)
External link:
Original Communication (restricted to ATLAS)