
ATLAS Slides
Report number ATL-SOFT-SLIDE-2015-078
Title Design, Results, Evolution and Status of the ATLAS simulation in Point1 project.
Author(s) Ballestrero, Sergio (University of Johannesburg, Department of Physics) ; Fressard-Batraneanu, Silvia Maria (CERN) ; Brasolin, Franco (INFN Bologna and Universita' di Bologna, Dipartimento di Fisica e Astronomia) ; Contescu, Alexandru Cristian (University Politehnica Bucharest) ; Fazio, Daniel (CERN) ; Di Girolamo, Alessandro (CERN) ; Lee, Christopher Jon (University of Johannesburg, Department of Physics) ; Pozo Astigarraga, Mikel Eukeni (CERN) ; Scannicchio, Diana (University of California, Irvine) ; Sedov, Alexey (Barcelona Tier-1) ; Twomey, Matthew Shaun (Department of Physics, University of Washington, Seattle) ; Wang, Fuquan (Department of Physics, University of Wisconsin) ; Zaytsev, Alexander (Brookhaven National Laboratory (BNL))
Corporate author(s) The ATLAS collaboration
Submitted by [email protected] on 20 Mar 2015
Subject category Particle Physics - Experiment
Accelerator/Facility, Experiment CERN LHC ; ATLAS
Free keywords (T07) Computing facilities and infrastructures ; (T15) Cloud computing ; (T16) Virtualization, Grid computing, OpenStack
Abstract During the LHC long shutdown period (LS1), which started in 2013, the Simulation at Point1 (Sim@P1) project has taken opportunistic advantage of the trigger and data acquisition (TDAQ) farm of the ATLAS experiment. The farm provides more than 1500 compute nodes, which are particularly suitable for running event generation and Monte Carlo production jobs that are mostly CPU bound rather than I/O bound. It is capable of running up to 2500 virtual machines (VMs) with 8 CPU cores each, for a total of up to 20000 parallel jobs. This contribution gives a thorough review of the design, results and evolution of the Sim@P1 project, which operates a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ farm computing resources. During LS1, Sim@P1 was one of the most productive Grid sites: it delivered more than 50 million CPU-hours and generated more than 1.7 billion Monte Carlo events for various analysis communities within the ATLAS collaboration. The key design aspects are presented: the virtualization platform exploited by the Sim@P1 project avoids interference with TDAQ operations and, more importantly, guarantees the security and usability of the ATLAS private network. The Cloud infrastructure makes it possible to decouple the required support at both the infrastructural (hardware, virtualization layer) and logical (Grid site support and job lifecycle handling) levels. In this note we focus in particular on the operational aspects of such a large system for the upcoming LHC Run 2 period: customized, simple, reliable and efficient tools are needed to quickly switch between Sim@P1 and TDAQ modes, so as to exploit the TDAQ resources whenever they are not in use for data acquisition, even for short periods. We also describe the evolution of the central OpenStack infrastructure as it was upgraded from the Folsom to the Icehouse release, and the scalability issues we have addressed.
The success of the Sim@P1 project is due to the continuous combined efforts of the ATLAS TDAQ SysAdmins and NetAdmins teams, CERN IT and the RHIC & ATLAS Computing Facility (RACF) at BNL.
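The capacity figures quoted in the abstract are internally consistent and can be checked directly. A minimal sketch, assuming one single-core job per provisioned core (the node, VM and core counts come from the abstract; the helper function name is illustrative, not part of the Sim@P1 tooling):

```python
# Capacity figures quoted in the abstract for the Sim@P1 platform.
NODES = 1500          # TDAQ farm compute nodes (lower bound: "more than 1500")
MAX_VMS = 2500        # virtual machines the platform can run concurrently
CORES_PER_VM = 8      # CPU cores provisioned per VM

def parallel_job_slots(vms: int, cores_per_vm: int) -> int:
    """Illustrative model: one single-core Monte Carlo job per provisioned core."""
    return vms * cores_per_vm

# 2500 VMs x 8 cores = 20000 parallel jobs, matching the figure in the abstract.
print(parallel_job_slots(MAX_VMS, CORES_PER_VM))  # 20000
```

Under the same model, the reported 50 million CPU-hours over 1.7 billion events corresponds to roughly 0.03 CPU-hours per event on average.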



Record created 2015-03-20, last modified 2016-12-20


Fulltext:
ATL-SOFT-SLIDE-2015-078 - Download fulltext PDF
CHEP2015_SP1_poster_v22 - Download fulltext PDF
External link:
Original Communication (restricted to ATLAS)