Report number
| ATL-SOFT-PROC-2015-046 |
Title
| Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project |
Author(s)
|
Brasolin, Franco (INFN, Bologna) ; Fressard-Batraneanu, Silvia Maria (CERN) ; Ballestrero, Sergio (Johannesburg U.) ; Contescu, Alexandru Cristian (CERN ; Bucharest U.) ; Fazio, Daniel (CERN) ; Di Girolamo, Alessandro (CERN) ; Lee, Christopher Jon (Johannesburg U. ; CERN) ; Pozo Astigarraga, Mikel Eukeni (CERN) ; Scannicchio, Diana (UC, Irvine) ; Sedov, Alexey (PIC, Bellaterra ; Barcelona, IFAE) ; Twomey, Matthew Shaun (Washington U., Seattle) ; Wang, Fuquan (Wisconsin U., Madison) ; Zaytsev, Alexander (BNL, NSLS) ; The ATLAS collaboration |
Publication
| 2015 |
Imprint
| 17 May 2015 |
Number of pages
| 7 |
In:
| J. Phys.: Conf. Ser. 664 (2015) 022008 |
In:
| 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.022008 |
DOI
| 10.1088/1742-6596/664/2/022008 |
Subject category
| Particle Physics - Experiment ; Computing and Computers |
Accelerator/Facility, Experiment
| CERN LHC ; ATLAS |
Free keywords
| (T07) Computing facilities and infrastructures ; (T15) Cloud computing ; (T16) Virtualization, Grid computing, OpenStack |
Abstract
| During the LHC Long Shutdown 1 (LS1) period, which started in 2013, the Simulation at Point1 (Sim@P1) project took advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU-bound rather than I/O-bound. It is capable of running up to 2700 virtual machines (VMs) with 8 CPU cores each, for a total of up to 22000 concurrently running jobs. This contribution reviews the design, results, and evolution of the Sim@P1 project, which operates a large-scale OpenStack-based virtualized platform deployed on top of the computing resources of the ATLAS TDAQ HLT farm. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 50 million CPU-hours and generated more than 1.7 billion Monte Carlo events for various analysis communities. The design aspects are presented: the virtualization platform exploited by Sim@P1 avoids interference with TDAQ operations and guarantees the security and usability of the ATLAS private network. The cloud approach also separates the required support into infrastructural (hardware, virtualization layer) and logical (Grid site support) levels. In this paper we focus on the operational aspects of such a large system for the upcoming LHC Run 2 period: simple, reliable, and efficient tools are needed to quickly switch from Sim@P1 to TDAQ mode and back, so that the resources can be exploited whenever they are not used for data acquisition, even for short periods. The evolution of the central OpenStack infrastructure from the Folsom to the Icehouse release is also described, including the scalability issues addressed during the upgrade. |
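The mode switch highlighted in the abstract amounts to the bulk creation and deletion of VMs on the OpenStack cloud. As a rough illustration only, the following Python sketch drives the standard nova command-line client to boot or tear down a batch of worker VMs; the flavor, image, and VM naming scheme are hypothetical placeholders, not the actual Sim@P1 tooling described in the paper.

    #!/usr/bin/env python
    # Sketch of a bulk Sim@P1 <-> TDAQ mode switch. All resource names are
    # hypothetical; assumes the OpenStack "nova" CLI is installed and the
    # usual OS_* credential environment variables are exported.
    import subprocess
    import sys

    FLAVOR = "sim-8core"    # hypothetical flavor matching the 8-core VMs
    IMAGE = "simp1-worker"  # hypothetical worker image name
    N_VMS = 2700            # upper bound quoted in the abstract

    def to_sim_mode(n=N_VMS):
        """Boot n worker VMs: switch the farm to Sim@P1 mode."""
        for i in range(n):
            subprocess.check_call([
                "nova", "boot", "simp1-vm-%04d" % i,
                "--flavor", FLAVOR, "--image", IMAGE,
            ])

    def to_tdaq_mode(n=N_VMS):
        """Delete the worker VMs: hand the nodes back to data acquisition."""
        for i in range(n):
            subprocess.call(["nova", "delete", "simp1-vm-%04d" % i])

    if __name__ == "__main__":
        {"sim": to_sim_mode, "tdaq": to_tdaq_mode}[sys.argv[1]]()

In practice such a switch would presumably also need to drain running Grid payloads gracefully and verify node state before handing resources back to data acquisition, which this sketch omits.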
Copyright/License
| publication: © 2015 The Author(s) (License: CC-BY-3.0) |