
ATLAS Note
Report number ATL-SOFT-PROC-2020-006
Title Managing the ATLAS Grid through Harvester
Author(s) Barreiro Megino, Fernando Harald (The University of Texas at Arlington) ; Alekseev, Aleksandr (Universidad Andres Bello) ; Berghaus, Frank (University of Victoria) ; Cameron, David (University of Oslo) ; De, Kaushik (The University of Texas at Arlington) ; Filipcic, Andrej (Jozef Stefan Institute) ; Glushkov, Ivan (The University of Texas at Arlington) ; Lin, Fahui (Academia Sinica, Taipei) ; Maeno, Tadashi (Brookhaven National Laboratory (BNL)) ; Magini, Nicolo (Iowa State University)
Corporate Author(s) The ATLAS collaboration
Collaboration ATLAS Collaboration
Publication 2020
Imprint 11 Feb 2020
Number of pages 7
In: EPJ Web Conf. 245 (2020) 03010
In: 24th International Conference on Computing in High Energy and Nuclear Physics, Adelaide, Australia, 4 - 8 Nov 2019, pp.03010
DOI 10.1051/epjconf/202024503010
Subject category Particle Physics - Experiment
Accelerator/Facility, Experiment CERN LHC ; ATLAS
Free keywords Harvester ; Grid ; PanDA ; Distributed Computing ; Batch
Abstract ATLAS Computing Management has identified the migration of all computing resources to Harvester, PanDA’s new workload submission engine, as a critical milestone for Runs 3 and 4. This contribution focuses on the Grid migration to Harvester. We have built a redundant architecture based on CERN IT’s common offerings (e.g. OpenStack virtual machines and Database on Demand) to run the necessary Harvester and HTCondor services, capable of sustaining the load of O(1M) workers per day on the Grid. We have reviewed the ATLAS Grid region by region and moved as far as possible away from blind worker submission, where multiple queues (e.g. single-core, multi-core, high-memory) compete for resources on a site. Instead, we have migrated towards more intelligent models that use information and priorities from the central PanDA workload management system and stream the right number of workers of each category to a unified queue, while keeping late binding of workers to jobs. We also describe our enhanced monitoring and analytics framework: worker and job information is synchronized with minimal delay to a CERN IT-provided ElasticSearch repository, where we can interact with dashboards to follow submission progress, discover site issues (e.g. broken Compute Elements) or spot empty workers. The result is a much more efficient usage of Grid resources with smart, built-in monitoring.
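
The abstract's unified-queue model can be illustrated with a minimal sketch: decide how many late-binding workers of each resource category to submit, based on the demand reported by the central workload management system. This is not Harvester's actual API; the `QueueSnapshot` structure, field names, and thresholds below are hypothetical and purely illustrative.

```python
"""Illustrative sketch (not Harvester code): choose how many workers of each
resource type to submit to a unified queue, driven by central demand."""
from dataclasses import dataclass


@dataclass
class QueueSnapshot:
    resource_type: str      # e.g. "SCORE", "MCORE", "SCORE_HIMEM" (illustrative labels)
    queued_jobs: int        # jobs waiting centrally for this resource type
    running_workers: int    # workers already active on the site
    queued_workers: int     # workers submitted but not yet running


def workers_to_submit(snapshots, site_cap=1000, headroom=1.2):
    """Return {resource_type: n_new_workers} for one submission cycle.

    Workers are late-binding pilots: we only decide how many of each
    category to send; the concrete job is picked up when the worker starts.
    """
    demand = {}
    for s in snapshots:
        # Aim to cover queued jobs plus a small headroom, minus workers
        # that are already running or still waiting in the batch system.
        target = int(s.queued_jobs * headroom)
        missing = max(target - s.running_workers - s.queued_workers, 0)
        demand[s.resource_type] = missing

    # Respect an overall per-site cap, scaling all categories down
    # proportionally if the summed demand exceeds it.
    total = sum(demand.values())
    if total > site_cap:
        demand = {rt: n * site_cap // total for rt, n in demand.items()}
    return demand


if __name__ == "__main__":
    snapshots = [
        QueueSnapshot("SCORE", queued_jobs=800, running_workers=500, queued_workers=100),
        QueueSnapshot("MCORE", queued_jobs=300, running_workers=200, queued_workers=20),
        QueueSnapshot("SCORE_HIMEM", queued_jobs=50, running_workers=10, queued_workers=5),
    ]
    print(workers_to_submit(snapshots))
```

The key design point mirrored here is that submission volume per category is derived from central priorities rather than from independent per-queue pilot factories competing blindly for the same site resources.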
Copyright/License © 2020-2025 The Authors (License: CC-BY-4.0)

Corresponding record in: Inspire


Record created 2020-02-11, last modified 2021-03-08


Fulltext from publisher: PDF (additional files available)
External link: Original Communication (restricted to ATLAS)