1.
2. CMS Tier-0 data processing during the detector commissioning in Run-3
/ Amado Valderrama, Jhonatan Andres (Nebraska U.) ; Eysermans, Jan (MIT) ; Giraldo Villa, German Felipe (CERN) ; Hufnagel, Dirk (Fermilab) ; Kovalskyi, Dmytro (MIT) ; Linares Sancho, Antonio (Wisconsin U., Madison)
/CMS Collaboration
The CMS Tier-0 system is responsible for the prompt processing and distribution of the data collected by the CMS Experiment. A number of upgrades were implemented during Long Shutdown 2 of the Large Hadron Collider, improving the performance and reliability of the system. [...]
CMS-CR-2023-129
Geneva : CERN, 2024 - 9 p.
Published in : EPJ Web Conf. 295 (2024) 03007
In : 26th International Conference on Computing in High Energy & Nuclear Physics, CHEP 2023, Norfolk, Virginia, USA, 8 - 12 May 2023
3.
4.
5. HPC resource integration into CMS Computing via HEPCloud
/ Hufnagel, Dirk (Fermilab) ; Holzman, Burt (Fermilab) ; Mason, David (Fermilab) ; Mhashilkar, Parag (Fermilab) ; Timm, Steven (Fermilab) ; Tiradani, Anthony (Fermilab) ; Khan, Farrukh Aftab (Fermilab) ; Gutsche, Oliver (Fermilab) ; Bloom, Kenneth (U. Nebraska, Lincoln)
The higher energy and luminosity from the LHC in Run 2 have put increased pressure on CMS computing resources. Extrapolating to even higher luminosities (and thus higher event complexities and trigger rates) beyond Run 3, it becomes clear that simply scaling up the current model of CMS computing will become economically unfeasible. [...]
CMS-CR-2018-283; FERMILAB-CONF-18-630-CD
Geneva : CERN, 2019 - 8 p.
Published in : EPJ Web Conf. 214 (2019) 03031
In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018, pp.03031
6. Connecting restricted, high-availability, or low-latency resources to a seamless Global Pool for CMS
/ Balcas, J (Caltech) ; Bockelman, B (Nebraska U.) ; Hufnagel, D (Fermilab) ; Hurtado Anampa, K (Notre Dame U.) ; Jayatilaka, B (Fermilab) ; Khan, F (NCP, Islamabad) ; Larson, K (Fermilab) ; Letts, J (UC, San Diego) ; Mascheroni, M (Fermilab) ; Mohapatra, A (Wisconsin U., Madison) et al.
/CMS
The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, and commercial clouds, as well as opportunistic cycles on the CMS High Level Trigger farm. [...]
2017 - 8 p.
Published in : J. Phys.: Conf. Ser. 898 (2017) 052037
In : 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10 - 14 Oct 2016, pp.052037
7.
8. Stability and scalability of the CMS Global Pool: Pushing HTCondor and glideinWMS to new limits
/ Balcas, J (Caltech) ; Bockelman, B (Nebraska U.) ; Hufnagel, D (Fermilab) ; Hurtado Anampa, K (Notre Dame U.) ; Aftab Khan, F (NCP, Islamabad) ; Larson, K (Fermilab) ; Letts, J (UC, San Diego) ; Marra da Silva, J (Sao Paulo, IFT) ; Mascheroni, M (Fermilab) ; Mason, D (Fermilab) et al.
The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. [...]
2017 - 7 p.
Published in : J. Phys.: Conf. Ser. 898 (2017) 052031
In : 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10 - 14 Oct 2016, pp.052031
9. Experience in using commercial clouds in CMS
/ Bauerdick, L (Fermilab) ; Bockelman, B (Nebraska U.) ; Dykstra, D (Fermilab) ; Fuess, S (Fermilab) ; Garzoglio, G (Fermilab) ; Girone, M (CERN) ; Gutsche, O (Fermilab) ; Holzman, B (Fermilab) ; Hufnagel, D (Fermilab) ; Kim, H (Fermilab) et al.
/CMS
Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. [...]
2017 - 8 p.
Published in : J. Phys.: Conf. Ser. 898 (2017) 052019
In : 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10 - 14 Oct 2016, pp.052019
10. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
/ Holzman, Burt (Fermilab) ; Bauerdick, Lothar A.T. (Fermilab) ; Bockelman, Brian (Nebraska U.) ; Dykstra, Dave (Fermilab) ; Fisk, Ian (New York U.) ; Fuess, Stuart (Fermilab) ; Garzoglio, Gabriele (Fermilab) ; Girone, Maria (CERN) ; Gutsche, Oliver (Fermilab) ; Hufnagel, Dirk (Fermilab) et al.
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. [...]
arXiv:1710.00100; FERMILAB-PUB-17-092-CD
2017-09-29 - 15 p.
Published in : Comput. Softw. Big Sci. 1 (2017) 1