CERN Document Server: 50 records found; displaying records 1-10.
1.
The U.S. CMS HL-LHC R&D Strategic Plan / CMS Collaboration
The HL-LHC run is anticipated to start at the end of this decade and will pose a significant challenge for the scale of the HEP software and computing infrastructure. The mission of the U.S. [...]
arXiv:2312.00772; FERMILAB-CONF-23-531-CSAID-PPD; CMS-CR-2023-131.- Geneva : CERN, 2024 - 8 p. - Published in : EPJ Web Conf. 295 (2024) 04050 Fulltext: PDF; External link: Fermilab Library Server
In : 26th International Conference on Computing in High Energy & Nuclear Physics, Norfolk, Virginia, US, 8 - 12 May 2023
2.
CMS Tier-0 data processing during the detector commissioning in Run-3 / Amado Valderrama, Jhonatan Andres (Nebraska U.) ; Eysermans, Jan (MIT) ; Giraldo Villa, German Felipe (CERN) ; Hufnagel, Dirk (Fermilab) ; Kovalskyi, Dmytro (MIT) ; Linares Sancho, Antonio (Wisconsin U., Madison) /CMS Collaboration
The CMS Tier-0 system is responsible for the prompt processing and distribution of the data collected by the CMS Experiment. A number of upgrades were implemented during the long shutdown 2 of the Large Hadron Collider, which improved the performance and reliability of the system. [...]
CMS-CR-2023-129.- Geneva : CERN, 2024 - 9 p. - Published in : EPJ Web Conf. 295 (2024) 03007 Fulltext: PDF;
In : 26th International Conference on Computing in High Energy & Nuclear Physics, Norfolk, Virginia, US, 8 - 12 May 2023
3.
Extending the distributed computing infrastructure of the CMS experiment with HPC resources / CMS Collaboration
Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently, the highest-energy accelerator is the LHC at CERN, in Geneva, Switzerland. [...]
2023 - 7 p. - Published in : J. Phys. : Conf. Ser. 2438 (2023) 012039 Fulltext: PDF; External link: Fermilab Library Server
In : 20th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2021), Daejeon, Korea, 29 Nov - 3 Dec 2021, pp.012039
4.
Development of the CMS detector for the CERN LHC Run 3 / CMS Collaboration
Since the initial data taking of the CERN LHC, the CMS experiment has undergone substantial upgrades and improvements. This paper discusses the CMS detector as it is configured for the third data-taking period of the CERN LHC, Run 3, which started in 2022. [...]
arXiv:2309.05466; CMS-PRF-21-001; CERN-EP-2023-136; CMS-PRF-21-001-003.- Geneva : CERN, 2024-05-23 - 257 p. - Published in : JINST 19 (2024) P05064 Fulltext: PDF; Fulltext from Publisher: PDF; External links: Additional information for the analysis; CMS AuthorList
In : The Large Hadron Collider and The Experiments for Run 3
5.
HPC resource integration into CMS Computing via HEPCloud / Hufnagel, Dirk (Fermilab) ; Holzman, Burt (Fermilab) ; Mason, David (Fermilab) ; Mhashilkar, Parag (Fermilab) ; Timm, Steven (Fermilab) ; Tiradani, Anthony (Fermilab) ; Khan, Farrukh Aftab (Fermilab) ; Gutsche, Oliver (Fermilab) ; Bloom, Kenneth (U. Nebraska, Lincoln)
The higher energy and luminosity from the LHC in Run 2 have put increased pressure on CMS computing resources. Extrapolating to even higher luminosities (and thus higher event complexities and trigger rates) beyond Run 3, it becomes clear that simply scaling up the current model of CMS computing alone will become economically unfeasible. [...]
CMS-CR-2018-283; FERMILAB-CONF-18-630-CD.- Geneva : CERN, 2019 - 8 p. - Published in : EPJ Web Conf. 214 (2019) 03031 CMS Note: PDF; Fulltext from publisher: PDF;
In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018, pp.03031
6.
Connecting restricted, high-availability, or low-latency resources to a seamless Global Pool for CMS / Balcas, J (Caltech) ; Bockelman, B (Nebraska U.) ; Hufnagel, D (Fermilab) ; Hurtado Anampa, K (Notre Dame U.) ; Jayatilaka, B (Fermilab) ; Khan, F (NCP, Islamabad) ; Larson, K (Fermilab) ; Letts, J (UC, San Diego) ; Mascheroni, M (Fermilab) ; Mohapatra, A (Wisconsin U., Madison) et al. /CMS
The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and access to opportunistic cycles on the CMS High Level Trigger farm, among others. [...]
2017 - 8 p. - Published in : J. Phys.: Conf. Ser. 898 (2017) 052037 Fulltext: PDF;
In : 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10 - 14 Oct 2016, pp.052037
7.
CMS use of allocation based HPC resources / Hufnagel, Dirk (Fermilab) /CMS
The higher energy and luminosity from the LHC in Run 2 have put increased pressure on CMS computing resources. Extrapolating to even higher luminosities (and thus higher event complexities and trigger rates) in Run 3 and beyond, it becomes clear that the current model of CMS computing alone will not scale accordingly. [...]
2017 - 7 p. - Published in : J. Phys.: Conf. Ser. 898 (2017) 092050 Fulltext: PDF;
In : 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10 - 14 Oct 2016, pp.092050
8.
Stability and scalability of the CMS Global Pool: Pushing HTCondor and glideinWMS to new limits / Balcas, J (Caltech) ; Bockelman, B (Nebraska U.) ; Hufnagel, D (Fermilab) ; Hurtado Anampa, K (Notre Dame U.) ; Aftab Khan, F (NCP, Islamabad) ; Larson, K (Fermilab) ; Letts, J (UC, San Diego) ; Marra da Silva, J (Sao Paulo, IFT) ; Mascheroni, M (Fermilab) ; Mason, D (Fermilab) et al.
The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. [...]
2017 - 7 p. - Published in : J. Phys.: Conf. Ser. 898 (2017) 052031 Fulltext: PDF;
In : 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10 - 14 Oct 2016, pp.052031
9.
Experience in using commercial clouds in CMS / Bauerdick, L (Fermilab) ; Bockelman, B (Nebraska U.) ; Dykstra, D (Fermilab) ; Fuess, S (Fermilab) ; Garzoglio, G (Fermilab) ; Girone, M (CERN) ; Gutsche, O (Fermilab) ; Holzman, B (Fermilab) ; Hufnagel, D (Fermilab) ; Kim, H (Fermilab) et al. /CMS
Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. [...]
2017 - 8 p. - Published in : J. Phys.: Conf. Ser. 898 (2017) 052019 Fulltext: PDF;
In : 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10 - 14 Oct 2016, pp.052019
10.
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation / Holzman, Burt (Fermilab) ; Bauerdick, Lothar A.T. (Fermilab) ; Bockelman, Brian (Nebraska U.) ; Dykstra, Dave (Fermilab) ; Fisk, Ian (New York U.) ; Fuess, Stuart (Fermilab) ; Garzoglio, Gabriele (Fermilab) ; Girone, Maria (CERN) ; Gutsche, Oliver (Fermilab) ; Hufnagel, Dirk (Fermilab) et al.
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. [...]
arXiv:1710.00100; FERMILAB-PUB-17-092-CD.- 2017-09-29 - 15 p. - Published in : Comput. Softw. Big Sci. 1 (2017) 1 Fulltext: arxiv:1710.00100 - PDF; fermilab-pub-17-092-cd - PDF; External link: Fermilab Accepted Manuscript

See also authors with similar names: Hufnagel, Dirk (17 records)