
CERN Document Server — found 2,026 records, showing 1 - 10. Search took 0.96 seconds.
1.
Summer Student report - Continuous Integration at DIRAC / Lu, Anton
DIRAC is a software framework for distributed computing that provides a complete solution to one or more user communities requiring access to distributed resources [...]
CERN-STUDENTS-Note-2019-130.
- 2019
Access to fulltext
2.
Utilizing Distributed Heterogeneous Computing with PanDA in ATLAS / ATLAS Collaboration
In recent years, advanced and complex analysis workflows have gained increasing importance in the ATLAS experiment at CERN, one of the large scientific experiments at the Large Hadron Collider (LHC). Support for such workflows has allowed users to exploit remote computing resources and service providers distributed worldwide, overcoming limitations on local resources and services. [...]
ATL-SOFT-SLIDE-2023-156.- Geneva : CERN, 2023 - 14 p. Fulltext: PDF; External link: Original Communication (restricted to ATLAS)
In : 26th International Conference on Computing in High Energy & Nuclear Physics, Norfolk, Virginia, US, 8 - 12 May 2023
3.
Utilizing Distributed Heterogeneous Computing with PanDA in ATLAS / ATLAS Collaboration
In recent years, advanced and complex analysis workflows have gained increasing importance in the ATLAS experiment at CERN, one of the large scientific experiments at LHC. [...]
ATL-SOFT-PROC-2023-022.
- 2024 - 8 p.
Original Communication (restricted to ATLAS) - Full text
4.
LHCb: Pilot Framework and the DIRAC WMS
Reference: Poster-2009-100
Created: 2009 - 1 p.
Creator(s): Graciani, R; Tsaregorodtsev, A; Casajus, A

DIRAC, the LHCb community Grid solution, has pioneered the use of pilot jobs in the Grid. Pilot jobs provide a homogeneous interface to a heterogeneous set of computing resources. At the same time, pilot jobs allow the scheduling decision to be delayed to the last moment, thus taking into account the precise running conditions at the resource and last-moment requests to the system. The DIRAC Workload Management System provides a single scheduling mechanism for jobs with very different profiles. To achieve an overall optimisation, it organizes pending jobs in task queues, both for individual users and production activities. Task queues are created from jobs with similar requirements. Following the VO policy, a priority is assigned to each task queue. Pilot submission and subsequent job matching are based on these priorities following a statistical approach. Details of the implementation and the security aspects of this framework will be discussed.
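The task-queue matching scheme the abstract describes can be sketched in a few lines. This is a hypothetical illustration of the idea only (jobs grouped by identical requirements, queues weighted by VO-assigned priority, pilots matched statistically); none of the names correspond to the actual DIRAC API.

```python
import random
from collections import defaultdict

def build_task_queues(jobs):
    """Group pending jobs by their (hashable) requirements."""
    queues = defaultdict(list)
    for job in jobs:
        key = (job["platform"], job["cpu_time"])
        queues[key].append(job)
    return queues

def match_pilot(queues, priorities, capabilities):
    """Pick an eligible queue with probability proportional to its
    priority, then pop one job from it — the 'statistical approach'
    mentioned above. Returns None if nothing matches the pilot."""
    eligible = [k for k in queues
                if queues[k]
                and k[0] == capabilities["platform"]
                and k[1] <= capabilities["cpu_time"]]
    if not eligible:
        return None
    weights = [priorities.get(k, 1.0) for k in eligible]
    chosen = random.choices(eligible, weights=weights)[0]
    return queues[chosen].pop(0)

jobs = [
    {"id": 1, "platform": "x86_64", "cpu_time": 3600},
    {"id": 2, "platform": "x86_64", "cpu_time": 3600},
    {"id": 3, "platform": "arm64", "cpu_time": 7200},
]
tqs = build_task_queues(jobs)
job = match_pilot(tqs,
                  priorities={("x86_64", 3600): 2.0},
                  capabilities={"platform": "x86_64", "cpu_time": 86400})
```

The pilot only sees queues it can actually serve, so the late-binding scheduling decision happens at match time, as the abstract notes.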

Related links:
Computing in High Energy and Nuclear Physics (CHEP 2009)
© CERN Geneva

Fulltext
5.
Addressing Scalability with Message Queues: Architecture and Use Cases for DIRAC Interware
Reference: Poster-2018-646
Created: 2018 - 1 p.
Creator(s): Krzemien, Wojciech Jan

The Message Queue architecture is an asynchronous communication scheme that provides an attractive solution for certain scenarios in the distributed computing model. The introduction of an intermediate component (the queue) between the interacting processes decouples the endpoints, making the system more flexible and providing high scalability and redundancy. Message queue brokers such as RabbitMQ, ActiveMQ or Kafka are proven technologies in wide use today. DIRAC is a general-purpose interware software for distributed computing systems, which offers a common interface to a number of heterogeneous providers and guarantees transparent and reliable usage of the resources. The DIRAC platform has been adopted by several scientific projects, including High Energy Physics communities such as LHCb, the Linear Collider and Belle II. A generic Message Queue interface has been incorporated into the DIRAC framework to help solve the scalability challenges that must be addressed during LHC Run 3, starting in 2021. It allows the MQ scheme to be used for message exchange among DIRAC components, or for communication with third-party services. In this contribution we describe the integration of MQ systems with DIRAC and present several use cases, focusing on the incorporation of MQ into the pilot logging system. Message Queues are also foreseen as a backbone of the DIRAC component logging and monitoring systems. The results of the first performance tests will be presented.
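The decoupling the abstract attributes to the intermediate queue can be illustrated with a stdlib-only sketch: the producer (a pilot emitting log records) and the consumer share nothing but the queue, so either side can scale or stall independently. This is an assumption-laden illustration using Python's `queue` module, not the DIRAC MessageQueue interface or a real broker such as RabbitMQ.

```python
import queue
import threading

broker = queue.Queue()   # stands in for the MQ broker
received = []

def pilot_logger(messages):
    """Producer: push log records to the broker and return immediately;
    it never waits on, or even knows about, the consumer."""
    for msg in messages:
        broker.put(msg)
    broker.put(None)     # sentinel marking end of stream

def log_consumer():
    """Consumer: drain the broker at its own pace."""
    while True:
        msg = broker.get()
        if msg is None:
            break
        received.append(msg)

t = threading.Thread(target=log_consumer)
t.start()
pilot_logger(["pilot started", "job matched", "job finished"])
t.join()
```

Swapping the in-process `Queue` for a networked broker adds the redundancy and scalability the abstract mentions without changing either endpoint's logic — which is the point of the architecture.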

© CERN Geneva

Access to files
6.
Integration of Cloud resources in the LHCb Distributed Computing / Ubeda Garcia, Mario (CERN) ; Mendez Munoz, V (PIC, Bellaterra ; Barcelona, IFAE) ; Stagni, Federico (Humboldt U., Berlin) ; Cabarrou, Baptiste (CERN) ; Rauschmayr, Nathalie (CERN) ; Charpentier, Philippe (CERN) ; Closier, Joel (CERN)
This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. [...]
2014 - 7 p. - Published in : J. Phys.: Conf. Ser. 513 (2014) 032099
In : 20th International Conference on Computing in High Energy and Nuclear Physics 2013, Amsterdam, Netherlands, 14 - 18 Oct 2013, pp.032099
7.
Evolution of the CS3APIS and the Interoperability Platform / Arora, Ishank (speaker) (CERN)
CS3APIs started out as a means to offer seamless integration of applications and storage providers, solving the issue of fragmentation of services. Its reference implementation, Reva, has evolved over time to effortlessly allow plugging in numerous authentication mechanisms, storage mounts and application handlers through dynamic rule-based registries. [...]
2021 - 567. HEP Computing. External links: Talk details; Event details. In : CS3 2021 - Cloud Storage Synchronization and Sharing
8.
Total Cost of Ownership and Evaluation of Google Cloud Resources for the ATLAS Experiment at the LHC / South, David (Deutsches Elektronen-Synchrotron (DE)) ; Merino Arevalo, Gonzalo (The Barcelona Institute of Science and Technology (BIST) (ES)) /ATLAS Collaboration
The ATLAS Google Project was established as part of an ongoing evaluation of the use of commercial clouds by the ATLAS Collaboration, in anticipation of the potential future adoption of such resources by WLCG grid sites to fulfil or complement their computing pledges. Seamless integration of Google cloud resources into the worldwide ATLAS distributed computing infrastructure was achieved at large scale and for an extended period of time, and hence cloud resources are shown to be an effective mechanism to provide additional, flexible computing capacity to ATLAS. [...]
ATL-SOFT-SLIDE-2024-544.- Geneva : CERN, 2024 - 30 p. Fulltext: PDF; External link: Original Communication (restricted to ATLAS)
9.
The DIRAC interware: current, upcoming and planned capabilities and technologies / Stagni, Federico (CERN) ; Tsaregorodtsev, Andrei (Marseille, CPPM) ; Sailer, André (CERN) ; Haen, Christophe (CERN)
Efficient access to distributed computing and storage resources is mandatory for the success of current and future High Energy and Nuclear Physics Experiments. DIRAC is an interware to build and operate distributed computing systems. [...]
2020 - 9 p. - Published in : EPJ Web Conf. 245 (2020) 03035 Fulltext from publisher: PDF;
In : 24th International Conference on Computing in High Energy and Nuclear Physics, Adelaide, Australia, 4 - 8 Nov 2019, pp.03035
10.
ARC and gLite Interoperability in ATLAS Sites / Filipcic, A (Stefan Inst., Ljubljana) ; Gadomski, S (Geneva U.) ; Haug, S (Bern U., LHEP) /for the ATLAS Collaboration
In the Worldwide LHC Computing Grid, several sites deploy both the gLite and ARC middlewares. In this manner they simultaneously provide resources to more than one federation of sites. [...]
ATL-SOFT-SLIDE-2010-382.- Geneva : CERN, 2010 Fulltext: PDF; External link: Original Communication (restricted to ATLAS)
In : Conference on Computing in High Energy and Nuclear Physics 2010, Taipei, Taiwan, 18 - 22 Oct 2010
