Author(s): Balcas, J (Vilnius U.) ; Belforte, S (INFN, Trieste ; Trieste U.) ; Bockelman, B (Nebraska U.) ; Colling, D (Imperial Coll., London) ; Gutsche, O (Fermilab) ; Hufnagel, D (Fermilab) ; Khan, F (Quaid-i-Azam U.) ; Larson, K (Fermilab) ; Letts, J (UC, San Diego) ; Mascheroni, M (INFN, Milan Bicocca ; Milan Bicocca U.) ; Mason, D (Fermilab) ; McCrea, A (UC, San Diego) ; Piperov, S (Brown U.) ; Saiz-Santos, M (UC, San Diego) ; Sfiligoi, I (UC, San Diego) ; Tanasijczuk, A (UC, San Diego) ; Wissing, C (DESY)
Abstract: CMS will require access to more than 125k processor cores at the beginning of Run 2 in 2015 to carry out its ambitious physics program with more events of higher complexity. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources, and HPC supercomputing centers were made available to CMS, which further complicated the operation of the submission infrastructure. In this presentation we will discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer for grid, cloud, local batch, and opportunistic resources and sites. We will address the challenges of integrating the various types of resources, describe the efficiency gains and simplifications from using a common resource provisioning layer, and discuss the solutions found. We will finish with an outlook on future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.
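The key idea behind glideinWMS is late binding: pilot jobs ("glideins") are submitted to each kind of resource and, once running, join a single virtual HTCondor pool from which user jobs are then matched. The Python sketch below illustrates that pilot model in miniature; the queue, the pilot function, and the resource names are purely illustrative stand-ins under that assumption, not glideinWMS or HTCondor APIs.

import queue
import threading
import time

# Central work queue: a stand-in for the virtual batch pool that users
# submit to, regardless of where the pilots actually land.
work_queue = queue.Queue()

def pilot(resource_name):
    """A 'glidein': starts on some resource, then pulls real jobs (late binding)."""
    while True:
        try:
            # The job is matched to a slot only after the pilot is running.
            job = work_queue.get(timeout=1)
        except queue.Empty:
            return  # no more work: the pilot exits and the slot is released
        print(f"{resource_name}: running {job}")
        time.sleep(0.1)  # stand-in for the actual payload
        work_queue.task_done()

# Users see one uniform queue of jobs.
for i in range(8):
    work_queue.put(f"job-{i}")

# Pilots are provisioned on heterogeneous resources (grid, cloud, batch, HPC),
# yet all of them serve the same central queue.
for name in ("grid-site", "cloud-vm", "local-batch", "hpc-node"):
    threading.Thread(target=pilot, args=(name,), daemon=True).start()

work_queue.join()  # wait until every queued job has been processed

Because the payload is bound to a slot only after the pilot has started, resource heterogeneity is hidden behind the common provisioning layer, which is the simplification the abstract refers to.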