Title
| Running Jobs in the Vacuum |
Author(s)
| McNab, A ; Stagni, F (CERN) ; Ubeda Garcia, M (CERN) |
Publication
| 2014 |
Number of pages
| 7 |
In:
| J. Phys.: Conf. Ser. 513 (2014) 032065 |
In:
| 20th International Conference on Computing in High Energy and Nuclear Physics 2013, Amsterdam, Netherlands, 14 - 18 Oct 2013, pp.032065 |
DOI
| 10.1088/1742-6596/513/3/032065 |
Subject category
| Computing and Computers |
Abstract
| We present a model for the operation of computing nodes at a site using Virtual Machines (VMs), in which VMs are created and contextualized for experiments by the site itself. To the experiment, these VMs appear to be produced spontaneously 'in the vacuum' rather than having to be requested from the site one by one. This model takes advantage of the pilot job frameworks already adopted by many experiments. In the Vacuum model, the contextualization process starts a job agent within the VM, and real jobs are fetched from the central task queue as normal. An implementation of the Vacuum scheme, Vac, is presented, in which a VM factory runs on each physical worker node to create and contextualize its set of VMs. With this system, each node's VM factory decides which experiments' VMs to run, based on site-wide target shares and on a peer-to-peer protocol in which the site's VM factories query each other to discover which VM types they are running. A property of this system is that there is no gatekeeper service, head node, or batch system accepting jobs and directing them to particular worker nodes, which avoids several central points of failure. Finally, we describe tests of the Vac system using jobs from the central LHCb task queue, using the same contextualization procedure for VMs that LHCb developed for clouds. |
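The abstract describes each VM factory choosing which experiment's VM to start from site-wide target shares and the VM types its peers report running. A minimal sketch of that decision step, assuming illustrative names throughout (this is not the real Vac API or protocol), might look like:

```python
# Hypothetical sketch of the per-node decision described in the abstract:
# a VM factory queries its peers, tallies which experiments' VMs are
# running site-wide, and starts the VM type furthest below its target share.
# All identifiers here are illustrative assumptions, not Vac's actual code.

def choose_vm_type(target_shares, peer_reports):
    """Pick the experiment whose running share lags its target the most.

    target_shares: {experiment: fraction}, fractions summing to 1.0
    peer_reports:  list of experiment names, one per VM reported
                   running by the site's other VM factories
    """
    total = max(len(peer_reports), 1)
    counts = {}
    for exp in peer_reports:
        counts[exp] = counts.get(exp, 0) + 1
    # Deficit = target share minus current share; largest deficit wins.
    deficits = {exp: share - counts.get(exp, 0) / total
                for exp, share in target_shares.items()}
    return max(deficits, key=deficits.get)

# Example: LHCb is under-represented relative to a 70% target share.
print(choose_vm_type({"lhcb": 0.7, "atlas": 0.3},
                     ["lhcb", "atlas", "atlas"]))  # -> lhcb
```

Because each factory makes this choice locally from peer reports, no central head node or batch system is needed, which is the property the abstract highlights.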
Copyright/License
| publication: (License: CC-BY) |