001607090 001__ 1607090
001607090 005__ 20141028091110.0
001607090 0248_ $$aoai:cds.cern.ch:1607090$$pcerncds:FULLTEXT
001607090 037__ $$aATL-DAQ-SLIDE-2013-814
001607090 041__ $$aeng
001607090 088__ $$9ATL-COM-DAQ-2013-099
001607090 100__ $$aBallestrero, S$$uUniversity of Johannesburg, South Africa
001607090 110__ $$aThe ATLAS collaboration
001607090 245__ $$aDesign and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm
001607090 260__ $$c2013
001607090 269__ $$aGeneva$$bCERN$$c09 Oct 2013
001607090 300__ $$a1 p
001607090 520__ $$aWith the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1), there is a remarkable opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU- and not I/O-bound. This contribution gives a thorough review of all the stages of the Sim@P1 project, dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and to using it to run large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and usability of the ATLAS private network; OpenStack has been chosen to provide the cloud management layer. The approaches to organizing support for the sustained operation of the system on both the infrastructural (hardware, virtualization platform) and logical (site support and job execution) levels are also discussed. The project is the result of a combined effort of the ATLAS TDAQ SysAdmin and NetAdmin teams, the CERN IT-SDC-OL Department, and the RHIC and ATLAS Computing Facility at BNL. The experience obtained while operating the Sim@P1 infrastructure over the last 3.5 months shows that the virtualized infrastructure deployed on top of the ATLAS HLT farm is capable of contributing to ATLAS MC production at the level of computing power of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.
001607090 594__ $$aSLIDE
001607090 595__ $$aCERN CDS-Invenio WebSubmit
001607090 65017 $$2SzGeCERN$$aDetectors and Experimental Techniques
001607090 65027 $$2SzGeCERN$$aDAQ and Trigger
001607090 6531_ $$9CERN$$aSimulation
001607090 6531_ $$9CERN$$aSim@P1
001607090 6531_ $$9CERN$$aATLAS
001607090 6531_ $$9CERN$$aPoint1
001607090 6531_ $$9CERN$$aTDAQ
001607090 690C_ $$aCERN
001607090 690C_ $$aINTNOTE
001607090 690C_ $$aPRIVATLAS
001607090 690C_ $$aPUBLATLASSLIDE
001607090 693__ $$aCERN LHC$$eATLAS
001607090 700__ $$aBatraneanu, S M$$uUniversity of California, Irvine, USA
001607090 700__ $$aBrasolin, F$$uIstituto Nazionale di Fisica Nucleare Sezione di Bologna, Italy
001607090 700__ $$aContescu, C$$uCERN and Polytechnic University of Bucharest, Romania
001607090 700__ $$aDi Girolamo, A$$uCERN IT Experiment Support Group (CERN IT ES)
001607090 700__ $$aLee, C J$$uCERN and University of Johannesburg, South Africa
001607090 700__ $$aPozo Astigarraga, M E$$uCERN
001607090 700__ $$aScannicchio, D A$$uUniversity of California, Irvine, USA
001607090 700__ $$aTwomey, M S$$uUniversity of Washington Department of Physics, USA
001607090 700__ $$aZaytsev, A$$uBrookhaven National Laboratory (BNL), USA
001607090 710__ $$5PH-EP
001607090 859__ [email protected]
001607090 859__ [email protected]
001607090 859__ [email protected]
001607090 859__ [email protected]
001607090 8564_ $$uhttps://cds.cern.ch/record/1604213$$yOriginal Communication (restricted to ATLAS)
001607090 8564_ $$uhttps://cds.cern.ch/record/1607090/files/ATL-DAQ-SLIDE-2013-814.pdf
001607090 916__ $$sn$$w201370
001607090 960__ $$a91
001607090 963__ $$aPUBLIC
001607090 970__ $$a000734799CER
001607090 980__ $$aPUBLATLASSLIDE