
Title Disk storage at CERN
Author(s) Mascetti, L (CERN) ; Cano, E (CERN) ; Chan, B (CERN) ; Espinal, X (CERN) ; Fiorot, A (CERN) ; González Labrador, H (CERN) ; Iven, J (CERN) ; Lamanna, M (CERN) ; Lo Presti, G (CERN) ; Mościcki, JT (CERN) ; Peters, AJ (CERN) ; Ponce, S (CERN) ; Rousseau, H (CERN) ; van der Ster, D (CERN)
Publication 2015
Number of pages 7
In: J. Phys.: Conf. Ser. 664 (2015) 042035
In: 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.042035
DOI 10.1088/1742-6596/664/4/042035
Subject category Computing and Computers
Abstract CERN IT DSS operates the main storage resources for data taking and physics analysis mainly via three systems: AFS, CASTOR and EOS. The total usable space available on disk for users is about 100 PB (with relative ratios 1:20:120). EOS actively uses the two CERN Tier-0 centres (Meyrin and Wigner) in a 50:50 ratio. IT DSS also provides sizeable on-demand resources for IT services, most notably OpenStack and NFS-based clients: this is provided by a Ceph infrastructure (3 PB) and a few proprietary servers (NetApp). We describe our operational experience and recent changes to these systems, with special emphasis on the present usage for LHC data taking and the convergence to commodity hardware (nodes with 200 TB each, with optional SSD) shared across all services. We also describe our experience in coupling commodity and home-grown solutions (e.g. CERNBox integration in EOS, Ceph disk pools for AFS, CASTOR and NFS) and finally the future evolution of these systems for WLCG and beyond.
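
The quoted capacity figures can be made concrete with a back-of-the-envelope split. The short Python sketch below (illustrative only, not taken from the paper) divides the roughly 100 PB of user-visible disk across AFS, CASTOR and EOS according to the stated 1:20:120 ratio; the exact per-system numbers are assumptions derived from that ratio, not values given in the record.

    # Illustrative split of ~100 PB of user disk across the three systems,
    # using the 1:20:120 ratio quoted in the abstract.
    TOTAL_PB = 100                       # approximate total usable disk space
    RATIOS = {"AFS": 1, "CASTOR": 20, "EOS": 120}

    parts = sum(RATIOS.values())         # 141 ratio units in total
    for system, share in RATIOS.items():
        print(f"{system:7s} ~ {TOTAL_PB * share / parts:5.1f} PB")

    # Rough output:
    #   AFS     ~   0.7 PB
    #   CASTOR  ~  14.2 PB
    #   EOS     ~  85.1 PB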
Copyright/License publication: © 2015-2025 The Author(s) (License: CC-BY-3.0)

Corresponding record in: Inspire


Record created 2016-02-26, last modified 2022-08-10


IOP Open Access article:
Download full text
PDF