Abstract
The ATLAS Experiment in the HL-LHC era is expected to deliver an unprecedented amount of scientific data. As the demand for disk storage capacity in ATLAS continues to rise steadily, the BNL Scientific Data and Computing Center (SDCC) faces challenges in the cost of maintaining multiple disk copies and in adapting to the coming ATLAS storage requirements. To address these challenges, the SDCC Storage team has undertaken a thorough analysis of the ATLAS experiment’s requirements, matching them to suitable storage options and strategies, and has explored alternatives to enhance or replace the current storage solution. This paper presents the main challenges encountered while supporting big-data experiments such as ATLAS. We describe the experiment’s specific requirements and priorities, focusing in particular on the storage system characteristics critical for the high-luminosity run and on how the key storage components provided by the Storage team work together: the dCache disk storage system, its archival back-end HPSS, and its OS-level backend storage. Specifically, we investigate a novel approach that integrates Lustre and XRootD, in which Lustre serves as the backend storage and XRootD acts as the access-layer frontend supporting various grid access protocols. We also describe the validation and commissioning tests, including a performance comparison between dCache and XRootD. Furthermore, we provide a performance and cost analysis comparing OpenZFS and Linux MD RAID, evaluate different storage software stacks, and showcase stress tests conducted to validate Third-Party Copy (TPC) functionality.
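As a minimal illustration of the kind of TPC validation mentioned above, the sketch below issues a WLCG-style HTTP third-party-copy (COPY) request in pull mode using the Python requests library. The hostnames, file paths, proxy location, and header values are hypothetical placeholders chosen for the example; they do not reflect the actual BNL endpoints or the test harness used in this work.

```python
import requests

# Hypothetical endpoints and credentials for illustration only; the actual BNL
# hosts, paths, and authentication setup are not specified here.
SOURCE = "https://dcache-door.example.org:2880/pnfs/atlas/testfile"
DESTINATION = "https://xrootd-frontend.example.org:1094/lustre/atlas/testfile"

# HTTP third-party copy in "pull" mode: the client asks the destination server
# to fetch the file directly from the source, so the data never flows through
# the client host.
response = requests.request(
    "COPY",
    DESTINATION,
    headers={
        "Source": SOURCE,            # where the destination should pull from
        "X-Number-Of-Streams": "4",  # parallel TCP streams, a typical tuning knob
        "Credential": "none",        # or a delegated token/X.509 credential in practice
    },
    cert=("/tmp/x509up_u1000", "/tmp/x509up_u1000"),  # grid proxy (placeholder path)
    verify="/etc/grid-security/certificates",
    timeout=600,
)

# A successful request is typically answered with 202 and a body containing
# periodic performance markers; printing them shows transfer progress.
print(response.status_code)
print(response.text)
```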