2.
Multicore job scheduling in the Worldwide LHC Computing Grid
/ Forti, A (Manchester U.) ; Yzquierdo, A P (PIC, Bellaterra ; Madrid, CIEMAT) ; Hartmann, T (KIT, Karlsruhe, SCC) ; Alef, M (KIT, Karlsruhe, SCC) ; Lahiff, A (Rutherford) ; Templon, J (NIKHEF, Amsterdam) ; Pra, S Dal (INFN, CNAF) ; Gila, M (Zurich, ETH-CSCS/SCSC) ; Skipsey, S (Glasgow U.) ; Acosta-Silva, C (PIC, Bellaterra ; Barcelona, IFAE) et al.
After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. [...]
2015 - 8 p.
- Published in : J. Phys.: Conf. Ser. 664 (2015) 062016
IOP Open Access article: PDF;
In : 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.062016
3.
Extending DIRAC File Management with Erasure-Coding for efficient storage
/ Skipsey, Samuel Cadellin (Glasgow U.) ; Todev, Paulin (Glasgow U.) ; Britton, David (Glasgow U.) ; Crooks, David (Glasgow U.) ; Roy, Gareth (Glasgow U.)
The state of the art in Grid-style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. [...]
arXiv:1510.09117.-
2015
External link: Preprint
In : 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.042051
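The abstract above contrasts full replication with erasure coding's better space efficiency. As a minimal sketch of that idea (a toy single-parity code, not the actual codes used in the DIRAC extension, and all function names here are illustrative), a file split into k data shards plus one XOR parity shard survives the loss of any one shard while storing k+1 shards instead of 2k full replicas:

```python
import functools

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list:
    """Split data into k equal shards plus one XOR parity shard.

    Toy (k+1, k) code for illustration only; real erasure-coded
    storage uses e.g. Reed-Solomon codes tolerating multiple losses.
    """
    pad = (-len(data)) % k          # zero-pad so data divides evenly
    data = data + b"\x00" * pad
    size = len(data) // k
    shards = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = functools.reduce(xor_bytes, shards)
    return shards + [parity]

def recover(shards: list, lost_index: int) -> bytes:
    """Rebuild any one missing shard (data or parity) by XOR-ing
    the remaining shards together."""
    present = [s for i, s in enumerate(shards) if i != lost_index]
    return functools.reduce(xor_bytes, present)
```

For example, `encode(b"hello world!", 3)` stores four shards; if the shard holding `b"o wo"` is lost, `recover` rebuilds it from the other two data shards and the parity shard.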
4.
Analysis and improvement of data-set level file distribution in Disk Pool Manager
/ Skipsey, Samuel Cadellin ; Purdie, Stuart ; Britton, David ; Mitchell, Mark ; Bhimji, Wahid ; Smith, David (CERN)
Of the three most widely used implementations of the WLCG Storage Element specification, Disk Pool Manager[1, 2] (DPM) has the simplest implementation of file placement balancing (StoRM doesn't attempt this, leaving it up to the underlying filesystem, which can be very sophisticated in itself). DPM uses a round-robin algorithm (with optional filesystem weighting) for placing files across filesystems and servers. [...]
2014 - 5 p.
- Published in : J. Phys.: Conf. Ser. 513 (2014) 042042
In : 20th International Conference on Computing in High Energy and Nuclear Physics 2013, Amsterdam, Netherlands, 14 - 18 Oct 2013, pp.042042
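The abstract above describes DPM's round-robin placement with optional filesystem weighting. A minimal sketch of that general scheme (illustrative only; not DPM's actual implementation, and the names are hypothetical) is to expand each filesystem by its weight and cycle over the result, so a filesystem with weight 2 receives twice as many new files:

```python
import itertools

def weighted_round_robin(filesystems):
    """Return an infinite iterator of placement targets.

    filesystems: list of (name, weight) pairs; each filesystem is
    repeated `weight` times in the cycle, so files are distributed
    proportionally to weight. Toy sketch of weighted round-robin,
    not DPM's actual placement code.
    """
    expanded = [fs for fs, weight in filesystems for _ in range(weight)]
    return itertools.cycle(expanded)
```

For example, with `[("fs1", 1), ("fs2", 2)]` the first six placements alternate as `fs1, fs2, fs2, fs1, fs2, fs2`, giving `fs2` two-thirds of the files.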
5.
Analysing I/O bottlenecks in LHC data analysis on grid storage resources
/ Bhimji, W (Edinburgh U.) ; Clark, P (Edinburgh U.) ; Doidge, M (Lancaster U.) ; Hellmich, M P (CERN) ; Skipsey, S (Glasgow U.) ; Vukotic, I (Chicago U.)
We describe recent I/O testing frameworks that we have developed and applied within the UK GridPP Collaboration, the ATLAS experiment and the DPM team, for a variety of distinct purposes. These include benchmarking vendor supplied storage products, discovering scaling limits of SRM solutions, tuning of storage systems for experiment data analysis, evaluating file access protocols, and exploring I/O read patterns of experiment software and their underlying event data models. [...]
2012 - 9 p.
In : Computing in High Energy and Nuclear Physics 2012, New York, NY, USA, 21 - 25 May 2012, pp.042010
6.
Testing performance of Standards-based protocols in DPM
/ Skipsey, Samuel (Glasgow U.) ; Bhimji, Wahid (Edinburgh U.) ; Rocha, Ricardo (CERN)
In the interests of promoting the increased use of non-proprietary protocols in grid storage systems, we perform tests on the performance of WebDAV and pNFS transport with the DPM storage solution. We find that the standards-based protocols behave similarly to the proprietary protocols currently in use, despite encountering some issues with the state of the implementation itself. [...]
2012 - 6 p.
In : Computing in High Energy and Nuclear Physics 2012, New York, NY, USA, 21 - 25 May 2012, pp.052064
7.
Multi-core job submission and grid resource scheduling for ATLAS AthenaMP
/ Crooks, D ; Calafiura, P ; Harrington, R ; Jha, M ; Maeno, T ; Purdie, S ; Severini, H ; Skipsey, S ; Tsulaia, V ; Walker, R et al.
AthenaMP is the multi-core implementation of the ATLAS software framework and allows the efficient sharing of memory pages between multiple threads of execution. This has now been validated for production and delivers a significant reduction on overall memory footprint with negligible CPU overhead. [...]
ATL-SOFT-SLIDE-2012-242.-
Geneva : CERN, 2012
Fulltext: PDF; External link: Original Communication (restricted to ATLAS)
In : Computing in High Energy and Nuclear Physics 2012, New York, NY, USA, 21 - 25 May 2012
8.
Multi-core job submission and grid resource scheduling for ATLAS AthenaMP
/ Crooks, D (Edinburgh U.) ; Calafiura, P (LBNL, Berkeley) ; Harrington, R (Edinburgh U.) ; Purdie, S (Edinburgh U.) ; Severini, H (Oklahoma U.) ; Skipsey, S (Glasgow U.) ; Tsulaia, V (LBNL, Berkeley) ; Washbrook, A (Edinburgh U.)
AthenaMP is the multi-core implementation of the ATLAS software framework and allows the efficient sharing of memory pages between multiple threads of execution. [...]
ATL-SOFT-PROC-2012-029.-
2012
Original Communication (restricted to ATLAS) - Full text