CERN Document Server: 9 records found. Search took 0.57 seconds.
1.
A Roadmap for HEP Software and Computing R&D for the 2020s / HEP Software Foundation Collaboration
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. [...]
arXiv:1712.06982; HSF-CWP-2017-01; FERMILAB-PUB-17-607-CD. 2019-03-20. 49 p. Published in: Comput. Softw. Big Sci. 3 (2019) 7.
2.
Multicore job scheduling in the Worldwide LHC Computing Grid / Forti, A (Manchester U.) ; Yzquierdo, A P (PIC, Bellaterra ; Madrid, CIEMAT) ; Hartmann, T (KIT, Karlsruhe, SCC) ; Alef, M (KIT, Karlsruhe, SCC) ; Lahiff, A (Rutherford) ; Templon, J (NIKHEF, Amsterdam) ; Pra, S Dal (INFN, CNAF) ; Gila, M (Zurich, ETH-CSCS/SCSC) ; Skipsey, S (Glasgow U.) ; Acosta-Silva, C (PIC, Bellaterra ; Barcelona, IFAE) et al.
After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. [...]
2015. 8 p. Published in: J. Phys.: Conf. Ser. 664 (2015) 062016.
In: 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.062016
3.
Extending DIRAC File Management with Erasure-Coding for efficient storage / Skipsey, Samuel Cadellin (Glasgow U.) ; Todev, Paulin (Glasgow U.) ; Britton, David (Glasgow U.) ; Crooks, David (Glasgow U.) ; Roy, Gareth (Glasgow U.)
The state of the art in Grid-style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. [...] (A sketch of the space trade-off follows this record.)
arXiv:1510.09117. 2015.
In: 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.042051
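The space argument in this abstract can be made concrete with a little arithmetic. Below is a minimal sketch, not the paper's DIRAC implementation: it only compares the storage overhead of full replication against a generic (k, m) erasure code, in which a file is split into k data chunks plus m parity chunks and any k of the k + m chunks suffice to rebuild it. The (4, 2) parameters are illustrative.

```python
# Space cost of full replication versus a (k, m) erasure code.
# Generic arithmetic only; the paper's DIRAC integration is not modelled.

def replication_overhead(replicas: int) -> float:
    """Extra space consumed beyond one copy, as a fraction of file size."""
    return float(replicas - 1)

def erasure_overhead(k: int, m: int) -> float:
    """A (k, m) code stores k data chunks plus m parity chunks;
    any k of the k + m chunks suffice to rebuild the file."""
    return m / k

if __name__ == "__main__":
    # Both layouts below survive the loss of any two storage endpoints.
    print(f"3 full replicas:     {replication_overhead(3):.0%} overhead")
    print(f"(4, 2) erasure code: {erasure_overhead(4, 2):.0%} overhead")
```

Both layouts tolerate the loss of any two endpoints, yet the erasure-coded one spends 50% extra space rather than 200%.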
4.
Analysis and improvement of data-set level file distribution in Disk Pool Manager / Skipsey, Samuel Cadellin ; Purdie, Stuart ; Britton, David ; Mitchell, Mark ; Bhimji, Wahid ; Smith, David (CERN)
Of the three most widely used implementations of the WLCG Storage Element specification, Disk Pool Manager[1, 2] (DPM) has the simplest implementation of file placement balancing (StoRM does not attempt this, leaving it to the underlying filesystem, which can be very sophisticated in itself). DPM uses a round-robin algorithm (with optional filesystem weighting) for placing files across filesystems and servers. [...] (A sketch of such weighted placement follows this record.)
2014. 5 p. Published in: J. Phys.: Conf. Ser. 513 (2014) 042042.
In: 20th International Conference on Computing in High Energy and Nuclear Physics 2013, Amsterdam, Netherlands, 14 - 18 Oct 2013, pp.042042
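A minimal sketch of the weighted round-robin placement described in the abstract is given below. This is not DPM's actual implementation; the pool names and weights are hypothetical.

```python
# Round-robin file placement with optional per-filesystem weighting,
# in the spirit of the DPM behaviour described above. Illustrative only.
import itertools

def weighted_round_robin(filesystems: dict[str, int]):
    """Yield filesystem names in a repeating cycle, visiting each
    filesystem `weight` times per round."""
    expanded = [fs for fs, weight in filesystems.items() for _ in range(weight)]
    return itertools.cycle(expanded)

if __name__ == "__main__":
    # Hypothetical pools: the first filesystem receives twice as many files.
    placer = weighted_round_robin({"server1:/pool1": 2, "server2:/pool1": 1})
    for filename in ["a.root", "b.root", "c.root", "d.root", "e.root", "f.root"]:
        print(f"{filename} -> {next(placer)}")
```

With equal weights this degenerates to plain round-robin; the weighting merely biases the cycle towards larger or emptier filesystems.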
5.
Analysing I/O bottlenecks in LHC data analysis on grid storage resources / Bhimji, W (Edinburgh U.) ; Clark, P (Edinburgh U.) ; Doidge, M (Lancaster U.) ; Hellmich, M P (CERN) ; Skipsey, S (Glasgow U.) ; Vukotic, I (Chicago U.)
We describe recent I/O testing frameworks that we have developed and applied within the UK GridPP Collaboration, the ATLAS experiment and the DPM team, for a variety of distinct purposes. These include benchmarking vendor supplied storage products, discovering scaling limits of SRM solutions, tuning of storage systems for experiment data analysis, evaluating file access protocols, and exploring I/O read patterns of experiment software and their underlying event data models. [...]
2012. 9 p.
In: Computing in High Energy and Nuclear Physics 2012, New York, NY, USA, 21 - 25 May 2012, pp.042010
6.
Testing performance of Standards-based protocols in DPM / Skipsey, Samuel (Glasgow U.) ; Bhimji, Wahid (Edinburgh U.) ; Rocha, Ricardo (CERN)
In the interests of promoting the increased use of non-proprietary protocols in grid storage systems, we perform tests on the performance of WebDAV and pNFS transport with the DPM storage solution. We find that the standards-based protocols behave similarly to the proprietary protocols currently in use, despite encountering some issues with the state of the implementation itself. [...]
2012. 6 p.
In: Computing in High Energy and Nuclear Physics 2012, New York, NY, USA, 21 - 25 May 2012, pp.052064
7.
Multi-core job submission and grid resource scheduling for ATLAS AthenaMP / Crooks, D ; Calafiura, P ; Harrington, R ; Jha, M ; Maeno, T ; Purdie, S ; Severini, H ; Skipsey, S ; Tsulaia, V ; Walker, R et al.
AthenaMP is the multi-core implementation of the ATLAS software framework and allows the efficient sharing of memory pages between multiple threads of execution. This has now been validated for production and delivers a significant reduction in overall memory footprint with negligible CPU overhead. [...]
ATL-SOFT-SLIDE-2012-242. Geneva: CERN, 2012.
In: Computing in High Energy and Nuclear Physics 2012, New York, NY, USA, 21 - 25 May 2012
8.
Multi-core job submission and grid resource scheduling for ATLAS AthenaMP / Crooks, D (Edinburgh U.) ; Calafiura, P (LBNL, Berkeley) ; Harrington, R (Edinburgh U.) ; Purdie, S (Edinburgh U.) ; Severini, H (Oklahoma U.) ; Skipsey, S (Glasgow U.) ; Tsulaia, V (LBNL, Berkeley) ; Washbrook, A (Edinburgh U.)
AthenaMP is the multi-core implementation of the ATLAS software framework and allows the efficient sharing of memory pages between multiple threads of execution. [...] (A sketch of the underlying page-sharing mechanism follows this record.)
ATL-SOFT-PROC-2012-029. 2012.
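The footprint reduction reported in the two records above rests on a general operating-system mechanism: after fork(), parent and worker processes share physical memory pages until one of them writes to a page (copy-on-write). The sketch below shows that mechanism in isolation; it is Unix-only, it is not AthenaMP code, and the geometry list merely stands in for large read-only detector data.

```python
# Fork-based copy-on-write sharing of a large read-only structure.
# A generic illustration of the mechanism, not AthenaMP itself.
import os

# Allocated once in the parent; after fork(), children share these
# pages and no copy is made as long as the pages are only read.
geometry = [float(i) for i in range(1_000_000)]

def worker(worker_id: int) -> None:
    # Reading the shared structure does not duplicate its pages.
    checksum = sum(geometry[::100_000])
    print(f"worker {worker_id} (pid {os.getpid()}): checksum {checksum}")

if __name__ == "__main__":
    children = []
    for i in range(4):
        pid = os.fork()
        if pid == 0:            # child process: do the work and exit
            worker(i)
            os._exit(0)
        children.append(pid)
    for pid in children:        # parent: wait for all workers
        os.waitpid(pid, 0)
```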
9.
Establishing Applicability of SSDs to LHC Tier-2 Hardware Configuration / Skipsey, Samuel C (Glasgow U.) ; Bhimji, Wahid (Edinburgh U.) ; Kenyon, Mike (CERN)
Solid State Disk technologies are increasingly replacing high-speed hard disks as the storage technology in high-random-I/O environments. There are several potentially I/O-bound services within the typical LHC Tier-2. In the back-end, with the trend towards many-core architectures continuing, worker nodes running many single-threaded jobs and storage nodes delivering many simultaneous files can both exhibit I/O-limited efficiency. [...]
arXiv:1102.3114; GLAS-PPE-2010-??. 2011. 6 p.
In: Conference on Computing in High Energy and Nuclear Physics 2010, Taipei, Taiwan, 18 - 22 Oct 2010, pp.052019

See also: similar author names
3 Skipsey, Sam
1 Skipsey, Samuel
1 Skipsey, Samuel C
3 Skipsey, Samuel Cadellin