2.
Request for All - A Generalized Request Framework for PhEDEx
/ Huang, C-H (Fermilab) ; Wildish, T (Princeton U.) ; Ratnikova, N (Fermilab) ; Sanchez-Hernandez, A (CINVESTAV, IPN) ; Zhang, X (Beijing, Inst. High Energy Phys.) ; Magini, N (CERN)
PhEDEx has been serving the CMS community as its data broker since 2004. Every PhEDEx operation is initiated by a request, e.g. [...]
FERMILAB-CONF-14-497-CD.-
2014 - 6 p.
- Published in : J. Phys.: Conf. Ser. 513 (2014) 032043
In : 20th International Conference on Computing in High Energy and Nuclear Physics 2013, Amsterdam, Netherlands, 14 - 18 Oct 2013, pp.032043
3.
Integrating Network-Awareness and Network-Management into PhEDEx
/ Lapadatescu, Vlad (Caltech) ; Wildish, Tony (Princeton U.) ; Ball, Bob (Michigan U.) ; Barczyk, Artur (Caltech) ; Batista, Jorge (Michigan U.) ; De, Kaushik (Texas U., Arlington) ; Mckee, Shawn (Michigan U.) ; Melo, Andrew (Vanderbilt U.) ; Newman, Harvey (Caltech) ; Petrosyan, Artem (Texas U., Arlington) et al.
SISSA, 2016
- Published in : PoS ISGC2015 (2016) 018
In : International Symposium on Grids and Clouds 2015, Taipei, Taiwan, 15-20 Mar 2015, pp.018
4.
Exploring Patterns and Correlations in CMS Computing Operations Data with Big Data Analytics Techniques
/ Bonacorsi, Daniele (Bologna U. ; INFN, Bologna) ; Kuznetsov, Valentin (Cornell U.) ; Wildish, Tony (Princeton U.) ; Giommi, Luca (Bologna U.)
SISSA, 2015
- Published in : PoS ISGC2015 (2015) 008
In : International Symposium on Grids and Clouds 2015, Taipei, Taiwan, 15-20 Mar 2015, pp.008
5.
AsyncStageOut: Distributed User Data Management for CMS Analysis
/ Riahi, H (CERN) ; Wildish, T (Princeton U.) ; Ciangottini, D (INFN, Perugia ; Perugia U.) ; Hernández, J M (Madrid, CIEMAT) ; Andreeva, J (CERN) ; Balcas, J (Vilnius U.) ; Karavakis, E (CERN) ; Mascheroni, M (INFN, Milan Bicocca) ; Tanasijczuk, A J (UC, San Diego) ; Vaandering, E W (Fermilab)
AsyncStageOut (ASO) is a new component of the distributed data analysis system of CMS, CRAB, designed for managing users' data. It addresses a major weakness of the previous model, namely that the transfer of output data to mass storage was part of the job execution, resulting in inefficient use of job slots and an unacceptable failure rate at the end of the jobs. [...]
FERMILAB-CONF-15-605-CD.-
2015 - 9 p.
- Published in : J. Phys.: Conf. Ser. 664 (2015) 062052
In : 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.062052
6.
The production deployment of IPv6 on WLCG
/ Bernier, J (CC, Villeurbanne) ; Campana, S (CERN) ; Chadwick, K (Fermilab) ; Chudoba, J (Prague, Inst. Phys.) ; Dewhurst, A (Rutherford) ; Eliáš, M (Prague, Inst. Phys.) ; Fayer, S (Imperial Coll., London) ; Finnern, T (DESY) ; Grigoras, C (CERN) ; Hartmann, T (KIT, Karlsruhe, IKP) et al.
The world is rapidly running out of IPv4 addresses, the number of IPv6 end systems connected to the internet is increasing, and WLCG and the LHC experiments may soon have access to worker nodes and/or virtual machines (VMs) possessing only a routable IPv6 address. The HEPiX IPv6 Working Group has been investigating, testing and planning for dual-stack services on WLCG for several years. [...]
2015 - 8 p.
- Published in : J. Phys.: Conf. Ser. 664 (2015) 052018
In : 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.052018
7.
Integrating network and transfer metrics to optimize transfer efficiency and experiment workflows
/ McKee, S (Michigan U., MCTP) ; Babik, M (CERN) ; Campana, S (CERN) ; Girolamo, A Di (CERN) ; Wildish, T (CERN) ; Closier, J (CERN) ; Roiser, S (CERN) ; Grigoras, C (CERN) ; Vukotic, I (CERN) ; Salichos, M (CERN) et al.
The Worldwide LHC Computing Grid relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion, traffic routing, etc. The WLCG Network and Transfer Metrics project aims to integrate and combine all network-related monitoring data collected by the WLCG infrastructure. [...]
2015 - 8 p.
- Published in : J. Phys.: Conf. Ser. 664 (2015) 052003
In : 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.052003
8.
Comprehensive Monitoring for Heterogeneous Geographically Distributed Storage
/ Ratnikova, N (Fermilab) ; Karavakis, E (CERN) ; Lammel, S (Fermilab) ; Wildish, T (Princeton U.)
Storage capacity at CMS Tier-1 and Tier-2 sites reached over 100 Petabytes in 2014 and will be substantially increased during Run 2 data taking. The allocation of storage for individual users' analysis data, which is not accounted as centrally managed storage space, will be increased to up to 40%. [...]
FERMILAB-PUB-15-596-CD.-
2015 - 6 p.
- Published in : J. Phys.: Conf. Ser. 664 (2015) 042055
In : 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.042055
9.
Monitoring data transfer latency in CMS computing operations
/ Bonacorsi, D (Bologna U.) ; Diotalevi, T (Bologna U.) ; Magini, N (Fermilab) ; Sartirana, A (Ecole Polytechnique) ; Taze, M (Bordeaux U.) ; Wildish, T (Princeton U.)
During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. [...]
2015 - 9 p.
- Published in : J. Phys.: Conf. Ser. 664 (2015) 032033
In : 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.032033
10.
Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond
/ Bonacorsi, D (Bologna U.) ; Boccali, T (INFN, Pisa) ; Giordano, D (CERN) ; Girone, M (CERN) ; Neri, M (Bologna U.) ; Magini, N (CERN) ; Kuznetsov, V (Cornell U.) ; Wildish, T (Princeton U.)
During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collision data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among the various Tier levels, and transformed or slimmed in format and content. [...]
2015 - 9 p.
- Published in : J. Phys.: Conf. Ser. 664 (2015) 032003
In : 21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa, Japan, 13 - 17 Apr 2015, pp.032003