Author(s)
Bauer, Gerry (MIT) ; Behrens, Ulf (DESY) ; Bowen, Matthew (CERN) ; Branson, James G (UC, San Diego) ; Bukowiec, Sebastian (CERN) ; Cittolin, Sergio (UC, San Diego) ; Coarasa, J A (CERN) ; Deldicque, Christian (CERN) ; Dobson, Marc (CERN) ; Dupont, Aymeric (CERN) ; Erhan, Samim (UCLA) ; Flossdorf, Alexander (DESY) ; Gigi, Dominique (CERN) ; Glege, Frank (CERN) ; Gomez-Reino, R (CERN) ; Hartl, Christian (CERN) ; Hegeman, Jeroen (Princeton U.) ; Holzner, André (UC, San Diego) ; Hwong, Y L (CERN) ; Masetti, Lorenzo (CERN) ; Meijers, Frans (CERN) ; Meschi, Emilio (CERN) ; Mommsen, R K (Fermilab) ; O'Dell, Vivian (Fermilab) ; Orsini, Luciano (CERN) ; Paus, Christoph (MIT) ; Petrucci, Andrea (CERN) ; Pieri, Marco (UC, San Diego) ; Polese, Giovanni (CERN) ; Racz, Attila (CERN) ; Raginel, Olivier (MIT) ; Sakulin, Hannes (CERN) ; Sani, Matteo (UC, San Diego) ; Schwick, Christoph (CERN) ; Shpakov, Dennis (Fermilab) ; Simon, M (CERN) ; Spataru, A C (CERN) ; Sumorok, Konstanty (MIT)
Abstract
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first-level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ) and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of the order of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system during the 2010/2011 collider run is reported. The current architecture of the CMS HLT and its integration with the CMS reconstruction framework and the CMS DAQ are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.