Abstract
Today's large-scale science projects encounter challenges in processing the large data flow from their experiments. The ATLAS detector records proton-proton collisions provided by the Large Hadron Collider (LHC) at CERN every 50 ns, resulting in a total data flow of 10 Pb/s. These data must be reduced to the science data product for further analysis, so very fast decisions need to be executed to reduce these large amounts of data at high rates. Supporting this scale of data movement requires the development and improvement of high-throughput electronics. By 2022, the upgraded LHC will provide collisions at rates at least 10 times higher than those of today due to its increased luminosity. This will require a complete redesign of the read-out electronics and Processing Units (PU) in the Tile Calorimeter (TileCal) of the ATLAS experiment. A general-purpose, high-throughput PU has been developed for the TileCal at CERN, using several ARM processors in a cluster configuration. The PU is capable of handling a large data throughput and applying advanced operations at high rates. This system has been proposed for the fixed-target experiment at the NICA complex to handle first-level processing and event building. The aim of this work is to examine the architecture of the data acquisition system (DAQ) of the fixed-target experiment at the NICA complex at JINR by compiling the data-flow requirements of all the subcomponents. Furthermore, the characteristics of the VME DAQ modules for control, triggering, and data acquisition will be described in order to define a DAQ system with maximum readout efficiency, no dead time, and data selection and compression.