Preprint
Report number arXiv:2206.07527 ; FERMILAB-CONF-22-471-SCD
Title QONNX: Representing Arbitrary-Precision Quantized Neural Networks
Author(s) Pappalardo, Alessandro (Unlisted, IE) ; Umuroglu, Yaman (Unlisted, IE) ; Blott, Michaela (Unlisted, IE) ; Mitrevski, Jovan (Fermilab) ; Hawks, Ben (Fermilab) ; Tran, Nhan (Fermilab) ; Loncar, Vladimir (MIT, LNS) ; Summers, Sioni (CERN) ; Borras, Hendrik (U. Heidelberg) ; Muhizi, Jules (Harvard U. (main)) ; Trahms, Matthew (Washington U., Seattle) ; Hsu, Shih-Chieh (Washington U., Seattle) ; Hauck, Scott (Washington U., Seattle) ; Duarte, Javier (UC, San Diego (main))
Imprint 2022-06-15
Number of pages 9
Note 9 pages, 5 figures, Contribution to 4th Workshop on Accelerated Machine Learning (AccML) at HiPEAC 2022 Conference
Subject category stat.ML ; Mathematical Physics and Mathematics ; cs.PL ; Computing and Computers ; cs.AR ; Computing and Computers ; cs.LG ; Computing and Computers
Abstract We present extensions to the Open Neural Network Exchange (ONNX) intermediate representation format to represent arbitrary-precision quantized neural networks. We first introduce support for low precision quantization in existing ONNX-based quantization formats by leveraging integer clipping, resulting in two new backward-compatible variants: the quantized operator format with clipping and quantize-clip-dequantize (QCDQ) format. We then introduce a novel higher-level ONNX format called quantized ONNX (QONNX) that introduces three new operators -- Quant, BipolarQuant, and Trunc -- in order to represent uniform quantization. By keeping the QONNX IR high-level and flexible, we enable targeting a wider variety of platforms. We also present utilities for working with QONNX, as well as examples of its usage in the FINN and hls4ml toolchains. Finally, we introduce the QONNX model zoo to share low-precision quantized neural networks.
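The abstract's quantize-clip-dequantize (QCDQ) variant can be illustrated with a minimal sketch: standard ONNX quantize/dequantize operators work on 8-bit integers, but inserting an explicit clip to a narrower integer range emulates lower precisions such as int4. The function below is illustrative only and assumes a simple scale/zero-point scheme; it is not the QONNX or ONNX API.

```python
import numpy as np

def qcdq(x, scale, zero_point, bitwidth, signed=True):
    """Illustrative quantize-clip-dequantize: uniform quantization where
    clipping to the integer range of `bitwidth` bits emulates precisions
    below the 8 bits of standard ONNX QuantizeLinear/DequantizeLinear."""
    # Quantize: map real values onto the integer grid
    q = np.round(x / scale) + zero_point
    # Clip: restrict to the target integer range, e.g. [-8, 7] for int4
    if signed:
        lo, hi = -(2 ** (bitwidth - 1)), 2 ** (bitwidth - 1) - 1
    else:
        lo, hi = 0, 2 ** bitwidth - 1
    q = np.clip(q, lo, hi)
    # Dequantize: map clipped integers back to real values
    return (q - zero_point) * scale

x = np.array([-1.0, -0.3, 0.05, 0.9])
print(qcdq(x, scale=0.1, zero_point=0, bitwidth=4))
```

Because the clip stays within the int8 range of the existing operators, this variant remains backward compatible with ONNX tooling, which is the point made in the abstract.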
Other source Inspire
Copyright/License preprint: (License: CC BY 4.0)



 
 Record created 2022-06-24, last modified 2024-06-27


Full text:
2206.07527 - Download PDF
08cf2a26cd4a0b032ccea5f048982831 - Download PDF
External link:
Fermilab Library Server