Hailo Model Zoo v2.14.0
Release v2.14.0
1 January 2025
Table of Contents
2 Changelog 5
3 Getting Started 14
3.1 System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Install Instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4 Usage 17
4.1 Flow Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Parsing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.3 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.4 Profiling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.5 Compilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.6 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.7 Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.8 Info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.9 Compile multiple networks together . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.10 TFRecord to NPY conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5 Model Optimization 22
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.2 Optimization Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.3 Citations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6 Hailo Models 25
6.1 License Plate Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.2 License Plate Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
6.3 Person-Face Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
6.4 Person-ReID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.5 Vehicle Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
7 Datasets 45
7.1 ImageNet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.2 COCO2017 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.3 Cityscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.4 WIDERFACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.5 VisDrone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
7.6 Pascal VOC augmented dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
7.7 D2S augmented dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
7.8 NYU Depth V2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
7.9 AFLW2k3d and 300W-LP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
7.10 Hand Landmark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
7.11 Market1501 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
7.12 PETA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
7.13 CelebA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Page i Release v2.14.0 Confidential and Proprietary | Copyright © 2025 – Hailo Technologies Ltd.
Model Zoo User Guide
7.14 LFW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.15 BSD100 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.16 CLIP_CIFAR100 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.17 LOL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.18 BSD68 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7.19 CBSD68 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7.20 KITTI_STEREO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
7.21 KINETICS400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7.22 NUSCENES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
8 Benchmarks 61
8.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
8.2 Using Datasets from the Hailo Model Zoo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Copyright
No part of this document may be reproduced or transmitted in any form without the expressed, written permission
of Hailo. Nothing contained in this document should be construed as granting any license or right to use proprietary
information for that matter, without the written permission of Hailo.
General Notice
Hailo, to the fullest extent permitted by law, provides this document “as-is” and disclaims all warranties, either express or implied, statutory or otherwise, including but not limited to the implied warranties of merchantability, non-infringement of third parties’ rights, and fitness for particular purpose.
Although Hailo used reasonable efforts to ensure the accuracy of the content of this document, it is possible that
this document may contain technical inaccuracies or other errors. Hailo assumes no liability for any error in this
document, and for damages, whether direct, indirect, incidental, consequential or otherwise, that may result from
such error, including, but not limited to loss of data or profits.
The content in this document is subject to change without prior notice and Hailo reserves the right to make changes
to content of this document without providing a notification to its users.
• Python: 3.8 | 3.9 | 3.10
• TensorFlow: 2.12.0
• CUDA: 11.8
• Hailo Dataflow Compiler: 3.28.0
• HailoRT (optional): 4.18.0
• License: MIT
The Hailo Model Zoo provides pre-trained models for high-performance deep learning applications. Using the Hailo Model Zoo, you can measure the full-precision accuracy of each model, the quantized accuracy using the Hailo Emulator, and the accuracy on the Hailo-8 device. Finally, you can generate the Hailo Executable Format (HEF) binary file to speed up development and build high-quality applications accelerated with Hailo-8. The Hailo Model Zoo also provides retraining instructions for training the models on custom datasets, as well as models that were trained for specific use-cases on internal datasets.
Models: Hailo provides different pre-trained models in ONNX / TF formats and pre-compiled HEF (Hailo Executable Format) binary files to execute on the Hailo devices.
• HAILO MODELS, which were trained in-house for specific use-cases on internal datasets. Each Hailo Model is accompanied by retraining instructions.
1.1. Retraining
Hailo also provides RETRAINING INSTRUCTIONS to train a network from the Hailo Model Zoo with a custom dataset.
1.2. Benchmarks
• Install the Hailo Dataflow Compiler and enter the virtualenv. If you are not a Hailo customer, please contact hailo.ai
• Install HailoRT (optional). Required only if you want to run on Hailo-8. If you are not a Hailo customer, please contact hailo.ai
• Run the Hailo Model Zoo. For example, print the information of the MobileNet-v1 model:
For full functionality please see the INSTALLATION GUIDE page (full install instructions and usage examples). The Hailo Model Zoo uses the Hailo Dataflow Compiler for parsing, model optimization, emulation and compilation of the deep learning models. Full functionality includes:
• Parse: model translation of the input model into Hailo’s internal representation.
• Profile: generate a profiler report of the model. The report contains information about your model and its expected performance on the Hailo hardware.
• Optimize: optimize the deep learning model for inference and generate a numeric translation of the input
model into a compressed integer representation.
For further information please see our OPTIMIZATION page.
• Compile: run the Hailo compiler to generate the Hailo Executable Format file (HEF) which can be executed on
the Hailo hardware.
• Evaluate: infer the model using the Hailo Emulator or the Hailo hardware and produce the model accuracy.
For further information about the Hailo Dataflow Compiler please contact hailo.ai.
1.4. License
The Hailo Model Zoo is released under the MIT license. Please see the LICENSE file for more information.
[Flow diagram: a CKPT / ONNX model is parsed into a HAR, optimized with calibration data into a quantized HAR, and compiled into a HEF; Profile generates a Profiler Report, and Eval runs the model on validation data to produce the evaluation results.]
1.5. Support
If you need support, please post your question on our Hailo community Forum for assistance.
1.6. Changelog
2. Changelog
v2.14
– Currently supports PETRv2, a bird's-eye-view network for 3D object detection; see petrv2_repvggB0.yaml for configurations.
– The user needs existing HARs/HEFs for both petrv2_repvggB0_backbone_pp_800x320 and petrv2_repvggB0_transformer_pp_800x320.
– Full-precision evaluation: hailomz cascade eval petrv2
– Hardware evaluation: hailomz cascade eval petrv2 --override target=hardware
• New task:
• New Models:
– CLIP ViT-Large-14-Laion2B - Contrastive Language-Image Pre-training model [H15H and H10H only]
– DaViT - tiny - Dual Attention Vision Transformer classification model [H15H and H10H only]
– R3D_18 - r3d_18 - Video Classification network for Human Action Recognition [H8 only]
• Bug fixes
v2.13
• Using jit_compile, which dramatically reduces the emulation inference time of the Hailo Model Zoo models.
• New tasks:
• New Models:
– --ap-per-class for measuring average-precision per-class. Relevant for object detection and instance segmentation tasks.
• Bug fixes
v2.12
• New Models:
– Original ViT models - tiny, small, base - Transformer based classification models
• Bug fixes
v2.11
• New Models:
– nanodet
These flags simplify the process of compiling models generated from our retrain dockers.
• Bug fixes
v2.10
– yolov8
• Profiler change:
– Removal of --mode flag from hailomz profile command, which generates a report according to
provided HAR state.
• CLI change:
• New Models:
• Bug fixes
v2.9
• A new CLI-compatible API that allows users to incorporate format conversion and reshaping capabilities into
the input:
• New Models:
– scdepthv3 - depth-estimation
v2.8
• The Hailo Model Zoo now supports the following vision transformers models:
– vit_tiny / vit_small / vit_base - encoder based transformer with batchnorm for classification
– yolov5
– yolox
– ssd
– efficientdet
– yolov7
• New Models:
– yolov8
– yolov8_seg
• Bug fixes
v2.7
• Examples for using HailoRT-pp - support for seamless integration of models and their corresponding postprocessing
– yolov5m_hpp
• Configuration YAMLs and model-scripts for networks with YUY2 input format
• Bug fixes
v2.6.1
• Bug fixes
v2.6
• ViT (Vision Transformer) - new classification network with transformers-encoder based architecture
– yolov5n_seg
– yolov5s_seg
– yolov5m_seg
– yolov5l_seg
– yolov7e6
– yolov5n6_6.1
– yolov5s6_6.1
– yolov5m6_6.1
v2.5
– yolact_regnetx_800mf
– yolact_regnetx_1.6gf
• Bug fixes
v2.4
• Required FPS was moved from the models' YAML files into the model scripts
• New models:
• New tasks:
1. Super-Resolution
2. Face Recognition
1. arcface_r50
2. arcface_mobilefacenet
v2.3
• New models:
– yolov6n
– yolov7 / yolov7-tiny
– nanodet_repvgg_a1_640
• New tasks:
• Bug fixes
v2.2
• CLI change:
– The Hailo Model Zoo CLI now works with an entry point - hailomz
• New models:
– yolov5xs_wo_spp_nms - a model which contains bbox decoding and confidence thresholding on Hailo-8
– yolov5m_6.1 - yolov5m network from the latest tag of the repo (6.1), including SiLU activation
• New tasks:
NOTE: Ubuntu 18.04 will be deprecated in a future Hailo Model Zoo version
NOTE: Python 3.6 will be deprecated in a future Hailo Model Zoo version
v2.1
• New models:
v2.0
• Updated to use Dataflow Compiler v3.16 (developer-zone) with TF version 2.5, which requires CUDA 11.2
• Retraining Dockers - each retraining docker has a corresponding README file near it. New retraining dockers:
– SSD
– YOLOX
– FCN
• New models:
– yolov5l
• Introducing Hailo Models, in-house pretrained networks with compatible Dockerfile for retraining
v1.5
• Retraining Dockers
– YOLOv3
– NanoDet
– CenterPose
– Yolact
• New models:
– unet_mobilenet_v2
– yolov5m_wo_spp_60p
– centerpose_repvgg_a0
• Improvements:
• New Tasks:
v1.4
• Introducing Hailo Models - in-house pretrained networks with a compatible Dockerfile for easy retraining:
– tddfa_mobilenet_v1
• New features:
• Retraining Guide:
v1.3
– fast_depth
• New models:
– yolox_l_leaky
• Improvements:
– Model Optimization parameters can be updated using the networks’ model script files (*.alls)
• Training Guide: new training guide for yolov5 with compatible Dockerfile
v1.2
• New features:
– yolact_mobilenet_v1 (coco)
– yolact_regnetx_800mf_20classes (coco)
– yolact_regnetx_600mf_31classes (d2s)
• New models:
– nanodet_repvgg
– centernet_resnet_v1_50_postprocess
– yolox_s_wide_leaky
– deeplab_v3_mobilenet_v2_dilation
– centerpose_repvgg_a0
• Improvements:
– tiny_yolov4
– yolov4
• Bug fixes
v1.1
– centerpose_regnetx_200mf_fpn
– centerpose_regnetx_800mf
– centerpose_regnetx_1.6gf_fpn
– lightfaceslim
– retinaface_mobilenet_v1
• New models:
– hardnet39ds
– hardnet68
– yolox_tiny_leaky
– yolox_s_leaky
– deeplab_v3_mobilenet_v2
• Use your own network manual for YOLOv3, YOLOv4_leaky and YOLOv5.
v1.0
• Initial release
3. Getting Started
This document provides install instructions and basic usage examples of the Hailo Model Zoo.
• HailoRT 4.20.0 (Obtain from hailo.ai) - required only for inference on Hailo-8.
• The Hailo Model Zoo supports Hailo-8 / Hailo-10H connected via PCIe only.
• Nvidia’s Pascal/Turing/Ampere GPU architecture (such as Titan X Pascal, GTX 1080 Ti, RTX 2080 Ti, or RTX A4000)
• CUDA 11.8
• CUDNN 8.9
The Hailo Model Zoo requires the corresponding Dataflow Compiler version and, optionally, the matching HailoRT version. Therefore it is recommended to use the Hailo Software Suite, which includes all of Hailo's SW components and ensures compatibility across product versions.
The Hailo Software Suite is composed of the Dataflow Compiler, HailoRT, TAPPAS and the Model Zoo (see diagram
below).
1. Install the Hailo Dataflow compiler and enter the virtualenv (visit hailo.ai for further instructions).
2. Install the HailoRT - required only for inference on Hailo-8 / Hailo-10H (visit hailo.ai for further instructions).
6. Verify Hailo-8 / Hailo-10 is connected via PCIe (required only to run on Hailo-8 / Hailo-10; full-precision / emulation runs on GPU).
Note: hailortcli is the HailoRT command-line tool for interacting with Hailo devices.
[Diagram: the Hailo Software Suite. The Hailo Dataflow Compiler (SDK) contains the Model Parser, Model Optimizer, Resource Allocator, Profiler, Emulator and Compiler. HailoRT provides a C/C++ API and library, a Python API (pyHailoRT), CLI tools and an integration tool, on top of the Hailo Driver with an OS IP stack and Ethernet / PCIe / Integrated interfaces. The NN Core is part of the Hailo Vision Processor or AI Accelerator; some components are in preview.]
Expected output:
If you want to upgrade to a specific Hailo Model Zoo version within a suite, or on top of a previous installation not in the suite:
4. Usage
The following scheme shows a high-level view of the Model Zoo evaluation process and the different stages in between.
[Flow diagram: a CKPT / ONNX model is parsed into a HAR, optimized with calibration data into a quantized HAR, and compiled into a HEF; Profile generates a Profiler Report, and Eval runs the model on validation data to produce the evaluation results.]
By default, each stage executes all of its prerequisite stages according to the above diagram. The post-parsing stages can also start from the product of previous stages (i.e., the Hailo Archive (HAR) file), as explained below. The operations are configured through a YAML file that exists for each model in the cfg folder. For a description of the YAML structure please see YAML.
NOTE: Hailo Model Zoo provides the following functionality for Model Zoo models only. If you wish to use your custom
model, use the Dataflow Compiler directly.
4.2. Parsing
The pre-trained models are stored on AWS S3 and will be downloaded automatically into your data directory when running the Model Zoo. To parse models into Hailo's internal representation and generate the Hailo Archive (HAR) file:
• The default compilation target is Hailo-8. To compile for a different architecture (Hailo-15H, for example), use --hw-arch hailo15h as a CLI argument:
hailomz parse <model_name> --hw-arch hailo15h
4.3. Optimization
To optimize models, convert them from full precision into integer representation and generate a quantized Hailo
Archive (HAR) file:
You can use your own images by providing a directory path to the optimization process; the supported formats are .jpg, .jpeg and .png:
• This step requires data for calibration. For additional information please see OPTIMIZATION.
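A minimal sketch (not part of the Model Zoo API) of how such a calibration folder can be sanity-checked before handing it to the optimization step, assuming only the documented .jpg / .jpeg / .png formats are accepted:

```python
from pathlib import Path

SUPPORTED_FORMATS = {".jpg", ".jpeg", ".png"}

def collect_calib_images(calib_dir):
    """Return the files in calib_dir whose extension is one of the supported
    image formats, in a deterministic (sorted) order."""
    return sorted(p for p in Path(calib_dir).iterdir()
                  if p.suffix.lower() in SUPPORTED_FORMATS)
```

Running this before optimization makes it easy to spot an empty or mislabeled calibration directory early.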
The flag will be ignored on models that do not support this feature. The default and performance model scripts are located in hailo_model_zoo/cfg/alls/.
To add input conversion to the model, use the input conversion flag:
• Do not use the flag if an input conversion already exists in the alls or in the YAML.
• Do not use the flag if a resize already exists in the alls or in the YAML.
• Use this flag only if a post-process exists in the alls or in the YAML.
4.4. Profiling
• When profiling a Quantized HAR file (the result of the optimization process), the report contains information
about your model and accuracy.
• When profiling a Compiled HAR file (the result of the compilation process), the report contains the expected
performance on the Hailo hardware (as well as the accuracy information).
4.5. Compilation
To run the Hailo compiler and generate the Hailo Executable Format (HEF) file:
By default the compilation target is Hailo-8. To compile for a different architecture use --hw-arch command line
argument:
• When working with a generated HAR, the previously chosen architecture will be used.
The flag will be ignored on models that do not support this feature. The default and performance model scripts are located in hailo_model_zoo/cfg/alls/.
To add input conversion to the model, use the input conversion flag:
Do not use the flag if an input conversion already exists in the alls or in the YAML.
Do not use the flag if a resize already exists in the alls or in the YAML.
4.6. Evaluation
To evaluate models starting from a previously generated Hailo Archive (HAR) file:
To evaluate models with the Hailo emulator (after quantization to integer representation - fast_numeric):
If multiple devices are available, it's possible to select a specific one. Make sure to run on a device compatible with the compiled model.
To limit the number of images for evaluation use the following flag:
To evaluate a model with an additional input conversion, use the input conversion flag:
Do not use the flag if an input conversion already exists in the alls or in the YAML.
Do not use the flag if a resize already exists in the alls or in the YAML.
To explore other options (for example: changing the default batch-size) use:
4.7. Visualization
4.8. Info
You can easily print the information of any network that exists in the model zoo, to get a sense of its input/output shapes, parameters, operations, framework, etc.
Expected output:
license_url: https://github.com/tensorflow/models/blob/v1.13.0/LICENSE
We can use multiple disjoint models in the same binary. This is useful for running several small models on the device.
In some situations you might want to convert the TFRecord file to a .npy file (for example, when explicitly using the Dataflow Compiler for quantization). In order to do so, run the command:
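The exact conversion command is Model Zoo specific, but the result of a TFRecord-to-NPY conversion is essentially a stacked array of samples saved with NumPy. A hedged, illustrative sketch of that output side only (the real tool also handles TFRecord decoding and preprocessing):

```python
import numpy as np

def save_dataset_as_npy(images, out_path):
    """Stack a list of equally-shaped HxWxC uint8 images into a single
    (N, H, W, C) array and save it as a .npy file."""
    batch = np.stack(images).astype(np.uint8)
    np.save(out_path, batch)
    return batch.shape
```

The resulting .npy file can then be loaded with np.load wherever a calibration array is expected.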
5. Model Optimization
5.1. Introduction
Model optimization is the stage of converting a full-precision model into an optimized model that can be compiled for the Hailo device. This stage includes numeric translation of the input model into a compressed integer representation, as well as optimizing the model architecture to best fit the Hailo hardware.
Compressing a deep learning model induces degradation of the model's accuracy.
For example, in the following table we compare the accuracy of full-precision ResNet V1 18 with the compressed 8-bit weights and activations representation on ImageNet-1K:
Precision Top-1
Full precision 68.85
8-bit (emulator) 68.54
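The small Top-1 gap above comes from rounding weights and activations onto an 8-bit grid. A deliberately simplified sketch of a symmetric 8-bit quantization round trip, to illustrate where the error comes from (the Dataflow Compiler's actual quantization scheme is more sophisticated):

```python
import numpy as np

def quantize_dequantize_8bit(x):
    """Round-trip a float tensor through symmetric per-tensor 8-bit
    quantization: scale to int8 range, round, then scale back."""
    scale = np.abs(x).max() / 127.0
    quantized = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return quantized.astype(np.float32) * scale
```

For a tensor with maximum magnitude m, the per-element round-trip error is bounded by half a quantization step, i.e. 0.5 * m / 127.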
The main goal of the model optimization step is to prepare the model for compilation with as little degradation as possible.
The model optimization has two main steps: full precision optimization and quantization optimization.
• Full precision optimization includes any changes to the model in the floating point precision domain, for example, Equalization (Meller2019), TSE (Vosco2021) and pruning.
• Quantization includes compressing the model from floating point to integer representation of the weights and activations (4/8/16 bits) and algorithms to improve the model's accuracy, such as IBC (Finkelstein2019) and QFT (Finkelstein2022).
Both steps may degrade the model accuracy, therefore, evaluation is needed to verify the model accuracy. This
workflow is depicted in the following diagram:
NOTE: Hailo Model Zoo provides the following functionality for Model Zoo models only. If you wish to use your custom
model, use the Dataflow Compiler directly.
1. The first step is full-precision validation. This step is important to make sure that parsing was successful and that we built the pre/post-processing and evaluation of the model correctly. In the Hailo Model Zoo, we can execute the following, which will infer a specific model in full precision to verify that the accuracy is correct (this will be our baseline for measuring degradation):
2. Next, we call the model optimization API to generate an optimized model. Note that it is recommended to run this step on a GPU machine with a dataset of at least 1024 images.
3. Lastly, we verify the accuracy of the optimized model. In case the results are not good enough, we should repeat the process with different configurations of the optimization/compression levels:
Once optimization is finished and the accuracy requirements are met, we can compile the optimized model. For example:
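The three steps above amount to a simple evaluate-optimize loop. A hypothetical sketch of that control flow (run_optimization, evaluate and the configuration objects are placeholders, not Model Zoo APIs):

```python
def optimization_loop(baseline_acc, run_optimization, evaluate, configs,
                      max_degradation=0.01):
    """Try optimization configurations until the accuracy drop versus the
    full-precision baseline is within budget, mirroring steps 1-3 above.

    `run_optimization` and `evaluate` are hypothetical stand-ins for the
    Model Zoo optimize/eval steps.
    """
    for cfg in configs:
        optimized_model = run_optimization(cfg)
        acc = evaluate(optimized_model)
        if baseline_acc - acc <= max_degradation:
            # Accuracy requirement met: this model can go on to compilation.
            return cfg, acc
    raise RuntimeError("no configuration met the accuracy requirement")
```

The degradation budget (here 1 Top-1 point) is a project-specific choice, not a Hailo default.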
[Flowchart: if the accuracy is not good enough, repeat the optimization with different configurations; otherwise, proceed to compilation.]
5.3. Citations
5.3.1. Vosco2021
@InProceedings{Vosco2021,
title = {Tiled Squeeze-and-Excite: Channel Attention With Local Spatial Context},
author = {Niv Vosco and Alon Shenkler and Mark Grobman},
booktitle = {ICCV},
year = {2021}
}
5.3.2. Finkelstein2019
@InProceedings{Finkelstein2019,
title = {Fighting Quantization Bias With Bias},
author = {Alexander Finkelstein and Uri Almog and Mark Grobman},
booktitle = {CVPR},
year = {2019}
}
5.3.3. Meller2019
@InProceedings{Meller2019,
title = {Same, Same But Different - Recovering Neural Network Quantization Error Through Weight Factorization},
author = {Eldad Meller and Alexander Finkelstein and Uri Almog and Mark Grobman},
booktitle = {ICML},
year = {2019}
}
6. Hailo Models
Here we give the full list of models trained in-house for specific use-cases. Each model is accompanied by its own README, retraining docker and retraining guide.
– Person Re-Identification
Important: Retraining is not available inside the docker version of the Hailo Software Suite. In case you use it, clone the hailo_model_zoo outside of the docker, and perform the retraining there: git clone https://github.com/hailo-ai/hailo_model_zoo.git
1. Object Detection
Network Name mAP* Input Resolution (HxWxC) Params (M) FLOPs (G)
yolov5m_vehicles 46.5 640x640x3 21.47 25.63
tiny_yolov4_license_plates 73.45 416x416x3 5.87 3.4
yolov5s_personface 47.5 640x640x3 7.25 8.38
2. License Plate Recognition
Network Name Accuracy* Input Resolution (HxWxC) Params (M) FLOPs (G)
lprnet 99.96 75x300x3 7.14 18.29
3. Person Re-ID
Network Name Accuracy* Input Resolution (HxWxC) Params (M) FLOPs (G)
repvgg_a0_person_reid_512 89.9 256x128x3 7.68 0.89
repvgg_a0_person_reid_2048 90.02 256x128x3 9.65 0.89
* Evaluated on Market-1501
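Re-ID models like those in the table produce embedding vectors (512- or 2048-dimensional here) that are typically matched by cosine similarity against a gallery of known identities. An illustrative sketch of such matching (generic NumPy, not Hailo-specific code):

```python
import numpy as np

def reid_match(query, gallery):
    """Return (index, similarity) of the gallery embedding closest to the
    query under cosine similarity. The embeddings could be, e.g., the 512-
    or 2048-dim vectors produced by the repvgg_a0_person_reid models."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    similarities = g @ q
    best = int(np.argmax(similarities))
    return best, float(similarities[best])
```

Because cosine similarity ignores magnitude, two embeddings of the same identity match even if their norms differ.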
Hailo’s license plate detection network (tiny_yolov4_license_plates) is based on Tiny-YOLOv4 and was trained in-house using Darknet with a single class. It expects a single vehicle and can work under various weather and lighting conditions, on different vehicle types and from numerous camera angles.
Architecture
• Tiny-YOLOv4
• GMACS: 3.4
Inputs
Outputs
• The above 6 values per anchor are concatenated into the 18 output channels
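The 18 output channels follow the usual YOLO head layout: per anchor, 4 box coordinates, 1 objectness score and one score per class. A small sketch of that arithmetic (the 3-anchor count is inferred from 18 / 6 = 3, not stated explicitly above):

```python
def yolo_output_channels(num_anchors, num_classes):
    """Channels of a YOLO-style detection head: for each anchor, 4 box
    coordinates, 1 objectness score and one score per class."""
    return num_anchors * (4 + 1 + num_classes)

# License plate detection (1 class): 3 anchors * 6 values = 18 channels.
# Person-face detection (2 classes): 3 anchors * 7 values = 21 channels.
```

The same formula explains the 21 output channels of the yolov5s_personface head described later in this chapter.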
Download
Here we describe how to finetune Hailo’s license plate detection network on your own custom dataset.
Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
Environment Preparations
cd hailo_model_zoo/hailo_models/license_plate_detection/
docker build --build-arg timezone=`cat /etc/timezone` -t license_plate_detection:v0 .
• This command will build the docker image with the necessary requirements, using the Dockerfile that exists in this directory.
• Update data/obj.data with paths to your training and validation .txt files, which contain the list
of the image paths*.
classes = 1
train = data/train.txt
valid = data/val.txt
names = data/obj.names
backup = backup/
* Tip: specify the paths to the training and validation images in the training and validation .txt files relative
to /workspace/darknet/
• Place your training and validation images and labels in your data folder.
• If you run into an out-of-memory error during training, please try changing subdivisions in the .cfg file (e.g., from 16 to 32).
2. Export to ONNX
Export the model to ONNX using the following command:
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model. In order to do so you need a working model-zoo environment. Choose the model YAML from our networks configuration directory, i.e. hailo_model_zoo/cfg/networks/tiny_yolov4_license_plates.yaml, and run compilation using the model zoo:
• The model zoo will take care of adding the input normalization to be part of the model.
Note:
Hailo’s license plate recognition network (lprnet) was trained in-house on a synthetic auto-generated dataset to predict
registration numbers of license plates under various weather and lighting conditions.
Architecture
• GMACS: 18.29
• Accuracy* : 99.96%
* Evaluated on internal dataset containing 1178 images
Inputs
Outputs
– The 11 channels contain logits scores for 11 classes (10 digits + blank class)
• A Connectionist Temporal Classification (CTC) greedy decoding step outputs the final license plate number prediction
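CTC greedy decoding takes the per-timestep class scores, picks the argmax at each step, collapses repeated symbols and drops blanks. An illustrative sketch (the blank index is assumed to be the last of the 11 classes; the real lprnet post-processing may differ):

```python
import numpy as np

BLANK = 10  # assumption: blank is the last of the 11 classes (10 digits + blank)

def ctc_greedy_decode(logits):
    """CTC greedy decoding: argmax per timestep, collapse repeated symbols,
    then drop blanks. `logits` has shape (timesteps, num_classes)."""
    best = np.argmax(logits, axis=1)
    decoded, prev = [], None
    for cls in best:
        if cls != prev and cls != BLANK:
            decoded.append(int(cls))
        prev = cls
    return decoded
```

Collapsing repeats is why CTC can read plates whose digits span several timesteps; the blank symbol separates genuinely repeated digits.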
Download
• Hailo’s LPRNet was trained on a synthetic auto-generated dataset containing 4 million license plate images.
Auto-generation of synthetic data for training is cheap, allows one to obtain a large annotated dataset easily
and can be adapted quickly for other domains
• A notebook for auto-generation of synthetic training data for LPRNet can be found here
• For more details on the training data autogeneration, please see the training guide
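Auto-generating labels for such a synthetic dataset can start from random plate strings; the notebook linked above additionally renders each string onto an image with varied fonts and backgrounds. A toy sketch of the label-generation side only (digits-only plates and a fixed length are assumptions of this example):

```python
import random
import string

def generate_plate_numbers(count, length=8, seed=0):
    """Generate `count` random plate strings. A real generator would also
    render each string onto an image for training."""
    rng = random.Random(seed)
    return ["".join(rng.choices(string.digits, k=length)) for _ in range(count)]
```

Seeding the generator makes the synthetic dataset reproducible across runs.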
Here we describe how to finetune Hailo’s optical character reader (OCR) model for license plate recognition with your
own custom dataset.
Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
Environment Preparations
cd hailo_model_zoo/hailo_models/license_plate_recognition/
docker build --build-arg timezone=`cat /etc/timezone` -t license_plate_recognition:v0 .
• This command will build the docker image with the necessary requirements using the Dockerfile that
exists in this directory.
• Create a folder with license plate images for training and testing. The folder should contain images whose file names correspond to the plate number, e.g. 12345678.png.
NOTE: Please make sure the file names do not contain characters which are not numbers or letters.
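As an illustration, a hypothetical helper that derives the training label from the file name and enforces the letters-and-digits rule from the note above might look like this:

```python
import re
from pathlib import Path

def plate_label(path):
    """Derive the training label from a plate-image file name (hypothetical
    helper). Raises if the stem contains anything but letters and digits."""
    stem = Path(path).stem
    if not re.fullmatch(r"[A-Za-z0-9]+", stem):
        raise ValueError(f"invalid plate file name: {path}")
    return stem
```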
– Start from our pre-trained weights in pre_trained/lprnet.pth (you can also download it from here)
2. Export to ONNX
Export the model to ONNX using the following command:
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model. In order to do so you
need a working model-zoo environment. Choose the model YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/lprnet.yaml, and run compilation using the model zoo:
hailomz compile --ckpt lprnet.onnx --calib-path /path/to/calibration/imgs/dir/ --yaml path/to/lprnet.yaml --start-node-names name1 name2 --end-node-names name1
• The model zoo adds the input normalization as part of the model.
Note:
Hailo’s person-face detection network (yolov5s_personface) is based on YOLOv5s and was trained in-house with two classes [person, face]. It can work under various lighting conditions, numbers of people, and numerous camera angles.
Architecture
• YOLOv5s
• GMACS: 8.38
Inputs
Outputs
• The above 7 values per anchor are concatenated into the 21 output channels
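The channel layout can be sketched as follows. This is an assumption based on the standard YOLOv5 head: 3 anchors x (4 box values + 1 objectness + 2 class scores) = 21 channels per grid cell.

```python
# Assumed layout of the 21 per-cell output channels:
# 3 anchors x (4 box values + 1 objectness + 2 class scores).
NUM_ANCHORS = 3
NUM_CLASSES = 2  # person, face
VALUES_PER_ANCHOR = 4 + 1 + NUM_CLASSES

def split_channels(channels):
    """Split a flat list of 21 per-cell values into 3 per-anchor dicts."""
    assert len(channels) == NUM_ANCHORS * VALUES_PER_ANCHOR
    per_anchor = []
    for a in range(NUM_ANCHORS):
        v = channels[a * VALUES_PER_ANCHOR:(a + 1) * VALUES_PER_ANCHOR]
        per_anchor.append({"box": v[:4], "obj": v[4], "cls": v[5:]})
    return per_anchor
```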
The table below shows the performance of our trained network on an internal validation set containing 6000 images,
compared to other benchmark models from the model zoo*.
* Benchmark models were trained on all COCO classes, and evaluated on our internal validation set, on ‘Person’ class
only.
Download
Here we describe how to finetune Hailo’s person-face detection network with your own custom dataset.
Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
Environment Preparations
cd hailo_model_zoo/hailo_models/personface_detection/
docker build --build-arg timezone=`cat /etc/timezone` -t personface_detection:v0 .
• This command will build the docker image with the necessary requirements using the Dockerfile that
exists in this directory.
• Update the dataset config file data/personface_data.yaml with the paths to your training and
validation images files.
• Start training on your dataset starting from our pre-trained weights in weights/yolov5s_personface.pt (you can also download it from here)
python train.py --data ./data/personface_data.yaml --cfg ./models/yolov5s_personface.yaml --weights ./weights/yolov5s_personface.pt --epochs 300 --
2. Export to ONNX Export the model to ONNX using the following command:
• The best model’s weights will be saved under the following path: ./runs/exp<#>/weights/best.pt, where <#> is the experiment number.
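A small helper for locating the newest experiment's weights might look like this. It is a sketch: the exp directory naming is assumed from the path above, and numeric ordering matters because exp10 sorts before exp2 lexicographically.

```python
from pathlib import Path

def latest_best(runs_dir="runs"):
    """Return the best.pt path of the newest experiment (hypothetical helper;
    assumes directories named exp, exp1, exp2, ... as in the guide)."""
    def exp_num(p):
        suffix = p.name[3:]  # "" for the plain "exp" directory
        return int(suffix) if suffix.isdigit() else 0
    exps = sorted(Path(runs_dir).glob("exp*"), key=exp_num)
    if not exps:
        raise FileNotFoundError("no experiment directories found")
    return exps[-1] / "weights" / "best.pt"
```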
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model. In order to do so you need a
working model-zoo environment.
Choose the model YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/yolov5s_personface.yaml, and run compilation using the
model zoo:
Note:
The training flow will automatically try to find anchor values that fit better than the default anchors. In our TAPPAS environment we use the default anchors, but be aware that the resulting anchors might be different.
The model anchors can be retrieved from the trained model using the following snippet:
import torch

m = torch.load("last.pt")["model"]
detect = list(m.children())[0][-1]
print(detect.anchor_grid)
6.4. Person-ReID
Hailo’s person Re-Identification network is based on RepVGG_A0 and was trained in-house using several person ReID datasets. It can work under various lighting conditions and numerous camera angles. Two models were trained independently, using 512 and 2048 embedding dimensions.
• RepVGG_A0
Inputs
Outputs
The table below shows the performance of our trained network on the Market1501 dataset.
Download
• 512-dim
• 2048-dim
Use the following command to measure model performance on Hailo hardware:
Here we describe how to finetune Hailo’s person-reid network with your own custom dataset.
Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
Environment Preparations
cd hailo_model_zoo/hailo_models/reid/
docker build --build-arg timezone=`cat /etc/timezone` -t person_reid:v0 .
This command will build the docker image with the necessary requirements using the Dockerfile that
exists in this directory.
• Start training on your dataset starting from our pre-trained weights in models/repvgg_a0_person_reid_512.pth or models/repvgg_a0_person_reid_2048.pth (you can also download them from 512-dim & 2048-dim) - to do so, you can edit the added yaml configs/repvgg_a0_hailo_pre_train.yaml and take a look at the examples in torchreid.
2. Export to ONNX
Export the model to ONNX using the following command:
If you exported to ONNX based on one of our provided RepVGG models, you can generate an HEF file for inference on Hailo-8 from your trained ONNX model. In order to do so you need a working model-zoo environment.
Choose the model YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/repvgg_a0_person_reid_512.yaml (or 2048), and run
compilation using the model zoo:
• The model zoo adds the input normalization as part of the model.
Note:
Hailo’s vehicle detection network (yolov5m_vehicles) is based on YOLOv5m and was trained in-house with a single class.
It can work under various weather and lighting conditions, and numerous camera angles.
Architecture
• YOLOv5m
• GMACS: 25.63
Inputs
Outputs
• The above 6 values per anchor are concatenated into the 18 output channels
The table below shows the performance of our trained network on an internal validation set containing 5000 images,
compared with the performance of other benchmark models from the model zoo*.
Download
Here we describe how to finetune Hailo’s vehicle detection network with your own custom dataset.
Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
Environment Preparations
cd hailo_model_zoo/hailo_models/vehicle_detection/
docker build --build-arg timezone=`cat /etc/timezone` -t vehicle_detection:v0 .
• This command will build the docker image with the necessary requirements using the Dockerfile that
exists in this directory.
1. Train the network on your dataset Once the docker is started, you can train the vehicle detector on your
custom dataset. We recommend following the instructions for YOLOv5 training that can be found here. The
important steps are specified below:
• Update the dataset config file data/vehicles.yaml with the paths to your training and validation
images files.
# number of classes
nc: 1
# class names
names: ['vehicle']
• Start training on your dataset starting from our pre-trained weights in weights/yolov5m_vehicles.pt (you can also download it from here)
python train.py --data ./data/vehicles.yaml --cfg ./models/yolov5m.yaml --weights ./weights/yolov5m_vehicles.pt --epochs 300 --batch 128 --device 1,2,3,4
2. Export to ONNX Export the model to ONNX using the following command:
• The best model’s weights will be saved under the following path: ./runs/exp<#>/weights/best.pt, where <#> is the experiment number.
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model. In order to do so you need a
working model-zoo environment.
Choose the model YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/yolov5m_vehicles.yaml, and run compilation using the
model zoo:
Note:
• This model has an on-chip resize from the video input [1080x1920] to the model’s input ([640x640], the resolution the model was trained with). The Model Zoo automatically adds the resize for this model using a model script command in yolov5m_vehicles.alls. Therefore, the input_resize command should be updated if the video input resolution is different (or removed if it equals the resolution the model was trained with).
• In yolov5m_vehicles.yaml, change the input_resize field to match the input_resize command in the model script.
The training flow will automatically try to find anchor values that fit better than the default anchors. In our TAPPAS environment we use the default anchors, but be aware that the resulting anchors might be different.
The model anchors can be retrieved from the trained model using the following snippet:
import torch

m = torch.load("last.pt")["model"]
detect = list(m.children())[0][-1]
print(detect.anchor_grid)
7. Datasets
The Hailo Model Zoo works with TFRecord files which store the images and labels of the dataset for evaluation and
calibration.
The instructions on how to create the TFRecord files are given below. By default, datasets are stored in the following
path:
~/.hailomz
We recommend defining the data directory path yourself by setting the HMZ_DATA environment variable.
export HMZ_DATA=/new/path/for/storage/
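The lookup order can be sketched as follows (a minimal illustration of the default-path behavior described above):

```python
import os

def data_dir():
    """Resolve the Model Zoo data directory: HMZ_DATA if set, else ~/.hailomz."""
    return os.environ.get("HMZ_DATA", os.path.expanduser("~/.hailomz"))
```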
• Datasets
– ImageNet
– COCO2017
– Cityscapes
– WIDERFACE
– VisDrone
– NYU Depth V2
– Hand Landmark
– Market1501
– PETA
– CelebA
– LFW
– BSD100
– CLIP_CIFAR100
– LOL
– BSD68
– CBSD68
– KITTI_STEREO
– KINETICS400
– NUSCENES
7.1. ImageNet
To evaluate/optimize/compile the classification models of the Hailo Model Zoo you should generate the ImageNet
TFRecord files (manual download is required).
1. Download the ImageNet dataset from here. The expected dataset structure:
imagenet
|_ train
| |_ n01440764
| |_ ...
| |_ n15075141
|_ val
| |_ n01440764
| |_ ...
| |_ n15075141
|_ ...
* To avoid downloading the ImageNet training data, you may consider using the validation dataset for calibration (this does not apply to finetuning).
7.2. COCO2017
To evaluate/optimize/compile the object detection / pose estimation models of the Hailo Model Zoo you should
generate the COCO (link) TFRecord files.
Run the create TFRecord scripts to download the dataset and generate the TFRecord files:
To evaluate/optimize/compile the single person pose estimation models of the Hailo Model Zoo you should
generate the single-person COCO TFRecord files.
Run the create TFRecord scripts to download the dataset and generate the TFRecord files:
Annotations:
annotations
|_ captions_train2017.json
|_ captions_val2017.json
|_ instances_train2017.json
|_ instances_val2017.json
|_ person_keypoints_train2017.json
|_ person_keypoints_val2017.json
Validation set:
val2017
|_ 000000000139.jpg
|_ 000000000285.jpg
|_ 000000000632.jpg
|_ 000000000724.jpg
|_ 000000000776.jpg
|_ 000000000785.jpg
|_ 000000000802.jpg
|_ 000000000872.jpg
|_ 000000000885.jpg
|_ ...
Training set:
train2017
|_ 000000000009.jpg
|_ 000000000025.jpg
|_ 000000000030.jpg
|_ 000000000034.jpg
|_ 000000000036.jpg
|_ 000000000042.jpg
|_ 000000000049.jpg
|_ 000000000061.jpg
|_ 000000000064.jpg
|_ ...
7.3. Cityscapes
To evaluate/optimize/compile the semantic segmentation models of the Hailo Model Zoo you should generate the
Cityscapes TFRecord files (manual download is required).
1. Download the Cityscapes dataset from here. The expected dataset structure:
Cityscapes
|_ gtFine
| |_ train
| |_ test
| |_ val
|_ leftImg8bit
| |_ train
| |_ test
| |_ val
| |_ train_extra
|_ ...
7.4. WIDERFACE
To evaluate/optimize/compile the face detection models of the Hailo Model Zoo you should generate the
WIDERFACE (link) TFRecord files.
Run the create TFRecord scripts to download the dataset and generate the TFRecord files:
• Face annotations
• wider_hard_val.mat
widerface/
|_ wider_face_split
| |_ readme.txt
| |_ wider_face_test_filelist.txt
| |_ wider_face_test.mat
| |_ wider_face_train_bbx_gt.txt
| |_ wider_face_train.mat
| |_ wider_face_val_bbx_gt.txt
| |_ wider_face_val.mat
| |_ wider_hard_val.mat
|_ WIDER_train
| |_ images
| |_ 0--Parade
| |_ 10--People_Marching
| |_ 11--Meeting
| |_ ...
|_ WIDER_val
|_ images
|_ 0--Parade
|_ 10--People_Marching
|_ 11--Meeting
|_ ...
7.5. VisDrone
To evaluate/optimize/compile the visdrone object detection models of the Hailo Model Zoo you should generate the
VisDrone (link) TFRecord files.
Run the create TFRecord scripts to download the dataset and generate the TFRecord files:
Training set:
VisDrone2019-DET-train/
|_ annotations
| |_ 0000002_00005_d_0000014.txt
| |_ 0000002_00448_d_0000015.txt
| |_ ...
|_ images
|_ 0000002_00005_d_0000014.jpg
|_ 0000002_00448_d_0000015.jpg
|_ ...
Validation set:
VisDrone2019-DET-val/
|_ annotations
| |_ 0000001_02999_d_0000005.txt
| |_ 0000001_03499_d_0000006.txt
| |_ ...
|_ images
|_ 0000001_02999_d_0000005.jpg
|_ 0000001_03499_d_0000006.jpg
|_ ...
benchmark_RELEASE
|_ dataset
|_ cls
|_ img
|_ inst
|_ train.txt
|_ val.txt
1. Download the dataset from here. Extract using 'tar -xf d2s_images_v1.1.tar.xz'. Expected dataset structure:
|_ images
|_ D2S_000200.jpg
|_ D2S_000201.jpg
|_ ...
2. Download the annotations from here. Extract using 'tar -xf d2s_annotations_v1.1.tar.xz'. Expected annotations structure:
|_ annotations
|_ D2S_augmented.json
|_ D2S_validation.json
|_ ...
1. Download the dataset from here. Extract using 'tar -xf nyudepthv2.tar.gz'. Expected dataset structure:
|_ train
|_ study_0300
|_ 00626.h5
|_ 00631.h5
|_ ...
|_ ...
|_ val
|_ official
|_ 00001.h5
|_ 00002.h5
|_ 00009.h5
|_ 00014.h5
|_ ...
python hailo_model_zoo/datasets/create_300w-lp_tddfa_tfrecord.py
python hailo_model_zoo/datasets/create_aflw2k3d_tddfa_tfrecord.py
1. Download the augmented_cropped 300W-LP dataset from here and extract. Expected structure:
train_aug_120x120
|_ AFW_AFW_1051618982_1_0_10.jpg
|_ AFW_AFW_1051618982_1_0_11.jpg
|_ AFW_AFW_1051618982_1_0_12.jpg
|_ AFW_AFW_1051618982_1_0_13.jpg
|_ AFW_AFW_1051618982_1_0_1.jpg
|_ AFW_AFW_1051618982_1_0_2.jpg
|_ AFW_AFW_1051618982_1_0_3.jpg
|_ AFW_AFW_1051618982_1_0_4.jpg
|_ ...
2. Run
– AFLW2000-3D.pose.npy
– AFLW2000-3D.pts68.npy
– AFLW2000-3D-Reannotated.pts68.npy
– AFLW2000-3D_crop.roi_box.npy
aflw2k3d_tddfa
|_ AFLW2000-3D_crop.roi_box.npy
|_ AFLW2000-3D.pose.npy
|_ AFLW2000-3D.pts68.npy
|_ AFLW2000-3D-Reannotated.pts68.npy
|_ test.data
|_ AFLW2000
| |_ Code
| | |_ Mex
| | |_ ModelGeneration
| |_ image00002.jpg
| |_ image00002.mat
| |_ image00004.jpg
| |_ image00004.mat
| |_ ...
|_ AFLW2000-3D_crop
| |_ image00002.jpg
| |_ image00004.jpg
| |_ image00006.jpg
| |_ image00008.jpg
| |_ ...
|_ AFLW2000-3D_crop.list
|_ AFLW_GT_crop
| |_ ...
|_ AFLW_GT_crop.list
python hailo_model_zoo/datasets/create_hand_landmark_tfrecord.py
Hands 00 000
|_ Hand_0011695.jpg
|_ Hand_0011696.jpg
|_ Hand_0011697.jpg
|_ ...
2. Run
7.11. Market1501
Market-1501-v15.09.15
|_ bounding_box_test
|_ 0000_c1s1_000151_01.jpg
|_ 0000_c1s1_000376_03.jpg
|_ ...
|_ bounding_box_train
|_ 0002_c1s1_000451_03.jpg
|_ 0002_c1s1_000551_01.jpg
|_ ...
|_ gt_bbox
|_ 0001_c1s1_001051_00.jpg
|_ 0001_c1s1_002301_00.jpg
|_ ...
|_ gt_query
|_ 0001_c1s1_001051_00_good.mat
|_ 0001_c1s1_001051_00_junk.mat
|_ ...
|_ query
|_ 0001_c1s1_001051_00.jpg
|_ 0001_c2s1_000301_00.jpg
|_ ...
2. Run
7.12. PETA
To evaluate/optimize/compile the person attribute models of the Hailo Model Zoo you should generate the PETA
TFRecord files (manual download is required).
1. Download the PETA dataset from here. The expected dataset structure:
PETA
|_ images
| |_ 00001.png
| |_ ...
| |_ 19000.png
|_ PETA.mat
7.13. CelebA
To evaluate/optimize/compile the face attribute models of the Hailo Model Zoo you should generate the CelebA
TFRecord files (manual download is required).
1. Download the CelebA dataset from here. The expected dataset structure:
Celeba
|_ img_align_celeba_png
| |_ 000001.jpg
| |_ ...
| |_ 202599.jpg
|_ list_attr_celeba.txt
|_ list_eval_partition.txt
7.14. LFW
To evaluate/optimize/compile the face recognition models of the Hailo Model Zoo you should generate the arcface_lfw
TFRecord files
7.15. BSD100
To evaluate/optimize/compile the super resolution models of the Hailo Model Zoo you should generate the BSD100
TFRecord files.
1. Download the BSD100 dataset from here and extract. The expected dataset structure:
BSD100
|_ GTmod12
| |_ 101085.png
| |_ ...
| |_ 97033.png
|_ GTmod16
| |_ ...
|_ LRbicx8
| |_ ...
|_ LRbicx4
| |_ ...
|_ LRbicx3
| |_ ...
|_ LRbicx2
| |_ ...
|_ LRbicx16
| |_ ...
|_ original
| |_ ...
7.16. CLIP_CIFAR100
To evaluate/optimize/compile the CLIP models of the Hailo Model Zoo you should generate the CIFAR100 TFRecord
files.
7.17. LOL
To evaluate/optimize/compile the low light enhancement models of the Hailo Model Zoo you should generate the
LOL TFRecord files.
1. Download the LOL dataset from here and extract. The expected dataset structure:
lol_dataset
|_ eval15
|_ high
| |_ 111.png
| |_ 146.png
| |_ ...
|_ low
| |_ 111.png
| |_ 146.png
| |_ ...
|_ our485
|_ high
| |_ 100.png
| |_ 101.png
| |_ ...
|_ low
| |_ 100.png
| |_ 101.png
| |_ ...
7.18. BSD68
To evaluate/optimize/compile the image denoising models of the Hailo Model Zoo you should generate the BSD68
TFRecord files.
1. Download the BSD100 dataset from here and extract. The expected dataset structure:
test
|_ BSD68
| |_ test001.png
| |_ ...
| |_ test068.png
|_ CBSD68
| |_ ...
|_ Kodak
| |_ ...
|_ McMaster
| |_ ...
|_ Set12
| |_ ...
|_ Urban100
| |_ ...
|_ LRbicx16
7.19. CBSD68
To evaluate/optimize/compile the image denoising models of the Hailo Model Zoo you should generate the CBSD68
TFRecord files.
1. Download the BSD100 dataset from here and extract. The expected dataset structure:
test
|_ BSD68
| |_ ...
|_ CBSD68
| |_ test001.png
| |_ ...
| |_ test068.png
|_ Kodak
| |_ ...
|_ McMaster
| |_ ...
|_ Set12
| |_ ...
|_ Urban100
| |_ ...
|_ LRbicx16
7.20. KITTI_STEREO
To evaluate/optimize/compile the stereo models of the Hailo Model Zoo you should generate the KITTI Stereo
TFRecord files.
1. Download the KITTI Stereo dataset from here. One must request access and await approval.
kitti_stereo
|_ testing
|_ image_2
| |_ 000000_10.png
| |_ 000000_11.png
| |_ ...
|_ image_3
| |_ 000000_10.png
| |_ 000000_11.png
| |_ ...
|_ training
|_ image_2
| |_ 000000_10.png
| |_ 000000_11.png
| |_ ...
|_ disp_occ_0
| |_ 000000_10.png
| |_ 000001_10.png
| |_ 000002_10.png
| |_ ...
7.21. KINETICS400
To evaluate/optimize/compile the video classification models of the Hailo Model Zoo you should generate the KINETICS400 TFRecord files.
1. Download the kinetics400 dataset from here. Follow the instructions to download the dataset.
k400/videos
|_ test
|_ abseiling
| |_ 0aSqlZT8QmM_000048_000058.mp4
| |_ 0xreS8KFbrw_000417_000427.mp4
| |_ ...
|_ air drumming
| |_ 013SMb0SX8I_000020_000030.mp4
| |_ 013SMb0SX8I_000020_000030.mp4
| |_ ...
|_ train
|_ abseiling
| |_ 0347ZoDXyP0_000095_000105.mp4
| |_ 035LtPeUFTE_000085_000095.mp4
| |_ ...
|_ air drumming
| |_ 03V2idM7_KY_000003_000013.mp4
| |_ 1R7Ds_000003_000013.mp4
| |_ 0c1bhfxioqE_000078_000088.mp4
| |_ ...
|_ val
|_ abseiling
| |_ 0wR5jVB-WPk_000417_000427.mp4
| |_ 3caPS4FHFF8_000036_000046.mp4
| |_ ...
|_ air drumming
| |_ 2cPLjY5AWXU_000001_000011.mp4
| |_ 3K0Sw7rbzPU_000114_000124.mp4
| |_ 6Tnsmk9C2rg_000048_000058.mp4
| |_ ...
7.22. NUSCENES
nuscenes
|_ maps
| |_ *.png
|_ samples
| |_ CAM_BACK
| |_ |_ *.jpg
| |_ CAM_BACK_LEFT
| |_ |_ *.jpg
| |_ CAM_BACK_RIGHT
| |_ |_ *.jpg
| |_ CAM_FRONT
| |_ |_ *.jpg
| |_ CAM_FRONT_LEFT
| |_ |_ *.jpg
| |_ CAM_FRONT_RIGHT
| |_ |_ *.jpg
|_ sweeps
| |_ CAM_BACK
| |_ |_ *.jpg
| |_ CAM_BACK_LEFT
| |_ |_ *.jpg
| |_ CAM_BACK_RIGHT
| |_ |_ *.jpg
| |_ CAM_FRONT
| |_ |_ *.jpg
| |_ CAM_FRONT_LEFT
| |_ |_ *.jpg
| |_ CAM_FRONT_RIGHT
| |_ |_ *.jpg
|_ v1.0-trainval
| |_ *.json
python hailo_model_zoo/datasets/create_nuscenes_petrv2_cascade_tfrecord.py calib --ann_file <train_annotation_file.pkl> --coords-dir <coords3d_directory_path>
python hailo_model_zoo/datasets/create_nuscenes_petrv2_cascade_tfrecord.py val --ann_file <val_annotation_file.pkl> --coords-dir <coords3d_directory_path>
Where <*_annotation_file.pkl> is the train / val .pkl annotation file generated from the PETR training environment.
Notice: In order to benchmark our PETRv2 cascade (petrv2), please download the annotation .pkl file from here
and create a symbolic link (softlink) from /fastdata/data/nuscenes/nuesence/ to your nuscenes dataset folder.
8. Benchmarks
In order to measure FPS, power and latency of the Hailo Model Zoo networks you can use the HailoRT command line
interface.
For more information please refer to the HailoRT documentation in hailo.ai.
8.1. Example
The HailoRT command line interface works with the Hailo Executable File (HEF) of the model.
To generate the HEF file use the following command:
After building the HEF you will be able to measure the performance of the model by using the HailoRT command
line interface.
Example for measuring performance of resnet_v1_50:
Example output:
=======
Summary
=======
FPS (hw_only) = 1328.83
(streaming) = 1328.8
Latency (hw) = 2.93646 ms
Power in streaming mode (average) = 3.19395 W
(max) = 3.20456 W
To use datasets from the Hailo Model Zoo, you can use the command:
which will generate a bin file with serialized images. This bin file can be used inside the HailoRT:
9.1. Properties
• network
– network_path (array): Path to the network files, can be ONNX / TF Frozen graph (pb) / TF CKPT files / TF2
saved model.
• parser
– nodes ([‘array’, ‘null’]): List of [start node, [end nodes]] for parsing.
For example: [“resnet_v1_50/conv1/Pad”, “resnet_v1_50/predictions/Softmax”].
* normalize_in_net (boolean): Whether or not the network includes an on-chip normalization layer. If so, the normalization layer will appear in the .alls file that is used. Default: False.
Example alls command:
normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])
If the alls doesn’t include the required normalization, then the MZ (and the user application) will apply normalization before feeding inputs to the network.
* mean_list ([‘array’, ‘null’]): Used only in case normalize_in_net=false. The MZ automatically performs
normalization to the calibration set, so the network receives already-normalized input (saves the
user the need to normalize the dataset). Default: None.
* std_list ([‘array’, ‘null’]): Used only in case normalize_in_net=false: The MZ automatically performs
normalization to the calibration set, so the network receives already-normalized input (saves the
user the need to normalize the dataset). Default: None.
– start_node_shapes ([‘array’, ‘null’]): Dict for setting the input shape of supported models that do not explicitly set it.
For example, models with an input shape of [?, ?, ?, 3] can be set with {"Preprocessor/sub:0": [1, 512, 512, 3]}. Default: None.
• preprocessing
• quantization
• postprocessing
* nms (boolean): Activate the NMS PPU layer and the relevant decoding layers. Default: False.
• evaluation
– dataset_name ([‘string’, ‘null’]): Name of the dataset to be used in evaluation. Default: None.
– data_set ([‘string’, ‘null’]): Path to TFrecord dataset for evaluation. Default: None.
– classes ([‘integer’, ‘null’]): Number of classes in the model. Default: 1000.
– labels_offset ([‘integer’, ‘null’]): Offset of labels. Default: 0.
– network_type ([‘string’, ‘null’]): The type of the network used for evaluation.
Use this field if the evaluation type is different from the preprocessing type. Default: None.
• hn_editor
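The normalization behavior described for normalize_in_net=false can be sketched as follows. This is a minimal illustration using the example mean/std values from the alls command above; the MZ's actual implementation may differ.

```python
# Example ImageNet-style values, matching the normalization([...]) alls example.
MEAN = [123.675, 116.28, 103.53]
STD = [58.395, 57.12, 57.375]

def normalize_pixel(rgb):
    """Normalize one RGB pixel channel-wise: (value - mean) / std."""
    return [(v - m) / s for v, m, s in zip(rgb, MEAN, STD)]
```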
• The MZ uses a hierarchical .yaml infrastructure for configuration. For example, for yolov5m_vehicles:
base:
- base/yolov5.yaml
• Each property in the child hierarchies overrides the properties in the parent ones. For example, if preprocessing.input_shape is defined both in base/yolov5.yaml and base/base.yaml, the one from base/yolov5.yaml will be used.
• Therefore, if we want to change some property, we can just update the last child file that uses that property
• The evaluation and postprocessing properties aren’t needed for compilation, as they are used by the Model Zoo for model evaluation (which isn’t supported yet for retrained models). The info field is used only for description.
• You might want to update these default values in some advanced scenarios:
– preprocessing.padding_color
* Change those values only if you have used a different value for training your model
– parser.normalization_params.normalize_in_net
* If you have manually changed the normalization values on the retraining docker, and normalize_in_net=true, remember to update the corresponding alls command
– parser.normalization_params.mean_list
* Update those values if normalize_in_net=false and you have manually changed the normalization values on the retraining docker
– parser.normalization_params.std_list
* Update those values if normalize_in_net=false and you have manually changed the normalization values on the retraining docker
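The hierarchical override rule can be sketched as a recursive dictionary merge (a minimal illustration, not the MZ's actual implementation):

```python
def merge(parent, child):
    """Recursively merge config dicts; child values override parent values."""
    out = dict(parent)
    for key, value in child.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)  # merge nested mappings key by key
        else:
            out[key] = value  # child hierarchy wins
    return out
```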
Important: Retraining is not available inside the docker version of the Hailo Software Suite. If you use it, clone the hailo_model_zoo outside of the docker and perform the retraining there: git clone https://github.com/hailo-ai/hailo_model_zoo.git
Object Detection
• YOLOv3
• YOLOv4
• YOLOv5
• YOLOv8
• YOLOX
• DAMO-YOLO
• NanoDet
Pose Estimation
• CenterPose
• MSPN
Semantic Segmentation
• FCN
Instance Segmentation
• YOLACT
• YOLOv8_seg
Face Recognition
• ArcFace
10.1.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside of that docker.
cd hailo_model_zoo/training/yolov3
docker build --build-arg timezone=`cat /etc/timezone` -t yolov3:v0 .
1. Train your model: Once the docker is started, you can start training your YOLOv3.
• Prepare your custom dataset - Follow the full instructions described here:
– Create data/obj.data with paths to your training and validation .txt files, which contain
the list of the image paths*.
classes = 80
train = data/train.txt
valid = data/val.txt
names = data/coco.names
backup = backup/
* Tip: specify the paths to the training and validation images in the training and validation .txt
files relative to /workspace/darknet/
Place your training/validation images and labels in your data folder and make sure you update the
number of classes.
– Labels - each image should have labels in YOLO format with corresponding txt file for each image.
• Start training - The following command is an example for training the yolov3.
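The YOLO label format mentioned above can be sketched as follows. This is a hypothetical conversion helper; the pixel-corner input layout is an assumption.

```python
def to_yolo_line(cls, box, img_w, img_h):
    """Convert a pixel-corner box (x1, y1, x2, y2) to a YOLO label line:
    'class x_center y_center width height', all normalized to [0, 1]."""
    x1, y1, x2, y2 = box
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```

Each image gets one such line per object, written to a .txt file with the same stem as the image.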
2. Export to ONNX:
In order to export your trained YOLOv3 model to ONNX run the following script:
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model.
In order to do so you need a working model-zoo environment.
Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/yolov3_416.yaml (for the default YOLOv3 model).
Note:
• On your desired YOLOv3 YAML, make sure preprocessing.input_shape fits your model’s resolution.
• For TAPPAS, retrain the model with a resolution of 608x608, and on compilation use yolov3_gluon.yaml.
10.2.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
cd hailo_model_zoo/training/yolov4
docker build --build-arg timezone=`cat /etc/timezone` -t yolov4:v0 .
Once the docker is started, you can start training your YOLOv4-leaky.
• Prepare your custom dataset - Follow the full instructions described here:
– Create data/obj.data with paths to your training and validation .txt files, which contain
the list of the image paths*.
classes = 80
train = data/train.txt
valid = data/val.txt
names = data/coco.names
backup = backup/
* Tip: specify the paths to the training and validation images in the training and validation .txt
files relative to /workspace/darknet/
Place your training/validation images and labels in your data folder and make sure you update the
number of classes.
– Labels - each image should have labels in YOLO format, with a corresponding .txt file for each image.
• Start training - The following command is an example for training the yolov4-leaky model.
2. Export to ONNX:
In order to export your trained YOLOv4 model to ONNX run the following script:
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model.
In order to do so you need a working model-zoo environment.
Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/yolov4_leaky.yaml, and run compilation using the model zoo:
hailomz compile yolov4_leaky --ckpt yolov4_leaky.onnx --calib-path /path/to/calibration/imgs/dir/ --yaml path/to/yolov4_leaky.yaml --start-node-names name1
Note:
• The model zoo will take care of adding the input normalization to be part of the model.
10.3.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
cd hailo_model_zoo/training/yolov5
docker build --build-arg timezone=`cat /etc/timezone` -t yolov5:v0 .
* This command will build the docker image with the necessary requirements using the Dockerfile that exists in the yolov5 directory.
• Prepare your custom dataset - Follow the steps described here in order to create:
– Make sure to include number of classes field in the yaml, for example: nc: 80
• Start training - The following command is an example for training a yolov5s model.
– yolov5s.pt - pretrained weights. You can find the pretrained weights for yolov5s, yolov5m,
yolov5l, yolov5x and yolov5m_wo_spp in your working directory.
– models/yolov5s.yaml - configuration file of the yolov5 variant you would like to train. In
order to change the number of classes make sure you update this file.
2. Export to ONNX:
In order to export your trained YOLOv5 model to ONNX run the following script:
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model.
In order to do so you need a working model-zoo environment.
Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/yolov5s.yaml, and run compilation using the model zoo:
hailomz compile yolov5s --ckpt yolov5s.onnx --calib-path /path/to/calibration/imgs/dir/ --yaml path/to/yolov5s.yaml --classes 80
Note:
• Make sure to also update the preprocessing.input_shape field in yolo.yaml if it was changed during retraining.
The training flow will automatically try to find anchor values that fit the data better than the default anchors. In our TAPPAS environment we use the default anchors, but be aware that the resulting anchors might be different.
The model anchors can be retrieved from the trained model using the following snippet:
import torch

m = torch.load("last.pt")["model"]
detect = list(m.children())[0][-1]
print(detect.anchor_grid)
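The anchor search is essentially a clustering of the training-set box dimensions. As an illustration only (the actual training flow uses an IoU-based genetic/k-means procedure, not this simplification), plain k-means over (width, height) pairs looks like:

```python
def kmeans_anchors(boxes, k, iters=50):
    """Cluster (w, h) box sizes into k anchors with plain k-means."""
    centers = boxes[:k]                      # naive init: first k boxes
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for w, h in boxes:
            # assign each box to the nearest center (squared distance)
            j = min(range(k), key=lambda i: (w - centers[i][0]) ** 2
                                            + (h - centers[i][1]) ** 2)
            groups[j].append((w, h))
        # recompute each center as the mean of its assigned boxes
        centers = [
            (sum(b[0] for b in g) / len(g), sum(b[1] for b in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return sorted(centers)

boxes = [(10, 12), (11, 13), (50, 60), (52, 58), (200, 180), (210, 190)]
print(kmeans_anchors(boxes, k=3))
```

The real search additionally measures fit with IoU rather than Euclidean distance, which is why its anchors can differ from a naive clustering like this.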
10.4.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
cd hailo_model_zoo/training/yolov8
docker build --build-arg timezone=`cat /etc/timezone` -t yolov8:v0 .
* This command will build the docker image with the necessary requirements using the Dockerfile that exists in the yolov8 directory.
• Prepare your custom dataset - Follow the steps described here in order to create:
– Make sure to include number of classes field in the yaml, for example: nc: 80
• Start training - The following command is an example for training a yolov8s model.
– yolov8s.pt - pretrained weights. The pretrained weights for yolov8n, yolov8s, yolov8m, yolov8l
and yolov8x will be downloaded to your working directory when running this command.
2. Export to ONNX:
In order to export your trained YOLOv8 model to ONNX run the following script:
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model.
In order to do so you need a working model-zoo environment.
Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/yolov8s.yaml, and run compilation using the model zoo:
hailomz compile yolov8s --ckpt yolov8s.onnx --calib-path /path/to/calibration/imgs/dir/ --yaml path/to/yolov8s.yaml --classes 80
Note:
• Make sure to also update the preprocessing.input_shape field in yolo.yaml if it was changed during retraining.
10.5.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside of that docker.
cd hailo_model_zoo/training/yolox
docker build --build-arg timezone=`cat /etc/timezone` -t yolox:v0 .
You can use the COCO format, which is already supported, for training on your own custom dataset. More information can be found here.
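For reference, a COCO-format annotation file is a single JSON with three top-level lists: images, annotations, and categories. A minimal illustrative example (field values are placeholders):

```python
import json

# Minimal COCO-style annotation structure (illustrative values):
coco = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels; one entry per object
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 80, 60], "area": 4800, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "person"},
    ],
}

text = json.dumps(coco)
loaded = json.loads(text)
print(sorted(loaded.keys()))
```

Each annotation links an object to its image via image_id and to its class via category_id.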
2. Training:
exps/default/yolox_m_leaky.py
exps/default/yolox_l_leaky.py
exps/default/yolox_x_leaky.py
exps/default/yolox_s_wide_leaky.py
• -c: path to pretrained weights which can be found in your working directory
|_ yolox_s.pth
|_ yolox_m.pth
|_ yolox_l.pth
|_ yolox_x.pth
3. Exporting to onnx:
NOTE: Your trained model will be found under the following path: /workspace/YOLOX/YOLOX_outputs/yolox_s_leaky/, and the exported onnx will be written to /workspace/YOLOX/yolox_s_leaky.onnx
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model. In order to do so you need
a working model-zoo environment. Choose the corresponding YAML from our networks configuration directory,
i.e. hailo_model_zoo/cfg/networks/yolox_s_leaky.yaml, and run compilation using the model
zoo:
10.6.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
cd hailo_model_zoo/training/damoyolo
docker build --build-arg timezone=`cat /etc/timezone` -t damoyolo:v0 .
* This command will build the docker image with the necessary requirements using the Dockerfile that exists in the damoyolo directory.
• Prepare your custom dataset (must be coco format) - Follow the steps described here.
• Start training - The following command is an example for training a damoyolo_tinynasL20_T model.
configs/damoyolo_tinynasL25_S.py
configs/damoyolo_tinynasL35_M.py
In order to export your trained DAMO-YOLO model to ONNX run the following script:
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model.
In order to do so you need a working model-zoo environment.
Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/damoyolo_tinynasL20_T.yaml, and run compilation using
the model zoo:
• The model zoo will take care of adding the input normalization to be part of the model.
10.7.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
cd hailo_model_zoo/training/nanodet
docker build -t nanodet:v0 --build-arg timezone=`cat /etc/timezone` .
2. Training:
data:
train:
name: CocoDataset
img_path: <path-to-train-dir>
ann_path: <path-to-annotations-file>
...
val:
name: CocoDataset
img_path: <path-to-validation-dir>
ann_path: <path-to-annotations-file>
...
cd /workspace/nanodet
ln -s /workspace/data/coco/ /coco
python tools/train.py ./config/legacy_v0.x_configs/RepVGG/nanodet-RepVGG-A0_416.yml
In case you want to use the pretrained nanodet-RepVGG-A0_416.ckpt, which was pre-downloaded into your docker, modify your configuration file:
schedule:
load_model: ./pretrained/nanodet-RepVGG-A0_416.ckpt
Modifying the batch size and the number of GPUs used for training can also be done in the configuration file:
device:
gpu_ids: [0]
workers_per_gpu: 1
batchsize_per_gpu: 128
3. Exporting to onnx
After training, install the ONNX and ONNXruntime packages, then export the ONNX model:
NOTE: Your trained model will be found under the following path: /workspace/nanodet/workspace/<backbone-name>/model_last.ckpt, and the exported onnx will be written to /workspace/nanodet/nanodet.onnx
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model.
In order to do so you need a working model-zoo environment.
Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/nanodet_repvgg.yaml, and run compilation using the model
zoo:
hailomz compile nanodet_repvgg --ckpt nanodet.onnx --calib-path /path/to/calibration/imgs/dir/ --yaml path/to/nanodet_repvgg.yaml --end-node-names name1 --classes 80
10.8.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
cd hailo_model_zoo/training/fcn
docker build -t fcn:v0 --build-arg timezone=`cat /etc/timezone` .
/workspace
|-- mmsegmentation
    `-- data
        `-- cityscapes
            |-- gtFine
            |   |-- train
            |   |   |-- aachen
            |   |   |   `-- *.png
            |   |   `-- ...
            |   `-- test
            |       |-- berlin
            |       |   `-- *.png
            |       `-- ...
            `-- leftImg8bit
                |-- train
                |   |-- aachen
                |   |   `-- *.png
                |   `-- ...
                `-- test
                    |-- berlin
                    |   `-- *.png
                    `-- ...
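Before launching training it can help to verify that the layout above is in place. A small sketch using only the Python standard library (the helper is illustrative, not part of mmsegmentation):

```python
from pathlib import Path

def check_cityscapes(root):
    """Return the expected sub-directories that are missing under root."""
    expected = [
        "gtFine/train", "gtFine/test",
        "leftImg8bit/train", "leftImg8bit/test",
    ]
    root = Path(root)
    return [d for d in expected if not (root / d).is_dir()]

missing = check_cityscapes("/workspace/mmsegmentation/data/cityscapes")
if missing:
    print("missing:", missing)
```

An empty result means the gtFine and leftImg8bit splits are where the training config expects them.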
2. Training:
cd /workspace/mmsegmentation
./tools/dist_train.sh configs/fcn/fcn8_r18_hailo.py 2
3. Exporting to onnx
cd /workspace/mmsegmentation
python ./tools/pytorch2onnx.py configs/fcn/fcn8_r18_hailo.py --checkpoint ./work_dirs/fcn8_r18_hailo/iter_59520.pth --shape 1024 1920 --out_name fcn.onnx
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model.
In order to do so you need a working model-zoo environment.
Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/fcn8_resnet_v1_18.yaml, and run compilation using the
model zoo:
hailomz compile fcn8_resnet_v1_18 --ckpt fcn.onnx --calib-path /path/to/calibration/imgs/dir/ --yaml path/to/fcn8_resnet_v1_18.yaml --start-node-names name1
• The model zoo will take care of adding the input normalization to be part of the model.
10.9.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside of that docker.
cd hailo_model_zoo/training/yolact
docker build --build-arg timezone=`cat /etc/timezone` -t yolact:v0 .
/workspace
|-- data
    `-- coco
        |-- annotations
        |   |-- instances_train2017.json
        |   |-- instances_val2017.json
        |   |-- person_keypoints_train2017.json
        |   |-- person_keypoints_val2017.json
        |   `-- image_info_test-dev2017.json
        `-- images
            |-- train2017
            |   `-- *.jpg
            |-- val2017
            |   `-- *.jpg
            `-- test2017
                `-- *.jpg
Once your dataset is prepared, create a soft link to it under the yolact/data work directory, then you can start
training your model:
cd /workspace/yolact
ln -s /workspace/data/coco data/coco
python train.py --config=yolact_regnetx_800MF_config
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model. In order to do so you need
a working model-zoo environment. Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/yolact.yaml, and run compilation using the model zoo:
hailomz compile yolact --ckpt yolact.onnx --calib-path /path/to/calibration/imgs/dir/ --yaml path/to/yolact_regnetx_800mf_20classes.yaml --start-node-names name1
• The model zoo will take care of adding the input normalization to be part of the model.
Note:
• The yolact_regnetx_800mf_20classes.yaml <https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/> is an example YAML where some of the classes (out of 80) were removed. If you wish to change the number of classes, the easiest way is to retrain with the exact number of classes and erase the channels_remove section (lines 18 to 437).
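If you prefer to erase such a line range programmatically rather than by hand, a simple stdlib sketch (illustrative helper, not a model zoo utility) could look like:

```python
def drop_lines(text, start, end):
    """Remove 1-indexed lines start..end (inclusive) from a text blob."""
    lines = text.splitlines(keepends=True)
    return "".join(lines[:start - 1] + lines[end:])

# e.g. erase a channels_remove-style section from a copied YAML:
# edited = drop_lines(open("my_yolact.yaml").read(), 18, 437)
print(drop_lines("a\nb\nc\nd\n", 2, 3))
```

Always work on a copy of the YAML so the original configuration stays intact.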
10.10.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
cd hailo_model_zoo/training/yolov8_seg
docker build --build-arg timezone=`cat /etc/timezone` -t yolov8_seg:v0 .
* This command will build the docker image with the necessary requirements using the Dockerfile that exists in the yolov8_seg directory.
• Prepare your custom dataset - Follow the steps described here in order to create:
– Make sure to include number of classes field in the yaml, for example: nc: 80
• Start training - The following command is an example for training a yolov8s-seg model.
2. Export to ONNX:
In order to export your trained YOLOv8-seg model to ONNX run the following script:
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model.
In order to do so you need a working model-zoo environment.
Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/yolov8s-seg.yaml, and run compilation using the model zoo:
hailomz compile yolov8s-seg --ckpt yolov8s-seg.onnx --calib-path /path/to/calibration/imgs/dir/ --yaml path/to/yolov8s-seg.yaml --start-node-names name1
• The model zoo will take care of adding the input normalization to be part of the model.
Note:
• Make sure to also update the preprocessing.input_shape field in yolo.yaml if it was changed during retraining.
10.11.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
cd hailo_model_zoo/training/centerpose
docker build -t centerpose:v0 --build-arg timezone=`cat /etc/timezone` .
/workspace
|-- data
    `-- coco
        |-- annotations
        |   |-- instances_train2017.json
        |   |-- instances_val2017.json
        |   |-- person_keypoints_train2017.json
        |   |-- person_keypoints_val2017.json
        |   `-- image_info_test-dev2017.json
        `-- images
            |-- train2017
            |   `-- *.jpg
            |-- val2017
            |   `-- *.jpg
            `-- test2017
                `-- *.jpg
The path for the dataset can be configured in the .yaml file, e.g. centerpose/experiments/regnet_fpn.yaml
2. Training:
cd /workspace/centerpose/tools
python -m torch.distributed.launch --nproc_per_node 4 train.py --cfg ../experiments/regnet_fpn.yaml
cd /workspace/centerpose/tools
python export.py --cfg ../experiments/regnet_fpn.yaml --TESTMODEL /workspace/out/regnet1_6/model_best.pth
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model. In order to do so you need a
working model-zoo environment. Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/centerpose_regnetx_1.6gf_fpn.yaml, and run compi-
lation using the model zoo:
• The model zoo will take care of adding the input normalization to be part of the model.
10.12.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside
of that docker.
cd hailo_model_zoo/training/mspn
docker build -t mspn:v0 --build-arg timezone=`cat /etc/timezone` .
/workspace
|-- data
    `-- coco
        |-- annotations
        |   |-- instances_train2017.json
        |   |-- instances_val2017.json
        |   |-- person_keypoints_train2017.json
        |   |-- person_keypoints_val2017.json
        |   `-- image_info_test-dev2017.json
        |-- train2017
        |   `-- *.jpg
        |-- val2017
        |   `-- *.jpg
        `-- test2017
            `-- *.jpg
The path for the dataset can be configured in the .py config file, e.g. configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/regnetx_800mf_256x192.py
2. Training:
cd /workspace/mmpose
./tools/dist_train.sh ./configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/regnetx_800mf_256x192.py 4 --work-dir exp0
Where 4 is the number of GPUs used for training. In this example, the trained model will be saved under the exp0 directory.
3. Export to onnx
In order to export your trained model to ONNX run the following script:
cd /workspace/mmpose
python tools/deployment/pytorch2onnx.py ./configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/regnetx_800mf_256x192.py exp0/best_AP_epoch_310.pth --output-file mspn_regnetx_800mf.onnx
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model.
In order to do so you need a working model-zoo environment.
Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/mspn_regnetx_800mf.yaml, and run compilation using the
model zoo:
hailomz compile mspn_regnetx_800mf --ckpt mspn_regnetx_800mf.onnx --calib-path /path/to/calibration/imgs/dir/ --yaml path/to/mspn_regnetx_800mf.yaml --end-node-names name1
• The model zoo will take care of adding the input normalization to be part of the model.
10.13.1. Prerequisites
NOTE: In case you are using the Hailo Software Suite docker, make sure to run all of the following instructions outside of that docker.
cd hailo_model_zoo/training/arcface
docker build --build-arg timezone=`cat /etc/timezone` -t arcface:v0 .
data_dir/
├── agedb_30.bin
├── cfp_fp.bin
├── lfw.bin
├── person0/
├── person1/
├── ...
└── personlast/
2. MxNetRecord - train.rec and train.idx files. This is the format of insightface datasets. Validation data is packed in .bin files:
data_dir/
├── agedb_30.bin
├── cfp_fp.bin
├── lfw.bin
├── train.idx
└── train.rec
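For reference, the packed .bin validation files (lfw.bin etc.) are commonly a pickled tuple of two lists: the encoded image bytes (two consecutive entries per pair) and one same/different boolean per pair; treat the exact layout here as an assumption and check against the insightface tooling. A stdlib sketch of that assumed structure:

```python
import pickle

# Assumed layout of a packed validation file: (bins, issame_list), where
# bins holds encoded image bytes (two consecutive entries form one pair)
# and issame_list holds one bool per pair.
bins = [b"<jpeg-bytes-a0>", b"<jpeg-bytes-a1>",
        b"<jpeg-bytes-b0>", b"<jpeg-bytes-b1>"]
issame_list = [True, False]

blob = pickle.dumps((bins, issame_list))

loaded_bins, loaded_issame = pickle.loads(blob)
print(len(loaded_bins) // 2, "pairs,", sum(loaded_issame), "positive")
```

This is only a structural sketch; real .bin files are produced by the insightface dataset-packing tools.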
2. Training:
3. Exporting to onnx:
You can generate an HEF file for inference on Hailo-8 from your trained ONNX model. In order to do so you need
a working model-zoo environment. Choose the corresponding YAML from our networks configuration directory, i.e.
hailo_model_zoo/cfg/networks/arcface_mobilefacenet.yaml, and run compilation using
the model zoo:
hailomz compile arcface_mobilefacenet --ckpt arcface_mobilefacenet.onnx --calib-path /path/to/calibration/imgs/dir/ --yaml path/to/arcface_mobilefacenet.yaml --end-node-names name1
• The model zoo will take care of adding the input normalization to be part of the model.