
YOLOv6 Using the Vitis AI Library

Last time we looked into the different API levels provided by Vitis AI. VITIS AI API_1 refers to the interfaces provided in the Vitis AI Library for using the Model Zoo models directly, which speeds up the process of building ML applications.

In this post we will look into the YOLOv6 model that was added in Vitis AI 3.0, along with basic code to test it using the Vitis AI Library. Our target will be the KV260, but the same steps also work for the ZCU102 and ZCU104.

Preparing Target Board


We will be using pre-built images which come with Vitis AI and the Vitis AI Library pre-installed. To download the image, go to this link: https://docs.xilinx.com/r/en-US/ug1354-xilinx-ai-sdk/Step-1-Installing-the-Board-Image. In Step 1 we can find the links for the respective boards along with instructions to set up the device.

If we have custom-developed hardware, then we will also have to follow Step 2 and Step 3 from the above link.
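
As an illustration only (the linked Xilinx instructions cover this step in detail, including a GUI flashing tool), on a Linux host the downloaded board image can be written to the SD card with dd. The image filename and device name below are placeholders; replace them with your actual image file and SD card device:

sudo dd if=<board-image>.img of=/dev/sdX bs=4M status=progress conv=fsync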

Getting the Model


Next, let's go to the AI Model Zoo and download a pre-compiled model for the KV260. The model we are going to download is inside pt_yolov6m_coco_640_640_82.4G_3.0 in the model zoo of the Vitis-AI repo. Xilinx follows a naming convention for the model zoo models: in our case, pt_yolov6m_coco_640_640_82.4G_3.0 is the YOLOv6m model trained with PyTorch on the COCO dataset, the input size for the network is 640x640, the computational cost per image is 82.4 GFLOPs, and the Vitis AI version for the model is 3.0.

Model Link: https://github.com/Xilinx/Vitis-AI/blob/master/model_zoo/model-list/pt_yolov6m_coco_640_640_82.4G_3.0/model.yaml

Inside the model.yaml file we will find links to the model for the respective boards.

Our target is the KV260, so we will choose the download link listed under board: zcu102 & zcu104 & kv260. I have also referenced the download link here:
https://www.xilinx.com/bin/public/openDownload?filename=yolov6m_pt-zcu102_zcu104_kv260-r3.0.0.tar.gz
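
For reference, one way to fetch and unpack the archive on the host machine, using the link above (the local filename is our own choice), is:

wget -O yolov6m_pt-zcu102_zcu104_kv260-r3.0.0.tar.gz "https://www.xilinx.com/bin/public/openDownload?filename=yolov6m_pt-zcu102_zcu104_kv260-r3.0.0.tar.gz"

tar -xzvf yolov6m_pt-zcu102_zcu104_kv260-r3.0.0.tar.gz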

Now that we have downloaded the model and extracted it, we will see two folders:
1. yolov6m_pt_acc
2. yolov6m_pt

We will use the model yolov6m_pt, which contains the md5sum, meta.json, prototxt and xmodel files.

We send the prototxt and xmodel files to the board using the scp command.
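
For example, assuming the board is reachable at 192.168.1.10 (a placeholder address, replace it with your board's IP) and we are inside the extracted yolov6m_pt folder on the host, the transfer could look like:

scp yolov6m_pt.xmodel yolov6m_pt.prototxt root@192.168.1.10:/home/root/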

Running the Demo
Next, let's go to the board terminal. In the terminal, using the ls command we can see two folders: Vitis-AI and dpu_sw_optimize.

The Vitis-AI folder contains all the Vitis AI Library examples. From inside the Vitis-AI folder, change the directory using the command:

cd examples/vai_library/samples/yolov6

Before running the YOLOv6 example we will have to compile the examples. The example comes with a build script, build.sh, which we will execute as:

sh build.sh

Before we run the demo we have to copy the .xmodel and .prototxt files into the same folder where we have our demo code.

From the yolov6 folder execute the commands:

cp ../../../../../yolov6m_pt.xmodel ./

cp ../../../../../yolov6m_pt.prototxt ./

Note: We assume that you have used scp to transfer the model files to the /home/root directory and are executing the cp commands from /home/root/Vitis-AI/examples/vai_library/samples/yolov6.

If you have transferred the model files to a different location, please modify the commands accordingly.

Then we will run the provided test_jpeg_yolov6 to test the model on an image. Run the command:

./test_jpeg_yolov6 yolov6m_pt.xmodel 000000000285.jpg

We will get a log showing the class label, coordinates and confidence level for each detection.

An output file is saved as 0_000000000285_result.jpg.

If we open the result file we can see a bounding box around the detected object. In our case it is a bear.

Code Snippets
Next, let's look into the sample code (test_jpeg_yolov6) provided:

#include <glog/logging.h>
#include <iostream>
#include <memory>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <vitis/ai/yolov6.hpp>
#include <vitis/ai/demo.hpp>

#include "./process_result.hpp"

int main(int argc, char* argv[]) {
  std::string model = argv[1];
  return vitis::ai::main_for_jpeg_demo(
      argc, argv,
      [model] { return vitis::ai::YOLOv6::create(model); },
      process_result, 2);
}

Our main function for running the image test, main_for_jpeg_demo, is located inside the demo.hpp file. However, this file is only available on the target board after the installation of Vitis AI. You can find the demo.hpp file at the following path:

/usr/include/vitis/ai/demo.hpp

Implementation of the main_for_jpeg_demo function:

// Entrance of jpeg demo
template <typename FactoryMethod, typename ProcessResult>
int main_for_jpeg_demo(int argc, char* argv[],
                       const FactoryMethod& factory_method,
                       const ProcessResult& process_result, int start_pos = 1) {
  if (argc <= 1) {
    usage_jpeg(argv[0]);
    exit(1);
  }
  // Create the model via the factory passed in from main().
  auto model = factory_method();
  if (ENV_PARAM(SAMPLES_ENABLE_BATCH)) {
    // Batch mode: collect all image arguments and fill a batch of inputs.
    std::vector<std::string> image_files;
    for (int i = start_pos; i < argc; ++i) {
      image_files.push_back(std::string(argv[i]));
    }
    if (image_files.empty()) {
      std::cerr << "no input file" << std::endl;
      exit(1);
    }
    auto batch = model->get_input_batch();
    if (ENV_PARAM(SAMPLES_BATCH_NUM)) {
      unsigned int batch_set = ENV_PARAM(SAMPLES_BATCH_NUM);
      assert(batch_set <= batch);
      batch = batch_set;
    }
    std::vector<std::string> batch_files(batch);
    std::vector<cv::Mat> images(batch);
    for (auto index = 0u; index < batch; ++index) {
      const auto& file = image_files[index % image_files.size()];
      batch_files[index] = file;
      images[index] = cv::imread(file);
      CHECK(!images[index].empty()) << "cannot read image from " << file;
    }
    // Run inference on the whole batch and write one result image per input.
    auto results = model->run(images);
    assert(results.size() == batch);
    for (auto i = 0u; i < results.size(); i++) {
      LOG(INFO) << "batch: " << i << " image: " << batch_files[i];
      auto image = process_result(images[i], results[i], true);
      auto out_file = std::to_string(i) + "_" +
                      batch_files[i].substr(0, batch_files[i].size() - 4) +
                      "_result.jpg";
      cv::imwrite(out_file, image);
      LOG_IF(INFO, ENV_PARAM(DEBUG_DEMO))
          << "result image write to " << out_file;
      std::cout << std::endl;
    }
  } else {
    // Single-image mode: process each image argument one at a time.
    for (int i = start_pos; i < argc; ++i) {
      auto image_file_name = std::string{argv[i]};
      auto image = cv::imread(image_file_name);
      if (image.empty()) {
        LOG(FATAL) << "[UNILOG][FATAL][VAILIB_DEMO_IMAGE_LOAD_ERROR][Failed to "
                      "load image!]cannot load "
                   << image_file_name << std::endl;
        abort();
      }
      auto result = model->run(image);
      image = process_result(image, result, true);
      auto out_file =
          image_file_name.substr(0, image_file_name.size() - 4) + "_result.jpg";
      cv::imwrite(out_file, image);
      LOG_IF(INFO, ENV_PARAM(DEBUG_DEMO))
          << "result image write to " << out_file;
    }
  }
  LOG_IF(INFO, ENV_PARAM(DEBUG_DEMO)) << "BYEBYE";
  return 0;
}
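
In the code above, ENV_PARAM(SAMPLES_ENABLE_BATCH) and ENV_PARAM(SAMPLES_BATCH_NUM) are read through the Vitis AI Library's environment parameter mechanism. Assuming they map to environment variables of the same names (which is how these samples are usually configured, though not stated here), the batch path could be exercised with something like:

env SAMPLES_ENABLE_BATCH=1 ./test_jpeg_yolov6 yolov6m_pt.xmodel 000000000285.jpg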

process_result.hpp processes the result and draws the bounding boxes on the image.

#pragma once
#include <iomanip>
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>

static cv::Scalar getColor(int label) {
  return cv::Scalar(label * 2, 255 - label * 2, label + 50);
}

static cv::Mat process_result(cv::Mat& image,
                              const vitis::ai::YOLOv6Result& result,
                              bool is_jpeg) {
  for (const auto& result : result.bboxes) {
    int label = result.label;
    auto& box = result.box;
    LOG_IF(INFO, is_jpeg) << "RESULT: " << label << "\t" << std::fixed
                          << std::setprecision(2) << box[0] << "\t" << box[1]
                          << "\t" << box[2] << "\t" << box[3] << "\t"
                          << std::setprecision(6) << result.score << "\n";
    cv::rectangle(image, cv::Point(box[0], box[1]), cv::Point(box[2], box[3]),
                  getColor(label), 1, 1, 0);
  }
  return image;
}
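
Putting these pieces together, here is a rough standalone sketch (not part of the Vitis AI sample code) showing how the same YOLOv6 API could be used directly in our own application. It relies only on the calls already seen above (vitis::ai::YOLOv6::create, run, and the bboxes fields with label, box and score), so treat it as an illustrative outline rather than a verified implementation:

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <vitis/ai/yolov6.hpp>

int main(int argc, char* argv[]) {
  if (argc < 3) {
    std::cerr << "usage: " << argv[0] << " <model> <image>" << std::endl;
    return 1;
  }
  // Create the YOLOv6 runner from the model name/path, as in the sample.
  auto model = vitis::ai::YOLOv6::create(argv[1]);
  auto image = cv::imread(argv[2]);
  if (image.empty()) {
    std::cerr << "cannot read image " << argv[2] << std::endl;
    return 1;
  }
  // Run inference; the result exposes bboxes with label, box and score,
  // the same fields used in process_result.hpp above.
  auto result = model->run(image);
  for (const auto& bbox : result.bboxes) {
    std::cout << "label=" << bbox.label << " score=" << bbox.score
              << " box=[" << bbox.box[0] << ", " << bbox.box[1] << ", "
              << bbox.box[2] << ", " << bbox.box[3] << "]" << std::endl;
    cv::rectangle(image, cv::Point(bbox.box[0], bbox.box[1]),
                  cv::Point(bbox.box[2], bbox.box[3]),
                  cv::Scalar(0, 255, 0), 2);
  }
  cv::imwrite("standalone_result.jpg", image);
  return 0;
}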

Using the Vitis AI Library we can rapidly prototype ML applications for our use case. If our required models are already in the Model Zoo or are supported by the Vitis AI Library, the Vitis AI Library APIs can really speed up application development.

Compiled by Abhidan ([email protected])

Date: Feb 22, 2023
