This repository was archived by the owner on Dec 8, 2021. It is now read-only.

Commit 10c3e3c committed: added guide and image for yolov5 from Pytorch to ONNX to Openvino
1 parent 3ffe635 commit 10c3e3c

2 files changed: +121 −0 lines changed

guides/yolov5_docu.md (+121)

# YOLOV5

Yolov5 is supported by the Openvino Toolkit. This means it can be converted and executed on platforms running Openvino, such as Intel CPUs and the Intel Neural Compute Stick 2 (NCS2). In this guide, the necessary repositories are listed and the individual steps are documented for future reference.

## Repositories

Official yolov5 repository: https://github.com/ultralytics/yolov5

A yolov5 Openvino demo repository: https://github.com/violet17/yolov5_demo.git

```console
git clone https://github.com/ultralytics/yolov5.git
git clone https://github.com/violet17/yolov5_demo.git
```

## Virtual Environment

It is recommended to work in a Python virtual environment to avoid version conflicts between pip packages.

```console
cd yolov5                        # navigate to the cloned yolov5 ultralytics repo
python3 -m venv venv_yolov5      # this creates an empty virtual environment
source venv_yolov5/bin/activate  # activate the environment
# now update the basic pip packages
pip3 install --upgrade pip setuptools
# install all necessary tools for the conversion to onnx
pip3 install -r requirements.txt
# and finally some packages for the Openvino conversion script
pip3 install networkx defusedxml progress
```

## ONNX Export

The original model was developed in Pytorch. To convert it for Openvino, we first need to export it from Pytorch to ONNX. The following command should generate a yolov5s.onnx model in the models folder:

```console
python3 export.py --weights models/yolov5s.pt --include onnx --simplify
```
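
Before moving on, the export can be sanity-checked with the onnx Python package (present in the environment used here; see the version list at the end of this guide). This is only a minimal sketch that validates the graph and prints the output names:

```python
# Minimal sanity check of the exported model: load it, validate the graph,
# and print the names of the graph outputs.
import onnx

model = onnx.load("models/yolov5s.onnx")
onnx.checker.check_model(model)   # raises an exception if the graph is malformed
print([output.name for output in model.graph.output])
```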

## ONNX to Openvino

Now we have to convert the ONNX yolov5s model into the Openvino IR format.
Please consult https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html on how to install and activate the Openvino Toolkit for Linux.

After the successful installation of Openvino under /opt/intel/openvino_$version$, we can convert the ONNX model to IR. There is a slight catch here: the output convolution layers, which need to be specified for the later steps to work, may be named differently from what is used in this guide. To figure out the correct names, we can use Netron - a powerful neural network visualization tool - at https://netron.app/.
Drag and drop the yolov5s.onnx model into the browser window and scroll down to the three output nodes for the different scales. There you should find nodes similar to the ones shown in the picture below.
<div align="center">
<img src="../_img/yolov5s_outputs_netron_circled.png" width="800">
</div>
Click on each of the last convolution layers before the output nodes and note their names; they can be seen in the red rectangle on the right. These names are passed to the --output flag of the mo.py Openvino conversion script as a comma-separated list.

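As a quick programmatic cross-check of the Netron inspection, the Conv node names can also be listed directly from the ONNX graph. This is only a heuristic sketch: it assumes the nodes are stored in topological order and that the three detection-head convolutions are the last Conv nodes in the graph, so Netron remains the authoritative reference.

```python
# List the Conv node names so they can be compared with what Netron shows.
# Heuristic assumption: the three detection-head convolutions are the last
# Conv nodes in the (topologically ordered) graph.
import onnx

graph = onnx.load("models/yolov5s.onnx").graph
conv_names = [node.name for node in graph.node if node.op_type == "Conv"]
print("candidate output convolutions:", conv_names[-3:])
```
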
```console
python3 /opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo.py \
    --input_model models/yolov5s.onnx \
    --output_dir models/ \
    --input_shape [1,3,640,640] \
    --input images \
    --scale 255 \
    --data_type FP16 \
    --output Conv_196,Conv_250,Conv_304 \
    --model_name yolov5s_FP16 \
    --progress
```

Now the models folder should contain three more files:
* yolov5s_FP16.bin
* yolov5s_FP16.xml
* yolov5s_FP16.mapping

It is possible to drag and drop the XML file into Netron to check whether the outputs were chosen correctly.
Adjust the flags according to your needs. The --scale 255 parameter, however, should be left untouched, since changing it can lead to inaccuracies in the resulting model when it is executed on the NCS2.
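
As an alternative to Netron, the generated IR can also be inspected with the Openvino inference engine Python API from the 2021.4 toolkit. A minimal sketch, assuming the Openvino environment (setupvars.sh) has been sourced so that the bindings are importable:

```python
# Load the converted IR and print its input/output names; the convolutions
# passed to --output above should appear as the outputs.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="models/yolov5s_FP16.xml",
                      weights="models/yolov5s_FP16.bin")
print("inputs: ", list(net.input_info.keys()))
print("outputs:", list(net.outputs.keys()))
```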

## Model execution on Hardware

Now we can try the models in different frameworks and on different devices. Let's start with the simplified ONNX model.

```console
# run detect to try out the model; also accepts images as input; cam for webcam
python3 detect.py --source cam --device cpu --view-img --nosave
```
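
The exported ONNX file can also be exercised directly, without the ultralytics tooling, for example with onnxruntime. This is only a sketch; onnxruntime is assumed to be installed separately (pip3 install onnxruntime) and the check merely confirms that the graph executes and reports the output shapes.

```python
# Push a dummy 640x640 image through the exported ONNX model and print the
# output shapes. Assumes onnxruntime has been installed separately.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("models/yolov5s.onnx")
input_name = session.get_inputs()[0].name             # "images" for the yolov5 export
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)  # NCHW, values in [0, 1]
outputs = session.run(None, {input_name: dummy})
print([output.shape for output in outputs])
```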

We can check whether the model can be executed on the NCS2 with the help of the benchmark_app from the Openvino framework.

```console
# -niter: number of execution iterations, -nireq: number of inference requests
# -d MYRIAD targets the NCS2; change to CPU to run the model on the CPU
python3 /opt/intel/openvino_2021.4.689/deployment_tools/tools/benchmark_tool/benchmark_app.py \
    --path_to_model models/yolov5s_FP16.xml \
    -niter 10 \
    -nireq 1 \
    -d MYRIAD
```

Finally, to look at actual detection results on the NCS2, we can use the yolov5_demo repository cloned earlier.

```console
cd ../yolov5_demo/
```

Download the coco labels from https://github.com/amikelive/coco-labels/blob/master/coco-labels-2014_2017.txt and copy them into the yolov5_demo folder.

```console
python3 yolov5_demo_OV2021.3.py \
    --input cam \
    --model ../yolov5/models/yolov5s_FP16.xml \
    -d MYRIAD \
    --labels coco-labels-2014_2017.txt \
    -t 0.5
```

This script outputs the identified objects in the yolo format.
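
The yolo format here refers to the usual convention of a class index followed by normalized box center coordinates, width, and height. For further post-processing, such boxes can be mapped back to pixel coordinates; a small sketch under that assumption:

```python
# Convert a normalized YOLO box (x_center, y_center, width, height) into pixel
# corner coordinates (x_min, y_min, x_max, y_max) for a given image size.
# Assumes the usual YOLO convention of values normalized to [0, 1].
def yolo_to_pixels(box, img_w, img_h):
    xc, yc, w, h = box
    x_min = int((xc - w / 2) * img_w)
    y_min = int((yc - h / 2) * img_h)
    x_max = int((xc + w / 2) * img_w)
    y_max = int((yc + h / 2) * img_h)
    return x_min, y_min, x_max, y_max

print(yolo_to_pixels((0.5, 0.5, 0.25, 0.5), 640, 640))  # (240, 160, 400, 480)
```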

## Compatibility

The conversion and execution steps described in this guide also work for yolov3.

## Versions

This guide was last updated on 8.11.2021 using the following software versions:
* Openvino: 2021.4.689
* Pytorch: 1.10.0
* ONNX: 1.10.2
* Tensorboard: 2.7.0
* Opencv-python: 4.5.4.58
* Numpy: 1.19.5
