
Commit 2499475

Author: Geoffroy (committed)
Update README, update catching errors in export_onnx.py, added yolop.onnx
1 parent e3e9e68 commit 2499475

File tree

5 files changed: +99 −8 lines


.gitignore

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+.idea

README.md

Lines changed: 77 additions & 6 deletions
@@ -1,12 +1,83 @@
 # YOLOP-opencv-dnn
-YOLOP, a panoramic driving perception network deployed using OpenCV, can handle traffic target detection, drivable area segmentation, and lane line detection, three visual perception tasks, simultaneously; it contains both C++ and Python versions of the program.
+This repository contains an OpenCV version of YOLOP, a panoptic driving perception network that simultaneously handles traffic object detection, drivable area segmentation, and lane line detection.

-The onnx file is downloaded from Baidu Cloud Drive, link: https://pan.baidu.com/s/1A_9cldUHeY9GUle_HO4Crg Extraction code: mf1x
+An onnx file created from the provided YOLOP weights is included in the repository.

-The main program file for the C++ version is main.cpp, and the main program file for the Python version is main.py. After downloading the onnx file to the directory containing the main program file, you can run the program. The images folder contains several test images from the bdd100k autonomous driving dataset.
+The repository contains a C++ version (main.cpp), a Python version (main.py), an onnx file created from the provided YOLOP weights, and an images folder with several test images from the bdd100k autonomous driving dataset.

-This program is an opencv inference deployment program based on the recently released project https://github.com/hustvl/YOLOP by the vision team of Huazhong University of Science and Technology. It can be run using only the opencv library, completely removing the dependency on any deep learning framework. If the program fails to run, your installed opencv version is probably too old; upgrading opencv should fix it.
+This program is an OpenCV inference deployment based on the recently released [project YOLOP](https://github.com/hustvl/YOLOP) by the vision team of Huazhong University of Science and Technology.
+It can be run using only the opencv library, thus completely removing the dependency on any deep learning framework.
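As a minimal illustration of what such an OpenCV-only pipeline needs before inference, the sketch below builds a normalized NCHW input blob from an image in plain numpy. The 640x640 input size and the [0, 1] scaling are assumptions based on common YOLO-style deployments, not a quotation of main.py:

```python
import numpy as np

def to_blob(img, size=640):
    # HWC uint8 BGR image -> (1, 3, size, size) float32 blob scaled to [0, 1].
    # Nearest-neighbour resize in plain numpy; cv2.dnn.blobFromImage
    # performs the resize/scale/transpose in a single call.
    h, w = img.shape[:2]
    ys = np.arange(size) * h // size   # source row for each output row
    xs = np.arange(size) * w // size   # source column for each output column
    resized = img[ys][:, xs]
    blob = resized.astype(np.float32) / 255.0
    return blob.transpose(2, 0, 1)[None]   # HWC -> NCHW, add batch axis

img = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
blob = to_blob(img)
print(blob.shape)  # (1, 3, 640, 640)
```

In the actual programs, the blob is then passed to the network loaded with cv2.dnn.readNet.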

-In addition, there is an export_onnx.py file in this set, which is the program that generates the onnx file. If you want to know how to generate .onnx files, you need to copy export_onnx.py to the home directory of https://github.com/hustvl/YOLOP and modify the code in lib/models/common.py, then run export_onnx.py to generate the onnx file. See my csdn blog post https://blog.csdn.net/nihate/article/details/112731327 for what code to change in lib/models/common.py.
+This program has been tested with opencv 4.5.3. It does not work with opencv 4.2.0 and below.

+## Export your own onnx file
+This repository includes export_onnx.py, the program that generates the onnx file. To generate the .onnx file yourself, copy export_onnx.py to the home directory of [YOLOP](https://github.com/hustvl/YOLOP).
+You will also need to modify the code in YOLOP/lib/models/common.py as follows:
~~~python
class Contract(nn.Module):
    # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
    def __init__(self, gain=2):
        super().__init__()
        self.gain = gain

    def forward(self, x):
        N, C, H, W = x.size()  # assert (H % s == 0) and (W % s == 0), 'Indivisible gain'
        s = self.gain
        x = x.view(N, C, H // s, s, W // s, s)  # x(1,64,40,2,40,2)
        x = x.permute(0, 3, 5, 1, 2, 4).contiguous()  # x(1,2,2,64,40,40)
        return x.view(N, C * s * s, H // s, W // s)  # x(1,256,40,40)


class Focus(nn.Module):
    # Focus wh information into c-space
    # slice concat conv
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super(Focus, self).__init__()
        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
        self.contract = Contract(gain=2)

    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
        # return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
        return self.conv(self.contract(x))
~~~
+We add a Contract class and modify the content of the Focus class.
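To see concretely what Contract does, here is a small numpy sketch of the same space-to-depth reshape, on a toy 1x2x4x4 tensor rather than the real 1x64x80x80 feature map:

```python
import numpy as np

def contract(x, s=2):
    # Same reshape/permute as Contract.forward: (N, C, H, W) -> (N, C*s*s, H//s, W//s)
    N, C, H, W = x.shape
    x = x.reshape(N, C, H // s, s, W // s, s)   # split H and W into blocks of s
    x = x.transpose(0, 3, 5, 1, 2, 4)           # move the block offsets ahead of C
    return x.reshape(N, C * s * s, H // s, W // s)

x = np.arange(32, dtype=np.float32).reshape(1, 2, 4, 4)
y = contract(x)
print(y.shape)                                     # (1, 8, 2, 2)
print(np.array_equal(y[0, 0], x[0, 0, ::2, ::2]))  # True: channel 0 is the even-row/even-column slice
```

The commented-out slice-and-concat expression in Focus produces the same set of values, though in a different channel order.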
+We also need to modify the forward method of the Detect class as follows:
~~~python
def forward(self, x):
    if not torch.onnx.is_in_onnx_export():
        z = []  # inference output
        for i in range(self.nl):
            x[i] = self.m[i](x[i])  # conv
            # print(str(i) + str(x[i].shape))
            bs, _, ny, nx = x[i].shape  # x(bs,255,w,w) to x(bs,3,w,w,85)
            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
            # print(str(i) + str(x[i].shape))

            if not self.training:  # inference
                if self.grid[i].shape[2:4] != x[i].shape[2:4]:
                    self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
                y = x[i].sigmoid()
                # print(y.shape)  # [1, 3, w, h, 85]
                # print(self.grid[i].shape)  # [1, 3, w, h, 2]
                y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i]  # xy
                y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
                # print(y.view(bs, -1, self.no).shape)  # [1, 3*w*h, 85]
                z.append(y.view(bs, -1, self.no))
        return x if self.training else (torch.cat(z, 1), x)

    else:
        for i in range(self.nl):
            x[i] = self.m[i](x[i])  # conv
            # print(str(i) + str(x[i].shape))
            bs, _, ny, nx = x[i].shape  # x(bs,255,w,w) to x(bs,3,w,w,85)
            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
            x[i] = torch.sigmoid(x[i])
            x[i] = x[i].view(-1, self.no)
        return torch.cat(x, dim=0)
~~~
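In the ONNX-export branch above, the grid-offset and anchor decoding is skipped, so it has to be reproduced outside the network. The numpy sketch below mirrors the xy/wh math from the non-export branch; the grid layout, anchor values, and stride here are illustrative assumptions, not the repository's exact post-processing:

```python
import numpy as np

def make_grid(nx, ny):
    # One (x, y) cell offset per feature-map cell, flattened row by row (assumed layout).
    xv, yv = np.meshgrid(np.arange(nx), np.arange(ny))
    return np.stack((xv, yv), axis=2).reshape(-1, 2).astype(np.float32)

def decode(pred, grid, anchor, stride):
    # pred: (N, no) rows of already-sigmoided outputs, as the export branch returns.
    out = pred.copy()
    out[:, 0:2] = (pred[:, 0:2] * 2.0 - 0.5 + grid) * stride  # box centers in pixels
    out[:, 2:4] = (pred[:, 2:4] * 2.0) ** 2 * anchor          # box sizes in pixels
    return out

grid = make_grid(2, 2)                          # toy 2x2 feature map
pred = np.full((4, 85), 0.5, dtype=np.float32)  # sigmoid of all-zero logits
out = decode(pred, grid, anchor=np.array([10.0, 13.0], dtype=np.float32), stride=8.0)
print(out[0, :4])  # [ 4.  4. 10. 13.]
```

Cell 0 sits at grid offset (0, 0), so its center decodes to (0.5 * 2 − 0.5 + 0) * 8 = 4 pixels; the next cell along x shifts by one stride.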
+After these steps, you can run export_onnx.py to generate the onnx file.
+These steps are taken from the following Chinese csdn blog post: https://blog.csdn.net/nihate/article/details/112731327

-Translated with www.DeepL.com/Translator (free version)

README_CH.md

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+# YOLOP-opencv-dnn
+Deploying the panoptic driving perception network YOLOP with OpenCV. It handles three visual perception tasks simultaneously: traffic object detection, drivable area segmentation, and lane line detection, and comes with both C++ and Python implementations.
+
+Download the onnx file from Baidu Cloud Drive, link: https://pan.baidu.com/s/1A_9cldUHeY9GUle_HO4Crg
+Extraction code: mf1x
+
+The main program file for the C++ version is main.cpp; for the Python version it is main.py. After downloading the onnx file into the directory containing the main program file, you can run the program. The images folder contains several test images from the bdd100k autonomous driving dataset.
+
+This program is an opencv inference deployment built on the recently released project https://github.com/hustvl/YOLOP by the vision team of Huazhong University of Science and Technology. It only needs the opencv library to run, completely removing the dependency on any deep learning framework. If the program fails with errors, your installed opencv version is probably too old; upgrading opencv should make it run normally.
+
+In addition, this repository contains export_onnx.py, the program that generates the onnx file. Note that export_onnx.py cannot be run inside this repository's directory. If you want to generate the .onnx file, copy export_onnx.py into the home directory of https://github.com/hustvl/YOLOP and modify the code in lib/models/common.py; running export_onnx.py will then generate the onnx file. For which code to change in lib/models/common.py, see my csdn blog post:
+https://blog.csdn.net/nihate/article/details/112731327

export_onnx.py

Lines changed: 3 additions & 2 deletions
@@ -132,5 +132,6 @@ def forward(self, x):
 try:
     dnnnet = cv2.dnn.readNet(output_onnx)
     print('read success')
-except:
-    print('read failed')
+except cv2.error as err:
+    print('Your OpenCV version: {} may be incompatible, please consider upgrading'.format(cv2.__version__))
+    print('Read failed:', err)
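Since the README states that the exported model loads with opencv 4.5.3 but not with 4.2.0, a caller could also fail fast on the version string before attempting readNet. A small sketch of that idea, where the (4, 5, 0) threshold is an assumption drawn from the versions mentioned above and `version` would be `cv2.__version__` in practice:

```python
def version_tuple(v):
    # '4.5.3' -> (4, 5, 3); keeps only the first three dot-separated components.
    return tuple(int(p) for p in v.split('.')[:3])

MIN_OPENCV = (4, 5, 0)  # assumed threshold based on the tested versions

def check(version):
    ok = version_tuple(version) >= MIN_OPENCV
    if not ok:
        print('OpenCV {} is likely too old for yolop.onnx, consider upgrading'.format(version))
    return ok

print(check('4.5.3'))  # True
print(check('4.2.0'))  # prints a warning, then False
```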

yolop.onnx

30.3 MB (binary file not shown)
