# YOLOP-opencv-dnn

This repository contains an OpenCV version of YOLOP, a panoptic driving perception network that simultaneously handles three visual perception tasks: traffic target detection, drivable area segmentation, and lane line detection.

The repository includes an onnx file created from the weights provided by YOLOP.

It contains a C++ version (main.cpp), a Python version (main.py), the onnx file, and an images folder with several test images from the bdd100k autonomous driving dataset.

This program is an OpenCV inference deployment based on the recently released [YOLOP project](https://github.com/hustvl/YOLOP) by the vision team of Huazhong University of Science and Technology.
It can be run using only the OpenCV library, completely removing the dependency on any deep learning framework.

This program has been tested with OpenCV 4.5.3. It does not work with OpenCV 4.2.0 and earlier.

## Export your own onnx file
This repository also includes export_onnx.py, the program that generates the onnx file. To generate your own .onnx file, copy export_onnx.py to the home directory of [YOLOP](https://github.com/hustvl/YOLOP).
You will also need to modify the code in YOLOP/lib/models/common.py as follows:
~~~python
class Contract(nn.Module):
    # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
    def __init__(self, gain=2):
        super().__init__()
        self.gain = gain

    def forward(self, x):
        N, C, H, W = x.size()  # assert H % s == 0 and W % s == 0, 'Indivisible gain'
        s = self.gain
        x = x.view(N, C, H // s, s, W // s, s)  # x(1,64,40,2,40,2)
        x = x.permute(0, 3, 5, 1, 2, 4).contiguous()  # x(1,2,2,64,40,40)
        return x.view(N, C * s * s, H // s, W // s)  # x(1,256,40,40)


class Focus(nn.Module):
    # Focus wh information into c-space
    # slice concat conv
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super(Focus, self).__init__()
        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
        self.contract = Contract(gain=2)

    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
        # return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
        return self.conv(self.contract(x))
~~~
We are adding a Contract class and modifying the body of the Focus class.
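To see what the new Contract module computes, here is a small numpy sketch (an illustration, not part of the repository) of the same view/permute/view sequence: it moves each 2x2 spatial block into the channel dimension, so every output channel is a strided spatial subsample of an input channel.

```python
import numpy as np

def contract(x, gain=2):
    # numpy equivalent of Contract.forward: view -> permute -> view
    n, c, h, w = x.shape
    s = gain
    x = x.reshape(n, c, h // s, s, w // s, s)       # split H and W into (H//s, s)
    x = x.transpose(0, 3, 5, 1, 2, 4)               # bring the two "s" axes forward
    return x.reshape(n, c * s * s, h // s, w // s)  # fold them into the channels

x = np.arange(2 * 3 * 8 * 8, dtype=np.float32).reshape(2, 3, 8, 8)
y = contract(x)
print(y.shape)  # (2, 12, 4, 4): 4x the channels, half the spatial resolution
# each output channel is a strided subsample of an input channel, e.g.:
assert np.array_equal(y[:, 0], x[:, 0, ::2, ::2])
assert np.array_equal(y[:, 3], x[:, 0, ::2, 1::2])
```

This single reshape-based operation exports cleanly to onnx, which is why it replaces the strided-slice concatenation in the original Focus forward.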
We also need to modify the forward method of the Detect class as follows:
~~~python
    def forward(self, x):
        if not torch.onnx.is_in_onnx_export():
            z = []  # inference output
            for i in range(self.nl):
                x[i] = self.m[i](x[i])  # conv
                bs, _, ny, nx = x[i].shape  # x(bs,255,w,w) to x(bs,3,w,w,85)
                x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

                if not self.training:  # inference
                    if self.grid[i].shape[2:4] != x[i].shape[2:4]:
                        self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
                    y = x[i].sigmoid()  # y(1,3,w,h,85)
                    y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i]  # xy
                    y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
                    z.append(y.view(bs, -1, self.no))  # z(1,3*w*h,85)
            return x if self.training else (torch.cat(z, 1), x)

        else:
            for i in range(self.nl):
                x[i] = self.m[i](x[i])  # conv
                bs, _, ny, nx = x[i].shape  # x(bs,255,w,w) to x(bs,3,w,w,85)
                x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
                x[i] = torch.sigmoid(x[i])
                x[i] = x[i].view(-1, self.no)
            return torch.cat(x, dim=0)
~~~
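Note that the onnx-export branch above returns the raw sigmoid outputs (one row per anchor cell, concatenated across detection scales), so box decoding must happen after inference. The following numpy sketch mirrors the decoding done by the non-export branch; the grid cell, anchor, and stride values are made-up illustrations, not values from this repository.

```python
import numpy as np

def decode_boxes(y, grid_xy, anchor_wh, stride):
    # mirrors the non-export branch: y holds sigmoid outputs (cx, cy, w, h, obj, ...)
    out = y.copy()
    out[..., 0:2] = (y[..., 0:2] * 2.0 - 0.5 + grid_xy) * stride  # xy in pixels
    out[..., 2:4] = (y[..., 2:4] * 2.0) ** 2 * anchor_wh          # wh in pixels
    return out

# one prediction at grid cell (3, 4), stride 8, anchor (10, 13) -- illustrative values
y = np.array([0.5, 0.5, 0.5, 0.5, 0.9])
boxes = decode_boxes(y, grid_xy=np.array([3.0, 4.0]),
                     anchor_wh=np.array([10.0, 13.0]), stride=8.0)
print(boxes[:4])  # center (28, 36), size 10 x 13 in input-image pixels
```

Keeping this arithmetic outside the exported graph is what lets the onnx model return one flat (rows, no) tensor that OpenCV's dnn module can consume directly.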
After these steps, you can run export_onnx.py to generate the onnx file.
These steps are taken from the following Chinese csdn blog post: https://blog.csdn.net/nihate/article/details/112731327