[YOLOv5 6.x Series] Project-wide code annotation guide. YOLOv5 has now moved on to version 6.x, yet most annotated walkthroughs online still cover the 5.x source, so this series was started to carry the open-source spirit forward. For the 5.x source, see other authors' posts; this article focuses on annotating and analysing the 6.x codebase.

The content below is one installment of this column; see the column directory above for more YOLOv5 material.

Best wishes, friend: may you publish your SCI paper soon!


# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Common modules: the building blocks used to assemble the YOLOv5 network.
"""

import json
import math                      # math functions
import platform
import warnings
from collections import OrderedDict, namedtuple
from copy import copy            # copy helpers (shallow vs. deep copy)
from pathlib import Path         # turns str paths into Path objects for easy manipulation

import cv2
import numpy as np               # array operations
import pandas as pd              # DataFrame operations
import requests                  # HTTP client library
import torch                     # PyTorch deep-learning framework
import torch.nn as nn            # modular interfaces designed for neural networks
import yaml
from PIL import Image            # basic image operations
from torch.cuda import amp       # mixed-precision training/inference

from utils.datasets import exif_transpose, letterbox
from utils.general import (LOGGER, check_requirements, check_suffix, check_version, colorstr, increment_path,
                           make_divisible, non_max_suppression, scale_coords, xywh2xyxy, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import copy_attr, time_sync

# ============================================= Core modules =============================================
def autopad(k, p=None):  # kernel, padding
    """Used by Conv and Classify. Pad to 'same': automatically computes the zero-padding
    from the kernel size k. v5 uses only two kinds of convolutions:
    1) downsampling conv3x3 with s=2, p=k//2=1; 2) size-preserving conv1x1 with s=1, p=k//2=0.
    :params k: kernel_size of the convolution
    :return p: the automatically computed padding (zero-padding)
    """
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    # Standard convolution
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        """Standard convolution: conv + BN + act
        :params c1: input channels
        :params c2: output channels
        :params k: kernel_size
        :params s: stride
        :params p: padding, usually None so that autopad computes the required padding
        :params g: groups; =1 is an ordinary convolution, >1 is a grouped (depthwise-style) convolution
        :params act: activation; True -> SiLU()/Swish, False -> no activation,
                     an nn.Module -> use the activation module passed in
        """
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        # TODO: swap the activation here if desired, e.g.:
        # self.act = nn.Identity() / nn.Tanh() / nn.Sigmoid() / nn.ReLU() / nn.LeakyReLU(0.1) / nn.Hardswish()
        self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Used by Model.fuse(): forward with conv and bn fused for faster inference (test/val phase)."""
        return self.act(self.conv(x))


class Focus(nn.Module):
    # Focus wh information into c-space
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        """Idea: periodically sample pixels from the high-resolution image into a lower-resolution
        one, stacking the four neighbouring positions, i.e. folding w/h information into the
        channel dimension. This enlarges each point's receptive field, limits information loss,
        and reduces computation. Pipeline: 4 slices -> concat -> Conv.
        slice:          (b,c1,w,h) -> four slices, each (b,c1,w/2,h/2)
        concat(dim=1):  four slices (b,c1,w/2,h/2) -> (b,4c1,w/2,h/2)
        conv:           (b,4c1,w/2,h/2) -> (b,c2,w/2,h/2)
        :params c1: channels before slicing
        :params c2: final output channels of Focus
        :params k/s/p/g: kernel/stride/padding/groups of the final Conv (g>1 -> grouped convolution)
        :params act: activation; default True -> SiLU()/Swish, False -> no activation
        """
        super().__init__()
        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
        # self.contract = Contract(gain=2)  # the slice can also be implemented via Contract

    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2), roughly a downsampling step
        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
        # return self.conv(self.contract(x))


class Bottleneck(nn.Module):
    # A standard bottleneck module: just 1x1 conv, 3x3 conv and a residual connection
    # Standard bottleneck
    def __init__(self, c1, c2, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, shortcut, groups, expansion
        """Called from BottleneckCSP and from parse_model in yolo.py.
        Standard bottleneck: Conv + Conv + shortcut
        :params c1: input channels of the first conv
        :params c2: output channels of the second conv
        :params shortcut: whether to add the residual connection (default True)
        :params g: groups of the conv; =1 ordinary convolution, >1 grouped convolution
        :params e: expansion ratio; e*c2 is the first conv's output / second conv's input channels
        """
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_, c2, 3, 1, g=g)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
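As a quick sanity check of the shapes described above, here is a minimal sketch (made-up sizes, assuming a standard yolov5 checkout so that models.common is importable):

import torch
from models.common import Conv, Bottleneck

x = torch.randn(1, 64, 80, 80)
print(Conv(64, 128, k=3, s=2)(x).shape)            # torch.Size([1, 128, 40, 40]): downsampling conv, p=k//2=1
print(Bottleneck(64, 64, shortcut=True)(x).shape)  # torch.Size([1, 64, 80, 80]): residual branch keeps the shape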
class BottleneckCSP(nn.Module):
    # Equivalent to the C3 module of yolov5s above. To use it, simply change C3 to BottleneckCSP
    # in yolov5s.yaml; normally there is no reason to, because C3 works better.
    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        """CSP Bottleneck
        :params c1: input channels of the whole BottleneckCSP
        :params c2: output channels of the whole BottleneckCSP
        :params n: number of stacked Bottlenecks
        :params shortcut: whether the Bottlenecks use shortcuts (default True)
        :params g: groups of the 3x3 convs in the Bottlenecks; =1 ordinary, >1 grouped convolution
        :params e: expansion ratio; c2*e = channel count of all intermediate layers
        """
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
        self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
        self.cv4 = Conv(2 * c_, c2, 1, 1)
        self.bn = nn.BatchNorm2d(2 * c_)  # applied to cat(cv2, cv3)
        self.act = nn.SiLU()
        # n stacked Bottlenecks
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))

    def forward(self, x):
        y1 = self.cv3(self.m(self.cv1(x)))
        y2 = self.cv2(x)
        return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))


class C3(nn.Module):
    # CSP Bottleneck with 3 convolutions
    # A simplified BottleneckCSP: apart from the Bottleneck stack there are only 3 convs,
    # which saves parameters -- hence the name C3.
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        """Called from C3TR and from parse_model in yolo.py.
        CSP Bottleneck with 3 convolutions
        :params c1: input channels of the whole module
        :params c2: output channels of the whole module
        :params n: number of stacked Bottlenecks
        :params shortcut: whether the Bottlenecks use shortcuts (default True)
        :params g: groups of the 3x3 convs in the Bottlenecks; =1 ordinary, >1 grouped convolution
        :params e: expansion ratio; c2*e = channel count of all intermediate layers
        """
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(2 * c_, c2, 1)  # act=FReLU(c2)
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
        # experimental CrossConv, see models/experimental.py
        # self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)])

    def forward(self, x):
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))


class SPP(nn.Module):
    # Fuses features pooled at several resolutions to gather more information.
    # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
    def __init__(self, c1, c2, k=(5, 9, 13)):
        """Spatial pyramid pooling layer used in YOLOv3-SPP
        :params c1: input channels of the SPP module
        :params c2: output channels of the SPP module
        :params k: kernel sizes of the three maxpools, default (5, 9, 13)
        """
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)                 # first conv
        self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)  # last conv; +1 because len(k)+1 inputs are concatenated
        self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])

    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))


class Concat(nn.Module):
    # Concatenates tensors along a given dimension; commonly used to merge two feature maps,
    # i.e. the Concat nodes in the YOLOv5s structure diagram above.
    # Concatenate a list of tensors along dimension
    def __init__(self, dimension=1):
        """Called from parse_model in yolo.py.
        :params dimension: the dimension along which to concatenate
        """
        super().__init__()
        self.d = dimension

    def forward(self, x):
        return torch.cat(x, self.d)


class DWConv(Conv):
    """Depthwise convolution
    :params c1: input channels
    :params c2: output channels
    :params k: kernel_size
    :params s: stride
    :params act: activation
    The groups count is math.gcd(c1, c2), which makes the convolution depthwise.
    """
    def __init__(self, c1, c2, k=1, s=1, act=True):  # ch_in, ch_out, kernel, stride
        super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act)


# Reshape the feature map; rarely used
class Contract(nn.Module):
    """Used by parse_model in yolo.py. Reshapes the input: data from the (shrinking) w and h
    dimensions is folded into the (growing) channel dimension.
    Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
    """
    def __init__(self, gain=2):
        super().__init__()
        self.gain = gain

    def forward(self, x):
        b, c, h, w = x.size()  # assert (h / s == 0) and (W / s == 0), 'Indivisible gain'
        s = self.gain
        x = x.view(b, c, h // s, s, w // s, s)  # x(1,64,40,2,40,2)
        x = x.permute(0, 3, 5, 1, 2, 4).contiguous()  # permute reorders the dimensions: x(1,2,2,64,40,40)
        return x.view(b, c * s * s, h // s, w // s)  # view reshapes the tensor: x(1,256,40,40)


class Expand(nn.Module):
    """Used by parse_model in yolo.py; rarely used. The inverse of Contract: data from the
    (shrinking) channel dimension is spread out into the (growing) w and h dimensions.
    Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160)
    """
    def __init__(self, gain=2):
        super().__init__()
        self.gain = gain

    def forward(self, x):
        b, c, h, w = x.size()  # assert C / s ** 2 == 0, 'Indivisible gain'
        s = self.gain
        x = x.view(b, s, s, c // s ** 2, h, w)  # x(1,2,2,16,80,80)
        x = x.permute(0, 3, 4, 1, 5, 2).contiguous()  # x(1,16,80,2,80,2)
        return x.view(b, c // s ** 2, h * s, w * s)  # x(1,16,160,160)
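A minimal sketch (made-up shapes, assuming models.common is importable) confirming that Contract folds w/h into channels exactly as described, and that Expand inverts it:

import torch
from models.common import Contract, Expand

x = torch.randn(1, 64, 80, 80)
c = Contract(gain=2)(x)
print(c.shape)                               # torch.Size([1, 256, 40, 40]): w/h folded into channels
print(torch.equal(Expand(gain=2)(c), x))     # True: Expand exactly undoes Contract's reshape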
# ============================================= Attention modules =============================================
# Transformer
class TransformerLayer(nn.Module):
    """Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance).
    This corresponds to a single Encoder of the original paper: only the two Norm blocks are
    removed, the rest matches the original encoder exactly.
    """
    def __init__(self, c, num_heads):
        super().__init__()
        self.q = nn.Linear(c, c, bias=False)
        self.k = nn.Linear(c, c, bias=False)
        self.v = nn.Linear(c, c, bias=False)
        # inputs: query, key, value
        # outputs: 0 attn_output -- the self-attention output at every token position, same shape as query
        #          1 attn_output_weights -- one attention weight between every pair of tokens
        self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
        self.fc1 = nn.Linear(c, c, bias=False)
        self.fc2 = nn.Linear(c, c, bias=False)

    def forward(self, x):
        # multi-head attention + residual (LayerNorm removed for better performance)
        x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
        # feed-forward network + residual (LayerNorm removed for better performance)
        x = self.fc2(self.fc1(x)) + x
        return x


class TransformerBlock(nn.Module):
    """Vision Transformer https://arxiv.org/abs/2010.11929.
    Corresponds to the stack of Encoders in the original paper, with a different embedding
    scheme and a different treatment of the encoder outputs.
    """
    def __init__(self, c1, c2, num_heads, num_layers):
        super().__init__()
        self.conv = None
        if c1 != c2:
            self.conv = Conv(c1, c2)
        self.linear = nn.Linear(c2, c2)  # learnable position embedding
        self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers)))
        self.c2 = c2  # output channels

    def forward(self, x):
        if self.conv is not None:  # embedding
            x = self.conv(x)
        b, _, w, h = x.shape
        p = x.flatten(2).permute(2, 0, 1)
        return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h)


class C3TR(C3):
    """Adapted from the C3 structure above: the original Bottlenecks are replaced by a TransformerBlock."""
    # C3 module with TransformerBlock()
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)
        self.m = TransformerBlock(c_, c_, 4, n)
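A small shape-check sketch (made-up sizes; assumes models.common is importable): C3TR behaves like C3 from the outside, and TransformerBlock flattens the feature map into tokens and reshapes back, preserving the spatial resolution.

import torch
from models.common import TransformerBlock

x = torch.randn(1, 64, 20, 20)
tb = TransformerBlock(c1=64, c2=64, num_heads=4, num_layers=1)
print(tb(x).shape)  # torch.Size([1, 64, 20, 20]): 400 tokens of dim 64, then reshaped back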
# ============================================= Model-extension modules =============================================
class AutoShape(nn.Module):
    """Used by Model.autoshape() in yolo.py. Wraps the model into a module that bundles
    pre-processing, inference and post-processing (pre-process + inference + NMS).
    AutoShape is never called during training; after training it reshapes incoming images so
    the model can predict on them. Inputs may come from cv2/np/PIL/torch, so the wrapper
    pre-processes them and adjusts their shape (the reshaping code lives in datasets.py).
    It is used at prediction time only: model.eval() is called, so the model can no longer train.
    YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs.
    Includes preprocessing, inference and NMS.
    """
    conf = 0.25  # NMS confidence threshold
    iou = 0.45  # NMS IoU threshold
    agnostic = False  # NMS class-agnostic
    multi_label = False  # NMS multiple labels per box
    classes = None  # (optional list) keep only these classes after NMS
    max_det = 1000  # maximum number of detections per image
    amp = False  # Automatic Mixed Precision (AMP) inference

    def __init__(self, model):
        super().__init__()
        LOGGER.info('Adding AutoShape... ')
        copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=())  # copy attributes
        self.dmb = isinstance(model, DetectMultiBackend)  # DetectMultiBackend() instance
        self.pt = not self.dmb or model.pt  # PyTorch model
        self.model = model.eval()  # switch to eval mode

    def _apply(self, fn):
        # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
        self = super()._apply(fn)
        if self.pt:
            m = self.model.model.model[-1] if self.dmb else self.model.model[-1]  # Detect()
            m.stride = fn(m.stride)
            m.grid = list(map(fn, m.grid))
            if isinstance(m.anchor_grid, list):
                m.anchor_grid = list(map(fn, m.anchor_grid))
        return self

    @torch.no_grad()
    def forward(self, imgs, size=640, augment=False, profile=False):
        # imgs may be read in several ways (officially documented below); size is the inference
        # size, e.g. the 608x608x3 input in the figure at the top.
        # Inference from various sources. For height=640, width=1280, RGB images example inputs are:
        #   file:       imgs = 'data/images/zidane.jpg'  # str or PosixPath
        #   URI:             = 'https://ultralytics.com/images/zidane.jpg'
        #   OpenCV:          = cv2.imread('image.jpg')[:,:,::-1]  # HWC BGR to RGB x(640,1280,3)
        #   PIL:             = Image.open('image.jpg')  # HWC x(640,1280,3)
        #   numpy:           = np.zeros((640,1280,3))  # HWC
        #   torch:           = torch.zeros(16,3,320,640)  # BCHW (scaled to size=640, 0-1 values)
        #   multiple:        = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  # list of images
        t = [time_sync()]
        p = next(self.model.parameters()) if self.pt else torch.zeros(1)  # for device and type
        autocast = self.amp and (p.device.type != 'cpu')  # Automatic Mixed Precision (AMP) inference
        # A tensor input is already pre-processed: run plain inference (NMS happens outside this function)
        if isinstance(imgs, torch.Tensor):  # torch
            with amp.autocast(enabled=autocast):
                return self.model(imgs.to(p.device).type_as(p), augment, profile)  # inference

        # The input is not a tensor, so pre-process it first. Pre-process
        n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs])  # number of images, list of images
        shape0, shape1, files = [], [], []  # image and inference shapes, filenames
        for i, im in enumerate(imgs):
            f = f'image{i}'  # filename
            if isinstance(im, (str, Path)):  # filename or uri
                im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im
                im = np.asarray(exif_transpose(im))
            elif isinstance(im, Image.Image):  # PIL Image
                im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f
            files.append(Path(f).with_suffix('.jpg').name)
            if im.shape[0] < 5:  # image in CHW
                im = im.transpose((1, 2, 0))  # reverse dataloader .transpose(2, 0, 1)
            im = im[..., :3] if im.ndim == 3 else np.tile(im[..., None], 3)  # enforce 3ch input
            s = im.shape[:2]  # HWC
            shape0.append(s)  # image shape
            g = (size / max(s))  # gain
            shape1.append([y * g for y in s])
            imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im)  # update
        shape1 = [make_divisible(x, self.stride) for x in np.stack(shape1, 0).max(0)]  # inference shape
        x = [letterbox(im, new_shape=shape1 if self.pt else size, auto=False)[0] for im in imgs]  # pad
        x = np.stack(x, 0) if n > 1 else x[0][None]  # stack
        x = np.ascontiguousarray(x.transpose((0, 3, 1, 2)))  # BHWC to BCHW
        x = torch.from_numpy(x).to(p.device).type_as(p) / 255  # uint8 to fp16/32
        t.append(time_sync())

        with amp.autocast(enabled=autocast):
            # Inference: forward pass after pre-processing
            y = self.model(x, augment, profile)  # forward
            t.append(time_sync())

            # Post-process: NMS after the forward pass
            y = non_max_suppression(y if self.dmb else y[0], self.conf, iou_thres=self.iou, classes=self.classes,
                                    agnostic=self.agnostic, multi_label=self.multi_label, max_det=self.max_det)  # NMS
            for i in range(n):
                scale_coords(shape1, y[i][:, :4], shape0[i])  # map NMS results back to the original image size
            t.append(time_sync())
            return Detections(imgs, y, files, t, self.names, x.shape)
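To make the pre-processing arithmetic above concrete, here is a hedged sketch with made-up numbers: a 720x1280 source image at size=640 gets gain g = 0.5, and each side is rounded up to a multiple of the model stride (32 here). The local make_divisible mirrors the behaviour of utils.general.make_divisible so the snippet is self-contained.

import math

def make_divisible(x, divisor):  # same rounding as utils.general.make_divisible
    return math.ceil(x / divisor) * divisor

s = (720, 1280)                                  # source HW
g = 640 / max(s)                                 # gain = 0.5
shape1 = [make_divisible(y * g, 32) for y in s]
print(shape1)                                    # [384, 640]: letterbox pads 360 up to 384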
class Detections:
    """Used at the end of AutoShape.forward to post-process inference results.
    Detections class for YOLOv5 inference results.
    """
    def __init__(self, imgs, pred, files, times=(0, 0, 0, 0), names=None, shape=None):
        super().__init__()
        d = pred[0].device  # device
        gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in imgs]  # normalizations
        self.imgs = imgs  # list of images as numpy arrays
        self.pred = pred  # list of tensors pred[0] = (xyxy, conf, cls)
        self.names = names  # class names
        self.files = files  # image filenames
        self.times = times  # profiling times
        self.xyxy = pred  # xyxy pixels
        self.xywh = [xyxy2xywh(x) for x in pred]  # xywh pixels
        self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)]  # xyxy normalized
        self.xywhn = [x / g for x, g in zip(self.xywh, gn)]  # xywh normalized
        self.n = len(self.pred)  # number of images (batch size)
        self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3))  # timestamps (ms)
        self.s = shape  # inference BCHW shape

    def display(self, pprint=False, show=False, save=False, crop=False, render=False, save_dir=Path('')):
        crops = []
        for i, (im, pred) in enumerate(zip(self.imgs, self.pred)):
            s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} '  # string
            if pred.shape[0]:
                for c in pred[:, -1].unique():
                    n = (pred[:, -1] == c).sum()  # detections per class
                    s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, "  # add to string
                if show or save or render or crop:
                    annotator = Annotator(im, example=str(self.names))
                    for *box, conf, cls in reversed(pred):  # xyxy, confidence, class
                        label = f'{self.names[int(cls)]} {conf:.2f}'
                        if crop:
                            file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None
                            crops.append({'box': box, 'conf': conf, 'cls': cls, 'label': label,
                                          'im': save_one_box(box, im, file=file, save=save)})
                        else:  # all others
                            annotator.box_label(box, label, color=colors(cls))
                    im = annotator.im
            else:
                s += '(no detections)'

            im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im  # from np
            if pprint:
                LOGGER.info(s.rstrip(', '))
            if show:
                im.show(self.files[i])  # show
            if save:
                f = self.files[i]
                im.save(save_dir / f)  # save
                if i == self.n - 1:
                    LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}")
            if render:
                self.imgs[i] = np.asarray(im)
        if crop:
            if save:
                LOGGER.info(f'Saved results to {save_dir}\n')
            return crops

    def print(self):
        self.display(pprint=True)  # print results
        LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' %
                    self.t)

    def show(self):
        self.display(show=True)  # show results

    def save(self, save_dir='runs/detect/exp'):
        save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True)  # increment save_dir
        self.display(save=True, save_dir=save_dir)  # save results

    def crop(self, save=True, save_dir='runs/detect/exp'):
        save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) if save else None
        return self.display(crop=True, save=save, save_dir=save_dir)  # crop results

    def render(self):
        self.display(render=True)  # render results
        return self.imgs
    def pandas(self):
        # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
        new = copy(self)  # return copy
        ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name'  # xyxy columns
        cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name'  # xywh columns
        for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]):
            a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)]  # update
            setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])
        return new

    def tolist(self):
        # return a list of Detections objects, i.e. 'for result in results.tolist():'
        r = range(self.n)  # iterable
        x = [Detections([self.imgs[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r]
        # for d in x:
        #    for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
        #        setattr(d, k, getattr(d, k)[0])  # pop out of list
        return x

    def __len__(self):
        return self.n


class Classify(nn.Module):
    # Classification head, i.e. x(b,c1,20,20) to x(b,c2)
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1):  # ch_in, ch_out, kernel, stride, padding, groups
        """A second-stage classification module. What is second-stage classification? Take licence-plate
        recognition: first detect the plate, then, to read the characters on the detected plate, run a
        second classifier on it. Use this module whenever the model's detections need a further
        classification step. It is deliberately simple; for a more complex second stage, rewrite it to
        suit your own task -- this code is not the only way.
        Classification head, i.e. x(b,c1,20,20) to x(b,c2)
        """
        super().__init__()
        self.aap = nn.AdaptiveAvgPool2d(1)  # to x(b,c1,1,1), adaptive average pooling
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g)  # to x(b,c2,1,1)
        self.flat = nn.Flatten()  # flatten

    def forward(self, x):
        # adaptive average pooling first, then concatenation
        z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1)  # cat if list
        # then conv and flatten z
        return self.flat(self.conv(z))  # flatten to x(b,c2)
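A hedged end-to-end usage sketch for the AutoShape + Detections pipeline (assumes network access and that the ultralytics/yolov5 hub repo is reachable):

import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')        # returns an AutoShape-wrapped model
results = model('https://ultralytics.com/images/zidane.jpg')   # str/PIL/np/torch inputs all accepted
results.print()                  # per-image summary plus pre-process/inference/NMS speeds
print(results.xyxy[0])           # tensor of (xmin, ymin, xmax, ymax, conf, cls) rows
print(results.pandas().xyxy[0])  # the same detections as a pandas DataFrame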
# ============================================= New v6 modules =============================================
class C3SPP(C3):
    """Adapted from the C3 structure above: the original Bottlenecks are replaced by an SPP module."""
    # C3 module with SPP()
    def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)
        self.m = SPP(c_, c_, k)


class C3Ghost(C3):
    # C3 module with GhostBottleneck()
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)  # hidden channels
        self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n)))


class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
    def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            y1 = self.m(x)
            y2 = self.m(y1)
            return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))


class GhostConv(nn.Module):
    # Ghost Convolution https://github.com/huawei-noah/ghostnet
    def __init__(self, c1, c2, k=1, s=1, g=1, act=True):  # ch_in, ch_out, kernel, stride, groups
        """Generates c2//2 'intrinsic' features with an ordinary conv, then c2//2 cheap 'ghost'
        features with a 5x5 depthwise conv, and concatenates the two halves.
        :params c1: input channels
        :params c2: output channels
        :params k: kernel of the first conv
        :params s: stride of the first conv
        :params g: groups; =1 ordinary convolution, >1 grouped convolution
        """
        super().__init__()
        c_ = c2 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, k, s, None, g, act)
        self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)

    def forward(self, x):
        y = self.cv1(x)
        return torch.cat([y, self.cv2(y)], 1)


class GhostBottleneck(nn.Module):
    # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
    def __init__(self, c1, c2, k=3, s=1):  # ch_in, ch_out, kernel, stride
        super().__init__()
        c_ = c2 // 2
        self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1),  # pw
                                  DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(),  # dw
                                  GhostConv(c_, c2, 1, 1, act=False))  # pw-linear
        self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
                                      Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()

    def forward(self, x):
        return self.conv(x) + self.shortcut(x)
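SPPF(k=5) is equivalent to SPP(k=(5, 9, 13)) because chaining two (three) 5x5 stride-1 max-pools covers the same window as a single 9x9 (13x13) pool, while reusing intermediate results. A small self-contained check of the pooling pyramid (pure torch, made-up input):

import torch
import torch.nn as nn

x = torch.randn(1, 8, 32, 32)
p5, p9, p13 = nn.MaxPool2d(5, 1, 2), nn.MaxPool2d(9, 1, 4), nn.MaxPool2d(13, 1, 6)
y1 = p5(x)
y2 = p5(y1)
print(torch.allclose(p9(x), y2))       # True: two chained 5x5 pools == one 9x9 pool
print(torch.allclose(p13(x), p5(y2)))  # True: three chained 5x5 pools == one 13x13 pool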
class DetectMultiBackend(nn.Module):
    # YOLOv5 MultiBackend class for python inference on various backends
    def __init__(self, weights='yolov5s.pt', device=None, dnn=False, data=None):
        # Usage:
        #   PyTorch:              weights = *.pt
        #   TorchScript:                    *.torchscript
        #   ONNX Runtime:                   *.onnx
        #   ONNX OpenCV DNN:                *.onnx with --dnn
        #   OpenVINO:                       *.xml
        #   CoreML:                         *.mlmodel
        #   TensorRT:                       *.engine
        #   TensorFlow SavedModel:          *_saved_model
        #   TensorFlow GraphDef:            *.pb
        #   TensorFlow Lite:                *.tflite
        #   TensorFlow Edge TPU:            *_edgetpu.tflite
        from models.experimental import attempt_download, attempt_load  # scoped to avoid circular import

        super().__init__()
        w = str(weights[0] if isinstance(weights, list) else weights)
        pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs = self.model_type(w)  # get backend
        stride, names = 64, [f'class{i}' for i in range(1000)]  # assign defaults
        w = attempt_download(w)  # download if not local
        if data:  # data.yaml path (optional)
            with open(data, errors='ignore') as f:
                names = yaml.safe_load(f)['names']  # class names

        if pt:  # PyTorch
            model = attempt_load(weights if isinstance(weights, list) else w, map_location=device)
            stride = max(int(model.stride.max()), 32)  # model stride
            names = model.module.names if hasattr(model, 'module') else model.names  # get class names
            self.model = model  # explicitly assign for to(), cpu(), cuda(), half()
        elif jit:  # TorchScript
            LOGGER.info(f'Loading {w} for TorchScript inference...')
            extra_files = {'config.txt': ''}  # model metadata
            model = torch.jit.load(w, _extra_files=extra_files)
            if extra_files['config.txt']:
                d = json.loads(extra_files['config.txt'])  # extra_files dict
                stride, names = int(d['stride']), d['names']
        elif dnn:  # ONNX OpenCV DNN
            LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...')
            check_requirements(('opencv-python>=4.5.4',))
            net = cv2.dnn.readNetFromONNX(w)
        elif onnx:  # ONNX Runtime
            LOGGER.info(f'Loading {w} for ONNX Runtime inference...')
            cuda = torch.cuda.is_available()
            check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime'))
            import onnxruntime
            providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
            session = onnxruntime.InferenceSession(w, providers=providers)
        elif xml:  # OpenVINO
            LOGGER.info(f'Loading {w} for OpenVINO inference...')
            check_requirements(('openvino-dev',))  # requires openvino-dev
            import openvino.inference_engine as ie
            core = ie.IECore()
            if not Path(w).is_file():  # if not *.xml
                w = next(Path(w).glob('*.xml'))  # get *.xml file from *_openvino_model dir
            network = core.read_network(model=w, weights=Path(w).with_suffix('.bin'))  # *.xml, *.bin paths
            executable_network = core.load_network(network, device_name='CPU', num_requests=1)
        elif engine:  # TensorRT
            LOGGER.info(f'Loading {w} for TensorRT inference...')
            import tensorrt as trt
            check_version(trt.__version__, '7.0.0', hard=True)  # require tensorrt>=7.0.0
            Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr'))
            logger = trt.Logger(trt.Logger.INFO)
            with open(w, 'rb') as f, trt.Runtime(logger) as runtime:
                model = runtime.deserialize_cuda_engine(f.read())
            bindings = OrderedDict()
            for index in range(model.num_bindings):
                name = model.get_binding_name(index)
                dtype = trt.nptype(model.get_binding_dtype(index))
                shape = tuple(model.get_binding_shape(index))
                data = torch.from_numpy(np.empty(shape, dtype=np.dtype(dtype))).to(device)
                bindings[name] = Binding(name, dtype, shape, data, int(data.data_ptr()))
            binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items())
            context = model.create_execution_context()
            batch_size = bindings['images'].shape[0]
        elif coreml:  # CoreML
            LOGGER.info(f'Loading {w} for CoreML inference...')
            import coremltools as ct
            model = ct.models.MLModel(w)
        else:  # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
            if saved_model:  # SavedModel
                LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...')
                import tensorflow as tf
                keras = False  # assume TF1 saved_model
                model = tf.keras.models.load_model(w) if keras else tf.saved_model.load(w)
            elif pb:  # GraphDef
                LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...')
                import tensorflow as tf

                def wrap_frozen_graph(gd, inputs, outputs):
                    x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), [])  # wrapped
                    ge = x.graph.as_graph_element
                    return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs))

                gd = tf.Graph().as_graph_def()  # graph_def
                gd.ParseFromString(open(w, 'rb').read())
                frozen_func = wrap_frozen_graph(gd, inputs="x:0", outputs="Identity:0")
            elif tflite or edgetpu:  # TFLite or Edge TPU
                try:
                    from tflite_runtime.interpreter import Interpreter, load_delegate
                except ImportError:
                    import tensorflow as tf
                    Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate,
                if edgetpu:  # Edge TPU
                    LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...')
                    delegate = {'Linux': 'libedgetpu.so.1',
                                'Darwin': 'libedgetpu.1.dylib',
                                'Windows': 'edgetpu.dll'}[platform.system()]
                    interpreter = Interpreter(model_path=w, experimental_delegates=[load_delegate(delegate)])
                else:  # Lite
                    LOGGER.info(f'Loading {w} for TensorFlow Lite inference...')
                    interpreter = Interpreter(model_path=w)  # load TFLite model
                interpreter.allocate_tensors()  # allocate
                input_details = interpreter.get_input_details()  # inputs
                output_details = interpreter.get_output_details()  # outputs
            elif tfjs:
                raise Exception('ERROR: YOLOv5 TF.js inference is not supported')
        self.__dict__.update(locals())  # assign all variables to self

    def forward(self, im, augment=False, visualize=False, val=False):
        # YOLOv5 MultiBackend inference
        b, ch, h, w = im.shape  # batch, channel, height, width
        if self.pt or self.jit:  # PyTorch
            y = self.model(im) if self.jit else self.model(im, augment=augment, visualize=visualize)
            return y if val else y[0]
        elif self.dnn:  # ONNX OpenCV DNN
            im = im.cpu().numpy()  # torch to numpy
            self.net.setInput(im)
            y = self.net.forward()
        elif self.onnx:  # ONNX Runtime
            im = im.cpu().numpy()  # torch to numpy
            y = self.session.run([self.session.get_outputs()[0].name], {self.session.get_inputs()[0].name: im})[0]
        elif self.xml:  # OpenVINO
            im = im.cpu().numpy()  # FP32
            desc = self.ie.TensorDesc(precision='FP32', dims=im.shape, layout='NCHW')  # Tensor Description
            request = self.executable_network.requests[0]  # inference request
            request.set_blob(blob_name='images', blob=self.ie.Blob(desc, im))  # name=next(iter(request.input_blobs))
            request.infer()
            y = request.output_blobs['output'].buffer  # name=next(iter(request.output_blobs))
        elif self.engine:  # TensorRT
            assert im.shape == self.bindings['images'].shape, (im.shape, self.bindings['images'].shape)
            self.binding_addrs['images'] = int(im.data_ptr())
            self.context.execute_v2(list(self.binding_addrs.values()))
            y = self.bindings['output'].data
        elif self.coreml:  # CoreML
            im = im.permute(0, 2, 3, 1).cpu().numpy()  # torch BCHW to numpy BHWC shape(1,320,192,3)
            im = Image.fromarray((im[0] * 255).astype('uint8'))
            # im = im.resize((192, 320), Image.ANTIALIAS)
            y = self.model.predict({'image': im})  # coordinates are xywh normalized
            if 'confidence' in y:
                box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]])  # xyxy pixels
                conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float)
                y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1)
            else:
                k = 'var_' + str(sorted(int(k.replace('var_', '')) for k in y)[-1])  # output key
                y = y[k]  # output
        else:  # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
            im = im.permute(0, 2, 3, 1).cpu().numpy()  # torch BCHW to numpy BHWC shape(1,320,192,3)
            if self.saved_model:  # SavedModel
                y = (self.model(im, training=False) if self.keras else self.model(im)[0]).numpy()
            elif self.pb:  # GraphDef
                y = self.frozen_func(x=self.tf.constant(im)).numpy()
            else:  # Lite or Edge TPU
                input, output = self.input_details[0], self.output_details[0]
                int8 = input['dtype'] == np.uint8  # is TFLite quantized uint8 model
                if int8:
                    scale, zero_point = input['quantization']
                    im = (im / scale + zero_point).astype(np.uint8)  # de-scale
                self.interpreter.set_tensor(input['index'], im)
                self.interpreter.invoke()
                y = self.interpreter.get_tensor(output['index'])
                if int8:
                    scale, zero_point = output['quantization']
                    y = (y.astype(np.float32) - zero_point) * scale  # re-scale
            y[..., :4] *= [w, h, w, h]  # xywh normalized to pixels
        y = torch.tensor(y) if isinstance(y, np.ndarray) else y
        return (y, []) if val else y

    def warmup(self, imgsz=(1, 3, 640, 640), half=False):
        # Warmup model by running inference once
        if self.pt or self.jit or self.onnx or self.engine:  # warmup types
            if isinstance(self.device, torch.device) and self.device.type != 'cpu':  # only warmup GPU models
                im = torch.zeros(*imgsz).to(self.device).type(torch.half if half else torch.float)  # input image
                self.forward(im)  # warmup

    @staticmethod
    def model_type(p='path/to/model.pt'):
        # Return model type from model path, i.e. path='path/to/model.onnx' -> type=onnx
        from export import export_formats
        suffixes = list(export_formats().Suffix) + ['.xml']  # export suffixes
        check_suffix(p, suffixes)  # checks
        p = Path(p).name  # eliminate trailing separators
        pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, xml2 = (s in p for s in suffixes)
        xml |= xml2  # *_openvino_model or *.xml
        tflite &= not edgetpu  # *.tflite
        return pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs
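A hedged usage sketch for DetectMultiBackend (assumes a yolov5s.pt checkpoint is present locally; the output-shape comment assumes a 640x640 COCO model):

import torch

model = DetectMultiBackend('yolov5s.pt', device=torch.device('cpu'))
model.warmup(imgsz=(1, 3, 640, 640))  # no-op on CPU; runs one dummy forward on GPU
im = torch.zeros(1, 3, 640, 640)      # BCHW float input in 0-1
y = model(im)                         # raw predictions, e.g. shape (1, 25200, 85) for a 640x640 COCO model
print(model.stride, model.names[:3])  # model stride and the first few class names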
