Chapter 1: YOLOv8 command-line parameters explained, and an improved way to run it (in the YOLOv5 style)
This chapter covers the meaning of YOLOv8's parameters and an improvement to its command-line interface. YOLOv8's parameters are largely similar to YOLOv5's, but the way you run the code is different, which is awkward for anyone used to YOLOv5's workflow (python commands with argparse-managed options). The interface is therefore reworked here so that it runs the same way as YOLOv5.
The parameters in the source code fall into four groups.
Note that each mode has its own fixed set of args. YOLOv8 lets you pass in any parameter, but the code automatically selects the ones relevant to the current mode. The four modes are (see the official documentation for each):
train:
val:
predict:
export:
Training command-line examples:
# Build a new model from YAML and train it from scratch
yolo detect train data=coco128.yaml model=yolov8n.yaml epochs=100 imgsz=640

# Start training from a pretrained *.pt model
yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640

# Build a new model from YAML, transfer pretrained weights into it, then train
yolo detect train data=coco128.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
The equivalent Python code:
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.yaml')  # build a new model from YAML
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)
model = YOLO('yolov8n.yaml').load('yolov8n.pt')  # build from YAML and transfer the weights

# Train the model
model.train(data='coco128.yaml', epochs=100, imgsz=640)
As you can see, whether you run YOLOv8 through its CLI or through the Python API, the style differs considerably from YOLOv5's, which is uncomfortable for anyone used to managing options with argparse, so I reworked the interface.
The argparse style (YOLOv5):
python train.py --data coco128.yaml --weights yolov8n.pt --imgsz 640
In the improved command-line format you simply replace yolo with python and prefix each parameter with --; in other words, it fully matches the YOLOv5 style.
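As a minimal, self-contained sketch of what this style means mechanically (the flag names mirror the examples above; this is not the full script, which appears later in the chapter):

```python
import argparse

# A minimal YOLOv5-style interface: every setting is an explicit '--' flag
parser = argparse.ArgumentParser()
parser.add_argument('--task', type=str, default='detect')
parser.add_argument('--mode', type=str, default='train')
parser.add_argument('--data', type=str, default='coco128.yaml')
parser.add_argument('--weight', type=str, default='yolov8n.pt')
parser.add_argument('--imgsz', type=int, default=640)

# Simulate: python pythoncil.py --task detect --mode train --data coco128.yaml
opt = parser.parse_args(['--task', 'detect', '--mode', 'train', '--data', 'coco128.yaml'])
print(opt.task, opt.mode, opt.data, opt.weight, opt.imgsz)
# -> detect train coco128.yaml yolov8n.pt 640
```

Flags that are not passed on the command line fall back to their argparse defaults, exactly as in YOLOv5's train.py.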
Command-line example:
python pythoncil.py --task detect --mode train --data coco128.yaml --weight yolov8n.pt --imgsz 640
Create a pythoncil.py file in the project root and paste in the code below to get this behavior. Note that I keep the training hyperparameters in a separate YAML file, which the code loads to override YOLOv8's default training hyperparameters (pass its path with --hyp). This matches YOLOv5's file layout and keeps the parameters easy to manage.
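The override order the script applies — the ultralytics defaults first, then the argparse values, then the user's hyp YAML last — can be sketched with plain dicts standing in for the YAML files (the keys here are illustrative):

```python
# Stand-ins for the three sources the script merges, in order of precedence
defaults = {'epochs': 100, 'lr0': 0.01, 'imgsz': 640}   # ultralytics default.yaml
cli_args = {'epochs': 300}                              # values parsed by argparse
hyp      = {'lr0': 0.02}                                # user hyp file given via --hyp

cfg = dict(defaults)
cfg.update(cli_args)  # argparse values override the defaults
cfg.update(hyp)       # the hyp file overrides everything else, as in the script
print(cfg)  # {'epochs': 300, 'lr0': 0.02, 'imgsz': 640}
```

Because the hyp file is applied last, it is the single place to tune training hyperparameters, just like hyp.yaml in YOLOv5.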
# -*- coding:utf-8 -*-
"""
Author: ChenTao
Date: 2023-05-20
"""
import argparse

import yaml
from ultralytics import YOLO

# Keys that are not valid ultralytics arguments; they are removed
# before the config dict is handed to YOLO
remove = {'weight': None, 'hyp': None}


def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--task', type=str, default='detect', help='YOLO task, i.e. detect, segment, classify, pose')
    parser.add_argument('--mode', type=str, default='train',
                        help='YOLO mode, i.e. train, val, predict, export, track, benchmark')
    # Train settings ---------------------------------------------------------------------------------------------------
    parser.add_argument('--weight', type=str, default='yolov8s.pt', help='initial weights path')
    parser.add_argument('--model', type=str, default='ultralytics/models/v8/yolov8n.yaml', help='model.yaml path')
    parser.add_argument('--data', type=str, default='ultralytics/datasets/coco128.yaml', help='dataset.yaml path')
    parser.add_argument('--hyp', type=str, default='ultralytics/datasets/hyp/default.yaml', help='hyperparameters path')
    parser.add_argument('--epochs', type=int, default=300, help='total training epochs')
    parser.add_argument('--patience', type=int, default=50,
                        help='epochs to wait for no observable improvement for early stopping of training')
    parser.add_argument('--batch', type=int, default=1, help='total batch size for all GPUs, -1 for autobatch')
    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
    parser.add_argument('--save', type=bool, default=True, help='save train checkpoints and predict results')
    parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')
    parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
    parser.add_argument('--project', default='runs/train', help='save to project/name')
    parser.add_argument('--name', default='exp', help='save to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--pretrained', action='store_true', help='whether to use a pretrained model')
    parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer')
    parser.add_argument('--verbose', type=bool, default=True, help='whether to print verbose output')
    parser.add_argument('--seed', type=int, default=0, help='Global training seed')
    parser.add_argument('--deterministic', type=bool, default=True, help='whether to enable deterministic mode')
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler')
    parser.add_argument('--close_mosaic', type=int, default=0,
                        help='(int) disable mosaic augmentation for final epochs')
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    parser.add_argument('--amp', type=bool, default=True,
                        help='Automatic Mixed Precision (AMP) training, choices=[True, False], True runs AMP check')
    # Segmentation
    parser.add_argument('--overlap_mask', type=bool, default=True,
                        help='masks should overlap during training (segment train only)')
    parser.add_argument('--mask_ratio', type=int, default=4, help='mask downsample ratio (segment train only)')
    # Classification
    parser.add_argument('--dropout', type=float, default=0.0, help='use dropout regularization (classify train only)')
    # Val/Test settings ------------------------------------------------------------------------------------------------
    parser.add_argument('--val', type=bool, default=True, help='validate/test during training')
    parser.add_argument('--split', type=str, default='val',
                        help="dataset split to use for validation, i.e. 'val', 'test' or 'train'")
    parser.add_argument('--save_json', action='store_true', help='save results to JSON file')
    parser.add_argument('--save_hybrid', action='store_true',
                        help='save hybrid version of labels (labels + additional predictions)')
    parser.add_argument('--conf', type=float, default=0.001,
                        help='object confidence threshold for detection (default 0.25 predict, 0.001 val)')
    parser.add_argument('--iou', type=float, default=0.7, help='intersection over union (IoU) threshold for NMS')
    parser.add_argument('--max_det', type=int, default=300, help='maximum number of detections per image')
    parser.add_argument('--half', action='store_true', help='use half precision (FP16)')
    parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
    parser.add_argument('--plots', type=bool, default=True, help='save plots during train/val')
    # Prediction settings ----------------------------------------------------------------------------------------------
    parser.add_argument('--source', type=str, default='', help='source directory for images or videos')
    parser.add_argument('--show', action='store_true', help='show results if possible')
    parser.add_argument('--save_txt', action='store_true', help='save results as .txt file')
    parser.add_argument('--save_conf', action='store_true', help='save results with confidence scores')
    parser.add_argument('--save_crop', action='store_true', help='save cropped images with results')
    parser.add_argument('--show_labels', type=bool, default=True, help='show object labels in plots')
    parser.add_argument('--show_conf', type=bool, default=True, help='show object confidence scores in plots')
    parser.add_argument('--vid_stride', type=int, default=1, help='video frame-rate stride')
    parser.add_argument('--line_width', type=int, default=None, help='line width of the bounding boxes')
    parser.add_argument('--visualize', action='store_true', help='visualize model features')
    parser.add_argument('--augment', action='store_true', help='apply image augmentation to prediction sources')
    parser.add_argument('--agnostic_nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--classes', nargs='+', type=int,
                        help='filter results by class, i.e. class=0, or class=[0,2,3]')
    parser.add_argument('--retina_masks', action='store_true', help='use high-resolution segmentation masks')
    parser.add_argument('--boxes', type=bool, default=True, help='Show boxes in segmentation predictions')
    # Export settings --------------------------------------------------------------------------------------------------
    parser.add_argument('--format', type=str, default='onnx', help='format to export to')
    parser.add_argument('--keras', action='store_true', help='use Keras')
    parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
    parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization')
    parser.add_argument('--dynamic', action='store_true', help='ONNX/TF/TensorRT: dynamic axes')
    parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
    parser.add_argument('--opset', type=int, default=None, help='opset version (optional)')
    parser.add_argument('--workspace', type=int, default=4, help='TensorRT: workspace size (GB)')
    parser.add_argument('--nms', action='store_true', help='CoreML: add NMS')
    return parser.parse_known_args()[0] if known else parser.parse_args()


def load_yaml(path):
    with open(path, 'r', encoding='utf-8') as f:
        hyp = yaml.safe_load(f)
    return hyp


def remove_key(dic):
    for key in remove.keys():
        del dic[key]
    return dic


def run(**kwargs):
    # Usage: import pythoncil; pythoncil.run(task='', mode='', data='coco128.yaml', imgsz=320, weight='yolov8m.pt')
    opt = parse_opt(True)
    dic = load_yaml('ultralytics/yolo/cfg/default.yaml')
    for k, v in kwargs.items():
        setattr(opt, k, v)
    for arg in vars(opt):
        dic[arg] = getattr(opt, arg)
    # Load hyperparameters and let them override the defaults
    hyp = load_yaml(opt.hyp)
    for key in hyp.keys():
        dic[key] = hyp[key]
    dic['project'] = f"runs/{dic['task']}/{dic['mode']}"
    dic['name'] = f"{dic['data'].split('/')[-1].split('.')[0]}-{dic['model'].split('/')[-1].split('.')[0]}-{dic['imgsz']}"
    if dic['mode'] == 'train':
        model = YOLO(dic['model'], task=dic['task']).load(dic['weight'])
        model.train(**remove_key(dic))
    elif dic['mode'] == 'val':
        model = YOLO(dic['weight'])
        model.val(**remove_key(dic))
    elif dic['mode'] == 'predict':
        model = YOLO(dic['weight'])
        model.predict(**remove_key(dic))
    elif dic['mode'] == 'export':
        model = YOLO(dic['weight'])
        model.export(**remove_key(dic))


if __name__ == '__main__':
    run()
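Two details of the script are worth noting: the run name is derived from the data/model file stems plus the image size, and the keys 'weight' and 'hyp' (which ultralytics does not accept as arguments) are deleted before the dict is unpacked into YOLO. A stdlib-only sketch of both steps, using the script's default paths:

```python
cfg = {'weight': 'yolov8n.pt', 'hyp': 'ultralytics/datasets/hyp/default.yaml',
       'data': 'ultralytics/datasets/coco128.yaml',
       'model': 'ultralytics/models/v8/yolov8n.yaml', 'imgsz': 640}

# Run name: <data stem>-<model stem>-<imgsz>
name = (f"{cfg['data'].split('/')[-1].split('.')[0]}-"
        f"{cfg['model'].split('/')[-1].split('.')[0]}-{cfg['imgsz']}")
print(name)  # coco128-yolov8n-640

# Drop the keys that train()/val()/predict()/export() would reject
for key in ('weight', 'hyp'):
    del cfg[key]
print(sorted(cfg))  # ['data', 'imgsz', 'model']
```

With this naming convention each run directory encodes the dataset, model, and image size, e.g. runs/detect/train/coco128-yolov8n-640.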
This chapter analyzed the meaning and usage of YOLOv8's parameters and adapted them to YOLOv5's parameter-management and run style. After all, YOLOv5 has matured over a long time into an excellent project and is well worth borrowing from.
Published: 2024-01-28 21:19:51
Link: https://www.4u4v.net/it/170644799610356.html