Description
Thank you for your wonderful work!
YOLOv9 with End2End (Efficient NMS)
Note: The primary purpose of employing End2End is to utilize ONNX models on TensorRT. If you choose not to use TensorRT, you should proceed with the standard ONNX export process.
I've created a fork of the original repository that adds End-to-End support for ONNX export. The changes are in `export.py` and `models/experimental.py`; both files remain fully compatible with all current export operations.
Check it out at https://github.com/levipereira/yolov9
- **Support for End-to-End ONNX Export:** Added support for end-to-end ONNX export in `export.py` and `models/experimental.py`.
- **Model Compatibility:** This functionality currently works with all `DetectionModel` models.
- **Configuration Variables:** Use the following flags to configure the export:
  - `--include onnx_end2end`: Enable End2End export.
  - `--simplify`: ONNX/ONNX END2END: simplify the model.
  - `--topk-all`: ONNX END2END/TF.js NMS: top-k for all classes to keep (default: 100).
  - `--iou-thres`: ONNX END2END/TF.js NMS: IoU threshold (default: 0.45).
  - `--conf-thres`: ONNX END2END/TF.js NMS: confidence threshold (default: 0.25).
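The exported End2End model moves NMS into the graph via the TensorRT `EfficientNMS_TRT` plugin, which returns four fixed-size outputs per batch: `num_dets`, `det_boxes`, `det_scores`, and `det_classes`, each padded out to `--topk-all` slots. Only the first `num_dets[b]` entries per image are valid. As a minimal sketch (the helper name `decode_end2end_outputs` is mine, not part of the repo), post-processing reduces to slicing:

```python
def decode_end2end_outputs(num_dets, det_boxes, det_scores, det_classes):
    """Slice fixed-size EfficientNMS_TRT outputs down to the valid detections.

    num_dets:    per-image valid-detection counts, shape (batch,) or (batch, 1)
    det_boxes:   (batch, topk_all, 4) boxes, padded past num_dets[b]
    det_scores:  (batch, topk_all) confidence scores
    det_classes: (batch, topk_all) class indices
    """
    results = []
    for b, n in enumerate(num_dets):
        # num_dets may arrive as (batch, 1); unwrap the inner value if so
        n = int(n[0]) if hasattr(n, "__len__") else int(n)
        results.append([
            (det_boxes[b][i], det_scores[b][i], det_classes[b][i])
            for i in range(n)  # ignore the padded tail beyond num_dets[b]
        ])
    return results
```

With `--topk-all 100`, every image yields 100 slots regardless of content, so this slicing step is all the host-side post-processing the engine needs.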
Example:
```
$ python3 export.py --weights ./yolov9-c.pt --imgsz 640 --simplify --include onnx_end2end
export: data=data/coco.yaml, weights=['./yolov9-c.pt'], imgsz=[640], batch_size=1, device=cpu, half=False, inplace=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx_end2end']
YOLOv5 🚀 v0.1-27-g86b0667 Python-3.8.10 torch-1.14.0a0+44dac51 CPU

Fusing layers...
Model summary: 604 layers, 50880768 parameters, 0 gradients, 237.6 GFLOPs

PyTorch: starting from ./yolov9-c.pt with output shape (1, 84, 8400) (98.4 MB)

ONNX END2END: starting export with onnx 1.13.0...
/yolov9/models/experimental.py:102: FutureWarning: 'torch.onnx._patch_torch._graph_op' is deprecated in version 1.13 and will be removed in 1.14. Please note 'g.op()' is to be removed from torch.Graph. Please open a GitHub issue if you need this functionality.
  out = g.op("TRT::EfficientNMS_TRT",
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
========== Diagnostic Run torch.onnx.export version 1.14.0a0+44dac51 ===========
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 4 WARNING 0 ERROR ========================
4 WARNING were not printed due to the log level.

Starting to simplify ONNX...
ONNX export success, saved as ./yolov9-c_end2end.onnx
ONNX END2END: export success ✅ 11.5s, saved as ./yolov9-c_end2end.onnx (129.3 MB)

Export complete (13.6s)
Results saved to /yolov9/experiments/models
Visualize: https://netron.app
```

The `TRT::EfficientNMS_TRT` shape-inference warnings are expected: the op is a TensorRT plugin unknown to ONNX shape inference, and it is resolved when TensorRT builds the engine.
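Since the End2End graph only makes sense on TensorRT, the typical next step is building an engine from the exported file. A sketch of that build configuration, assuming `trtexec` from a TensorRT installation is on `PATH` (the file names match the export output above; adjust paths to your setup):

```shell
# Hypothetical engine build; requires a TensorRT installation providing trtexec.
if command -v trtexec >/dev/null 2>&1; then
    trtexec --onnx=yolov9-c_end2end.onnx \
            --saveEngine=yolov9-c_end2end.engine \
            --fp16   # optional: build with FP16 precision
else
    echo "trtexec not found; install TensorRT to build the engine"
fi
```

The `EfficientNMS_TRT` plugin ships with TensorRT's standard plugin library, so no extra plugin registration is needed for the build.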