This repository offers plug-and-play custom parsers tailored for YOLOv11 AI models in DeepStream. Ideal for developers looking to streamline model parsing in DeepStream applications.
This repository supports DeepStream SDK versions 6.2, 6.3, 6.4, and 7.0 on both Jetson and dGPU platforms. You can follow this guide for detailed installation instructions.
Note: I prefer using Docker containers on both dGPU and Jetson platforms. They provide a convenient, out-of-the-box way to deploy DeepStream applications, since all required dependencies are packaged inside the container.
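As a sketch of that container setup (the image tag is an assumption — pick the one matching your DeepStream version and platform), a minimal Docker Compose service could look like:

```yaml
# Hypothetical compose sketch; "7.0-samples-multiarch" is an example tag,
# and "runtime: nvidia" assumes the NVIDIA container runtime is installed.
services:
  deepstream:
    image: nvcr.io/nvidia/deepstream:7.0-samples-multiarch
    runtime: nvidia
    network_mode: host
    working_dir: /workspace/DeepStream-YOLOv11
    volumes:
      - .:/workspace/DeepStream-YOLOv11   # mount this repository into the container
    command: sleep infinity               # keep the container alive for interactive use
```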
- YOLOv11
git clone https://github.com/quangdungluong/DeepStream-YOLOv11
cd DeepStream-YOLOv11
bash scripts/compile_nvdsinfer.sh
Export the ONNX model file. Check the documentation for more detailed instructions.
Convert the ONNX model to a TensorRT engine using trtexec, e.g. (file names are illustrative):
trtexec --onnx=yolo11s.onnx --saveEngine=yolo11s_b1_fp32.engine
Edit the configs/config_primary_yolov11.txt file according to your model.
[property]
...
model-engine-file=yolo11s_b1_fp32.engine
...
num-detected-classes=80
...
# 0: FP32, 1: INT8, 2: FP16
network-mode=0
...
parse-bbox-func-name=NvDsInferParseYolo
...
custom-lib-path=../libs/nvdsinfer_customparser_yolo/libnvds_infercustomparser_yolo.so
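If you switch engines or precisions often, the edit above can be scripted. A minimal sketch using Python's `configparser` (the function name and file paths are illustrative, and it assumes the config keeps its INI-like form):

```python
import configparser

def update_infer_config(path, engine_file, num_classes, network_mode):
    """Rewrite a few [property] keys in an nvinfer config file (INI-style)."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # preserve key case and hyphens as written
    cfg.read(path)
    prop = cfg["property"]
    prop["model-engine-file"] = engine_file
    prop["num-detected-classes"] = str(num_classes)
    prop["network-mode"] = str(network_mode)  # 0: FP32, 1: INT8, 2: FP16
    with open(path, "w") as f:
        # write "key=value" without spaces, matching the original layout
        cfg.write(f, space_around_delimiters=False)

# Example (file names are assumptions): point the config at an FP16 engine
# update_infer_config("configs/config_primary_yolov11.txt",
#                     "yolo11s_b1_fp16.engine", 80, 2)
```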
Edit the deepstream_app_det_config.txt file according to your GIE config file.
[primary-gie]
...
config-file=config_primary_yolov11.txt
deepstream-app -c deepstream_app_det_config.txt