Implementation is based on the YOLOv6 v3.0 code.
## New Features

- Face-landmark localization
- Repulsion loss
- Same-channel Dehead
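The repulsion loss listed above follows the idea of the Repulsion Loss paper (Wang et al., 2018): besides attracting a prediction toward its assigned ground truth, a RepGT term pushes it away from the other ground-truth boxes it overlaps. The following is a minimal sketch of that RepGT term only, using the paper's IoG (intersection over ground-truth area) and smoothed-ln penalty; it is an illustration of the technique, not this repo's actual implementation, and the function names here are hypothetical.

```python
import math

def iog(box, gt):
    """Intersection over ground-truth area for two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(box[2], gt[2]) - max(box[0], gt[0]))
    iy = max(0.0, min(box[3], gt[3]) - max(box[1], gt[1]))
    gt_area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return ix * iy / gt_area if gt_area > 0 else 0.0

def smooth_ln(x, sigma=0.5):
    """Smoothed -ln penalty from the Repulsion Loss paper."""
    if x <= sigma:
        return -math.log(max(1.0 - x, 1e-9))
    return (x - sigma) / (1.0 - sigma) - math.log(1.0 - sigma)

def rep_gt(pred, gts, assigned_idx):
    """RepGT term: penalize overlap with the most-overlapped *other* ground truth."""
    others = [g for i, g in enumerate(gts) if i != assigned_idx]
    if not others:
        return 0.0
    return smooth_ln(max(iog(pred, g) for g in others))
```

In crowded faces this term discourages a box from drifting toward a neighboring face, which is why it helps on the WIDER FACE Hard split.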
| Model | Size | Easy | Medium | Hard | Speed<sup>T4</sup> trt fp16 b1 (fps) | Speed<sup>T4</sup> trt fp16 b32 (fps) | Params (M) | FLOPs (G) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv6-N | 640 | 95.0 | 92.4 | 80.4 | 797 | 1313 | 4.63 | 11.35 |
| YOLOv6-S | 640 | 96.2 | 94.7 | 85.1 | 339 | 484 | 12.41 | 32.45 |
| YOLOv6-M | 640 | 97.0 | 95.3 | 86.3 | 188 | 240 | 24.85 | 70.59 |
| YOLOv6-L | 640 | 97.2 | 95.9 | 87.5 | 102 | 121 | 56.77 | 159.24 |
| YOLOv6Lite-S | 416 | 89.6 | 84.6 | 58.8 | / | / | 0.53 | 0.90 |
| YOLOv6Lite-M | 416 | 90.6 | 86.1 | 60.6 | / | / | 0.76 | 1.07 |
| YOLOv6Lite-L | 416 | 91.8 | 87.6 | 64.2 | / | / | 1.06 | 1.40 |
- All checkpoints are fine-tuned from COCO-pretrained models for 300 epochs without distillation.
- mAP and speed results are evaluated on the WIDER FACE dataset with an input resolution of 640×640.
- Speed is tested with TensorRT 8.2 on a T4 GPU.
- Refer to the Test speed tutorial to reproduce the speed results of YOLOv6.
- Params and FLOPs of YOLOv6 are estimated on deployed models.
## Install

```shell
git clone https://github.com/meituan/YOLOv6
cd YOLOv6
git checkout yolov6-face
pip install -r requirements.txt
```
## Training

Single GPU:

```shell
python tools/train.py --batch 8 --conf configs/yolov6s_finetune.py --data data/WIDER_FACE.yaml --fuse_ab --device 0
```

Multi GPUs (DDP mode recommended):

```shell
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 64 --conf configs/yolov6s_finetune.py --data data/WIDER_FACE.yaml --fuse_ab --device 0,1,2,3,4,5,6,7
```
- fuse_ab: anchor-aided training mode.
- conf: select the config file, which specifies the network, optimizer, and hyperparameters. We recommend applying yolov6n/s/m/l_finetune.py when training on WIDER FACE or your custom dataset.
- data: prepare the dataset and specify its paths in data.yaml (WIDER FACE, YOLO-format widerface labels).
- Make sure your dataset structure is as follows:
```
├── widerface
│   ├── images
│   │   ├── train
│   │   └── val
│   └── labels
│       ├── train
│       └── val
```
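Since this branch adds face-landmark localization, each YOLO-format label row carries landmark coordinates in addition to the usual `class cx cy w h` box. Below is a minimal parsing sketch assuming the common widerface-YOLO convention of five normalized (x, y) landmark pairs appended to each row; the exact column layout of the provided labels may differ, and `parse_face_label` is a hypothetical helper, not part of this repo.

```python
def parse_face_label(line):
    """Parse one YOLO-format face label row.

    Assumed layout (normalized to [0, 1]):
        class cx cy w h  lx1 ly1 ... lx5 ly5
    """
    vals = [float(v) for v in line.split()]
    cls, box, flat = int(vals[0]), vals[1:5], vals[5:]
    # Group the trailing values into (x, y) landmark pairs.
    landmarks = [(flat[i], flat[i + 1]) for i in range(0, len(flat), 2)]
    return cls, box, landmarks
```

A quick sanity check on your label files (each row should yield one class, one box, and five landmark pairs) can catch path or format mistakes before a long training run.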
## Inference

First, download a pretrained model from the YOLOv6 release, or use your own trained model. Second, run inference with tools/infer.py:

```shell
python tools/infer.py --weights yolov6s_face.pt --source ../widerface/images/val/ --yaml data/WIDER_FACE.yaml --conf 0.02 --not-save-img --save-txt-widerface --name widerface_yolov6s
```
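The `--save-txt-widerface` flag writes per-image prediction files that the WIDER FACE evaluation consumes. As a rough illustration, here is a reader sketch assuming the standard WIDER FACE submission layout (image name on the first line, detection count on the second, then one `x y w h score` row per detection); the function name is hypothetical and the exact file contents should be checked against what the script produces.

```python
def read_widerface_txt(text):
    """Read one WIDER FACE-style prediction file from its text contents."""
    lines = text.strip().splitlines()
    name = lines[0]                 # image identifier
    count = int(lines[1])           # number of detections
    # Each detection row: x y w h score
    dets = [tuple(float(v) for v in ln.split()) for ln in lines[2:2 + count]]
    return name, dets
```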
## Evaluation

```shell
cd widerface_evaluate
python evaluation.py --pred ../runs/inference/widerface_yolov6s/labels/
```
## Deployment

## Tutorials
## Third-party resources

- YOLOv6 NCNN Android app demo: ncnn-android-yolov6 from FeiGeChuanShu
- YOLOv6 ONNXRuntime/MNN/TNN C++: YOLOv6-ORT, YOLOv6-MNN and YOLOv6-TNN from DefTruth
- YOLOv6 TensorRT Python: yolov6-tensorrt-python from Linaom1214
- YOLOv6 web demo on Huggingface Spaces with Gradio
- Interactive demo on DagsHub with Streamlit
- Tutorial: How to train YOLOv6 on a custom dataset
- YouTube Tutorial: How to train YOLOv6 on a custom dataset
- Blog post: YOLOv6 Object Detection – Paper Explanation and Inference
If you have any questions, feel free to join our WeChat group for discussion and exchange.