
👋 hello

We write your reusable computer vision tools. Whether you need to load your dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone, you can count on us! 🤝


💻 install

Pip install the supervision package in a Python>=3.8 environment.

pip install supervision

Read more about conda, mamba, and installing from source in our guide.
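
supervision is also published on conda-forge, so a conda or mamba install typically looks like the command below; the install guide has the authoritative instructions.

conda install -c conda-forge supervision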

🔥 quickstart

models

Supervision was designed to be model agnostic. Just plug in any classification, detection, or segmentation model. For your convenience, we have created connectors for the most popular libraries, such as Ultralytics, Transformers, and MMDetection.

import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(...)
model = YOLO('yolov8s.pt')
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

len(detections)
# 5
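
The resulting sv.Detections object stores the results as NumPy arrays and supports boolean-mask indexing. A minimal sketch of inspecting and filtering it (attribute names as in current supervision releases):

detections.xyxy          # (N, 4) bounding boxes in xyxy format
detections.confidence    # (N,) confidence scores
detections.class_id      # (N,) class ids

# boolean-mask indexing keeps only confident detections
detections = detections[detections.confidence > 0.5]
len(detections)
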
👉 more model connectors
  • inference

    Running with Inference requires a Roboflow API key.

    import cv2
    import supervision as sv
    from inference import get_model
    
    image = cv2.imread(...)
    model = get_model(model_id="yolov8s-640", api_key=<ROBOFLOW API KEY>)
    result = model.infer(image)[0]
    detections = sv.Detections.from_inference(result)
    
    len(detections)
    # 5
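
  • transformers

    The Transformers connector mentioned above can be used in much the same way. The sketch below assumes a DETR checkpoint and the sv.Detections.from_transformers connector; exact parameter names may differ between supervision versions.

    import torch
    import supervision as sv
    from PIL import Image
    from transformers import DetrImageProcessor, DetrForObjectDetection
    
    # facebook/detr-resnet-50 is only an example checkpoint
    processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
    model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
    
    image = Image.open(...)
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    
    # convert raw outputs to absolute-coordinate boxes
    target_size = torch.tensor([image.size[::-1]])
    results = processor.post_process_object_detection(
        outputs, target_sizes=target_size)[0]
    
    # from_transformers accepts the post-processed results dict
    # (the exact signature may vary by supervision version)
    detections = sv.Detections.from_transformers(results)
    
    len(detections)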

annotators

Supervision offers a wide range of highly customizable annotators, allowing you to compose the perfect visualization for your use case.

import cv2
import supervision as sv

image = cv2.imread(...)
detections = sv.Detections(...)

box_annotator = sv.BoxAnnotator()
annotated_frame = box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
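
Annotators can also be chained. A minimal sketch that adds labels on top of the boxes, assuming sv.LabelAnnotator and class names populated by a model connector such as from_ultralytics:

label_annotator = sv.LabelAnnotator()
# "class_name" is populated by connectors like from_ultralytics;
# adjust the label source to your own data
annotated_frame = label_annotator.annotate(
    scene=annotated_frame,
    detections=detections,
    labels=[
        f"{class_name} {confidence:.2f}"
        for class_name, confidence
        in zip(detections["class_name"], detections.confidence)
    ]
)
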
(demo video: supervision-0.16.0-annotators.mp4)

datasets

Supervision provides a set of utils that allow you to load, split, merge, and save datasets in one of the supported formats.

import supervision as sv
from roboflow import Roboflow

project = Roboflow().workspace(<WORKSPACE_ID>).project(<PROJECT_ID>)
dataset = project.version(<PROJECT_VERSION>).download("coco")

ds = sv.DetectionDataset.from_coco(
    images_directory_path=f"{dataset.location}/train",
    annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)

path, image, annotation = ds[0]  # loads the image on demand

for path, image, annotation in ds:
    ...  # each iteration loads the image on demand
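
Each dataset item yields the image path, the image as a NumPy array, and the annotations as an sv.Detections object, so the annotators shown above can be reused to spot-check labels. A minimal sketch (the output filename is just an example):

import cv2
import supervision as sv

path, image, annotation = ds[0]

box_annotator = sv.BoxAnnotator()
annotated_image = box_annotator.annotate(
    scene=image.copy(),
    detections=annotation
)
cv2.imwrite("annotated_sample.jpg", annotated_image)
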
👉 more dataset utils
  • load

    dataset = sv.DetectionDataset.from_yolo(
        images_directory_path=...,
        annotations_directory_path=...,
        data_yaml_path=...
    )
    
    dataset = sv.DetectionDataset.from_pascal_voc(
        images_directory_path=...,
        annotations_directory_path=...
    )
    
    dataset = sv.DetectionDataset.from_coco(
        images_directory_path=...,
        annotations_path=...
    )
  • split

    train_dataset, test_dataset = dataset.split(split_ratio=0.7)
    test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)
    
    len(train_dataset), len(test_dataset), len(valid_dataset)
    # (700, 150, 150)
  • merge

    ds_1 = sv.DetectionDataset(...)
    len(ds_1)
    # 100
    ds_1.classes
    # ['dog', 'person']
    
    ds_2 = sv.DetectionDataset(...)
    len(ds_2)
    # 200
    ds_2.classes
    # ['cat']
    
    ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
    len(ds_merged)
    # 300
    ds_merged.classes
    # ['cat', 'dog', 'person']
  • save

    dataset.as_yolo(
        images_directory_path=...,
        annotations_directory_path=...,
        data_yaml_path=...
    )
    
    dataset.as_pascal_voc(
        images_directory_path=...,
        annotations_directory_path=...
    )
    
    dataset.as_coco(
        images_directory_path=...,
        annotations_path=...
    )
  • convert

    sv.DetectionDataset.from_yolo(
        images_directory_path=...,
        annotations_directory_path=...,
        data_yaml_path=...
    ).as_pascal_voc(
        images_directory_path=...,
        annotations_directory_path=...
    )

🎬 tutorials

Want to learn how to use Supervision? Explore our how-to guides, end-to-end examples, cheatsheet, and cookbooks!


Dwell Time Analysis with Computer Vision | Real-Time Stream Processing

Created: 5 Apr 2024

Learn how to use computer vision to analyze wait times and optimize processes. This tutorial covers object detection, tracking, and calculating time spent in designated zones. Use these techniques to improve customer experience in retail, traffic management, or other scenarios.
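
The building blocks behind that tutorial are available directly in supervision. A rough sketch of counting tracked detections inside a polygon zone, assuming sv.ByteTrack and sv.PolygonZone with the interfaces shown (older releases may also require a frame_resolution_wh argument):

import numpy as np
import supervision as sv

tracker = sv.ByteTrack()
# example polygon covering a 640x480 region of the frame
zone = sv.PolygonZone(polygon=np.array([[0, 0], [640, 0], [640, 480], [0, 480]]))

detections = sv.Detections(...)                 # detections for the current frame
detections = tracker.update_with_detections(detections)

in_zone = zone.trigger(detections=detections)   # boolean mask, one entry per detection
zone.current_count                              # how many detections are currently in the zone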


Speed Estimation & Vehicle Tracking | Computer Vision | Open Source

Created: 11 Jan 2024

Learn how to track and estimate the speed of vehicles using YOLO, ByteTrack, and Roboflow Inference. This comprehensive tutorial covers object detection, multi-object tracking, filtering detections, perspective transformation, speed estimation, visualization improvements, and more.

💜 built with supervision

Did you build something cool using supervision? Let us know!

(demo videos: football-players-tracking-25.mp4, traffic_analysis_result.mov, vehicles-step-7-new.mp4)

📚 documentation

Visit our documentation page to learn how supervision can help you build computer vision applications faster and more reliably.

🏆 contribution

We love your input! Please see our contributing guide to get started. Thank you 🙏 to all our contributors!

