A more readable and flexible YOLOv5 with additional backbones (GCN, ResNet, ShuffleNet, MobileNet, EfficientNet, HRNet, Swin-Transformer, etc.) and modules (CBAM, DCN, and so on), plus TensorRT support.
Deploy DL/ML inference pipelines with minimal extra code.
This repository serves as an example of deploying YOLO models on Triton Server for performance and testing purposes.
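As a rough sketch of what the client side of such a deployment looks like, the snippet below sends a dummy image batch to a Triton HTTP endpoint with the official Python client. The model name "yolov5" and the tensor names "images"/"output0" are assumptions that depend on how the model was exported, not details taken from this repository.

import numpy as np
import tritonclient.http as httpclient

# Connect to Triton's default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Dummy normalized image batch standing in for a real preprocessed frame.
batch = np.zeros((1, 3, 640, 640), dtype=np.float32)

infer_input = httpclient.InferInput("images", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

result = client.infer(
    model_name="yolov5",  # assumed model name in the repository layout
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("output0")],
)
print(result.as_numpy("output0").shape)  # raw detections, before NMS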
Set up CI for DL/CUDA/cuDNN/TensorRT/onnx2trt/onnxruntime/onnxsim/PyTorch/Triton-Inference-Server/Bazel/Tesseract/PaddleOCR/NVIDIA-Docker/MinIO/Supervisord on an AGX or PC from scratch.
Provides an ensemble model for deploying a YOLOv8 ONNX model to Triton.
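For context, a Triton ensemble is declared in a config.pbtxt that chains models into one callable graph. The snippet below is a minimal assumed layout wiring a hypothetical preprocess model into a YOLOv8 ONNX model; every model name, tensor name, and shape here is illustrative rather than taken from the repository.

name: "yolov8_ensemble"
platform: "ensemble"
max_batch_size: 1
input [
  {
    name: "raw_image"
    data_type: TYPE_UINT8
    dims: [ -1, -1, 3 ]
  }
]
output [
  {
    name: "detections"
    data_type: TYPE_FP32
    dims: [ -1, 6 ]
  }
]
ensemble_scheduling {
  step [
    {
      model_name: "preprocess"
      model_version: -1
      input_map { key: "raw_image" value: "raw_image" }
      output_map { key: "preprocessed" value: "preprocessed_image" }
    },
    {
      model_name: "yolov8_onnx"
      model_version: -1
      input_map { key: "images" value: "preprocessed_image" }
      output_map { key: "output0" value: "detections" }
    }
  ]
}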
A proxy server, written in Rust, for the Triton gRPC server that runs inference on an embedding model.
This repository uses the Triton Inference Server client library, which streamlines model deployment.
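By way of illustration, the gRPC flavor of the client exposes the same workflow with a couple of readiness checks; the endpoint, model name, and tensor names below are placeholders.

import numpy as np
import tritonclient.grpc as grpcclient

# Connect to Triton's default gRPC port and confirm the model is loadable.
client = grpcclient.InferenceServerClient(url="localhost:8001")
assert client.is_server_live()
assert client.is_model_ready("my_model")  # hypothetical model name

infer_input = grpcclient.InferInput("input", [1, 4], "FP32")
infer_input.set_data_from_numpy(np.random.rand(1, 4).astype(np.float32))

# With no explicit outputs requested, Triton returns all model outputs.
result = client.infer(model_name="my_model", inputs=[infer_input])
print(result.as_numpy("output"))  # assumed output tensor name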
The purpose of this repository is to create a DeepStream/Triton Server sample application that uses YOLOv7, YOLOv7-QAT, and YOLOv9 models to perform inference on video files or RTSP streams.
A template for deploying an AI server using Django with TF Serving or Triton Inference Server.
This repository provides an out-of-the-box, end-to-end procedure to train, deploy, and use YOLOv7 models on NVIDIA GPUs with Triton Server and DeepStream.
NvDsInferYolov7EfficientNMS for Gst-nvinferserver
Triton Inference Server Deployment with ONNX Models
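A typical Triton configuration for an ONNX model is a short config.pbtxt placed next to a versioned model file (e.g. resnet50_onnx/1/model.onnx). The name, tensor shapes, and batch size below are placeholders for illustration.

name: "resnet50_onnx"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]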
A custom Triton Inference Server backend demo for image preprocessing (resize + normalize).
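Triton's Python backend is one way such a preprocessing step can be written: the server loads a model.py that defines a TritonPythonModel class. The sketch below only normalizes the tensor; the tensor names are invented placeholders, and a real resize would typically call cv2.resize inside execute before scaling.

# model.py for a Triton Python backend: a minimal preprocessing sketch.
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # "IMAGE_RAW"/"IMAGE_PRE" are illustrative tensor names.
            raw = pb_utils.get_input_tensor_by_name(request, "IMAGE_RAW")
            img = raw.as_numpy().astype(np.float32)

            # Normalize to [0, 1]; a resize step (e.g. cv2.resize) would
            # usually come first in a real backend.
            img /= 255.0

            out = pb_utils.Tensor("IMAGE_PRE", img)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses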
A pipeline, written in Rust, for inserting text embeddings generated by a self-hosted embedding model into a Qdrant vector database over gRPC.
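Although that repository is in Rust, the same embed-and-upsert flow can be sketched with the official Python qdrant-client; the collection name, vector size, and the stand-in embedding function are all assumptions.

import numpy as np
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams


def embed(text: str) -> list[float]:
    # Stand-in for a call to a self-hosted embedding model.
    return np.random.rand(384).tolist()


# prefer_grpc routes traffic over gRPC, mirroring the Rust pipeline.
client = QdrantClient(url="http://localhost:6333", prefer_grpc=True)

client.recreate_collection(
    collection_name="docs",  # hypothetical collection name
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

texts = ["hello world", "triton server"]
client.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=i, vector=embed(t), payload={"text": t})
        for i, t in enumerate(texts)
    ],
)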
QuickStart for Deploying a Basic Model on the Triton Inference Server
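In the simplest case, serving a model amounts to arranging a model repository that Triton can point at via tritonserver --model-repository=<path>. A minimal assumed layout with a single ONNX model:

model_repository/
└── my_model/
    ├── config.pbtxt
    └── 1/
        └── model.onnx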
Adds some extra features to transformers.