A lightweight, portable, pure C99 ONNX inference engine for embedded devices with hardware acceleration support.
A toolbox for deep learning model deployment using C++: YoloX | YoloV7 | YoloV8 | GAN | OCR | MobileViT | Scrfd | MobileSAM | StableDiffusion
Open Neural Network Exchange to C compiler.
Pure C ONNX runtime with zero dependencies for embedded devices
A GStreamer Deep Learning Inference Framework
Voice100 includes neural TTS/ASR models. Voice100 inference is low-cost because its models are tiny and rely only on CNNs, with no recurrence.
Simple microTVM example for running ONNX model on NUCLEO-F746ZG board
Small footprint, standalone, zero dependency, offline keyword spotting (KWS) CLI tool.
ONNX Runtime binding for Lua
Using ONNX to run inference in C (a minimal sketch follows below)
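For illustration, here is a minimal sketch of what the last item looks like in practice, using the ONNX Runtime C API (a separate library from the pure-C runtimes listed above). The model path "model.onnx", the tensor names "input"/"output", and the 1x3x224x224 float input shape are placeholders for this example, not details taken from any of the listed projects.

```c
/* Minimal ONNX Runtime C API sketch: load a model, run one inference.
 * Assumptions: "model.onnx" exists and has a single float input named
 * "input" of shape 1x3x224x224 and a single output named "output". */
#include <stdio.h>
#include <stdlib.h>
#include <onnxruntime_c_api.h>

static const OrtApi *ort;

/* ONNX Runtime calls return an OrtStatus*; NULL means success. */
static void check(OrtStatus *status) {
    if (status != NULL) {
        fprintf(stderr, "onnxruntime error: %s\n", ort->GetErrorMessage(status));
        ort->ReleaseStatus(status);
        exit(1);
    }
}

int main(void) {
    ort = OrtGetApiBase()->GetApi(ORT_API_VERSION);

    OrtEnv *env;
    check(ort->CreateEnv(ORT_LOGGING_LEVEL_WARNING, "demo", &env));

    OrtSessionOptions *opts;
    check(ort->CreateSessionOptions(&opts));

    /* Note: on Windows the model path argument is a wide string. */
    OrtSession *session;
    check(ort->CreateSession(env, "model.onnx", opts, &session));

    /* Wrap a caller-owned float buffer as the input tensor. */
    static float input_data[1 * 3 * 224 * 224];
    int64_t shape[] = {1, 3, 224, 224};

    OrtMemoryInfo *mem_info;
    check(ort->CreateCpuMemoryInfo(OrtArenaAllocator, OrtMemTypeDefault, &mem_info));

    OrtValue *input_tensor = NULL;
    check(ort->CreateTensorWithDataAsOrtValue(
        mem_info, input_data, sizeof(input_data), shape, 4,
        ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT, &input_tensor));

    const char *input_names[]  = {"input"};  /* hypothetical tensor name */
    const char *output_names[] = {"output"}; /* hypothetical tensor name */
    OrtValue *output_tensor = NULL;
    check(ort->Run(session, NULL, input_names,
                   (const OrtValue *const *)&input_tensor, 1,
                   output_names, 1, &output_tensor));

    /* Read back the first output value. */
    float *out;
    check(ort->GetTensorMutableData(output_tensor, (void **)&out));
    printf("output[0] = %f\n", out[0]);

    ort->ReleaseValue(output_tensor);
    ort->ReleaseValue(input_tensor);
    ort->ReleaseMemoryInfo(mem_info);
    ort->ReleaseSession(session);
    ort->ReleaseSessionOptions(opts);
    ort->ReleaseEnv(env);
    return 0;
}
```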