An app that uses CoreML and SqueezeNet to take a picture and tell what object it thinks is in the picture!
Machine learning project for Computer Science (University of Catania)
Deep learning models to predict the emotion exhibited by a person in videos
Automatic diagnosis of COVID-19 using CT scans
Implements training and inference for several common deep learning models with TensorFlow and PyTorch
SqueezeNet for the ncnn framework
Easy-to-use implementations of popular computer vision layers with customizable parameters
Steering angle prediction
Real-time camera object detection with machine learning in Swift. A basic introduction to Core ML, Vision, and ARKit.
Behavioral cloning for driving a car on a track
SqueezeNet Implementation in TensorFlow 1.10
Code used in my thesis work - fire and smoke detection using fully supervised training methods and search by quadtree (FireFront Project)
Architectures of convolutional neural networks for image classification in PyTorch (see the SqueezeNet sketches after this list)
A Python program for controlling the mouse with hand gestures captured by a webcam.
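Most of the repositories above build on the same core workflow: run an image through a pretrained SqueezeNet and read off the top predicted class. As a point of reference, here is a minimal sketch of that workflow in PyTorch, assuming torchvision >= 0.13 and a hypothetical local image file "example.jpg"; the individual projects listed above differ in framework and details.

```python
# Minimal sketch (not taken from any specific repo above): classify an image
# with a pretrained SqueezeNet from torchvision.
# Assumes torchvision >= 0.13 and a hypothetical image file "example.jpg".
import torch
from PIL import Image
from torchvision import models

weights = models.SqueezeNet1_1_Weights.DEFAULT
model = models.squeezenet1_1(weights=weights)
model.eval()

preprocess = weights.transforms()            # resize, crop, normalize for ImageNet
image = Image.open("example.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)       # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs.max(dim=1)
print(f"{weights.meta['categories'][top_idx.item()]}: {top_prob.item():.1%}")
```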
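Since several entries above re-implement the SqueezeNet architecture itself, below is a sketch of its defining building block, the Fire module: a 1x1 "squeeze" convolution followed by parallel 1x1 and 3x3 "expand" convolutions whose outputs are concatenated. This is written in plain PyTorch; the channel counts in the usage example are illustrative.

```python
import torch
import torch.nn as nn


class Fire(nn.Module):
    """SqueezeNet Fire module: 1x1 squeeze conv, then parallel 1x1 and 3x3 expand convs."""

    def __init__(self, in_channels, squeeze_channels, expand1x1_channels, expand3x3_channels):
        super().__init__()
        self.squeeze = nn.Sequential(
            nn.Conv2d(in_channels, squeeze_channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        self.expand1x1 = nn.Sequential(
            nn.Conv2d(squeeze_channels, expand1x1_channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        self.expand3x3 = nn.Sequential(
            nn.Conv2d(squeeze_channels, expand3x3_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.squeeze(x)
        # Concatenate the two expand branches along the channel dimension.
        return torch.cat([self.expand1x1(x), self.expand3x3(x)], dim=1)


# Illustrative usage: the first Fire module of SqueezeNet 1.1 maps 64 -> 128 channels.
if __name__ == "__main__":
    fire = Fire(in_channels=64, squeeze_channels=16, expand1x1_channels=64, expand3x3_channels=64)
    out = fire(torch.randn(1, 64, 56, 56))
    print(out.shape)  # torch.Size([1, 128, 56, 56])
```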