🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
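A minimal sketch of what "pretrained" means in practice with the 🤗 Transformers pipeline API; the default checkpoint downloaded by pipeline() is chosen by the library, and the input sentence is just an illustration:

```python
from transformers import pipeline

# Load a pretrained sentiment-analysis pipeline (downloads a default checkpoint).
classifier = pipeline("sentiment-analysis")

# Run inference on a sample sentence.
print(classifier("Pretrained models make prototyping much faster."))
```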
The largest collection of PyTorch image encoders / backbones, including train, eval, inference, and export scripts plus pretrained weights: ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNetV3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more.
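A minimal sketch of loading a pretrained backbone from timm; the ResNet-50 choice and the dummy input tensor are illustrative, and real use would apply the model's own preprocessing:

```python
import timm
import torch

# Create a ResNet-50 with pretrained ImageNet weights.
model = timm.create_model("resnet50", pretrained=True)
model.eval()

# Dummy forward pass just to show the output shape.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # expected: torch.Size([1, 1000])
```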
Deezer source separation library including pretrained models.
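This description matches Deezer's Spleeter. A minimal sketch of separating a track with its pretrained 2-stem model; the input file name and output directory are placeholders:

```python
from spleeter.separator import Separator

# Load the pretrained 2-stem model (vocals / accompaniment).
separator = Separator("spleeter:2stems")

# Split a local audio file into stems; both paths are placeholders.
separator.separate_to_file("song.mp3", "output/")
```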
The official repo of Qwen (通义千问), the chat and pretrained large language models developed by Alibaba Cloud.
👑 Easy-to-use and powerful NLP and LLM library with a 🤗 Awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including 🗂 Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis, etc.
An open source implementation of CLIP.
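A minimal sketch of zero-shot text encoding with open_clip; the model name and pretrained tag are one published combination and may differ between releases, and the prompts are placeholders:

```python
import torch
import open_clip

# Load a pretrained CLIP ViT-B/32; the pretrained tag is one published option.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

# Encode a couple of text prompts; images would go through preprocess + encode_image.
text = tokenizer(["a photo of a dog", "a photo of a cat"])
with torch.no_grad():
    text_features = model.encode_text(text)
print(text_features.shape)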
Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.
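A minimal sketch assuming this refers to segmentation_models.pytorch; the encoder, weights, class count, and dummy input are illustrative:

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with a pretrained ResNet-34 encoder; 2 output classes are illustrative.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,
    classes=2,
)
model.eval()

# Dummy forward pass to show the per-class mask logits shape.
with torch.no_grad():
    mask_logits = model(torch.randn(1, 3, 256, 256))
print(mask_logits.shape)  # expected: torch.Size([1, 2, 256, 256])
```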
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
A PyTorch implementation of EfficientNet
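A minimal sketch assuming this is the efficientnet_pytorch package; the B0 variant and the dummy input are illustrative:

```python
import torch
from efficientnet_pytorch import EfficientNet

# Load EfficientNet-B0 with pretrained ImageNet weights.
model = EfficientNet.from_pretrained("efficientnet-b0")
model.eval()

# Dummy forward pass to show the classification logits shape.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # expected: torch.Size([1, 1000])
```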
CodeGeeX2: A More Powerful Multilingual Code Generation Model
Official release of InternLM2.5 base and chat models. 1M context support
Natural Language Processing Best Practices & Examples
Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
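A minimal sketch of running a pretrained pyannote.audio diarization pipeline; the checkpoint name, access token, and audio path are assumptions (the gated models require accepting their terms and a Hugging Face token):

```python
from pyannote.audio import Pipeline

# Checkpoint name and token are placeholders; gated models need a HF access token.
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1", use_auth_token="YOUR_HF_TOKEN"
)

# Diarize a local recording (placeholder path) and print speaker turns.
diarization = pipeline("meeting.wav")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```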
A state-of-the-art open visual language model | multimodal pretrained model
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
A treasure chest for visual classification and recognition powered by PaddlePaddle
Silero Models: pre-trained speech-to-text, text-to-speech and text-enhancement models made embarrassingly simple