Pinned repositories

  1. vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 31k stars · 4.7k forks

  2. llm-compressor

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 740 stars · 63 forks

Repositories

8 repositories

Sponsors

  • @davedgd
  • @massif-01
  • @HiddenPeak
  • @dvlpjrs
  • @vincentkoc
  • @mgoin
  • @upstash
  • @robertgshaw2-neuralmagic
  • Private Sponsor
