- Fix for models with an LM head
- Multi-LoRA prefix caching + speculative decoding
- Prefix caching, vision-language models (VLMs), and BERT (prefix-caching sketch after this list)
- Hotfix for LoRA batching logic
- Speculative decoding, SGMV + BGMV (segmented and batched gather matrix-vector) kernels (draft-and-verify sketch after this list)
- Adapter memory manager (sketch after this list)
- Gemma support
- Structured output via Outlines (sketch after this list)
- LoRA merging per request (sketch after this list)
- OpenAI compatible API (usage sketch after this list)
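The prefix-caching items describe reusing KV-cache blocks across requests that share a common prompt prefix (e.g. the same system prompt). A minimal sketch of the idea, assuming block-aligned keys over token prefixes; the class and method names are illustrative, not this project's API:

```python
class PrefixCache:
    """Conceptual sketch: completed KV-cache blocks keyed by the token
    prefix that produced them, so a new request sharing that prefix can
    skip recomputing those blocks."""

    def __init__(self, block_size: int = 16):
        self.block_size = block_size
        self.blocks: dict[tuple[int, ...], object] = {}  # prefix -> KV block handle

    def longest_cached_prefix(self, tokens: list[int]) -> int:
        """Return how many leading tokens are already covered by cached blocks."""
        n = 0
        while n + self.block_size <= len(tokens):
            key = tuple(tokens[: n + self.block_size])
            if key not in self.blocks:
                break
            n += self.block_size
        return n

    def insert(self, tokens: list[int], kv_block: object) -> None:
        """Register the KV block computed for a block-aligned prefix."""
        assert len(tokens) % self.block_size == 0
        self.blocks[tuple(tokens)] = kv_block
```

Keying on the full prefix keeps the lookup simple at the cost of long keys; production systems typically hash each block's tokens together with the parent block's hash instead.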
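Speculative decoding is a general draft-and-verify technique: a cheap draft model proposes a block of tokens, and the target model checks them, keeping the longest agreeing prefix plus one corrected token. A greedy-acceptance sketch follows; the callables are hypothetical stand-ins, and a real implementation verifies all drafted positions in a single target forward pass rather than calling the target per token:

```python
from typing import Callable

def speculative_step(
    draft_next: Callable[[list[int]], int],   # greedy next token from the draft model
    target_next: Callable[[list[int]], int],  # greedy next token from the target model
    tokens: list[int],
    k: int = 4,
) -> list[int]:
    # Draft phase: propose k tokens autoregressively with the cheap model.
    proposal: list[int] = []
    ctx = list(tokens)
    for _ in range(k):
        t = draft_next(ctx)
        proposal.append(t)
        ctx.append(t)

    # Verify phase: keep drafted tokens while the target agrees; on the
    # first disagreement, take the target's token and stop.
    accepted: list[int] = []
    ctx = list(tokens)
    for t in proposal:
        v = target_next(ctx)
        if v == t:
            accepted.append(t)
            ctx.append(t)
        else:
            accepted.append(v)
            break
    else:
        accepted.append(target_next(ctx))  # bonus token when all k are accepted
    return tokens + accepted

# Toy demo: draft and target are the same function, so every draft is accepted.
draft = lambda ctx: len(ctx) % 5
target = lambda ctx: len(ctx) % 5
print(speculative_step(draft, target, tokens=[0], k=4))  # [0, 1, 2, 3, 4, 0]
```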
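An adapter memory manager keeps a bounded set of LoRA adapters resident on the GPU and pages the rest out. A minimal LRU sketch under that assumption; the names and the CPU/GPU split are illustrative, not this project's design:

```python
from collections import OrderedDict

class AdapterManager:
    """Conceptual sketch: a fixed budget of GPU slots for LoRA adapters,
    with the least-recently-used adapter offloaded back to CPU when a
    new one must be paged in."""

    def __init__(self, gpu_slots: int = 4):
        self.gpu_slots = gpu_slots
        self.on_gpu: OrderedDict[str, object] = OrderedDict()
        self.on_cpu: dict[str, object] = {}

    def register(self, adapter_id: str, weights: object) -> None:
        self.on_cpu[adapter_id] = weights  # adapters start offloaded on CPU

    def acquire(self, adapter_id: str) -> object:
        if adapter_id in self.on_gpu:
            self.on_gpu.move_to_end(adapter_id)   # mark most recently used
            return self.on_gpu[adapter_id]
        weights = self.on_cpu.pop(adapter_id)     # "page in" from CPU
        if len(self.on_gpu) >= self.gpu_slots:
            evicted_id, evicted = self.on_gpu.popitem(last=False)  # evict LRU
            self.on_cpu[evicted_id] = evicted     # offload back to CPU
        self.on_gpu[adapter_id] = weights
        return weights
```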
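The structured-output item names the Outlines library, which constrains decoding so the output always matches a schema or grammar. A standalone sketch using Outlines directly, assuming the pre-1.0 `outlines` API; the model name and schema here are illustrative:

```python
from pydantic import BaseModel
import outlines

class Ticket(BaseModel):
    priority: int
    summary: str

# Constrain generation so the output always parses as a Ticket.
model = outlines.models.transformers("gpt2")  # illustrative model choice
generator = outlines.generate.json(model, Ticket)
ticket = generator("Extract a ticket from: 'Login page crashes on submit, urgent.'")
print(ticket)  # a parsed Ticket instance
```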
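Per-request LoRA merging blends several low-rank updates into the base weight with mixture weights chosen by the request: W' = W + sum_i w_i * (B_i @ A_i). A minimal numpy sketch of that arithmetic; the function name and shapes are illustrative:

```python
import numpy as np

def merge_lora(
    base_weight: np.ndarray,                        # (out, in)
    adapters: list[tuple[np.ndarray, np.ndarray]],  # [(B: (out, r), A: (r, in)), ...]
    mixture: list[float],                           # per-request blend weights
) -> np.ndarray:
    merged = base_weight.copy()
    for (B, A), w in zip(adapters, mixture):
        merged += w * (B @ A)  # each adapter contributes a rank-r update
    return merged

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
B1, A1 = rng.normal(size=(8, 2)), rng.normal(size=(2, 8))
B2, A2 = rng.normal(size=(8, 2)), rng.normal(size=(2, 8))
W_req = merge_lora(W, [(B1, A1), (B2, A2)], mixture=[0.7, 0.3])
```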
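Because the server exposes an OpenAI compatible API, the stock `openai` Python client should work against it. A usage sketch, assuming the endpoint lives at `/v1` on port 8080 and that the `model` field selects a base model or LoRA adapter ID (both are assumptions, not confirmed by this list):

```python
from openai import OpenAI

# Assumption: the server exposes an OpenAI-compatible endpoint at /v1
# and accepts any string as the API key.
client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="EMPTY")

# Assumption: the model field selects the base model or a LoRA adapter ID.
response = client.chat.completions.create(
    model="my-org/my-lora-adapter",
    messages=[{"role": "user", "content": "Summarize LoRA in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```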