| Blog | Documentation | Join Slack | Join Bi-Weekly Development Meeting | Slides |
- [2024/12] 🔥 SGLang v0.4: Zero-Overhead Batch Scheduler, Cache-Aware Load Balancer, Faster Structured Outputs (blog).
- [2024/10] 🔥 The First SGLang Online Meetup (slides).
- [2024/09] SGLang v0.3 Release: 7x Faster DeepSeek MLA, 1.5x Faster torch.compile, Multi-Image/Video LLaVA-OneVision (blog).
- [2024/07] Faster Llama3 Serving with SGLang Runtime (vs. TensorRT-LLM, vLLM) (blog).
More
- [2024/04] SGLang is used by the official LLaVA-NeXT (video) release (blog).
- [2024/02] SGLang enables 3x faster JSON decoding with compressed finite state machine (blog).
- [2024/01] SGLang provides up to 5x faster inference with RadixAttention (blog).
- [2024/01] SGLang powers the serving of the official LLaVA v1.6 release demo (usage).
SGLang is a fast serving framework for large language models and vision language models. It makes your interaction with models faster and more controllable by co-designing the backend runtime and frontend language. The core features include:
- Fast Backend Runtime: Provides efficient serving with RadixAttention for prefix caching, jump-forward constrained decoding, overhead-free CPU scheduler, continuous batching, token attention (paged attention), tensor parallelism, FlashInfer kernels, chunked prefill, and quantization (FP8/INT4/AWQ/GPTQ).
- Flexible Frontend Language: Offers an intuitive interface for programming LLM applications, including chained generation calls, advanced prompting, control flow, multi-modal inputs, parallelism, and external interactions (see the sketch after this list).
- Extensive Model Support: Supports a wide range of generative models (Llama, Gemma, Mistral, Qwen, DeepSeek, LLaVA, etc.), embedding models (e5-mistral, gte, mcdse), and reward models (Skywork), with easy extensibility for integrating new models.
- Active Community: SGLang is open-source and backed by an active community with industry adoption.
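As a quick illustration of the frontend language, here is a minimal sketch of a chained multi-turn generation program. The endpoint URL, port, and example questions are placeholders; see the frontend documentation for the full API.

```python
import sglang as sgl

# A small two-turn program: each sgl.gen() call extends the same prompt state,
# so the second answer can depend on the first (chained generation calls).
@sgl.function
def multi_turn_question(s, question_1, question_2):
    s += sgl.system("You are a helpful assistant.")
    s += sgl.user(question_1)
    s += sgl.assistant(sgl.gen("answer_1", max_tokens=128))
    s += sgl.user(question_2)
    s += sgl.assistant(sgl.gen("answer_2", max_tokens=128))

# Assumes an SGLang server is already running locally on port 30000.
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = multi_turn_question.run(
    question_1="What is the capital of France?",
    question_2="List two landmarks in that city.",
)
print(state["answer_1"])
print(state["answer_2"])
```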
- Install SGLang
- Send requests (see the quick-start sketch after this list)
- Backend: SGLang Runtime (SRT)
- Frontend: Structured Generation Language (SGLang)
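For a first end-to-end test, a typical flow is to install the package, launch the runtime, and send a request through the server's OpenAI-compatible API. Below is a minimal sketch; the model path and port are placeholders to adjust for your setup, and the install/launch commands are shown as comments.

```python
# Install and launch the server first (shell), e.g.:
#   pip install "sglang[all]"
#   python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct --port 30000
# Then query the OpenAI-compatible endpoint exposed by the runtime:
import openai

client = openai.Client(base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "List 3 countries and their capitals."}],
    temperature=0,
    max_tokens=64,
)
print(response.choices[0].message.content)
```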
Learn more in our release blogs: v0.2 blog, v0.3 blog, and v0.4 blog.
The project is supported by (alphabetically): AMD, Baseten, Etched, Hyperbolic, Jam & Tea Studios, LinkedIn, Meituan, NVIDIA, RunPod, Stanford, UC Berkeley, xAI and 01.AI.
We learned from the design of and reused code from the following projects: Guidance, vLLM, LightLLM, FlashInfer, Outlines, and LMQL. Please cite our paper, SGLang: Efficient Execution of Structured Language Model Programs, if you find the project useful.