Ship agents faster. Plano is delivery infrastructure for agentic applications - a smart proxy & dataplane that offloads plumbing work, so you stay focused on product logic.
AI Gateway: Claude Pro, Copilot, Gemini subscriptions → OpenAI/Anthropic/Gemini APIs. No API keys needed.
High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model discovery across local and remote inference backends.
Build mods for Claude Code: hook any request, modify any response, run /model "with-your-custom-model", route models intelligently using your logic or ours, and even use your Claude subscription as an API.
Open-source LLM proxy that transparently captures and logs all interactions with LLM APIs.
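A minimal sketch of the transparent-capture idea, assuming the OpenAI chat completions endpoint and a JSONL log file; the upstream URL, log path, and function name are placeholders, not this project's code:

```python
# Illustrative pass-through that logs every LLM interaction as one JSON line.
import json
import time
import urllib.request

UPSTREAM = "https://api.openai.com/v1/chat/completions"  # placeholder upstream

def logged_completion(payload: dict, api_key: str) -> dict:
    req = urllib.request.Request(
        UPSTREAM,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    start = time.time()
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # Append request, response, and latency so every call is auditable.
    with open("llm_log.jsonl", "a") as f:
        f.write(json.dumps({"request": payload, "response": body,
                            "latency_s": round(time.time() - start, 3)}) + "\n")
    return body
```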
OpenAI-compatible HTTP LLM proxy / gateway for multi-provider inference (Google, Anthropic, OpenAI, PyTorch). Lightweight, extensible Python/FastAPI—use as library or standalone service.
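A hedged sketch of what an OpenAI-compatible FastAPI pass-through can look like; the LLM_BACKEND variable and forwarding behavior are assumptions for illustration, not this project's implementation:

```python
# Forward /v1/chat/completions to a configurable backend, preserving auth.
import os

import httpx
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()
BACKEND = os.environ.get("LLM_BACKEND", "https://api.openai.com")  # assumption

@app.post("/v1/chat/completions")
async def chat_completions(request: Request) -> JSONResponse:
    payload = await request.json()
    async with httpx.AsyncClient(timeout=60) as client:
        upstream = await client.post(
            f"{BACKEND}/v1/chat/completions",
            json=payload,
            headers={"Authorization": request.headers.get("authorization", "")},
        )
    return JSONResponse(status_code=upstream.status_code, content=upstream.json())
```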
A robust, configurable LLM proxy server built with Node.js, Express, and PostgreSQL. It acts as an intermediary between your applications and various Large Language Model (LLM) providers.
A personal LLM gateway with fault-tolerant capabilities for calls to models from any provider with an OpenAI-compatible API. Advanced features such as retries, model sequencing, and body parameter injection are also available. Especially useful with AI coders like Cline and RooCode and providers like OpenRouter.
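The retry and model-sequencing pattern can be sketched against any OpenAI-compatible SDK; the model ids, retry count, and injected parameters below are illustrative assumptions, not this gateway's configuration:

```python
# Try each model in order, injecting extra body parameters, until one succeeds.
import openai

MODEL_SEQUENCE = ["primary-model", "fallback-model"]   # placeholder model ids
INJECTED_PARAMS = {"temperature": 0.2}                 # example body injection

def robust_completion(client: openai.OpenAI, messages: list[dict]) -> str:
    last_error: Exception | None = None
    for model in MODEL_SEQUENCE:
        for _attempt in range(3):                      # simple bounded retry
            try:
                resp = client.chat.completions.create(
                    model=model, messages=messages, **INJECTED_PARAMS)
                return resp.choices[0].message.content
            except openai.APIError as err:
                last_error = err
    raise RuntimeError("all models in the sequence failed") from last_error
```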
Allows any BYOK AI editor or extension, such as Cursor or Continue, to connect to any OpenAI-compatible LLM by aliasing it as a different model.
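The aliasing idea reduces to a small rewrite of the request's model field before forwarding; the mappings below are made-up examples, not this project's defaults:

```python
# The editor asks for a model name it trusts; the proxy swaps in the real one.
ALIASES = {
    "gpt-4o": "my-local-model",       # editor-visible name -> actual backend
    "gpt-4o-mini": "qwen2.5-coder",
}

def rewrite_model(payload: dict) -> dict:
    requested = payload.get("model", "")
    payload["model"] = ALIASES.get(requested, requested)  # pass through unknowns
    return payload
```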
Go LLM gateway — one interface for Claude Code, Codex, Gemini CLI, Anthropic, OpenAI, Qwen, and vLLM.
Local LLM proxy, DevOps friendly
Connect any LLM-powered client app, such as a coding agent, to any supported inference backend/model.
Small reliability layer for HTTP APIs and LLM calls. Idempotent HTTP/LLM proxy with retries, cache, circuit breaker and predictable AI costs.
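A toy circuit breaker illustrating the fail-fast behavior such a layer provides; the thresholds and class name are assumptions for the sketch, not this library's API:

```python
# After too many consecutive failures the breaker opens and calls fail fast
# until a cooldown elapses, protecting the upstream and bounding spend.
import time
from typing import Callable, TypeVar

T = TypeVar("T")

class CircuitBreaker:
    def __init__(self, max_failures: int = 5, cooldown_s: float = 30.0):
        self.max_failures, self.cooldown_s = max_failures, cooldown_s
        self.failures, self.opened_at = 0, 0.0

    def call(self, fn: Callable[[], T]) -> T:
        if self.failures >= self.max_failures:
            if time.time() - self.opened_at < self.cooldown_s:
                raise RuntimeError("circuit open: failing fast")
            self.failures = 0                 # half-open: allow one probe call
        try:
            result = fn()
            self.failures = 0                 # success closes the circuit
            return result
        except Exception:
            self.failures += 1
            self.opened_at = time.time()
            raise
```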
A self-hosted, open-source (Apache 2.0) proxy for LLMs with Prometheus metrics.
OpenAI-compatible AI proxy: Anthropic Claude, Google Gemini, GPT-5, Cloudflare AI. Free hosting, automatic failover, token rotation. Deploy in 1 minute.
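Token rotation on rate limits can be sketched in a few lines; the keys below are placeholders, and rotating only on HTTP 429 is a simplifying assumption:

```python
# Cycle through API keys, moving to the next one whenever a key is rate-limited.
import itertools

import httpx

KEYS = ["key-a", "key-b", "key-c"]           # placeholder API keys
_key_cycle = itertools.cycle(KEYS)

def post_with_rotation(url: str, payload: dict) -> httpx.Response:
    for _ in range(len(KEYS)):               # try each key at most once per call
        key = next(_key_cycle)
        resp = httpx.post(url, json=payload,
                          headers={"Authorization": f"Bearer {key}"})
        if resp.status_code != 429:          # rotate only on rate limiting
            return resp
    raise RuntimeError("all keys rate-limited")
```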
[WIP] Sorai is a lightweight, high-performance, and open-source LLM proxy gateway.
Node.js OpenRouter inference proxy that provides all necessary endpoints for your LLM application.
A proxy for Claude Code to use LiteLLM.
Store your knowledge (privately), ground LLMs with it, and curb hallucinations.