/ʤiː piː tiː miː/
what does it stand for?
Getting Started • Website • Documentation
📜 A personal AI agent in your terminal, with tools to:
run shell commands, write code, edit files, browse the web, use vision, and much more.
A great coding agent, but general-purpose enough to assist in all kinds of knowledge-work.
An unconstrained local free and open-source alternative to Claude Code, Codex, Cursor Agents, etc.
One of the first agent CLIs created (Spring 2023) — and still in very active development.
- Coming soon - gptme.ai service for running agents in the cloud; gptme desktop app for easy local use.
- 2026-01 - gptme-agent-template v0.4: Bob reaches 1000+ autonomous sessions, autonomous run loops, enhanced context generation
- 2025-12 - v0.31.0: Background jobs, form tool, cost tracking, content-addressable storage
- 2025-11 - v0.30.0: Plugin system, context compression, subagent planner mode
- 2025-10 - v0.29.0: Lessons system for contextual guidance, MCP discovery & dynamic loading, token awareness; Bob begins autonomous runs with GitHub monitoring
- 2025-08 - v0.28.0: MCP support, morph tool for fast edits, auto-commit, redesigned server API
- 2025-03 - v0.27.0: Pre-commit integration, macOS computer use, Claude 3.7 Sonnet, DeepSeek R1, local TTS with Kokoro
- 2025-01 - gptme-contrib created: community plugins including Twitter/X, Discord bot, email tools, consortium (multi-agent)
- 2024-12 - gptme-agent-template v0.3: Template for persistent agents
- 2024-11 - Ecosystem expansion: gptme-webui, gptme-rag, gptme.vim, Bob created (first autonomous agent)
- 2024-10 - First viral tweet bringing widespread attention
- 2024-08 - Show HN, Anthropic Claude support, tmux tool
- 2023-09 - Initial public release on HN, Reddit, Twitter
- 2023-03 - Initial commit - one of the first agent CLIs
For more history, see the Timeline and Changelog.
> **Note**
> The screencasts below are from 2023. gptme has evolved a lot since then! For up-to-date examples and screenshots, see the Documentation. We're working on automated demo generation: #1554.
| Fibonacci | Snake with curses |
|---|---|
| Mandelbrot with curses | Answer question from URL |
| Terminal UI | Web UI |
You can find more Demos and Examples in the documentation.
- 💻 Code execution
- 🧩 Read, write, and change files
  - Makes incremental changes with the patch tool.
- 🌐 Search and browse the web
  - Can use a browser via Playwright with the browser tool.
- 👀 Vision
  - Can see images referenced in prompts, screenshots of your desktop, and web pages.
- 🔄 Self-correcting
  - Output is fed back to the assistant, allowing it to respond and self-correct.
- 📚 Lessons system
  - Contextual guidance and best practices automatically included when relevant.
  - Keyword, tool, and pattern-based matching.
  - Adapts to interactive vs autonomous modes.
  - Extend with your own lessons and skills.
- 🤖 Support for many LLM providers
  - Anthropic (Claude), OpenAI (GPT), Google (Gemini), xAI (Grok), DeepSeek, and more.
  - Use OpenRouter for access to 100+ models, or serve locally with `llama.cpp`.
- 🌐 Web UI and REST API
  - Modern web interface at chat.gptme.org (gptme-webui)
  - Simple built-in web UI included in the Python package.
  - Server with REST API.
  - Standalone executable builds available with PyInstaller.
- 💻 Computer use (see #216)
  - Give the assistant access to a full desktop, allowing it to interact with GUI applications.
- 🗣️ Text-to-Speech — locally generated using Kokoro (no cloud required).
- 🔊 Tool sounds — pleasant notification sounds for different tool operations.
  - Enable with `GPTME_TOOL_SOUNDS=true`.
gptme equips the AI with a rich set of built-in tools:
| Tool | Description |
|---|---|
| `shell` | Execute shell commands directly in your terminal |
| `ipython` | Run Python code with access to your installed libraries |
| `read` | Read files and directories |
| `save` / `append` | Create or update files |
| `patch` / `morph` | Make incremental edits to existing files |
| `browser` | Search and navigate the web via Playwright |
| `vision` | Process and analyze images |
| `screenshot` | Capture screenshots of your desktop |
| `rag` | Retrieve context from local files (Retrieval-Augmented Generation) |
| `gh` | Interact with GitHub via the GitHub CLI |
| `tmux` | Run long-lived commands in persistent terminal sessions |
| `computer` | Full desktop access for GUI interactions |
| `subagent` | Spawn sub-agents for parallel or isolated tasks |
| `chats` | Reference and search past conversations |
| `tts` | Text-to-speech output |
| `youtube` | Fetch and process YouTube video transcripts |
Use `/tools` during a conversation to see all available tools and their status.
gptme has a layered extensibility system that lets you tailor it to your workflow:
Plugins — extend gptme with custom tools, hooks, and commands via Python packages:
```toml
# gptme.toml
[plugins]
paths = ["~/.config/gptme/plugins", "./plugins"]
enabled = ["my_plugin"]
```

Skills — lightweight workflow bundles (Anthropic format) that auto-load when mentioned by name. Great for packaging reusable instructions and helper scripts without writing Python.
Lessons — contextual guidance that auto-injects into conversations based on keywords, tools, and patterns. Write your own to capture team best-practices or domain knowledge.
Hooks — run custom code at key lifecycle events (before/after tool calls, on conversation start, etc.) without a full plugin.
gptme-contrib — community-contributed plugins, packages, scripts, and lessons:
| Plugin | Description |
|---|---|
| gptme-consortium | Multi-model consensus decision-making |
| gptme-imagen | Multi-provider image generation |
| gptme-lsp | Language Server Protocol integration |
| gptme-ace | ACE-inspired context optimization |
| gptme-gupp | Work state persistence across sessions |
MCP (Model Context Protocol) — use any MCP server as a tool source:
```sh
pipx install gptme  # MCP support included by default
```

gptme can discover and dynamically load MCP servers, giving the agent access to databases, APIs, file systems, and any other MCP-compatible tool. See the MCP docs for server configuration.
ACP (Agent Client Protocol) — use gptme as a coding agent directly from your editor:
```sh
pipx install 'gptme[acp]'
```

This makes gptme available as a drop-in coding agent in Zed and JetBrains IDEs. Your editor sends requests, gptme executes with its full toolset (shell, browser, files, etc.) and streams results back.
gptme is designed to run not just interactively but as a persistent autonomous agent. The gptme-agent-template provides a complete scaffold for building your own:
- Persistent workspace — git-tracked brain across sessions
- Run loops — scheduled or event-driven autonomous operation
- Task management — structured task queue with GTD-style metadata
- Meta-learning — lessons system captures patterns and improves over time
- Multi-agent coordination — file leases, message bus, and work claiming for concurrent agents
Bob is the reference implementation — an autonomous AI agent that has completed 1000+ sessions, contributes to open source, and manages its own tasks. Bob and Alice are sibling agents forked from the same architecture — each improving themselves and collaborating with each other, hinting at the broader team of AI agents gptme enables.
- 🖥 Development: Write and run code faster with AI assistance.
- 🎯 Shell Expert: Get the right command using natural language (no more memorizing flags!).
- 📊 Data Analysis: Process and analyze data directly in your terminal.
- 🎓 Interactive Learning: Experiment with new technologies or codebases hands-on.
- 🤖 Agents & Tools: Build long-running autonomous agents for real work.
- 🔬 Research: Automate literature review, data collection, and analysis pipelines.
- ⭐ One of the first agent CLIs created (Spring 2023) that is still in active development.
- 🧰 Easy to extend
- 🧪 Extensive testing, high coverage.
- 🧹 Clean codebase, checked and formatted with `mypy`, `ruff`, and `pyupgrade`.
- 🤖 GitHub Bot to request changes from comments! (see #16)
- Operates in this repo! (see #18 for example)
- Runs entirely in GitHub Actions.
- 📊 Evaluation suite for testing capabilities of different models.
- 📝 gptme.vim for easy integration with vim.
- 🖥 gptme-tauri — desktop app wrapping gptme for easy local use (WIP)
- ☁️ gptme.ai — managed cloud service for running gptme agents (WIP; still self-hostable by running `gptme-server` + `gptme-webui` yourself)
- 🌳 Tree-based conversation structure (see #17)
- 📜 RAG to automatically include context from local files (see #59)
- 🏆 Advanced evals for testing frontier capabilities
- Python 3.10 or newer
- An API key for at least one LLM provider:
  - Anthropic (set `ANTHROPIC_API_KEY`)
  - OpenAI (set `OPENAI_API_KEY`)
  - OpenRouter (set `OPENROUTER_API_KEY`)
  - Local models via `llama.cpp` (no key required — see providers docs)
For full setup instructions, see the Getting Started guide.
```sh
# With pipx (recommended, requires Python 3.10+)
pipx install gptme

# With uv
uv tool install gptme

# With optional extras
pipx install 'gptme[browser]'  # Playwright for web browsing
pipx install 'gptme[all]'      # Everything

# Latest from git with all extras
uv tool install 'git+https://github.com/gptme/gptme.git[all]'
```

Then start a chat:

```sh
gptme
```

You'll be greeted with a prompt. Type your request and gptme will respond, using tools as needed.
```sh
# Create a particle effect visualization
gptme 'write an impressive and colorful particle effect using three.js to particles.html'

# Generate visual art
gptme 'render mandelbrot set to mandelbrot.png'

# Get configuration suggestions
gptme 'suggest improvements to my vimrc'

# Process media files
gptme 'convert to h265 and adjust the volume' video.mp4

# Code assistance from git diffs
git diff | gptme 'complete the TODOs in this diff'

# Fix failing tests
make test | gptme 'fix the failing tests'

# Auto-approve tool confirmations (user can still watch and interrupt)
gptme -y 'run the test suite and fix any failing tests'

# Fully non-interactive/autonomous mode (no user interaction possible, safe for scripts/CI)
gptme -n 'run the test suite and fix any failing tests'
```

For more, see the Getting Started guide and the Examples in the documentation.
Create `~/.config/gptme/config.toml`:

```toml
[user]
name = "User"
about = "I am a curious human programmer."
response_preference = "Don't explain basic concepts"

[prompt]
# Additional files to always include as context
# files = ["~/notes/llm-tips.md"]

[env]
# Set your default model
# MODEL = "anthropic/claude-sonnet-4-20250514"
# MODEL = "openai/gpt-4o"
```

For all options, see the configuration docs.
```
$ gptme --help
Usage: gptme [OPTIONS] [PROMPTS]...

  gptme is a chat-CLI for LLMs, empowering them with tools to run shell
  commands, execute code, read and manipulate files, and more.

  If PROMPTS are provided, a new conversation will be started with it. PROMPTS
  can be chained with the '-' separator.

  The interface provides user commands that can be used to interact with the
  system.

  Available commands:
    /undo         Undo the last action
    /log          Show the conversation log
    /edit         Edit the conversation in your editor
    /rename       Rename the conversation
    /fork         Create a copy of the conversation
    /summarize    Summarize the conversation
    /replay       Replay tool operations
    /export       Export conversation as HTML
    /model        Show or switch the current model
    /models       List available models
    /tokens       Show token usage and costs
    /context      Show context token breakdown
    /tools        Show available tools
    /commit       Ask assistant to git commit
    /compact      Compact the conversation
    /impersonate  Impersonate the assistant
    /restart      Restart gptme process
    /setup        Setup gptme
    /help         Show this help message
    /exit         Exit the program

  See docs for all commands: https://gptme.org/docs/commands.html

  Keyboard shortcuts:
    Ctrl+X Ctrl+E  Edit prompt in your editor
    Ctrl+J         Insert a new line without executing the prompt

Options:
  --name TEXT            Name of conversation. Defaults to generating a random
                         name.
  -m, --model TEXT       Model to use, e.g. openai/gpt-5, anthropic/claude-
                         sonnet-4-20250514. If only provider given then a
                         default is used.
  -w, --workspace TEXT   Path to workspace directory. Pass '@log' to create a
                         workspace in the log directory.
  --agent-path TEXT      Path to agent workspace directory.
  -r, --resume           Load most recent conversation.
  -y, --no-confirm       Skip all confirmation prompts.
  -n, --non-interactive  Non-interactive mode. Implies --no-confirm.
  --system TEXT          System prompt. Options: 'full', 'short', or something
                         custom.
  -t, --tools TEXT       Tools to allow as comma-separated list. Available:
                         append, browser, chats, choice, computer, gh,
                         ipython, morph, patch, rag, read, save, screenshot,
                         shell, subagent, tmux, tts, vision, youtube.
  --tool-format TEXT     Tool format to use. Options: markdown, xml, tool
  --no-stream            Don't stream responses
  --show-hidden          Show hidden system messages.
  -v, --verbose          Show verbose output.
  --version              Show version and configuration information
  --help                 Show this message and exit.
```

gptme is more than a CLI — it's a platform with a growing ecosystem:
| Project | Description |
|---|---|
| gptme-webui | Modern React web interface, available at chat.gptme.org |
| gptme-contrib | Community plugins, packages, scripts, and lessons |
| gptme-agent-template | Template for building persistent autonomous agents |
| gptme-rag | RAG integration for semantic search over local files |
| gptme.vim | Vim plugin for in-editor gptme integration |
| gptme-tauri | Desktop app (WIP) |
| gptme.ai | Managed cloud service (WIP) |
Community agents powered by gptme:
- Bob — autonomous AI agent, 1000+ sessions, contributes to open source
- Alice — sibling agent forked from the same architecture, collaborates with Bob
- Discord — ask questions, share what you've built, discuss features
- GitHub Discussions — longer-form conversation and ideas
- X/Twitter — updates and announcements
Contributions welcome! See the contributing guide.




