# Mods

AI for the command line, built for pipelines.

*A GIF of mods running.*

Large Language Model (LLM) based AI is useful for ingesting command output and formatting results in Markdown, JSON, and other text-based formats. Mods is a tool to add a sprinkle of AI to your command line and make your pipelines artificially intelligent.

It works great with LLMs running locally through [LocalAI]. You can also use [OpenAI], [Cohere], [Groq], or [Azure OpenAI].

[LocalAI]: https://github.com/go-skynet/LocalAI
[OpenAI]: https://platform.openai.com/account/api-keys
[Cohere]: https://dashboard.cohere.com/api-keys
[Groq]: https://console.groq.com/keys
[Azure OpenAI]: https://azure.microsoft.com/en-us/products/cognitive-services/openai-service

### Installation

Use a package manager:

```bash
# macOS or Linux
brew install charmbracelet/tap/mods

# Windows (with Winget)
winget install charmbracelet.mods

# Arch Linux (btw)
yay -S mods

# Nix
nix-shell -p mods
```

#### Debian/Ubuntu

```bash
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/charm.gpg
echo "deb [signed-by=/etc/apt/keyrings/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list
sudo apt update && sudo apt install mods
```

#### Fedora/RHEL

```bash
echo '[charm]
name=Charm
baseurl=https://repo.charm.sh/yum/
enabled=1
gpgcheck=1
gpgkey=https://repo.charm.sh/yum/gpg.key' | sudo tee /etc/yum.repos.d/charm.repo
sudo yum install mods
```

Or, download it:

- [Packages][releases] are available in Debian and RPM formats
- [Binaries][releases] are available for Linux, macOS, and Windows

[releases]: https://github.com/charmbracelet/mods/releases

Or, just install it with `go`:

```sh
go install github.com/charmbracelet/mods@latest
```
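
If you install with `go`, make sure Go's binary directory is on your `PATH` so your shell can find `mods`. A quick sketch, assuming a default Go setup:

```bash
# go install drops binaries in $GOPATH/bin by default.
export PATH="$PATH:$(go env GOPATH)/bin"
mods --help
```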

#### Shell Completions

All the packages and archives come with pre-generated completion files for Bash, ZSH, Fish, and PowerShell.

If you built it from source, you can generate them with:

```bash
mods completion bash -h
mods completion zsh -h
mods completion fish -h
mods completion powershell -h
```

If you installed via a package (Homebrew, Debian, etc.), the completions should be set up automatically, provided your shell is configured properly.
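
Each of those `-h` commands prints shell-specific install instructions. To wire up Bash completions by hand, something like the following should work; the target path is illustrative and varies by system:

```bash
# Write the generated completion script somewhere bash-completion will pick it up.
mods completion bash | sudo tee /etc/bash_completion.d/mods > /dev/null
```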

## What Can It Do?

Mods works by reading standard input and prefacing it with a prompt supplied in the `mods` arguments. It sends the input text to an LLM and prints the result, optionally asking the LLM to format the response as Markdown. This gives you a way to "question" the output of a command. Mods also works with standard input or an argument-supplied prompt on its own.

Be sure to check out the [examples](examples.md) and the list of all the [features](features.md).

Mods works with OpenAI-compatible endpoints. By default, Mods is configured to support OpenAI's official API and a LocalAI installation running on port 8080. You can configure additional endpoints in your settings file by running `mods --settings`.
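
For example, you can pipe any command's output straight into `mods`; the prompts here are purely illustrative:

```bash
# Question a command's output; -f asks the LLM to format its response (Markdown by default).
git log --oneline -n 10 | mods -f "summarize these commits"

# No stdin required; an argument-supplied prompt works on its own.
mods "write a haiku about pipelines"
```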

## Saved Conversations

Conversations are saved locally by default. Each conversation has a SHA-1 identifier and a title (like `git`!).

*A GIF listing and showing saved conversations.*
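
The conversation flags in the usage section below cover the basics. A quick sketch, with an illustrative prompt and input file:

```bash
mods "what does this stack trace mean?" < crash.log  # runs and saves a new conversation
mods -l                                              # list saved conversations
mods -C "suggest a fix"                              # continue the last conversation
```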

Check out [`./features.md`](./features.md) for more details.

## Usage

- `-m`, `--model`: Specify the Large Language Model to use
- `-M`, `--ask-model`: Ask which model to use via an interactive prompt
- `-f`, `--format`: Ask the LLM to format the response in a given format
- `--format-as`: Specify the format for the output (used with `--format`)
- `-P`, `--prompt`: Include the prompt from the arguments and stdin; truncate stdin to the specified number of lines
- `-p`, `--prompt-args`: Include the prompt from the arguments in the response
- `-q`, `--quiet`: Only output errors to standard error
- `-r`, `--raw`: Print the raw response without syntax highlighting
- `--settings`: Open settings
- `-x`, `--http-proxy`: Use an HTTP proxy to connect to the API endpoints
- `--max-retries`: Maximum number of retries
- `--max-tokens`: Specify the maximum number of tokens with which to respond
- `--no-limit`: Do not limit the response tokens
- `--role`: Specify the role to use (see [custom roles](#custom-roles))
- `--word-wrap`: Wrap output at width (defaults to 80)
- `--reset-settings`: Restore settings to default
- `--theme`: Theme to use in the forms; valid choices are `charm`, `catppuccin`, `dracula`, and `base16`
- `--status-text`: Text to show while generating

#### Conversations

- `-t`, `--title`: Set the title for the conversation
- `-l`, `--list`: List saved conversations
- `-c`, `--continue`: Continue from the last response, or from a specific title or SHA-1
- `-C`, `--continue-last`: Continue the last conversation
- `-s`, `--show`: Show the saved conversation for the given title or SHA-1
- `-S`, `--show-last`: Show the previous conversation
- `--delete-older-than=`: Delete conversations older than the given duration (`10d`, `1mo`)
- `--delete`: Delete the saved conversations for the given titles or SHA-1s
- `--no-cache`: Do not save conversations

#### MCP

- `--mcp-list`: List all available MCP servers
- `--mcp-list-tools`: List all available tools from enabled MCP servers
- `--mcp-disable`: Disable specific MCP servers

#### Advanced

- `--fanciness`: Level of fanciness
- `--temp`: Sampling temperature
- `--topp`: Top P value
- `--topk`: Top K value

## Custom Roles

Roles allow you to set system prompts. Here is an example of a `shell` role:

```yaml
roles:
  shell:
    - you are a shell expert
    - you do not explain anything
    - you simply output one liners to solve the problems you're asked
    - you do not provide any explanation whatsoever, ONLY the command
```

Then, use the custom role in `mods`:

```sh
mods --role shell list files in the current directory
```

## Setup

### Open AI

Mods uses GPT-4 by default and will fall back to GPT-3.5 Turbo.

Set the `OPENAI_API_KEY` environment variable. If you don't have one yet, you can grab it from the [OpenAI website](https://platform.openai.com/account/api-keys).

Alternatively, set the `AZURE_OPENAI_KEY` environment variable to use Azure OpenAI. Grab a key from [Azure](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service).

### Cohere

Cohere provides enterprise-optimized models.

Set the `COHERE_API_KEY` environment variable. If you don't have one yet, you can get it from the [Cohere dashboard](https://dashboard.cohere.com/api-keys).

### Local AI

LocalAI allows you to run models locally. Mods works with the GPT4ALL-J model as set up in [this tutorial](https://github.com/go-skynet/LocalAI#example-use-gpt4all-j-model).

### Groq

Groq provides models powered by their LPU inference engine.

Set the `GROQ_API_KEY` environment variable. If you don't have one yet, you can get it from the [Groq console](https://console.groq.com/keys).
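
Whichever provider you use, setup comes down to exporting the right environment variable before invoking `mods`. A minimal sketch for Groq, where the key value and prompt are placeholders:

```bash
export GROQ_API_KEY="gsk_..."    # placeholder; use your real key from the Groq console
mods -M "explain LPU inference"  # -M interactively asks which model to use
```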

### Gemini

Mods supports using Gemini models from Google. Set the `GOOGLE_API_KEY` environment variable. If you don't have one yet, you can get it from [Google AI Studio](https://aistudio.google.com/apikey).

## Contributing

See [contributing][contribute].

[contribute]: https://github.com/charmbracelet/mods/contribute

## Whatcha Think?

We’d love to hear your thoughts on this project. Feel free to drop us a note.

- [Twitter](https://twitter.com/charmcli)
- [The Fediverse](https://mastodon.social/@charmcli)
- [Discord](https://charm.sh/chat)

## License

[MIT](https://github.com/charmbracelet/mods/raw/main/LICENSE)

---

Part of [Charm](https://charm.sh).

Charm热爱开源 • Charm loves open source