AI writes code.
Riza runs it.
Riza makes AI applications more powerful and reliable by giving LLMs the ability to run code.
$ pip install rizaio
import rizaio

riza = rizaio.Riza()

response = riza.command.exec(
    code="print('Hello, World!')",
    language="python",
)
print(response.stdout)
# > Hello, World!
Speed and power, with control. Let AI run code fast without risking your systems.
Secure isolation & control
Never risk your production environment or secrets. Run AI code on Riza, and configure env variables and network access each time code runs.
No cold starts
Code starts executing in under 10ms after Riza receives it. No sandbox start-up or deploy needed.
API-first design
Access all features through thoughtfully designed RESTful API endpoints. Plus, SDKs for Python, TypeScript, and Go.
Serious Scale. Riza executes LLM-generated code billions of times per month for AI-native companies.
1
Your LLM writes code
The latest frontier models perform well at generating TypeScript and Python.
2
You POST the code to Riza
Use our REST APIs or SDKs. Configure stdin, files, network access, and environment variables in your request.
3
Riza safely runs the code and returns the output
Your response includes execution information, including exit code, stdout, and stderr.
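Putting the three steps together, the request body might look like the sketch below. Only `code` and `language` appear in the example above; the `stdin`, `env`, and `http` field names here are assumptions for illustration, so check the Riza API reference for the exact request schema. The sketch only builds the payload rather than sending it:

```python
# Sketch of a JSON body you might POST to Riza's exec endpoint.
# Field names beyond "code" and "language" are assumptions for
# illustration -- consult the API docs for the real schema.
import json

payload = {
    "language": "python",
    "code": "import os, sys\nprint(os.environ['GREETING'], sys.stdin.read())",
    "stdin": "world",              # assumed: data piped to the program's stdin
    "env": {"GREETING": "hello"},  # assumed: per-run environment variables
    "http": {"allow": []},         # assumed: deny all outbound network access
}

body = json.dumps(payload)
print(body)
```

The response then carries the execution information described above: exit code, stdout, and stderr.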
"We use Riza to run Python millions of times a day. That means our engineering team can stay focused on building the product, rather than managing low-level code isolation."
Todd Berman, CEO, Hyperscout
Unlock new capabilities in your AI apps
Analyze any data and build visualizations
Calculate stats and generate charts and graphs for diverse data with unstable schemas. With Riza, LLMs can use popular libraries like pandas, matplotlib, and seaborn.
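For example, here is the kind of snippet an LLM might generate and hand to Riza for a quick stats pass over messy data. The dataset and column names are invented for illustration:

```python
# LLM-generated analysis code of the sort you would execute on Riza:
# summarize a small dataset with pandas, tolerating a missing value.
import pandas as pd

rows = [
    {"region": "north", "revenue": 120.0},
    {"region": "south", "revenue": 95.5},
    {"region": "north", "revenue": None},  # unstable schema: missing value
]
df = pd.DataFrame(rows)

# Mean revenue per region; pandas skips NaN values by default.
summary = df.groupby("region")["revenue"].mean()
print(summary.to_dict())
# > {'north': 120.0, 'south': 95.5}
```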
Let agents run code and write tools

LLMs can write code, but they can't safely execute it, or make it reusable, without human review. Riza provides the AI-first infrastructure that makes agents more capable, reliable, and efficient.

Plus, empower agents to write their own tools.

Extract & transform data from any source
Use LLMs to write code that extracts structured data from heterogeneous, previously unseen sources and transforms it into any format. Run it on Riza.
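As a sketch, this is the shape of extraction code an LLM might write for a previously unseen source. The log format here is invented for illustration, and in practice you would run the generated parser on Riza rather than locally:

```python
# Extract structured records from a semi-structured text source and
# transform them to JSON -- the kind of throwaway parser an LLM can
# write per-source. The input format is invented.
import json
import re

raw = """\
order #1001 | qty: 3 | total: $42.50
order #1002 | qty: 1 | total: $9.99
"""

pattern = re.compile(r"order #(\d+) \| qty: (\d+) \| total: \$([\d.]+)")
records = [
    {"id": int(m[1]), "qty": int(m[2]), "total": float(m[3])}
    for m in pattern.finditer(raw)
]
print(json.dumps(records))
```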
Run code written by your users
You can’t trust humans, either. Let your users run their custom code on your platform safely using Riza.
Run evals on generated code
Your AI workflow depends on whether the LLM-generated code actually works. Test prompt changes and model upgrades by running evals on Riza.
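A minimal eval harness has the shape below. A local subprocess call stands in for `riza.command.exec` so the sketch is self-contained; in production you would swap in the Riza call and compare `response.stdout` and the exit code the same way:

```python
# Minimal eval loop: run each candidate snippet and compare actual
# stdout/exit code to expected values. subprocess stands in for
# riza.command.exec here so the sketch runs locally.
import subprocess
import sys

cases = [
    {"code": "print(2 + 2)", "expect_stdout": "4\n", "expect_exit": 0},
    {"code": "print(2 + 3)", "expect_stdout": "4\n", "expect_exit": 0},  # should fail
]

def run(code: str):
    # Stand-in executor; replace with riza.command.exec(code=code, language="python").
    proc = subprocess.run([sys.executable, "-c", code], capture_output=True, text=True)
    return proc.stdout, proc.returncode

results = []
for case in cases:
    stdout, exit_code = run(case["code"])
    passed = stdout == case["expect_stdout"] and exit_code == case["expect_exit"]
    results.append(passed)

print(results)
# > [True, False]
```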
Scale on our cloud or yours
Easy to bring on-prem
For those who want even more control, self-host Riza on your own infrastructure in minutes, with the same seamless experience as our cloud.
Trusted at scale
AI-native companies rely on us to power enterprise workloads, handling billions of code executions per month.
Effortless operations
Deployed as a horizontally-scalable Kubernetes service, no DevOps engineers were harmed in the making of this film.
Ready to get started?
Sign up and get started for free. Or get in touch with us to learn more.