Build • Break • Secure Large Language Models with a Fully Automated Offensive + Defensive Cyber Range
Created by Mr-Infect
A next-generation AI Security Playground for Students, Engineers, & Red Teams
AI Cyber Range – OWASP Top 10 for LLMs is a cutting-edge AI Penetration Testing Lab engineered to simulate real-world LLM vulnerabilities in a safe, automated, Docker-powered environment.
This platform enables:
- AI Security researchers to experiment with adversarial AI attacks
- Red teamers to practice offensive AI techniques
- Educators to demonstrate LLM risks interactively
- Engineers to validate AI product security
Every module replicates attack paths and exploitation vectors aligned with the OWASP Top 10 for Large Language Models — making it one of the most comprehensive AI Security Training Environments available today.
(Expertly curated for Google Discover, GitHub Search, AI Security queries)
AI Cyber Range, OWASP Top 10 for LLMs, AI Penetration Testing Lab, LLM Red Team Training, Prompt Injection Testing Lab, AI Security Playground, AI Threat Simulation, LLM Vulnerability Research, AI Security Engineer Toolkit, Adversarial Machine Learning Lab, AI Offensive Security, AI Security Hands-On Training, Ethical Hacking with LLMs, Secure AI Application Development, AI Attack Surface Modeling, LLM API Exploitation, LLM Model Theft Simulation, Training Data Poisoning Scenarios
- 🚀 One-click setup with automated dependency installations
- 🧱 Full Docker isolation for every vulnerability
- 🎯 Covers all OWASP LLM Top 10 categories
- 🧠 Progression from Beginner → Advanced Attack Scenarios
- 🎨 Premium ASCII-driven CLI UX (Rich Text + Inquirer)
- 🔐 Randomized SHA-256 flags per session
- 🔁 Auto-resetting labs on challenge completion
- 🌐 100% offline, secure, self-contained
- 🧪 Safe adversarial model behavior simulations
- 📡 Local browser interface for each vulnerable LLM endpoint
Designed for learning, teaching, experimenting, and real-world validation.
AI-cyber-range/
│
├── config/ # Lab configurations (YAML)
│ └── labs.yaml
│
├── scripts/
│ ├── setup.sh # Automated installer
│ └── labctl.py # Main CLI + lab manager
│
├── common/
│ └── base.Dockerfile # Shared base image
│
├── dockerfiles/ # Per-vulnerability Dockerfiles
│ └── LLM01–LLM10/
│
├── labs/ # Individual lab implementations
│ ├── LLM01/… # Prompt Injection
│ ├── LLM02/… # Output Handling
│ ├── …
│ └── LLM10/… # Model Theft
│
└── README.md # You're reading it- Ubuntu / Debian / WSL 2 (recommended)
- Python 3.10+
- Docker Engine + Docker Compose
- Git
Everything else is automated.
git clone https://github.com/Mr-Infect/AI-cyber-range.git
cd AI-cyber-range
chmod +x scripts/setup.sh
./scripts/setup.sh- Installs Python, pip, virtualenv
- Installs Docker + Compose
- Fixes Docker permissions
- Validates container runtime
- Builds the common base image
- Prepares labs for orchestration
python3 scripts/labctl.py- Pick a vulnerability (LLM01–LLM10)
- Choose a scenario
- Select difficulty
- A Dockerized LLM instance spins up
- Visit the local URL
- Exploit the lab → Extract the flag
- Lab resets → Repeat
Fast. Clean. Secure.
? Vulnerability: LLM01 - Prompt Injection
? Scenario: lab01_basic_direct
? Difficulty: easy
⠋ Deploying environment...
Lab ready at: http://localhost:8001
Paste your captured flag:git clone https://github.com/Mr-Infect/AI-cyber-range.git && \
cd AI-cyber-range && \
chmod +x scripts/setup.sh && \
./scripts/setup.sh && \
python3 scripts/labctl.py| ID | Vulnerability | Focus Area |
|---|---|---|
| LLM01 | Prompt Injection | Input manipulation, bypasses |
| LLM02 | Insecure Output Handling | XSS, HTML/JS bleeding |
| LLM03 | Training Data Poisoning | Compromised datasets |
| LLM04 | Model Denial of Service | Token floods, infinite loops |
| LLM05 | Supply Chain Vulnerability | Malicious dependencies |
| LLM06 | Sensitive Data Exposure | PII, keys, internal secrets |
| LLM07 | Unauthorized Code Execution | Shell/code execution via prompts |
| LLM08 | Excessive Agency | Unsafe tool-use, over-delegation |
| LLM09 | Overreliance on LLMs | Bad automation + blind trust |
| LLM10 | Model Theft | Output-based model extraction attacks |
- Cybersecurity Students
- AI/ML Engineers
- Penetration Testers
- Red Team Operators
- SOC Analysts exploring AI threats
- Security Trainers & Professors
- AI Product Teams validating safety
If you work in AI + Security, this range is your sandbox.
- Python FastAPI (vulnerable LLM endpoints)
- Docker + Docker Compose
- YAML Configuration Management
- HTML/CSS Micro-Frontends
- CLI Engine: Rich + Inquirer
- SHA256 Random Flag Generator
+--------------------------------------------------+
| labctl.py |
| Orchestration • User Interface • IA Logic |
+--------------------------------------------------+
│
▼
+------------------------+
| Docker Compose |
+------------------------+
│
▼
+--------------------------------+
| Vulnerable LLM Microservice |
| (FastAPI + HTML UI) |
+--------------------------------+
│
▼
Local Browser Interface
sudo systemctl start docker
sudo usermod -aG docker $USER
newgrp dockersudo apt remove containerd
sudo apt install containerd.ioRe-run:
python3 scripts/labctl.pyYour support fuels new labs, advanced difficulty modes, and future LLM attack modules.
Pull requests, issue reports, and new vulnerability ideas are always welcome.
- Submit bugs
- Suggest improvements
- Build new lab modules
- Extend the AI attack catalog
This project grows through collaboration.
This project is released under the MIT License. Use it, customize it, fork it — just credit Mr-Infect.
Just say the word.