This guide shows Windows users how to install Ollama and run qwen3:235b locally.
- OS: Windows 10 or later
- Disk: at least 220 GB free (model is about 142 GB, plus cache and headroom)
- RAM: 64 GB+ recommended (less can work, but it will be much slower)
- GPU: Optional, but a modern NVIDIA GPU helps a lot
- Stable internet for large model download
qwen3:235b is very large. A mid-range PC can still run it, but responses may be slow.
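Before pulling the model, it can help to confirm you actually have the disk headroom. A minimal Python sketch — the helper names are made up for this guide, and the 220 GB threshold mirrors the requirements list above:

```python
# Rough pre-flight check before pulling qwen3:235b.
import shutil

REQUIRED_FREE_GB = 220  # ~142 GB model plus cache and headroom (see list above)

def free_disk_gb(path: str = ".") -> float:
    """Free space in GB on the drive containing `path` (e.g. "C:\\")."""
    return shutil.disk_usage(path).free / 1024**3

def enough_disk(path: str = ".") -> bool:
    """True if the drive has at least REQUIRED_FREE_GB free."""
    return free_disk_gb(path) >= REQUIRED_FREE_GB

print(f"Free: {free_disk_gb():.1f} GB (need {REQUIRED_FREE_GB} GB)")
```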
Open PowerShell as Administrator and run:

```powershell
irm https://ollama.com/install.ps1 | iex
```

Alternatively, use the graphical installer:

- Open: https://ollama.com/download/windows
- Download OllamaSetup.exe
- Run the installer
Open a new PowerShell window and run:

```powershell
ollama --version
```

If you see a version number, the installation succeeded.
```powershell
ollama pull qwen3:235b
```

This can take a long time because the model is large.
```powershell
ollama run qwen3:235b
```

Type your prompt and press Enter. Exit the chat with:

```
/bye
```
```powershell
ollama list                 # list downloaded models
ollama ps                   # show currently loaded models
ollama show qwen3:235b      # show model details
ollama stop qwen3:235b      # unload the model from memory
```

Ollama runs a local API at http://localhost:11434.
```powershell
$body = @{
    model    = "qwen3:235b"
    messages = @(
        @{ role = "user"; content = "Give me a short summary of quantum computing." }
    )
    stream   = $false  # return one complete JSON reply instead of a stream
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri "http://localhost:11434/api/chat" -Method Post -Body $body -ContentType "application/json"
```

If responses are slow:

- Expected on lower-end hardware for this model size
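The same request can be made from Python using only the standard library. A minimal sketch, assuming the Ollama server is running on its default port — `build_chat_request` and `send_chat` are illustrative names for this guide, not Ollama APIs:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble the POST request for Ollama's /api/chat endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON reply, not a stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL, data=body,
        headers={"Content-Type": "application/json"},
    )

def send_chat(model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_chat_request(model, prompt)) as resp:
        return json.load(resp)["message"]["content"]

# Usage (requires a running server):
#   print(send_chat("qwen3:235b", "Give me a short summary of quantum computing."))
```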
- Close heavy apps (browser tabs, games, IDEs)
- Keep enough free RAM and disk space
- Reboot Windows and retry
If commands fail or the model will not load:

- Check model status: `ollama ps`
- Re-pull the model if the download was interrupted: `ollama pull qwen3:235b`
- Restart the terminal
- Reinstall Ollama from the official Windows installer
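Before reinstalling, it may be worth confirming whether the Ollama server is listening at all on its default port. A small Python sketch — the helper name is made up for this guide:

```python
import socket

def ollama_reachable(host: str = "localhost", port: int = 11434,
                     timeout: float = 1.0) -> bool:
    """True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Ollama API reachable:", ollama_reachable())
```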
Try a smaller Qwen3 model first:

```powershell
ollama run qwen3:8b
```

Then move to qwen3:30b, and finally qwen3:235b when hardware allows.
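That progression can be written down as a rough heuristic. The 64 GB cutoff follows the requirements list above; the smaller thresholds are illustrative guesses for this guide, not official figures:

```python
def pick_qwen3_tag(ram_gb: float) -> str:
    """Suggest a Qwen3 tag for a given amount of system RAM (rough heuristic)."""
    if ram_gb >= 64:       # the 64 GB+ recommendation from the requirements list
        return "qwen3:235b"
    if ram_gb >= 24:       # illustrative guess for the mid-size model
        return "qwen3:30b"
    return "qwen3:8b"

print(pick_qwen3_tag(64))  # a 64 GB machine can attempt qwen3:235b
```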
- Ollama Windows download: https://ollama.com/download/windows
- Ollama Qwen3 model page: https://ollama.com/library/qwen3
- Qwen3 235B model card: https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507