tencent-source/qwen3-235b-windows-tutorial
Qwen3:235B on Windows (Full Setup and Run Guide)

This guide shows Windows users how to install Ollama and run qwen3:235b locally.

1) What You Need Before You Start

  • OS: Windows 10 or later
  • Disk: at least 220 GB free (model is about 142 GB, plus cache and headroom)
  • RAM: 64 GB+ recommended (less can work, but it will be much slower)
  • GPU: Optional, but a modern NVIDIA GPU helps a lot
  • Stable internet for large model download

qwen3:235b is very large. If your PC is not high-end, the model may still run, but responses will be slow.
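As a quick sanity check on the disk requirement above, here is a small Python sketch (assuming Python is installed; the 220 GB threshold comes from the list above, and the default path is an example you should point at the drive Ollama will use):

```python
import shutil

REQUIRED_GB = 220  # ~142 GB model plus cache and headroom, per the list above

def enough_disk(path=".", required_gb=REQUIRED_GB):
    """Return True if the drive holding `path` has at least `required_gb` free.

    On Windows, pass the drive Ollama will use, e.g. "C:\\".
    """
    free_gb = shutil.disk_usage(path).free / 1024 ** 3
    return free_gb >= required_gb
```

Run it before pulling the model to avoid a failed multi-hour download.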

2) Install Ollama on Windows

Option A (winget)

Open PowerShell and run:

winget install Ollama.Ollama

Option B (Installer)

  1. Open: https://ollama.com/download/windows
  2. Download OllamaSetup.exe
  3. Run the installer

3) Verify Ollama Installation

Open a new PowerShell window and run:

ollama --version

If you see a version number, the installation succeeded.
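If the version check fails, the usual cause is that `ollama` is not on your PATH yet. A minimal Python sketch of the same check (assuming Python is installed):

```python
import shutil

def ollama_on_path():
    """Return True if the `ollama` executable is visible on PATH."""
    return shutil.which("ollama") is not None
```

If this returns False, close and reopen the terminal (or reinstall Ollama) so the PATH update takes effect.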

4) Pull and Run Qwen3:235B

Pull model

ollama pull qwen3:235b

The download is roughly 142 GB, so this can take a long time depending on your connection.

Start chat in terminal

ollama run qwen3:235b

Type your prompt and press Enter.

Exit chat with:

/bye

5) Basic Commands You Will Use Often

ollama list               # show installed models
ollama ps                 # show models currently loaded in memory
ollama show qwen3:235b    # print model details (parameters, template, license)
ollama stop qwen3:235b    # unload the model from memory
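These commands are also easy to script. A hedged Python sketch that wraps `ollama list` and returns the installed model names (assumes `ollama` is on PATH; the sample output format in the test mirrors the NAME/ID/SIZE/MODIFIED columns the CLI prints):

```python
import subprocess

def parse_model_names(listing):
    """Extract the NAME column from `ollama list` output, skipping the header row."""
    return [line.split()[0] for line in listing.splitlines()[1:] if line.strip()]

def installed_models():
    """Run `ollama list` and return the installed model names."""
    out = subprocess.run(["ollama", "list"],
                         capture_output=True, text=True, check=True).stdout
    return parse_model_names(out)
```

You can use this, for example, to check whether qwen3:235b is already pulled before starting a download.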

6) Use Qwen3:235B from Local API (Windows)

Ollama runs a local API at http://localhost:11434.

PowerShell example

$body = @{
  model = "qwen3:235b"
  messages = @(
    @{ role = "user"; content = "Give me a short summary of quantum computing." }
  )
  stream = $false  # request one complete JSON reply; the API streams by default
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri "http://localhost:11434/api/chat" -Method Post -Body $body -ContentType "application/json"
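The same request can be made from Python using only the standard library. A minimal sketch (the `build_chat_payload` and `chat` helper names are this guide's own, not part of Ollama):

```python
import json
from urllib import request

API_URL = "http://localhost:11434/api/chat"

def build_chat_payload(prompt, model="qwen3:235b"):
    """Build the JSON body for a single-turn /api/chat request."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one complete JSON reply instead of a token stream
    })

def chat(prompt, model="qwen3:235b"):
    """POST the prompt to the local Ollama API and return the reply text."""
    req = request.Request(API_URL,
                          data=build_chat_payload(prompt, model).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Setting `stream` to false matters here: by default the endpoint streams one JSON object per token, which a single `json.loads` call cannot parse.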

7) Troubleshooting (Windows)

A) Very slow output

  • Expected on lower-end hardware for this model size
  • Close heavy apps (browser tabs, games, IDEs)
  • Keep enough free RAM and disk space

B) Model does not start or crashes

  • Reboot Windows and retry
  • Check model status:
ollama ps
  • Re-pull model if download was interrupted:
ollama pull qwen3:235b

C) Command not found (ollama)

  • Close and reopen the terminal so PATH changes take effect
  • Reinstall Ollama from the official Windows installer

8) Optional: If 235B Is Too Heavy for Your PC

Try a smaller Qwen3 model first:

ollama run qwen3:8b

Then move to qwen3:30b, and finally qwen3:235b when hardware allows.
