Follow these instructions to run the demo:
- Enable PowerShell scripts. Open PowerShell in administrator mode and run:

  ```powershell
  Set-ExecutionPolicy -Scope CurrentUser Unrestricted -Force
  ```
- Open Anaconda PowerShell Prompt in this folder. If you don't have Anaconda PowerShell, use regular PowerShell.
- Install platform dependencies:

  ```powershell
  ..\install_platform_deps.ps1
  ```

  The above script will install:

  - Anaconda for x86-64. We use x86-64 Python for compatibility with other Python packages. However, inference in ONNX Runtime will, for the most part, run natively as ARM64 code.
  - Git for Windows. This is required to load the AI Hub Models package, which contains the application code used by this demo.
- Open (or re-open) Anaconda PowerShell Prompt to continue.
- Create & activate your Python environment:

  ```powershell
  ..\activate_venv.ps1 -name AI_Hub
  ```

- Install Python packages:

  ```powershell
  ..\install_python_deps.ps1 -model stable-diffusion-v2-1
  ```

  In your currently active Python environment, the above script will install:

  - AI Hub Models and the model dependencies for Stable Diffusion.
  - The onnxruntime-qnn package, both to enable native ARM64 ONNX inference and to enable targeting Qualcomm NPUs.
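  A quick way to confirm that onnxruntime-qnn installed correctly is to check whether the QNN execution provider is registered. This is a minimal sketch (the helper name is ours, not part of the packages above):

  ```python
  # Check whether ONNX Runtime is importable and the QNN execution
  # provider (from the onnxruntime-qnn package) is registered.
  def qnn_available():
      """True/False if onnxruntime is installed; None if it isn't."""
      try:
          import onnxruntime as ort
      except ImportError:
          return None
      return "QNNExecutionProvider" in ort.get_available_providers()

  if __name__ == "__main__":
      print("QNN execution provider available:", qnn_available())
  ```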
- Download the `PRECOMPILED_QNN_ONNX` model files from the Qualcomm HuggingFace Repo for your target device; e.g., X Elite users choose `Snapdragon® X Elite`.

- Extract the zip to the `<APP ROOT>/model` directory. The expected directory structure is:

  ```
  model/
  |_ metadata.yaml
  |_ text_encoder.onnx
  |_ text_encoder_qairt_context.bin
  |_ unet.onnx
  |_ unet_qairt_context.bin
  |_ vae.onnx
  |_ vae_qairt_context.bin
  ```
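  Before running the demo, you can sanity-check the extracted layout against the structure above. This helper is purely illustrative (demo.py does its own loading); the function name is ours:

  ```python
  from pathlib import Path

  # Files the demo expects inside <APP ROOT>/model, per the layout above.
  EXPECTED = [
      "metadata.yaml",
      "text_encoder.onnx", "text_encoder_qairt_context.bin",
      "unet.onnx", "unet_qairt_context.bin",
      "vae.onnx", "vae_qairt_context.bin",
  ]

  def missing_model_files(model_dir):
      """Return the expected files that are absent from model_dir."""
      root = Path(model_dir)
      return [name for name in EXPECTED if not (root / name).is_file()]

  if __name__ == "__main__":
      missing = missing_model_files("model")
      print("Missing files:", missing or "none")
  ```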
- Run the demo:

  ```powershell
  python demo.py --prompt "A girl taking a walk at sunset" --num-steps 20
  ```
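Under the hood, each of the three models is served by an ONNX Runtime session targeting the Qualcomm NPU via the QNN execution provider. The sketch below shows how such a session could be created; the function names are illustrative (not the AI Hub Models API), while the provider name and `backend_path` option come from the onnxruntime-qnn package:

```python
# Sketch: creating a QNN-backed ONNX Runtime session for one of the
# extracted models (e.g. model/unet.onnx). Assumes onnxruntime-qnn
# is installed; "QnnHtp.dll" selects the HTP (NPU) backend on Windows.
def qnn_provider_config():
    """Execution-provider config targeting the Qualcomm NPU."""
    providers = ["QNNExecutionProvider"]
    provider_options = [{"backend_path": "QnnHtp.dll"}]
    return providers, provider_options

def load_model(onnx_path):
    """Create an InferenceSession on the NPU (requires onnxruntime-qnn)."""
    import onnxruntime as ort  # imported lazily so the sketch parses without it
    providers, provider_options = qnn_provider_config()
    return ort.InferenceSession(
        onnx_path, providers=providers, provider_options=provider_options
    )
```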