Commit 848e7ac
[SDPA-CUDNN] Make CuDNN Attention Opt in (#138587)
[SDPA-CUDNN] Make CuDNN Attention Opt in (#138522)
# Summary
Currently we have a `cudnn_order` that says that, on H100 with a new enough cuDNN (we ship version 9.1 in OSS builds), the CuDNN attention backend should be tried first. We have already encountered a few bugs with the 2.5 release:
1. #138529
2. huggingface/diffusers#9704
3. #138354
In light of the above, we are going to make the CuDNN backend opt-in by default.
Opting in is easy with the context manager for selecting backends, e.g.:
```python
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

# Restrict SDPA to the cuDNN backend within this scope
with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v)
```
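For reference, `sdpa_kernel` also accepts a list of backends, which lets a caller opt in to CuDNN attention while keeping a fallback; a minimal sketch, assuming the list-accepting form of `sdpa_kernel` available in recent releases:

```python
# Continuing from the imports above: allow cuDNN attention, but fall back
# to the Math backend when cuDNN cannot serve the inputs (list form).
with sdpa_kernel([SDPBackend.CUDNN_ATTENTION, SDPBackend.MATH]):
    out = F.scaled_dot_product_attention(q, k, v)
```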
This PR moves the CuDNN backend to the lowest precedence in the backend list, meaning that the Math backend will always be chosen ahead of it unless the other backends are disabled (which is done via the context manager above).
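For completeness, recent PyTorch releases also expose per-backend global toggles under `torch.backends.cuda`; a minimal sketch, assuming the `cudnn_sdp_enabled`/`enable_cudnn_sdp` pair, which gates whether the backend is eligible at all rather than the priority order changed here:

```python
import torch

# Process-wide eligibility toggle for the cuDNN SDPA backend (assumed
# torch.backends.cuda API); this controls whether the backend may be
# used at all, orthogonal to the backend priority order this PR changes.
print(torch.backends.cuda.cudnn_sdp_enabled())  # query current state
torch.backends.cuda.enable_cudnn_sdp(True)      # allow cuDNN SDPA
```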
Cc @atalman
Pull Request resolved: #138522
Approved by: https://github.com/ngimel, https://github.com/eqy, https://github.com/malfet
(cherry picked from commit 9a9a0ab)
Co-authored-by: drisspg <[email protected]>
File tree: 2 files changed (+8, −9)
- aten/src/ATen/native/transformers/cuda
- test