[docs] Attention backends #12320
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks for starting this! Looking good. Before talking about specific attention backends, we could educate users about the forms in which they can use them (both forms are sketched below):

- Through setting the attention backend name: `model.set_attention_backend("<backend_name>")`.
- Through the context manager.

Then we could maintain a table of attention backend names, their GitHub/official pages, and notes. We could then take one example for a backend and make it complete. This way, I think it will be leaner and easier to follow. WDYT?
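A minimal sketch of the two forms, assuming a Flux pipeline; the model id, prompt, and the `"flash"` backend name are placeholders for illustration, and the `attention_backend` import path is my assumption. Backend availability depends on which kernels are installed.

```python
import torch
from diffusers import FluxPipeline
from diffusers.models.attention_dispatch import attention_backend  # assumed import path

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16  # placeholder model id
).to("cuda")

# Form 1: set the backend persistently on the model by name.
pipeline.transformer.set_attention_backend("flash")  # "flash" is a placeholder name

# Form 2: switch the backend only for the duration of the context.
with attention_backend("flash"):
    image = pipeline("a photo of an astronaut").images[0]
```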
> ## PyTorch native
>
> PyTorch includes a [native implementation](https://docs.pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) of several optimized attention backends, including [FlexAttention](https://pytorch.org/blog/flexattention/), FlashAttention, memory-efficient attention, and a C++ version.
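For reference, a minimal sketch of the native entry point; the tensor shapes are illustrative, and PyTorch picks the fused kernel based on hardware and inputs:

```python
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim); shapes chosen for illustration.
q = torch.randn(1, 8, 128, 64)
k = torch.randn(1, 8, 128, 64)
v = torch.randn(1, 8, 128, 64)

# Dispatches to FlashAttention, memory-efficient attention, or the C++
# math fallback depending on availability.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 128, 64])
```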
Flex uses a different path in `torch`:

`out = flex_attention.flex_attention(`

The backends that leverage `nn.functional.scaled_dot_product_attention()` can be found in https://github.com/huggingface/diffusers/blob/5e181eddfe7e44c1444a2511b0d8e21d177850a0/src/diffusers/models/attention_dispatch.py (search with `scaled_dot_product_attention(`).
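For context, a minimal sketch of that separate flex path, assuming PyTorch >= 2.5; the identity `score_mod` and the tensor shapes are illustrative:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# (batch, heads, seq_len, head_dim); shapes chosen for illustration.
q = torch.randn(1, 8, 128, 64)
k = torch.randn(1, 8, 128, 64)
v = torch.randn(1, 8, 128, 64)

def score_mod(score, batch, head, q_idx, kv_idx):
    # Identity modification: equivalent to plain attention.
    return score

# Called directly this runs an eager fallback; wrapping with
# torch.compile(flex_attention) uses the fused kernel.
out = flex_attention(q, k, v, score_mod=score_mod)
print(out.shape)  # torch.Size([1, 8, 128, 64])
```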
Force-pushed from e876275 to 6b58810.
Sounds good, feel free to contribute to the table with any notes you may have! I skipped making an additional example at the end to avoid being repetitive since I've already included an example when introducing the feature.
Superb stuff!
I would also make it clear that attention backends are an experimental feature.
Force-pushed from e801245 to 25a97b1.
Docs for enabling different attention backends.