
## Conversation

**stevhliu** (Member):

Docs for enabling different attention backends.

**HuggingFaceDocBuilderDev**:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

stevhliu requested a review from sayakpaul on September 11, 2025 at 22:49.
**sayakpaul** (Member) left a comment:


Thanks for starting this! Looking good. Before talking about specific attention backends, we could educate the users about the forms in which they can use them.

  1. By setting the attention backend name on the model: `model.set_attention_backend("<backend_name>")`.
  2. Through the context manager (see the sketch after this list).
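
For illustration, a minimal sketch of both forms, assuming a loaded diffusers pipeline; the backend name `"flash"`, the `attention_backend` import path, and the checkpoint are assumptions from this discussion, not a confirmed API:

```python
import torch
from diffusers import DiffusionPipeline
# Assumed import path for the context manager; verify against the merged docs.
from diffusers.models.attention_dispatch import attention_backend

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Form 1: set the backend persistently on the model.
pipe.transformer.set_attention_backend("flash")

# Form 2: switch the backend temporarily with the context manager.
with attention_backend("flash"):
    image = pipe("a photo of a cat").images[0]
```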

Then we could maintain a table of attention backend names, their GitHub/official pages, and notes.

We could then take one example for a backend and make it complete.

This way, I think it will be leaner and easier to follow. WDYT?


## PyTorch native

PyTorch includes [native implementations](https://docs.pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) of several optimized attention algorithms, including [FlexAttention](https://pytorch.org/blog/flexattention/), FlashAttention, memory-efficient attention, and a C++ version.
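
As a quick illustration (a sketch using PyTorch's public `torch.nn.attention` API, not taken from the docs under review), a specific native backend can be forced like so:

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

# Toy tensors shaped (batch, heads, seq_len, head_dim).
q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

# Restrict scaled_dot_product_attention to the FlashAttention kernel;
# it raises an error if that kernel cannot handle these inputs.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v)
```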
**sayakpaul** (Member) commented:

  1. Flex uses a different path in torch (a minimal sketch follows this list):

     out = flex_attention.flex_attention(...)

  2. The backends that leverage `nn.functional.scaled_dot_product_attention()` can be found in https://github.com/huggingface/diffusers/blob/5e181eddfe7e44c1444a2511b0d8e21d177850a0/src/diffusers/models/attention_dispatch.py (search for `scaled_dot_product_attention()`).
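
For reference, a minimal sketch of the flex path using PyTorch's public `flex_attention` API (tensor shapes are illustrative, not from this PR):

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Toy tensors shaped (batch, heads, seq_len, head_dim).
q = torch.randn(2, 8, 128, 64, device="cuda")
k = torch.randn(2, 8, 128, 64, device="cuda")
v = torch.randn(2, 8, 128, 64, device="cuda")

# Without a score_mod, flex_attention computes standard
# scaled dot-product attention through its own kernel path.
out = flex_attention(q, k, v)
```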

**stevhliu** (Member, Author):

Sounds good, feel free to contribute to the table with any notes you may have!

I skipped adding another example at the end to avoid being repetitive, since I've already included examples when introducing `set_attention_backend` and `with attention_backend`. I think once users learn that, it's easy for them to just plug in the attention backend names.

**sayakpaul** (Member) left a comment:

Superb stuff!

I would also make it clear that attention backends are an experimental feature.

stevhliu marked this pull request as ready for review on September 23, 2025 at 17:09.
stevhliu merged commit a72bc0c into huggingface:main on September 23, 2025 (1 check passed).
stevhliu deleted the attn-backends branch on September 23, 2025 at 17:59.