
Commit 6682956

Add AttentionMixin to WanVACETransformer3DModel (#12268)
* Add AttentionMixin to WanVACETransformer3DModel to enable methods like `set_attn_processor()`.
* Import AttentionMixin in transformer_wan_vace.py

Special thanks to @tolgacangoz 🙇‍♂️
1 parent ffc8c0c commit 6682956

File tree

1 file changed (+4, -2 lines)


src/diffusers/models/transformers/transformer_wan_vace.py

Lines changed: 4 additions & 2 deletions

@@ -21,7 +21,7 @@
 from ...configuration_utils import ConfigMixin, register_to_config
 from ...loaders import FromOriginalModelMixin, PeftAdapterMixin
 from ...utils import USE_PEFT_BACKEND, logging, scale_lora_layers, unscale_lora_layers
-from ..attention import FeedForward
+from ..attention import AttentionMixin, FeedForward
 from ..cache_utils import CacheMixin
 from ..modeling_outputs import Transformer2DModelOutput
 from ..modeling_utils import ModelMixin

@@ -134,7 +134,9 @@ def forward(
         return conditioning_states, control_hidden_states


-class WanVACETransformer3DModel(ModelMixin, ConfigMixin, PeftAdapterMixin, FromOriginalModelMixin, CacheMixin):
+class WanVACETransformer3DModel(
+    ModelMixin, ConfigMixin, PeftAdapterMixin, FromOriginalModelMixin, CacheMixin, AttentionMixin
+):
     r"""
     A Transformer model for video-like data used in the Wan model.

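With AttentionMixin in the class's bases, WanVACETransformer3DModel picks up the shared attention-processor helpers such as `attn_processors` and `set_attn_processor()`. The sketch below shows how these would be used after this change; it is illustrative only, the checkpoint id is an assumption, and the call signatures are assumed to match the other diffusers models that already expose these helpers.

```python
# Minimal usage sketch (not part of the commit): with AttentionMixin mixed in,
# the VACE transformer exposes the same attention-processor utilities as other
# diffusers models. The repo id below is illustrative.
import torch
from diffusers import WanVACETransformer3DModel

transformer = WanVACETransformer3DModel.from_pretrained(
    "Wan-AI/Wan2.1-VACE-1.3B-diffusers",  # illustrative checkpoint id
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

# attn_processors returns a dict keyed by attention-module name.
processors = transformer.attn_processors
print(f"{len(processors)} attention processors registered")

# set_attn_processor() accepts a single processor or a dict with the same keys
# as attn_processors. Re-applying the current mapping is effectively a no-op
# and only demonstrates the call; in practice you would pass your own
# attention-processor implementation(s) here.
transformer.set_attn_processor(processors)
```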