device_map="auto" supported for diffusers pipelines? #11555

Open
@johannaSommer

Description

Describe the bug

Hey dear diffusers team,

For DiffusionPipeline, as I understand (hopefully correctly) from this part of the documentation, it should be possible to specify device_map="auto" when loading a pipeline with from_pretrained, but doing so raises a NotImplementedError saying that this strategy is not supported.

However, the documentation on device placement currently states that only the "balanced" strategy is supported.

Is this possibly similar to #11432, and should device_map="auto" be removed from the docstrings and documentation? I'm happy to help with a PR if this turns out to be a mistake in the documentation.

Thanks a lot for your hard work!

Reproduction

from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto")

or

from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto")

Logs

---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
Cell In[12], line 3
      1 from diffusers import StableDiffusionPipeline
----> 3 pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", device_map="auto")

File ~/miniconda3/envs/pruna/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
    111 if check_use_auth_token:
    112     kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 114 return fn(*args, **kwargs)

File ~/miniconda3/envs/pruna/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py:745, in DiffusionPipeline.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    742     raise ValueError("`device_map` must be a string.")
    744 if device_map is not None and device_map not in SUPPORTED_DEVICE_MAP:
--> 745     raise NotImplementedError(
    746         f"{device_map} not supported. Supported strategies are: {', '.join(SUPPORTED_DEVICE_MAP)}"
    747     )
    749 if device_map is not None and device_map in SUPPORTED_DEVICE_MAP:
    750     if is_accelerate_version("<", "0.28.0"):

NotImplementedError: auto not supported. Supported strategies are: balanced
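For context, the traceback above points at an explicit validation inside DiffusionPipeline.from_pretrained (pipeline_utils.py, line 745 in 0.33.1). A minimal standalone sketch of that check, reconstructed only from the snippet visible in the log (not copied from the actual diffusers source, so treat names like SUPPORTED_DEVICE_MAP here as illustrative):

```python
# Hypothetical reconstruction of the device_map validation shown in the
# traceback; in diffusers 0.33.1 the supported list appears to contain
# only "balanced".
SUPPORTED_DEVICE_MAP = ["balanced"]

def validate_device_map(device_map):
    # Non-None, non-string values are rejected first.
    if device_map is not None and not isinstance(device_map, str):
        raise ValueError("`device_map` must be a string.")
    # Any string outside the supported list triggers the error from the log,
    # including "auto".
    if device_map is not None and device_map not in SUPPORTED_DEVICE_MAP:
        raise NotImplementedError(
            f"{device_map} not supported. Supported strategies are: "
            f"{', '.join(SUPPORTED_DEVICE_MAP)}"
        )

try:
    validate_device_map("auto")
except NotImplementedError as e:
    print(e)  # auto not supported. Supported strategies are: balanced
```

This matches the observed behavior: the check runs before any weights are loaded, so "auto" fails immediately while "balanced" (per the device placement docs) passes validation.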

System Info

  • 🤗 Diffusers version: 0.33.1
  • Platform: Linux-5.15.0-139-generic-x86_64-with-glibc2.35
  • Running on Google Colab?: No
  • Python version: 3.10.16
  • PyTorch version (GPU?): 2.7.0+cu126 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.30.2
  • Transformers version: 4.51.3
  • Accelerate version: 1.6.0
  • PEFT version: 0.15.2
  • Bitsandbytes version: 0.45.5
  • Safetensors version: 0.5.3
  • xFormers version: not installed
  • Accelerator: 2× NVIDIA H100 PCIe, 81559 MiB each
  • Using GPU in script?: yes
  • Using distributed or parallel set-up in script?: yes

Who can help?

No response

Labels

bug (Something isn't working)