[sdkit.Context] Fallback to cpu device on Windows with AMD GPU #64
base: main
Conversation
Also gave it a try with the suggested commands for the (CUDA 12 / ROCm nightly) torch builds. Apparently Windows still doesn't have a torch build with ROCm support. Don't know if you can reproduce this error, but apparently it is related to

This is the relevant PoC:

```python
"""PR#64 sdkit"""
from os import path as ospath
from os import getcwd

from sdkit import Context
from sdkit.models import load_model

MODEL_TYPE = "stable-diffusion"
MODEL_PATH = ospath.join(getcwd(), MODEL_TYPE, "v1-5-pruned-emaonly.ckpt")

CONTEXT = Context()
CONTEXT.model_paths[MODEL_TYPE] = MODEL_PATH
load_model(CONTEXT, MODEL_TYPE)
```

This script will throw the "Torch was not compiled with CUDA" error. These are the torch verification steps in the REPL:

```python
>>> import torch
>>> print(torch.__version__)
2.1.0+cpu
>>> print(torch.rand(5, 3))
tensor([[0.4543, 0.9270, 0.4408],
        [0.0567, 0.1613, 0.0919],
        [0.9389, 0.8564, 0.9848],
        [0.7152, 0.0204, 0.2193],
        [0.5769, 0.1634, 0.2768]])
>>> torch.cuda.is_available()
False
```
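The verification above can also be scripted. A minimal sketch of the same check that degrades gracefully when torch is not installed (this probe is illustrative and not part of sdkit):

```python
# Probe for CUDA support without assuming torch is installed.
try:
    import torch
    cuda_available = torch.cuda.is_available()
    build = torch.__version__  # e.g. "2.1.0+cpu" for a CPU-only build
except ImportError:
    cuda_available = False
    build = "torch not installed"

# Fall back to cpu when CUDA is unavailable, mirroring the patch's intent.
device = "cuda" if cuda_available else "cpu"
print(build, device)
```

With a CPU-only wheel such as `2.1.0+cpu`, `torch.cuda.is_available()` returns `False` and the probe selects `cpu`.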
Full stacktrace...
Using this patch I can go through with the cpu device.

EDIT: circular import fix for `from .utils import log` when running the proof of concept.
When trying to load a model with `load_model()` on Windows with an AMD GPU (RX 7900 XT), the calling program crashes with the error "Torch was not compiled with CUDA", without falling back to the cpu device.
Tested by installing the torch packages using:

```shell
pip3 install torch torchvision torchaudio
```
This patch ensures that `sdkit.Context()` gets initialized with the cpu device when CUDA is not available.
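The fallback the patch describes can be sketched with a stand-in class (`FakeContext` and its `device` attribute are illustrative only; sdkit's actual `Context` implementation may differ):

```python
class FakeContext:
    """Illustrative stand-in for sdkit.Context; not the real class."""

    def __init__(self, cuda_available: bool):
        # Default to the first CUDA device, but fall back to cpu
        # when CUDA is not available instead of crashing later.
        self.device = "cuda:0" if cuda_available else "cpu"


# On a Windows machine with an AMD GPU and a CPU-only torch build,
# cuda_available is False, so the context lands on cpu.
ctx = FakeContext(cuda_available=False)
print(ctx.device)  # -> cpu
```

The point of the design is that the device decision happens once, at context initialization, so downstream code like `load_model()` never tries to move tensors to a CUDA device that does not exist.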