[bug]: Invoke refuses to use my RX 7600 XT GPU #7574
I have the same issue with my RX 6700 XT on Arch.
Exact same issue here, but with an RX 6900 XT...
I solved (somehow) my problem by installing InvokeAI and THEN swapping in ROCm-compatible builds of torch and bitsandbytes.
This is my full start script (adjust for your GPU):

#!/bin/bash
set -x -e
script_path=$(readlink -f "$0" 2>/dev/null || realpath "$0" 2>/dev/null || echo "$0")
sdir="$(dirname "${script_path}")"
here="$(cd "$sdir" && pwd)"
echo "The path of this script is: $script_path ($here)"
user=$(ls -ld "$script_path" | awk '{print $3}')
home=$(getent passwd "$user" | cut -d: -f6)
echo "Home directory of $user is $home"
VENV="invoke"
# Check whether InvokeAI is installed in the virtual environment
if [ -x "$VENV/bin/invokeai-web" ]
then
echo "InvokeAI is already instaled, skipping..."
else
# check Virtual Environment exists
if [ -x "$VENV/bin/python" ]
then
echo "Virtual Environment at '$VENV' already present, skipping..."
else
echo "Creating basic Virtual Environment at '$VENV'..."
PYTHON="python3.11"
CACHE="$here"
# prepare environment
$PYTHON -m venv $VENV
fi
# Activate virtual environment
source "$VENV/bin/activate"
# Install InvokeAI in Virtual Environment
echo "Installing InvokeAI in Virtual Environment at '$VENV'..."
REPO=https://download.pytorch.org/whl/nightly/rocm6.3
$VENV/bin/pip install --extra-index-url $REPO invokeai
# replace pytorch-triton-rocm, torch and torchvision with the nightly ROCm 6.3 builds
pip uninstall pytorch-triton-rocm torch torchvision bitsandbytes --yes
pip install pytorch-triton-rocm torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm6.3
# install multi-backend "bitsandbytes"
if [ -d "$here/bitsandbytes" ]
then
echo "Multi-backend 'bitsandbytes' already present, skipping..."
else
echo "Compiling Multi-backend 'bitsandbytes'..."
(
cd "$here"
# Install bitsandbytes from source
# Clone bitsandbytes repo, ROCm backend is currently enabled on multi-backend-refactor branch
git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
# Install dependencies
pip install .[dev]
# Compile & install
#sudo apt-get install -y build-essential cmake # install build tools dependencies, unless present
cmake -DCOMPUTE_BACKEND=hip -S . # Use -DBNB_ROCM_ARCH="gfx90a;gfx942" to target specific gpu arch
make
)
fi
echo "Installing Multi-backend 'bitsandbytes'..."
pip install "$here/bitsandbytes" # `-e` for "editable" install, when developing BNB (otherwise leave that out)
fi
# start InvokeAI
# gfx1102 = RX 7600 XT; the HSA override makes ROCm treat it as gfx1100, a closely
# related architecture with wider prebuilt-kernel support
export PYTORCH_ROCM_ARCH=gfx1102
export HSA_OVERRIDE_GFX_VERSION=11.0.0
export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True
export TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1
export INVOKEAI_ROOT=~/invokeai
export GPU_DRIVER=rocm
$VENV/bin/invokeai-web
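As a quick sanity check (my addition, not part of the script; run with the venv activated) you can verify that the ROCm build of torch actually sees the card:

# prints the HIP version, whether a GPU backend is available, and the device name
python -c 'import torch; print(torch.version.hip, torch.cuda.is_available(), torch.cuda.get_device_name(0))'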
@mcondarelli, are you able to use all the features in Invoke?
I am very new to InvokeAI so I have NO idea about "all the features", but I can do a lot of things with no errors, at least:
I didn't try upscaling yet. Things surely not working:
I opened a few tickets against
The official installer, for some reason, installs a version of bitsandbytes that doesn't support ROCm as a backend. I've been swapping it out for ROCm's fork of bitsandbytes, which of course does. But, since I built it myself and my distro is on ROCm 6.3, I then have to switch torch, torchvision, and pytorch-triton-rocm to the versions compatible with ROCm 6.3. Basically, the same thing mcondarelli is doing. Haven't figured out how to get patchmatch working with it. Hope this gets fixed soon.
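For reference, a minimal sketch of that swap (the fork URL and branch layout are assumptions; adjust the index URL for your distro's ROCm release):

pip uninstall -y bitsandbytes torch torchvision pytorch-triton-rocm
pip install torch torchvision pytorch-triton-rocm --index-url https://download.pytorch.org/whl/nightly/rocm6.3
# build the ROCm fork of bitsandbytes against the HIP backend
git clone https://github.com/ROCm/bitsandbytes.git
cd bitsandbytes
cmake -DCOMPUTE_BACKEND=hip -S .
make
pip install .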
Thank you, it worked for me :)
Is there an existing issue for this problem?
Operating system
Linux
GPU vendor
AMD (ROCm)
GPU model
RX 7600 XT
GPU VRAM
16GB
Version number
5.5.0
Browser
Firefox 134.0
Python dependencies
What happened
Every time I try to generate an image I get an error:
What you expected to happen
I expected image generation to start.
How to reproduce the problem
In my setup all image generation attempts produce this error.
Using a CPU-only, no-GPU configuration works as expected... and, as expected, is very slow.
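A quick way to check whether ROCm itself sees the card, independent of InvokeAI (assumes the ROCm userspace tools are installed):

# the GPU should appear with its marketing name and gfx ID (gfx1102 for the RX 7600 XT)
rocminfo | grep -E 'Marketing Name|gfx'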
Additional context
I have seen several bug reports mentioning ROCm, but I didn't find anything really comparable.
Note that I'm a complete newbie at AI hosting, so I might be missing something pretty basic.
Full specs of my server are:
Discord username
mcon