FP32 causes out-of-memory error on 8GB VRAM GPU; RunDiffusion/Juggernaut-X-v10 FP16 is not supported #63
Comments
I see the syntax you used to change models, but I'm not sure how to apply that to my own. All my models are located elsewhere, in the A1111 folder. What is the syntax? Is there a concern with the file extension?
@Duemellon
So you can't point this to a local download of a file at this time? It can only download the model after you point it to the HF link?
If you want to load a local .safetensors file, you will need to modify more code.
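For reference, a minimal sketch of what such a modification might look like, assuming the project uses a standard diffusers pipeline (the local path below is hypothetical):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Sketch: load a local SDXL .safetensors checkpoint instead of an HF repo id.
# The path is a placeholder; point it at your own A1111 model folder.
pipe = StableDiffusionXLPipeline.from_single_file(
    "/path/to/A1111/models/Stable-diffusion/model.safetensors",
    torch_dtype=torch.float16,  # half precision to fit on an 8GB GPU
)
pipe.to("cuda")
```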
No, I don't understand.
Yes, that's what would be needed. I have a terabyte or more of locally installed safetensor files that I could use instead of pointing to an HF directory. I'm not familiar with Python at all, so I can't make these changes myself, but that is a different topic than this thread at this point.
I'll submit a PR later.
"torch_dtype=torch.float32" It run incorrectly |
```python
# SDXL
sdxl_name = 'RunDiffusion/Juggernaut-X-v10'  # FP16 is not supported
# sdxl_name = 'SG161222/RealVisXL_V4.0'
# sdxl_name = 'stabilityai/stable-diffusion-xl-base-1.0'
```
The FP32 weights can be downloaded, but they cause an out-of-memory error on an 8GB VRAM GPU.
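One possible workaround, a sketch assuming the loader is a standard diffusers `from_pretrained` call: download the FP32 weights but cast them to half precision at load time. Passing `torch_dtype=torch.float16` does not require the repo to publish an fp16 variant; it casts the weights after download.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Sketch: RunDiffusion/Juggernaut-X-v10 only publishes FP32 weights, so
# requesting an fp16 variant fails. torch_dtype=torch.float16 downloads
# the FP32 weights and casts them to half precision in memory, which
# should fit on an 8GB GPU.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-X-v10",
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # further reduces peak VRAM usage
```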