Describe the bug
When I load several LoRAs and offload each one with `set_lora_device()`, GPU memory still grows from about 20 GB to 25 GB. The call does not appear to actually free the adapter weights from the GPU.
Reproduction
```python
import torch

# lora_list / lora_path are defined elsewhere; pipe is an already-loaded pipeline
for key in lora_list:
    weight_name = key + ".safetensors"
    pipe.load_lora_weights(lora_path, weight_name=weight_name, adapter_name=key, local_files_only=True)
    adapters = pipe.get_list_adapters()
    print(adapters)
    # Move the freshly loaded adapter to CPU; GPU memory still grows
    pipe.set_lora_device([key], torch.device("cpu"))
```
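One thing that may be worth ruling out: PyTorch's caching allocator keeps freed blocks reserved, so `nvidia-smi` will not drop even if the adapter tensors were moved to CPU, unless `torch.cuda.empty_cache()` is called. A small diagnostic sketch (the helper name `gpu_mib` and the commented usage are illustrative, not part of diffusers) to measure actually-allocated memory around the call:

```python
import torch

def gpu_mib() -> float:
    """Currently allocated CUDA memory in MiB (0.0 if no GPU is present)."""
    if not torch.cuda.is_available():
        return 0.0
    return torch.cuda.memory_allocated() / (1024 ** 2)

# Hypothetical usage inside the reproduction loop:
# before = gpu_mib()
# pipe.set_lora_device([key], torch.device("cpu"))
# torch.cuda.synchronize()
# torch.cuda.empty_cache()  # release cached blocks back to the driver
# print(f"allocated delta: {before - gpu_mib():.1f} MiB")
```

If `torch.cuda.memory_allocated()` still grows per iteration even after `empty_cache()`, the adapter weights really are being kept on the GPU rather than just cached.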
Logs
No response
System Info
- GPU: V100 (32 GB)
- diffusers: 0.32.0.dev0
- torch: 2.0.1+cu118
- peft: 0.12.0
Who can help?
No response