[tests] add tests for flux modular (t2i, i2i, kontext) #12566
Conversation
```diff
 InputParam(
     "guidance",
-    required=True,
+    required=False,
```
For Flux.1-schnell, guidance shouldn't be required.
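For reference, a minimal sketch of the conditional pattern the standard FluxPipeline follows, so schnell checkpoints simply skip the guidance embedding. The helper name `maybe_build_guidance` is hypothetical, used here only for illustration:

```python
import torch

def maybe_build_guidance(transformer, guidance_scale: float, batch_size: int, device: str = "cpu"):
    # Sketch: only build the guidance tensor when the checkpoint was trained with
    # guidance embeddings (Flux.1-dev); Flux.1-schnell ignores it, so return None.
    if getattr(transformer.config, "guidance_embeds", False):
        guidance = torch.full([1], guidance_scale, device=device, dtype=torch.float32)
        return guidance.expand(batch_size)
    return None
```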
| "image_processor", | ||
| VaeImageProcessor, | ||
| config=FrozenDict({"vae_scale_factor": 16}), | ||
| config=FrozenDict({"vae_scale_factor": 16, "vae_latent_channels": 16}), |
Just to be consistent.
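For context, both values map onto the processor's constructor, so keeping them together matches the Flux VAE's 16 latent channels. A standalone sketch, assuming the constructor accepts both keyword arguments as in the modular config:

```python
from diffusers.image_processor import VaeImageProcessor

# Sketch: processor defaults aligned with the Flux VAE
# (16 latent channels, 16x effective spatial scale factor after packing).
image_processor = VaeImageProcessor(vae_scale_factor=16, vae_latent_channels=16)
```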
```python
pooled_prompt_embeds = block_state.pooled_prompt_embeds.repeat(1, block_state.num_images_per_prompt)
block_state.pooled_prompt_embeds = pooled_prompt_embeds.view(
    block_state.batch_size * block_state.num_images_per_prompt, -1
)
```
Was left out of the expansion incorrectly.
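To make the fix concrete, a quick shape check of the expansion as a standalone sketch with made-up dimensions:

```python
import torch

batch_size, num_images_per_prompt, dim = 2, 3, 768
pooled_prompt_embeds = torch.randn(batch_size, dim)

# Repeat along the feature dim, then fold the copies back into the batch dim,
# mirroring how prompt_embeds are already expanded in the block.
expanded = pooled_prompt_embeds.repeat(1, num_images_per_prompt)
expanded = expanded.view(batch_size * num_images_per_prompt, -1)

assert expanded.shape == (batch_size * num_images_per_prompt, dim)
assert torch.equal(expanded[0], expanded[1])  # copies of the same prompt stay adjacent
```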
```python
import torch
from PIL import Image

from diffusers import (
```
Unrelated changes but I thought I would do it (happy to revert).
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks for the PR!
Looks good, but I don't know why the changes in components_manager got reverted.
I left a comment there; I think we should add a warning during enable_model_offload(), and you can also keep the warning from the memory_reserve (or just make a note there in a comment so we know to fix it in the future).
Feel free to merge otherwise.
```python
device_module = getattr(torch, device_type, torch.cuda)
mem_on_device = device_module.mem_get_info(execution_device.index)[0]
mem_on_device = mem_on_device - self.memory_reserve_margin
```
Ohh, what happened here?
#12566 (comment)
It was a good catch; I think we should also add a warning sooner, during enable_model_offload().
My bad for misunderstanding!
I have added the error handling, and I added a TODO in enable_auto_cpu_offload() to error out early for mem_get_info().
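For reference, a hedged sketch of what erroring out early could look like; the function below is illustrative only, not the actual ComponentsManager.enable_auto_cpu_offload() implementation:

```python
import torch

def validate_offload_device(device: str = "cuda") -> None:
    """Illustrative early check: fail at enable time instead of at dispatch time."""
    device_type = torch.device(device).type
    device_module = getattr(torch, device_type, None)

    # Backends such as mps do not expose mem_get_info, so a memory-based
    # offloading strategy cannot work there; raise a clear message up front.
    if device_module is None or not hasattr(device_module, "mem_get_info"):
        raise ValueError(
            f"Auto CPU offload relies on `torch.{device_type}.mem_get_info`, "
            "which is not available for this device backend."
        )
```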
What does this PR do?
Adds tests for the Flux modular pipelines (text-to-image, image-to-image, and Kontext). The tests also helped me uncover and fix some bugs; some comments are inline.