
skunkworxdark
Contributor

Summary

Related bug report: #8415

My attempt at adding hyper and turbo FLUX LoRA support. This was done with a lot of help from Gemini. I am not fully happy with the code; I don't feel it fits well with the existing code structure.

I found it tricky to follow exactly how all the FLUX LoRA patching works. I did a fair bit of testing, and it does seem to reduce the number of steps needed to generate. However, I am not 100% confident that it works fully as expected and applies the patching correctly for these hyper/turbo LoRAs. It really needs someone with a better understanding than me to look it over.
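For reviewers unfamiliar with the mechanics: the usual LoRA patching scheme merges a low-rank delta into each targeted base weight as `W' = W + scale * (alpha / rank) * up @ down`. The sketch below is a generic illustration of that formula (using NumPy for self-containment; the function name and signature are hypothetical and not taken from the InvokeAI codebase), not the actual patching code in this PR:

```python
import numpy as np

def apply_lora_patch(weight, down, up, alpha, rank, scale=1.0):
    """Illustrative LoRA merge: W' = W + scale * (alpha / rank) * (up @ down).

    weight: base weight matrix, shape (out, in)
    down:   LoRA "down" projection, shape (rank, in)
    up:     LoRA "up" projection, shape (out, rank)
    """
    delta = (alpha / rank) * (up @ down)
    return weight + scale * delta

# Tiny example: rank-1 patch on a 2x2 zero weight.
W = np.zeros((2, 2))
down = np.array([[1.0, 0.0]])        # (1, 2)
up = np.array([[2.0], [0.0]])        # (2, 1)
patched = apply_lora_patch(W, down, up, alpha=1.0, rank=1)
```

Hyper/turbo LoRAs are distilled to shift the model toward few-step sampling, so if the merge above is applied with the wrong scale or to the wrong layers, generation may still "work" while quietly losing the step-reduction benefit — which is why the testing below focuses on step counts.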

Related Issues / Discussions

#8415

QA Instructions

Load one of the hyper/turbo LoRAs and generate an image.
Other FLUX LoRAs should also be tested to make sure nothing has been broken.

Merge Plan

No plan. This is really just a WIP at the moment.

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Documentation added / updated (if applicable)
  • Updated What's New copy (if doing a release after this PR)

@github-actions github-actions bot added python PRs that change python files backend PRs that change backend files labels Aug 24, 2025