Issues: lllyasviel/Omost

Issues list

Training another model
#17 opened Jun 1, 2024 by whitepapercg
AMD support
#19 opened Jun 1, 2024 by ikcikoR
Can it be used with LoRA?
#24 opened Jun 2, 2024 by wonchooo
Training Data for LLM
#76 opened Jun 5, 2024 by alphacoder01
Torch Error?
#78 opened Jun 6, 2024 by XTRMsavage
About SD 1.5 or SD3 support in the future
#79 opened Jun 6, 2024 by cyy-1234
Any plan to support IP-Adapter?
#94 opened Jun 13, 2024 by 1093842024
Does not work on RTX 4060 Ti
#99 opened Jun 21, 2024 by canytam-krystal
Error when running on Linux
#105 opened Jul 4, 2024 by a937983423
Windows version
#106 opened Jul 10, 2024 by Archviz360
So where did this go?
#107 opened Jul 13, 2024 by Lustwaffel
Train a Llama 3.1 model
#110 opened Jul 26, 2024 by revolvedai
How to train one's own LLM (Mixtral)?
#111 opened Jul 29, 2024 by tdlhyj
It would be great if Flux were supported!
#115 opened Aug 23, 2024 by e813519
lamas
#118 opened Sep 16, 2024 by gustvta
NVIDIA driver version issue
#120 opened Sep 23, 2024 by Colin-chan1366