Description
As I said in the title, this UI has a lot of cool potential but suffers from some critical issues. I'm using it right now as I write this post, so I can update it as I find more issues.
So far I have found these:
1 - It doesn't matter whether the seed is randomized: it generates almost the same song, or a slight variation of it. All of the results are very similar.
2 - Selecting 8 or 32 inference steps doesn't seem to change the generation time at all (it seems to stick to the turbo model), even though I already have all the models downloaded.
3 - When the CoT (thinking mode) option is enabled, inference fails with the error
"Generation failed: Generation failed on API side" (in the interface), while the CMD window shows this log:
INFO: 127.0.0.1:63547 - "POST /query_result HTTP/1.1" 200 OK
Generating: 0%| | 0/2 [00:00<?, ?steps/s]
2026-02-04 22:48:21.948 | ERROR | acestep.llm_inference:generate_with_stop_condition:1104 - Error in batch codes generation:
INFO: 127.0.0.1:63547 - "POST /query_result HTTP/1.1" 200 OK
EDIT: It seems to be an API call error, because now no inference works at all, even if I de-select Thinking mode. It always gives me the API error now.
It would be fantastic if I could select all the models directly in the UI. I have all the generation models and all the LLMs downloaded, but I cannot select them.