I am trying to use a model that I have downloaded and stored locally on my device (UI-TARS-7B-DPO). After running the command from the README:
python -m vllm.entrypoints.openai.api_server --served-model-name ui-tars --model <path to your model>
with the path pointing at the local UI-TARS-7B-DPO checkpoint, and then updating the endpoint and model name in the UI settings, I still cannot get it to work. The UI shows the error "ERROR: 400 status code (no body)". Any idea how I can fix this?
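One thing worth trying (an assumption, not a confirmed fix): vLLM caps multimodal inputs at one image per prompt by default, and a GUI agent may send several screenshots in a single request, which produces a 400 response. The `--limit-mm-per-prompt` flag raises that cap; the value 5 below is an arbitrary example, and the model path placeholder is unchanged from the README:

```shell
# Same serve command as the README, plus a raised per-prompt image limit
# (vLLM's default is 1 image per prompt; 5 here is an arbitrary example).
python -m vllm.entrypoints.openai.api_server \
    --served-model-name ui-tars \
    --model <path to your model> \
    --limit-mm-per-prompt image=5
```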
Same issue here: requests to /v1/chat/completions return a 400 Bad Request with no response body. Interestingly, the first action (screenshot/click) works correctly; the logs then report an error while preprocessing prompt inputs, specifically in _parse_chat_message_content_part, which raises ValueError: At most 1 image(s) may be provided in one request. The traceback points into vLLM's prompt-preprocessing code.
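That ValueError is consistent with the client packing more than one image into a single chat request while the server's multimodal limit is still at its default of one image per prompt. A minimal sketch of the offending payload shape, in the OpenAI chat-completions format (the text and base64 placeholders are illustrative, not taken from the actual logs):

```python
# Sketch of an OpenAI-style chat payload carrying two image parts in one
# request — more than vLLM's default limit of one image per prompt.
# The base64 URLs are placeholders, not real screenshots.
payload = {
    "model": "ui-tars",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Click the submit button."},
                {"type": "image_url",
                 "image_url": {"url": "data:image/png;base64,<screenshot-1>"}},
                {"type": "image_url",
                 "image_url": {"url": "data:image/png;base64,<screenshot-2>"}},
            ],
        }
    ],
}

# Count the image parts the way the server's preprocessing effectively does:
image_parts = [
    part
    for msg in payload["messages"]
    for part in msg["content"]
    if isinstance(part, dict) and part["type"] == "image_url"
]
print(len(image_parts))  # 2 image parts in one request -> rejected when the limit is 1
```

If the agent really does send multiple screenshots per turn, either the server's per-prompt image limit has to be raised or the client has to trim the history to a single image per request.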