### Your current environment

NA

### Model Input Dumps

No response

### 🐛 Describe the bug
When attempting to use the `VLLM_USE_V1=1` feature in the Triton Inference Server backend, the models fail to start up because signal handling is attempted outside of the main thread. The following error occurs at startup:
```
model.py:244] "[vllm] Failed to start engine: signal only works in main thread of the main interpreter"
pb_stub.cc:366] "Failed to initialize Python stub: ValueError: signal only works in main thread of the main interpreter

At:
  /usr/lib/python3.12/signal.py(58): signal
  /app/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py(160): __init__
  /app/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py(252): __init__
  /app/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py(53): make_client
  /app/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py(79): __init__
  /app/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py(107): from_engine_args
  /app/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py(162): build_async_engine_client_from_engine_args
  /usr/lib/python3.12/contextlib.py(211): __aenter__
  /opt/tritonserver/backends/vllm/model.py(289): _run_llm_engine
  /usr/lib/python3.12/asyncio/events.py(103): _run
  /usr/lib/python3.12/asyncio/base_events.py(1988): _run_once
  /usr/lib/python3.12/asyncio/base_events.py(649): run_forever
  /usr/lib/python3.12/asyncio/base_events.py(687): run_until_complete
  /usr/lib/python3.12/asyncio/runners.py(126): run
  /usr/lib/python3.12/asyncio/runners.py(193): run
  /usr/lib/python3.12/threading.py(1014): run
  /usr/lib/python3.12/threading.py(1077): _bootstrap_inner
  /usr/lib/python3.12/threading.py(1030): _bootstrap
"
```
This appears to be an incompatibility between vLLM V1, which assumes it is running on the main thread, and Triton Inference Server, which runs vLLM outside of the main thread.
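For context, CPython only allows `signal.signal()` to be called from the main thread of the main interpreter, so the restriction can be reproduced with a minimal standalone script, independent of vLLM and Triton:

```python
import signal
import threading


def install_handler():
    # CPython raises ValueError here whenever signal.signal() is called
    # off the main thread -- the same restriction the vLLM V1 engine
    # client hits when Triton starts it inside a worker thread.
    signal.signal(signal.SIGTERM, lambda signum, frame: None)


t = threading.Thread(target=install_handler)
t.start()
t.join()
# The worker thread dies, and threading.excepthook prints:
# ValueError: signal only works in main thread of the main interpreter
```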
The question here is whether the V1 implementation can be adapted so that it can also run outside of the main thread.
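One possible direction, sketched below purely as an illustration (this is not how vLLM is actually structured, and `maybe_install_signal_handlers` is a hypothetical helper), would be to guard handler registration on the current thread and fall back to the host process's shutdown mechanism otherwise:

```python
import signal
import threading


def maybe_install_signal_handlers() -> None:
    """Hypothetical guard: only touch signal handlers on the main thread."""
    if threading.current_thread() is threading.main_thread():
        # Safe: registration is only permitted on the main thread.
        signal.signal(signal.SIGINT, lambda signum, frame: None)
    else:
        # Off the main thread (e.g. inside the Triton backend's worker
        # thread), skip registration and leave shutdown handling to the
        # embedding application.
        pass
```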
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.