[Bug]: V1 cannot be run in Triton Inference Server Backend #12690

Open
TheCodeWrangler opened this issue Feb 3, 2025 · 2 comments · May be fixed by #11737 or #14048
Labels: bug, v1

Comments

@TheCodeWrangler

Your current environment

N/A

Model Input Dumps

No response

🐛 Describe the bug

When attempting to use VLLM_USE_V1=1 in the Triton Inference Server backend, models fail to start because vLLM tries to install signal handlers outside of the main thread.

The following error occurs during startup:

```
model.py:244] "[vllm] Failed to start engine: signal only works in main thread of the main interpreter"
pb_stub.cc:366] "Failed to initialize Python stub: ValueError: signal only works in main thread of the main interpreter

At:
  /usr/lib/python3.12/signal.py(58): signal
  /app/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py(160): __init__
  /app/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py(252): __init__
  /app/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py(53): make_client
  /app/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py(79): __init__
  /app/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py(107): from_engine_args
  /app/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py(162): build_async_engine_client_from_engine_args
  /usr/lib/python3.12/contextlib.py(211): __aenter__
  /opt/tritonserver/backends/vllm/model.py(289): _run_llm_engine
  /usr/lib/python3.12/asyncio/events.py(103): _run
  /usr/lib/python3.12/asyncio/base_events.py(1988): _run_once
  /usr/lib/python3.12/asyncio/base_events.py(649): run_forever
  /usr/lib/python3.12/asyncio/base_events.py(687): run_until_complete
  /usr/lib/python3.12/asyncio/runners.py(126): run
  /usr/lib/python3.12/asyncio/runners.py(193): run
  /usr/lib/python3.12/threading.py(1014): run
  /usr/lib/python3.12/threading.py(1077): _bootstrap_inner
  /usr/lib/python3.12/threading.py(1030): _bootstrap
"
```

This seems to be an incompatibility between vLLM V1, which assumes it is running on the main thread, and Triton Inference Server, which runs vLLM outside of the main thread.
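The restriction itself is CPython's, not vLLM's or Triton's: `signal.signal()` may only be called from the main thread of the main interpreter. A minimal sketch that reproduces the same `ValueError` without either project:

```python
import signal
import threading

def install_handler() -> None:
    try:
        # CPython allows signal.signal() only on the main thread of the
        # main interpreter; from a worker thread it raises ValueError.
        signal.signal(signal.SIGTERM, signal.SIG_DFL)
    except ValueError as exc:
        print(f"Failed to start engine: {exc}")

worker = threading.Thread(target=install_handler)
worker.start()
worker.join()
# Prints: Failed to start engine: signal only works in main thread of the main interpreter
```

This matches the traceback above, where the engine client's `__init__` runs inside an asyncio loop on a thread spawned by Triton's Python stub.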

The question here is whether the implementation of V1 can be adapted so that it can run outside of the main thread.
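For illustration, one possible direction is sketched below. This is only an assumption about how a fix could look, not vLLM's actual API; `install_shutdown_handlers` is a hypothetical helper. The idea is to register handlers only when it is legal to do so and otherwise let the embedder own shutdown:

```python
import signal
import threading
from typing import Any, Callable

def install_shutdown_handlers(handler: Callable[[int, Any], None]) -> bool:
    """Install SIGTERM/SIGINT handlers only where CPython permits it.

    Returns False when called from a worker thread (as under Triton's
    Python stub), signalling that the embedder must arrange its own
    shutdown path instead.
    """
    if threading.current_thread() is not threading.main_thread():
        return False
    signal.signal(signal.SIGTERM, handler)
    signal.signal(signal.SIGINT, handler)
    return True
```

An embedder like the Triton backend could then check the return value and wire shutdown through its own lifecycle hooks rather than relying on signals.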

### Before submitting a new issue...

- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
@TheCodeWrangler added the bug label Feb 3, 2025
@whybeyoung
Copy link

Same error here.

@whybeyoung mentioned this issue Feb 4, 2025
@robertgshaw2-redhat
Collaborator

Thanks for reporting this. This will not be fixed for 0.7.2, but we can take a look for the official release of V1.

@mgoin added the v1 label Feb 4, 2025
@robertgshaw2-redhat linked a pull request Feb 8, 2025 that will close this issue