Checklist
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
Describe the bug
Under high concurrency, the model served by SRT produces garbled, nonsensical output and responds more slowly. Is this expected behavior, or can it be avoided?
Reproduction
I launch the server by running:
```bash
python -m sglang.launch_server \
    --model-path "meta-llama/Meta-Llama-3.1-8B-Instruct" \
    --dp 4 --quantization fp8 --kv-cache-dtype fp8_e5m2
```
or
```bash
python -m sglang.launch_server \
    --model-path "meta-llama/Meta-Llama-3.1-8B-Instruct" \
    --tp 4 --quantization fp8 --kv-cache-dtype fp8_e5m2
```
Below is the minimal reproducible demo:
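The original demo script was not captured here; the sketch below is a hypothetical stand-in that reproduces the same load pattern, assuming the default `sglang.launch_server` port (30000) and its OpenAI-compatible `/v1/chat/completions` endpoint. Adjust the URL, concurrency, and payload to match your setup.

```python
# Hypothetical load-generation sketch (the original demo was not captured);
# assumes the default sglang.launch_server port 30000 and the
# OpenAI-compatible /v1/chat/completions endpoint.
import concurrent.futures

import requests

URL = "http://127.0.0.1:30000/v1/chat/completions"

def send_request(i: int) -> str:
    payload = {
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
        "messages": [
            {"role": "user", "content": f"Write a short poem about the number {i}."}
        ],
        "max_tokens": 256,
    }
    resp = requests.post(URL, json=payload, timeout=600)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Fire 256 requests with 64 concurrent workers; under this kind of load the
# completions start coming back as nonsense.
with concurrent.futures.ThreadPoolExecutor(max_workers=64) as pool:
    for text in pool.map(send_request, range(256)):
        print(text[:80].replace("\n", " "))
```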
Output of `--dp 4` in my environment: `#queue-req` is always 0.
Environment