
Actions: vllm-project/vllm

Showing runs from all workflows
87,212 workflow runs

Heterogeneous Speculative Decoding (CPU + GPU)
clang-format #12996: Pull request #5065 synchronize by jiqing-feng
September 20, 2024 05:26 26s jiqing-feng:hete_spec_decode
Heterogeneous Speculative Decoding (CPU + GPU)
yapf #22451: Pull request #5065 synchronize by jiqing-feng
September 20, 2024 05:26 2m 34s jiqing-feng:hete_spec_decode
[Bugfix] fix docker build for xpu
clang-format #12995: Pull request #8652 opened by yma11
September 20, 2024 05:26 17s yma11:docker-fix
[Bugfix] fix docker build for xpu
ruff #20863: Pull request #8652 opened by yma11
September 20, 2024 05:26 26s yma11:docker-fix
[Bugfix] fix docker build for xpu
mypy #15880: Pull request #8652 opened by yma11
September 20, 2024 05:26 44s yma11:docker-fix
[Bugfix] fix docker build for xpu
yapf #22450: Pull request #8652 opened by yma11
September 20, 2024 05:26 2m 16s yma11:docker-fix
[Bugfix] fix docker build for xpu
PR Reminder Comment Bot #1119: Pull request #8652 opened by yma11
September 20, 2024 05:26 15s
[Frontend] Batch inference for llm.chat() API
clang-format #12994: Pull request #8648 synchronize by aandyw
September 20, 2024 04:56 17s aandyw:batch-inf-llm-chat
[Frontend] Batch inference for llm.chat() API
yapf #22449: Pull request #8648 synchronize by aandyw
September 20, 2024 04:56 2m 13s aandyw:batch-inf-llm-chat
[Frontend] Batch inference for llm.chat() API
ruff #20862: Pull request #8648 synchronize by aandyw
September 20, 2024 04:56 38s aandyw:batch-inf-llm-chat
[Frontend] Batch inference for llm.chat() API
mypy #15879: Pull request #8648 synchronize by aandyw
September 20, 2024 04:56 43s aandyw:batch-inf-llm-chat
[Core] Default to using per_token quantization for fp8 when cutlass is supported.
PR Reminder Comment Bot #1118: Pull request #8651 opened by elfiegg
September 20, 2024 04:56 13s