[Core] Implementing disaggregated prefilling, and caching KV cache in CPU/disk/database. #8498
base: main
Conversation
Is this new feature compatible with continuous batching? I ran a test and found that when a request has not finished yet and a new request comes in, an assertion error seems to be triggered (vllm/vllm/engine/llm_engine.py, lines 1038 to 1039 in 7abba39). It looks like it happens when one of the requests ends.
Let me test it again. This issue did not happen previously, but maybe some recent changes in vLLM are affecting my current disagg prefill implementation.
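For reference, the overlapping-request scenario above can be reproduced by firing a second request while the first is still generating. The sketch below is hypothetical: the port, endpoint, and model name are placeholders for whatever the disagg-prefill setup serves, not values from this PR.

```python
# Hypothetical reproduction sketch: send two completion requests that
# overlap in time so the engine must batch them continuously.
import threading

import requests


def send_request(prompt: str) -> None:
    resp = requests.post(
        "http://localhost:8000/v1/completions",  # placeholder proxy endpoint
        json={"model": "placeholder-model", "prompt": prompt,
              "max_tokens": 256},
        timeout=300,
    )
    print(resp.status_code, resp.json()["choices"][0]["text"][:40])


threads = [threading.Thread(target=send_request,
                            args=(f"Request {i}: write a short story.",))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```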
@youkaichao Please review >.<
Will review this week.
@KuntaiDu Hi, this indeed seems to be a bug. I discovered it when adapting valkey: when decode fails to receive the KV cache, this bug occurs. I then rolled back to the latest version of this pull request. To simulate decode failing to receive data, I made changes to the code at this location in the file, and it is very easy to reproduce when executing the benchmark.
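To illustrate the kind of failure injection described in the comment above (the exact file and location are not quoted), here is a purely hypothetical sketch that randomly pretends the KV cache never arrived; `recv_fn` and the drop probability are illustrative stand-ins, not code from this PR.

```python
# Hypothetical failure injection: wrap whatever function receives the KV
# cache on the decode (KV consumer) side and randomly drop its result,
# simulating "decode fails to receive kv cache".
import random
from typing import Any, Callable, Optional


def flaky_recv(recv_fn: Callable[..., Any], *args: Any,
               drop_prob: float = 0.5, **kwargs: Any) -> Optional[Any]:
    if random.random() < drop_prob:
        return None  # pretend the transfer failed
    return recv_fn(*args, **kwargs)
```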
@KuntaiDu Is it correct that …
```python
maybe_disagg_rank = rank + world_size
logger.debug("rank %d is KV consumer, adjust it to %d", rank,
             maybe_disagg_rank)
...
torch.distributed.init_process_group(
```
We need to remove this line and bypass the global world group. Then `disagg_group` can have a different port to initialize, a different world size, etc. We don't even need to change this file.
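To make the rank arithmetic above concrete, here is a minimal sketch of the offset scheme, assuming the KV producer (prefill) instance keeps ranks 0..W-1 while the KV consumer (decode) instance shifts to W..2W-1, so both can join a single KV-transfer group of size 2W. The function name, backend, and dedicated port are illustrative assumptions, not this PR's actual code.

```python
# Minimal sketch (not the PR's code): join a dedicated process group for
# KV-cache transfer, offsetting the decode instance's ranks by world_size.
import torch.distributed as dist


def init_kv_transfer_group(rank: int, world_size: int, is_kv_consumer: bool,
                           host: str = "127.0.0.1", port: int = 14579) -> int:
    # Hypothetical dedicated port, separate from each instance's own
    # world group, in the spirit of the review comment above.
    maybe_disagg_rank = rank + world_size if is_kv_consumer else rank
    dist.init_process_group(
        backend="gloo",
        init_method=f"tcp://{host}:{port}",
        world_size=2 * world_size,  # prefill ranks + decode ranks
        rank=maybe_disagg_rank,
    )
    return maybe_disagg_rank
```

In a real engine process the default group is already taken, so this would have to be a separate group object rather than `init_process_group`, which is exactly the kind of decoupling the comment suggests.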
This pull request has merge conflicts that must be resolved before it can be merged.
Thanks for the great PR and sorry for the long delay!
I think we can simplify the code a lot by allowing the `disagg_group` to have totally different initialization parameters. The user interface would be:
`vllm serve ... --role [standalone, cache_producer, cache_consumer] --disagg-ratio XpYd --disagg-rank 0...(X+Y-1)`
and you can have a separate `disagg_config` under the newly added `vllm_config`, which is directly passed to the model runner after #9938.
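As an illustration of what such a `disagg_config` could carry, here is a minimal sketch mirroring the proposed flags; the class name, fields, and defaults below are hypothetical, not an existing vLLM API.

```python
# Hypothetical sketch of a standalone disaggregated-prefill config that
# mirrors the proposed CLI flags (--role, --disagg-ratio XpYd, --disagg-rank).
from dataclasses import dataclass
from typing import Literal


@dataclass
class DisaggConfig:
    # --role: standalone instance, KV cache producer (prefill),
    # or KV cache consumer (decode).
    role: Literal["standalone", "cache_producer", "cache_consumer"] = "standalone"
    # --disagg-ratio "XpYd": X prefill instances and Y decode instances.
    num_prefill_instances: int = 1  # X
    num_decode_instances: int = 1   # Y
    # --disagg-rank: this instance's rank in 0...(X+Y-1).
    disagg_rank: int = 0

    @property
    def is_kv_consumer(self) -> bool:
        return self.role == "cache_consumer"

    @classmethod
    def from_ratio(cls, role: str, ratio: str, rank: int) -> "DisaggConfig":
        # Parse a ratio string like "2p1d" into (X=2, Y=1).
        x, y = ratio.lower().split("p")
        return cls(role=role, num_prefill_instances=int(x),
                   num_decode_instances=int(y.rstrip("d")), disagg_rank=rank)
```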
```python
# Sending KV cache in distributed KV cache transfer setting
# NOTE: the send operation is non-blocking
if self.need_send_kv(model_input, kv_caches):
    get_disagg_group().send_kv_caches_and_hidden_states(
```
Does this function send all layers' KV cache to decode?
Yes, if pipeline parallelism == 1. For pipeline parallelism > 1 it will only send out the layers corresponding to this process.
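A small sketch of the behavior described here, assuming layers are split evenly across pipeline stages; the helper and the `send_fn` callback are illustrative names, not this PR's interface.

```python
# Sketch: with pipeline parallelism, send only the KV caches of the layers
# owned by this pipeline-parallel rank; with pp_size == 1 that is all layers.
from typing import Callable, Sequence

import torch


def send_owned_layers(kv_caches: Sequence[torch.Tensor], pp_rank: int,
                      pp_size: int, num_layers: int,
                      send_fn: Callable[[int, torch.Tensor], None]) -> None:
    layers_per_stage = num_layers // pp_size
    start = pp_rank * layers_per_stage
    # The last stage picks up any remainder layers.
    end = num_layers if pp_rank == pp_size - 1 else start + layers_per_stage
    for layer_idx in range(start, end):
        # kv_caches is assumed to hold only this stage's layers,
        # indexed locally from 0.
        send_fn(layer_idx, kv_caches[layer_idx - start])
```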
Thanks for @youkaichao's review! As PR #10072 is merged, now is the time to update the implementation. Roadmap for this small refactor:
We are also aware of other bugs and feature requests; we will fix them in future PRs.
This pull request has merge conflicts that must be resolved before it can be merged.
Thank you for your contribution to the community; I think this is a very valuable PR. I have a small question about using openai_api_server: does the proxy API server need to be connected to the network, or can we deploy the proxy API server, vLLM prefill, and vLLM decode on a local machine? Typically, we expect to deploy the entire system locally, because the server is often not connected to the external network.
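On the deployment question, nothing in the disaggregated-prefill flow inherently requires an external network; a proxy can run on localhost and relay each request to the prefill instance first (to produce the KV cache) and then to the decode instance. The sketch below is a hypothetical minimal relay using Flask and requests; the ports and the `max_tokens=1` trick are assumptions about the example setup, not guaranteed to match this PR's proxy.

```python
# Hypothetical minimal local proxy: relay a completion request to the
# prefill instance first (to populate the KV cache), then to the decode
# instance, all on localhost. Ports are placeholders.
from copy import deepcopy

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
PREFILL_URL = "http://localhost:8100/v1/completions"
DECODE_URL = "http://localhost:8200/v1/completions"


@app.route("/v1/completions", methods=["POST"])
def proxy_completion():
    payload = request.get_json()
    # Ask the prefill instance to generate (almost) nothing: its job is
    # only to produce and hand off the KV cache for the prompt.
    prefill_payload = deepcopy(payload)
    prefill_payload["max_tokens"] = 1
    requests.post(PREFILL_URL, json=prefill_payload)
    # The decode instance pulls the KV cache and generates the response.
    resp = requests.post(DECODE_URL, json=payload)
    return jsonify(resp.json())


if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8000)
```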
TL;DR: implemented disaggregated prefill with <100 core-line change (and most of them are comments).

This PR is a continuation of PR #6170, with a new design that allows future extension.

Current supported applications:
- Disaggregated prefilling: see `examples/disagg_prefill/disagg_prefill_example.sh` for an example, and `benchmarks/disagg_prefill` for various benchmarks. Benchmarking scripts are all one-click runnable (after setting `HF_TOKEN`).
- `LMCache`. Examples TBD.

Two roles: KV provider (e.g. prefill vLLM instance) and KV consumer (e.g. decode vLLM instance):
- `insert`: insert a KV cache into a buffer, so that it can be transferred upon request.
- `drop_select`: select a KV cache based on tokens, transfer the selected KV, and drop this KV out from the buffer.

Example workflow (the `buffer` in the following figure is the same as `insert`):
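To make the two operations concrete, here is a minimal, hypothetical sketch of a provider-side buffer exposing `insert` and `drop_select`; the class name, keying by token ids, and tensor types are illustrative assumptions, not the PR's exact interface.

```python
# Hypothetical sketch of the buffer abstraction described above:
# the KV provider stashes KV caches until a KV consumer requests them.
from typing import Dict, Optional, Tuple

import torch


class KVCacheBuffer:
    """Holds KV caches on the provider side until a consumer requests them."""

    def __init__(self) -> None:
        # Keyed by the prompt token ids that identify the prefix whose KV
        # cache was produced by the prefill instance.
        self._store: Dict[Tuple[int, ...], torch.Tensor] = {}

    def insert(self, token_ids: Tuple[int, ...], kv_cache: torch.Tensor) -> None:
        # insert: stash a KV cache so it can be transferred upon request.
        self._store[token_ids] = kv_cache

    def drop_select(self, token_ids: Tuple[int, ...]) -> Optional[torch.Tensor]:
        # drop_select: look up the KV cache matching the tokens, hand it to
        # the caller (who transfers it to the consumer), and drop it here.
        return self._store.pop(token_ids, None)
```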
PR Checklist
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- `[Bugfix]` for bug fixes.
- `[CI/Build]` for build or continuous integration improvements.
- `[Doc]` for documentation fixes and improvements.
- `[Model]` for adding a new model or improving an existing model. Model name should appear in the title.
- `[Frontend]` for changes on the vLLM frontend (e.g., OpenAI API server, `LLM` class, etc.).
- `[Kernel]` for changes affecting CUDA kernels or other compute kernels.
- `[Core]` for changes in the core vLLM logic (e.g., `LLMEngine`, `AsyncLLMEngine`, `Scheduler`, etc.).
- `[Hardware][Vendor]` for hardware-specific changes. Vendor name should appear in the prefix (e.g., `[Hardware][AMD]`).
- `[Misc]` for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:
- Use `format.sh` to format your code.
- Add documentation to `docs/source/` if the PR modifies the user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Adding or changing kernels
Each custom kernel needs a schema and one or more implementations to be registered with PyTorch.
- Custom operations that return `Tensors` require meta-functions. Meta-functions should be implemented and registered in Python so that dynamic dims can be handled automatically. See above documents for a description of meta-functions.
- Use `torch.library.opcheck()` to test the function registration and meta-function for any registered ops. See `tests/kernels` for examples.

Notes for Large Changes
Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with `rfc-required` and it might not go through the PR.
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:
- The reviewer will put an `action-required` label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.

Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!