[Draft] kv aware schedule #134
base: v0.11.0
Conversation
Code Review
This pull request introduces changes to make the KV cache scheduling aware of multi-modal (mm) features by associating mm_hashes with KV cache blocks and request outputs. The changes are propagated through event definitions, block pool management, and the output processor. My review identified a critical bug in vllm/v1/core/block_pool.py related to an incorrect token index calculation, which could lead to incorrect behavior. I also found a minor issue with a missing type hint in vllm/v1/engine/output_processor.py that should be addressed for better code quality.
```python
               for _ in range(num_full_blocks -
                              num_cached_blocks)]
start_token_idx = num_cached_blocks * block_size
end_token_idx = num_full_blocks + block_size
```
There appears to be a bug in the calculation of end_token_idx. It should be num_full_blocks * block_size to correctly represent the end token index of the full blocks. The current calculation num_full_blocks + block_size mixes block count with block size, which will lead to an incorrect token range and cause multi-modal features to be associated with the wrong blocks.
Suggested change:

```diff
-end_token_idx = num_full_blocks + block_size
+end_token_idx = num_full_blocks * block_size
```
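To illustrate the difference, a small standalone check with made-up values (the variable names follow the snippet above; this is not code from the PR):

```python
# Arbitrary example values.
block_size = 16
num_cached_blocks = 2   # full blocks already present in the prefix cache
num_full_blocks = 5     # total full blocks after this allocation

start_token_idx = num_cached_blocks * block_size      # 32
end_token_idx_buggy = num_full_blocks + block_size    # 21 -> before the start index
end_token_idx_fixed = num_full_blocks * block_size    # 80

# The buggy formula yields an empty/inverted range, so no tokens (and hence no
# mm features) would be mapped to the newly cached blocks.
assert end_token_idx_buggy < start_token_idx
# The fix covers exactly the tokens of the newly cached full blocks.
assert end_token_idx_fixed - start_token_idx == \
    (num_full_blocks - num_cached_blocks) * block_size
```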
```python
top_p: Optional[float] = None,
n: Optional[int] = None,
temperature: Optional[float] = None,
mm_hashes=None,
```
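A minimal sketch of how the missing type hint on mm_hashes could be addressed, assuming it carries a list of content-hash strings; the function name and surrounding signature are simplified stand-ins, not the actual code in vllm/v1/engine/output_processor.py:

```python
from typing import Optional

# Illustrative only: a type-annotated version of the parameter flagged in the
# review summary. The function and other parameters are simplified stand-ins.
def build_request_kwargs(
    top_p: Optional[float] = None,
    n: Optional[int] = None,
    temperature: Optional[float] = None,
    mm_hashes: Optional[list[str]] = None,  # assumed: list of mm content-hash strings
) -> dict:
    return {
        "top_p": top_p,
        "n": n,
        "temperature": temperature,
        "mm_hashes": mm_hashes,
    }
```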
Do we have a related PR in the llm-service repo for this scheduling?
Purpose
Test Plan
Test Result
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.