
@VisionTheta VisionTheta commented Dec 15, 2025

This PR demonstrates an integration of KV-cache sparsity into vLLM. The implementation preserves all tokens for context heads, while blocking context tokens in the KV cache for the remaining heads to reduce memory usage.
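
For readers unfamiliar with the head-level scheme, here is a minimal PyTorch sketch of the attention pattern it implies, assuming a fixed set of "context heads" that attend to the full KV cache while the remaining heads only see a recent window of tokens. All names here (`head_sparse_attention`, `context_head_ids`, `recent_window`) are illustrative and are not identifiers from this PR; for clarity the sketch masks blocked tokens instead of evicting them, so it shows the attention pattern rather than the actual memory savings.

```python
import torch

def head_sparse_attention(
    q: torch.Tensor,                 # [num_heads, q_len, head_dim]
    k: torch.Tensor,                 # [num_heads, kv_len, head_dim]
    v: torch.Tensor,                 # [num_heads, kv_len, head_dim]
    context_head_ids: list[int],
    recent_window: int = 128,
) -> torch.Tensor:
    num_heads, q_len, head_dim = q.shape
    kv_len = k.shape[1]
    scale = head_dim ** -0.5

    # Causal mask shared by all heads: query i sits at absolute position
    # kv_len - q_len + i and may attend to all earlier keys.
    causal = torch.tril(
        torch.ones(q_len, kv_len, device=q.device), diagonal=kv_len - q_len
    ).bool()
    mask = causal.unsqueeze(0).repeat(num_heads, 1, 1)

    # For the non-context heads, additionally block context tokens that fall
    # outside a recent window, mimicking a sparsified KV cache.
    for h in range(num_heads):
        if h in context_head_ids:
            continue                 # context heads keep the full KV cache
        for i in range(q_len):
            pos = kv_len - q_len + i
            cutoff = max(0, pos - recent_window + 1)
            mask[h, i, :cutoff] = False

    scores = torch.einsum("hqd,hkd->hqk", q, k) * scale
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.einsum("hqk,hkd->hqd", torch.softmax(scores, dim=-1), v)
```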

To assess the impact of KV-cache sparsity on model quality, this PR also includes a LongBench evaluation script, enabling direct comparison of model performance with and without KV-cache sparsity enabled.
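
As a rough illustration of what such a comparison could look like, the sketch below scores two prediction files, one generated with the full KV cache and one with KV-cache sparsity enabled, using a simple token-level F1. The file names and the metric are assumptions for illustration only; the actual evaluation script in this PR may follow LongBench's official per-task metrics.

```python
import json
from collections import Counter

def token_f1(pred: str, ref: str) -> float:
    # Simple whitespace-token F1 between a prediction and a reference answer.
    pred_tokens, ref_tokens = pred.split(), ref.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def average_f1(path: str) -> float:
    # Each line is assumed to be JSON like {"pred": "...", "answer": "..."}.
    scores = []
    with open(path) as f:
        for line in f:
            item = json.loads(line)
            scores.append(token_f1(item["pred"], item["answer"]))
    return sum(scores) / max(len(scores), 1)

baseline = average_f1("preds_dense.jsonl")      # full KV cache
sparse = average_f1("preds_kv_sparse.jsonl")    # KV-cache sparsity enabled
print(f"dense F1: {baseline:.3f}  sparse F1: {sparse:.3f}")
```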

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which executes a small, essential subset of CI tests to catch errors quickly.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀
