[WIP][V1][Metrics] Implement max_num_generation_tokens metrics #14055
Conversation
The initial implementation went to great efforts to add parallel sampling as a wrapper at the highest layer of abstraction possible. This resulted in a lot of tricky code to post-process RequestOutputs to aggregate them where necessary. Instead, it probably makes sense to implement parallel sampling at the layer that actually creates RequestOutput objects - i.e. in OutputProcessor. To do this, we simply need to allow for fanning out child requests in LLMEngine.add_request(), passing details of the fan-out to OutputProcessor. This adds some overhead to the n=1 case (see SingularSamplingRequest) in return for significantly less overhead and complication in the parallel sampling case. Signed-off-by: Mark McLoughlin <[email protected]>
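The fan-out approach described above can be sketched roughly as follows. This is an illustrative Python sketch, not the actual vLLM code: the names `ParentRequest`, `fan_out`, and `add_request` are hypothetical stand-ins for the real classes and methods, showing how an n&gt;1 request could be split into child requests at add time while the n=1 case passes through unchanged.

```python
# Hypothetical sketch of fanning out parallel-sampling child requests
# at add_request() time; names are illustrative, not the vLLM API.
from dataclasses import dataclass, field


@dataclass
class ParentRequest:
    """Tracks the child requests fanned out from one parallel-sampling request."""
    request_id: str
    n: int
    child_ids: list = field(default_factory=list)

    def fan_out(self) -> list:
        # Derive one child request id per requested sample.
        for i in range(self.n):
            self.child_ids.append(f"{self.request_id}-{i}")
        return self.child_ids


def add_request(request_id: str, n: int):
    # n == 1: no parent wrapper needed, the request is passed through as-is.
    if n == 1:
        return None, [request_id]
    # n > 1: create a parent and fan out n child requests; the output
    # processor would later aggregate the children back under the parent.
    parent = ParentRequest(request_id, n)
    return parent, parent.fan_out()
```

The overhead in the n=1 case is just the extra tuple return; the aggregation complexity lives entirely on the parent object.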
Address PR feedback from Nick. Signed-off-by: Mark McLoughlin <[email protected]>
Fixes: TypeError: RequestOutput.__init__() missing 1 required positional argument: 'outputs' Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Keeping the output aggregator on RequestState was just a silly refactoring mistake - it clearly needs to be on the parent request since we are aggregating across child requests. Also address some PR feedback from Nick to make the logic here less confusing. Signed-off-by: Mark McLoughlin <[email protected]>
Now that we're not returning a RequestOutput for all finished requests, we need to perform finished-request handling even when there is no RequestOutput. Signed-off-by: Mark McLoughlin <[email protected]>
Based on excellent PR feedback from Nick. Signed-off-by: Mark McLoughlin <[email protected]>
Move all the logic for child request output aggregating into parallel_sampling.ParentReq. Signed-off-by: Mark McLoughlin <[email protected]>
PR feedback from Andy. Signed-off-by: Mark McLoughlin <[email protected]>
PR feedback from Andy. Signed-off-by: Mark McLoughlin <[email protected]>
PR feedback from Andy. Signed-off-by: Mark McLoughlin <[email protected]>
Not a perfect set of categories, but should make it easier to navigate. Signed-off-by: Mark McLoughlin <[email protected]>
This metric tracks the maximum of num_generation_tokens across a set of identical requests under a parallel sampling parent. It is the last remaining metric used by the example Grafana dashboard that makes sense in V1. Add some additional tracking of child requests to ParentRequest in order to facilitate this. Signed-off-by: Mark McLoughlin <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can: Add 🚀
(WIP until #13774 merges)
Part of #10582
This metric tracks the maximum of num_generation_tokens across a set of identical requests under a parallel sampling parent. It is the last remaining metric used by the example Grafana dashboard that makes sense in V1.
Add some additional tracking of child requests to ParentRequest in order to facilitate this.
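The child-request tracking described above might look something like the following sketch. This is a minimal illustration under assumed names (the `ParentRequest` class and `observe_finished_child` method are hypothetical, not the actual vLLM implementation): the parent records the running maximum of generated-token counts as each child finishes, and only emits the metric value once all n children are done.

```python
# Illustrative sketch of tracking max_num_generation_tokens across the
# children of a parallel-sampling parent request; not the vLLM code.
class ParentRequest:
    def __init__(self, n: int):
        self.n = n                           # number of child requests
        self.num_finished = 0                # children finished so far
        self.max_num_generation_tokens = 0   # running maximum

    def observe_finished_child(self, num_generation_tokens: int):
        """Record one finished child.

        Returns the maximum across all children once every child has
        finished, and None while some children are still running.
        """
        self.num_finished += 1
        self.max_num_generation_tokens = max(
            self.max_num_generation_tokens, num_generation_tokens)
        if self.num_finished == self.n:
            return self.max_num_generation_tokens
        return None
```

Deferring the metric until the last child finishes means one observation is recorded per logical request, matching how the dashboard interprets the histogram.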