[Misc] Accurately capture the time of loading weights #14063
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Force-pushed from 8c5d7e8 to c67a24c
Signed-off-by: Jun Duan <[email protected]>
Force-pushed from c67a24c to cbe98bf
Thanks for the PR!
I gave this PR a try with meta-llama/Llama-3.2-1B, and this is what it looks like on V0:
INFO 03-01 07:56:47 [weight_utils.py:273] Time spent downloading weights for meta-llama/Llama-3.2-1B: 14.584169 seconds
INFO 03-01 07:56:47 [weight_utils.py:307] No model.safetensors.index.json found in remote.
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:05<00:00, 5.31s/it]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:05<00:00, 5.31s/it]
INFO 03-01 07:56:52 [loader.py:423] Loading weights took 5.38 seconds
INFO 03-01 07:56:53 [model_runner.py:1117] Loading model weights took 2.3185 GB and 20.256478 seconds
I left a minor comment - please take a look!
vllm/v1/worker/gpu_model_runner.py (outdated)
- logger.info("Loading model weights took %.4f GB and %.6f seconds",
+ logger.info("Loading model took %.4f GB and %.6f seconds",
Let's update this on V0 as well?
I also think logger.info("Model loading took %.4f GB and %.6f seconds", ...) sounds more natural and less confusing!
Good point, thanks!
The latest push tweaks the wording as suggested and makes the same change for V0.
Signed-off-by: Jun Duan <[email protected]>
LGTM! Thanks for this QoL change!
This PR tries to better capture the time of loading weights.

As we know, there are two major phases when loading a model: (A) downloading the model if it is not in the cache, and (B) loading the weights.

Due to the lazy nature of generators that yield tensors one by one, the downloading phase is not executed until the load_weights method of the corresponding model begins to iterate over the generator. So, in order to accurately capture the time of loading weights, we need to start our stopwatch after downloading is finished and before the first tensor is yielded. That's the general idea of this PR.
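A minimal sketch of the timing pitfall described above. This is not vLLM code: the function names and sleep durations are hypothetical stand-ins that simulate the download and per-tensor loading work.

```python
import time


def download_weights():
    """Stand-in for phase (A): downloading the model (simulated latency)."""
    time.sleep(0.2)  # pretend this is the slow download
    return ["tensor_a", "tensor_b", "tensor_c"]


def weight_generator():
    # Generator bodies run lazily: nothing here, including the download,
    # executes until the consumer requests the first tensor.
    for tensor in download_weights():
        time.sleep(0.05)  # pretend this is per-tensor loading work
        yield tensor


def time_loading_naive():
    # Naive timing: the stopwatch starts before the generator is first
    # iterated, so the download time leaks into the measurement.
    start = time.perf_counter()
    for _ in weight_generator():
        pass
    return time.perf_counter() - start


def time_loading_accurate():
    # The idea described in the PR: finish the download first, then start
    # the stopwatch before the first tensor is processed.
    tensors = download_weights()
    start = time.perf_counter()
    for _ in tensors:
        time.sleep(0.05)  # same simulated per-tensor work
    return time.perf_counter() - start


naive = time_loading_naive()
accurate = time_loading_accurate()
print(f"naive: {naive:.2f}s  accurate: {accurate:.2f}s")
```

With these numbers the naive measurement reports roughly 0.35 s (download plus loading) while the accurate one reports roughly 0.15 s (loading only), mirroring how the download time would otherwise be folded into the "Loading weights took" log line.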