[WIP][Model] Extend Ultravox to accept audio longer than 30s #13631
base: main
Conversation
Signed-off-by: Farzad Abdolhosseini <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
@@ -15,7 +15,7 @@
from ....utils import RemoteOpenAIServer
from ...utils import check_logprobs_close

-MODEL_NAME = "fixie-ai/ultravox-v0_5-llama-3_2-1b"
+MODEL_NAME = "fixie-ai/ultravox-v0_3-llama-3_2-1b"
This is temporary and will be reverted.
output = super()._call_hf_processor(
    prompt=prompt,
    mm_data=item_processor_data,
    mm_kwargs=mm_kwargs,
The processor is updated to handle multiple audio inputs. This is still a bit of a WIP since 1) not all models are updated yet (only the v0_3 version of the 1B model), and 2) during this PR I realized that the new processor will break this vLLM implementation, so I have to figure out what to do there.
feature_extractor = self.get_feature_extractor()
max_audio_tokens = math.ceil(feature_extractor.chunk_length *
                             _AUDIO_TOKENS_PER_SECOND)

-return {"audio": max_audio_tokens}
+return {}
I'm not sure if I'm doing the right thing here. I just want to say there's no limit to input audio anymore.
FYI @NickLucche for the usage of Whisper.
Thanks for the contrib!
# to handle longer than 30s audio, each audio might be split
# into multiple chunks; as such, their batch dimension can be
# higher than the number of audio samples
audio_features=MultiModalFieldConfig.batched("audio_chunked"),
audio_token_len=MultiModalFieldConfig.batched("audio_chunked"),
audio_lens=MultiModalFieldConfig.batched("audio_chunked"),
# num_chunks can convert audio_chunked to audio batch dimension
audio_num_chunks=MultiModalFieldConfig.batched("audio"),
I might be doing something wrong here; perhaps I should use .flat_from_sizes, but I'm not sure exactly how that works.
You can use .flat_from_sizes in the processor to represent a variable batch size per audio (assuming that the processor concatenates the result across each audio). The processor cache should still work correctly as long as the processor output for a given audio input doesn't depend on other audio inputs. Inside the model, you can torch.split them according to audio_num_chunks to recover the original batches.
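For illustration, here is a minimal sketch of how that suggestion could fit together. Field names are taken from this PR; the exact `MultiModalFieldConfig` import path and signatures are assumptions based on vLLM at the time of review and may differ across versions.

```python
# Sketch only, not the actual implementation in this PR.
import torch
from vllm.multimodal.inputs import MultiModalFieldConfig

# Suppose 3 audios were split into 3, 1, and 2 chunks respectively,
# so the chunked tensors have a flat batch dimension of 6.
num_chunks = torch.tensor([3, 1, 2])

fields = dict(
    # each audio owns num_chunks[i] consecutive rows of the flat batch
    audio_features=MultiModalFieldConfig.flat_from_sizes("audio", num_chunks),
    audio_token_len=MultiModalFieldConfig.flat_from_sizes("audio", num_chunks),
    audio_lens=MultiModalFieldConfig.flat_from_sizes("audio", num_chunks),
    # one entry per audio
    audio_num_chunks=MultiModalFieldConfig.batched("audio"),
)

# Inside the model, recover the per-audio chunk groups from the flat batch.
def split_by_audio(audio_features: torch.Tensor,
                   audio_num_chunks: torch.Tensor) -> list[torch.Tensor]:
    return list(torch.split(audio_features, audio_num_chunks.tolist(), dim=0))
```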
re @NickLucche: Here's the processor link: https://huggingface.co/fixie-ai/ultravox-v0_3-llama-3_2-1b/blob/main/ultravox_processing.py#L209. The logic: for each audio, split it into 30-second chunks (but do not pad the last chunk to 30s, which is the same as before). There are other ways we could have done this, but it matches what we do on the Ultravox side for both the fine-tuning we do and evals; if we end up updating those, I'll update vLLM as well. Also, note that since we don't pad the last chunk, and since in most cases the audio is shorter than 30s, the number of frames does not match across samples.
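As a rough standalone illustration of that chunking (a hypothetical helper, not the actual ultravox_processing.py code): split into 30-second chunks and leave the final chunk unpadded.

```python
import numpy as np

def chunk_audio(audio: np.ndarray, sample_rate: int = 16_000,
                chunk_seconds: int = 30) -> list[np.ndarray]:
    # slice the waveform into fixed-size chunks; the last chunk keeps its
    # natural length instead of being padded to 30 s
    chunk_len = chunk_seconds * sample_rate
    return [audio[i:i + chunk_len] for i in range(0, len(audio), chunk_len)]

# 70 s of audio -> 30 s, 30 s, and 10 s chunks; the last one is shorter,
# which is why the number of frames does not match across samples.
chunks = chunk_audio(np.zeros(70 * 16_000, dtype=np.float32))
assert [len(c) // 16_000 for c in chunks] == [30, 30, 10]
```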
Signed-off-by: Farzad Abdolhosseini <[email protected]>
Force-pushed from 9f00316 to 0c5363e
Ok, I see, so that's naive chunking where you don't account for splitting mid-word, nor do you have any overlap and/or a prompt carried over from the previous chunk. This case seems much easier to handle on the vLLM side, given the changes are already in HF. Let's just make sure the batched Whisper forward is accounted for by the initial profiler run to avoid OOM.
Currently the Ultravox model input is capped at 30 seconds and any extra audio is truncated (AFAIK). Also, each sample is fed to Whisper individually (without being batched).
This PR allows longer audio by chunking it first, running the Whisper encoder in batch mode, and then concatenating the results.
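For context, here is a hedged sketch of "run the Whisper encoder in batch mode over the chunks, then concatenate" written against plain HF transformers rather than the actual vLLM/Ultravox code path; the model name is illustrative, and the chunks are assumed to come from a splitter like the one sketched earlier. Note that, unlike this PR's processor, the default feature extractor pads every chunk to 30 s.

```python
import torch
from transformers import WhisperFeatureExtractor, WhisperModel

SAMPLING_RATE = 16_000

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
encoder = WhisperModel.from_pretrained("openai/whisper-small").get_encoder()

def encode_chunks_batched(chunks):
    """Encode all chunks of one audio with a single batched encoder forward."""
    # the default feature extractor pads each chunk to 30 s of log-mel frames;
    # the PR's processor instead leaves the last chunk unpadded and tracks the
    # real lengths (audio_lens)
    features = feature_extractor(
        chunks, sampling_rate=SAMPLING_RATE, return_tensors="pt"
    ).input_features
    with torch.no_grad():
        # one batched forward over all chunks instead of one call per chunk
        hidden = encoder(features).last_hidden_state  # (num_chunks, frames, dim)
    # concatenate chunk embeddings in time order into a single sequence
    return hidden.flatten(0, 1)
```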
TODO: