[Core][Frontend] Support Passing Processor Kwargs #8657
Conversation
Signed-off-by: Alex-Brooks <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:
Hmm, originally I was thinking of dynamic kwargs via multi-modal inputs, e.g.
llm.generate({"multi_modal_data": {"image": {"data": image, "options": image_kwargs}}})
but this works too!
I'll take a closer look later today.
@@ -134,6 +134,7 @@ def __init__(
     max_seq_len_to_capture: int = 8192,
     disable_custom_all_reduce: bool = False,
     disable_async_output_proc: bool = False,
+    processor_kwargs=None,
Suggested change:
-    processor_kwargs=None,
+    processor_kwargs: Optional[Dict[str, Any]] = None,
@@ -211,7 +217,7 @@ def _default_input_processor(self, ctx: InputContext,
         """The default input processor is a no-op."""
         return inputs

-    def register_input_processor(self, processor: InputProcessor):
+    def register_input_processor(self, processor: InputProcessor) -> Callable:
Suggested change:
-    def register_input_processor(self, processor: InputProcessor) -> Callable:
+    def register_input_processor(self, processor: InputProcessor):
I intentionally didn't specify the return type here so that the type variables are automatically inferred without having to spell them all out.
If you do want to explicitly specify it, it should return `Callable[[N], N]` here.
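For reference, a minimal sketch of how a decorator can be typed as `Callable[[N], N]` so the decorated class's type is preserved (the names and body here are illustrative, not vLLM's actual definitions):

```python
from typing import Any, Callable, TypeVar

N = TypeVar("N")  # type of the decorated model class

def register_input_processor(processor: Callable[..., Any]) -> Callable[[N], N]:
    # The returned decorator hands the model class back unchanged,
    # so callers keep the precise class type N.
    def wrapper(model_cls: N) -> N:
        # (registration side effects would happen here)
        return model_cls
    return wrapper
```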
-    def process_input(self, model_config: "ModelConfig",
-                      inputs: LLMInputs) -> LLMInputs:
+    def _process_input(self, inputs: LLMInputs, model_config: "ModelConfig",
+                       processor: Callable, **processor_kwargs) -> LLMInputs:
Suggested change:
-    processor: Callable, **processor_kwargs) -> LLMInputs:
+    processor: InputProcessor, **processor_kwargs: Any) -> LLMInputs:
        return processor(InputContext(model_config), inputs,
                         **processor_kwargs)

    def create_input_processor(self, model_config: "ModelConfig") -> Callable:
Suggested change:
-    def create_input_processor(self, model_config: "ModelConfig") -> Callable:
+    def create_input_processor(self, model_config: "ModelConfig"):
                             **processor_kwargs)

    def _get_model_input_processor(self,
                                   model_config: "ModelConfig") -> Callable:
Suggested change:
-    model_config: "ModelConfig") -> Callable:
+    model_config: "ModelConfig"):
If you do want to explicitly specify it, it should return `InputProcessor` here.
Sounds good, thanks @DarkLight1337! 🙂 I wanted to open this one first since it's useful by itself, and to make sure the overall approach looks reasonable and has good test coverage, since it touches a few different things. I'm happy to add that in a follow-up PR though - I did implement this with that in mind as a next step, and it won't require changes to individual models to support it if additional init-time processor kwargs are exposed in the meantime! 🤞
In support of #7861 - It's split into two PRs to make it easier to review
This PR adds support for passing `processor_kwargs` at initialization time to override values, e.g., in the image processor config, and adds a pattern that we can use to easily implement such overrides, especially in the case of multimodal models. More specifically, it adds a well-defined way to pass valid `processor_kwargs` as expanded keyword-only arguments (which should have default values) to the processing callables that consume them. This is important since the provided overrides may greatly change the number of image tokens per multimodal instance.

It also forwards some args to the video processor, since those weren't being pushed through yet.
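For illustration, a hedged sketch of the intended init-time flow (`num_crops` is just an example key; valid kwargs depend on the model's processor):

```python
from vllm import LLM

# Hypothetical override forwarded to the model's processor at init time;
# kwargs the processor doesn't accept are warned about and dropped.
llm = LLM(
    model="microsoft/Phi-3-vision-128k-instruct",
    trust_remote_code=True,
    processor_kwargs={"num_crops": 16},
)
```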
Some Implementation Considerations
I've tried to implement this to be easily extensible for potential inference-time configuration and intuitive to use coming from `transformers`. More specifically, I was aiming for the following (a sketch of the warn-and-drop behavior follows this list):

- Having to look up where something like `num_crops` comes from is a bad experience, and implementing a per-model mapping of allowed config overrides is also fragile, since things could be moved around a bit for models that are written to try to generically support fine-tuned models with different LLMs, etc.
- If a provided kwarg can't be consumed, `warn` that it'll be unused and drop it from the processor kwargs, instead of throwing some nasty unexpected keyword argument deep in the model class.
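A minimal sketch of that warn-and-drop behavior, assuming the processor's keyword-only parameters can be inspected (the helper name is illustrative, not the actual vLLM implementation):

```python
import inspect
import warnings
from typing import Any, Callable, Dict

def drop_unused_kwargs(processor: Callable[..., Any],
                       processor_kwargs: Dict[str, Any]) -> Dict[str, Any]:
    """Keep only kwargs the processor declares; warn about the rest."""
    accepted = {
        name
        for name, param in inspect.signature(processor).parameters.items()
        if param.kind == inspect.Parameter.KEYWORD_ONLY
    }
    for name in sorted(set(processor_kwargs) - accepted):
        warnings.warn(f"processor_kwargs[{name!r}] is not used by "
                      f"{processor!r} and will be dropped")
    return {k: v for k, v in processor_kwargs.items() if k in accepted}
```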
Examples

I've opened a second PR on top of this PR which makes `num_crops` overridable at init time for phi3v models and illustrates the usage here; see this commit for what adding a `processor_kwarg` looks like, as well as example tests for each place it's used in phi3v.

You can also try running those examples on this branch to see what happens if an override is unsupported. Since it leverages the default mapper, whose kwargs can't easily be inspected since it gets created through the automodel, there will be some mismatches that eventually crash, since the processor kwargs will be used to initialize the auto model, but you'll at least get a bunch of unused-kwarg warnings out in the warmup. In situations where the class has its own mapper and a bad kwarg was used, it would be ignored like it is everywhere else (same case as the sad not-found-kwargs tests here).
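To make the model-side pattern concrete, here's a hedged sketch of an input processor exposing an overridable kwarg, in the spirit of the phi3v follow-up (the default value and body are illustrative):

```python
from vllm.inputs import InputContext, LLMInputs

def input_processor_for_phi3v(ctx: InputContext,
                              llm_inputs: LLMInputs,
                              *,
                              num_crops: int = 16) -> LLMInputs:
    # num_crops is an expanded keyword-only argument with a default, so an
    # init-time processor_kwargs={"num_crops": ...} override lands here.
    # ... recompute prompt/image token counts based on num_crops ...
    return llm_inputs
```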
BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE
PR Checklist
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.
PR Title and Classification

Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:

- `[Bugfix]` for bug fixes.
- `[CI/Build]` for build or continuous integration improvements.
- `[Doc]` for documentation fixes and improvements.
- `[Model]` for adding a new model or improving an existing model. Model name should appear in the title.
- `[Frontend]` for changes on the vLLM frontend (e.g., OpenAI API server, `LLM` class, etc.)
- `[Kernel]` for changes affecting CUDA kernels or other compute kernels.
- `[Core]` for changes in the core vLLM logic (e.g., `LLMEngine`, `AsyncLLMEngine`, `Scheduler`, etc.)
- `[Hardware][Vendor]` for hardware-specific changes. Vendor name should appear in the prefix (e.g., `[Hardware][AMD]`).
- `[Misc]` for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality

The PR needs to meet the following code quality standards:

- Please use `format.sh` to format your code.
- Please add documentation to `docs/source/` if the PR modifies the user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.
Adding or changing kernels

- Each custom kernel needs a schema and one or more implementations to be registered with PyTorch.
- Custom operations that return Tensors require meta-functions. Meta-functions should be implemented and registered in Python so that dynamic dims can be handled automatically. See above documents for a description of meta-functions.
- Use `torch.library.opcheck()` to test the function registration and meta-function for any registered ops (see the sketch below and `tests/kernels` for examples).
Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with `rfc-required`.
and might not go through the PR.What to Expect for the Reviews
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feel confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:
action-required
label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!