Fix loading Qwen VL fp4 ckpt #7301
base: main
Conversation
Caution

Review failed: failed to post review comments.

Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration: MCP, Jira, and Linear integrations are disabled by default for public repositories. You can enable these sources in your CodeRabbit configuration.

📒 Files selected for processing (1)

- tensorrt_llm/_torch/models/modeling_qwen2vl.py

🧰 Additional context used

📓 Path-based instructions (3): CodeRabbit inference engine (CODING_GUIDELINES.md) rules for **/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}, **/*.py, and **/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}, each applied to the selected file. The full rule text is reproduced in the posted review below.

🧠 Learnings (1): 📚 Learning from 2025-09-03T13:16:06.824Z, applied to the selected file.

🧬 Code graph analysis (1): tensorrt_llm/_torch/models/modeling_qwen2vl.py (4)

🪛 Ruff (0.12.2)

tensorrt_llm/_torch/models/modeling_qwen2vl.py:373: Undefined name (F821)

📝 Walkthrough

Vision encoder construction moved from HF from_pretrained to config-driven instantiation using AutoConfig and model_class(config=vision_config, ...). A new public Qwen2VisionModelBase.load_weights filters and loads only visual.-prefixed weights.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant Caller
participant Qwen2VL_Init as Qwen2VisionModelBase
participant HF_Config as HF AutoConfig.from_pretrained
participant VisionClass as VisionModelClass
participant Weights as Pretrained Weights Store
participant MM as MM Encoder / Qwen2VLModelBase
Caller->>Qwen2VL_Init: instantiate(model_path, pretrained_config, ...)
Qwen2VL_Init->>HF_Config: AutoConfig.from_pretrained(model_path)
HF_Config-->>Qwen2VL_Init: hf_model_config (contains vision_config)
Qwen2VL_Init->>VisionClass: model_class(config=hf_model_config.vision_config, torch_dtype=..., attn_implementation='flash_attention_2')
VisionClass-->>Qwen2VL_Init: vision_model instance
Qwen2VL_Init->>Qwen2VL_Init: self.visual = vision_model.to(device)
alt Loading weights later
Caller->>Qwen2VL_Init: load_weights(weights)
Qwen2VL_Init->>VisionClass: filter keys prefix 'visual.' and load_state_dict(stripped_keys)
Qwen2VL_Init->>MM: mm_encoder.load_weights(weights)
MM->>Qwen2VL_Init: return
Qwen2VL_Init->>Qwen2VL_Init: init_rotary_cos_sin_ori()
end
```
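For orientation, here is a minimal sketch of the flow the diagram describes. The helper names (build_vision_model, load_vision_weights) are hypothetical and the signatures are simplified; this is not the PR's actual code.

```python
from transformers import AutoConfig


def build_vision_model(model_class, model_path, dtype, device):
    # Config-driven instantiation: read only the config from the checkpoint
    # directory instead of materializing weights via from_pretrained().
    hf_config = AutoConfig.from_pretrained(model_path)
    vision_model = model_class(config=hf_config.vision_config)
    return vision_model.to(device=device, dtype=dtype).eval()


def load_vision_weights(vision_model, weights):
    # Keep only 'visual.'-prefixed entries and strip the prefix so the keys
    # match the standalone vision module's own state dict.
    visual_weights = {
        name[len("visual."):]: tensor
        for name, tensor in weights.items()
        if name.startswith("visual.")
    }
    vision_model.load_state_dict(visual_weights)
```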
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers
/bot run

PR_Github #16708 [ run ] triggered by Bot
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tensorrt_llm/_torch/models/modeling_qwen2vl.py (2)
244-247: Fix under-padding of video frames; current formula yields incorrect multiples.
padded_num_frames = num_frames + num_frames % temporal_patch_size produces 6 for 5 frames with tps=4 (expected 8). This breaks token count for videos. Apply this diff:

```diff
-padded_num_frames = num_frames + num_frames % temporal_patch_size
+padded_num_frames = ((num_frames + temporal_patch_size - 1) // temporal_patch_size) * temporal_patch_size
```
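As a quick sanity check of the two formulas on the case cited above (5 frames, temporal patch size 4):

```python
num_frames, temporal_patch_size = 5, 4  # the failing case from the comment

old = num_frames + num_frames % temporal_patch_size  # 5 + 1 = 6 (under-padded)
new = ((num_frames + temporal_patch_size - 1)
       // temporal_patch_size) * temporal_patch_size  # ceil(5 / 4) * 4 = 8

print(old, new)  # 6 8
```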
593-596: Use torch.cat instead of torch.concatenate.
torch.concatenate is not a stable PyTorch API; torch.cat is the canonical call. Apply this diff:

```diff
-concat_cos_sin = torch.concatenate((cos, sin), axis=-1)
+concat_cos_sin = torch.cat([cos, sin], dim=-1)
```
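For illustration, the canonical call on dummy tensors (the shapes here are made up):

```python
import torch

cos = torch.randn(4, 8)
sin = torch.randn(4, 8)

# torch.cat with dim= is the canonical spelling of this concatenation.
concat_cos_sin = torch.cat([cos, sin], dim=-1)
print(concat_cos_sin.shape)  # torch.Size([4, 16])
```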
🧹 Nitpick comments (3)
tensorrt_llm/_torch/models/modeling_qwen2vl.py (3)
361-366: Avoid silent weight drops when enabling ignore_mismatched_sizes; capture and log loading_info.

This unblocks fp4 ckpt load but can silently skip weights (including vision). Capture loading_info and warn if any visual.* keys are missing/mismatched. Also add low_cpu_mem_usage=True for large checkpoints. Apply this diff:

```diff
-        model = model_class.from_pretrained(
-            model_path,
-            torch_dtype=pretrained_config.torch_dtype,
-            attn_implementation='flash_attention_2',
-            ignore_mismatched_sizes=True).eval()
+        model, loading_info = model_class.from_pretrained(
+            model_path,
+            torch_dtype=pretrained_config.torch_dtype,
+            attn_implementation='flash_attention_2',
+            ignore_mismatched_sizes=True,
+            low_cpu_mem_usage=True,
+            output_loading_info=True)
+        # Warn if vision encoder wasn't fully loaded
+        missing = set(loading_info.get("missing_keys", []))
+        mismatched = loading_info.get("mismatched_keys", [])
+        mismatched_keys = {
+            (m[0] if isinstance(m, (list, tuple)) else m) for m in mismatched
+        }
+        problematic = sorted(
+            k for k in (missing | mismatched_keys) if k.startswith("visual.")
+        )
+        if problematic:
+            logger.warning(
+                f"Qwen2-VL vision weights not fully loaded (ignored mismatches): "
+                f"{problematic[:10]}... total={len(problematic)}"
+            )
+        model = model.eval()
```
255-257: Use proper typing: Any/Dict rather than built-in any.
any here refers to the built-in function, not typing. Use Dict[str, Any]. Apply this diff:

```diff
-    def _preprocess(self, text: dict[str, any], mm_data: dict[str, any],
-                    mm_processor_kwargs: Dict[str, Any]):
+    def _preprocess(self, text: Dict[str, Any], mm_data: Dict[str, Any],
+                    mm_processor_kwargs: Dict[str, Any]):
```
1-1: Repository guideline: add NVIDIA copyright header.

This file lacks the required NVIDIA header for 2025. Please add it in a follow-up to keep this PR focused.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
tensorrt_llm/_torch/models/modeling_qwen2vl.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py:

- Code must target Python 3.8+
- Indent Python code with 4 spaces; do not use tabs
- Preserve module namespaces when importing; import modules/packages and access members via the module (e.g., from package.subpackage import foo; foo.SomeClass())
- Python file names should be snake_case
- Python class names should be PascalCase
- Python functions/methods and local variables should be snake_case; variables beginning with a number should be prefixed with k_ (e.g., k_99th_percentile)
- Global variables should be UPPER_SNAKE_CASE prefixed with G_ (e.g., G_MY_GLOBAL); constants should be UPPER_SNAKE_CASE
- Avoid shadowing variables from outer scopes; initialize all externally visible members in __init__
- Prefer docstrings for interfaces used outside a file; comments should be reserved for in-function or file-local interfaces
- Use Google-style docstrings for classes and functions; attributes and variables may be documented inline with trailing string literals
- Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns)
- In try/except, catch the narrowest exceptions possible
- For duck-typing patterns, keep the try body minimal and move logic to else to avoid masking unrelated failures
Files:
tensorrt_llm/_torch/models/modeling_qwen2vl.py
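As a runnable illustration of two of the Python rules above (namespace-preserving imports, and a minimal try body with the logic in else), a small hypothetical sketch:

```python
import io
# Preserve the module namespace: import the submodule and access members
# through it (path.join) rather than importing join directly.
from os import path


def first_line(obj):
    """Duck-typing: guard only the attribute lookup; keep logic in else."""
    try:
        reader = obj.readline  # minimal try body
    except AttributeError:  # narrowest exception possible
        return None
    else:
        return reader()  # unrelated failures here are not masked


print(first_line(io.StringIO("hello\nworld")))  # 'hello\n'
print(first_line(42))  # None
print(path.join("a", "b"))  # a/b
```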
**/*.{c,cc,cpp,cxx,h,hh,hpp,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)
Files:
tensorrt_llm/_torch/models/modeling_qwen2vl.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
PR_Github #16708 [ run ] completed with state
Signed-off-by: Pamela <[email protected]>
Signed-off-by: Pamela <[email protected]>
Force-pushed 4730bd7 to b45aee6 (Compare)
/bot run

PR_Github #17842 [ run ] triggered by Bot

PR_Github #17842 [ run ] completed with state

/bot run

PR_Github #17909 [ run ] triggered by Bot

PR_Github #17909 [ run ] completed with state
Signed-off-by: Pamela <[email protected]>
Force-pushed 630b8cd to e96dda5 (Compare)
/bot run

PR_Github #17956 [ run ] triggered by Bot

PR_Github #17956 [ run ] completed with state
Signed-off-by: Pamela <[email protected]>
Hi @pamelap-nvidia, I submitted the weight-loading fix in this PR (#8680).
Summary by CodeRabbit
New Features
Bug Fixes
Description
Test Coverage
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.
- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
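For example, a hypothetical invocation that runs a single test stage with fail-fast disabled (the stage name is taken from the examples above) would be: /bot run --stage-list "A10-PyTorch-1" --disable-fail-fast.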
kill

kill: Kill all running builds associated with the pull request.
skip
skip --comment COMMENT: Skip testing for the latest commit on the pull request.
--comment "Reason for skipping build/test"is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipelineReuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.