
Conversation

@lucaslie (Member) commented Oct 8, 2025

Summary by CodeRabbit

  • New Features

    • Submodule-aware export flow with dynamic shape capture and post-processing.
    • New run_forward_for_capture helper for safer export-time execution.
    • Patch framework supports disabled-by-default patches; added Pixtral dtype patch (enabled by default).
  • Breaking Changes

    • Metadata prep ops now use position_ids; input_ids parameter removed across attention/conv/SSM paths.
    • Factories now provide export info via get_export_infos; legacy extra-inputs path removed.
    • Transforms operate on full models by default (not per-graph).
  • Deprecations/Removals

    • Mistral3VLM factory and related public export removed.

Description

Note: contains changes from #8157; please only review the final commit.

  • Support for capturing the text-only portion of VLMs
  • Support for torch-cudagraph and torch-opt for VLMs
  • New and existing model support:
    • Fixed existing support for llama4 and mistral3
    • Added support for torch-opt/torch-cudagraph for llama4 + mistral3
    • New support for Qwen/Qwen2.5-VL-7B-Instruct, including torch-opt/torch-cudagraph. This was previously blocked on complex dynamism; see Support Qwen 2.5 VL nv-auto-deploy/TensorRT-LLM#127

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
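
For example, a typical invocation that combines several of the documented flags (the stage name is illustrative) might be:

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"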

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

Signed-off-by: Lucas Liebenwein <[email protected]>
@lucaslie lucaslie requested a review from a team as a code owner October 8, 2025 20:57
@lucaslie lucaslie requested a review from Fridah-nv October 8, 2025 20:57
@lucaslie lucaslie moved this from Backlog to In review in AutoDeploy Board Oct 8, 2025
@lucaslie lucaslie self-assigned this Oct 8, 2025
@lucaslie (Member, Author) commented Oct 8, 2025

/bot run

@tensorrt-cicd (Collaborator) commented

PR_Github #20822 [ run ] triggered by Bot

@coderabbitai bot (Contributor) commented Oct 8, 2025

📝 Walkthrough

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description Check (⚠️ Warning): The PR description is still populated with template placeholders and lacks a concrete summary of changes and specific test coverage details, making it unclear what was implemented and which tests validate it. Resolution: replace the @coderabbitai summary placeholder with a concise overview of the changes, list relevant test cases under “## Test Coverage,” and remove unused template instructions so the description fully covers the required sections.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 48.09%, which is insufficient; the required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
  • Title Check (✅ Passed): The title uses the required ticket/type template, correctly summarises the addition of VLM subgraph support along with cudagraph and compile enhancements, and is concise and relevant to the core change.
✨ Finishing touches
  • 📝 Generate docstrings

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (8)
tensorrt_llm/_torch/auto_deploy/transform/library/compile_model.py (1)

52-58: Do not pass compile_backend back into the backend ctor.
self.config.model_dump() still includes compile_backend even though we already used it to choose compiler_cls. Every backend constructor I checked (CompileBackendTorchSimple, CompileBackendTorchCompile, etc.) does not accept a compile_backend keyword, so this call will raise a TypeError: __init__() got an unexpected keyword argument 'compile_backend'. Please exclude that field (e.g., self.config.model_dump(exclude={"compile_backend"})) before splatting into the ctor.
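
A standalone illustration of the suggested fix (the config and backend classes below are hypothetical stand-ins, not the actual CompileModel code):

from typing import List

from pydantic import BaseModel


class DemoCompileConfig(BaseModel):
    # hypothetical fields; only `compile_backend` mirrors the real selector field
    compile_backend: str = "torch-simple"
    cuda_graph_batch_sizes: List[int] = [1, 8, 32]


class DemoBackend:
    # hypothetical backend ctor that, like the real backends, does NOT accept `compile_backend`
    def __init__(self, cuda_graph_batch_sizes: List[int]):
        self.cuda_graph_batch_sizes = cuda_graph_batch_sizes


cfg = DemoCompileConfig()
# exclude the already-consumed selector field before splatting into the ctor
backend = DemoBackend(**cfg.model_dump(exclude={"compile_backend"}))  # no TypeError

The same exclude pattern applies directly to self.config.model_dump() in the transform.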

tensorrt_llm/_torch/auto_deploy/models/patches/pixtral.py (1)

112-121: Respect caller-provided return_dict.
Line 117 currently forces return_dict=True, so callers requesting return_dict=False (or relying on the config default when None) now get a dict regardless, diverging from the Hugging Face contract. Please plumb the argument through instead of overriding it.

@@
-    out = self.transformer(
+    if return_dict is None:
+        return_dict = getattr(self.config, "use_return_dict", True)
+
+    out = self.transformer(
@@
-        return_dict=True,
+        return_dict=return_dict,
tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

Required by repo guidelines. Add the standard header before the docstring.

As per coding guidelines

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

Required by repo guidelines.

As per coding guidelines

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
tensorrt_llm/_torch/auto_deploy/models/hf.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

Required by repo guidelines.

As per coding guidelines

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

Required by repo guidelines.

As per coding guidelines

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
tensorrt_llm/_torch/auto_deploy/transform/interface.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

Required by repo guidelines.

As per coding guidelines

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (1)

1-1: Add NVIDIA Apache-2.0 header (compliance).

File is missing the required NVIDIA Apache-2.0 header with current year.

As per coding guidelines, prepend:

+# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.


🧹 Nitpick comments (13)
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_mamba_cached_op.py (1)

186-186: Remove obsolete input_ids comment.

Leftover commented code is noise now that the metadata path no longer consumes input_ids. Please drop it to keep the fixture tight.

-    # input_ids = torch.randint(0, 1000, (b, s), device=device)
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_attention_op.py (1)

472-472: Trim the stale input_ids comment.

Same as the mamba test, this commented line can go now that the signature no longer takes input_ids.

-        # input_ids = torch.randint(0, 1000, (batch_size, seq_len_val), device=device)
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.py (1)

40-41: Silence the unused parameter warning.

model isn’t used; rename it to _model (or similar) so Ruff stops flagging ARG002.

Apply this diff:

-    def get_export_infos(self, model: nn.Module) -> List[SubModuleExportInfo]:
+    def get_export_infos(self, _model: nn.Module) -> List[SubModuleExportInfo]:
tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py (2)

39-45: Silence unused-arg warnings for cm/shared_config.

Prefix with underscores to satisfy linters without behavioral change.

-    def _apply_to_full_model(
-        self,
-        mod: nn.Module,
-        cm: CachedSequenceInterface,
-        factory: ModelFactory,
-        shared_config: SharedConfig,
-    ) -> Tuple[nn.Module, TransformInfo]:
+    def _apply_to_full_model(
+        self,
+        mod: nn.Module,
+        _cm: CachedSequenceInterface,
+        factory: ModelFactory,
+        _shared_config: SharedConfig,
+    ) -> Tuple[nn.Module, TransformInfo]:

46-51: Minor: avoid redundant move when devices match.

If the configured checkpoint device already equals the target device, the extra move_to_device call is redundant. Optional micro-optimization.
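
A minimal sketch of such a guard (the attribute names are illustrative, not the actual config fields):

import torch
from torch import nn


def maybe_move_to_device(mod: nn.Module, checkpoint_device: str, target_device: str) -> None:
    """Move `mod` only when the checkpoint device differs from the target device."""
    if torch.device(checkpoint_device) != torch.device(target_device):
        mod.to(torch.device(target_device))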

tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py (1)

193-207: Add safety guard for missing _gm when using patched forward.

Prevents obscure AttributeError if called before profiling step attaches _gm.

 def forward_with_prepare_metadata(mod: nn.Module, **cm_kwargs):
     """Run prepare_metadata as pre-processing step, add to kwargs, and then run regular forward."""
-    gm = mod._gm
+    assert hasattr(mod, "_gm"), "Expected `mod._gm` set by detection pass before cached forward."
+    gm = mod._gm
tensorrt_llm/_torch/auto_deploy/models/hf.py (1)

453-455: Silence unused-arg warning in get_export_infos.

Rename param to _model to match usage.

-    def get_export_infos(self, model: nn.Module) -> List[SubModuleExportInfo]:
+    def get_export_infos(self, _model: nn.Module) -> List[SubModuleExportInfo]:
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (1)

270-275: Use tuple unpacking instead of concatenation.

Slightly cleaner and matches ruff suggestion.

-        return ("position_ids",) + self._cached_arg_names
+        return ("position_ids", *self._cached_arg_names)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (1)

342-351: Improve exception logging; keep skip-on-error behavior.

Capture stack with ad_logger.exception for easier debugging.

-                except Exception as e:
-                    error_msg = f"Transform {t_name} failed"
-                    ad_logger.warning(f"{error_msg}: {e}")
+                except Exception:
+                    ad_logger.exception("Transform %s failed", t_name)
                     info_apply = TransformInfo(skipped=True, num_matches=0)
tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (2)

51-56: Capture hook: support positional args and silence unused param.

  • Current assert breaks if a submodule is invoked positionally.
  • Inner hook param mod is unused (ARG001).

Refactor to normalize args→kwargs and avoid the assert (the sketch below relies on inspect.signature, so import inspect at the module top).

-    def _capture_kwargs(mod: nn.Module, args, kwargs) -> None:
-        assert not args, "positional arguments are not supported for capture"
-        captured_kwargs.clear()
-        captured_kwargs.update(kwargs)
+    def _capture_kwargs(_m: nn.Module, args, kwargs) -> None:
+        # Normalize positional + keyword args to kwargs using the callee's signature.
+        try:
+            sig = inspect.signature(_m.forward)
+            bound = sig.bind_partial(*args, **(kwargs or {}))
+            normalized = bound.arguments
+        except Exception:
+            # Fallback to raw kwargs if signature binding fails.
+            normalized = kwargs or {}
+        captured_kwargs.clear()
+        captured_kwargs.update(normalized)
         return None

161-164: Guard dynamic_shapes lookup to avoid KeyError.

If a captured Tensor arg lacks an entry in dynamic_shape_lookup, this will KeyError.

-            dynamic_shapes = {
-                k: e_info.dynamic_shape_lookup[k] if isinstance(v, torch.Tensor) else None
-                for k, v in captured_kwargs.items()
-            }
+            dynamic_shapes = {
+                k: (e_info.dynamic_shape_lookup.get(k) if isinstance(v, torch.Tensor) else None)
+                for k, v in captured_kwargs.items()
+            }

Optionally log missing keys for visibility.
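
For example, missing entries could be surfaced like this (a sketch; assumes ad_logger is available in this module as elsewhere in the transform library):

missing = [
    k
    for k, v in captured_kwargs.items()
    if isinstance(v, torch.Tensor) and k not in e_info.dynamic_shape_lookup
]
if missing:
    ad_logger.warning(f"No dynamic shape entry for captured tensor kwargs: {missing}")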

tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (2)

39-46: Silence unused parameters in signature.

mod, cm, shared_config unused (ARG002). Rename to underscore to appease linters while keeping API.

-    def _apply_to_full_model(
-        self,
-        mod: nn.Module,
-        cm: CachedSequenceInterface,
-        factory: ModelFactory,
-        shared_config: SharedConfig,
-    ) -> Tuple[nn.Module, TransformInfo]:
+    def _apply_to_full_model(
+        self,
+        _mod: nn.Module,
+        _cm: CachedSequenceInterface,
+        factory: ModelFactory,
+        _shared_config: SharedConfig,
+    ) -> Tuple[nn.Module, TransformInfo]:

68-75: Silence unused parameters in signature.

Same here for mod and shared_config.

-    def _apply_to_full_model(
-        self,
-        mod: nn.Module,
-        cm: CachedSequenceInterface,
-        factory: ModelFactory,
-        shared_config: SharedConfig,
-    ) -> Tuple[nn.Module, TransformInfo]:
+    def _apply_to_full_model(
+        self,
+        _mod: nn.Module,
+        cm: CachedSequenceInterface,
+        factory: ModelFactory,
+        _shared_config: SharedConfig,
+    ) -> Tuple[nn.Module, TransformInfo]:
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 80517b7 and c3465d0.

📒 Files selected for processing (43)
  • tensorrt_llm/_torch/auto_deploy/config/default.yaml (4 hunks)
  • tensorrt_llm/_torch/auto_deploy/config/transformers.yaml (1 hunks)
  • tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (6 hunks)
  • tensorrt_llm/_torch/auto_deploy/custom_ops/cuda_backend_causal_conv.py (2 hunks)
  • tensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.py (2 hunks)
  • tensorrt_llm/_torch/auto_deploy/custom_ops/mla.py (2 hunks)
  • tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_attention.py (2 hunks)
  • tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_causal_conv.py (2 hunks)
  • tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_mamba.py (2 hunks)
  • tensorrt_llm/_torch/auto_deploy/custom_ops/triton_attention.py (2 hunks)
  • tensorrt_llm/_torch/auto_deploy/export/export.py (3 hunks)
  • tensorrt_llm/_torch/auto_deploy/export/interface.py (2 hunks)
  • tensorrt_llm/_torch/auto_deploy/models/__init__.py (1 hunks)
  • tensorrt_llm/_torch/auto_deploy/models/factory.py (3 hunks)
  • tensorrt_llm/_torch/auto_deploy/models/hf.py (8 hunks)
  • tensorrt_llm/_torch/auto_deploy/models/mistral3.py (0 hunks)
  • tensorrt_llm/_torch/auto_deploy/models/patches/llama4.py (4 hunks)
  • tensorrt_llm/_torch/auto_deploy/models/patches/mistral3.py (3 hunks)
  • tensorrt_llm/_torch/auto_deploy/models/patches/pixtral.py (5 hunks)
  • tensorrt_llm/_torch/auto_deploy/models/patches/starcoder.py (1 hunks)
  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py (0 hunks)
  • tensorrt_llm/_torch/auto_deploy/shim/interface.py (1 hunks)
  • tensorrt_llm/_torch/auto_deploy/transform/interface.py (9 hunks)
  • tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (3 hunks)
  • tensorrt_llm/_torch/auto_deploy/transform/library/compile_model.py (3 hunks)
  • tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (3 hunks)
  • tensorrt_llm/_torch/auto_deploy/transform/library/kvcache.py (5 hunks)
  • tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py (6 hunks)
  • tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py (3 hunks)
  • tensorrt_llm/_torch/auto_deploy/transform/optimizer.py (1 hunks)
  • tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (6 hunks)
  • tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py (2 hunks)
  • tests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.py (1 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_cuda_causal_conv_cached_op.py (1 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_attention_op.py (2 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_causal_conv_cached_op.py (1 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_mamba_cached_op.py (1 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_llama4_vlm_patch.py (2 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3.py (0 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3_patches.py (3 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_build_small_single.py (2 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py (3 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.py (3 hunks)
💤 Files with no reviewable changes (3)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3.py
  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
  • tensorrt_llm/_torch/auto_deploy/models/mistral3.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_causal_conv_cached_op.py
  • tensorrt_llm/_torch/auto_deploy/export/interface.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3_patches.py
  • tensorrt_llm/_torch/auto_deploy/models/patches/starcoder.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_cuda_causal_conv_cached_op.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/cuda_backend_causal_conv.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_causal_conv.py
  • tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py
  • tensorrt_llm/_torch/auto_deploy/transform/optimizer.py
  • tests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/compile_model.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_mamba.py
  • tensorrt_llm/_torch/auto_deploy/shim/interface.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_llama4_vlm_patch.py
  • tensorrt_llm/_torch/auto_deploy/transformations/_graph.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_attention.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_build_small_single.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/mla.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_attention_op.py
  • tensorrt_llm/_torch/auto_deploy/models/patches/mistral3.py
  • tensorrt_llm/_torch/auto_deploy/export/export.py
  • tensorrt_llm/_torch/auto_deploy/models/patches/pixtral.py
  • tensorrt_llm/_torch/auto_deploy/models/__init__.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_mamba_cached_op.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/triton_attention.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/kvcache.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py
  • tensorrt_llm/_torch/auto_deploy/models/hf.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py
  • tensorrt_llm/_torch/auto_deploy/models/patches/llama4.py
  • tensorrt_llm/_torch/auto_deploy/transform/interface.py
  • tensorrt_llm/_torch/auto_deploy/models/factory.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • (same files as listed under the first pattern above)
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • (same files as listed under the first pattern above)
🧠 Learnings (1)
📚 Learning: 2025-08-06T03:47:16.802Z
Learnt from: venkywonka
PR: NVIDIA/TensorRT-LLM#6650
File: tests/integration/test_lists/qa/llm_perf_cluster.yml:33-37
Timestamp: 2025-08-06T03:47:16.802Z
Learning: Ministral is a valid model name from Mistral AI, distinct from the regular Mistral models. In TensorRT-LLM test configurations, "ministral_8b" and "ministral_8b_fp8" are correct model identifiers and should not be changed to "mistral_8b".

Applied to files:

  • tests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.py
🧬 Code graph analysis (30)
tensorrt_llm/_torch/auto_deploy/export/interface.py (4)
tensorrt_llm/llmapi/llm_args.py (1)
  • Field (70-97)
tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (1)
  • get_config_class (36-37)
tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (1)
  • get_config_class (122-123)
tensorrt_llm/_torch/auto_deploy/transform/library/kvcache.py (2)
  • get_config_class (81-82)
  • get_config_class (239-240)
tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_mistral3_patches.py (1)
tensorrt_llm/_torch/auto_deploy/models/hf.py (1)
  • get_example_inputs_with_images (609-659)
tensorrt_llm/_torch/auto_deploy/custom_ops/cuda_backend_causal_conv.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
  • _get_sanitized_seq_len (385-425)
  • seq_len (293-294)
tensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.py (2)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
  • seq_len (293-294)
  • _get_sanitized_seq_len (385-425)
tensorrt_llm/_torch/attention_backend/flashinfer.py (1)
  • page_size (185-189)
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.py (4)
tensorrt_llm/_torch/auto_deploy/models/factory.py (5)
  • FullModelExportInfo (72-91)
  • ModelFactory (94-334)
  • SubModuleExportInfo (27-69)
  • get_export_infos (323-334)
  • model (125-127)
tensorrt_llm/_torch/auto_deploy/models/hf.py (2)
  • get_export_infos (453-454)
  • get_export_infos (668-669)
tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py (1)
  • get_export_infos (44-45)
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py (1)
  • get_export_infos (40-41)
tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_causal_conv.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (6)
  • SequenceInfo (34-689)
  • _get_sanitized_seq_len (385-425)
  • seq_len (293-294)
  • input_pos (297-298)
  • cache_loc (301-302)
  • pages_per_seq (305-306)
tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py (1)
tensorrt_llm/_torch/auto_deploy/models/factory.py (4)
  • FullModelExportInfo (72-91)
  • SubModuleExportInfo (27-69)
  • get_export_infos (323-334)
  • model (125-127)
tensorrt_llm/_torch/auto_deploy/transform/optimizer.py (2)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (1)
  • CachedSequenceInterface (11-76)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (2)
  • TransformRegistry (503-531)
  • get (519-521)
tensorrt_llm/_torch/auto_deploy/transform/library/compile_model.py (3)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (4)
  • _apply_to_full_model (490-500)
  • SharedConfig (60-66)
  • TransformInfo (121-174)
  • get (519-521)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (1)
  • CachedSequenceInterface (11-76)
tensorrt_llm/_torch/auto_deploy/compile/compiler.py (2)
  • CompileBackendRegistry (12-31)
  • get (25-27)
tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_mamba.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (3)
  • SequenceInfo (34-689)
  • _get_sanitized_seq_len (385-425)
  • seq_len (293-294)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
  • GetCacheCallable (712-713)
  • SequenceInfo (34-689)
tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_llama4_vlm_patch.py (1)
tensorrt_llm/_torch/auto_deploy/export/interface.py (1)
  • apply_export_patches (237-280)
tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (1)
tensorrt_llm/module.py (1)
  • Module (33-226)
tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_attention.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
  • _get_sanitized_num_sequences (428-443)
  • seq_len (293-294)
tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (3)
tensorrt_llm/_torch/auto_deploy/export/export.py (2)
  • run_forward_for_capture (198-250)
  • torch_export_to_gm (253-321)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (2)
  • args (23-25)
  • named_args (28-30)
tensorrt_llm/_torch/auto_deploy/models/factory.py (5)
  • get_example_inputs (310-320)
  • get_export_infos (323-334)
  • dynamic_shape_lookup (36-51)
  • post_process (59-69)
  • post_process (90-91)
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py (4)
tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py (2)
  • SequenceEmbeddingInfo (48-86)
  • get_export_infos (44-45)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (1)
  • CacheConfig (28-31)
tensorrt_llm/_torch/auto_deploy/export/export.py (1)
  • torch_export_to_gm (253-321)
tensorrt_llm/_torch/auto_deploy/models/factory.py (5)
  • FullModelExportInfo (72-91)
  • ModelFactory (94-334)
  • SubModuleExportInfo (27-69)
  • get_export_infos (323-334)
  • model (125-127)
tensorrt_llm/_torch/auto_deploy/custom_ops/mla.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
  • _get_sanitized_num_sequences (428-443)
  • seq_len (293-294)
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_attention_op.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (4)
  • seq_len (293-294)
  • input_pos (297-298)
  • cache_loc (301-302)
  • pages_per_seq (305-306)
tensorrt_llm/_torch/auto_deploy/models/patches/mistral3.py (1)
tensorrt_llm/_torch/auto_deploy/export/interface.py (3)
  • DisabledBaseExportPatch (142-150)
  • ExportPatchRegistry (186-233)
  • register (192-201)
tensorrt_llm/_torch/auto_deploy/export/export.py (2)
tensorrt_llm/_torch/auto_deploy/export/interface.py (1)
  • apply_export_patches (237-280)
tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (3)
  • lift_to_meta (79-92)
  • tree_to (71-75)
  • load_buffers_and_params (32-68)
tensorrt_llm/_torch/auto_deploy/models/patches/pixtral.py (2)
tensorrt_llm/_torch/auto_deploy/export/interface.py (8)
  • DisabledBaseExportPatch (142-150)
  • ExportPatchRegistry (186-233)
  • register (192-201)
  • _apply_patch (132-134)
  • _apply_patch (174-177)
  • _revert_patch (137-139)
  • _revert_patch (179-183)
  • create_patch (221-228)
tensorrt_llm/_torch/models/modeling_pixtral.py (1)
  • PixtralVisionModel (170-256)
tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (6)
tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (1)
  • _apply_to_full_model (125-197)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (3)
  • _apply_to_full_model (490-500)
  • SharedConfig (60-66)
  • TransformInfo (121-174)
tensorrt_llm/_torch/auto_deploy/transform/library/compile_model.py (1)
  • _apply_to_full_model (42-65)
tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py (2)
  • _apply_to_full_model (39-54)
  • _apply_to_full_model (67-78)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (1)
  • CachedSequenceInterface (11-76)
tensorrt_llm/_torch/auto_deploy/models/factory.py (3)
  • ModelFactory (94-334)
  • model (125-127)
  • build_model (134-173)
tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py (5)
tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (2)
  • _apply_to_full_model (39-52)
  • _apply_to_full_model (68-88)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (3)
  • _apply_to_full_model (490-500)
  • SharedConfig (60-66)
  • TransformInfo (121-174)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (2)
  • CachedSequenceInterface (11-76)
  • to (37-41)
tensorrt_llm/_torch/auto_deploy/models/factory.py (2)
  • ModelFactory (94-334)
  • load_or_random_init (239-280)
tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (1)
  • move_to_device (135-142)
tensorrt_llm/_torch/auto_deploy/custom_ops/triton_attention.py (1)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (2)
  • _get_sanitized_num_sequences (428-443)
  • seq_len (293-294)
tensorrt_llm/_torch/auto_deploy/transform/library/kvcache.py (2)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (4)
  • _apply_to_full_model (490-500)
  • SharedConfig (60-66)
  • TransformInfo (121-174)
  • BaseTransform (213-500)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (3)
  • CachedSequenceInterface (11-76)
  • named_args (28-30)
  • initialize_caches (47-54)
tensorrt_llm/_torch/auto_deploy/models/hf.py (1)
tensorrt_llm/_torch/auto_deploy/models/factory.py (10)
  • FullModelExportInfo (72-91)
  • ModelFactory (94-334)
  • SubModuleExportInfo (27-69)
  • get_export_infos (323-334)
  • model (125-127)
  • post_process (59-69)
  • post_process (90-91)
  • _init_dynamic_shape_lookup (54-56)
  • _init_dynamic_shape_lookup (82-88)
  • init_processor (205-212)
tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py (4)
tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (2)
  • _apply_to_full_model (39-52)
  • _apply_to_full_model (68-88)
tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (1)
  • _apply_to_full_model (125-197)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (4)
  • _apply_to_full_model (490-500)
  • SharedConfig (60-66)
  • TransformInfo (121-174)
  • _apply (475-488)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (2)
  • CachedSequenceInterface (11-76)
  • named_args (28-30)
tensorrt_llm/_torch/auto_deploy/models/patches/llama4.py (1)
tensorrt_llm/_torch/auto_deploy/export/interface.py (4)
  • BaseExportPatch (47-139)
  • DisabledBaseExportPatch (142-150)
  • ExportPatchRegistry (186-233)
  • register (192-201)
tensorrt_llm/_torch/auto_deploy/transform/interface.py (5)
tensorrt_llm/_torch/auto_deploy/shim/interface.py (2)
  • CachedSequenceInterface (11-76)
  • args (23-25)
tensorrt_llm/_torch/auto_deploy/models/factory.py (3)
  • ModelFactory (94-334)
  • get (349-351)
  • model (125-127)
tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (5)
  • run_shape_prop (218-243)
  • named_graphmodules (95-99)
  • canonicalize_graph (174-187)
  • lift_to_meta (79-92)
  • placeholders_on_meta (312-341)
tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py (2)
  • _apply_to_full_model (39-52)
  • _apply_to_full_model (68-88)
tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (1)
  • _apply_to_full_model (125-197)
tensorrt_llm/_torch/auto_deploy/models/factory.py (6)
tests/unittest/_torch/thop/parallel/test_custom_ops.py (1)
  • custom_ops (37-42)
tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py (1)
  • CacheConfig (28-31)
tensorrt_llm/_torch/auto_deploy/models/hf.py (4)
  • _init_dynamic_shape_lookup (527-534)
  • post_process (499-525)
  • get_export_infos (453-454)
  • get_export_infos (668-669)
tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py (1)
  • get_export_infos (44-45)
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py (1)
  • get_export_infos (40-41)
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.py (1)
  • get_export_infos (40-41)
🪛 Ruff (0.13.3)
tensorrt_llm/_torch/auto_deploy/custom_ops/cuda_backend_causal_conv.py

83-83: Unused function argument: input_pos

(ARG001)


83-83: Unused function argument: cache_loc

(ARG001)


83-83: Unused function argument: pages_per_seq

(ARG001)


83-83: Unused function argument: page_size

(ARG001)

tensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.py

216-216: Unused function argument: input_pos

(ARG001)


216-216: Unused function argument: pages_per_seq

(ARG001)


216-216: Unused function argument: slot_idx

(ARG001)


216-216: Unused function argument: page_size

(ARG001)

tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_quantization.py

40-40: Unused method argument: model

(ARG002)

tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_causal_conv.py

169-169: Unused function argument: input_pos

(ARG001)


169-169: Unused function argument: cache_loc

(ARG001)


169-169: Unused function argument: pages_per_seq

(ARG001)


169-169: Unused function argument: page_size

(ARG001)

tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py

44-44: Unused method argument: model

(ARG002)

tensorrt_llm/_torch/auto_deploy/transform/library/compile_model.py

46-46: Unused method argument: factory

(ARG002)


47-47: Unused method argument: shared_config

(ARG002)

tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_mamba.py

144-144: Unused function argument: input_pos

(ARG001)


144-144: Unused function argument: cache_loc

(ARG001)


144-144: Unused function argument: pages_per_seq

(ARG001)


144-144: Unused function argument: page_size

(ARG001)

tensorrt_llm/_torch/auto_deploy/custom_ops/torch_backend_attention.py

381-381: Unused function argument: pages_per_seq

(ARG001)


381-381: Unused function argument: slot_idx

(ARG001)


381-381: Unused function argument: page_size

(ARG001)

tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py

51-51: Unused function argument: mod

(ARG001)


85-85: Avoid specifying long messages outside the exception class

(TRY003)


130-130: Unused method argument: shared_config

(ARG002)

tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py

40-40: Unused method argument: model

(ARG002)

tensorrt_llm/_torch/auto_deploy/custom_ops/mla.py

199-199: Unused function argument: position_ids

(ARG001)


199-199: Unused function argument: pages_per_seq

(ARG001)


199-199: Unused function argument: slot_idx

(ARG001)


199-199: Unused function argument: page_size

(ARG001)

tensorrt_llm/_torch/auto_deploy/models/patches/pixtral.py

59-59: Unused function argument: max_width

(ARG001)

tensorrt_llm/_torch/auto_deploy/transform/library/build_model.py

41-41: Unused method argument: mod

(ARG002)


42-42: Unused method argument: cm

(ARG002)


44-44: Unused method argument: shared_config

(ARG002)


70-70: Unused method argument: mod

(ARG002)


73-73: Unused method argument: shared_config

(ARG002)

tensorrt_llm/_torch/auto_deploy/transform/library/load_weights.py

42-42: Unused method argument: cm

(ARG002)


44-44: Unused method argument: shared_config

(ARG002)


71-71: Unused method argument: factory

(ARG002)


72-72: Unused method argument: shared_config

(ARG002)

tensorrt_llm/_torch/auto_deploy/custom_ops/triton_attention.py

311-311: Unused function argument: pages_per_seq

(ARG001)


311-311: Unused function argument: slot_idx

(ARG001)


311-311: Unused function argument: page_size

(ARG001)

tensorrt_llm/_torch/auto_deploy/transform/library/kvcache.py

246-246: Unused method argument: factory

(ARG002)


247-247: Unused method argument: shared_config

(ARG002)


323-323: Unused method argument: factory

(ARG002)


324-324: Unused method argument: shared_config

(ARG002)

tensorrt_llm/_torch/auto_deploy/custom_ops/attention_interface.py

274-274: Consider ("position_ids", *self._cached_arg_names) instead of concatenation

Replace with ("position_ids", *self._cached_arg_names)

(RUF005)

tensorrt_llm/_torch/auto_deploy/models/hf.py

453-453: Unused method argument: model

(ARG002)


508-508: Loop control variable embed_name not used within loop body

(B007)


512-515: Avoid specifying long messages outside the exception class

(TRY003)


560-563: Avoid specifying long messages outside the exception class

(TRY003)

tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py

114-114: Unused method argument: factory

(ARG002)


115-115: Unused method argument: shared_config

(ARG002)

tensorrt_llm/_torch/auto_deploy/transform/interface.py

346-346: Do not catch blind exception: Exception

(BLE001)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (23)
tensorrt_llm/_torch/auto_deploy/models/patches/starcoder.py (1)

3-3: Comment update looks good.

The added TODO note makes the removal condition explicit and keeps maintenance intent clear.

tensorrt_llm/_torch/auto_deploy/config/transformers.yaml (1)

9-31: LGTM! Consistent per-GM execution disablement.

The addition of run_per_gm: false across these transforms aligns with the broader migration toward full-model processing rather than per-graph-module subgraphs, as described in the PR objectives.

tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_build_small_single.py (2)

75-78: LGTM! Enables torch-opt backend for Llama-4.

This change aligns with the PR objective to add torch-opt support for llama4 models and removes the previous skip, expanding test coverage.


89-93: LGTM! Enables torch-cudagraph backend for Mistral.

This change aligns with the PR objective to add torch-cudagraph support for mistral3 models.

tensorrt_llm/_torch/auto_deploy/export/export.py (3)

6-6: LGTM! Required import for new function signature.

The Callable type is needed for the capture_fn parameter in the new run_forward_for_capture function.


198-250: LGTM! Well-designed abstraction for capture operations.

The new run_forward_for_capture function effectively extracts common logic for running capture operations with patches and meta device handling. The implementation is clean, well-documented, and provides good flexibility through the capture_fn parameter.


288-297: LGTM! Clean refactoring with improved separation of concerns.

The refactored torch_export_to_gm now delegates capture orchestration to run_forward_for_capture while keeping export-specific logic in the internal _capture_fn helper. This improves code maintainability and reusability.

tensorrt_llm/_torch/auto_deploy/transform/library/kvcache.py (3)

7-7: LGTM! Required import for nn.Module annotations.

The import is necessary for the updated method signatures that use nn.Module instead of GraphModule.


242-248: LGTM! Consistent interface migration to full-model processing.

The method signature update from _apply with GraphModule to _apply_to_full_model with nn.Module aligns with the broader migration toward full-model transformations described in the PR objectives.

Note: The static analysis warnings about unused factory and shared_config parameters are expected, as these are required by the BaseTransform interface signature (see relevant code snippets from interface.py).


319-325: LGTM! Consistent interface migration.

The InitializeCache transform follows the same interface pattern as ResizeKVCache, consistently updating to full-model processing.

tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_torch_causal_conv_cached_op.py (1)

171-188: LGTM! Test updated to reflect prepare_metadata API changes.

The removal of input_ids from the torch_causal_conv_prepare_metadata call aligns with the broader API refactor that shifts to using position_ids for sequence-length sanitization, as indicated in the AI summary.

tensorrt_llm/_torch/auto_deploy/models/__init__.py (1)

1-2: LGTM! Aligns with mistral3 deprecation.

Removing mistral3 from the public exports is consistent with the broader de-emphasis of mistral3 support in this PR, including marking the mistral3 patch as disabled by default.

tensorrt_llm/_torch/auto_deploy/models/patches/mistral3.py (2)

1-15: LGTM! Clear documentation of disabled patch status.

The updated docstring and import clearly communicate that this patch is disabled by default and retained for potential future use. This aligns with the broader patch system changes introducing disabled patches.


167-169: LGTM! Consistent with disabled patch pattern.

The change to inherit from DisabledBaseExportPatch aligns with the broader patch system modernization, allowing the patch to remain registered while being disabled by default (consistent with similar changes in llama4.py and pixtral.py).

tests/unittest/_torch/auto_deploy/unit/singlegpu/models/test_llama4_vlm_patch.py (2)

60-60: LGTM! Updated to new patch configuration API.

The change from patch_list to patch_configs with an explicit enabled flag provides more flexible control over patch application and aligns with the modernized patch system introduced in this PR.


89-97: LGTM! Consistent patch configuration format.

The migration to patch_configs dictionary format with explicit configurations for each patch improves clarity and aligns with the updated apply_export_patches API that accepts patch configurations (see relevant code snippet from interface.py).

tests/unittest/_torch/auto_deploy/_utils_test/_graph_test_helpers.py (1)

13-17: Export-info override looks good.

Thanks for wiring FakeFactory into the new export-info contract; this keeps the test helper compliant with ModelFactory’s abstract interface.

Also applies to: 44-46

tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_kv_cache.py (2)

1-16: DummyFactory export-info hook LGTM.

Covering the abstract get_export_infos with FullModelExportInfo keeps these tests aligned with the factory API refresh. 👍

Also applies to: 40-42


175-183: run_per_gm flags acknowledged.

Setting run_per_gm=False for the factory and export stages mirrors the new single-pass export flow, so no concerns here.

tensorrt_llm/_torch/auto_deploy/custom_ops/flashinfer_attention.py (1)

176-177: Position-id based sanitization looks correct.

Switching both real and fake paths to sanitize via position_ids keeps flashinfer in sync with the rest of the metadata APIs. Looks good.

Also applies to: 216-219

tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py (1)

124-128: Verify profiling_metadata plumbs through model.forward for all target models.

Some HF model forwards may not expect or forward this kwarg; ensure this is tested on llama4, mistral3, and Qwen2.5-VL.
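
One way to check this defensively before plumbing the kwarg through (a sketch; the helper name and usage are hypothetical):

import inspect

from torch import nn


def forward_accepts_kwarg(mod: nn.Module, name: str = "profiling_metadata") -> bool:
    """Return True if mod.forward takes `name` explicitly or via **kwargs."""
    params = inspect.signature(mod.forward).parameters
    return name in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )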

tensorrt_llm/_torch/auto_deploy/transform/interface.py (1)

83-86: Confirm default run_per_gm=True matches intended full-model flow.

Current default runs per-GraphModule; many transforms now implement _apply_to_full_model. Verify that the pipeline config overrides it where expected.

tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py (1)

186-191: Remove version-compat fallback for set_submodule
torch.nn.Module.set_submodule is supported from PyTorch 2.6 onward; if your project requires ≥2.6, you can drop the getattr/setattr fallback.
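For context, the fallback under discussion looks roughly like this; the helper name is hypothetical and the actual code in export_to_gm.py may differ:

from torch import nn

def set_submodule_compat(root: nn.Module, target: str, new_mod: nn.Module) -> None:
    if hasattr(root, "set_submodule"):  # available on newer PyTorch
        root.set_submodule(target, new_mod)
        return
    # manual fallback: walk to the parent module and replace the leaf attribute
    *parents, leaf = target.split(".")
    parent = root
    for name in parents:
        parent = getattr(parent, name)
    setattr(parent, leaf, new_mod)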

Comment on lines +499 to +526
def post_process(self, sub_mod: nn.Module, sub_gm: GraphModule):
"""Post-process the subgraph module and make sure the embedding remains available."""
# make sure get_input_embeddings function is available in the graph module
embed_tokens = sub_mod.get_input_embeddings()
sub_gm.get_input_embeddings = types.MethodType(
sub_mod.get_input_embeddings.__func__, sub_gm
)

# retrieve+replicate expected submodule hierarchy for where the embedding module is located
for embed_name, subsubmod in sub_mod.named_modules():
if subsubmod is embed_tokens:
break
else:
raise RuntimeError(
"Could not find embedding module in model. Expected embedding module to be a "
"submodule of the text submodule."
)
sub_gm.set_submodule(embed_name, embed_tokens)

# add a dummy node to the graph for making the embedding module impure --> impure nodes
# won't be deleted from the graph during cleanup and this way we ensure that the embedding
# module is not deleted from the GraphModule either.
# TODO (lucaslie): is there a better way to make the embedding module "sticky"?
n_embed_tokens = sub_gm.graph.get_attr(f"{embed_name}.weight")
sub_gm.graph.call_function(
torch._assert, args=(n_embed_tokens, "Avoid embedding getting deleted from graph.")
)

Contributor

🛠️ Refactor suggestion | 🟠 Major

Recompile GraphModule after mutating its Graph.

After adding nodes, call lint()+recompile() to sync generated code with the graph; prevents stale code issues.

         n_embed_tokens = sub_gm.graph.get_attr(f"{embed_name}.weight")
         sub_gm.graph.call_function(
             torch._assert, args=(n_embed_tokens, "Avoid embedding getting deleted from graph.")
         )
+        # finalize graph edits
+        sub_gm.graph.lint()
+        sub_gm.recompile()
🧰 Tools
🪛 Ruff (0.13.3)

508-508: Loop control variable embed_name not used within loop body

(B007)


512-515: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In tensorrt_llm/_torch/auto_deploy/models/hf.py around lines 499 to 526, after
mutating sub_gm.graph (adding the dummy assert node to keep the embedding
module), re-run FX graph validation and regenerate the GraphModule code by
calling sub_gm.graph.lint() (or torch.fx.graph_module.graph.lint()) followed by
sub_gm.recompile() so the GraphModule's generated code is synced with the
modified graph; insert these calls immediately after the call_function that adds
the dummy node and before returning or using sub_gm.

@@ -1,11 +1,15 @@
"""A simple wrapper transform to export a model to a graph module."""
Contributor

🛠️ Refactor suggestion | 🟠 Major

Add NVIDIA Apache-2.0 header (compliance).

File is missing the required NVIDIA Apache-2.0 header with current year.

As per coding guidelines, prepend:

+# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

As per coding guidelines

🤖 Prompt for AI Agents
In tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py around
lines 1 to 1, the file is missing the required NVIDIA Apache-2.0 header; prepend
the standard NVIDIA Apache-2.0 license header at the very top of the file
including the current year (2025) and the copyright holder (NVIDIA CORPORATION &
AFFILIATES), ensure the header text matches the project’s canonical Apache-2.0
header wording and includes the SPDX identifier or full notice, and save so the
header appears before any module docstring or code.

Comment on lines +89 to +106
reset_signature = False
if hasattr(forward_func, "__signature__"):
signature_attribute = mod.forward.__signature__
reset_signature = True

# construct signature object from kwargs
params_list = []
if is_method:
# heuristic to identify the self parameter
param_keys = list(signature_inspected.parameters.keys())
self_key = "self" if "self" in param_keys else param_keys[0]
params_list.append(signature_inspected.parameters[self_key].replace())
# the rest of the parameters as keyword only
params_list.extend(
[Parameter(k, kind=Parameter.KEYWORD_ONLY, annotation=type(v)) for k, v in kwargs.items()]
)
forward_func.__signature__ = Signature(parameters=params_list)
try:
Contributor

⚠️ Potential issue | 🔴 Critical

Bug: restoring signature from bound method instead of function.

When resetting, you read signature from mod.forward; if present only on forward_func, this loses it. Use forward_func consistently.

-    reset_signature = False
-    if hasattr(forward_func, "__signature__"):
-        signature_attribute = mod.forward.__signature__
-        reset_signature = True
+    reset_signature = False
+    if hasattr(forward_func, "__signature__"):
+        signature_attribute = forward_func.__signature__
+        reset_signature = True

Optional: avoid misleading annotations; prefer no annotation.

-    params_list.extend(
-        [Parameter(k, kind=Parameter.KEYWORD_ONLY, annotation=type(v)) for k, v in kwargs.items()]
-    )
+    params_list.extend(
+        [Parameter(k, kind=Parameter.KEYWORD_ONLY, annotation=inspect._empty) for k, v in kwargs.items()]
+    )

Comment on lines +137 to 145
# check if any submodules to be exported are children of other submodules that need to be
# exported. We don't allow for this since this may imply that the submodules are not
# independent, which would conflict with graph capture logic, i.e., you cannot graph-capture
# "model" and "model.text_model" for example. However, you can export "model.text_model" and
# "model.vision_model" separately.
sub_keys = [info.submodule_name for info in export_infos]
assert all(not k1.startswith(k2) for k1 in sub_keys for k2 in sub_keys if k1 != k2), (
f"Cannot export submodules of already exported submodules, {sub_keys=}"
)
Contributor

⚠️ Potential issue | 🟠 Major

Fix parent/child submodule check (dot-boundary + root handling).

A plain startswith check without a dot boundary causes false positives (e.g., "a.b2" vs "a.b"), and having the root name "" in the list blocks every other submodule.

-        sub_keys = [info.submodule_name for info in export_infos]
-        assert all(not k1.startswith(k2) for k1 in sub_keys for k2 in sub_keys if k1 != k2), (
-            f"Cannot export submodules of already exported submodules, {sub_keys=}"
-        )
+        sub_keys = [info.submodule_name for info in export_infos]
+        def _is_child(child: str, parent: str) -> bool:
+            return bool(parent) and child.startswith(parent + ".")
+        assert all(
+            not (_is_child(k1, k2) or _is_child(k2, k1))
+            for k1 in sub_keys for k2 in sub_keys if k1 != k2
+        ), f"Cannot export parent/child submodules together: {sub_keys=}"
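A quick standalone check of why the dot boundary matters (same helper as in the suggestion above):

def _is_child(child: str, parent: str) -> bool:
    return bool(parent) and child.startswith(parent + ".")

assert "a.b2".startswith("a.b")        # naive check: false positive for a sibling
assert not _is_child("a.b2", "a.b")    # dot boundary rejects the sibling
assert _is_child("a.b.c", "a.b")       # genuine child is still caught
assert not _is_child("a.b", "")        # the root name "" never counts as a parent here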
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/auto_deploy/transform/library/export_to_gm.py around
lines 137 to 145, the current parent/child check uses startswith which yields
false positives (e.g., "a.b2" vs "a.b") and an empty string blocks all
submodules; change the logic to detect parent-child only when the child key
begins with the parent plus a dot boundary: treat an empty string as the root
and disallow exporting root together with any other submodule, and otherwise
assert that no k1 equals k2 and no k1 starts with k2 + "." for any distinct
pair; update the assertion accordingly to use explicit dot-boundary checks and
root handling.

Comment on lines +110 to 117
def _apply_to_full_model(
self,
gm: GraphModule,
mod: nn.Module,
cm: CachedSequenceInterface,
factory: ModelFactory,
shared_config: SharedConfig,
) -> Tuple[GraphModule, TransformInfo]:
model = gm.factory_model

# Register profiler attn operator
Contributor

⚠️ Potential issue | 🟡 Minor

Fix return type annotation to match actual return (nn.Module).

Currently annotated as Tuple[GraphModule, TransformInfo] but returns mod (nn.Module).

-    ) -> Tuple[GraphModule, TransformInfo]:
+    ) -> Tuple[nn.Module, TransformInfo]:

Also silence unused args:

-        factory: ModelFactory,
-        shared_config: SharedConfig,
+        _factory: ModelFactory,
+        _shared_config: SharedConfig,
🧰 Tools
🪛 Ruff (0.13.3)

114-114: Unused method argument: factory

(ARG002)


115-115: Unused method argument: shared_config

(ARG002)

🤖 Prompt for AI Agents
In tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py
around lines 110 to 117, the function _apply_to_full_model is annotated to
return Tuple[GraphModule, TransformInfo] but actually returns a single
nn.Module; change the return type annotation to nn.Module to match the
implementation, and silence the unused parameters (e.g., rename unused args with
a leading underscore or reference them briefly like _ = cm) so linter warnings
go away while keeping behavior unchanged.

Comment on lines +265 to 270
# switch to cached attn implementation but _only_ for submodules/configs that have a cached
# attn node (we don't want to switch to cached attn implementation for all modules)
for mod in gm.factory_model.modules():
if hasattr(mod, "_node_ref"):
mod.config._attn_implementation = "ad_cached_mha"
for submod in mod.modules():
if hasattr(submod, "_node_ref"):
submod.config._attn_implementation = "ad_cached_mha"

Contributor

⚠️ Potential issue | 🟡 Minor

Guard access to submodule config when switching to cached attention.

Some modules with _node_ref may not have a config attribute; add a check to avoid AttributeError.

-        for submod in mod.modules():
-            if hasattr(submod, "_node_ref"):
-                submod.config._attn_implementation = "ad_cached_mha"
+        for submod in mod.modules():
+            if hasattr(submod, "_node_ref") and hasattr(submod, "config"):
+                submod.config._attn_implementation = "ad_cached_mha"
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/auto_deploy/transform/library/kvcache_transformers.py
around lines 265 to 270, the loop assumes every submodule with _node_ref has a
config attribute and sets submod.config._attn_implementation, which can raise
AttributeError; change the guard to check both hasattr(submod, "_node_ref") and
hasattr(submod, "config") (or submod.config is not None) before assigning, only
set _attn_implementation when the config exists and is writable.

"mistralai/Mistral-Small-3.1-24B-Instruct-2503": {
"llm_models_subdir": "Mistral-Small-3.1-24B-Instruct-2503",
"model_factory": "Mistral3VLM",
"model_factory": "AutoModelForImageTextToText",
Contributor

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

LGTM! Verify complete removal of "Mistral3VLM" references.

The change to use "AutoModelForImageTextToText" aligns with the standard transformers factory for VLMs and is consistent with the Llama-4-Scout configuration (Line 398). This supports the PR's goal of refactoring away from Mistral3-specific export paths.
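For reference, this is the standard transformers entry point the registry now resolves to; a minimal load sketch, assuming a transformers version recent enough to expose AutoModelForImageTextToText:

from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype="auto")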

Run the following script to verify all references to the old factory have been removed:


🏁 Script executed:

#!/bin/bash
# Description: Verify that "Mistral3VLM" has been completely removed from the codebase.

# Search for any remaining references to "Mistral3VLM"
rg -n "Mistral3VLM" --type py

Length of output: 998


Remove remaining Mistral3VLM references

  • tests/unittest/_torch/modeling/test_modeling_mistral.py:150,185,309,488
  • tensorrt_llm/_torch/models/__init__.py:16,50
  • tensorrt_llm/_torch/models/modeling_mistral.py:322,323,335
    Replace these usages with the new AutoModelForImageTextToText factory.
🤖 Prompt for AI Agents
In tests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.py around
line 442 and the other listed locations
(tests/unittest/_torch/modeling/test_modeling_mistral.py:150,185,309,488;
tensorrt_llm/_torch/models/__init__.py:16,50;
tensorrt_llm/_torch/models/modeling_mistral.py:322,323,335), replace any
remaining references to the old Mistral3VLM factory/class with the new
AutoModelForImageTextToText factory; update import paths if necessary to import
AutoModelForImageTextToText, remove or rename any Mistral3VLM variables/usages
to the new factory name, and run tests to ensure all references are consistently
updated.

@tensorrt-cicd
Collaborator

PR_Github #20822 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15744 completed with status: 'FAILURE'


b, s = 4, 6
input_ids = torch.randint(0, 1000, (b, s), device=device)
# input_ids = torch.randint(0, 1000, (b, s), device=device)
Collaborator

nit: remove commented-out lines, also in the other attention op tests

class TextModelExportInfo(SubModuleExportInfo):
"""An export configuration for the text model portion of a VLM."""

def post_process(self, sub_mod: nn.Module, sub_gm: GraphModule):
Collaborator

Could you explain a bit more about what the export process looks like now for VLMs?
It seems to me that:

  1. We only capture the text model, whose input args are inputs_embeds and position_ids.
  2. The whole model sees input_ids, (pixel_values), and position_ids, so we need to glue get_input_embeddings and the text model graph together.
  3. This seems to be done by adding get_input_embeddings to the graph module, but I didn't find the specific logic for the gluing part (see the toy sketch after this list for my current reading).
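To make the question concrete, a toy sketch of the flow as I currently understand it; all names and the call order here are my assumptions, not the actual implementation:

def toy_vlm_step(text_gm, input_ids, position_ids, pixel_values=None):
    # 1) token embeddings come from the embedding module kept on the graph module
    inputs_embeds = text_gm.get_input_embeddings()(input_ids)
    # 2) outside the capture, vision features would be merged into inputs_embeds
    #    at the image-token positions (not shown here)
    if pixel_values is not None:
        ...
    # 3) the exported text graph only ever sees inputs_embeds and position_ids
    return text_gm(inputs_embeds=inputs_embeds, position_ids=position_ids)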

# won't be deleted from the graph during cleanup and this way we ensure that the embedding
# module is not deleted from the GraphModule either.
# TODO (lucaslie): is there a better way to make the embedding module "sticky"?
n_embed_tokens = sub_gm.graph.get_attr(f"{embed_name}.weight")
Collaborator

I feel like I'm missing something, but why would the embedding module get deleted? It should be an input to the graph.


# retrieve sanitzed metadata
seq_len = SequenceInfo._get_sanitized_seq_len(input_ids, seq_len)
seq_len = SequenceInfo._get_sanitized_seq_len(position_ids, seq_len)
Collaborator

Is there a simple reason why "seq len sanitization" needs to be done in the prepare_attention_metadata ops, instead of in the preprocessing that happens before the graph is launched?

)

def _init_dynamic_shape_lookup(self) -> Dict[str, DynamicShape]:
batch_size_dyn = Dim.DYNAMIC
Collaborator

nit: from a readability standpoint, let's not abbreviate arbitrarily (unless there is a strong reason to truncate the variable name here?)
