Conversation

shuyixiong
Collaborator

@shuyixiong shuyixiong commented Oct 7, 2025

Summary by CodeRabbit

  • Bug Fixes

    • Improved shutdown stability by avoiding Ray cleanup when Ray isn’t initialized.
    • Prevented crashes during interpreter exit and refined placement group cleanup.
    • More robust behavior in environments without Ray or with MPI disabled.
  • Tests

    • Enhanced error reporting by unwrapping Ray errors to surface root causes.
    • Strengthened test cleanup logic to avoid leaks and crashes if initialization fails.
    • Adjusted test semantics to run previously skipped Ray-related scenarios with clearer failures.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@shuyixiong shuyixiong requested a review from a team as a code owner October 7, 2025 11:22
@shuyixiong shuyixiong requested a review from hchings October 7, 2025 11:22
@shuyixiong shuyixiong changed the title [None][fix] Fix Ray resource cleanup and error handling in LoRA test [None] [fix] Fix Ray resource cleanup and error handling in LoRA test Oct 7, 2025
Contributor

coderabbitai bot commented Oct 7, 2025

📝 Walkthrough

Adds guards in Ray executor to perform cleanup only when Ray is initialized. Test utilities gain conditional Ray imports and a new context manager to unwrap Ray errors. Test setup/teardown logic is made robust to partial initialization, and a PyTorch test now uses the new error-exposing helper and removes a skip decorator.

Changes

Cohort / File(s) Summary
Executor shutdown guards
tensorrt_llm/executor/ray_executor.py
Gate cleanup on ray.is_initialized(): remove placement group and call ray.shutdown() only if initialized; avoid cleanup during interpreter exit when Ray isn’t initialized.
Test utility: Ray error unwrapping
tests/unittest/utils/util.py
Add try_expose_error_in_ray(error_type) context manager to unwrap RayActorError/RayTaskError to underlying exception; switch to guarded Ray import with fallback to tensorrt_llm.ray_stub.
Test LLM API utils cleanup
tests/unittest/llmapi/lora_test_utils.py
Add mpi_disabled import; guarded Ray import with stub fallback; revise init/cleanup to handle partial initialization: in the finally block, shut down llm if it was created; otherwise, if mpi_disabled(), call ray.shutdown().
PyTorch test adjustments
tests/unittest/llmapi/test_llm_pytorch.py
Import try_expose_error_in_ray; remove @skip_ray from one test; wrap previous RuntimeError assertions with try_expose_error_in_ray(RuntimeError).

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant T as Test code
  participant CM as try_expose_error_in_ray
  participant R as Ray worker/task
  Note over CM: MPI disabled path
  T->>CM: enter context(error_type=RuntimeError)
  T->>R: invoke remote call
  R-->>T: RayActorError/RayTaskError (with cause/error_msg)
  T->>CM: exception bubbles into context
  alt has nested cause of expected type
    CM-->>T: raise underlying RuntimeError
  else has formatted error_msg matching expected type
    CM-->>T: raise RuntimeError with extracted message
  else no match
    CM-->>T: re-raise original Ray error
  end
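The unwrap logic in the diagram above can be sketched as a small context manager. This is a hypothetical reconstruction, not the actual tests/unittest/utils/util.py helper: WrappedError stands in for Ray's RayActorError/RayTaskError, and try_expose_error for try_expose_error_in_ray.

```python
from contextlib import contextmanager


class WrappedError(Exception):
    """Stand-in for Ray's RayActorError/RayTaskError, which wrap the real failure."""


@contextmanager
def try_expose_error(error_type):
    """Re-raise the underlying error_type hidden inside a WrappedError.

    Mirrors the shape of the PR's try_expose_error_in_ray: if the wrapper's
    __cause__ chain contains error_type, surface it; otherwise re-raise the
    original wrapper.
    """
    try:
        yield
    except WrappedError as wrapper:
        cause = wrapper.__cause__
        while cause is not None:
            if isinstance(cause, error_type):
                raise cause from None  # expose the root cause to the test
            cause = cause.__cause__
        raise  # no match: let the original wrapper error propagate


def remote_call():
    # Simulates a Ray actor call whose failure arrives wrapped.
    try:
        raise RuntimeError("original failure")
    except RuntimeError as err:
        raise WrappedError("actor died") from err


try:
    with try_expose_error(RuntimeError):
        remote_call()
except RuntimeError as exposed:
    print(exposed)  # -> original failure
```

The real helper reportedly also falls back to parsing a formatted error_msg string (the middle branch in the diagram); that branch is omitted from this sketch.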
sequenceDiagram
  autonumber
  participant E as RayExecutor.shutdown()
  participant RI as ray.is_initialized()
  participant PG as PlacementGroup
  participant Ray as ray
  E->>RI: check initialized
  alt Ray initialized
    E->>PG: remove if present
    E->>Ray: ray.shutdown()
  else not initialized
    Note over E: Skip Ray cleanup
  end
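The guard in the second diagram boils down to a check-before-cleanup pattern. A minimal runnable sketch, with a fake ray module and a list standing in for the placement group (names are illustrative, not the actual RayExecutor code):

```python
class FakeRay:
    """Minimal stand-in for the ray module so the guard logic runs anywhere."""

    def __init__(self, initialized):
        self._initialized = initialized
        self.shutdown_calls = 0

    def is_initialized(self):
        return self._initialized

    def shutdown(self):
        self.shutdown_calls += 1
        self._initialized = False


def executor_shutdown(ray, placement_group):
    # Only touch Ray state when Ray is actually up: calling cleanup APIs on an
    # uninitialized Ray can trigger auto-initialization during interpreter exit.
    if ray.is_initialized():
        if placement_group is not None:
            placement_group.clear()  # stand-in for removing the placement group
        ray.shutdown()


ray = FakeRay(initialized=False)
executor_shutdown(ray, placement_group=["bundle"])
print(ray.shutdown_calls)  # -> 0: cleanup skipped, no auto-init risk
```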

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 42.86% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
Description Check ⚠️ Warning The pull request description includes only the template placeholders without a real title, description text, or test coverage details, and the PR checklist remains largely unfilled, so it does not satisfy the repository’s required template structure. Please replace the placeholder comments with a valid PR title following the template convention, provide a concise Description and Test Coverage section that explain the issue, solution, and relevant tests, and complete the PR Checklist by marking appropriate items to demonstrate that all requirements have been addressed.
✅ Passed checks (1 passed)
Check name Status Explanation
Title Check ✅ Passed The title clearly states that the pull request addresses fixing Ray resource cleanup and error handling specifically in the LoRA test, which corresponds directly to the main adjustments made in the tests and related cleanup logic and is concise and specific.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9298f1b and 960c65f.

📒 Files selected for processing (4)
  • tensorrt_llm/executor/ray_executor.py (1 hunks)
  • tests/unittest/llmapi/lora_test_utils.py (2 hunks)
  • tests/unittest/llmapi/test_llm_pytorch.py (2 hunks)
  • tests/unittest/utils/util.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/executor/ray_executor.py
  • tests/unittest/llmapi/lora_test_utils.py
  • tests/unittest/llmapi/test_llm_pytorch.py
  • tests/unittest/utils/util.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/executor/ray_executor.py
  • tests/unittest/llmapi/lora_test_utils.py
  • tests/unittest/llmapi/test_llm_pytorch.py
  • tests/unittest/utils/util.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/executor/ray_executor.py
  • tests/unittest/llmapi/lora_test_utils.py
  • tests/unittest/llmapi/test_llm_pytorch.py
  • tests/unittest/utils/util.py
🧬 Code graph analysis (2)
tests/unittest/llmapi/lora_test_utils.py (2)
tensorrt_llm/_utils.py (1)
  • mpi_disabled (518-520)
tensorrt_llm/executor/ray_executor.py (1)
  • shutdown (217-236)
tests/unittest/llmapi/test_llm_pytorch.py (3)
tests/unittest/utils/util.py (1)
  • try_expose_error_in_ray (462-487)
tests/unittest/llmapi/lora_test_utils.py (1)
  • check_llama_7b_multi_lora_from_request_test_harness (82-133)
tensorrt_llm/llmapi/llm_args.py (1)
  • PeftCacheConfig (973-1039)
🪛 Ruff (0.13.3)
tests/unittest/utils/util.py

472-472: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


484-484: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


487-487: Use raise without specifying exception name

Remove exception name

(TRY201)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (7)
tensorrt_llm/executor/ray_executor.py (1)

227-236: LGTM! Guards prevent Ray auto-initialization during shutdown.

The added ray.is_initialized() checks correctly prevent triggering Ray's auto-initialization behavior during interpreter exit when cleanup is attempted but Ray is no longer active. This resolves potential crashes or hangs during teardown.

tests/unittest/utils/util.py (1)

28-31: LGTM! Guarded import supports environments without Ray.

The fallback to ray_stub allows tests to run in environments where Ray is unavailable, consistent with other files in this PR.
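The guarded-import pattern referenced here, sketched generically. The real code falls back to tensorrt_llm.ray_stub; the SimpleNamespace stub below is only a portable stand-in for illustration.

```python
import types

try:
    import ray  # real Ray, when available
except ImportError:
    # Fallback stub exposing just the surface the callers touch; the PR uses
    # tensorrt_llm.ray_stub here instead of this ad-hoc namespace.
    ray = types.SimpleNamespace(is_initialized=lambda: False)

# Either way, callers can probe Ray state without an ImportError at import time.
print(ray.is_initialized())  # -> False when Ray isn't running
```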

tests/unittest/llmapi/test_llm_pytorch.py (2)

31-32: LGTM! Import enables error unwrapping in Ray mode.

Importing try_expose_error_in_ray allows this test file to surface nested errors when running under Ray orchestration.


416-451: Test now runs in Ray mode with proper error surfacing.

The removal of @skip_ray and addition of try_expose_error_in_ray context managers correctly enable this test to run under Ray orchestration while still catching the expected RuntimeError. The helper unwraps Ray's error wrapping to expose the underlying exception for proper assertion.

tests/unittest/llmapi/lora_test_utils.py (3)

12-12: LGTM! Import enables conditional Ray cleanup.

The mpi_disabled import is used correctly in the finally block to determine when manual Ray cleanup is needed.


16-19: LGTM! Guarded import supports environments without Ray.

Consistent with the pattern in tests/unittest/utils/util.py, this fallback allows tests to function when Ray is unavailable.


115-130: LGTM! Robust cleanup handles partial initialization.

The pattern correctly handles cleanup in both success and failure cases:

  • If llm initialization succeeds, its shutdown() method handles Ray cleanup
  • If initialization fails (llm remains None), manual ray.shutdown() prevents resource leaks that would trigger pytest-threadleak detection
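The cleanup pattern described above can be sketched as follows. FakeRayModule and the callable arguments are illustrative stand-ins, not the actual lora_test_utils code:

```python
def run_with_cleanup(create_llm, mpi_disabled, ray):
    """Shut down llm if it was created; otherwise, in Ray (MPI-disabled) mode,
    fall back to ray.shutdown() so a failed init doesn't leak Ray threads."""
    llm = None
    try:
        llm = create_llm()
        # ... exercise llm here ...
    finally:
        if llm is not None:
            llm.shutdown()
        elif mpi_disabled():
            ray.shutdown()


class FakeRayModule:
    """Records whether the fallback cleanup path was taken."""

    def __init__(self):
        self.shutdown_called = False

    def shutdown(self):
        self.shutdown_called = True


fake_ray = FakeRayModule()


def failing_create_llm():
    raise RuntimeError("init failed")


try:
    run_with_cleanup(failing_create_llm, mpi_disabled=lambda: True, ray=fake_ray)
except RuntimeError:
    pass
print(fake_ray.shutdown_called)  # -> True: Ray cleaned up despite the failure
```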

@shuyixiong shuyixiong self-assigned this Oct 8, 2025
@shuyixiong
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20781 [ run ] triggered by Bot

@shuyixiong shuyixiong changed the title [None] [fix] Fix Ray resource cleanup and error handling in LoRA test [None] [fix] Fix ray resource cleanup and error handling in LoRA test Oct 8, 2025
@tensorrt-cicd
Collaborator

PR_Github #20781 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15709 completed with status: 'FAILURE'

@shuyixiong
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20800 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20800 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15725 completed with status: 'FAILURE'

@shuyixiong
Collaborator Author

/bot run --stage-list "H100_PCIe-PyTorch-1" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #20828 [ run ] triggered by Bot

@shuyixiong
Collaborator Author

/bot kill

@shuyixiong
Collaborator Author

/bot help


github-actions bot commented Oct 9, 2025

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See the full subcommand reference in the GitHub Bot Help section earlier in this thread.

@shuyixiong
Collaborator Author

/bot run --reuse-test 15725

@tensorrt-cicd
Collaborator

PR_Github #20834 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20828 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #15749 (Blue Ocean) completed with status: ABORTED

Member

@tongyuantongyu tongyuantongyu left a comment


Let's refine the ray_stub.py a bit:

Subject: [PATCH] refine stub
---
Index: tensorrt_llm/ray_stub.py
===================================================================
diff --git a/tensorrt_llm/ray_stub.py b/tensorrt_llm/ray_stub.py
--- a/tensorrt_llm/ray_stub.py	(revision 3a4ffaed2a588d4d4bb2588c490df0348103b5d0)
+++ b/tensorrt_llm/ray_stub.py	(revision 44d40b337f14acb492f28efd2362a1669abd1526)
@@ -12,11 +12,11 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-import functools
+from functools import wraps as _wraps
 
-from tensorrt_llm._utils import mpi_disabled
+from tensorrt_llm._utils import mpi_disabled as _mpi_disabled
 
-if mpi_disabled():
+if _mpi_disabled():
     raise RuntimeError(
         "Ray requested (TLLM_DISABLE_MPI=1), but not installed. Please install Ray."
     )
@@ -27,10 +27,11 @@
     def decorator(func):
         # Returns a function that always raises.
         # Decorated class depends on ray, but ray is not installed.
-        @functools.wraps(func)
+        @_wraps(func)
         def stub_checker(*_, **__):
             raise RuntimeError(
-                "Ray not installed, cannot use Ray based feature.")
+                f'Ray not installed, so the remote function / actor "{func.__name__}" is not available.'
+            )
 
         return stub_checker
 
@@ -38,3 +39,9 @@
         return decorator(args[0])
 
     return decorator
+
+
+def __getattr__(name):
+    raise RuntimeError(
+        f'Ray not installed, so "ray.{name}" is unavailable. Please install Ray.'
+    )
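The module-level __getattr__ added at the end of the patch relies on PEP 562: when normal attribute lookup on a module fails, Python calls the module's __getattr__. A runnable demonstration with an in-memory module (ray_stub_demo is a made-up name for illustration, not the real tensorrt_llm.ray_stub):

```python
import sys
import types

stub = types.ModuleType("ray_stub_demo")


def _missing(name):
    # Any undefined attribute access on the stub raises an actionable error,
    # mirroring the __getattr__ added in the patch above.
    raise RuntimeError(
        f'Ray not installed, so "ray.{name}" is unavailable. Please install Ray.')


stub.__getattr__ = _missing  # PEP 562 hook, looked up in the module's globals
sys.modules["ray_stub_demo"] = stub

import ray_stub_demo

try:
    ray_stub_demo.is_initialized  # would be ray.is_initialized with real Ray
except RuntimeError as err:
    print(err)  # -> Ray not installed, so "ray.is_initialized" is unavailable. Please install Ray.
```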

@tensorrt-cicd
Collaborator

PR_Github #20834 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15754 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

Collaborator

@hchings hchings left a comment


Overall LGTM.

A minor point: it's not ideal that the Ray utilities (_ray_utils.py, ray_stub.py, etc.) are scattered around the codebase as they are now. We can address this in future MRs if the team agrees on a better approach.

@shuyixiong
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20950 [ run ] triggered by Bot

@shuyixiong shuyixiong force-pushed the user/shuyix/fix_ray_lora_test branch from 3ee0877 to ccd59b1 Compare October 11, 2025 14:25
@shuyixiong
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21080 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21080 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15932 completed with status: 'FAILURE'

@shuyixiong shuyixiong force-pushed the user/shuyix/fix_ray_lora_test branch from ccd59b1 to 397756c Compare October 13, 2025 07:08
@shuyixiong
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21167 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21167 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15981 completed with status: 'FAILURE'

@shuyixiong
Collaborator Author

/bot run --reuse-test --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #21207 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21207 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16010 completed with status: 'FAILURE'

@shuyixiong shuyixiong force-pushed the user/shuyix/fix_ray_lora_test branch from 397756c to 80e6505 Compare October 14, 2025 01:48
@shuyixiong
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21278 [ run ] triggered by Bot

@shuyixiong shuyixiong changed the title [None] [fix] Fix ray resource cleanup and error handling in LoRA test [TRTLLM-8507][fix] Fix ray resource cleanup and error handling in LoRA test Oct 14, 2025
@tensorrt-cicd
Collaborator

PR_Github #21278 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16064 completed with status: 'FAILURE'

@shuyixiong
Collaborator Author

/bot run --reuse-test --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #21338 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21338 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16106 completed with status: 'SUCCESS'

@joyang-nv joyang-nv merged commit 6776caa into NVIDIA:main Oct 14, 2025
7 checks passed