Updates for Pytorch 2.7 #8429
base: dev
Conversation
Pull Request Overview
This PR updates MONAI’s torch dependency for improved compatibility with newer PyTorch releases. The changes include:
- Increasing the minimum torch version in pyproject.toml from 2.3.0 to 2.4.1.
- Updating the installation command in the GitHub Actions workflow (pythonapp.yml) accordingly.
- Modifying the torch version matrix in the minimal workflow (pythonapp-min.yml).
Reviewed Changes
Copilot reviewed 4 out of 6 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| pyproject.toml | Updated minimum torch dependency and Black target versions |
| .github/workflows/pythonapp.yml | Updated torch installation command to new dependency |
| .github/workflows/pythonapp-min.yml | Revised torch version matrix for testing |
Files not reviewed (2)
- docs/requirements.txt: Language not supported
- setup.cfg: Language not supported
Signed-off-by: Eric Kerfoot <[email protected]>
Signed-off-by: Eric Kerfoot <[email protected]>
It's possible the CPU provided by the Windows runner is too old for PyTorch 2.7, which may now require instructions it doesn't have.
The issue on Windows appears to be related to float64 calculations, specifically with `grid_sample` on CPU tensors.
I think I've traced the issue to what appears to be a bug in PyTorch's `grid_sample`.
Thank you for looking into this! Instead of waiting for a fix from PyTorch, do you think it's possible to implement a workaround by altering the dtype on Windows?
I'm looking into that now and will hopefully have something soon. We may have to convert to float32 and back in places, so we may have knock-on precision issues.
We may need to wait for a torch-tensorrt release that supports PyTorch 2.7.
Hi @KumoLiu, this now passes the Windows tests. I've raised the issue with PyTorch, so hopefully version 2.7.1 will resolve it; in the meantime we can run the blossom tests and discuss whether to merge this.
/build
Error log:
/build
The blossom issue is related to the current pytype version, so I've added an upper bound on the pytype version.
/build
This seems related to the new pip release: https://pypi.org/project/pip/#history. Issue raised here: google/pytype#1909
Walkthrough

The updates relax upper version constraints for the torch dependency.

Changes
Sequence Diagram(s)

sequenceDiagram
participant User
participant AffineTransform/Resample
participant PyTorch grid_sample
User->>AffineTransform/Resample: Call with input tensor
AffineTransform/Resample->>AffineTransform/Resample: Prepare contiguous or unsqueezed tensor (cache in local variable)
AffineTransform/Resample->>PyTorch grid_sample: Call grid_sample with prepared tensor
PyTorch grid_sample-->>AffineTransform/Resample: Return output
AffineTransform/Resample-->>User: Return output
Actionable comments posted: 0
🔭 Outside diff range comments (1)
.github/workflows/pythonapp-min.yml (1)
154-158: `pip install torch` ignores the Windows exclusion and the CPU-only wheel source

In the “min-dep-pytorch” job we fall back to a bare `pip install torch` when the matrix value is `latest`. Two problems:
- The wheel resolver may pick up a CUDA build, defeating the “CPU-only” objective used elsewhere (`--index-url …/cpu`).
- Although this job runs on Linux, the same pattern appears in the `min-dep-os` job for Windows (line 57) and will happily install 2.7.0, bypassing the `!=2.7.0` guard you just added in `requirements.txt`.

- python -m pip install torch
+ # CPU-only latest build, respecting platform pins
+ python -m pip install "torch!=2.7.0" --index-url https://download.pytorch.org/whl/cpu

Apply the same logic to the Windows path (line 57) to honour the exclusion.
♻️ Duplicate comments (2)
pyproject.toml (1)
5-5: Consider raising minimum version to align with PR objectives.

The upper bound removal enables PyTorch 2.7 compatibility as intended. However, the previous review comment raises a valid point about potentially updating the minimum version to 2.7.0 since this PR specifically targets PyTorch 2.7 updates.
Please confirm whether the minimum version should remain at 2.4.1 for backward compatibility or be raised to 2.7.0 to match the PR's focus on PyTorch 2.7 updates.
.github/workflows/pythonapp-min.yml (1)
127-127: Still missing explicit `2.7.0` in the test matrix

The PR title claims PyTorch 2.7 support, yet the matrix lists the pinned versions up to 2.6.0 plus `latest`. Relying on `latest` to pick up 2.7.0 is brittle (as soon as 2.7.1 lands, 2.7.0 will silently drop out of CI).

Add an explicit `2.7.0` entry and keep `latest` for forward-looking coverage.
🧹 Nitpick comments (1)
requirements.txt (1)
1-2: Two overlapping `torch` lines – consolidate for clarity

Having both a global `torch>=2.4.1` and a Windows-specific exclusion works, but it is easy to mis-read and invites drift. Consider collapsing into a single, self-documenting marker:

-torch>=2.4.1
-torch>=2.4.1, !=2.7.0; platform_system == "Windows"
+torch>=2.4.1; platform_system != "Windows"
+torch>=2.4.1,!=2.7.0; platform_system == "Windows"

Same behaviour, clearer intent.
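As an aside on how pip interprets these environment markers: they are evaluated per machine at install time. A small, self-contained sketch (using the third-party `packaging` library, which is not part of this PR) to check which requirement line would apply locally:

```python
from packaging.markers import Marker

# Markers mirroring the proposed requirements.txt lines.
non_windows = Marker('platform_system != "Windows"')
windows_only = Marker('platform_system == "Windows"')

# Each evaluates against the current interpreter's environment.
print("torch>=2.4.1 line applies:", non_windows.evaluate())
print('torch>=2.4.1,!=2.7.0 line applies:', windows_only.evaluate())
```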
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- .github/workflows/pythonapp-min.yml (1 hunks)
- docs/requirements.txt (1 hunks)
- monai/networks/layers/spatial_transforms.py (3 hunks)
- monai/transforms/spatial/array.py (2 hunks)
- pyproject.toml (1 hunks)
- requirements-dev.txt (1 hunks)
- requirements.txt (1 hunks)
- setup.cfg (1 hunks)
- tests/integration/test_pad_collation.py (3 hunks)
- tests/lazy_transforms_utils.py (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
tests/integration/test_pad_collation.py (1)
monai/utils/misc.py (1): `set_determinism` (335-393)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: min-dep-pytorch (2.5.1)
- GitHub Check: min-dep-pytorch (2.4.1)
- GitHub Check: min-dep-pytorch (latest)
- GitHub Check: min-dep-os (windows-latest)
- GitHub Check: min-dep-pytorch (2.6.0)
- GitHub Check: min-dep-os (macOS-latest)
- GitHub Check: min-dep-py3 (3.9)
- GitHub Check: min-dep-os (ubuntu-latest)
- GitHub Check: min-dep-py3 (3.10)
- GitHub Check: min-dep-py3 (3.12)
- GitHub Check: min-dep-py3 (3.11)
- GitHub Check: flake8-py3 (pytype)
- GitHub Check: quick-py3 (macOS-latest)
- GitHub Check: build-docs
- GitHub Check: packaging
- GitHub Check: flake8-py3 (codeformat)
- GitHub Check: quick-py3 (ubuntu-latest)
- GitHub Check: flake8-py3 (mypy)
- GitHub Check: quick-py3 (windows-latest)
🔇 Additional comments (12)
tests/integration/test_pad_collation.py (2)
14-14: LGTM: Import changes are well-justified.

The new imports (`os`, `redirect_stderr`, `first`) are all properly used in the test method to suppress error output during expected failures.

Also applies to: 17-17, 40-40
102-104: LGTM: Improved test clarity with proper error suppression.

The change to use `first(loader_fail)` with stderr suppression is a good improvement. It makes the test more explicit while preventing PyTorch error messages from cluttering the test output during expected failures.
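To illustrate the pattern described above (a contrived sketch, not the actual test code; the mismatched-shape dataset is invented for demonstration):

```python
import os
from contextlib import redirect_stderr

import torch
from torch.utils.data import DataLoader

from monai.utils import first

# A contrived dataset whose items have mismatched shapes, so default collation raises.
loader_fail = DataLoader([torch.zeros(3, 4), torch.zeros(3, 5)], batch_size=2)

# Pull one batch while silencing the error text written to stderr.
with open(os.devnull, "w") as devnull, redirect_stderr(devnull):
    try:
        first(loader_fail)
    except RuntimeError:
        pass  # the collation failure is the expected outcome here
```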
tests/lazy_transforms_utils.py (2)

14-14: LGTM: Import is properly used for platform detection.

The `sys` import is correctly used for Windows platform detection in the tolerance adjustment logic.
67-71: LGTM: Reasonable workaround for PyTorch Windows compatibility.

The platform-specific tolerance adjustment is a sensible temporary fix for the known PyTorch numerical precision issues on Windows with float64 CPU tensors. The increased tolerances (1e-4) are still reasonable for testing purposes.

Please monitor the referenced GitHub issue and remove this workaround once PyTorch fixes the underlying `grid_sample` issue.
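For context, the kind of platform-specific tolerance adjustment under review can be sketched roughly as follows; the baseline tolerance values are placeholders, and only the Windows-specific loosening to 1e-4 comes from the review itself:

```python
import sys

# Baseline comparison tolerances (placeholder values for illustration).
rtol, atol = 1e-5, 1e-7

# Loosen tolerances on Windows, where float64 grid_sample on CPU currently
# loses precision; remove once the upstream PyTorch issue is fixed.
if sys.platform == "win32":
    rtol = atol = 1e-4
```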
monai/transforms/spatial/array.py (2)

17-17: LGTM - Sys import added for platform detection

The sys import is correctly added to support the Windows-specific workaround implementation.
2111-2134: Well-implemented workaround for Windows PyTorch grid_sample bug

The implementation correctly addresses the known PyTorch issue with float64 tensors on Windows CPU by:
- Targeted detection: Only applies the workaround when all conditions are met (Windows platform, float64 dtype, CPU device)
- Minimal scope: Conversions are applied only to the specific tensors passed to `grid_sample`
- Proper restoration: Converts the output back to float64 to maintain expected data types
- Clean implementation: The workaround is clearly documented and doesn't affect other code paths
The logic follows the pattern described in the PR objectives and should resolve the Windows test failures without impacting other platforms or data types.
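A minimal sketch of the workaround pattern described above; the helper name and exact structure are hypothetical, not MONAI's actual implementation:

```python
import sys

import torch
import torch.nn.functional as F


def _grid_sample_float64_safe(img: torch.Tensor, grid: torch.Tensor, **kwargs) -> torch.Tensor:
    """Call grid_sample, downcasting float64 CPU tensors on Windows to float32
    to sidestep the PyTorch 2.7.0 precision issue, then restoring the dtype."""
    needs_cast = (
        sys.platform == "win32"
        and img.device.type == "cpu"
        and img.dtype == torch.float64
    )
    if needs_cast:
        out = F.grid_sample(img.to(torch.float32), grid.to(torch.float32), **kwargs)
        return out.to(torch.float64)  # keep the caller-visible dtype unchanged
    return F.grid_sample(img, grid, **kwargs)
```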
monai/networks/layers/spatial_transforms.py (3)
14-14: LGTM: Import added for platform detection.

The `sys` import is correctly placed and necessary for the Windows-specific workaround implementation.
530-535: LGTM: Well-implemented platform-specific workaround.

The workaround logic correctly identifies the specific conditions (Windows + float64 + CPU) that trigger the PyTorch grid_sample issue. The inline comment provides good context with a reference to the related PR.
591-603: LGTM: Proper workaround implementation with precision considerations.

The implementation correctly handles the dtype conversion to work around the PyTorch grid_sample issue on Windows. The conversion back to float64 maintains API consistency, though there may be minor precision implications as noted in the PR discussion.

This is a temporary workaround for PyTorch issue #152385. Consider monitoring for the PyTorch 2.7.1 release and removing the workaround when the upstream issue is fixed.
requirements-dev.txt (1)
23-23: LGTM: Version constraint addresses build failures.

The upper bound constraint on pytype (<=2024.4.11) appropriately addresses the build failures mentioned in the PR discussion. The platform-specific exclusion for Windows is also correct.
docs/requirements.txt (1)
2-2: LGTM: Consistent torch version constraint relaxation.

The removal of the upper bound constraint aligns with the PR's objective to enable PyTorch 2.7 compatibility for documentation builds.
setup.cfg (1)
45-45: LGTM: Consistent torch version constraint relaxation.

The removal of the upper bound constraint (`<2.7.0`) enables PyTorch 2.7 compatibility and is consistent with similar changes across other configuration files.
Hi @KumoLiu, this should pass the tests now, so we can run blossom and merge. Thanks!
Description
This updates MONAI to be compatible with PyTorch 2.7.1. There appear to be few code changes required for this release, so hopefully this is mostly a matter of updating versions. The versions tested in the actions are now pinned to explicit version strings rather than including "latest" as a PyTorch version; this avoids newly released breaking versions rendering PRs unmergeable.
Types of changes
- `./runtests.sh -f -u --net --coverage`
- `./runtests.sh --quick --unittests --disttests`
- `make html` command in the `docs/` folder.
Summary by CodeRabbit
Bug Fixes
Chores