
Conversation

MGAMZ (Contributor) commented Oct 26, 2025

This is a sub-PR of #1665

Brief

According to PyTorch:

torch.cuda.amp.GradScaler(args...) is deprecated. Please use torch.amp.GradScaler("cuda", args...) instead.

This includes two related replacements, sketched below:

  1. amp_optimizer_wrapper
  2. test_optimizer_wrapper
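
For reference, a minimal sketch of the before/after this PR applies (the partial binding mirrors the actual change; availability of torch.amp.GradScaler in the installed PyTorch is assumed):

from functools import partial

# Deprecated form:
#   from torch.cuda.amp import GradScaler
#   scaler = GradScaler()

# New form: torch.amp.GradScaler takes the device as its first argument.
from torch.amp import GradScaler as amp_GradScaler

# Bind device='cuda' once so existing call sites keep working unchanged.
GradScaler = partial(amp_GradScaler, device='cuda')

scaler = GradScaler()  # equivalent to torch.amp.GradScaler('cuda')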

PyTest Result

pytest tests/test_optim/test_optimizer/test_optimizer_wrapper.py

========================================================== test session starts ===========================================================
platform linux -- Python 3.13.9, pytest-8.4.1, pluggy-1.6.0
rootdir: /home/mgam/mgam_repos/mmengine
configfile: pytest.ini
plugins: anyio-4.9.0, hydra-core-1.3.2
collected 37 items                                                                                                                       

tests/test_optim/test_optimizer/test_optimizer_wrapper.py sssssssssssssssssss..................                                    [100%]

============================================================ warnings summary ============================================================
mmengine/utils/misc.py:477
  /home/mgam/mgam_repos/mmengine/mmengine/utils/misc.py:477: DeprecationWarning: 'maxsplit' is passed as positional argument
    summary_and_body = re.split(pattern, docstring, 1)

../../miniforge3/envs/pt/lib/python3.13/site-packages/pydantic/typing.py:400
  /home/mgam/miniforge3/envs/pt/lib/python3.13/site-packages/pydantic/typing.py:400: DeprecationWarning: Failing to pass a value to the 'type_params' parameter of 'typing._eval_type' is deprecated, as it leads to incorrect behaviour when calling typing._eval_type on a stringified annotation that references a PEP 695 type parameter. It will be disallowed in Python 3.15.
    value = _eval_type(value, base_globals, None)

tests/test_optim/test_optimizer/test_optimizer_wrapper.py::TestAmpOptimWrapper::test_optim_context_4
  /home/mgam/mgam_repos/mmengine/mmengine/runner/amp.py:119: DeprecationWarning: torch.get_autocast_gpu_dtype() is deprecated. Please use torch.get_autocast_dtype('cuda') instead. (Triggered internally at /pytorch/torch/csrc/autograd/init.cpp:887.)
    dtype = torch.get_autocast_gpu_dtype()

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
========================================= 18 passed, 19 skipped, 3 warnings in 86.75s (0:01:26) ==========================================

@MGAMZ MGAMZ marked this pull request as ready for review October 26, 2025 08:29
Copilot AI review requested due to automatic review settings October 26, 2025 08:29

Copilot AI left a comment

Pull Request Overview

This PR updates the codebase to use the new torch.amp.GradScaler API instead of the deprecated torch.cuda.amp.GradScaler API, following PyTorch's deprecation guidance.

Key Changes:

  • Replaced deprecated torch.cuda.amp.GradScaler imports with torch.amp.GradScaler using a partial function to set the device to 'cuda'
  • Applied this change to both production code (amp_optimizer_wrapper.py) and test code (test_optimizer_wrapper.py)

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

File | Description
mmengine/optim/optimizer/amp_optimizer_wrapper.py | Updated GradScaler import from deprecated torch.cuda.amp to torch.amp with device parameter
tests/test_optim/test_optimizer/test_optimizer_wrapper.py | Updated GradScaler import in tests to use the new torch.amp API

Comment on lines +3 to 7
from functools import partial

import unittest
from unittest import TestCase
from unittest.mock import MagicMock

Copilot AI Oct 26, 2025

The blank line after from functools import partial creates inconsistent import grouping. Move the functools import to be with other standard library imports (os, unittest) before the blank line that separates standard library imports from third-party imports.

Suggested change
-from functools import partial
-import unittest
-from unittest import TestCase
-from unittest.mock import MagicMock
+import unittest
+from unittest import TestCase
+from unittest.mock import MagicMock
+from functools import partial

 from parameterized import parameterized
-from torch.cuda.amp import GradScaler
+from torch.amp import GradScaler as amp_GradScaler
+GradScaler = partial(amp_GradScaler, device='cuda')

Copilot AI Oct 26, 2025

[nitpick] Creating a module-level variable GradScaler through partial assignment makes the code less maintainable and harder to understand. Consider either: (1) using amp_GradScaler('cuda', ...) directly at call sites, or (2) creating a proper wrapper function with a docstring explaining the device binding.

Suggested change
-GradScaler = partial(amp_GradScaler, device='cuda')
+def get_cuda_grad_scaler(*args, **kwargs):
+    """Return a torch.amp.GradScaler instance bound to the 'cuda' device.
+
+    Args:
+        *args: Positional arguments for torch.amp.GradScaler.
+        **kwargs: Keyword arguments for torch.amp.GradScaler.
+
+    Returns:
+        amp_GradScaler: An instance of GradScaler with device='cuda'.
+    """
+    return amp_GradScaler(*args, device='cuda', **kwargs)

Comment on lines +22 to 24
GradScaler = partial(amp_GradScaler, device='cuda')



Copilot AI Oct 26, 2025

[nitpick] Creating a module-level variable GradScaler through partial assignment makes the code less maintainable and harder to understand. Consider either: (1) using amp_GradScaler('cuda', ...) directly at call sites, or (2) creating a proper wrapper function with a docstring explaining the device binding.

Suggested change
-GradScaler = partial(amp_GradScaler, device='cuda')
+def get_grad_scaler(*args, **kwargs):
+    """Create a torch.amp.GradScaler instance bound to device='cuda'.
+
+    Args:
+        *args: Positional arguments passed to torch.amp.GradScaler.
+        **kwargs: Keyword arguments passed to torch.amp.GradScaler.
+
+    Returns:
+        amp_GradScaler: An instance of torch.amp.GradScaler with device='cuda'.
+    """
+    return amp_GradScaler(*args, device='cuda', **kwargs)
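
Either form keeps downstream call sites unchanged, for example (init_scale is shown only as an illustrative keyword argument; get_grad_scaler is the helper name from the suggestion above):

scaler_a = GradScaler(init_scale=512.)       # partial-bound variant used in this PR
scaler_b = get_grad_scaler(init_scale=512.)  # wrapper variant suggested by Copilot
assert type(scaler_a) is type(scaler_b)      # both are instances of torch.amp.GradScaler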

MGAMZ (Contributor, Author) commented Oct 26, 2025

@HAOCHENYE This one is ready to be reviewed.

The Copilot review suggested a different implementation, which also LGTM; you may choose either one to merge.

HAOCHENYE (Collaborator) commented

This workaround LGTM; I'll merge it after fixing the lint.
