Commit bee0fbe

Fix test script: to disable validation during training, set val_begin to a huge value instead of val_interval. EpochBasedTrainLoop still runs validation at the end of the last epoch regardless of val_interval, but it skips validation entirely while self._epoch < self.val_begin.
1 parent b49853d commit bee0fbe
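
For context, below is a minimal sketch of the validation check the commit message relies on. It is an illustration, not a copy of MMEngine's EpochBasedTrainLoop source: the class name EpochBasedTrainLoopSketch and its run/should_validate helpers are hypothetical, while the attribute names (_epoch, _max_epochs, val_begin, val_interval) follow the commit message.

```python
# Sketch (assumed structure, not the actual MMEngine implementation) of how an
# epoch-based training loop decides whether to run validation after an epoch.
class EpochBasedTrainLoopSketch:

    def __init__(self, max_epochs, val_begin=1, val_interval=1):
        self._epoch = 0
        self._max_epochs = max_epochs
        self.val_begin = val_begin
        self.val_interval = val_interval

    def should_validate(self):
        # Validation runs once the current epoch has reached `val_begin` AND
        # either the interval is hit OR this is the final epoch. A huge
        # `val_interval` therefore still triggers validation at the last
        # epoch, while a huge `val_begin` never lets the first check pass.
        return (self._epoch >= self.val_begin
                and (self._epoch % self.val_interval == 0
                     or self._epoch == self._max_epochs))

    def run(self, train_one_epoch, run_validation):
        while self._epoch < self._max_epochs:
            train_one_epoch()
            self._epoch += 1
            if self.should_validate():
                run_validation()


if __name__ == '__main__':
    # With val_interval=1e6 validation still fires at the last epoch;
    # with val_begin=1e6 it never fires.
    for kwargs in (dict(val_interval=int(1e6)), dict(val_begin=int(1e6))):
        loop = EpochBasedTrainLoopSketch(max_epochs=3, **kwargs)
        validated_at = []
        loop.run(lambda: None, lambda: validated_at.append(loop._epoch))
        print(kwargs, '-> validation at epochs', validated_at)
```
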


tests/test_hooks/test_empty_cache_hook.py

1 addition, 1 deletion

@@ -15,7 +15,7 @@ def test_with_runner(self):
         with patch('torch.cuda.empty_cache') as mock_empty_cache:
             cfg = self.epoch_based_cfg
             cfg.custom_hooks = [dict(type='EmptyCacheHook')]
-            cfg.train_cfg.val_interval = 1e6  # disable validation during training # noqa: E501
+            cfg.train_cfg.val_begin = 1e6  # disable validation during training # noqa: E501
             runner = self.build_runner(cfg)
 
             runner.train()
