Official PyTorch implementation of
“One-Step Generalization Ratio Guided Optimization for Domain Generalization” (ICML 2025, Oral)
GENIE is a novel optimizer that leverages the One-Step Generalization Ratio (OSGR) to dynamically balance per-parameter contributions, mitigating source-domain overfitting and improving generalization to unseen target domains.
- 📄 Paper (OpenReview): https://openreview.net/forum?id=Tv2JDGw920
- 📄 PMLR proceedings: https://proceedings.mlr.press/v267/cho25c.html
- 🎥 ICML 2025 talk (SlidesLive): https://slideslive.com/39041578
Domain Generalization (DG) aims to train models on source domains that generalize to unseen target domains. Standard optimizers often:
- Overfit to domain-specific / spurious correlations
- Let a small subset of parameters dominate the update
- Fail to explicitly control gradient alignment and parameter-wise contribution to generalization
GENIE (Generalization-ENhancing Iterative Equalizer) addresses this by:
- Introducing the One-Step Generalization Ratio (OSGR), a per-parameter metric that measures:
  - Contribution to loss reduction
  - Alignment of gradients with generalization
- Applying a preconditioning factor that equalizes OSGR across parameters
- Preventing over-confident parameters from dominating optimization
- Promoting domain-invariant feature learning while retaining an SGD-level convergence rate
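As a toy illustration of the equalization idea (a minimal sketch under simplifying assumptions, not the paper's actual OSGR formula): under SGD with step size `lr`, parameter group *i* reduces the loss by roughly `lr * ||g_i||^2` to first order, so rescaling each gradient by the mean squared norm over its own squared norm makes every group contribute equally to the one-step loss decrease.

```python
def equalizing_scales(grad_sq_norms, eps=1e-12):
    """Per-group preconditioning factors that equalize each group's
    first-order contribution to the one-step loss decrease.

    Illustrative stand-in for OSGR equalization, not the paper's formula.
    grad_sq_norms: list of ||g_i||^2 values, one per parameter group.
    """
    mean_contrib = sum(grad_sq_norms) / len(grad_sq_norms)
    # Scaling g_i by mean / ||g_i||^2 turns every group's first-order
    # loss reduction lr * s_i * ||g_i||^2 into the same value lr * mean.
    return [mean_contrib / (g + eps) for g in grad_sq_norms]

# A dominant group (9.0) is damped; weak groups (1.0, 2.0) are boosted,
# so all three end up with equal one-step contributions.
scales = equalizing_scales([9.0, 1.0, 2.0])
```

The design choice this sketches: rather than shrinking the learning rate globally, the preconditioner redistributes the update budget across parameters so no single subset dominates the step.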
GENIE consistently improves DG performance and can be plugged into existing DG pipelines as a drop-in optimizer.
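The drop-in pattern can be sketched with a framework-agnostic toy optimizer that mirrors the `zero_grad()` / `step()` interface of `torch.optim` (the class and its update rule below are illustrative assumptions, not the repository's actual `GENIE` implementation):

```python
class ToyGenie:
    """Toy SGD-style optimizer mirroring the torch.optim interface.

    Before each update, gradients are rescaled so that every parameter
    contributes equally to the one-step loss decrease -- an illustrative
    stand-in for GENIE's OSGR-equalizing preconditioning.
    """

    def __init__(self, params, lr=0.1, eps=1e-12):
        self.params = params                  # toy scalar parameters
        self.grads = [0.0] * len(params)
        self.lr, self.eps = lr, eps

    def zero_grad(self):
        self.grads = [0.0] * len(self.params)

    def step(self):
        # Under SGD, parameter i cuts the loss by about lr * g_i**2;
        # the factor mean(g**2) / g_i**2 equalizes that contribution.
        sq = [g * g for g in self.grads]
        mean_sq = sum(sq) / len(sq)
        for i, g in enumerate(self.grads):
            self.params[i] -= self.lr * (mean_sq / (sq[i] + self.eps)) * g

# One step on the quadratic loss L(w) = 0.5 * sum(w_i**2), whose
# gradient is w itself:
opt = ToyGenie(params=[3.0, -0.5])
opt.zero_grad()
opt.grads = list(opt.params)
opt.step()
```

In a real PyTorch pipeline the optimizer would be constructed from `model.parameters()` and called exactly like SGD or Adam inside the existing training loop.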
- ✅ New optimizer based on One-Step Generalization Ratio (OSGR)
- ✅ Preconditioning scheme that balances convergence & gradient alignment
- ✅ Plug-and-play: drop-in replacement for SGD/Adam in PyTorch training loops
- ✅ Compatible with standard DG benchmarks (e.g., PACS, VLCS, OfficeHome, TerraIncognita, DomainNet)
- ✅ Works with both DG-specific algorithms and single-source DG baselines
Clone this repository:

```bash
git clone https://github.com/00ssum/GENIE.git
cd GENIE
```