Hi, thanks for your great work!
I have a question about the training configuration:
Why is no learning rate schedule actually applied in the current implementation?
I found that the code defines a learning rate scheduler:
```python
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
```
However, it seems the scheduler is never actually used during training (e.g., there is no `scheduler.step()` call inside the training loop). As a result, the learning rate remains constant throughout training.
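For reference, here is a minimal sketch of where I would expect the `scheduler.step()` call to go. The model, data, and loop structure below are placeholders I made up for illustration, not the repo's actual code:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Placeholder model and data; the repo's actual training loop will differ.
model = nn.Linear(10, 1)
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(30):
    for inputs, targets in [(torch.randn(4, 10), torch.randn(4, 1))]:
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        optimizer.step()
    # Without this per-epoch call, the LR stays at its initial value forever.
    scheduler.step()
    print(f"epoch {epoch}: lr = {scheduler.get_last_lr()[0]}")
```

With the `scheduler.step()` call in place, the learning rate is halved every 10 epochs (0.1 → 0.05 → 0.025), which is what the `StepLR(step_size=10, gamma=0.5)` configuration appears to intend.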
Any clarification would be greatly appreciated. Thanks!