Added L2-norm atom #138
Conversation
Thanks for making this PR @alexahuxn. Could you get your tests to pass in CI? Then I will take a look at the code.
Thank you. I fixed the issue, and the CI now passes.
Looks good for the most part.
The L2Norm atom needs to be added to the docs. It also needs to be added to the __init__.py file in rlaopt.atoms (see the sketch after this comment).
Also please merge the contents of the main branch into your branch!
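A minimal sketch of what that export might look like (the exact contents of rlaopt/atoms/__init__.py are an assumption here, not taken from the repo):

```python
# rlaopt/atoms/__init__.py -- hypothetical sketch of the added export
from rlaopt.atoms.l2_norm import L2Norm

# Append the new atom to the package's public API alongside the existing atoms.
__all__ = ["L2Norm"]
```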
rlaopt/atoms/l2_norm.py (Outdated)
```python
lam = scaling * prox_scaling

def prox_l2(x: torch.Tensor) -> torch.Tensor:
    norm = torch.linalg.norm(x, ord=2)
```
Checking for exact equality with 0 could be unsafe. Could you tweak your implementation to use the compact form at the bottom of pg. 143 of this book: https://www.jacobaguirre.com/First%20Order%20Methods%20in%20Optimization.pdf?
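For concreteness, here is a minimal sketch of that compact form, prox(x) = (1 - lam / max(||x||_2, lam)) * x, written as a standalone helper with an assumed scalar lam > 0 (not the PR's actual code):

```python
import torch


def prox_l2_compact(x: torch.Tensor, lam: float) -> torch.Tensor:
    """Prox of lam * ||.||_2 via the compact form from pg. 143:
    prox(x) = (1 - lam / max(||x||_2, lam)) * x.
    Hypothetical helper; assumes a scalar lam > 0.
    """
    norm = torch.linalg.norm(x, ord=2)
    # clamp keeps the denominator >= lam, so no equality check against 0
    # is needed: any x with ||x||_2 <= lam gets a factor of exactly 0.
    return (1.0 - lam / torch.clamp(norm, min=lam)) * x
```

Because the clamped denominator is always at least lam, the division is safe even at x = 0, which is exactly why the compact form avoids the fragile equality check.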
pratikrathore8 left a comment
Looks good to me!


