Enable L0 hyperparameter tuning #78
Conversation
baogorek left a comment
Hi Maria, I just ran the test and it ran very smoothly, and I saw the sparse output. I see that you were able to leverage an open-source parameter optimizer (Optuna), which was smart. One thing I would like to see is a bit of information about the holdouts once tuning is done: there should be an error for each of the holdout sets and a way to see which targets were held out. I'm probably not going to review every line of code here, but if I could see that, I would feel very good about putting my checkmark here.
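To make the ask concrete, something shaped like this would be enough. A minimal sketch; the names (`HoldoutResult`, `held_out_targets`, `holdout_error`, `print_holdout_report`) are illustrative stand-ins, not the PR's actual API:

```python
# Hypothetical shape for a post-tuning holdout report; all names here are
# illustrative, not the actual API in this PR.
from dataclasses import dataclass

@dataclass
class HoldoutResult:
    set_index: int                # which holdout set this row describes
    held_out_targets: list[str]   # targets withheld from training in this set
    holdout_error: float          # error measured on just those targets

def print_holdout_report(results: list[HoldoutResult]) -> None:
    """Print the error for each holdout set and which targets it held out."""
    for r in results:
        print(f"holdout set {r.set_index}: error={r.holdout_error:.4f}, "
              f"targets={r.held_out_targets}")
```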
baogorek left a comment
There's a lot in here, and I don't want to block useful functionality from going in. I am worried about data leakage from the training folds into the holdout folds, which is virtually guaranteed to happen given the way we define targets. All I would ask is that there be some warning added so the user can see it and we will be reminded to make it more robust in the future.
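A minimal sketch of what that warning could look like, assuming the tuning entry point is a function named `tune_l0` (the name and signature are illustrative, not the PR's code):

```python
import warnings

def tune_l0(*args, n_holdout_sets: int = 1, holdout_fraction: float = 0.0, **kwargs):
    # Warn whenever targets are actually held out: given how targets are
    # defined, training folds can leak information into the holdout folds,
    # so holdout errors should be read as optimistic.
    if holdout_fraction > 0:
        warnings.warn(
            "Holdout evaluation may be affected by leakage from training "
            "folds into holdout folds; treat holdout errors as optimistic.",
            UserWarning,
        )
    ...  # tuning logic
```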
Fix #77
Fix #76
This enables tuning the L0 parameters to achieve a combination of loss reduction, estimates landing within 10% of their targets, and sparsity (the proportion of weights that are 0). The objective function combines the three, and the `objectives_contribution` parameter controls how heavily each objective is prioritized, making it flexible to tune for different needs.
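As a rough illustration of how the three pieces can be combined into a single Optuna objective. This is a sketch under assumptions: the metric names, the `fit_l0` helper, and the tuned `l0_lambda` parameter are all illustrative, not this PR's code.

```python
import optuna

def make_objective(fit_l0, objectives_contribution=(1/3, 1/3, 1/3)):
    """Build a weighted single-scalar objective from the three metrics."""
    w_loss, w_close, w_sparse = objectives_contribution

    def objective(trial: optuna.Trial) -> float:
        # The L0 regularization strength is the hyperparameter under search.
        lam = trial.suggest_float("l0_lambda", 1e-6, 1.0, log=True)
        metrics = fit_l0(lam)  # assumed to return the three raw metrics
        return (w_loss * metrics["loss_reduction"]         # higher is better
                + w_close * metrics["share_within_10pct"]  # estimates near targets
                + w_sparse * metrics["sparsity"])          # fraction of zero weights

    return objective

# study = optuna.create_study(direction="maximize")
# study.optimize(make_objective(my_fit_fn), n_trials=50)
```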
It also supports target holdout.
`n_holdout_sets` can be set to 1 with a `holdout_fraction` of 0 if we don't want to hold out targets.
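For example (again using the illustrative `tune_l0` name from above, not a verified signature):

```python
tune_l0(n_holdout_sets=3, holdout_fraction=0.2)  # tune with 3 holdout sets
tune_l0(n_holdout_sets=1, holdout_fraction=0.0)  # effectively disables holdout
```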