
Conversation

@juaristi22 (Collaborator) commented Jul 30, 2025

Fix #77
Fix #76

This enables tuning the L0 parameters to achieve a combination of loss reduction, estimates falling within 10% of their targets, and sparsity (the proportion of weights that are zero). The objective function combines the three, and the "objectives_contribution" parameter controls how much each objective is prioritized, making the tuning flexible for different needs.
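For illustration, here is a minimal sketch of what such a weighted, Optuna-driven objective could look like. The hyperparameter names, search ranges, and the run_calibration stub are hypothetical stand-ins, not microcalibrate's actual API:

```python
import optuna


def run_calibration(l0_lambda, learning_rate):
    # Deterministic stand-in for the real calibration run; the real
    # metrics would come from calibrating weights with these settings.
    loss = 10 * l0_lambda + abs(learning_rate - 0.01)
    within_10pct = 1.0 / (1.0 + loss)     # share of estimates within 10% of targets
    sparsity = min(1.0, 100 * l0_lambda)  # proportion of weights driven to zero
    return loss, within_10pct, sparsity


def objective(trial, objectives_contribution=(1.0, 1.0, 1.0)):
    # Hyperparameter names and ranges are illustrative only.
    l0_lambda = trial.suggest_float("l0_lambda", 1e-6, 1e-1, log=True)
    learning_rate = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)

    loss, within_10pct, sparsity = run_calibration(l0_lambda, learning_rate)

    # Lower is better: loss is minimized, while accuracy and sparsity
    # are maximized, so they enter with a negative sign.
    w_loss, w_within, w_sparse = objectives_contribution
    return w_loss * loss - w_within * within_10pct - w_sparse * sparsity


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```

Because all three metrics are folded into one scalar, the weights in objectives_contribution directly trade the objectives off against each other.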

It also supports target holdout. n_holdout_sets can be set to 1 with a holdout_fraction of 0 if we don't want to hold out any targets.
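As a sketch of how these two holdout parameters interact, here is a hypothetical helper (split_holdout_sets is illustrative, not the library's actual function):

```python
import numpy as np


def split_holdout_sets(target_names, n_holdout_sets=3, holdout_fraction=0.2, seed=0):
    # With n_holdout_sets=1 and holdout_fraction=0 this returns one
    # empty set, i.e. every target is used for tuning.
    rng = np.random.default_rng(seed)
    n_holdout = int(round(holdout_fraction * len(target_names)))
    return [sorted(rng.choice(target_names, size=n_holdout, replace=False))
            for _ in range(n_holdout_sets)]


targets = ["age_20_29", "age_30_39", "income_total", "employment_rate"]
print(split_holdout_sets(targets, n_holdout_sets=2, holdout_fraction=0.25))
```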

@baogorek (Collaborator) left a comment

Hi Maria, I just ran the test and it ran very smoothly, and I saw the sparse output. I see that you were able to leverage an open-source hyperparameter optimizer (Optuna), which was smart. One thing I would like to see is a bit of information about the holdouts once tuning is done. There should be an error for each of the holdout sets and a way to see which targets were held out. I'm probably not going to review every line of code here, but if I could see that, I would feel very good about putting my checkmark here.

@baogorek (Collaborator) left a comment

There's a lot in here, and I don't want to block useful functionality from going in. I am worried about data leakage from the training folds into the holdout folds, which is virtually guaranteed to happen given the way we define targets. All I would ask is that some warning be added so the user can see it, and so we're reminded to make it more robust in the future.
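For concreteness, a minimal sketch of the kind of user-visible warning being requested, using Python's standard warnings module; the function name and trigger condition are assumptions:

```python
import warnings


def warn_possible_target_leakage(n_holdout_sets, holdout_fraction):
    # Hypothetical placement: emitted whenever any targets are held out.
    if n_holdout_sets > 0 and holdout_fraction > 0:
        warnings.warn(
            "Holdout targets may share information with training targets "
            "(e.g. nested or overlapping aggregates), so holdout error can "
            "understate true out-of-sample error.",
            UserWarning,
        )


warn_possible_target_leakage(n_holdout_sets=3, holdout_fraction=0.2)
```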

@juaristi22 juaristi22 merged commit 356b855 into main Aug 22, 2025
6 checks passed


Development

Successfully merging this pull request may close these issues:

Implement optuna l0 hyperparameter tuning
Create target holdout validation function
