ENH Add support for GLasso and Adaptive (reweighted) GLasso #280
Open: Perceptronium wants to merge 31 commits into scikit-learn-contrib:main from Perceptronium:graphical_lasso (base: main).
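The PR exposes two estimators, GraphicalLasso and AdaptiveGraphicalLasso, under skglm.covariance. Below is a minimal usage sketch, assuming the constructor arguments, the mode='precomputed' fit option, and the make_dummy_covariance_data helper exactly as they appear in the example file included in this diff; the merged API may differ.

from skglm.covariance import GraphicalLasso, AdaptiveGraphicalLasso
from skglm.penalties.separable import LogSumPenalty
from skglm.utils.data import make_dummy_covariance_data

# Synthetic empirical covariance S and ground-truth precision matrix Theta_true
S, _, Theta_true, alpha_max = make_dummy_covariance_data(1000, 50)

# Standard Graphical Lasso with an L1 penalty, fit on the precomputed covariance
glasso = GraphicalLasso(algo="primal", warm_start=True, tol=1e-4)
glasso.alpha = 0.1 * alpha_max
glasso.fit(S, mode='precomputed')
print(glasso.precision_.shape)  # (50, 50) sparse precision estimate

# Adaptive (reweighted) variant with a non-convex log-sum penalty
adaptive = AdaptiveGraphicalLasso(warm_start=True,
                                  penalty=LogSumPenalty(alpha=1.0, eps=1e-10),
                                  n_reweights=5, tol=1e-4)
adaptive.alpha = 0.1 * alpha_max
adaptive.fit(S, mode='precomputed')

The example file below follows the same pattern, sweeping model.alpha over a geometric grid of regularization values and recording NMSE and F1 along the path.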
Commits (31):
c199221  add support for graphical lasso and adaptive (reweighted) graphical l… (Perceptronium)
5696d40  Update estimators to use barebones solver, make it at least as fast a… (Perceptronium)
3e3df79  add Reweighted GLasso regularization path example (Perceptronium)
c9da9d4  Merge branch 'main' of https://github.com/scikit-learn-contrib/skglm … (Perceptronium)
ca6960f  fix issues in glasso reg path example (Perceptronium)
4538002  fix glasso solver issues, move estimator to own file, create dedicate… (Perceptronium)
a4ea3fd  remove snakecase function names in test functions (Perceptronium)
ff9c2fe  Merge branch 'graphical_lasso' of https://github.com/Perceptronium/sk… (floriankozikowski)
3148357  adjust weight updates, still need to test (floriankozikowski)
082c00d  trying out different update methods, no success so far (floriankozikowski)
7fc0f21  Merge branch 'main' of github.com:scikit-learn-contrib/skglm into gra… (mathurinm)
a4e192f  empty (floriankozikowski)
33ca52c  add explicit handling of zeros in penalties.derivative, works now, to… (floriankozikowski)
fc70009  leave original name of old strategy based version, so tests dont fail (floriankozikowski)
a927e50  fix minor name dependency (floriankozikowski)
b62990d  ci trigger (mathurinm)
02f0fb5  Merge remote-tracking branch 'upstream/main' into graphical_lasso (floriankozikowski)
264faed  Merge branch 'graphical_lasso' of https://github.com/Perceptronium/sk… (floriankozikowski)
16dbe4e  fix linters (floriankozikowski)
c50f5d3  fix pytests (floriankozikowski)
c9d1545  clean up & fix adaptive method, update example, unit test accordingly (floriankozikowski)
3f833cd  edit what's new, and api (floriankozikowski)
ad857f2  make sklearn compatible, add docstrings, improve init. of cov. matrix… (floriankozikowski)
653a431  fix doc example, make_dummy_covariance_data (floriankozikowski)
b79859e  adjust author block (floriankozikowski)
08c1896  verify alternatives to barebones solver, allow gram as input for Gram… (floriankozikowski)
d7824c1  option to disable subdiff computation inn cd_gram (mathurinm)
ce0938f  remove debugging, clean up, add math to doc (floriankozikowski)
f6273bd  Merge remote-tracking branch 'upstream/main' into graphical_lasso to … (floriankozikowski)
183adfa  edit whats new after merge conflict (floriankozikowski)
e4333c7  revert gram file back to original as no changes needed in there for t… (floriankozikowski)
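Several of the commits above ("adjust weight updates", "add explicit handling of zeros in penalties.derivative") touch the reweighting step of the adaptive estimator. For background, the following is a conceptual sketch of the Candès et al. (2007) reweighting scheme cited in the example's docstring. It is an illustration only, not the skglm implementation; weighted_graphical_lasso is a hypothetical placeholder for the inner weighted L1 solve.

import numpy as np

def log_sum_weights(Theta, eps=1e-10):
    # Derivative of the log-sum penalty log(|x| + eps): entries that are
    # currently small receive large weights (heavier shrinkage), while large
    # entries receive small weights (less bias). The explicit eps keeps the
    # weight finite when an entry is exactly zero.
    return 1.0 / (np.abs(Theta) + eps)

# Outer reweighting loop (illustration only, not the skglm code):
#   Weights = np.ones_like(S)
#   for _ in range(n_reweights):
#       Theta = weighted_graphical_lasso(S, alpha * Weights)  # hypothetical inner solve
#       Weights = log_sum_weights(Theta)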
New example file (169 added lines):

"""
=======================================================================
Regularization paths for the Graphical Lasso and its Adaptive variation
=======================================================================
This example demonstrates how non-convex penalties in the Adaptive Graphical Lasso
can achieve superior sparsity recovery compared to the standard L1 penalty.

The Adaptive Graphical Lasso uses iterative reweighting to approximate non-convex
penalties, following Candès et al. (2007). Non-convex penalties often produce
better sparsity patterns by more aggressively shrinking small coefficients while
preserving large ones.

We compare three approaches:

- **L1**: Standard Graphical Lasso with L1 penalty
- **Log**: Adaptive approach with logarithmic penalty
- **L0.5**: Adaptive approach with L0.5 penalty

The plots show normalized mean square error (NMSE) for reconstruction accuracy
and F1 score for sparsity pattern recovery across different regularization levels.
"""

# Authors: Can Pouliquen
#          Mathurin Massias
#          Florian Kozikowski

import numpy as np
from numpy.linalg import norm
import matplotlib.pyplot as plt
from sklearn.metrics import f1_score

from skglm.covariance import GraphicalLasso, AdaptiveGraphicalLasso
from skglm.penalties.separable import LogSumPenalty, L0_5
from skglm.utils.data import make_dummy_covariance_data

# %%
# Generate synthetic sparse precision matrix data
# ===============================================

p = 100
n = 1000
S, _, Theta_true, alpha_max = make_dummy_covariance_data(n, p)
alphas = alpha_max*np.geomspace(1, 1e-4, num=10)

# %%
# Setup models with different penalty functions
# =============================================

penalties = ["L1", "Log", "L0.5"]
n_reweights = 5  # Number of adaptive reweighting iterations
models_tol = 1e-4

models = [
    # Standard Graphical Lasso with L1 penalty
    GraphicalLasso(algo="primal", warm_start=True, tol=models_tol),

    # Adaptive Graphical Lasso with logarithmic penalty
    AdaptiveGraphicalLasso(warm_start=True,
                           penalty=LogSumPenalty(alpha=1.0, eps=1e-10),
                           n_reweights=n_reweights,
                           tol=models_tol),

    # Adaptive Graphical Lasso with L0.5 penalty
    AdaptiveGraphicalLasso(warm_start=True,
                           penalty=L0_5(alpha=1.0),
                           n_reweights=n_reweights,
                           tol=models_tol),
]

# %%
# Compute regularization paths
# ============================

nmse_results = {penalty: [] for penalty in penalties}
f1_results = {penalty: [] for penalty in penalties}


# Fit models across regularization path
for i, (penalty, model) in enumerate(zip(penalties, models)):
    print(f"Fitting {penalty} penalty across {len(alphas)} regularization values...")
    for alpha_idx, alpha in enumerate(alphas):
        print(
            f" alpha {alpha_idx+1}/{len(alphas)}: "
            f"lambda/lambda_max = {alpha/alpha_max:.1e}",
            end="")

        model.alpha = alpha
        model.fit(S, mode='precomputed')

        Theta_est = model.precision_
        nmse = norm(Theta_est - Theta_true)**2 / norm(Theta_true)**2
        f1_val = f1_score(Theta_est.flatten() != 0., Theta_true.flatten() != 0.)

        nmse_results[penalty].append(nmse)
        f1_results[penalty].append(f1_val)

        print(f"NMSE: {nmse:.3f}, F1: {f1_val:.3f}")
    print(f"{penalty} penalty complete!\n")


# %%
# Plot results
# ============
fig, axarr = plt.subplots(2, 1, sharex=True, figsize=([6.11, 3.91]),
                          layout="constrained")
cmap = plt.get_cmap("tab10")
for i, penalty in enumerate(penalties):

    for j, ax in enumerate(axarr):

        if j == 0:
            metric = nmse_results
            best_idx = np.argmin(metric[penalty])
            ystop = np.min(metric[penalty])
        else:
            metric = f1_results
            best_idx = np.argmax(metric[penalty])
            ystop = np.max(metric[penalty])

        ax.semilogx(alphas/alpha_max,
                    metric[penalty],
                    color=cmap(i),
                    linewidth=2.,
                    label=penalty)

        ax.vlines(
            x=alphas[best_idx] / alphas[0],
            ymin=0,
            ymax=ystop,
            linestyle='--',
            color=cmap(i))
        line = ax.plot(
            [alphas[best_idx] / alphas[0]],
            0,
            clip_on=False,
            marker='X',
            color=cmap(i),
            markersize=12)

        ax.grid(which='both', alpha=0.9)

axarr[0].legend(fontsize=14)
axarr[0].set_title(f"{p=},{n=}", fontsize=18)
axarr[0].set_ylabel("NMSE", fontsize=18)
axarr[1].set_ylabel("F1 score", fontsize=18)
_ = axarr[1].set_xlabel(r"$\lambda / \lambda_\mathrm{{max}}$", fontsize=18)

# %%
# Results summary
# ===============

print("Performance at optimal regularization:")
print("-" * 50)

for penalty in penalties:
    best_nmse = min(nmse_results[penalty])
    best_f1 = max(f1_results[penalty])
    print(f"{penalty:>4}: NMSE = {best_nmse:.3f}, F1 = {best_f1:.3f}")

# %% [markdown]
#
# **Metrics explanation:**
#
# * **NMSE (Normalized Mean Square Error)**: Measures reconstruction accuracy
#   of the precision matrix. Lower values = better reconstruction.
# * **F1 Score**: Measures sparsity pattern recovery (correctly identifying
#   which entries are zero/non-zero). Higher values = better sparsity.
#
# **Key finding**: Non-convex penalties achieve significantly
# better sparsity recovery (F1 score) while maintaining
# competitive reconstruction accuracy (NMSE).
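To make the two metrics from the closing markdown cell concrete, here is a tiny standalone check on made-up 3x3 matrices, using the same conventions as the example (the values are hypothetical and not taken from the PR):

import numpy as np
from numpy.linalg import norm
from sklearn.metrics import f1_score

Theta_true = np.array([[2.0, 0.5, 0.0],
                       [0.5, 1.5, 0.0],
                       [0.0, 0.0, 1.0]])
Theta_est = np.array([[1.8, 0.4, 0.1],
                      [0.4, 1.4, 0.0],
                      [0.1, 0.0, 0.9]])

# NMSE: squared Frobenius error, normalized by the norm of the true precision matrix
nmse = norm(Theta_est - Theta_true)**2 / norm(Theta_true)**2
# F1: agreement between the entrywise zero/non-zero patterns
f1 = f1_score(Theta_est.flatten() != 0., Theta_true.flatten() != 0.)
print(f"NMSE = {nmse:.3f}, F1 = {f1:.3f}")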