ENH Add support for GLasso and Adaptive (reweighted) GLasso #280

Open · wants to merge 31 commits into base: main

Commits (31)
c199221
add support for graphical lasso and adaptive (reweighted) graphical l…
Perceptronium Feb 10, 2025
5696d40
Update estimators to use barebones solver, make it at least as fast a…
Perceptronium Mar 20, 2025
3e3df79
add Reweighted GLasso regularization path example
Perceptronium Mar 21, 2025
c9da9d4
Merge branch 'main' of https://github.com/scikit-learn-contrib/skglm …
Perceptronium Apr 2, 2025
ca6960f
fix issues in glasso reg path example
Perceptronium Apr 2, 2025
4538002
fix glasso solver issues, move estimator to own file, create dedicate…
Perceptronium Apr 2, 2025
a4ea3fd
remove snakecase function names in test functions
Perceptronium Apr 2, 2025
ff9c2fe
Merge branch 'graphical_lasso' of https://github.com/Perceptronium/sk…
floriankozikowski Apr 11, 2025
3148357
adjust weight updates, still need to test
floriankozikowski Apr 11, 2025
082c00d
trying out different update methods, no success so far
floriankozikowski Apr 14, 2025
7fc0f21
Merge branch 'main' of github.com:scikit-learn-contrib/skglm into gra…
mathurinm Apr 22, 2025
a4e192f
empty
floriankozikowski Apr 22, 2025
33ca52c
add explicit handling of zeros in penalties.derivative, works now, to…
floriankozikowski Apr 22, 2025
fc70009
leave original name of old strategy based version, so tests dont fail
floriankozikowski Apr 22, 2025
a927e50
fix minor name dependency
floriankozikowski Apr 22, 2025
b62990d
ci trigger
mathurinm Apr 30, 2025
02f0fb5
Merge remote-tracking branch 'upstream/main' into graphical_lasso
floriankozikowski Jun 26, 2025
264faed
Merge branch 'graphical_lasso' of https://github.com/Perceptronium/sk…
floriankozikowski Jun 26, 2025
16dbe4e
fix linters
floriankozikowski Jun 26, 2025
c50f5d3
fix pytests
floriankozikowski Jun 26, 2025
c9d1545
clean up & fix adaptive method, update example, unit test accordingly
floriankozikowski Jun 27, 2025
3f833cd
edit what's new, and api
floriankozikowski Jun 27, 2025
ad857f2
make sklearn compatible, add docstrings, improve init. of cov. matrix…
floriankozikowski Jun 30, 2025
653a431
fix doc example, make_dummy_covariance_data
floriankozikowski Jul 1, 2025
b79859e
adjust author block
floriankozikowski Jul 1, 2025
08c1896
verify alternatives to barebones solver, allow gram as input for Gram…
floriankozikowski Jul 7, 2025
d7824c1
option to disable subdiff computation inn cd_gram
mathurinm Jul 8, 2025
ce0938f
remove debugging, clean up, add math to doc
floriankozikowski Jul 8, 2025
f6273bd
Merge remote-tracking branch 'upstream/main' into graphical_lasso to …
floriankozikowski Jul 8, 2025
183adfa
edit whats new after merge conflict
floriankozikowski Jul 8, 2025
e4333c7
revert gram file back to original as no changes needed in there for t…
floriankozikowski Jul 8, 2025
12 changes: 12 additions & 0 deletions doc/api.rst
@@ -30,6 +30,18 @@ Estimators
WeightedLasso


Covariance
==========

.. currentmodule:: skglm

.. autosummary::
:toctree: generated/

GraphicalLasso
AdaptiveGraphicalLasso


Penalties
=========

2 changes: 2 additions & 0 deletions doc/changes/0.5.rst
@@ -6,3 +6,5 @@ Version 0.5 (in progress)
- Add experimental :ref:`QuantileHuber <skglm.experimental.quantile_huber.QuantileHuber>` and :ref:`SmoothQuantileRegressor <skglm.experimental.quantile_huber.SmoothQuantileRegressor>` for quantile regression, and an example script (PR: :gh:`312`).
- Add :ref:`GeneralizedLinearEstimatorCV <skglm.cv.GeneralizedLinearEstimatorCV>` for cross-validation with automatic parameter selection for L1 and elastic-net penalties (PR: :gh:`299`)
- Add :class:`skglm.datafits.group.PoissonGroup` datafit for group-structured Poisson regression. (PR: :gh:`317`)
- Add :ref:`GraphicalLasso <skglm.covariance.GraphicalLasso>` for sparse inverse covariance estimation with both primal and dual algorithms
- Add :ref:`AdaptiveGraphicalLasso <skglm.covariance.AdaptiveGraphicalLasso>` for non-convex penalties via an iterative reweighting strategy
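For reviewers, a minimal usage sketch of the two new estimators, adapted from the example script added in this PR; the alpha value and other parameter settings below are illustrative, not defaults.

from skglm.covariance import GraphicalLasso, AdaptiveGraphicalLasso
from skglm.penalties.separable import LogSumPenalty
from skglm.utils.data import make_dummy_covariance_data

# Empirical covariance S of a synthetic dataset with a sparse true precision matrix
S, _, Theta_true, alpha_max = make_dummy_covariance_data(1000, 100)

# Standard Graphical Lasso with L1 penalty, fitted on a precomputed covariance
glasso = GraphicalLasso(algo="primal", warm_start=True, tol=1e-4)
glasso.alpha = 0.1 * alpha_max   # illustrative regularization level
glasso.fit(S, mode='precomputed')
Theta_l1 = glasso.precision_     # estimated sparse precision matrix

# Adaptive (reweighted) variant approximating a non-convex log-sum penalty
adaptive = AdaptiveGraphicalLasso(warm_start=True,
                                  penalty=LogSumPenalty(alpha=1.0, eps=1e-10),
                                  n_reweights=5, tol=1e-4)
adaptive.alpha = 0.1 * alpha_max
adaptive.fit(S, mode='precomputed')
Theta_log = adaptive.precision_

Both estimators expose the estimated sparse precision matrix through the precision_ attribute.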
169 changes: 169 additions & 0 deletions examples/plot_reweighted_glasso_reg_path.py
@@ -0,0 +1,169 @@
"""
=======================================================================
Regularization paths for the Graphical Lasso and its Adaptive variation
=======================================================================

This example demonstrates how non-convex penalties in the Adaptive Graphical Lasso
can achieve superior sparsity recovery compared to the standard L1 penalty.

The Adaptive Graphical Lasso uses iterative reweighting to approximate non-convex
penalties, following Candès et al. (2007). Non-convex penalties often produce
better sparsity patterns by more aggressively shrinking small coefficients while
preserving large ones.

We compare three approaches:
- **L1**: Standard Graphical Lasso with L1 penalty
- **Log**: Adaptive approach with logarithmic penalty
- **L0.5**: Adaptive approach with L0.5 penalty

The plots show normalized mean square error (NMSE) for reconstruction accuracy
and F1 score for sparsity pattern recovery across different regularization levels.
"""

# Authors: Can Pouliquen
# Mathurin Massias
# Florian Kozikowski

import numpy as np
from numpy.linalg import norm
import matplotlib.pyplot as plt
from sklearn.metrics import f1_score

from skglm.covariance import GraphicalLasso, AdaptiveGraphicalLasso
from skglm.penalties.separable import LogSumPenalty, L0_5
from skglm.utils.data import make_dummy_covariance_data

# %%
# Generate synthetic sparse precision matrix data
# ===============================================

p = 100
n = 1000
S, _, Theta_true, alpha_max = make_dummy_covariance_data(n, p)
alphas = alpha_max * np.geomspace(1, 1e-4, num=10)

# %%
# Set up models with different penalty functions
# ===============================================

penalties = ["L1", "Log", "L0.5"]
n_reweights = 5 # Number of adaptive reweighting iterations
models_tol = 1e-4

models = [
# Standard Graphical Lasso with L1 penalty
GraphicalLasso(algo="primal", warm_start=True, tol=models_tol),

# Adaptive Graphical Lasso with logarithmic penalty
AdaptiveGraphicalLasso(warm_start=True,
penalty=LogSumPenalty(alpha=1.0, eps=1e-10),
n_reweights=n_reweights,
tol=models_tol),

# Adaptive Graphical Lasso with L0.5 penalty
AdaptiveGraphicalLasso(warm_start=True,
penalty=L0_5(alpha=1.0),
n_reweights=n_reweights,
tol=models_tol),
]

# %%
# Compute regularization paths
# ============================

nmse_results = {penalty: [] for penalty in penalties}
f1_results = {penalty: [] for penalty in penalties}


# Fit models across regularization path
for i, (penalty, model) in enumerate(zip(penalties, models)):
print(f"Fitting {penalty} penalty across {len(alphas)} regularization values...")
for alpha_idx, alpha in enumerate(alphas):
print(
f" alpha {alpha_idx+1}/{len(alphas)}: "
f"lambda/lambda_max = {alpha/alpha_max:.1e}",
end="")

model.alpha = alpha
model.fit(S, mode='precomputed')

Theta_est = model.precision_
nmse = norm(Theta_est - Theta_true)**2 / norm(Theta_true)**2
f1_val = f1_score(Theta_true.flatten() != 0., Theta_est.flatten() != 0.)

nmse_results[penalty].append(nmse)
f1_results[penalty].append(f1_val)

print(f"NMSE: {nmse:.3f}, F1: {f1_val:.3f}")
print(f"{penalty} penalty complete!\n")


# %%
# Plot results
# ============
fig, axarr = plt.subplots(2, 1, sharex=True, figsize=(6.11, 3.91),
layout="constrained")
cmap = plt.get_cmap("tab10")
for i, penalty in enumerate(penalties):

for j, ax in enumerate(axarr):

if j == 0:
metric = nmse_results
best_idx = np.argmin(metric[penalty])
ystop = np.min(metric[penalty])
else:
metric = f1_results
best_idx = np.argmax(metric[penalty])
ystop = np.max(metric[penalty])

ax.semilogx(alphas/alpha_max,
metric[penalty],
color=cmap(i),
linewidth=2.,
label=penalty)

ax.vlines(
x=alphas[best_idx] / alphas[0],
ymin=0,
ymax=ystop,
linestyle='--',
color=cmap(i))
line = ax.plot(
[alphas[best_idx] / alphas[0]],
0,
clip_on=False,
marker='X',
color=cmap(i),
markersize=12)

ax.grid(which='both', alpha=0.9)

axarr[0].legend(fontsize=14)
axarr[0].set_title(f"{p=},{n=}", fontsize=18)
axarr[0].set_ylabel("NMSE", fontsize=18)
axarr[1].set_ylabel("F1 score", fontsize=18)
_ = axarr[1].set_xlabel(r"$\lambda / \lambda_\mathrm{{max}}$", fontsize=18)
# %%
# Results summary
# ===============

print("Performance at optimal regularization:")
print("-" * 50)

for penalty in penalties:
best_nmse = min(nmse_results[penalty])
best_f1 = max(f1_results[penalty])
print(f"{penalty:>4}: NMSE = {best_nmse:.3f}, F1 = {best_f1:.3f}")

# %% [markdown]
#
# **Metrics explanation:**
#
# * **NMSE (Normalized Mean Square Error)**: Measures reconstruction accuracy
# of the precision matrix. Lower values = better reconstruction.
# * **F1 Score**: Measures sparsity pattern recovery (correctly identifying
# which entries are zero/non-zero). Higher values = better sparsity.
#
# **Key finding**: Non-convex penalties achieve significantly
# better sparsity recovery (F1 score) while maintaining
# competitive reconstruction accuracy (NMSE).
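A conceptual note on the adaptive variant: as the example's introduction states, it approximates non-convex penalties by iterative reweighting (Candès et al.). The sketch below illustrates that loop under stated assumptions: weighted_glasso_solver is a placeholder for any weighted-L1 graphical lasso solver, not an skglm function, the weight update shown corresponds to the log-sum penalty, and the actual AdaptiveGraphicalLasso implementation may differ in its details.

import numpy as np

def adaptive_glasso_sketch(S, alpha, weighted_glasso_solver, n_reweights=5, eps=1e-10):
    """Conceptual sketch of the adaptive (reweighted) graphical lasso.

    weighted_glasso_solver(S, alpha, W) is a placeholder for any solver of the
    weighted L1 problem  min_Theta  -log det(Theta) + <S, Theta> + alpha * sum(W * |Theta|).
    """
    p = S.shape[0]
    W = np.ones((p, p))      # first pass is a plain L1 graphical lasso
    Theta = np.eye(p)
    for _ in range(n_reweights):
        Theta = weighted_glasso_solver(S, alpha, W)
        # Reweighting step for the log-sum penalty: small entries receive large
        # weights and are pushed toward zero on the next pass, while large
        # entries are barely penalized.
        W = 1.0 / (np.abs(Theta) + eps)
    return Theta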
1 change: 1 addition & 0 deletions skglm/__init__.py
@@ -5,3 +5,4 @@
SparseLogisticRegression, GeneralizedLinearEstimator, CoxEstimator, GroupLasso,
)
from .cv import GeneralizedLinearEstimatorCV # noqa F401
from .covariance import GraphicalLasso, AdaptiveGraphicalLasso # noqa F401
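This export makes both estimators importable from the package top level:

from skglm import GraphicalLasso, AdaptiveGraphicalLasso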