2024-06-30-kelner24a.md

File metadata and controls

63 lines (63 loc) · 2.76 KB
---
title: "Lasso with Latents: Efficient Estimation, Covariate Rescaling, and Computational-Statistical Gaps"
section: Original Papers
abstract: "It is well-known that the statistical performance of Lasso can suffer significantly when the covariates of interest have strong correlations. In particular, the prediction error of Lasso becomes much worse than computationally inefficient alternatives like Best Subset Selection. Due to a large conjectured computational-statistical tradeoff in the problem of sparse linear regression, it may be impossible to close this gap in general. In this work, we propose a natural sparse linear regression setting where strong correlations between covariates arise from unobserved latent variables. In this setting, we analyze the problem caused by strong correlations and design a surprisingly simple fix. While Lasso with standard normalization of covariates fails, there exists a heterogeneous scaling of the covariates with which Lasso will suddenly obtain strong provable guarantees for estimation. Moreover, we design a simple, efficient procedure for computing such a \"smart scaling.\" The sample complexity of the resulting \"rescaled Lasso\" algorithm incurs (in the worst case) quadratic dependence on the sparsity of the underlying signal. While this dependence is not information-theoretically necessary, we give evidence that it is optimal among the class of polynomial-time algorithms, via the method of low-degree polynomials. This argument reveals a new connection between sparse linear regression and a special version of sparse PCA with a \\emph{near-critical negative spike}. The latter problem can be thought of as a real-valued analogue of learning a sparse parity. Using it, we also establish the first computational-statistical gap for the closely related problem of learning a Gaussian Graphical Model."
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: kelner24a
month: 0
tex_title: "Lasso with Latents: Efficient Estimation, Covariate Rescaling, and Computational-Statistical Gaps"
firstpage: 2840
lastpage: 2886
page: 2840-2886
order: 2840
cycles: false
bibtex_author: Kelner, Jonathan and Koehler, Frederic and Meka, Raghu and Rohatgi, Dhruv
author:
- given: Jonathan
  family: Kelner
- given: Frederic
  family: Koehler
- given: Raghu
  family: Meka
- given: Dhruv
  family: Rohatgi
date: 2024-06-30
address:
container-title: Proceedings of Thirty Seventh Conference on Learning Theory
volume: '247'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 6
  - 30
pdf:
extras: []
---