field | value
---|---
title | List Sample Compression and Uniform Convergence
section | Original Papers
abstract | List learning is a variant of supervised classification where the learner outputs multiple plausible labels for each instance rather than just one. We investigate classical principles related to generalization within the context of list learning. Our primary goal is to determine whether classical principles in the PAC setting retain their applicability in the domain of list PAC learning. We focus on uniform convergence (which is the basis of Empirical Risk Minimization) and on sample compression (which is a powerful manifestation of Occam’s Razor). In classical PAC learning, both uniform convergence and sample compression satisfy a form of ‘completeness’: whenever a class is learnable, it can also be learned by a learning rule that adheres to these principles. We ask whether the same completeness holds true in the list learning setting. We show that uniform convergence remains equivalent to learnability in the list PAC learning setting. In contrast, our findings reveal surprising results regarding sample compression: we prove that when the label space is
layout | inproceedings
series | Proceedings of Machine Learning Research
publisher | PMLR
issn | 2640-3498
id | hanneke24b
month | 0
tex_title | List Sample Compression and Uniform Convergence
firstpage | 2360
lastpage | 2388
page | 2360-2388
order | 2360
cycles | false
bibtex_author | Hanneke, Steve and Moran, Shay and Waknine, Tom
author |
date | 2024-06-30
address |
container-title | Proceedings of Thirty Seventh Conference on Learning Theory
volume | 247
genre | inproceedings
issued |
extras |