---
title: 'On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective'
section: Original Papers
abstract: 'In this paper, we provide lower bounds for Differentially Private (DP) online learning algorithms. Our result shows that, for a broad class of $(\epsilon,\delta)$-DP online algorithms, for a number of rounds $T$ such that $\log T \leq O(1/\delta)$, the expected number of mistakes incurred by the algorithm grows as $\Omega(\log T)$. This matches the upper bound obtained by Golowich and Livni (2021) and stands in contrast to non-private online learning, where the number of mistakes is independent of $T$. To the best of our knowledge, our work is the first result towards settling lower bounds for DP online learning and partially addresses the open question posed by Sanyal and Ramponi (2022).'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: dmitriev24a
month: 0
tex_title: 'On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective'
firstpage: 1379
lastpage: 1398
page: 1379-1398
order: 1379
cycles: false
bibtex_author: Dmitriev, Daniil and Szab{\'o}, Krist{\'o}f and Sanyal, Amartya
author:
- given: Daniil
  family: Dmitriev
- given: Kristóf
  family: Szabó
- given: Amartya
  family: Sanyal
date: 2024-06-30
address:
container-title: Proceedings of Thirty Seventh Conference on Learning Theory
volume: '247'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 6
  - 30
pdf:
extras:
---