2024-06-30-azize24a.md

File metadata and controls

47 lines (47 loc) · 1.73 KB
---
title: 'Open Problem: What is the Complexity of Joint Differential Privacy in Linear Contextual Bandits?'
section: Open Problems
abstract: 'Contextual bandits serve as a theoretical framework to design recommender systems, which often rely on user-sensitive data, making privacy a critical concern. However, a significant gap remains between the known upper and lower bounds on the regret achievable in linear contextual bandits under Joint Differential Privacy (JDP), which is a popular privacy definition used in this setting. In particular, the best known regret upper bound is $O\left(d \sqrt{T} \log(T) + \textcolor{blue}{d^{3/4} \sqrt{T \log(1/\delta)} / \sqrt{\epsilon}} \right)$, while the lower bound is $\Omega \left(\sqrt{d T \log(K)} + \textcolor{blue}{d/(\epsilon + \delta)}\right)$. We discuss recent progress on this problem, both in algorithm design and lower bound techniques, and pose the open questions.'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: azize24a
month: 0
tex_title: 'Open Problem: What is the Complexity of Joint Differential Privacy in Linear Contextual Bandits?'
firstpage: 5306
lastpage: 5311
page: 5306-5311
order: 5306
cycles: false
bibtex_author: Azize, Achraf and Basu, Debabrota
author:
- given: Achraf
  family: Azize
- given: Debabrota
  family: Basu
date: 2024-06-30
address:
container-title: Proceedings of Thirty Seventh Conference on Learning Theory
volume: '247'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 6
  - 30
pdf:
extras:
---