---
title: 'Offline Reinforcement Learning: Role of State Aggregation and Trajectory Data'
section: Original Papers
abstract: 'We revisit the problem of offline reinforcement learning with value function
  realizability but without Bellman completeness. Previous work by Xie and Jiang (2021)
  and Foster et al. (2022) left open the question of whether a bounded (all-policy)
  concentrability coefficient along with trajectory-based offline data admits a
  polynomial sample complexity. In this work, we provide a negative answer to this
  question for the task of offline policy evaluation. In addition to addressing this
  question, we provide a rather complete picture for offline policy evaluation with
  only value function realizability. Our primary findings are threefold: 1) the sample
  complexity of offline policy evaluation is governed by the concentrability
  coefficient in an aggregated Markov Transition Model jointly determined by the
  function class and the offline data distribution, rather than that in the original
  MDP; this unifies and generalizes the ideas of Xie and Jiang (2021) and Foster et
  al. (2022); 2) the concentrability coefficient in the aggregated Markov Transition
  Model may grow exponentially with the horizon length, even when the concentrability
  coefficient in the original MDP is small and the offline data is \emph{admissible}
  (i.e., the data distribution equals the occupancy measure of some policy); 3) under
  value function realizability, there is a generic reduction that can convert any hard
  instance with admissible data to a hard instance with trajectory data, implying that
  trajectory data offers no extra benefits over admissible data. These three pieces
  jointly resolve the open problem, though each of them could be of independent
  interest.'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: jia24a
month: 0
tex_title: 'Offline Reinforcement Learning: Role of State Aggregation and Trajectory Data'
firstpage: 2644
lastpage: 2719
page: 2644-2719
order: 2644
cycles: false
bibtex_author: Jia, Zeyu and Rakhlin, Alexander and Sekhari, Ayush and Wei, Chen-Yu
author:
- given: Zeyu
  family: Jia
- given: Alexander
  family: Rakhlin
- given: Ayush
  family: Sekhari
- given: Chen-Yu
  family: Wei
date: 2024-06-30
address:
container-title: Proceedings of Thirty Seventh Conference on Learning Theory
volume: '247'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 6
  - 30
pdf:
extras:
---