---
title: "Prediction from compression for models with infinite memory, with applications to hidden Markov and renewal processes"
section: Original Papers
abstract: 'Consider the problem of predicting the next symbol given a sample path of length $n$, whose joint distribution belongs to a distribution class that may have long-term memory. The goal is to compete with the conditional predictor that knows the true model. For both hidden Markov models (HMMs) and renewal processes, we determine the optimal prediction risk in Kullback-Leibler divergence up to universal constant factors. Extending existing results in finite-order Markov models (Han et al. (2023)) and drawing ideas from universal compression, the proposed estimator has a prediction risk bounded by redundancy of the distribution class and a memory term that accounts for the long-range dependency of the model. Notably, for HMMs with bounded state and observation spaces, a polynomial-time estimator based on dynamic programming is shown to achieve the optimal prediction risk $\Theta(\frac{\log n}{n})$; prior to this work, the only known result of this type is $O(\frac{1}{\log n})$ obtained using Markov approximation (Sharan et al. (2018)). Matching minimax lower bounds are obtained by making connections to redundancy and mutual information via a reduction argument.'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: han24a
month: 0
tex_title: "Prediction from compression for models with infinite memory, with applications to hidden Markov and renewal processes"
firstpage: 2270
lastpage: 2307
page: 2270-2307
order: 2270
cycles: false
bibtex_author: Han, Yanjun and Jiang, Tianze and Wu, Yihong
author:
- given: Yanjun
  family: Han
- given: Tianze
  family: Jiang
- given: Yihong
  family: Wu
date: 2024-06-30
address:
container-title: Proceedings of Thirty Seventh Conference on Learning Theory
volume: '247'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 6
  - 30
pdf:
extras:
---