| Field | Value |
|---|---|
| title | Refined Sample Complexity for Markov Games with Independent Linear Function Approximation (Extended Abstract) |
| section | Original Papers |
| abstract | Markov Games (MGs) are an important model for Multi-Agent Reinforcement Learning (MARL). It was long believed that the "curse of multi-agents" (i.e., algorithmic performance drops exponentially with the number of agents) is unavoidable, until several recent works (Daskalakis et al., 2023; Cui et al., 2023; Wang et al., 2023). While these works resolved the curse of multi-agents, when the state spaces are prohibitively large and (linear) function approximations are deployed, they either had a slower convergence rate of … |
| layout | inproceedings |
| series | Proceedings of Machine Learning Research |
| publisher | PMLR |
| issn | 2640-3498 |
| id | dai24a |
| month | 0 |
| tex_title | Refined Sample Complexity for Markov Games with Independent Linear Function Approximation (Extended Abstract) |
| firstpage | 1260 |
| lastpage | 1261 |
| page | 1260-1261 |
| order | 1260 |
| cycles | false |
| bibtex_author | Dai, Yan and Cui, Qiwen and Du, Simon S. |
| author | |
| date | 2024-06-30 |
| address | |
| container-title | Proceedings of Thirty Seventh Conference on Learning Theory |
| volume | 247 |
| genre | inproceedings |
| issued | |
| extras | |