
Minimax Optimal Q Learning With Nearest Neighbors



Abstract:

The Markov decision process (MDP) is an important model for sequential decision-making problems. Existing theoretical analyses focus primarily on finite state spaces. For continuous state spaces, an interesting recent work (Shah and Xie, 2018) proposes a nearest neighbor Q learning approach. Under the streaming setting, in which samples are received in a sequential manner, the sample complexity of this method is $\tilde{O}\left(\frac{|\mathcal{A}|}{\epsilon^{d+3}(1-\gamma)^{d+7}}\right)$ for $\epsilon$-accurate Q function estimation of an infinite-horizon discounted MDP with discount factor $\gamma$, in which $|\mathcal{A}|$ is the size of the action space. However, this sample complexity is not optimal, and the method is suitable only for bounded state spaces. In this paper, we propose two new nearest neighbor Q learning methods, one for the offline setting and the other for the streaming setting. We show that the sample complexities of these two methods are $\tilde{O}\left(\frac{|\mathcal{A}|}{\epsilon^{d+2}(1-\gamma)^{d+2}}\right)$ and $\tilde{O}\left(\frac{|\mathcal{A}|}{\epsilon^{d+2}(1-\gamma)^{d+3}}\right)$ for the offline and streaming settings respectively, which significantly improve over existing results and have minimax optimal dependence on $\epsilon$. We achieve this improvement by utilizing samples more efficiently. In particular, the method of Shah and Xie (2018) discards all samples after each iteration, so these samples are somewhat wasted. In contrast, our offline method does not remove any samples, and our streaming method at time $t$ only removes samples collected earlier than time $\beta t$; thus our methods significantly reduce the loss of information. Apart from the sample complexity, our methods also have the additional advantages of better computational complexity and suitability to unbounded state spaces. Finally, we extend our work to the case where both the state and action spaces are continuous.
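To make the idea of nearest neighbor Q learning concrete, the sketch below shows a minimal, generic variant: the Q function over a continuous state space is represented by values at a fixed set of anchor states, and both lookups and updates are routed through the nearest anchor. This is only an illustration of the general technique, not the paper's algorithms; the anchor construction, learning-rate schedule, and sample-retention rules here are placeholder assumptions, and all names (`NearestNeighborQ`, `anchors`, `lr`) are hypothetical.

```python
import numpy as np

class NearestNeighborQ:
    """Minimal sketch of tabular Q-learning with a nearest-neighbor
    state discretization (illustrative; not the paper's method)."""

    def __init__(self, anchors, n_actions, gamma=0.99, lr=0.1):
        self.anchors = np.asarray(anchors)          # (m, d) anchor states
        self.q = np.zeros((len(self.anchors), n_actions))
        self.gamma = gamma                          # discount factor
        self.lr = lr                                # constant step size here;
                                                    # analyses typically use a
                                                    # decaying schedule

    def _nn(self, state):
        # Index of the anchor nearest to `state` in Euclidean distance.
        return int(np.argmin(np.linalg.norm(self.anchors - state, axis=1)))

    def value(self, state, action):
        # Q estimate at a continuous state, read off its nearest anchor.
        return self.q[self._nn(state), action]

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update; the next-state value is taken from
        # the nearest anchor of the (continuous) next state.
        i = self._nn(state)
        target = reward + self.gamma * self.q[self._nn(next_state)].max()
        self.q[i, action] += self.lr * (target - self.q[i, action])
```

In this simplified form every incoming sample updates a single anchor, whereas the paper's point is precisely how samples are retained and reused across iterations (keeping all samples offline, or dropping only those older than $\beta t$ in the streaming setting) to reach the improved complexities stated above.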
Published in: IEEE Transactions on Information Theory ( Volume: 71, Issue: 2, February 2025)
Page(s): 1300 - 1322
Date of Publication: 25 December 2024

