
Minimax Optimal Q Learning With Nearest Neighbors


Abstract:

The Markov decision process (MDP) is an important model of sequential decision-making problems. Existing theoretical analyses focus primarily on finite state spaces. For continuous state spaces, an interesting recent work (Shah and Xie, 2018) proposes a nearest neighbor Q learning approach. Under the streaming setting, in which samples are received in a sequential manner, the sample complexity of this method is $\tilde{O}\left(\frac{|\mathcal{A}|}{\epsilon^{d+3}(1-\gamma)^{d+7}}\right)$ for $\epsilon$-accurate Q function estimation of an infinite horizon discounted MDP with discount factor $\gamma$, in which $|\mathcal{A}|$ is the size of the action space. However, this sample complexity is not optimal, and the method is suitable only for bounded state spaces. In this paper, we propose two new nearest neighbor Q learning methods, one for the offline setting and the other for the streaming setting. We show that the sample complexities of these two methods are $\tilde{O}\left(\frac{|\mathcal{A}|}{\epsilon^{d+2}(1-\gamma)^{d+2}}\right)$ and $\tilde{O}\left(\frac{|\mathcal{A}|}{\epsilon^{d+2}(1-\gamma)^{d+3}}\right)$ for the offline and streaming settings, respectively, which significantly improve over existing results and have minimax optimal dependence on $\epsilon$. We achieve this improvement by utilizing samples more efficiently. In particular, the method of Shah and Xie (2018) discards all samples after each iteration, so these samples are partially wasted. In contrast, our offline method does not remove any samples, and our streaming method at time $t$ only removes samples received before time $\beta t$, so our methods significantly reduce the loss of information. Apart from the improved sample complexity, our methods have the additional advantages of lower computational complexity and suitability for unbounded state spaces. Finally, we extend our work to the case where both the state and action spaces are continuous.
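As an informal illustration of the sample-retention rule mentioned above, the following Python sketch maintains a buffer in which, at time $t$, only samples received before time $\beta t$ are discarded, rather than clearing the whole buffer after each iteration. This is a minimal sketch of the retention idea only, not the authors' algorithm; the class name and the fixed value of beta are illustrative assumptions.

```python
from collections import deque

class RetentionBuffer:
    """Keeps (time, sample) pairs whose arrival time is at least beta * t,
    where t is the current time. beta in (0, 1) is a retention parameter;
    the value used here is a placeholder, not the paper's choice."""

    def __init__(self, beta=0.5):
        self.beta = beta
        self.buffer = deque()  # samples stored in arrival order

    def add(self, t, sample):
        self.buffer.append((t, sample))
        # Drop only those samples whose timestamp has fallen below beta * t;
        # everything received at or after time beta * t is kept.
        while self.buffer and self.buffer[0][0] < self.beta * t:
            self.buffer.popleft()

    def samples(self):
        return [s for (_, s) in self.buffer]
```

A buffer of this kind retains a constant fraction of the history at every time step, which is the sense in which the streaming method loses much less information than restarting from an empty sample set.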
Published in: IEEE Transactions on Information Theory ( Volume: 71, Issue: 2, February 2025)
Page(s): 1300–1322
Date of Publication: 25 December 2024


I. Introduction

In nonparametric statistics, optimal rates have been established for various statistical tasks [2], [3], [4], [5], with most of them focusing on independent and identically distributed (i.i.d.) data, while problems with non-i.i.d. samples are rarely explored. Among these problems, an important one is the Markov decision process (MDP), a stochastic control process that models various practical sequential decision-making problems [6], [7], [8], [9], [10]. In an MDP, at each time step, an agent selects an action from an action set, moves to another state, and receives a reward. Compared with nonparametric estimation for i.i.d. data [2], [3], [4], [5], [11] and MDPs with finite state spaces [12], [13], [14], [15], the design of learning algorithms for MDPs with continuous state spaces faces two challenges. First, states, actions, and rewards are received sequentially. In early steps, estimates of the value function are inevitably inaccurate due to limited information. Since later estimates depend on earlier results, estimation errors in the early stages have a negative impact on later estimates, so proper handling of the early steps is crucial. Second, with a continuous state space, states do not appear repeatedly, so the value function cannot be updated state-by-state as in the discrete case. It is therefore necessary to design new update rules that use information from neighboring states.
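The following minimal Python sketch illustrates the generic nearest-neighbor idea behind such update rules, not the specific offline or streaming estimators analyzed in this paper: since an exact state is never revisited, the Q-value at a query state is formed by averaging Bellman targets computed from the k stored samples whose states are closest to the query. The function and argument names, as well as the values of k and gamma, are illustrative assumptions.

```python
import numpy as np

def nn_q_target(query_state, action, samples, q_func, actions, gamma=0.9, k=5):
    """samples[action]: list of (state, reward, next_state) tuples observed
    for that action; q_func(s, a): current Q estimate. Returns a k-nearest-
    neighbor average of Bellman targets as an estimate of Q(query_state, action)."""
    data = samples[action]
    if not data:
        return 0.0
    # Distances from the query state to every stored state for this action.
    states = np.array([s for (s, _, _) in data], dtype=float).reshape(len(data), -1)
    dists = np.linalg.norm(states - np.ravel(query_state), axis=1)
    # Indices of the k nearest stored states.
    nearest = np.argsort(dists)[:k]
    # Average the Bellman targets r + gamma * max_a' Q(s', a') over these neighbors.
    targets = [
        data[i][1] + gamma * max(q_func(data[i][2], a) for a in actions)
        for i in nearest
    ]
    return float(np.mean(targets))
```

In this sketch, averaging over nearby states plays the role that repeated visits to the same state play in tabular Q-learning, which is the second challenge described above.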

References

[1] D. Shah and Q. Xie, "Q-learning with nearest neighbors," in Proc. Adv. Neural Inf. Process. Syst., 2018, pp. 3111–3121.
[2] A. B. Tsybakov, Introduction to Nonparametric Estimation. New York, NY, USA: Springer, 2009.
[3] Y. Yang, "Minimax nonparametric classification. I. Rates of convergence," IEEE Trans. Inf. Theory, vol. 45, no. 7, pp. 2271–2284, Jan. 1999.
[4] C. Scott and R. D. Nowak, "Minimax-optimal classification with dyadic decision trees," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1335–1353, Apr. 2006.
[5] G. Raskutti, M. J. Wainwright, and B. Yu, "Minimax rates of estimation for high-dimensional linear regression over ℓq-balls," IEEE Trans. Inf. Theory, vol. 57, no. 10, pp. 6976–6994, Oct. 2011.
[6] D. J. White, "A survey of applications of Markov decision processes," J. Oper. Res. Soc., vol. 44, no. 11, pp. 1073–1096, Nov. 1993.
[7] E. A. Feinberg and A. Shwartz, Handbook of Markov Decision Processes: Methods and Applications, vol. 40. Cham, Switzerland: Springer, 2012.
[8] M. A. Alsheikh, D. T. Hoang, D. Niyato, H.-P. Tan, and S. Lin, "Markov decision processes with applications in wireless sensor networks: A survey," IEEE Commun. Surveys Tuts., vol. 17, no. 3, pp. 1239–1267, 3rd Quart., 2015.
[9] N. Bäuerle and U. Rieder, Markov Decision Processes With Applications to Finance. Cham, Switzerland: Springer, 2011.
[10] M. Lauri, D. Hsu, and J. Pajarinen, "Partially observable Markov decision processes in robotics: A survey," IEEE Trans. Robot., vol. 39, no. 1, pp. 21–40, Feb. 2023.
[11] P. Zhao, J. Wu, Z. Liu, and H. Wu, "Contextual bandits for unbounded context distributions," 2024, arXiv:2408.09655.
[12] E. Even-Dar, Y. Mansour, and P. Bartlett, "Learning rates for Q-learning," J. Mach. Learn. Res., vol. 5, no. 1, pp. 1–25, 2003.
[13] C. L. Beck and R. Srikant, "Error bounds for constant step-size Q-learning," Syst. Control Lett., vol. 61, no. 12, pp. 1203–1208, Dec. 2012.
[14] Z. Chen, S. T. Maguluri, S. Shakkottai, and K. Shanmugam, "Finite-sample analysis of stochastic approximation using smooth convex envelopes," 2020, arXiv:2002.00874.
[15] G. Li, C. Cai, Y. Chen, Y. Wei, and Y. Chi, "Is Q-learning minimax optimal? A tight sample complexity analysis," Operations Res., vol. 72, no. 1, pp. 222–236, Jan. 2024.
[16] H. Jiang, "Non-asymptotic uniform rates of consistency for k-NN regression," in Proc. AAAI Conf. Artif. Intell., vol. 33, Jul. 2019, pp. 3999–4006.
[17] S. Mobin, J. Arnemann, and F. Sommer, "Information-based learning by agents in unbounded state spaces," in Proc. Adv. Neural Inf. Process. Syst., vol. 27, Dec. 2014, pp. 3023–3031.
[18] J. He, "Deep reinforcement learning with a natural language action space," 2015, arXiv:1511.04636.
[19] S. R. Sinclair, S. Banerjee, and C. L. Yu, "Adaptive discretization for episodic reinforcement learning in metric spaces," ACM SIGMETRICS Perform. Eval. Rev., vol. 48, no. 1, pp. 17–18, Jul. 2020.
[20] S. R. Sinclair, S. Banerjee, and C. L. Yu, "Adaptive discretization in online reinforcement learning," Operations Res., vol. 71, no. 5, pp. 1636–1652, Sep. 2023.
[21] C. J. C. H. Watkins and P. Dayan, "Q-learning," Mach. Learn., vol. 8, pp. 279–292, May 1992.
[22] M. G. Azar, R. Munos, and H. J. Kappen, "Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model," Mach. Learn., vol. 91, no. 3, pp. 325–349, Jun. 2013.
[23] M. J. Wainwright, "Stochastic approximation with cone-contractive operators: Sharp ℓ∞-bounds for Q-learning," 2019, arXiv:1905.06265.
[24] C. Jin, Z. Allen-Zhu, S. Bubeck, and M. I. Jordan, "Is Q-learning provably efficient?" in Proc. Adv. Neural Inf. Process. Syst., vol. 31, 2018, pp. 4863–4873.
[25] K. Dong, Y. Wang, X. Chen, and L. Wang, "Q-learning with UCB exploration is sample efficient for infinite-horizon MDP," 2019, arXiv:1901.09311.
[26] K. Lakshmanan, R. Ortner, and D. Ryabko, "Improved regret bounds for undiscounted continuous reinforcement learning," in Proc. Int. Conf. Mach. Learn., Jul. 2015, pp. 524–532.
[27] Y. Bai, T. Xie, N. Jiang, and Y.-X. Wang, "Provably efficient Q-learning with low switching cost," in Proc. Adv. Neural Inf. Process. Syst., vol. 32, 2019, pp. 8002–8011.
[28] Z. Zhang, Y. Zhou, and X. Ji, "Almost optimal model-free reinforcement learning via reference-advantage decomposition," in Proc. Adv. Neural Inf. Process. Syst., vol. 33, 2020, pp. 15198–15207.
[29] G. Li, L. Shi, Y. Chen, and Y. Chi, "Breaking the sample complexity barrier to regret-optimal model-free reinforcement learning," in Proc. Adv. Neural Inf. Process. Syst., vol. 12, Dec. 2022, pp. 969–1043.
[30] J. He, D. Zhou, and Q. Gu, "Nearly minimax optimal reinforcement learning for discounted MDPs," in Proc. Adv. Neural Inf. Process. Syst., 2020, pp. 22288–22300.
