I. Introduction
In nonparametric statistics, optimal rates have been established for various statistical tasks [2], [3], [4], [5], with most results focusing on independent and identically distributed (i.i.d.) data, while problems with non-i.i.d. samples remain far less explored. Among these problems, the Markov decision process (MDP) is an important one: a stochastic control process that models a wide range of practical sequential decision-making problems [6], [7], [8], [9], [10]. In an MDP, at each time step, an agent observes the current state, selects an action from an action set, transitions to a new state, and receives a reward.

Compared with nonparametric estimation for i.i.d. data [2], [3], [4], [5], [11] and MDPs with finite state spaces [12], [13], [14], [15], the design of learning algorithms for MDPs with continuous state spaces faces two challenges. First, states, actions, and rewards arrive sequentially. In early steps, estimates of the value function are inevitably inaccurate because only limited information is available, and since later estimates depend on earlier ones, errors made in the early stages propagate to subsequent estimates. Properly handling the early steps is therefore crucial. Second, with a continuous state space, individual states are almost never revisited, so the value function cannot be updated state-by-state as in the discrete setting. New update rules that exploit information from neighboring states are therefore needed.
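To make the role of the value function and of neighboring states concrete, the following is a minimal sketch assuming a discounted, infinite-horizon MDP with state space $\mathcal{S}$, reward function $r$, discount factor $\gamma \in (0,1)$, and policy $\pi$; the notation, the kernel-smoothed update, and the bandwidth $h$ are illustrative assumptions rather than the setting or estimator analyzed here. The value function is the expected discounted return, and with a continuous $\mathcal{S}$ a sample-based estimate at a state $s$ can only borrow information from observed transitions $(s_i, a_i, r_i, s_i')$ at nearby states, for example through a kernel-weighted average of one-step Bellman targets:
\[
V^{\pi}(s) \;=\; \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(S_t, A_t) \,\Big|\, S_0 = s,\; A_t \sim \pi(\cdot \mid S_t)\right],
\qquad
\widehat{V}_{\mathrm{new}}(s) \;=\; \frac{\sum_{i} K\!\left(\frac{s - s_i}{h}\right)\left(r_i + \gamma\, \widehat{V}_{\mathrm{old}}(s_i')\right)}{\sum_{i} K\!\left(\frac{s - s_i}{h}\right)},
\]
where $K$ is a smoothing kernel and $h$ a bandwidth. This is only a sketch of how information from neighboring states can enter an update rule; it also makes the first challenge visible, since the targets $r_i + \gamma \widehat{V}_{\mathrm{old}}(s_i')$ depend on earlier estimates, so early errors can propagate.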