Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP), based on a single trajectory of Markovian samples induced by a behavior policy. Focusing on a $\gamma$-discounted MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$, we demonstrate that the $\ell_{\infty}$-based sample complexity of classical asynchronous Q-learning --- namely, the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function --- is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2}+ \frac{t_{mix}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor, provided that a proper constant learning rate is adopted. Here, $t_{mix}$ and $\mu_{\min}$ denote respectively the mixing time and the minimum state-action occupancy probability of the sample trajectory. The first term of this bound matches the sample complexity in the synchronous case with independent samples drawn from the stationary distribution of the trajectory. The second term reflects the cost taken for the empirical distribution of the Markovian trajectory to reach a steady state, which is incurred at the very beginning and becomes amortized as the algorithm runs. Encouragingly, the above bound improves upon the state-of-the-art result \cite{qu2020finite} by a factor of at least $|\mathcal{S}||\mathcal{A}|$ for all scenarios, and by a factor of at least $t_{mix}|\mathcal{S}||\mathcal{A}|$ for any sufficiently small accuracy level $\varepsilon$. Further, we demonstrate that the scaling on the effective horizon $\frac{1}{1-\gamma}$ can be improved by means of variance reduction.


Introduction
Model-free algorithms such as Q-learning (Watkins and Dayan, 1992) play a central role in recent breakthroughs of reinforcement learning (RL) (Mnih et al., 2015). In contrast to model-based algorithms that decouple model estimation and planning, model-free algorithms attempt to learn the optimal decisions directly from the collected data samples (in the form of a policy that selects actions based on perceived states of the environment), without modeling the environment explicitly. Therefore, model-free algorithms are able to process data in an online fashion and are often memory-efficient. Understanding and improving the sample efficiency of model-free algorithms lie at the core of recent research activity (Dulac-Arnold et al., 2019), whose importance is particularly evident for the class of RL applications in which data collection is costly and time-consuming (such as clinical trials, online advertisements, and so on).
The current paper concentrates on Q-learning, an off-policy model-free algorithm that seeks to learn the optimal action-value function by observing what happens under a behavior policy. The off-policy feature makes it appealing in various RL applications where it is infeasible to change the policy under evaluation on the fly. There are two basic update models in Q-learning. The first one is termed a synchronous setting, which hypothesizes on the existence of a simulator (also called a generative model); at each time, the simulator generates an independent sample for every state-action pair, and the estimates are updated simultaneously across all state-action pairs. The second model concerns an asynchronous setting, where only a single sample trajectory following a behavior policy is accessible; at each time, the algorithm updates its estimate of a single state-action pair using one state transition from the trajectory. Obviously, understanding the asynchronous setting is considerably more challenging than the synchronous model, due to the Markovian (and hence non-i.i.d.) nature of its sampling process.

| Algorithm | Sample complexity | Learning rate |
| --- | --- | --- |
| Asynchronous Q-learning (Even-Dar and Mansour, 2003) | $\frac{(t_{cover})^{\frac{1}{1-\gamma}}}{(1-\gamma)^4\varepsilon^2}$ | linear: $\frac{1}{t}$ |
| Asynchronous Q-learning (Even-Dar and Mansour, 2003) | $\left(\frac{t_{cover}^{1+3\omega}}{(1-\gamma)^4\varepsilon^2}\right)^{\frac{1}{\omega}} + \left(\frac{t_{cover}}{1-\gamma}\right)^{\frac{1}{1-\omega}}$ | polynomial: $\frac{1}{t^\omega}$, $\omega \in (\frac{1}{2}, 1)$ |

Table 1: Sample complexity of asynchronous Q-learning and its variants to compute an ε-optimal Q-function in the $\ell_\infty$ norm, where we hide all logarithmic factors. With regards to the Markovian trajectory induced by the behavior policy, we denote by $t_{cover}$, $t_{mix}$, and $\mu_{\min}$ the cover time, mixing time, and minimum state-action occupancy probability of the associated stationary distribution, respectively.

Focusing on an infinite-horizon Markov decision process (MDP) with state space S and action space A, this work investigates asynchronous Q-learning on a single Markovian trajectory induced by a behavior policy. We ask a fundamental question:

How many samples are needed for asynchronous Q-learning to learn the optimal Q-function?
Despite a considerable number of prior works analyzing this algorithm (ranging from the classical works Jaakkola et al. (1994); Tsitsiklis (1994) to the very recent paper Qu and Wierman (2020)), it remains unclear whether existing sample complexity analyses of asynchronous Q-learning are tight. As we shall elucidate momentarily, there exists a large gap (at least as large as |S||A|) between the state-of-the-art sample complexity bound for asynchronous Q-learning and the one derived for the synchronous counterpart (Wainwright, 2019a). This raises a natural desire to examine whether there is any bottleneck intrinsic to the asynchronous setting that significantly limits its performance.

Main contributions
This paper develops a refined analysis framework that sharpens our understanding of the sample efficiency of classical asynchronous Q-learning on a single sample trajectory. Setting the stage, consider an infinite-horizon MDP with state space S, action space A, and a discount factor γ ∈ (0, 1). What we have access to is a sample trajectory of the MDP induced by a stationary behavior policy. In contrast to the synchronous setting with i.i.d. samples, we single out two parameters intrinsic to the Markovian sample trajectory: (i) the mixing time $t_{mix}$, which characterizes how fast the trajectory disentangles itself from the initial state; (ii) the smallest state-action occupancy probability $\mu_{\min}$ of the stationary distribution of the trajectory, which captures how frequently each state-action pair is visited.
With these parameters in place, our findings unveil that the sample complexity required for asynchronous Q-learning to yield an ε-optimal Q-function estimate (in a strong $\ell_\infty$ sense) is at most
$$\widetilde{O}\left(\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{mix}}{\mu_{\min}(1-\gamma)}\right). \qquad (1)$$
The first component of (1) is consistent with the sample complexity derived for the setting with independent samples drawn from the stationary distribution of the trajectory (Wainwright, 2019a). In comparison, the second term of (1), which is unaffected by the accuracy level ε, is intrinsic to the Markovian nature of the trajectory; in essence, this term reflects the cost taken for the empirical distribution of the sample trajectory to converge to a steady state, and becomes amortized as the algorithm runs. In other words, the behavior of asynchronous Q-learning would resemble what happens in the setting with independent samples, as long as the algorithm has been run for reasonably long.

In addition, our analysis framework readily yields another sample complexity bound stated in terms of the cover time $t_{cover}$, namely, the time taken for the trajectory to visit all state-action pairs at least once. This facilitates comparisons with several prior results based on the cover time.

Furthermore, we leverage the idea of variance reduction to improve the scaling with the effective horizon $\frac{1}{1-\gamma}$. We demonstrate that a variance-reduced variant of asynchronous Q-learning attains ε-accuracy using at most $\widetilde{O}\big(\frac{1}{\mu_{\min}(1-\gamma)^3\varepsilon^2} + \frac{t_{mix}}{\mu_{\min}(1-\gamma)}\big)$ samples, matching the complexity of its synchronous counterpart (Wainwright, 2019b) for any sufficiently small accuracy level ε. Moreover, by taking the action space to be a singleton set, the aforementioned results immediately lead to $\ell_\infty$-based sample complexity guarantees for temporal difference (TD) learning (Sutton, 1988) on Markovian samples.
Comparisons with past results. A large fraction of the classical literature focused on asymptotic convergence analysis of asynchronous Q-learning (e.g. Jaakkola et al. (1994); Szepesvári (1998); Tsitsiklis (1994)); these results, however, did not lead to non-asymptotic sample complexity bounds. The state-of-the-art sample complexity analysis was due to the recent work Qu and Wierman (2020), which derived a sample complexity bound $\widetilde{O}\big(\frac{t_{mix}}{\mu_{\min}^2(1-\gamma)^5\varepsilon^2}\big)$. Given the obvious lower bound $1/\mu_{\min} \geq |S||A|$, our result (1) improves upon that of Qu and Wierman (2020) by a factor at least on the order of $|S||A| \min\big\{t_{mix}, \frac{1}{(1-\gamma)^4\varepsilon^2}\big\}$. In particular, for sufficiently small accuracy level ε, our improvement exceeds a factor of at least $t_{mix}|S||A|$.
In addition, we note that several prior works (Beck and Srikant, 2012; Even-Dar and Mansour, 2003) developed sample complexity bounds in terms of the cover time $t_{cover}$ of the sample trajectory; our result strengthens these bounds considerably as well. The interested reader is referred to Table 1 for more precise comparisons, and to Section 5 for a discussion of further related works.
1.2 Paper organization, notation, and basic concepts

The remainder of the paper is organized as follows. Section 2 formulates the problem and introduces some basic quantities and assumptions. Section 3 presents the asynchronous Q-learning algorithm along with its theoretical guarantees, whereas Section 4 accommodates the extension: asynchronous variance-reduced Q-learning. A more detailed account of related works is given in Section 5. The analyses of our main theorems are described in Sections 6-9. We conclude this paper with a summary of our results and a list of future directions in Section 10. Several preliminary facts about Markov chains and the proofs of technical lemmas are postponed to the appendix.

¹ Let $\mathcal{X} := \{|S|, |A|, \frac{1}{1-\gamma}, \frac{1}{\varepsilon}\}$. The notation $f(\mathcal{X}) = O(g(\mathcal{X}))$ means there exists a universal constant $C_1 > 0$ such that $f \leq C_1 g$. The notation $\widetilde{O}(\cdot)$ is defined analogously except that it hides any logarithmic factor.
Next, we introduce a set of notation that will be used throughout the paper. Denote by ∆(S) (resp. ∆(A)) the probability simplex over the set S (resp. A). For any vector $z = [z_i]_{1\leq i\leq n} \in \mathbb{R}^n$, we overload the notation $\sqrt{\cdot}$ and $|\cdot|$ to denote entrywise operations, such that $\sqrt{z} := [\sqrt{z_i}]_{1\leq i\leq n}$ and $|z| := [|z_i|]_{1\leq i\leq n}$. For any vectors $z = [z_i]_{1\leq i\leq n}$ and $w = [w_i]_{1\leq i\leq n}$, the notation $z \geq w$ (resp. $z \leq w$) means $z_i \geq w_i$ (resp. $z_i \leq w_i$) for all $1 \leq i \leq n$. Additionally, we denote by 1 the all-one vector, I the identity matrix, and 1{·} the indicator function. For any matrix $P = [P_{ij}]$, we denote $\|P\|_1 := \max_i \sum_j |P_{ij}|$. Throughout this paper, we use $c, c_0, c_1, \cdots$ to denote universal constants that do not depend on either the parameters of the MDP or the target levels (ε, δ), and their exact values may change from line to line.

Finally, let us introduce the concept of uniform ergodicity for Markov chains. Consider any Markov chain $(X_0, X_1, X_2, \cdots)$ with transition kernel P, finite state space $\mathcal{X}$ and stationary distribution µ, and denote by $P^t(\cdot \mid x)$ the distribution of $X_t$ conditioned on $X_0 = x \in \mathcal{X}$. This Markov chain is said to be uniformly ergodic if, for some ρ < 1 and M < ∞, one has
$$\max_{x \in \mathcal{X}} d_{\mathsf{TV}}\big(P^t(\cdot \mid x), \mu\big) \leq M \rho^t \qquad \text{for all } t \geq 0,$$
where $d_{\mathsf{TV}}(\mu, \nu)$ stands for the total variation distance between two distributions µ and ν (Tsybakov and Zaiats, 2009):
$$d_{\mathsf{TV}}(\mu, \nu) := \frac{1}{2} \sum_{x \in \mathcal{X}} |\mu(x) - \nu(x)|. \qquad (5)$$
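To make these definitions concrete, here is a small numerical sketch (assuming NumPy; the two-state chain is an arbitrary toy example of ours, not one from the paper) that computes the total variation distance and verifies that the worst-case distance to stationarity decays in t, as uniform ergodicity requires.

```python
import numpy as np

def tv_distance(mu, nu):
    # d_TV(mu, nu) = (1/2) * sum_x |mu(x) - nu(x)|
    return 0.5 * np.abs(np.asarray(mu) - np.asarray(nu)).sum()

def worst_case_tv(P, mu, t):
    # max over initial states x of d_TV(P^t(. | x), mu)
    Pt = np.linalg.matrix_power(P, t)
    return max(tv_distance(Pt[x], mu) for x in range(P.shape[0]))

# toy 2-state chain
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# stationary distribution: leading left eigenvector of P, normalized
evals, evecs = np.linalg.eig(P.T)
mu = np.real(evecs[:, np.argmax(np.real(evals))])
mu = mu / mu.sum()

# the worst-case TV distance to stationarity shrinks (geometrically) with t
d1, d5 = worst_case_tv(P, mu, 1), worst_case_tv(P, mu, 5)
```

For this chain the decay rate ρ is governed by the second eigenvalue of P (here 0.7), so the bound $M\rho^t$ is visible already after a few steps.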

Models and background
This paper studies an infinite-horizon MDP with discounted rewards, as represented by a quintuple M = (S, A, P, r, γ). Here, S and A denote respectively the (finite) state space and action space, whereas γ ∈ (0, 1) indicates the discount factor. Particular emphasis is placed on the scenario with large state/action space and long effective horizon, namely, |S|, |A| and the effective horizon $\frac{1}{1-\gamma}$ can all be quite large. We use P : S × A → ∆(S) to represent the probability transition kernel of the MDP, where for each state-action pair (s, a) ∈ S × A, $P(s' \mid s, a)$ denotes the probability of transiting to state s′ from state s when action a is executed. The reward function is represented by r : S × A → [0, 1], such that r(s, a) denotes the immediate reward in state s when action a is taken; for simplicity, we assume throughout that all rewards lie within [0, 1]. We focus on the tabular setting which, despite its basic form, has not yet been well understood. See Bertsekas (2017) for an in-depth introduction of this model.

Q-function and Bellman operator. An action selection rule is termed a policy and represented by a mapping π : S → ∆(A), which maps a state to a distribution over the set of actions. A policy is said to be stationary if it is time-invariant. We denote by $\{s_t, a_t, r_t\}_{t=0}^{\infty}$ a sample trajectory, where $s_t$ (resp. $a_t$) denotes the state (resp. the action taken) at time t, and $r_t = r(s_t, a_t)$ denotes the reward received at time t. It is assumed throughout that the rewards are deterministic and depend solely upon the current state-action pair. We denote by $V^\pi : S \to \mathbb{R}$ the value function of a policy π, namely,
$$V^\pi(s) := \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\Big|\, s_0 = s\right],$$
which is the expected discounted cumulative reward received when (i) the initial state is $s_0 = s$, (ii) the actions are taken based on the policy π (namely, $a_t \sim \pi(s_t)$ for all t ≥ 0) and the trajectory is generated based on the transition kernel (namely, $s_{t+1} \sim P(\cdot \mid s_t, a_t)$).
It can be easily verified that $0 \leq V^\pi(s) \leq \frac{1}{1-\gamma}$ for any π. The action-value function (or Q-function) $Q^\pi : S \times A \to \mathbb{R}$ of a policy π is defined by
$$Q^\pi(s, a) := \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\Big|\, s_0 = s, \, a_0 = a\right],$$
where the actions are taken according to the policy π except for the initial action (i.e. $a_t \sim \pi(s_t)$ for all t ≥ 1).
As is well known, there exists an optimal policy, denoted by $\pi^\star$, that simultaneously maximizes $V^\pi(s)$ and $Q^\pi(s, a)$ uniformly over all state-action pairs (s, a) ∈ S × A. Here and throughout, we shall denote by $V^\star := V^{\pi^\star}$ and $Q^\star := Q^{\pi^\star}$ the optimal value function and the optimal Q-function, respectively. In addition, the Bellman operator $\mathcal{T}$, which is a mapping from $\mathbb{R}^{|S| \times |A|}$ to itself, is defined such that the (s, a)-th entry of $\mathcal{T}(Q)$ is given by
$$\mathcal{T}(Q)(s, a) := r(s, a) + \gamma \, \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\left[\max_{a' \in A} Q(s', a')\right].$$
It is well known that the optimal Q-function $Q^\star$ is the unique fixed point of the Bellman operator.
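Since $\mathcal{T}$ is a γ-contraction in the sup norm, iterating it from any initialization converges to $Q^\star$. The following sketch (assuming NumPy; the random 3-state, 2-action MDP is an arbitrary toy instance) checks numerically that the iterate is a fixed point of $\mathcal{T}$.

```python
import numpy as np

def bellman(Q, P, r, gamma):
    # T(Q)(s,a) = r(s,a) + gamma * E_{s' ~ P(.|s,a)} [ max_a' Q(s', a') ]
    return r + gamma * np.einsum('san,n->sa', P, Q.max(axis=1))

# tiny random MDP: 3 states, 2 actions
rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.9
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)      # normalize each P(.|s,a) into a distribution
r = rng.random((S, A))

# T is a gamma-contraction in the sup norm, so iterating it converges to Q*
Q = np.zeros((S, A))
for _ in range(1000):
    Q = bellman(Q, P, r, gamma)

err = np.max(np.abs(bellman(Q, P, r, gamma) - Q))   # fixed-point residual
```

After 1000 iterations the residual is on the order of $\gamma^{1000}$ times the initial error, i.e. numerically zero.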
Sample trajectory and behavior policy. Imagine we have access to a sample trajectory {s t , a t , r t } ∞ t=0 generated by the MDP M under a given stationary policy π b -called a behavior policy. The behavior policy is deployed to help one learn the "behavior" of the MDP under consideration, which often differs from the optimal policy being sought. Given the stationarity of π b , the sample trajectory can be viewed as a sample path of a time-homogeneous Markov chain over the set of state-action pairs {(s, a) | s ∈ S, a ∈ A}. Throughout this paper, we impose the following uniform ergodicity assumption (Paulin, 2015) (see the definition of uniform ergodicity in Section 1.2).
Assumption 1. The Markov chain induced by the stationary behavior policy π b is uniformly ergodic.
There are several properties concerning the behavior policy and its resulting Markov chain that play a crucial role in learning the optimal Q-function. Specifically, denote by $\mu_{\pi_b}$ the stationary distribution (over all state-action pairs) of the aforementioned behavior Markov chain, and define
$$\mu_{\min} := \min_{(s,a) \in S \times A} \mu_{\pi_b}(s, a). \qquad (7)$$
Intuitively, $\mu_{\min}$ reflects an information bottleneck; that is, the smaller $\mu_{\min}$ is, the more samples are needed in order to ensure all state-action pairs are visited sufficiently many times. In addition, we define the associated mixing time of the chain as
$$t_{mix} := \min\left\{ t \,\Big|\, \max_{(s_0, a_0) \in S \times A} d_{\mathsf{TV}}\big(P^t(\cdot \mid s_0, a_0), \mu_{\pi_b}\big) \leq \frac{1}{4} \right\}, \qquad (8)$$
where $P^t(\cdot \mid s_0, a_0)$ denotes the distribution of $(s_t, a_t)$ conditional on the initial state-action pair $(s_0, a_0)$, and $d_{\mathsf{TV}}(\mu, \nu)$ is the total variation distance between µ and ν (see (5)). In words, the mixing time $t_{mix}$ captures how fast the sample trajectory decorrelates from its initial state. Moreover, we define the cover time associated with this Markov chain as follows
$$t_{cover} := \min\left\{ t \,\Big|\, \min_{(s_0, a_0) \in S \times A} \mathbb{P}\big(B_t \mid s_0, a_0\big) \geq \frac{1}{2} \right\}, \qquad (9)$$
where $B_t$ denotes the event such that all (s, a) ∈ S × A have been visited at least once between time 0 and time t, and $\mathbb{P}(B_t \mid s_0, a_0)$ denotes the probability of $B_t$ conditional on the initial state-action pair $(s_0, a_0)$.
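For intuition, both $\mu_{\min}$ and $t_{mix}$ can be computed exactly for a small example. The sketch below (assuming NumPy; the 2-state, 2-action MDP and the uniform behavior policy are toy choices of ours) builds the induced chain over state-action pairs, whose transition probability from (s, a) to (s′, a′) is $P(s' \mid s, a)\,\pi_b(a' \mid s')$, and evaluates the two quantities.

```python
import numpy as np

def stationary_dist(P):
    # leading left eigenvector of P, normalized into a distribution
    evals, evecs = np.linalg.eig(P.T)
    v = np.real(evecs[:, np.argmax(np.real(evals))])
    return v / v.sum()

def mixing_time(P, mu, tol=0.25):
    # smallest t with max_x d_TV(P^t(.|x), mu) <= 1/4, as in (8)
    for t in range(1, 10_000):
        Pt = np.linalg.matrix_power(P, t)
        if max(0.5 * np.abs(Pt[x] - mu).sum() for x in range(len(mu))) <= tol:
            return t
    raise RuntimeError("chain did not mix")

# state-action chain of a 2-state / 2-action MDP under a uniform behavior policy
P_env = np.array([[[0.7, 0.3], [0.4, 0.6]],
                  [[0.5, 0.5], [0.1, 0.9]]])   # P_env[s, a, s']
pi_b = 0.5                                     # uniform over 2 actions
P_sa = np.zeros((4, 4))                        # chain state = 2*s + a
for s in range(2):
    for a in range(2):
        for s2 in range(2):
            for a2 in range(2):
                P_sa[2*s + a, 2*s2 + a2] = P_env[s, a, s2] * pi_b

mu = stationary_dist(P_sa)
mu_min, t_mix = mu.min(), mixing_time(P_sa, mu)
```

Here $1/\mu_{\min} \geq |S||A| = 4$ always holds, with equality exactly when the occupancy is uniform.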
Remark 1. It is known that for a finite-state Markov chain, having a finite mixing time t mix implies uniform ergodicity of the chain (Paulin, 2015, Page 4). Thus, our uniform ergodicity assumption is equivalent to the assumption imposed in Qu and Wierman (2020) (which assumes ergodicity in addition to a finite t mix ).
Goal. Given a single sample trajectory $\{s_t, a_t, r_t\}_{t=0}^{\infty}$ generated by the behavior policy $\pi_b$, we aim to compute/approximate the optimal Q-function $Q^\star$ in an $\ell_\infty$ sense. This setting, in which a state-action pair can be updated only when the Markovian trajectory reaches it, is commonly referred to as asynchronous Q-learning (Tsitsiklis, 1994) in tabular RL. The current paper focuses on characterizing, in a non-asymptotic manner, the sample efficiency of classical Q-learning and its variance-reduced variant.
3 Asynchronous Q-learning on a single Markovian trajectory

Algorithm
The Q-learning algorithm (Watkins and Dayan, 1992) is arguably one of the most famous off-policy algorithms aimed at learning the optimal Q-function. Given the Markovian trajectory $\{s_t, a_t, r_t\}_{t=0}^{\infty}$ generated by the behavior policy $\pi_b$, the asynchronous Q-learning algorithm maintains a Q-function estimate $Q_t : S \times A \to \mathbb{R}$ at each time t and adopts the following iterative update rule
$$Q_t(s_{t-1}, a_{t-1}) = (1 - \eta_t) Q_{t-1}(s_{t-1}, a_{t-1}) + \eta_t \, \mathcal{T}_t(Q_{t-1})(s_{t-1}, a_{t-1}), \qquad Q_t(s, a) = Q_{t-1}(s, a) \text{ for } (s, a) \neq (s_{t-1}, a_{t-1}), \qquad (10)$$
for any t ≥ 1, where $\eta_t$ denotes the learning rate or the stepsize. Here, $\mathcal{T}_t$ denotes the empirical Bellman operator w.r.t. the t-th sample, that is,
$$\mathcal{T}_t(Q)(s_{t-1}, a_{t-1}) := r(s_{t-1}, a_{t-1}) + \gamma \max_{a' \in A} Q(s_t, a'). \qquad (11)$$
It is worth emphasizing that at each time t, only a single entry, the one corresponding to the sampled state-action pair $(s_{t-1}, a_{t-1})$, is updated, with all remaining entries unaltered. While the estimate $Q_0$ can be initialized to arbitrary values, we shall set $Q_0(s, a) = 0$ for all (s, a) unless otherwise noted. The corresponding value function estimate $V_t : S \to \mathbb{R}$ at time t is given by
$$V_t(s) := \max_{a \in A} Q_t(s, a). \qquad (12)$$
The complete algorithm is described in Algorithm 1.
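The update rule above can be sketched in a few lines (assuming NumPy; the 2-state, 2-action MDP, the uniform behavior policy, and all numerical choices below are toy illustrations of ours, not the paper's Algorithm 1 verbatim). The sketch runs asynchronous Q-learning with a constant stepsize on a simulated trajectory and compares the result against $Q^\star$ obtained by value iteration on the known model.

```python
import numpy as np

def async_q_learning(env_step, behavior, s0, T, eta, gamma, S, A):
    """Classical asynchronous Q-learning with a constant stepsize eta on a
    single Markovian trajectory: at each step, only the entry of the visited
    state-action pair is updated (cf. (10)-(11))."""
    Q = np.zeros((S, A))
    s = s0
    for _ in range(T):
        a = behavior(s)                       # behavior policy picks the action
        r, s_next = env_step(s, a)            # one transition from the trajectory
        target = r + gamma * Q[s_next].max()  # empirical Bellman operator T_t
        Q[s, a] = (1 - eta) * Q[s, a] + eta * target
        s = s_next
    return Q

# toy 2-state, 2-action MDP with a uniformly random behavior policy
rng = np.random.default_rng(1)
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.6, 0.4], [0.2, 0.8]]])      # P[s, a, s']
R = np.array([[1.0, 0.0], [0.0, 1.0]])        # r(s, a)
gamma = 0.5

Q = async_q_learning(
    env_step=lambda s, a: (R[s, a], int(rng.choice(2, p=P[s, a]))),
    behavior=lambda s: int(rng.integers(2)),
    s0=0, T=150_000, eta=0.01, gamma=gamma, S=2, A=2)

# reference: Q* from value iteration on the known model
Qstar = np.zeros((2, 2))
for _ in range(200):
    Qstar = R + gamma * np.einsum('san,n->sa', P, Qstar.max(axis=1))
```

The constant stepsize drives the $\ell_\infty$ error down to an error floor governed by η, which is exactly the behavior exploited by the analysis in this section.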

Theoretical guarantees for asynchronous Q-learning
We are in a position to present our main theory regarding the non-asymptotic sample complexity of asynchronous Q-learning, for which the key parameters µ min and t mix defined respectively in (7) and (8) play a vital role. The proof of this result is provided in Section 6.
Theorem 1 (Asynchronous Q-learning). For the asynchronous Q-learning algorithm detailed in Algorithm 1, there exist some universal constants $c_0, c_1 > 0$ such that for any $0 < \delta < 1$ and $0 < \varepsilon \leq \frac{1}{1-\gamma}$, one has
$$\forall (s, a) \in S \times A : \quad |Q_T(s, a) - Q^\star(s, a)| \leq \varepsilon$$
with probability at least 1 − δ, provided that the iteration number T and the learning rates $\eta_t \equiv \eta$ obey
$$T \geq \frac{c_0}{\mu_{\min}} \left\{ \frac{1}{(1-\gamma)^5 \varepsilon^2} + \frac{t_{mix}}{1-\gamma} \right\} \log\left(\frac{|S||A|T}{\delta}\right) \log\left(\frac{1}{(1-\gamma)^2 \varepsilon}\right), \qquad (13a)$$
$$\eta = \frac{c_1}{\log\left(\frac{|S||A|T}{\delta}\right)} \min\left\{ \frac{(1-\gamma)^4 \varepsilon^2}{\gamma^2}, \frac{1}{t_{mix}} \right\}. \qquad (13b)$$

Remark 2. The careful reader might immediately remark that the learning rate η studied in Theorem 1 relies on prior knowledge of ε, δ and T. This is more stringent than the learning rates in Qu and Wierman (2020), which do not require pre-determining these parameters. To address this issue, we will explore a more adaptive learning rate schedule shortly in Section 3.4, which achieves the same sample complexity without the need of knowing these parameters a priori.
Theorem 1 delivers a finite-sample/finite-time analysis of asynchronous Q-learning, given that a fixed learning rate is adopted and chosen appropriately. The $\ell_\infty$-based sample complexity required for Algorithm 1 to attain ε accuracy is at most
$$\widetilde{O}\left(\frac{1}{\mu_{\min}(1-\gamma)^5 \varepsilon^2} + \frac{t_{mix}}{\mu_{\min}(1-\gamma)}\right). \qquad (14)$$
A few implications are in order.
Dependency on the minimum state-action occupancy probability $\mu_{\min}$. Our sample complexity bound (14) scales linearly in $1/\mu_{\min}$, which is in general unimprovable. Consider, for instance, the ideal scenario where state-action occupancy is nearly uniform across all state-action pairs, in which case $1/\mu_{\min}$ is on the order of |S||A|. In such a "near-uniform" case, the sample complexity scales linearly with |S||A|, and this dependency matches the known minimax lower bound of Azar et al. (2013) derived for the setting with independent samples. In comparison, the bound in Qu and Wierman (2020, Theorem 7) depends at least quadratically on $1/\mu_{\min}$, which is at least |S||A| times larger than our result (14).
Dependency on the effective horizon $\frac{1}{1-\gamma}$. The sample size bound (14) scales as $\frac{1}{(1-\gamma)^5 \varepsilon^2}$, which coincides with both Chen et al. (2020); Wainwright (2019a) (for the synchronous setting) and Beck and Srikant (2012); Qu and Wierman (2020) (for the asynchronous setting) with either a rescaled linear learning rate or a constant learning rate. This turns out to be the sharpest scaling known to date for the classical form of Q-learning.
Dependency on the mixing time $t_{mix}$. The second additive term of our sample complexity (14) depends linearly on the mixing time $t_{mix}$ and is (almost) independent of the target accuracy ε. The influence of this mixing term is a consequence of the expense taken for the Markovian trajectory to reach a steady state, which is a one-time cost that can be amortized over later iterations if the algorithm is run for reasonably long. Put another way, if the behavior chain mixes not too slowly with respect to ε (in the sense that $t_{mix} \leq \frac{1}{(1-\gamma)^4 \varepsilon^2}$), then the algorithm behaves as if the samples were independently drawn from the stationary distribution of the trajectory. In comparison, the influences of $t_{mix}$ and $\frac{1}{(1-\gamma)^5 \varepsilon^2}$ in Qu and Wierman (2020) (cf. Table 1) are multiplicative regardless of the value of ε, thus resulting in a much higher sample complexity. For instance, if $\varepsilon = O\big(\frac{1}{(1-\gamma)^2 \sqrt{t_{mix}}}\big)$, then the sample complexity result therein is at least $t_{mix}|S||A|$ times larger than our result (modulo some log factor).
Schedule of learning rates. An interesting aspect of our analysis lies in the adoption of a time-invariant learning rate, under which the $\ell_\infty$ error decays linearly, down to some error floor whose value is dictated by the learning rate. Therefore, a desired statistical accuracy can be achieved by properly setting the learning rate based on the target accuracy level ε and then determining the sample complexity accordingly. In comparison, classical analyses typically adopted a (rescaled) linear or a polynomial learning rule (Even-Dar and Mansour, 2003; Qu and Wierman, 2020). While the work Beck and Srikant (2012) studied Q-learning with a constant learning rate, their bounds were conservative and fell short of revealing the optimal scaling. Furthermore, we note that adopting time-invariant learning rates is not the only option that enables the advertised sample complexity; as we shall elucidate in Section 3.4, one can also adopt carefully designed diminishing learning rates to achieve the same performance guarantees.
Mean estimation error. The high-probability bound in Theorem 1 readily translates into a mean estimation error guarantee. To see this, let us first make note of the following basic crude bound (see e.g. Beck and Srikant (2012); Gosavi (2006)):
$$|Q_t(s, a) - Q^\star(s, a)| \leq \frac{1}{1-\gamma}$$
for all t ≥ 0 and all (s, a) ∈ S × A. By taking δ = ε(1 − γ) in Theorem 1, we immediately reach
$$\mathbb{E}\left[\max_{s,a} \big|Q_T(s, a) - Q^\star(s, a)\big|\right] \leq \varepsilon + \delta \cdot \frac{1}{1-\gamma} = 2\varepsilon,$$
provided that T obeys (13a). As a result, the sample complexity remains unchanged (up to some logarithmic factor) when the goal is to achieve the mean error bound $\mathbb{E}\big[\max_{s,a} |Q_T(s, a) - Q^\star(s, a)|\big] \leq 2\varepsilon$.
In addition, our analysis framework immediately leads to another sample complexity guarantee stated in terms of the cover time t cover (cf. (9)), which facilitates comparisons with several past work Beck and Srikant (2012); Even-Dar and Mansour (2003). The proof follows essentially that of Theorem 1, with a sketch provided in Section 7.
Theorem 2. For the asynchronous Q-learning algorithm detailed in Algorithm 1, there exist some universal constants $c_0, c_1 > 0$ such that for any $0 < \delta < 1$ and $0 < \varepsilon \leq \frac{1}{1-\gamma}$, one has
$$\forall (s, a) \in S \times A : \quad |Q_T(s, a) - Q^\star(s, a)| \leq \varepsilon$$
with probability at least 1 − δ, provided that the iteration number T and the learning rates $\eta_t \equiv \eta$ obey
$$T \geq \frac{c_0 \, t_{cover}}{(1-\gamma)^5 \varepsilon^2} \log\left(\frac{|S||A|T}{\delta}\right) \log\left(\frac{1}{(1-\gamma)^2 \varepsilon}\right) \qquad \text{and} \qquad \eta = \frac{c_1 (1-\gamma)^4 \varepsilon^2}{\log\left(\frac{|S||A|T}{\delta}\right)}.$$

Remark 3. The main difference between the cover-time-based analysis and the mixing-time-based analysis lies in the number of visits to each state-action pair (s, a) in every time frame. Owing to the measure concentration of Markov chains, the number of visits to each (s, a) concentrates around its expected value in each time frame, which in turn ensures that all state-action pairs have been visited at least once as long as the time frame is sufficiently long. This important property allows one to establish an intimate connection between the analysis of Theorem 1 and that of Theorem 2.
In a nutshell, this theorem tells us that the $\ell_\infty$-based sample complexity of classical asynchronous Q-learning is bounded above by
$$\widetilde{O}\left(\frac{t_{cover}}{(1-\gamma)^5 \varepsilon^2}\right),$$
which scales linearly with the cover time. This improves upon the prior result of Even-Dar and Mansour (2003) (resp. Beck and Srikant (2012)) by a factor of at least $t_{cover}^{3.29} \geq |S|^{3.29}|A|^{3.29}$ (resp. $t_{cover}^2 |S||A| \geq |S|^3 |A|^3$).
See Table 1 for detailed comparisons. We shall further make note of some connections between $t_{cover}$ and $t_{mix}/\mu_{\min}$ to help compare Theorem 1 and Theorem 2: (i) in general, $t_{cover} = O(t_{mix}/\mu_{\min})$ for uniformly ergodic chains; (ii) one can find some cases where $t_{mix}/\mu_{\min} = O(t_{cover})$. Consequently, while Theorem 1 does not strictly dominate Theorem 2 in all instances, the aforementioned connections reveal that Theorem 1 is tighter for the worst-case scenarios. The interested reader is referred to Section A.2 for details.

A special case: TD learning
In the special circumstance where the set of allowable actions A is a singleton, the corresponding MDP reduces to a Markov reward process (MRP), where the state transition kernel P : S → ∆(S) describes the probability of transitioning between different states, and r : S → [0, 1] denotes the reward function (so that r(s) is the immediate reward in state s). The goal is to estimate the value function V : S → R from the trajectory $\{s_t, r_t\}_{t=0}^{\infty}$, which arises commonly in the task of policy evaluation for a given deterministic policy. The Q-learning procedure in this special setting reduces to the well-known TD learning algorithm, which maintains an estimate $V_t : S \to \mathbb{R}$ at each time t and proceeds according to the following iterative update:²
$$V_t(s_{t-1}) = (1 - \eta_t) V_{t-1}(s_{t-1}) + \eta_t \big( r(s_{t-1}) + \gamma V_{t-1}(s_t) \big), \qquad V_t(s) = V_{t-1}(s) \text{ for } s \neq s_{t-1}. \qquad (19)$$
As usual, $\eta_t$ denotes the learning rate at time t, and $V_0$ is taken to be 0. Consequently, our analysis for asynchronous Q-learning with a Markovian trajectory immediately leads to non-asymptotic $\ell_\infty$ guarantees for TD learning, stated below as a corollary of Theorem 1. A similar result can be stated in terms of the cover time as a corollary to Theorem 2, which we omit for brevity.
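The TD update (19) can be sketched as follows (assuming NumPy; the 2-state MRP and all numerical choices are toy illustrations of ours). The exact value function of an MRP solves the linear system $V = r + \gamma P V$, which provides a ground-truth comparison.

```python
import numpy as np

def td0(transitions, S, eta, gamma):
    """TD(0) on a single MRP trajectory: each step updates only the entry
    of the visited state, V(s) <- (1 - eta) V(s) + eta (r + gamma V(s'))."""
    V = np.zeros(S)
    for s, r, s_next in transitions:
        V[s] = (1 - eta) * V[s] + eta * (r + gamma * V[s_next])
    return V

# toy 2-state Markov reward process
rng = np.random.default_rng(2)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
r = np.array([0.0, 1.0])
gamma = 0.8

# simulate a single Markovian trajectory
s, transitions = 0, []
for _ in range(150_000):
    s_next = int(rng.choice(2, p=P[s]))
    transitions.append((s, r[s], s_next))
    s = s_next

V = td0(transitions, S=2, eta=0.005, gamma=gamma)
V_exact = np.linalg.solve(np.eye(2) - gamma * P, r)   # V = (I - gamma P)^{-1} r
```

With a constant stepsize the estimate again converges linearly down to an η-dependent error floor, mirroring the Q-learning behavior described above.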
Corollary 1 (Asynchronous TD learning). Consider the TD learning algorithm (19). There exist some universal constants $c_0, c_1 > 0$ such that for any $0 < \delta < 1$ and $0 < \varepsilon \leq \frac{1}{1-\gamma}$, one has
$$\forall s \in S : \quad |V_T(s) - V^\star(s)| \leq \varepsilon$$
with probability at least 1 − δ, provided that the iteration number T and the learning rates $\eta_t \equiv \eta$ obey conditions analogous to (13a) and (13b). The above result reveals that the $\ell_\infty$-based sample complexity for TD learning is at most
$$\widetilde{O}\left(\frac{1}{\mu_{\min}(1-\gamma)^5 \varepsilon^2} + \frac{t_{mix}}{\mu_{\min}(1-\gamma)}\right),$$
provided that an appropriate constant learning rate is adopted. We note that prior finite-sample analysis of asynchronous TD learning typically focused on (weighted) $\ell_2$ estimation errors with linear function approximation (Bhandari et al., 2018; Srikant and Ying, 2019), and it is hence difficult to make fair comparisons. The recent paper Khamaru et al. (2020) developed $\ell_\infty$ guarantees for TD learning, focusing on the synchronous settings with i.i.d. samples rather than Markovian samples.

Adaptive and implementable learning rates
As alluded to previously, the learning rates recommended in (13b) depend on the mixing time t mix , a parameter that might be either a priori unknown or difficult to estimate. Fortunately, it is feasible to adopt a more adaptive learning rate schedule, which does not rely on prior knowledge of t mix while still being capable of achieving the performance advertised in Theorem 1.
Learning rates. In order to describe our new learning rate schedule, we need to keep track of the following quantity for all (s, a) ∈ S × A:

• $K_t(s, a)$: the number of times that the sample trajectory visits (s, a) during the first t iterations.
2 When A = {a} is a singleton, the Q-learning update rule (10) reduces to the TD update rule (19) by relating Q(s, a) = V (s).
In addition, we maintain an estimate $\mu_{\min,t}$ of $\mu_{\min}$, computed recursively from the visit counts $\{K_t(s, a)\}$. With the above quantities in place, we propose the learning rate schedule (23), where $c_\eta > 0$ is some universal constant independent of any MDP parameter³ and ⌊x⌋ denotes the nearest integer less than or equal to x. If $\mu_{\min,t}$ forms a reliable estimate of $\mu_{\min}$, then one can view (23) as a sort of "piecewise constant approximation" of the rescaled linear stepsizes $\frac{c_\eta \log t}{\mu_{\min}(1-\gamma)\gamma^2 t}$; in fact, this can be viewed as a sort of "doubling trick", reducing the learning rate by a constant factor every so often, so as to approximate rescaled linear learning rates. Theorem 1 can then be readily applied to analyze the performance for each constant segment of the learning rate schedule (23). Noteworthily, such learning rates are fully data-driven and do not rely on any prior knowledge about the Markov chain (such as $t_{mix}$ and $\mu_{\min}$) or the target accuracy level ε.
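A hypothetical sketch in the spirit of this construction is given below (assuming NumPy; the specific estimator $\mu_{\min,t} = \min_{s,a} K_t(s,a)/t$, the halving threshold, and the constant $c_\eta$ are illustrative choices of ours, not the paper's exact schedule (23)): track visit counts, estimate $\mu_{\min}$ on the fly, and emit a piecewise-constant ("doubling trick") approximation of a rescaled linear stepsize.

```python
import numpy as np

def adaptive_stepsizes(visit_stream, S, A, c_eta=1.0):
    """Illustrative piecewise-constant stepsize schedule: track the visit
    counts K_t(s,a), estimate mu_min by min_{s,a} K_t(s,a) / t, and halve
    eta whenever the rescaled linear target has dropped far enough below
    the current constant value."""
    K = np.zeros((S, A))
    eta, t = 1.0, 0
    for s, a in visit_stream:
        t += 1
        K[s, a] += 1
        if K.min() > 0:                        # mu_min estimate is well defined
            mu_min_t = K.min() / t
            target = c_eta * np.log(t + 1) / (mu_min_t * t)
            while eta > 2.0 * min(target, 1.0):
                eta /= 2.0                     # piecewise-constant decay
        yield eta

# deterministic round-robin visits over a 2 x 2 state-action space
visits = [(s, a) for _ in range(250) for s in range(2) for a in range(2)]
etas = list(adaptive_stepsizes(iter(visits), S=2, A=2))
```

The emitted sequence is non-increasing and stays within a constant factor of the rescaled linear target, which is what lets Theorem 1 be applied segment by segment.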
Performance guarantees. Encouragingly, our theoretical framework extends without difficulty to accommodate this adaptive learning rate choice, applied to the Q-function estimates (24), where $Q_t$ is provided by the Q-learning iterations (cf. (10)). We can then establish the following theoretical guarantees, whose proof is deferred to Section 8.
Theorem 3. Consider asynchronous Q-learning with learning rates (23) and the output (24). There exists some universal constant C > 0 such that for any $0 < \delta < 1$ and $0 < \varepsilon \leq \frac{1}{1-\gamma}$, one has, for the output (24),
$$\forall (s, a) \in S \times A : \quad |Q(s, a) - Q^\star(s, a)| \leq \varepsilon$$
with probability at least 1 − δ, provided that the iteration number T exceeds the sample size characterized by (14) up to some logarithmic factor.

Remark 4. The interested reader might wonder whether our sample complexity guarantees continue to hold under the linear learning rate $\eta_t = \frac{1}{K_t(s_t, a_t)}$, a learning rate schedule that has been previously studied in Even-Dar and Mansour (2003); Tsitsiklis (1994). Nevertheless, as discussed in Wainwright (2019a, Section 3.3.1), this linear learning rate can lead to a sample complexity that scales exponentially in the effective horizon $\frac{1}{1-\gamma}$, which is clearly outperformed by a properly rescaled linear learning rate.

4 Asynchronous variance-reduced Q-learning

Algorithm
In order to accelerate the convergence, it is instrumental to reduce the variability of the empirical Bellman operator $\mathcal{T}_t$ employed in the update rule (10) of classical Q-learning. This can be achieved via the following means. Simply put, assuming we have access to (i) a reference Q-function estimate, denoted by $\overline{Q}$, and (ii) an estimate of $\mathcal{T}(\overline{Q})$, denoted by $\widetilde{\mathcal{T}}(\overline{Q})$, the variance-reduced Q-learning update rule is given by
$$Q_t(s_{t-1}, a_{t-1}) = (1 - \eta) Q_{t-1}(s_{t-1}, a_{t-1}) + \eta \Big( \mathcal{T}_t(Q_{t-1}) - \mathcal{T}_t(\overline{Q}) + \widetilde{\mathcal{T}}(\overline{Q}) \Big)(s_{t-1}, a_{t-1}), \qquad (27)$$
where $\mathcal{T}_t$ denotes the empirical Bellman operator at time t (cf. (11)). The empirical estimate $\widetilde{\mathcal{T}}(\overline{Q})$ can be computed using a set of samples; more specifically, by drawing N consecutive sample transitions $\{(s_i, a_i, s_{i+1})\}_{0 \leq i < N}$ from the observed trajectory, we compute
$$\widetilde{\mathcal{T}}(\overline{Q})(s, a) := r(s, a) + \gamma \, \frac{\sum_{i=0}^{N-1} \mathbb{1}\{(s_i, a_i) = (s, a)\} \max_{a' \in A} \overline{Q}(s_{i+1}, a')}{\max\big\{ \sum_{i=0}^{N-1} \mathbb{1}\{(s_i, a_i) = (s, a)\}, \, 1 \big\}}. \qquad (28)$$
Compared with the classical form (10), the original update term $\mathcal{T}_t(Q_{t-1})$ has been replaced by $\mathcal{T}_t(Q_{t-1}) - \mathcal{T}_t(\overline{Q}) + \widetilde{\mathcal{T}}(\overline{Q})$, in the hope of achieving reduced variance as long as $\overline{Q}$ (which serves as a proxy for $Q^\star$) is chosen properly.

We now take a moment to elucidate the rationale behind the variance-reduced update rule (27). In the vanilla Q-learning update rule (10), the variability in each iteration (conditional on the past) comes primarily from the stochastic term $\mathcal{T}_t(Q_{t-1})$. In order to accelerate convergence, it is advisable to reduce the variability of this term. Suppose now that we have access to a reference point $\overline{Q}$ that is close to $Q_{t-1}$. By replacing $\mathcal{T}_t(Q_{t-1})$ with $\big(\mathcal{T}_t(Q_{t-1}) - \mathcal{T}_t(\overline{Q})\big) + \widetilde{\mathcal{T}}(\overline{Q})$, we see that the variability of the first term $\mathcal{T}_t(Q_{t-1}) - \mathcal{T}_t(\overline{Q})$ can be small if $Q_{t-1} \approx \overline{Q}$, while the uncertainty of the second term $\widetilde{\mathcal{T}}(\overline{Q})$ can be well controlled via the use of batch data. Motivated by this simple idea, the variance-reduced Q-learning rule attempts to operate in an epoch-based manner, computing $\widetilde{\mathcal{T}}(\overline{Q})$ once every epoch (so as not to increase the overall sampling burden) and leveraging it to help reduce variability.
For convenience of presentation, we introduce the following notation to represent the above-mentioned update rule, which starts with a reference point $\overline{Q}$ and operates upon a total of $N + t_{epoch}$ consecutive sample transitions. The first $N$ samples are employed to construct $\widetilde{\mathcal{T}}(\overline{Q})$ via (28), with the remaining samples employed in $t_{epoch}$ iterative updates (27); see Algorithm 3. To achieve the desired acceleration, the proxy $\overline{Q}$ needs to be updated periodically so as to better approximate the truth $Q^\star$ and hence reduce the bias. It is thus natural to run the algorithm in a multi-epoch manner.
Specifically, we divide the samples into contiguous subsets called epochs, each containing $t_{epoch}$ iterations and using $N + t_{epoch}$ samples. We then proceed as follows, where $M$ is the total number of epochs and $Q^{epoch}_m$ denotes the output of the $m$-th epoch. The whole procedure is summarized in Algorithm 2. Clearly, the total number of samples used in this algorithm is $M(N + t_{epoch})$. We remark that the idea of performing variance reduction in RL is certainly not new, and has been explored in a number of recent works (Du et al., 2017; Khamaru et al., 2020; Sidford et al., 2018a,b; Wainwright, 2019b).

Theoretical guarantees for variance-reduced Q-learning
This subsection develops a non-asymptotic sample complexity bound for asynchronous variance-reduced Q-learning on a single trajectory. Before presenting our theoretical guarantees, there are several algorithmic parameters that we shall specify; for given target levels $(\varepsilon, \delta)$, choose the parameters as in (31), where $c_0 > 0$ is some sufficiently small constant, $c_1, c_2 > 0$ are some sufficiently large constants, and we recall the definitions of $\mu_{\min}$ and $t_{mix}$ in (7) and (8), respectively. Note that the learning rate (31a) chosen here could be larger than the choice (13b) for the classical form by a factor of $O\big(\frac{1}{(1-\gamma)^2}\big)$ (which happens if $t_{mix}$ is not too large), allowing the algorithm to progress more aggressively.
Theorem 4 (Asynchronous variance-reduced Q-learning). Let $Q^{epoch}_M$ be the output of Algorithm 2 with parameters chosen according to (31). There exists some constant $c_3 > 0$ such that for any $0 < \delta < 1$, with probability at least $1 - \delta$, provided that the total number of epochs exceeds

The proof of this result is postponed to Section 9. In view of Theorem 4, the $\ell_\infty$-based sample complexity for variance-reduced Q-learning to yield $\varepsilon$ accuracy (which is characterized by $M(N + t_{epoch})$) can be as low as

Except for the second term, which depends on the mixing time, the first term matches the result of Wainwright (2019b) derived for the synchronous setting with independent samples; moreover, for any sufficiently small accuracy level $\varepsilon$, it matches the minimax lower bound derived in Azar et al. (2013) for the synchronous setting.
Once again, we can immediately deduce guarantees for asynchronous variance-reduced TD learning by reducing the action space to a singleton set (akin to Section 3.3), which extends the analysis of Khamaru et al. (2020) to Markovian noise. In addition, similar to Section 3.4, we can also employ adaptive learning rates in variance-reduced Q-learning, which do not require prior knowledge of $t_{mix}$ and $\mu_{\min}$, without compromising the sample complexity. For the sake of brevity, we omit these extensions in the current paper.

Related works
In this section, we review several recent lines of works and compare our results with them.
Algorithm 2: Asynchronous variance-reduced Q-learning. Input parameters: number of epochs $M$, epoch length $t_{epoch}$, recentering length $N$, learning rate $\eta$.
Update Q t according to (27).
The Q-learning algorithm and its variants. The Q-learning algorithm, originally proposed in Watkins (1989), has been analyzed in the asymptotic regime by Borkar and Meyn (2000), Jaakkola et al. (1994), Szepesvári (1998), and Tsitsiklis (1994) since more than two decades ago. Additionally, the finite-time performance of Q-learning and its variants has been analyzed by Beck and Srikant (2012), among others.

Finite-sample $\ell_\infty$ guarantees for Q-learning. We now expand on the non-asymptotic $\ell_\infty$ guarantees available in the prior literature, which are the most relevant to the current work. An interesting aspect that we shall highlight is the importance of learning rates. For instance, when a linear learning rate (i.e., $\eta_t = 1/t$) is adopted, the sample complexity results derived in past works (Even-Dar and Mansour, 2003; Szepesvári, 1998) exhibit an exponential blow-up in $\frac{1}{1-\gamma}$, which is clearly undesirable. In the synchronous setting, the sharpest known sample complexity scales as $\frac{1}{(1-\gamma)^5 \varepsilon^2}$ (modulo $|\mathcal{S}||\mathcal{A}|$ and logarithmic factors), achieved via either a rescaled linear learning rate (Wainwright, 2019a) or a constant learning rate. When it comes to asynchronous Q-learning (in its classical form), our work provides the first analysis that achieves linear scaling with $1/\mu_{\min}$ or $t_{cover}$; see Table 1 for detailed comparisons. Going beyond classical Q-learning, the speedy Q-learning algorithm, which adds a momentum term to the update by using previous Q-function estimates, provably achieves a sample complexity of $\widetilde{O}\big(\frac{t_{cover}}{(1-\gamma)^4 \varepsilon^2}\big)$ in the asynchronous setting, although its update rule takes twice the storage of classical Q-learning. However, the proof adopted in the speedy Q-learning paper relies heavily on its specific update rule, and cannot be readily used here to improve the sample complexity of asynchronous Q-learning in terms of its dependency on $\frac{1}{1-\gamma}$. In comparison, our analysis of the variance-reduced Q-learning algorithm achieves a sample complexity of $\widetilde{O}\big(\frac{1}{\mu_{\min}(1-\gamma)^3 \varepsilon^2} + \frac{t_{mix}}{\mu_{\min}(1-\gamma)}\big)$ when $\varepsilon < 1$.
Finite-sample guarantees for model-free algorithms. Convergence properties of several model-free RL algorithms have been studied recently in the presence of Markovian data, including but not limited to TD learning and its variants (Bhandari et al., 2018; Dalal et al., 2018a,b; Doan et al., 2019; Gupta et al., 2019; Kaledin et al., 2020; Lee and He, 2019; Lin et al., 2020; Mou et al., 2020; Srikant and Ying, 2019; Xu et al., 2019), Q-learning (Chen et al., 2019; Xu and Gu, 2020), and SARSA. However, these recent papers typically focused on the (weighted) $\ell_2$ error rather than the $\ell_\infty$ risk, where the latter is often more relevant in the context of RL. In addition, Khamaru et al. (2020) investigated $\ell_\infty$ bounds for (variance-reduced) TD learning, although they did not account for Markovian noise.
Finite-sample guarantees for model-based algorithms. Another contrasting approach for learning the optimal Q-function is the class of model-based algorithms, which has been shown to enjoy minimax-optimal sample complexity in the synchronous setting. More precisely, it is known that by planning over an empirical MDP constructed from $\widetilde{O}\big(\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3 \varepsilon^2}\big)$ samples, we are guaranteed to find not only an $\varepsilon$-optimal Q-function but also an $\varepsilon$-optimal policy (Agarwal et al., 2019; Azar et al., 2013; Li et al., 2020a). It is worth emphasizing that the minimax optimality of the model-based approach has been shown to hold for the entire $\varepsilon$-range; in comparison, the sample optimality of the model-free approach has only been established for a smaller range of accuracy levels $\varepsilon$ in the synchronous setting. We also remark that existing sample complexity analyses for model-based approaches might be generalizable to Markovian data.

Analysis of asynchronous Q-learning
This section is devoted to establishing Theorem 1. Before proceeding, we find it convenient to introduce some matrix notation. Let $\Lambda_t \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}| \times |\mathcal{S}||\mathcal{A}|}$ be a diagonal matrix obeying (34), where $\eta > 0$ is the learning rate. In addition, we use the vector $Q_t \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ (resp. $V_t \in \mathbb{R}^{|\mathcal{S}|}$) to represent our estimate $Q_t$ (resp. $V_t$) in the $t$-th iteration, so that the $(s,a)$-th (resp. $s$-th) entry of $Q_t$ (resp. $V_t$) is given by $Q_t(s,a)$ (resp. $V_t(s)$). Similarly, let the vectors $Q^\star \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ and $V^\star \in \mathbb{R}^{|\mathcal{S}|}$ represent the optimal Q-function $Q^\star$ and the optimal value function $V^\star$, respectively. We also let the vector $r \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ stand for the reward function $r$, so that the $(s,a)$-th entry of $r$ is given by $r(s,a)$. In addition, we define the matrix $P_t \in \{0,1\}^{|\mathcal{S}||\mathcal{A}| \times |\mathcal{S}|}$ such that
$$P_t\big((s,a), s'\big) := \begin{cases} 1, & \text{if } (s,a,s') = (s_{t-1}, a_{t-1}, s_t), \\ 0, & \text{otherwise}. \end{cases}$$
Clearly, this set of notation allows us to express the Q-learning update rule (10) in the matrix form (36).
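To make the matrix notation concrete, the following sketch (all names hypothetical) checks numerically that a matrix-form update of the shape $Q_t = (I - \Lambda_t)Q_{t-1} + \Lambda_t(r + \gamma P_t V_{t-1})$, assembled from the quantities defined above, agrees with the entrywise Q-learning rule that updates only the visited state-action pair.

```python
import numpy as np

S, A, gamma, eta = 3, 2, 0.9, 0.1
rng = np.random.default_rng(1)
r = rng.random(S * A)                 # reward vector, one entry per (s, a)
Q_prev = rng.random(S * A)
V_prev = Q_prev.reshape(S, A).max(axis=1)

# Observed transition (s_{t-1}, a_{t-1}, s_t); indices chosen arbitrarily.
s, a, s_next = 1, 0, 2
idx = s * A + a                       # row index of the pair (s, a)

# Lambda_t: diagonal, eta at the visited pair, zero elsewhere.
Lam = np.zeros((S * A, S * A)); Lam[idx, idx] = eta
# P_t: binary matrix with a single 1 at ((s, a), s_next).
P_t = np.zeros((S * A, S)); P_t[idx, s_next] = 1.0

Q_matrix = (np.eye(S * A) - Lam) @ Q_prev + Lam @ (r + gamma * P_t @ V_prev)

# Entrywise rule (10): only the visited pair changes.
Q_entry = Q_prev.copy()
Q_entry[idx] = (1 - eta) * Q_prev[idx] + eta * (r[idx] + gamma * V_prev[s_next])
```

Since $\Lambda_t$ has a single nonzero diagonal entry, the matrix form leaves every unvisited entry untouched, exactly as in the asynchronous update.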

Error decay in the presence of constant learning rates
The main step of the analysis is to establish the following result concerning the dynamics of asynchronous Q-learning. In order to state it formally, we find it convenient to introduce several auxiliary quantities With these quantities in mind, we have the following result.
In words, Theorem 5 asserts that the $\ell_\infty$ estimation error decays linearly (in a blockwise manner) down to some error floor that scales with $\sqrt{\eta}$. This result suggests how to set the learning rate based on the target accuracy level, which in turn allows us to pin down the sample complexity under consideration. In what follows, we shall first establish Theorem 5, and then return to prove Theorem 1 using this result. Before embarking on the proof of Theorem 5, we would like to point out a few key technical ingredients: (i) an epoch-based analysis that focuses on macroscopic dynamics as opposed to per-iteration dynamics, (ii) measure concentration of Markov chains (see Section A.1), which helps reveal the similarity between the epoch-based dynamics and their synchronous counterpart, and (iii) careful analysis of recursive relations. Taken collectively, these ingredients lead to a sample complexity bound that improves upon the prior analysis of Qu and Wierman (2020).

Proof of Theorem 5
We are now positioned to outline the proof of Theorem 5. We remind the reader that for any two vectors z = [z i ] and w = [w i ], the notation z ≤ w (resp. z ≥ w) denotes entrywise comparison (cf. Section 1), meaning that z i ≤ w i (resp. z i ≥ w i ) holds for all i. As a result, for any non-negative matrix A, one has Az ≤ Aw as long as z ≤ w.

Key decomposition and a recursive formula
The starting point of our proof is the elementary decomposition (39) for any $t > 0$, where the first line results from the update rule (36), and the penultimate line follows from the Bellman equation $Q^\star = r + \gamma P V^\star$ (see Bertsekas (2017)). Applying this relation recursively gives (40). Applying the triangle inequality, we obtain (41), where we recall the notation $|z| := [|z_i|]_{1 \le i \le n}$ for any vector $z = [z_i]_{1 \le i \le n}$. In what follows, we shall look at these terms separately.
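For the reader's convenience, the recursion and the resulting triangle-inequality bound can be written out under the matrix notation above; this is an editorial reconstruction (with $\Delta_t := Q_t - Q^\star$) chosen to be consistent with the terms $\beta_{1,t}$ (the $(P_i - P)V^\star$ term), $\beta_{2,t}$ (the $P_i(V_{i-1} - V^\star)$ term), and $\beta_{3,t}$ (the $\Delta_0$ term) analyzed below:

```latex
\Delta_t
= \prod_{j=1}^{t}\bigl(I-\Lambda_j\bigr)\Delta_0
+ \gamma \sum_{i=1}^{t} \prod_{j=i+1}^{t}\bigl(I-\Lambda_j\bigr)\Lambda_i \bigl(P_i - P\bigr)V^{\star}
+ \gamma \sum_{i=1}^{t} \prod_{j=i+1}^{t}\bigl(I-\Lambda_j\bigr)\Lambda_i\, P_i \bigl(V_{i-1} - V^{\star}\bigr),
```

so that, entrywise,

```latex
|\Delta_t|
\;\le\;
\underbrace{\prod_{j=1}^{t}(I-\Lambda_j)\,|\Delta_0|}_{\beta_{3,t}}
\;+\;
\underbrace{\gamma \Bigl|\sum_{i=1}^{t}\prod_{j=i+1}^{t}(I-\Lambda_j)\Lambda_i (P_i - P)V^{\star}\Bigr|}_{\beta_{1,t}}
\;+\;
\underbrace{\gamma \sum_{i=1}^{t}\prod_{j=i+1}^{t}(I-\Lambda_j)\Lambda_i\, P_i\, |V_{i-1} - V^{\star}|}_{\beta_{2,t}}.
```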
• First of all, given that $I - \Lambda_j$ and $\Lambda_j$ are both non-negative diagonal matrices and that
• Next, the term $\beta_{1,t}$ can be controlled by exploiting a certain statistical independence across different transitions and applying the Bernstein inequality. This is summarized in the following lemma, whose proof is deferred to Section B.1.
Lemma 1. Consider any fixed vector $V \in \mathbb{R}^{|\mathcal{S}|}$. There exists some universal constant $c > 0$ such that for any $0 < \delta < 1$, one has with probability at least $1 - \delta$, provided that $0 < \eta \log\frac{|\mathcal{S}||\mathcal{A}|T}{\delta} < 1$. Here, we define
• Additionally, we develop an upper bound on the term $\beta_{3,t}$, which follows directly from the concentration of the empirical distribution of the Markov chain (see Lemma 8). The proof is deferred to Section B.2.
Lemma 2. For any $\delta > 0$, recall the definition of $t_{frame}$ in (37a). Suppose that $T > t_{frame}$ and $0 < \eta < 1$. Then with probability exceeding $1 - \delta$, one has uniformly over all $t$ obeying $t_{frame} \le t \le T$ and all vectors $\Delta_0 \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$.
Moreover, in the case where $t < t_{frame}$, we make note of the straightforward bound, given that $I - \Lambda_j$ is a diagonal non-negative matrix whose entries are bounded by $1 - \eta < 1$.
Substituting the preceding bounds into (41), we arrive at (47) with probability at least $1 - 2\delta$, where $t_{frame}$ is defined in (37a). The rest of the proof is thus dedicated to bounding $|\Delta_t|$ on the basis of the recursive formula (47).

Recursive analysis
A crude bound. We start by observing the recursive relation (48), which is a direct consequence of (47). In the sequel, we invoke mathematical induction to establish, for all $1 \le t \le T$, the crude upper bound (49), which implies the stability of the asynchronous Q-learning updates.
Towards this end, we first observe that (49) holds trivially in the base case ($t = 0$). Now suppose that the inequality (49) holds for all iterations up to $t - 1$. In view of (48) and the induction hypotheses, we obtain (50), where we invoke the fact that the vector $\prod_{j=i+1}^{t}(I - \Lambda_j)\Lambda_i \mathbf{1}$ is non-negative. Next, define the diagonal matrix $M_i := \prod_{j=i+1}^{t}(I - \Lambda_j)\Lambda_i$, and denote by $N_i^j(s,a)$ the number of visits to the state-action pair $(s,a)$ between the $i$-th and the $j$-th iterations (including $i$ and $j$). The diagonal entries of $M_i$ can then be bounded accordingly. Letting $e_{(s,a)} \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ be the standard basis vector whose only nonzero entry is the $(s,a)$-th one, we can easily verify the requisite bound. Combining the above relations with the inequality (50), one deduces that (49) holds for the $t$-th iteration as well. This induction analysis thus validates (49) for all $1 \le t \le T$.
Refined analysis. Now, we strengthen the bound (49) by means of a recursive argument. To begin with, it is easily seen that the term $(1-\eta)^{\frac{1}{2} t \mu_{\min}} \|\Delta_0\|_\infty$ is bounded above by $(1-\gamma)\varepsilon$ for any $t > t_{th}$, where we remind the reader of the definition of $t_{th}$ in (37b) and the fact that $\|\Delta_0\|_\infty = \|Q^\star\|_\infty \le \frac{1}{1-\gamma}$. It is assumed that $T > t_{th}$. To facilitate our argument, we introduce a collection of auxiliary quantities $\{u_t\}$ as defined in (52). These auxiliary quantities are useful as they provide upper bounds on $\|\Delta_t\|_\infty$, as asserted by the following lemma, whose proof is deferred to Section B.3.
Lemma 3. Recall the definition (44) of $\tau_1$ in Lemma 1. With probability at least $1 - 2\delta$, the quantities $\{u_t\}$ defined in (52) satisfy
The preceding result motivates us to turn attention to bounding the quantities $\{u_t\}$. Towards this end, we resort to a frame-based analysis by dividing the iterations $[1, t]$ into contiguous frames, each comprising $t_{frame}$ (cf. (37a)) iterations. Further, we define another auxiliary sequence $\{w_k\}$, where we remind the reader of the definition of $\rho$ in (37d). The connection between $\{w_k\}$ and $\{u_t\}$ is made precise in the following lemma, whose proof is postponed to Section B.4.

Proof of Theorem 1
Now we return to complete the proof of Theorem 1. To control $\|\Delta_t\|_\infty$ to the desired level, we first claim that the first term of (38) obeys the bound (56), provided that $\eta < 1/\mu_{frame}$. Furthermore, with the learning rate taken as specified, one can easily verify that the second term of (38) is also bounded as desired, where the last step follows since $\|V^\star\|_\infty \le \frac{1}{1-\gamma}$. Putting the above bounds together ensures $\|\Delta_t\|_\infty \le 3\varepsilon$. Replacing $\varepsilon$ with $\varepsilon/3$, we can readily conclude the proof, as long as the claim (56) can be justified.

Cover-time-based analysis of asynchronous Q-learning
In this section, we sketch the proof of Theorem 2. Before continuing, we recall the definition of $t_{cover}$ in (9), and further introduce the quantity $t_{cover,all} := t_{cover} \log\frac{T}{\delta}$.
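As a rough numerical companion to the notion of cover time, the sketch below (a hypothetical three-state ergodic chain, not one analyzed in this paper) estimates the expected time for a trajectory to visit every state at least once.

```python
import numpy as np

def cover_time(P, n_states, rng, max_steps=100000, start=0):
    """First time index at which every state of the chain has been visited,
    along a single simulated trajectory started from `start`."""
    visited = np.zeros(n_states, dtype=bool)
    x = start
    visited[x] = True
    for t in range(1, max_steps):
        x = rng.choice(n_states, p=P[x])
        visited[x] = True
        if visited.all():
            return t
    return max_steps

# A small ergodic chain used purely for illustration.
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
rng = np.random.default_rng(0)
samples = [cover_time(P, 3, rng) for _ in range(200)]
avg_cover = float(np.mean(samples))
```

The definition (9) in the paper concerns state-action pairs along the behavior-policy trajectory; the simulation above folds this into a plain Markov chain over states for simplicity.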
There are two useful facts regarding t cover,all that play an important role in the analysis. Proof. See Section B.6.
In other words, Lemma 5 tells us that with high probability, all state-action pairs are visited at least once in every time frame $(l\, t_{cover,all}, (l+1)\, t_{cover,all}]$ with $0 \le l \le T/t_{cover,all}$. The next result is a consequence of Lemma 5 as well as the analysis of Lemma 2; the proof can be found in Section B.2.
Lemma 6. For any $\delta > 0$, recall the definition of $t_{cover,all}$ in (62). Suppose that $T > t_{cover,all}$ and $0 < \eta < 1$. Then with probability exceeding $1 - \delta$, one has uniformly over all $t$ obeying $t_{cover,all} \le t \le T$ and all vectors $\Delta_0 \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$.
With the above two lemmas in mind, we are now positioned to prove Theorem 2. Repeating the analysis of (47) (except that Lemma 2 is replaced by Lemma 6) yields, with probability at least $1 - 2\delta$, a bound that resembles (47), except that $t_{frame}$ (resp. $\mu_{\min}$) is replaced by $t_{cover,all}$ (resp. $\frac{1}{t_{cover,all}}$). As a consequence, we can immediately reuse the recursive analysis carried out in Section 6.2.2 to establish a convergence guarantee based on the cover time. More specifically, replacing $\rho$ in Theorem 5 with its cover-time-based counterpart reveals that, with probability at least $1 - 6\delta$, the claimed bound holds for all $t \le T$, where $k := \max\big\{0, \big\lfloor \frac{t - t_{th,cover}}{t_{cover,all}} \big\rfloor\big\}$ and we abuse notation to define $t_{th,cover} := 2\, t_{cover,all} \log\frac{1}{(1-\gamma)^2 \varepsilon}$.

Analysis under adaptive learning rates (proof of Theorem 3)
Useful preliminary facts about $\eta_t$. To begin with, we make note of several useful properties of $\eta_t$.
• Invoking the concentration result in Lemma 8, one can easily show that with probability at least $1 - \delta$, the bound holds simultaneously for all $t$ obeying $\frac{443\, t_{mix} \log\frac{4|\mathcal{S}||\mathcal{A}|t}{\delta}}{\mu_{\min}} \le t \le T$. In addition, this concentration result taken collectively with the update rule (22) of $\mu_{\min,t}$ (in particular, the second case of (22)) implies that $\mu_{\min,t}$ "stabilizes" as $t$ grows; to be precise, there exists some quantity $c \in [1/6, 9/2]$ such that $\mu_{\min,t} \equiv c\, \mu_{\min}$ (67) holds simultaneously for all $t$ obeying $T \ge t \ge$
In this regime (for $c_\eta \ge 11$), the learning rate (23) simplifies to
Clearly, there exists a sequence of endpoints $t_1 < t_2 < t_3 < \cdots$
for some positive constants $\alpha_k \in \big[\frac{2 c_\eta}{9e}, 6 c_\eta\big]$; in words, (70) provides a concrete expression/bound for the piecewise-constant learning rate, where the $t_k$'s form the change points.
Combining (70) with the definition of $Q_t$ (cf. (22)), one can easily check that, for $t > t_1$, $Q_t$ remains fixed within each time segment $(t_k, t_{k+1}]$. With this property in mind, we only need to analyze $Q_{t_k}$ in the sequel, which can be easily accomplished by invoking Theorem 1.

A crude bound. Given that $0 < \eta_t \le 1$ and $0 \le r(s,a) \le 1$, the update rule (10) of $Q_t$ implies the crude bound (72).

Remark 5. As we shall see momentarily, this crude bound allows one to control, in a coarse manner, the error at the beginning of each time interval $[t_{k-1}, t_k]$, which is needed when invoking Theorem 1.

Refined analysis. Let us define
where the constant $c_{k,0}$ is chosen as $c_{k,0} = \alpha_{k-1}/c_1 > 0$, with $c_1 > 0$ the universal constant stated in Theorem 1. The property (70) of $\eta_t$ together with the definition (73) implies the claimed relation as long as $(1-\gamma)^4 \varepsilon_k^2 \le 1/t_{mix}$, or more explicitly, when
In addition, the condition (69) and the definition (73) further tell us that
Invoking Theorem 1 with the initialization $Q_{t_{k-1}}$ (which clearly satisfies the crude bound (72)) ensures that, with probability at least $1 - \delta$, the desired guarantee holds, with the proviso that (76) holds, where $c_0 > 0$ is the universal constant stated in Theorem 1. Under the sample size condition (74), this requirement (76) can be guaranteed by adjusting the constant $c_\eta$ in (23) to satisfy the following inequality:
Finally, taking $t_{k_{\max}}$ to be the largest change point that does not exceed $T$, we see from (69) that
These bounds immediately conclude the proof of the theorem under the sample size condition (26), provided that

Analysis of asynchronous variance-reduced Q-learning
This section aims to establish Theorem 4. We carry out an epoch-based analysis, that is, we first quantify the progress made over each epoch, and then demonstrate how many epochs are sufficient to attain the desired accuracy. In what follows, we shall overload the notation by defining

Per-epoch analysis
We start by analyzing the progress made over each epoch. Before proceeding, we denote by $\widehat{P} \in [0,1]^{|\mathcal{S}||\mathcal{A}| \times |\mathcal{S}|}$ the matrix corresponding to the empirical probability transition kernel constructed in (28) from the $N$ new sample transitions. Further, we use the vector $\overline{Q} \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ to represent the reference Q-function, and introduce the vector $\overline{V} \in \mathbb{R}^{|\mathcal{S}|}$ to represent the corresponding value function, so that $\overline{V}(s) := \max_a \overline{Q}(s,a)$ for all $s \in \mathcal{S}$. For convenience, this subsection abuses notation to assume that an epoch starts with the estimate $Q_0 = \overline{Q}$ and consists of the subsequent iterations of variance-reduced Q-learning updates, where $t_{frame}$ and $t_{th}$ are defined in (78a) and (78b), respectively. In the sequel, we divide all epochs into two phases, depending on the quality of the initial estimate $\overline{Q}$ in each epoch.

Phase 1: when
Recalling the matrix notation of $\Lambda_t$ and $P_t$ in (34) and (35), respectively, we can rewrite (27) as (80). Following similar steps as in the expression (39), we arrive at an error decomposition which once again leads to a recursive relation. This identity takes a very similar form to (40) except for the additional term $h_{0,t}$. Let us begin by controlling this first term, towards which we have the following lemma, whose proof is postponed to Section B.5.
Lemma 7. Suppose that $\widehat{P}$ is constructed using $N$ consecutive sample transitions. If $N > t_{frame}$, then with probability greater than $1 - \delta$, one has
If $t < t_{frame}$, then it is straightforward to see that
Taking this together with the results of Lemma 1 and Lemma 2, we are guaranteed that, with probability at least $1 - 2\delta$, the desired bound holds, where $\tau_2 := c\,\gamma \sqrt{\eta \log\frac{|\mathcal{S}||\mathcal{A}|\, t_{epoch}}{\delta}}$ for some constant $c > 0$ (similar to (44)). In addition, the term $h_{2,t}$ can be bounded in the same way as $\beta_{2,t}$ in (42). Therefore, repeating the same argument as for Theorem 5 and taking $\xi = \frac{1}{16\sqrt{1-\gamma}}$, we conclude that, with probability at least $1 - \delta$, the bound holds simultaneously for all $0 < t \le t_{epoch}$, where $k = \max\big\{0, \big\lfloor \frac{t - t_{th,\xi}}{t_{frame}} \big\rfloor\big\}$. With the choice of $N$ in (31), we can easily demonstrate that
As a consequence, if $t_{epoch} \ge t_{frame} + t_{th,\xi} + 8 \log 2$
which in turn implies that
where the last step invokes the simple relation $\|\overline{V} - V^\star\|_\infty \le \|\overline{Q} - Q^\star\|_\infty$. Thus, we conclude that

Phase 2: when
The analysis of Phase 2 follows by combining the analysis of Phase 1 with that of the synchronous counterpart in Wainwright (2019b). For the sake of brevity, we only sketch the main steps. Following the proof idea of Wainwright (2019b, Section B.2), we introduce an auxiliary vector $\widehat{Q}$, defined as the unique fixed point of the equation (87), which can be regarded as a population-level Bellman equation with proper reward perturbation. Here, as usual, $\widehat{V} \in \mathbb{R}^{|\mathcal{S}|}$ represents the value function corresponding to $\widehat{Q}$. This can be viewed as a Bellman equation in which the reward vector $r$ is replaced by $r + \gamma(\widehat{P} - P)\overline{V}$. Repeating the arguments in the proof of Wainwright (2019b, Lemma 4) (except that we need to apply the measure concentration of $\widehat{P}$ in the manner performed in the proof of Lemma 7, due to the Markovian data), we reach (88) with probability at least $1 - \delta$ for some constant $c' > 0$, provided that $N \ge (c')^2 \log\frac{N|\mathcal{S}||\mathcal{A}|}{\delta}$. It is worth noting that $\widehat{Q}$ only serves as a helper in the proof and is never explicitly constructed by the algorithm, as we do not have access to the probability transition matrix $P$.
In addition, we claim that (89) holds. Under this claim, the triangle inequality yields (90), where the last inequality follows from (88).
Proof of the inequality (88). Suppose that (91) holds for some constant $c > 0$. By replacing Lemma 5 in the proof of Wainwright (2019b, Lemma 4) with this bound, we can arrive at (88) immediately. In what follows, we demonstrate how to prove the bound (91), which follows a similar argument as the proof of Lemma 7. Let us begin with the triangle inequality (92), which leaves us with two terms to control.
• Similar to (140), by applying the Hoeffding inequality and taking the union bound over all $(s,a) \in \mathcal{S} \times \mathcal{A}$, we can control the first term on the right-hand side of (92) as in (93), which holds with probability at least $1 - \delta$. Here, we have made use of the property of this phase that, together with the fact that $K_N(s,a) \ge N\mu_{\min}/2$ for all $(s,a)$ (see Lemma 8).
• Next, we turn attention to the second term on the right-hand side of (92), for which we resort to the Bernstein inequality. Note that the $(s,a)$-th entry of $(\widehat{P} - P)V^\star$ is given by the stated expression, where $K_N(s,a)$ denotes the total number of visits to $(s,a)$ during the first $N$ time instances (see also (112)). In addition, let $t_i := t_i(s,a)$ denote the time stamp when the trajectory visits $(s,a)$ for the $i$-th time (see also (111)). In view of our derivation for (116), the state transitions happening at times $t_1, t_2, \cdots, t_k$ (which are random) are independent for any given integer $k > 0$. It can be calculated that
Consequently, invoking the Bernstein inequality implies that, with probability at least $1 - \frac{\delta}{|\mathcal{S}||\mathcal{A}|}$, the bound holds simultaneously for all $1 \le k \le N$. Recognizing the bound $\frac{1}{2} N \mu_{\min} \le K_N(s,a) \le N$ and applying the union bound over all $(s,a) \in \mathcal{S} \times \mathcal{A}$ yield (96).
• Finally, combining (93) and (96) immediately establishes the claim (91).
Proof of the inequality (89). Recalling the variance-reduced update rule (80) and using the Bellman-type equation (87), we obtain the stated identity. Adopting the same expansion as before (see (40)), we arrive at a recursive relation. Inheriting the results in Lemma 1 and Lemma 2, we can demonstrate the corresponding bound with probability at least $1 - 2\delta$. Repeating the same argument as for Theorem 5, we reach the conclusion for some constant $c > 0$, where $k = \max\big\{0, \big\lfloor \frac{t - t_{th}}{t_{frame}} \big\rfloor\big\}$ with $t_{th}$ defined in (78b). By taking $\eta = c_5 \min\Big\{ \frac{(1-\gamma)^2}{\gamma^2 \log\frac{|\mathcal{S}||\mathcal{A}|\, t_{epoch}}{\delta}},\ \frac{1}{\mu_{frame}} \Big\}$ for some sufficiently small constant $c_5 > 0$ and ensuring that the stated condition holds for some large constant $c_6 > 0$, we obtain the claimed bound, where the last line follows from the triangle inequality.
9.2 How many epochs are needed?
We are now ready to pin down how many epochs are needed to achieve ε-accuracy.
• In Phase 1, the contraction result (86) indicates that, if the algorithm is initialized with $Q_0 = 0$ at the very beginning, then it takes at most the stated number of epochs to yield $\|\overline{Q} - Q^\star\|_\infty \le \max\big\{\frac{1}{\sqrt{1-\gamma}}, \varepsilon\big\}$ (so as to enter Phase 2). Clearly, if the target accuracy level satisfies $\varepsilon > \frac{1}{\sqrt{1-\gamma}}$, then the algorithm terminates in this phase.
• Suppose now that the target accuracy level satisfies $\varepsilon \le \frac{1}{\sqrt{1-\gamma}}$. Once the algorithm enters Phase 2, the dynamics can be characterized by (90). Given that $\overline{Q}$ is also the last iterate of the preceding epoch, the property (90) provides a recursive relation across epochs. Standard recursive analysis thus reveals that within at most the stated number of epochs (with $c_7 > 0$ some constant), we are guaranteed to attain an $\ell_\infty$ estimation error of at most $3\varepsilon$.
To summarize, a total of $O\big(\log\frac{1}{\varepsilon(1-\gamma)} + \log\frac{1}{1-\gamma}\big)$ epochs suffices for our purpose. This concludes the proof.

Discussion
This work develops a sharper finite-sample analysis of the classical asynchronous Q-learning algorithm, highlighting and refining its dependency on intrinsic features of the Markovian trajectory induced by the behavior policy. Our sample complexity bound strengthens the state-of-the-art result by a factor of at least $|\mathcal{S}||\mathcal{A}|$. A variance-reduced variant of asynchronous Q-learning is also analyzed, exhibiting improved scaling with the effective horizon $\frac{1}{1-\gamma}$. Our findings and the analysis framework developed herein suggest a couple of directions for future investigation. For instance, our improved sample complexity for asynchronous Q-learning depends on the effective horizon as $\frac{1}{(1-\gamma)^5}$, which is inferior to its model-based counterpart. In the synchronous setting, Li et al. (2021a,b) recently demonstrated that Q-learning exhibits a $\frac{1}{(1-\gamma)^4}$ dependence, which is tight up to logarithmic factors. In light of this development, it would be important to determine the exact scaling for the asynchronous setting, which is left as future work. In addition, it would be interesting to see whether the techniques developed herein can be exploited towards understanding model-free algorithms with more sophisticated exploration schemes (Dann and Brunskill, 2015). Finally, asynchronous Q-learning on a single Markovian trajectory is closely related to coordinate descent with coordinates selected according to a Markov chain; one would naturally ask whether our analysis framework can yield improved convergence guarantees for general Markov-chain-based optimization algorithms (Doan et al., 2020; Sun et al., 2020).

A.1 Concentration of empirical distributions of Markov chains
We first record a result concerning the concentration of measure of the empirical distribution of a uniformly ergodic Markov chain, which makes clear the role of the mixing time.
Consequently, for any $t \ge t_{mix}$ and any $\tau \ge 0$, one can continue the bound (100) to obtain the claimed inequality. As a result, by taking $\tau = \frac{10}{21} t \mu(x)$ and applying the union bound, we reach the desired conclusion as long as
$$\frac{10}{21}\, t \mu(x) \;\ge\; \max\Big\{ 10 \sqrt{t \mu(x)\, t_{mix} \log\tfrac{2|\mathcal{X}|}{\delta}},\ \ 80\, t_{mix} \log\tfrac{2|\mathcal{X}|}{\delta} \Big\}$$
for all $x \in \mathcal{X}$, or equivalently, when
Next, we seek to extend the above result to the more general case where $X_1$ takes an arbitrary state $y \in \mathcal{X}$. From the definition of $t_{mix}(\cdot)$ (cf. (98a)), we know that
This, taken together with the definition of $d_{TV}$ (cf. (5)), reveals that for any event $B$ belonging to the $\sigma$-algebra generated by $\{X_\tau\}_{\tau \ge t_{mix}(\delta)}$, one has (103), where we define
Here, the last inequality in (103) follows from the inequality (102) and the definition (5) of the total-variation distance. As a consequence, one obtains (104), with the proviso that $t \ge t_{mix}(\delta) + \frac{441\, t_{mix}}{\mu_{\min}} \log\frac{2|\mathcal{X}|}{\delta}$. To finish up, we recall from Paulin (2015, Section 1.1) that
These taken together lead to $\sup_{y \in \mathcal{X}} \mathbb{P}_{X_1 = y}\big(\exists x \in \mathcal{X} : \cdots \big)$, where the last inequality results from (104). Replacing $\delta$ with $\delta/2$ thus concludes the proof.
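To illustrate the flavor of this concentration phenomenon numerically, the sketch below (a hypothetical three-state ergodic chain, not the one analyzed here) runs a long trajectory and compares the empirical occupancy of each state with its stationary probability; for $t$ much larger than $t_{mix}/\mu_{\min}$, the two should be close.

```python
import numpy as np

# A small ergodic chain (illustrative only; doubly stochastic, so its
# stationary distribution is uniform).
P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

# Stationary distribution: left eigenvector of P with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
mu = np.real(evecs[:, np.argmax(np.real(evals))])
mu = mu / mu.sum()

rng = np.random.default_rng(0)
t, x = 100000, 0
counts = np.zeros(3)
for _ in range(t):
    x = rng.choice(3, p=P[x])
    counts[x] += 1
emp = counts / t   # empirical occupancy of each state
```

The deviation $|{\rm emp}(x) - \mu(x)|$ decays roughly like $\sqrt{\mu(x)\, t_{mix}/t}$, which is the scaling the Bernstein-type bound above captures.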

A.2 Connection between the mixing time and the cover time
Lemma 8 combined with the definition (9) immediately reveals the upper bound (105) on the cover time. In addition, while a general matching converse bound (namely, $t_{mix}/\mu_{\min} = O(t_{cover})$) is not available, we can construct special examples for which the bound (105) is provably tight.
With the lower bound (107) in place, we conclude that the upper bound (105) is, in general, nearly un-improvable (up to some logarithmic factor).
Remark 6. We shall take a moment to briefly discuss the key design rationale behind Example 1. Let us partition the state space into two halves, denoted respectively by $\mathcal{X}_1$ and $\mathcal{X}_2$. From every state $s \in \mathcal{X}$, it is much easier to transition into the first half $\mathcal{X}_1$ than into the second half $\mathcal{X}_2$. This leads to two properties: (i) the stationary probability of any state in $\mathcal{X}_2$ is much lower than that of a state in $\mathcal{X}_1$; (ii) the cover time increases as the stationary measure of $\mathcal{X}_2$ decreases, given that it becomes more difficult to traverse the second half. As a result, we can guarantee that $t_{cover}$ scales inversely with $\mu_{\min}$ through this type of design. On the other hand, the example is also constructed so that all states are "lazy", meaning that they are more inclined to stay put than to move to a different state. The level of laziness clearly controls how fast the Markov chain mixes, as well as how long it takes to cover all states; this in turn allows one to ensure that $t_{cover}$ is proportional to $t_{mix}$. More details can be found in the proof below.
As a result, the minimum state occupancy probability of the stationary distribution is given by
In addition, the reversibility of this chain implies that the matrix $P_d := D^{\frac{1}{2}} P D^{-\frac{1}{2}}$ with $D := \mathrm{diag}[\mu]$ is symmetric and has the same set of eigenvalues as $P$ (Brémaud, 2013). A little algebra allows us to determine the eigenvalues $\{\lambda_i\}_{1 \le i \le |\mathcal{X}|}$ as follows:
We are now ready to establish the lower bound on the cover time. First of all, the well-known connection between the spectral gap and the mixing time (Paulin, 2015, Proposition 3.3) gives
In addition, let $(x_0, x_1, \cdots)$ be the corresponding Markov chain, and assume that $x_0 \sim \mu$, where $\mu$ stands for the stationary distribution. Consider the last state, denoted by $|\mathcal{X}|$, which enjoys the minimum state occupancy probability $\mu_{\min}$. For any integer $t > 0$, one has the chain of inequalities in which (i) follows from the chain rule, (ii) relies on the Markovian property, (iii) results from the construction (106), and (iv) holds as long as $q|\mathcal{X}|t < \frac{1}{2}$. Consequently, if $|\mathcal{X}| \ge 3$ and if $t < \frac{|\mathcal{X}|}{8q}$, then one necessarily has
This, taken collectively with the definition of $t_{cover}$ (cf. (9)), reveals the claimed lower bound, where the last inequality is a direct consequence of (108) and (109).
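Since the construction (106) itself is not reproduced in this excerpt, the following generic sketch (a hypothetical reversible birth-death chain) verifies the symmetrization step numerically: $D^{1/2} P D^{-1/2}$ is symmetric and shares the eigenvalues of $P$.

```python
import numpy as np

# A reversible birth-death chain (illustrative; NOT the construction (106)).
P = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.5, 0.1],
              [0.0, 0.8, 0.2]])

# Stationary distribution via detailed balance, which birth-death
# (tridiagonal) chains always satisfy: mu[i] P[i,i+1] = mu[i+1] P[i+1,i].
mu = np.array([1.0, 0.0, 0.0])
mu[1] = mu[0] * P[0, 1] / P[1, 0]
mu[2] = mu[1] * P[1, 2] / P[2, 1]
mu = mu / mu.sum()

D = np.diag(mu)
P_d = np.sqrt(D) @ P @ np.linalg.inv(np.sqrt(D))

sym_err = np.max(np.abs(P_d - P_d.T))            # should be ~0 by reversibility
eig_P = np.sort(np.real(np.linalg.eigvals(P)))   # spectra agree since P_d ~ P
eig_Pd = np.sort(np.real(np.linalg.eigvals(P_d)))
```

The second-largest eigenvalue extracted this way controls the spectral gap, which is what the mixing-time bound of Paulin (2015, Proposition 3.3) is applied to in the proof.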

B.1 Proof of Lemma 1
Fix any state-action pair $(s,a) \in \mathcal{S} \times \mathcal{A}$, and let us look at $\beta_{1,t}(s,a)$, namely, the $(s,a)$-th entry of the vector $\beta_{1,t}$ defined in (40). For convenience of presentation, we abuse notation to let $\Lambda_j(s,a)$ denote the $(s,a)$-th diagonal entry of the diagonal matrix $\Lambda_j$, and $P_t(s,a)$ (resp. $P(s,a)$) the $(s,a)$-th row of $P_t$ (resp. $P$). In view of the definition (40), we can write (110). As it turns out, it is convenient to study this expression by defining
$$t_k(s,a) := \text{the time stamp when the trajectory visits } (s,a) \text{ for the } k\text{-th time} \quad (111)$$
and $K_t(s,a)$ as in (112), namely, the total number of times, during the first $t$ iterations, that the sample trajectory visits $(s,a)$.
With these in place, the special form of $\Lambda_j$ (cf. (34)) allows us to rewrite (110) in the form (113), where we suppress the dependency on $(s, a)$ and write $t_k := t_k(s, a)$ to streamline notation. The main step thus boils down to controlling (113). Towards this, we claim that the bound (114) holds, with probability at least $1 - \delta$, simultaneously for all $(s, a) \in \mathcal{S} \times \mathcal{A}$ and all $1 \le K \le T$, provided that $0 < \eta \log \frac{|\mathcal{S}||\mathcal{A}|T}{\delta} < 1$. Recognizing the trivial bound $K_t(s, a) \le t \le T$ (by construction (112)) and substituting the claimed bound (114) into the expression (113) yields the advertised bound, thus concluding the proof of this lemma. It remains to validate the inequality (114).
Proof of the inequality (114). We first make the observation that, for any fixed integer $K > 0$, the vectors $\{P_{t_k+1}(s, a) \mid 1 \le k \le K\}$ are independently and identically distributed.$^4$ To justify this observation, let us denote by $P_{s,a}(\cdot)$ the transition probability from state $s$ when action $a$ is taken. For any $i_1, \cdots, i_K \in \mathcal{S}$, one obtains an identity in which (i) holds true from the Markov property as well as the fact that $t_K$ is an iteration in which the trajectory visits state $s$ and takes action $a$. Invoking the above identity recursively, we arrive at (116), meaning that the state transitions happening at times $\{t_1, \cdots, t_K\}$ are independent, each following the distribution $P_{s,a}(\cdot)$. This clearly demonstrates the independence of $\{P_{t_k+1}(s, a) \mid 1 \le k \le K\}$.
With the above observation in mind, we resort to the Hoeffding inequality to bound the quantity of interest (which has zero mean). To begin with, notice that the summands are uniformly bounded for all $k \ge 1$. As a consequence, invoking the Hoeffding inequality (Boucheron et al., 2013) yields the inequality (118) with probability exceeding $1 - \frac{\delta}{|\mathcal{S}||\mathcal{A}|T}$, where the last line follows from elementary algebra. Taking the union bound over all $(s, a) \in \mathcal{S} \times \mathcal{A}$ and all $1 \le K \le T$ then reveals that, with probability at least $1 - \delta$, the inequality (118) holds simultaneously over all $(s, a) \in \mathcal{S} \times \mathcal{A}$ and all $1 \le K \le T$. This concludes the proof.
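For reference, the form of Hoeffding's inequality invoked here (see Boucheron et al., 2013) reads: if $X_1, \ldots, X_K$ are independent random variables with $a_k \le X_k \le b_k$, then for every $\tau > 0$,

```latex
\mathbb{P}\left\{ \left| \sum_{k=1}^{K} \big( X_k - \mathbb{E}[X_k] \big) \right| \ge \tau \right\}
\;\le\; 2 \exp\left( - \frac{2\tau^2}{\sum_{k=1}^{K} (b_k - a_k)^2} \right).
```

In the present proof, the summands are the zero-mean increments indexed by the visit times, whose boundedness follows from the facts noted above.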

B.2 Proof of Lemma 2 and Lemma 6
Proof of Lemma 2. Let $\beta_{3,t} = \big( \prod_{j=1}^{t} (I - \Lambda_j) \big) \Delta_0$. Denote by $\beta_{3,t}(s, a)$ (resp. $\Delta_0(s, a)$) the $(s, a)$-th entry of $\beta_{3,t}$ (resp. $\Delta_0$). From the definition of $\beta_{3,t}$, it is easily seen that
$$|\beta_{3,t}(s, a)| = (1 - \eta)^{K_t(s, a)} \, |\Delta_0(s, a)|, \tag{119}$$
where $K_t(s, a)$ denotes the number of times the sample trajectory visits $(s, a)$ during the iterations $[1, t]$ (cf. (112)). By virtue of Lemma 8 and the union bound, one has, with probability at least $1 - \delta$, a lower bound on $K_t(s, a)$ simultaneously over all $(s, a) \in \mathcal{S} \times \mathcal{A}$ and all $t$ obeying $\frac{443 t_{\mathrm{mix}}}{\mu_{\min}} \log \frac{4|\mathcal{S}||\mathcal{A}|T}{\delta} \le t \le T$. Substitution into the relation (119) establishes that, with probability greater than $1 - \delta$, the claimed bound holds uniformly over all $(s, a) \in \mathcal{S} \times \mathcal{A}$ and all $t$ obeying $\frac{443 t_{\mathrm{mix}}}{\mu_{\min}} \log \frac{4|\mathcal{S}||\mathcal{A}|T}{\delta} \le t \le T$.
Proof of Lemma 6. The proof of this lemma is essentially the same as that of Lemma 2, except that we use instead the following lower bound on $K_t(s, a)$ (which is an immediate consequence of Lemma 5) for all $t > t_{\mathrm{cover,all}}$. Therefore, replacing $t\mu_{\min}$ with $t / t_{\mathrm{cover,all}}$ in the above analysis establishes Lemma 6.

B.3 Proof of Lemma 3
We prove this fact via an inductive argument. The base case with $t = 0$ is a consequence of the crude bound (49). Now, assume that the claim holds for all iterations up to $t - 1$; we would like to justify it for the $t$-th iteration as well. Towards this, recall that $(1 - \eta)^{\frac{1}{2} t \mu_{\min}} \le (1 - \gamma)\varepsilon$ for any $t \ge t_{\mathrm{th}}$. Therefore, combining the inequality (47) with the induction hypotheses, then taking the result together with the inequality (51b) and rearranging terms, we obtain the desired recursion, where we have used the definition of $v_t$ in (52). This taken collectively with the definition $u_t = \|v_t\|_\infty$ establishes the claim. This concludes the proof.

B.4 Proof of Lemma 4
We shall prove this result by induction on the index $k$. To start with, consider the base case where $k = 0$ and $t < t_{\mathrm{th}} + t_{\mathrm{frame}}$. By definition, it is straightforward to see that $u_0 \le \|\Delta_0\|_\infty / (1 - \gamma) = w_0$. In fact, repeating our argument for the crude bound (see Section 6.2.2) immediately reveals a uniform bound for all $t \ge 0$, thus indicating that the inequality (55) holds for the base case. In what follows, we assume that the inequality (55) holds up to $k - 1$, and would like to extend it to the case with all $t$ obeying $\lfloor \frac{t - t_{\mathrm{th}}}{t_{\mathrm{frame}}} \rfloor = k$.
Consider any $0 \le j < t_{\mathrm{frame}}$. In view of the definition of $v_t$ (cf. (52)) as well as our induction hypotheses, one can rearrange terms to bound $v_{t_{\mathrm{th}} + k t_{\mathrm{frame}} + j}$, where the last inequality follows from our induction hypotheses, the non-negativity of $(I - \Lambda_j)\Lambda_i \mathbf{1}$, and the fact that $w_s$ is non-increasing. Given any state-action pair $(s, a) \in \mathcal{S} \times \mathcal{A}$, let us look at the $(s, a)$-th entry of $v_{t_{\mathrm{th}} + k t_{\mathrm{frame}} + j}$, denoted by $v_{t_{\mathrm{th}} + k t_{\mathrm{frame}} + j}(s, a)$, towards which it is convenient to pause and introduce some notation. Recall that $N_i^n(s, a)$ has been used to denote the number of visits to the state-action pair $(s, a)$ between iteration $i$ and iteration $n$ (including $i$ and $n$). To help study the behavior in each time frame, we introduce the quantities
$$L_h^{k-1} := N_i^n(s, a) \quad \text{with } i = t_{\mathrm{th}} + h t_{\mathrm{frame}} + j + 1 \text{ and } n = t_{\mathrm{th}} + k t_{\mathrm{frame}} + j$$
for every $h \le k - 1$. Lemma 8 tells us that, with probability at least $1 - 2\delta$, a lower bound on these visit counts holds uniformly over all state-action pairs $(s, a)$. Armed with this set of notation, it is straightforward to use the expression (126) to expand $v_{t_{\mathrm{th}} + k t_{\mathrm{frame}} + j}(s, a)$, where we denote $\alpha_h := (1 - \eta)^{L_h^{k-1}}$ for any $h \le k - 1$ and $\alpha_k := 1$. A little algebra further leads to (130). Thus, in order to control the quantity $v_{t_{\mathrm{th}} + k t_{\mathrm{frame}} + j}(s, a)$, it suffices to control the right-hand side of (130), for which we start by bounding the last term. Plugging in the definitions of $w_h$ and $\alpha_h$ yields (131), where the last inequality results from the fact (128). Additionally, direct calculation yields (132), where the last inequality makes use of an elementary bound. Combining the inequalities (129), (130) and (131), and using the fact $\alpha_0 w_0 \ge 0$, give an upper bound on $v_{t_{\mathrm{th}} + k t_{\mathrm{frame}} + j}(s, a)$. We are now ready to justify that $v_{t_{\mathrm{th}} + k t_{\mathrm{frame}} + j}(s, a) \le w_k$. Note that the observation (132) implies a simplification which, combined with the bound (133), yields $v_{t_{\mathrm{th}} + k t_{\mathrm{frame}} + j}(s, a) \le w_k$, where the last line follows from the definition of $\rho$ (cf. (37d)).
Since the above inequality holds for all state-action pairs $(s, a)$, we conclude that $u_{t_{\mathrm{th}} + k t_{\mathrm{frame}} + j} = \| v_{t_{\mathrm{th}} + k t_{\mathrm{frame}} + j} \|_\infty \le w_k$.
As a consequence, we have established the inequality (55) for all $t$ obeying $\lfloor \frac{t - t_{\mathrm{th}}}{t_{\mathrm{frame}}} \rfloor = k$, which together with the induction argument completes the proof of this lemma.
Then the $(s, a)$-th row of $P$, denoted by $P(s, a)$, obeys
$$P(s, a) V = \frac{1}{K_N(s, a)} \sum_{i=0}^{N-1} P_{i+1}(s, a) V \, \mathbb{1}\{(s_i, a_i) = (s, a)\} = \frac{1}{K_N(s, a)} \sum_{k=1}^{K_N(s, a)} P_{t_k+1}(s, a) V,$$
where $P_i$ is defined in (35), and $P_i(s, a)$ denotes its $(s, a)$-th row. Here, $K_N(s, a)$ denotes the total number of visits to $(s, a)$ during the first $N$ time instances (cf. (112)), and $t_k := t_k(s, a)$ denotes the time stamp when the trajectory visits $(s, a)$ for the $k$-th time (cf. (111)). In view of our derivation for (116), the state transitions happening at times $t_1, t_2, \cdots, t_k$ are independent for any given integer $k > 0$. This together with the Hoeffding inequality implies a concentration bound; consequently, with probability at least $1 - \frac{\delta}{|\mathcal{S}||\mathcal{A}|}$, this bound holds with $k$ replaced by $K_N(s, a)$, recognizing the simple bound $K_N(s, a) \le N$. Conditioning on these $K_N(s, a)$ and applying the union bound over all $(s, a) \in \mathcal{S} \times \mathcal{A}$, we obtain (140) with probability at least $1 - \delta$.
In addition, for any $N \ge t_{\mathrm{frame}}$, Lemma 8 guarantees that, with probability $1 - 2\delta$, each state-action pair $(s, a)$ is visited at least $N\mu_{\min}/2$ times, namely, $K_N(s, a) \ge \frac{1}{2} N \mu_{\min}$ for all $(s, a)$. This combined with (140) yields the claimed bound with probability at least $1 - 3\delta$, where the second inequality follows from the triangle inequality, and the last inequality follows from $\|V\|_\infty \le \frac{1}{1-\gamma}$. Putting this together with (136) concludes the proof.
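The estimator analyzed above can be emulated numerically: since the transitions recorded at the visit times are i.i.d. draws from $P_{s,a}(\cdot)$, averaging $V$ at the realized next states concentrates around $P(s,a)V$ at the Hoeffding rate. The sketch below is a toy illustration under made-up numbers (3 states, a fixed distribution `p_sa`, and a bounded value vector `V`), not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical next-state distribution out of a fixed (s, a), and a value vector.
p_sa = np.array([0.6, 0.3, 0.1])
V = np.array([1.0, 0.5, 0.0])

# K i.i.d. transitions (playing the role of the visits t_1, ..., t_K), followed by
# the empirical average of V at the realized next states.
K = 100_000
next_states = rng.choice(len(p_sa), size=K, p=p_sa)
estimate = V[next_states].mean()

true_value = p_sa @ V  # = 0.75
print(abs(estimate - true_value))  # small: deviations shrink at roughly 1/sqrt(K)
```

Doubling the number of visits $K$ halves the squared deviation on average, which is precisely the $1/\sqrt{K_N(s,a)}$ dependence that the Hoeffding step above captures.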

B.6 Proof of Lemma 5
For notational convenience, set $t_l := t_{\mathrm{cover}} \, l$, and define, for any integer $l \ge 0$, the event
$$\mathcal{H}_l := \big\{ \exists (s, a) \in \mathcal{S} \times \mathcal{A} \text{ that is not visited within } (t_l, t_{l+1}] \big\}.$$
In view of the definition of $t_{\mathrm{cover}}$, we see that for any given $(s', a') \in \mathcal{S} \times \mathcal{A}$,
$$\mathbb{P}\{\mathcal{H}_l \mid (s_{t_l}, a_{t_l}) = (s', a')\} \le \frac{1}{2}.$$