We describe a reinforcement learning based scheme to estimate the stationary distribution of subsets of states of large Markov chains. 'Split sampling' ensures that the algorithm needs only to simulate state transitions and does not require knowledge of any other property of the Markov chain. (An earlier scheme required knowledge of the column sums of the transition probability matrix.) The algorithm is applied to analyze the stationary distribution of the states of a node in an 802.11 network.
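The paper's split sampling algorithm is not reproduced in the abstract, but the core idea it describes — estimating stationary probabilities of a chosen subset of states using nothing but simulated state transitions — can be sketched with a simple stochastic-approximation estimator. The chain below, its transition matrix, and the function names are all hypothetical illustrations, not the authors' construction; the sketch only demonstrates that the update touches the chain exclusively through sampled transitions, never through global quantities such as column sums.

```python
import random

# Hypothetical 3-state Markov chain. The estimator below accesses the chain
# only by sampling transitions from the current state's row -- no column
# sums or other global properties of P are ever used.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

def step(state, rng):
    # Sample the next state from row P[state]; this transition oracle is
    # the only access to the chain the algorithm needs.
    return rng.choices(range(len(P)), weights=P[state])[0]

def estimate_stationary(subset, n_steps=200_000, seed=0):
    """Stochastic-approximation estimate of pi(s) for s in `subset`.

    Update: est[s] <- est[s] + a_n * (1{X_n = s} - est[s]),
    with decreasing step size a_n = 1/n.
    """
    rng = random.Random(seed)
    est = {s: 0.0 for s in subset}
    x = 0  # arbitrary initial state
    for n in range(1, n_steps + 1):
        x = step(x, rng)
        a = 1.0 / n
        for s in subset:
            est[s] += a * ((1.0 if x == s else 0.0) - est[s])
    return est

print(estimate_stationary({0, 1}))
```

With step size 1/n this update reduces to the empirical occupation frequency of each tracked state, which converges to the stationary probability for an ergodic chain; the paper's contribution lies in a more refined sampling scheme than this plain simulation.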
Date of Conference: 23-26 Sept. 2008