We study the problem of determining approximate equivalences in Markov Decision Processes with rewards using bisimulation metrics. We provide an extension of the framework introduced by Ferns et al. (2004), which computes iteratively improving approximations to bisimulation metrics using exhaustive pairwise state comparisons. The similarity between states is determined using the Earth Mover's Distance, a notion extensively studied in optimization and machine learning. We address two computational limitations of this framework: first, all pairs of states have to be compared at every iteration, and second, convergence is proven only under exact computations. First, we extend the framework to incorporate "on-the-fly" methods, which allow computational effort to focus on pairs of states where the impact is expected to be greatest. We prove that a method similar to asynchronous dynamic programming converges to the correct value of the bisimulation metric. Second, we relax the exactness requirement by applying heuristics to obtain approximate state comparisons, building on recent work on improved algorithms for computing the Earth Mover's Distance. Finally, we show how this approach can be used to generate new algorithmic strategies, based on existing prioritized sweeping algorithms used for prediction and control in MDPs.
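To make the exhaustive-pairs baseline concrete, the following is a minimal sketch of the style of iteration described above: each update takes the maximum over actions of a reward-difference term plus a discounted Earth Mover's Distance between next-state distributions, with the current metric estimate serving as the ground metric. The function names, the `(1 - gamma)` / `gamma` weighting, and the use of a generic linear-programming solver for the transport problem are illustrative assumptions, not the paper's exact formulation or implementation.

```python
import numpy as np
from scipy.optimize import linprog


def emd(p, q, ground):
    """Earth Mover's Distance between discrete distributions p and q
    over n states, under an n x n ground metric, via linear programming.
    (Illustrative: specialized EMD solvers are far faster in practice.)"""
    n = len(p)
    c = ground.flatten()  # cost of moving mass i -> j
    A_eq = []
    for i in range(n):  # row marginals: mass leaving state i equals p[i]
        row = np.zeros((n, n))
        row[i, :] = 1.0
        A_eq.append(row.flatten())
    for j in range(n):  # column marginals: mass arriving at j equals q[j]
        col = np.zeros((n, n))
        col[:, j] = 1.0
        A_eq.append(col.flatten())
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.concatenate([p, q]),
                  bounds=(0, None))
    return res.fun


def bisim_metric(R, P, gamma, iters=50):
    """Exhaustive pairwise fixed-point iteration for a bisimulation metric,
    in the spirit of Ferns et al. (2004).
    R: (A, S) reward array; P: (A, S, S) transition probabilities."""
    num_a, num_s = R.shape
    d = np.zeros((num_s, num_s))
    for _ in range(iters):
        d_new = np.zeros((num_s, num_s))
        # Every state pair is updated at every sweep; this is exactly the
        # cost that on-the-fly / prioritized variants aim to avoid.
        for s in range(num_s):
            for t in range(s + 1, num_s):
                vals = [(1 - gamma) * abs(R[a, s] - R[a, t])
                        + gamma * emd(P[a, s], P[a, t], d)
                        for a in range(num_a)]
                d_new[s, t] = d_new[t, s] = max(vals)
        d = d_new
    return d
```

For example, a two-state MDP with one action, rewards 0 and 1, and absorbing self-loops yields a metric of (1 - gamma)(1 + gamma + gamma^2 + ...) between the two states, which converges to 1. An asynchronous variant would replace the full sweep with updates to selected pairs only, which is where prioritization heuristics enter.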