This paper proposes a new architecture for hybrid value function estimation that combines temporal-difference (TD) learning with an on-line variant of random forests (RF); we call this implementation random-TD. The RF is first induced in on-line mode to cope with large state spaces and memory constraints, while the state-action mapping is driven by the Bellman error, i.e., the TD error. The approach iteratively improves its value function by exploiting only the relevant parts of the action space. We evaluate the potential of the proposed procedure in terms of the reduction in Bellman error through extended empirical studies on high-dimensional control problems (Ailerons, Elevator, Kinematics, and Friedman). The results demonstrate that our approach can significantly improve the performance of TD methods and speed up the learning process.
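To make the TD error that drives the method concrete: in TD(0), the error for a transition (s, r, s') is δ = r + γV(s') − V(s), and the value estimate is nudged by αδ. The sketch below illustrates this update on a toy two-state chain; it uses a plain dictionary-backed value table rather than the paper's on-line random forest, and the state names, step size, and discount factor are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the TD(0) update rule (NOT the paper's random-TD
# implementation): delta = r + gamma * V(s') - V(s) is the TD/Bellman
# error that random-TD would use to guide its on-line forest.

def td_update(V, s, r, s_next, gamma=0.9, alpha=0.1):
    """One TD(0) step on a dict-backed value table; returns the TD error."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

# Toy two-state chain: s0 -> s1 (reward 0), s1 -> terminal (reward 1).
V = {}
for _ in range(200):
    td_update(V, "s0", 0.0, "s1")
    td_update(V, "s1", 1.0, "terminal")
# V("s1") converges toward 1.0 and V("s0") toward gamma * V("s1") = 0.9.
```

In random-TD, the tabular `V` above would be replaced by an on-line RF regressor whose splits are informed by this same error signal.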