In a general reinforcement learning problem, a plant model (state transition probabilities) is estimated, and a policy learned for the estimated plant is applied to the real plant. If the estimated plant differs from the real plant, the obtained policy may not work on the real plant. Therefore, a set of plants with variations is used for learning in order to obtain a policy that is robust against those variations. However, Bellman's principle of optimality does not hold over such a set of plants, so a typical dynamic programming algorithm cannot solve the problem. This study shows why the principle of optimality fails in this setting. It then formulates relaxed problems whose solutions can be obtained, and proposes methods to learn feasible policies efficiently. The effectiveness of the proposed approach is demonstrated by applying it to simple examples.
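The worst-case objective described above can be illustrated with a small sketch (not the paper's algorithm): a fixed deterministic policy is evaluated under each plant in a set, its robust value is the minimum over plants, and a brute-force search picks the policy with the best worst-case value. The two-state MDP, the plant set, and the rewards below are invented for illustration only.

```python
import itertools

GAMMA = 0.9  # discount factor

# Hypothetical set of plants: plants[k][a][s] lists the transition
# probabilities to each next state after taking action a in state s.
plants = [
    # Plant 0: action 0 tends to stay, action 1 tends to switch states.
    [[[0.9, 0.1], [0.1, 0.9]],
     [[0.2, 0.8], [0.8, 0.2]]],
    # Plant 1: a perturbed estimate of the same dynamics.
    [[[0.6, 0.4], [0.4, 0.6]],
     [[0.5, 0.5], [0.5, 0.5]]],
]
# reward[a][s]: reward for taking action a in state s (illustrative).
reward = [[1.0, 0.0], [0.0, 1.0]]

def evaluate(policy, plant, iters=500):
    """Iterative policy evaluation of a deterministic policy under one plant."""
    v = [0.0, 0.0]
    for _ in range(iters):
        v = [reward[policy[s]][s]
             + GAMMA * sum(p * v[s2]
                           for s2, p in enumerate(plant[policy[s]][s]))
             for s in range(2)]
    return v

def worst_case_value(policy):
    """Robust value of a policy: worst case over the plant set."""
    return min(min(evaluate(policy, plant)) for plant in plants)

# Brute-force search over all deterministic stationary policies.
best = max(itertools.product(range(2), repeat=2), key=worst_case_value)
print("robust policy:", best, "worst-case value:", worst_case_value(best))
```

Enumerating policies is only viable for tiny problems; the point of the sketch is that the robust objective couples the whole policy to the whole plant set, which is why a stagewise Bellman decomposition is no longer guaranteed to apply.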
Date of Conference: 9-12 Oct. 2011