In this paper we present a mixed integer programming approach to deterministic dynamic programming. We consider the problem of computing a policy that maximizes the total discounted reward earned over an infinite time horizon. While problems of this form are difficult in general, suboptimal solutions and performance bounds can be computed by approximating the dynamic programming value function. Here we provide a linear programming-based method for approximating the value function, and show how suboptimal policies can be computed through repeated solution of mixed integer programs that directly utilize this approximation. We have applied this approach to problems whose states are binary vectors of dimension as large as several hundred. Although the number of distinct states in such a problem is extremely large, we are able to obtain suboptimal policies with surprisingly tight performance guarantees. We illustrate the application of this method on a class of infinite-horizon job shop scheduling problems.
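The two ingredients described above can be illustrated on a toy problem. The sketch below is entirely hypothetical and is not the paper's formulation: it uses a tiny made-up deterministic DP (three pending jobs encoded as a binary state vector, serving job `a` earns reward `w[a]` and clears its bit), a hand-picked linear feature map `phi`, and uniform state-relevance weights. It solves the approximate linear program with `scipy.optimize.linprog` to get an upper-bounding value function, then acts greedily with respect to that approximation. The paper solves the greedy step as a mixed integer program because its state spaces are far too large to enumerate; here the action set is tiny, so plain enumeration stands in for the MIP.

```python
import itertools

import numpy as np
from scipy.optimize import linprog

# Toy deterministic DP (illustrative only, not the paper's model):
# state x in {0,1}^3 flags which of three jobs are still pending;
# serving job a earns reward w[a] and clears bit a; rewards are discounted.
alpha = 0.9
w = np.array([3.0, 2.0, 1.0])
n = 3
states = list(itertools.product([0, 1], repeat=n))

def phi(x):
    # Linear value-function architecture: a constant plus one weight per bit.
    return np.array([1.0, *x])

# Approximate linear program: minimize sum_x V(x) subject to
#   V(x) >= g(x, a) + alpha * V(f(x, a))   for every state-action pair,
# with V(x) = phi(x) . r.  Any feasible V upper-bounds the optimal V*.
A_ub, b_ub = [], []
for x in states:
    for a in (a for a in range(n) if x[a]):
        nxt = list(x); nxt[a] = 0
        A_ub.append(-(phi(x) - alpha * phi(tuple(nxt))))  # -(V(x) - a*V(x')) <= -g
        b_ub.append(-w[a])
    # "idle" action: stay put, earn nothing  ->  (1 - alpha) * V(x) >= 0
    A_ub.append(-(1 - alpha) * phi(x))
    b_ub.append(0.0)

c = np.sum([phi(x) for x in states], axis=0)  # uniform state-relevance weights
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(None, None)] * (n + 1))
r = res.x
V = lambda x: phi(x) @ r

# Greedy (one-step lookahead) policy under the approximation.  For the large
# problems in the paper this maximization is itself a MIP; here we enumerate.
x, t, reward = (1, 1, 1), 0, 0.0
bound = V(x)  # the ALP value at the start state upper-bounds any policy's reward
while any(x):
    best = max((a for a in range(n) if x[a]),
               key=lambda a: w[a] + alpha * V(tuple(0 if i == a else x[i]
                                                   for i in range(n))))
    reward += alpha**t * w[best]
    nxt = list(x); nxt[best] = 0
    x, t = tuple(nxt), t + 1
```

Because the ALP constraints force the approximate value function to dominate the Bellman operator, `bound` is a certified upper bound on the optimal discounted reward, so `reward <= bound` gives a performance guarantee in the spirit of the bounds the abstract describes.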