Markov decision processes (MDPs) have become a standard method for planning under uncertainty; however, they usually assume a sequential process in which a single action is executed at each time step. Some applications, such as robotics, require several actions to be executed concurrently. We propose a framework based on a functional decomposition of the problem into several sub-problems, each represented as a subMDP. Each subMDP is solved independently, and the resulting policies are combined into a global solution in which the actions of the subMDPs can be executed concurrently. When the local policies are combined, conflicts between them can arise. We define two kinds of conflicts, resource conflicts and behavior conflicts, and propose solutions for both. Resource conflicts are solved off-line via a two-phase process that guarantees a near-optimal global policy. Behavior conflicts are solved on-line based on a set of restrictions specified by the user: if there are no restrictions, all the actions are executed concurrently; otherwise, an arbiter selects the action(s) with the highest expected utility. We present experimental results for two cases: (i) a simulated robot navigation problem with resource conflicts, and (ii) a simulated robot in a message-delivery task with behavior conflicts.
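The on-line arbitration idea described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the representation of subMDP proposals as (action, expected utility) pairs, and the encoding of user restrictions as sets of mutually exclusive subMDPs are all assumptions made here for concreteness.

```python
def arbitrate(proposals, restrictions):
    """Combine concurrent subMDP actions, resolving behavior conflicts.

    proposals:    dict mapping subMDP name -> (action, expected_utility),
                  i.e. the action each local policy proposes in the
                  current state, with its expected utility.
    restrictions: set of frozensets of subMDP names whose actions the
                  user has declared must not execute concurrently.
    Returns a dict of subMDP name -> action to execute this step.
    """
    selected = dict(proposals)  # no restrictions: full concurrency
    for group in restrictions:
        members = [m for m in group if m in selected]
        if len(members) > 1:
            # Behavior conflict: keep only the proposal with the
            # highest expected utility; drop the others.
            best = max(members, key=lambda m: selected[m][1])
            for m in members:
                if m != best:
                    del selected[m]
    return {name: action for name, (action, _) in selected.items()}


# Hypothetical example: three subMDPs propose actions; two of them
# ("dialogue" and "gesture") are restricted from running concurrently.
proposals = {
    "navigation": ("go_to_door", 0.9),
    "dialogue":   ("announce_message", 0.4),
    "gesture":    ("point_at_door", 0.7),
}
restrictions = {frozenset({"dialogue", "gesture"})}
print(arbitrate(proposals, restrictions))
# navigation and gesture execute; dialogue is dropped (lower utility)
```

With an empty restriction set, all proposed actions are returned, matching the abstract's statement that without restrictions everything runs concurrently.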
Date of Conference: 30 March – 2 April 2009