Existing task allocation methods for multi-robot systems generally treat the allocation of a task to a robot as a certain source of reward (real reward). In doing so, they ignore the impact of the robots' uncertain behavior on the solution quality and on the reward actually obtained. In environments where task execution is uncertain, a robot cannot know, during the allocation phase, whether it will be able to execute all the tasks allocated to it. Consequently, in many recent real-world applications, such as planetary rovers, known task allocation mechanisms are insufficient to provide good solutions under uncertainty. In this paper, we address the problem of multi-robot task allocation in situations where task execution is uncertain. We propose an approach that allows robots to take uncertain execution into account while negotiating the allocation of tasks. We decompose the problem into two stages. In the first stage, each robot locally selects the tasks it would like to execute, basing its choice on some criterion. Since resource consumption is uncertain, this criterion cannot simply be the maximization of the robot's reward. We define a new criterion based on the notion of expected reward that provides a good trade-off between the reward of the selected tasks and the chances of completely executing them. The task selection mechanism is formalized as a Markov decision process that maximizes the expected reward. In the second stage, an auction mechanism allows robots, acting in a decentralized way, to coordinate their local choices and to allocate the tasks.
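The two-stage scheme described above can be sketched in code. The task model (rewards with discrete cost distributions), the specific resource budgets, and the marginal-bid auction rule below are illustrative assumptions for the sketch, not the paper's exact formulation; the first function is a minimal MDP whose state is (remaining tasks, remaining resource) and whose value is the expected reward, and the second is a simple sequential single-item auction over those values.

```python
from functools import lru_cache

# Hypothetical task model (illustrative values): each task has a reward
# and a discrete distribution over resource consumption (cost -> prob).
TASKS = {
    "t1": {"reward": 10.0, "cost_dist": {2: 0.7, 4: 0.3}},
    "t2": {"reward": 6.0,  "cost_dist": {1: 0.9, 3: 0.1}},
    "t3": {"reward": 8.0,  "cost_dist": {3: 0.5, 5: 0.5}},
}

def expected_reward(task_names, budget, tasks=TASKS):
    """Stage 1: value of the MDP state (remaining tasks, remaining resource).

    An action picks the next task to attempt; the transition is stochastic
    over how much resource the task actually consumes.  A task pays its
    reward only in the outcomes where enough resource is left to finish it,
    which trades reward against the chance of complete execution.
    """
    @lru_cache(maxsize=None)
    def value(remaining, budget):
        best = 0.0  # stopping is always available
        for i, name in enumerate(remaining):
            rest = remaining[:i] + remaining[i + 1:]
            ev = 0.0
            for cost, prob in tasks[name]["cost_dist"].items():
                if cost <= budget:  # task completes in this outcome
                    ev += prob * (tasks[name]["reward"] + value(rest, budget - cost))
                # else: execution fails, no reward from this branch
            best = max(best, ev)
        return best
    return value(tuple(sorted(task_names)), budget)

def auction(tasks, robots):
    """Stage 2: decentralized coordination via a sequential auction.

    Each round, every robot bids its marginal expected reward for each
    unallocated task (given what it already holds and its budget); the
    highest (task, robot) bid wins that task.
    """
    unallocated = set(tasks)
    allocation = {r: [] for r in robots}
    while unallocated:
        best = None
        for r, budget in robots.items():
            base = expected_reward(allocation[r], budget)
            for t in unallocated:
                bid = expected_reward(allocation[r] + [t], budget) - base
                if best is None or bid > best[0]:
                    best = (bid, r, t)
        bid, r, t = best
        if bid <= 0:
            break  # no robot expects positive marginal reward; stop
        allocation[r].append(t)
        unallocated.remove(t)
    return allocation
```

For example, `auction(set(TASKS), {"r1": 6, "r2": 6})` assigns each task to at most one robot, and a robot's bid already discounts tasks it is unlikely to finish within its remaining budget.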