Many real-world applications demand solutions that are difficult to implement. It is common practice for system designers to resort to multiagent theory, in which the problem at hand is broken into sub-problems, each handled by an autonomous agent. Nevertheless, new questions emerge: How should a problem be decomposed? What should the task of each agent be? And what information does each agent need to perform its task? In addition, conflicts between the agents' partial solutions (actions) may arise as a consequence of their autonomy, raising a further question: how should such conflicts be resolved? In this paper we conduct a study to answer some of these questions under a multiagent learning framework. The proposed framework guarantees an optimal solution to the original problem at the cost of low learning speed, but it can be tuned to balance learning speed and optimality. We present an experimental analysis showing learning curves until convergence to optimality, illustrating the trade-off between learning speed and optimality.