
Issues in rational planning in multi-agent settings

Author: P. J. Gmytrasiewicz, Dept. of Computer Science, University of Illinois at Chicago, Chicago, IL, USA

We adopt the decision-theoretic principle of expected utility maximization as a paradigm for designing autonomous rational agents operating in multi-agent environments. We use the formalism of partially observable Markov decision processes and generalize it to include the presence of other agents. Under common assumptions, a belief-state MDP can be defined using agents' beliefs, which include each agent's knowledge about the environment and about the other agents, including their knowledge about others' states of knowledge. The resulting solution corresponds to what has been called the decision-theoretic approach to game theory. Our approach complements the more traditional game-theoretic approach based on equilibria. Equilibria may be non-unique and do not capture off-equilibrium behaviors. Our approach seeks to avoid these problems, but does so at the cost of having to represent, process, and continually update the complex nested states of agents' knowledge.
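To make the decision-theoretic principle concrete, the following is a minimal illustrative sketch (not from the paper) of the two basic operations the abstract refers to: Bayesian updating of a belief state over a small set of environment states, and selecting the action that maximizes one-step expected utility under that belief. All states, actions, rewards, and observation likelihoods below are made-up illustrative values; the paper's full formalism additionally nests beliefs about other agents, which this toy example omits.

```python
# Illustrative sketch: belief-state update and expected-utility
# maximization in a tiny two-state, two-action POMDP.
# All numbers are assumed for illustration only.

states = ["s0", "s1"]
actions = ["a0", "a1"]

# Belief state: a probability distribution over environment states.
belief = {"s0": 0.6, "s1": 0.4}

# Immediate reward R(s, a), assumed values.
reward = {
    ("s0", "a0"): 10.0, ("s0", "a1"): -1.0,
    ("s1", "a0"): -5.0, ("s1", "a1"): 2.0,
}

def expected_utility(belief, action):
    """One-step expected utility of an action under the belief."""
    return sum(belief[s] * reward[(s, action)] for s in states)

def best_action(belief):
    """Decision-theoretic choice: maximize expected utility."""
    return max(actions, key=lambda a: expected_utility(belief, a))

def update(belief, likelihood):
    """Bayes update of the belief given P(o | s) for an observation o."""
    unnorm = {s: belief[s] * likelihood[s] for s in states}
    z = sum(unnorm.values())  # normalizing constant P(o)
    return {s: p / z for s, p in unnorm.items()}

# Example: choose an action, then revise the belief after an observation.
choice = best_action(belief)                      # maximizes E[R]
posterior = update(belief, {"s0": 0.2, "s1": 0.9})
```

In the multi-agent generalization the paper describes, the state component of the belief would itself contain models of the other agents' beliefs, so the same update and maximization steps operate over a nested, and much more expensive, representation.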

Published in:

Proceedings of the 36th Annual Hawaii International Conference on System Sciences, 2003

Date of Conference:

6-9 Jan. 2003