Localization for a class of two-team zero-sum Markov games

Authors:

Hyeong Soo Chang (Dept. of Computer Science & Engineering, Sogang University, Seoul, South Korea); M. C. Fu

Abstract:

This paper presents a novel concept of "localization" for a class of infinite-horizon two-team zero-sum Markov games (MGs) in which a minimizer team of multiple decision makers competes against nature (a maximizer team) that controls disturbances unknown to the minimizer team. The minimizer team has a general joint cost structure but a special decomposable state/action structure: each pair of a minimizing agent's action and the random disturbance applied to that agent affects the system's state transitions independently of all other pairs. Through localization, the original MG is decomposed into "local" MGs defined only on local state and action spaces. We discuss how to use localization to develop an efficient distributed heuristic scheme for finding an "autonomous" joint policy in which each agent's action depends only on its local state.
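
The following minimal Python sketch illustrates the decomposable structure the abstract describes: the joint state factors into local states, each minimizing agent's (action, disturbance) pair drives only its own local transition, the cost may still couple all agents, and an "autonomous" joint policy maps each local state to a local action. All names, state spaces, dynamics, and the cost function here are hypothetical placeholders for illustration, not the paper's model or algorithm.

import random

# Local transition kernel for agent i: the next local state depends only on
# (local state, agent's action, local disturbance). Placeholder dynamics.
def local_transition(i, x_i, a_i, w_i):
    return (x_i + a_i + w_i) % 5  # 5 local states per agent, for illustration

def joint_transition(x, a, w):
    # Each (action, disturbance) pair affects its own local state
    # independently of all other pairs.
    return tuple(local_transition(i, x[i], a[i], w[i]) for i in range(len(x)))

def joint_cost(x, a, w):
    # The cost may couple all agents (general joint cost structure),
    # even though the dynamics factor into local pieces.
    return sum(x) + max(a) - min(w)

def autonomous_policy(x, local_policies):
    # An "autonomous" joint policy: agent i's action is a function of its
    # local state only.
    return tuple(pi(x_i) for pi, x_i in zip(local_policies, x))

if __name__ == "__main__":
    n = 3
    x = (0, 1, 2)                                       # joint state = tuple of local states
    local_policies = [lambda s: s % 2] * n              # placeholder local decision rules
    a = autonomous_policy(x, local_policies)            # minimizer team's joint action
    w = tuple(random.choice([0, 1]) for _ in range(n))  # nature's (maximizer's) disturbances
    print("cost:", joint_cost(x, a, w), "next state:", joint_transition(x, a, w))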

Published in:

2004 43rd IEEE Conference on Decision and Control (CDC), Volume 5

Date of Conference:

14-17 Dec. 2004