
Policy iteration algorithm for distributed networks and graphical games

Authors: Vamvoudakis, K.G.; Lewis, F.L. (Automation & Robotics Research Institute, University of Texas at Arlington, Fort Worth, TX, USA)

Abstract:

This paper brings together cooperative control, reinforcement learning, and game theory to present a multi-agent distributed formulation for graphical games. The notion of graphical games is developed for dynamical systems, where the dynamics and performance index of each node depend only on local neighbor information. We propose a cooperative policy iteration algorithm for graphical games. This algorithm converges to the best response when the neighbors of each agent do not update their policies, and to the Nash equilibrium when all agents update their policies simultaneously. It is also shown that the convergence rate of the algorithm depends on the speed of convergence of each player's neighbors, on the graph topology, and on the user-defined weighting matrices in the performance indices. This framework provides a basis for developing online adaptive learning solutions of graphical games in real time.
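Although the paper develops policy iteration for dynamical graphical games, the simultaneous-update structure it describes can be illustrated on a static analogue. The sketch below is a toy built on assumptions, not the paper's algorithm: each agent on an assumed 4-node ring graph minimizes a quadratic cost J_i = q*u_i^2 - 2*u_i*(b_i + sum over neighbors of u_j) that depends only on its own action and its neighbors' actions, and all agents apply their best responses simultaneously. The ring topology, the cost form, and the weight q are illustrative choices.

```python
import numpy as np

# Assumed adjacency matrix of a 4-node ring graph: E[i, j] = 1 when
# node j is a neighbor of node i. Only neighbor actions enter each
# agent's cost -- the "local information" property of graphical games.
E = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n = E.shape[0]
q = 4.0          # user-defined weight on each agent's own action (assumed)
b = np.ones(n)   # local reference terms in each cost (assumed)

# Agent i's cost: J_i = q*u_i**2 - 2*u_i*(b[i] + sum_j E[i, j]*u[j]).
# Setting dJ_i/du_i = 0 with neighbor actions held fixed gives the best
# response u_i = (b[i] + sum_j E[i, j]*u[j]) / q, so one simultaneous
# update is exactly each agent's best response to frozen neighbors.
u = np.zeros(n)
for k in range(100):
    u_new = (b + E @ u) / q        # all agents update at the same time
    if np.max(np.abs(u_new - u)) < 1e-10:
        break                      # fixed point: no agent can improve
    u = u_new

print(f"approximate Nash equilibrium after {k} iterations: {u}")
```

In this toy, the simultaneous best-response map is linear, so it converges whenever the spectral radius of E/q is below one, and its geometric rate is governed by the eigenvalues of E (the graph topology) and the user-chosen weight q. This mirrors, in a much simpler setting, the abstract's observation that convergence depends on the neighbors' behavior, the graph topology, and the user-defined matrices in the performance index.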

Published in:

2011 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC)

Date of Conference:

12-15 Dec. 2011