Multiagent-Based Reinforcement Learning for Optimal Reactive Power Dispatch

4 Author(s)
Yinliang Xu (Klipsch School of Electrical and Computer Engineering, New Mexico State University, Las Cruces, NM, USA); Wei Zhang; Wenxin Liu; Frank Ferrese

This paper proposes a fully distributed multiagent-based reinforcement learning method for optimal reactive power dispatch. Under this method, two agents communicate with each other only if their corresponding buses are electrically coupled. The global rewards required for learning are obtained with a consensus-based global information discovery algorithm, which has been demonstrated to be efficient and reliable. Based on the discovered global rewards, a distributed Q-learning algorithm is implemented to minimize active power loss while satisfying operational constraints. The proposed method does not require an accurate system model and can learn from scratch. Simulation studies with power systems of different sizes show that the method is computationally efficient and provides near-optimal solutions. Prior knowledge can significantly speed up the learning process and reduce the occurrence of undesirable disturbances. The proposed method has good potential for online implementation.
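The two mechanisms named in the abstract, neighbor-only communication for consensus-based global reward discovery and a Q-learning update driven by the discovered reward, can be sketched as below. This is a minimal illustration under assumed parameters, not the paper's implementation: the 4-bus communication graph, the local reward values, the consensus step size, and the Q-table dimensions are all hypothetical.

```python
import numpy as np

# Hypothetical 4-bus system: agents exchange values only along edges of the
# communication graph, i.e., between electrically coupled buses.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
n = 4

# Each agent's local reward (e.g., derived from its local active power loss).
local_rewards = np.array([0.8, 0.5, 0.3, 0.4])

# Build the neighbor lists from the edge set.
neighbors = {i: [] for i in range(n)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

# Average-consensus iteration: x_i <- x_i + eps * sum_j (x_j - x_i),
# which converges to the network average when eps < 1 / max_degree
# (max degree here is 3, so eps = 0.3 is admissible).
eps = 0.3
x = local_rewards.copy()
for _ in range(200):
    x = x + eps * np.array(
        [sum(x[j] - x[i] for j in neighbors[i]) for i in range(n)]
    )

# Every agent now holds the average of all local rewards; scaling by the
# number of agents recovers the global reward without central coordination.
global_reward_estimate = n * x[0]

# Each agent then applies an ordinary Q-learning update using the discovered
# global reward (state/action sizes and indices are illustrative only).
alpha, gamma = 0.1, 0.9
Q = np.zeros((5, 3))
s, a, s_next = 0, 1, 2
Q[s, a] += alpha * (global_reward_estimate + gamma * Q[s_next].max() - Q[s, a])
```

With the values above, the consensus estimate converges to the true global reward sum (0.8 + 0.5 + 0.3 + 0.4 = 2.0), which every agent obtains using only neighbor exchanges.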

Published in:

IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) (Volume: 42, Issue: 6)