Stability Constrained Reinforcement Learning for Decentralized Real-Time Voltage Control


Abstract:

Deep reinforcement learning (RL) has been recognized as a promising tool to address the challenges in real-time control of power systems. However, its deployment in real-world power systems has been hindered by a lack of explicit stability and safety guarantees. In this article, we propose a stability-constrained RL method for real-time implementation of voltage control that guarantees system stability both during policy learning and deployment of the learned policy. The key idea underlying our approach is an explicitly constructed Lyapunov function that leads to a sufficient structural condition for stabilizing policies, i.e., monotonically decreasing policies guarantee stability. We incorporate this structural constraint with RL, by parameterizing each local voltage controller using a monotone neural network, thus ensuring the stability constraint is satisfied by design. We demonstrate the effectiveness of our approach in both single-phase and three-phase IEEE test feeders, where the proposed method can reduce the transient control cost by more than 26.7% and shorten the voltage recovery time by 23.6% on average compared to the widely used linear policy, while always achieving voltage stability. In contrast, standard RL methods often fail to achieve voltage stability.
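The structural condition described above — each local voltage controller is a monotonically decreasing function of its local voltage deviation, enforced by parameterizing the policy as a monotone neural network — can be illustrated with a small NumPy sketch. This is an assumption-laden toy construction, not the authors' implementation: layer sizes, weights, and the function name `monotone_policy` are hypothetical. Monotonicity is obtained by constraining layer weights to be non-negative (via `abs`) and using a monotone activation, then negating the output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer weights. Non-negativity (via abs) plus a monotone
# activation makes each layer monotone non-decreasing in its input.
W1 = np.abs(rng.normal(size=(8, 1)))
b1 = rng.normal(size=(8,))
W2 = np.abs(rng.normal(size=(1, 8)))
b2 = rng.normal(size=(1,))

def monotone_policy(v_dev: float) -> float:
    """Local control action as a monotonically DECREASING function of
    the voltage deviation v_dev -- the structural property that, per
    the paper, suffices for voltage stability."""
    h = np.tanh(W1 @ np.array([v_dev]) + b1)  # monotone increasing layer
    out = W2 @ h + b2                          # still monotone increasing
    return -out.item()                         # negate: monotone decreasing

# Sanity check: the policy output never increases as v_dev grows.
actions = [monotone_policy(v) for v in np.linspace(-0.1, 0.1, 21)]
assert all(a >= b for a, b in zip(actions, actions[1:]))
```

In training, such a constraint would typically be kept by reparameterizing the weights (e.g., optimizing unconstrained parameters and mapping them through `abs` or `exp`), so gradient updates can never leave the stabilizing policy class.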
Published in: IEEE Transactions on Control of Network Systems (Volume: 11, Issue: 3, September 2024)
Page(s): 1370-1381
Date of Publication: 01 December 2023
