Optimal Coordination of Distributed Energy Resources Using Deep Deterministic Policy Gradient


Abstract:

Recent studies have shown that reinforcement learning (RL) is a promising approach for the coordination and control of distributed energy resources (DERs) under uncertainty. Many existing RL approaches, including Q-learning and approximate dynamic programming, are based on lookup tables, which become inefficient when the problem size is large and infeasible when continuous states and actions are involved. In addition, when modeling battery energy storage systems (BESSs), loss of life is often not adequately considered in the decision-making process. This paper proposes a deep RL method for DER coordination that accounts for BESS degradation. The proposed method is built on an adaptive actor-critic architecture and employs an off-policy deterministic policy gradient algorithm to determine the dispatch operation that minimizes both operation cost and BESS loss of life. Case studies were performed to validate the proposed method and to demonstrate the effects of incorporating degradation models into the control design.
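The abstract describes an off-policy deterministic policy gradient method with an actor-critic architecture, i.e., a DDPG-style update. The paper's own implementation details are not given here, so the sketch below is only a minimal illustration in PyTorch of that generic update: the network sizes, the names Actor, Critic, and ddpg_update, and the reward convention (negative of operation cost plus a BESS degradation penalty) are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: maps the grid/DER state to a continuous dispatch action."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # dispatch scaled to [-1, 1]
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q-network: scores a (state, action) pair."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def ddpg_update(actor, critic, actor_t, critic_t, batch,
                actor_opt, critic_opt, gamma=0.99, tau=0.005):
    # batch: tensors (s, a, r, s2, done) sampled off-policy from a replay buffer;
    # r and done are shaped (batch, 1). Here r would be the negative of the
    # operation cost plus a BESS degradation (loss-of-life) penalty.
    s, a, r, s2, done = batch
    # Critic: one-step TD target computed with the slow-moving target networks.
    with torch.no_grad():
        q_target = r + gamma * (1 - done) * critic_t(s2, actor_t(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: deterministic policy gradient -- ascend Q along the policy's action.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Soft (Polyak) update of the target networks.
    for net, net_t in ((actor, actor_t), (critic, critic_t)):
        for p, p_t in zip(net.parameters(), net_t.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
```

Because the policy gradient is deterministic and the transitions are replayed off-policy, exploration noise would be added to the dispatch action only when interacting with the environment, not inside this update.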
Date of Conference: 08-09 November 2022
Date Added to IEEE Xplore: 30 December 2022
Conference Location: Austin, TX, USA
