Explainable and Safety Aware Deep Reinforcement Learning-Based Control of Nonlinear Discrete-Time Systems Using Neural Network Gradient Decomposition


Abstract:

This paper presents an explainable deep reinforcement learning (DRL)-based safety-aware optimal adaptive tracking (SOAT) scheme for a class of nonlinear discrete-time (DT) affine systems subject to state inequality constraints. The DRL-based SOAT scheme uses a multilayer neural network (MNN) actor-critic to estimate the cost function and the optimal policy, while the MNN update laws at each layer are tuned using both the singular value decomposition (SVD) of the activation-function gradient, which mitigates the vanishing-gradient issue, and the safety-aware Bellman error. An approximate safety-aware optimal policy is derived from the Karush-Kuhn-Tucker (KKT) conditions by incorporating a higher-order control barrier function (HOCBF) into the Hamiltonian through a Lagrange multiplier. The resulting safety-aware Bellman error enables safe exploration both during the online learning phase and at steady state, without any explicit changes to the actor-critic MNN update laws. To provide explainability and insight, the Shapley Additive Explanations (SHAP) method is employed to construct an explainer model for the DRL-based SOAT scheme and identify the features most important in determining the optimal policy. Overall stability is established. Finally, the effectiveness of the proposed method is demonstrated on Shipboard Power Systems (SPS), achieving over a 35% reduction in cumulative cost compared with an existing actor-critic MNN optimal control policy.

Note to Practitioners:

In practical control systems, meeting safety constraints is often critical, since ignoring them can degrade performance or damage equipment. This paper addresses the challenge of a safe DRL-based control approach that not only optimizes performance but also integrates robust safety assurances. The proposed DRL-based SOAT scheme specifically targets nonlinear discrete-time systems that must satisfy state inequality constraints. The successful proposed control performance in simulations on a ...
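Below is a minimal, self-contained sketch of the gradient-decomposition idea described in the abstract: the SVD of a layer's activation-function gradient is used to condition the backpropagated signal in an MNN critic update driven by a stand-in safety-aware Bellman error. This is an illustrative reading under stated assumptions, not the paper's exact update law; the network sizes, the learning rate, and the scalar bellman_error are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer critic: V_hat(x) = w2^T tanh(W1 x)
n_in, n_hidden = 4, 8
W1 = 0.1 * rng.standard_normal((n_hidden, n_in))
w2 = 0.1 * rng.standard_normal(n_hidden)

x = rng.standard_normal(n_in)      # current state (toy)
z = np.tanh(W1 @ x)                # hidden-layer activation
bellman_error = 0.3                # stand-in for the safety-aware Bellman error
                                   # (cost, value difference, barrier/Lagrangian term)

# Activation-function gradient: Jacobian of tanh at the hidden layer.
grad_act = np.diag(1.0 - z**2)

# SVD of the activation-function gradient; flooring its small singular values
# keeps the layer-wise update from vanishing when tanh saturates.
U, s, Vt = np.linalg.svd(grad_act)
grad_act_cond = U @ np.diag(np.maximum(s, 1e-2)) @ Vt

alpha = 0.05                       # learning rate (hypothetical)
# Layer-wise weight updates tuned by the same safety-aware Bellman error.
w2 = w2 + alpha * bellman_error * z
W1 = W1 + alpha * bellman_error * np.outer(grad_act_cond @ w2, x)
```

The SHAP explainer component can be sketched in a similar spirit. Assuming the shap package, the snippet below computes per-state feature attributions for a toy surrogate policy; the policy function and background states are hypothetical placeholders, not the learned SOAT policy.

```python
import numpy as np
import shap

def policy_fn(states):
    # Toy surrogate for the learned policy: batch of states -> scalar control.
    return states @ np.array([0.5, -1.0, 0.2, 0.8])

background = np.zeros((1, 4))                         # reference (baseline) states
explainer = shap.KernelExplainer(policy_fn, background)
shap_values = explainer.shap_values(np.ones((5, 4)))  # feature importances per state
```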
Page(s): 13556 - 13568
Date of Publication: 24 March 2025
