Abstract:
In safety-critical applications, it is crucial to verify and certify the decisions made by AI-driven Autonomous Systems (ASs). However, the black-box nature of the neural networks used in these systems often makes this difficult. Explainability can support the verification and certification process and thereby accelerate the deployment of such systems in safety-critical applications. This study investigates the explainability of AI-driven air combat agents via semantically grouped reward decomposition. The paper presents two use cases demonstrating how this approach helps both AI and non-AI experts evaluate and debug the behavior of reinforcement learning (RL) agents.
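To make the idea concrete, the following is a minimal sketch of reward decomposition in a tabular Q-learning setting. It assumes the environment's reward is split into hypothetical semantic groups ("offense", "safety", "efficiency"); these group names, and the class and method names, are illustrative and do not come from the paper. The agent learns one Q-table per group, acts on their sum, and the per-group Q-values of the chosen action serve as a simple "why" readout.

import numpy as np

GROUPS = ["offense", "safety", "efficiency"]  # hypothetical semantic reward groups

class DecomposedQAgent:
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.99):
        # One Q-table per semantic reward component.
        self.q = {g: np.zeros((n_states, n_actions)) for g in GROUPS}
        self.alpha, self.gamma = alpha, gamma

    def total_q(self, s):
        # The policy acts on the sum of the component Q-values.
        return sum(self.q[g][s] for g in GROUPS)

    def act(self, s):
        return int(np.argmax(self.total_q(s)))

    def update(self, s, a, rewards, s_next):
        # `rewards` maps each semantic group to its reward component.
        # A shared greedy next action keeps the components consistent,
        # so that the decomposed Q-values sum to the usual total Q-value.
        a_next = self.act(s_next)
        for g in GROUPS:
            td = rewards[g] + self.gamma * self.q[g][s_next, a_next] - self.q[g][s, a]
            self.q[g][s, a] += self.alpha * td

    def explain(self, s):
        # Per-group Q-values for the chosen action: which semantic
        # objective is driving the decision in this state.
        a = self.act(s)
        return {g: float(self.q[g][s, a]) for g in GROUPS}

In use, calling explain(s) after training reveals, for example, whether an agent's maneuver is dominated by the "offense" component or the "safety" component, which is the kind of per-objective attribution the paper's use cases rely on for evaluating and debugging agent behavior.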
Published in: 2023 IEEE Conference on Artificial Intelligence (CAI)
Date of Conference: 05-06 June 2023
Date Added to IEEE Xplore: 02 August 2023