A Survey on Causal Reinforcement Learning
IEEE Journals & Magazine (IEEE Xplore)


Abstract:

While reinforcement learning (RL) has achieved tremendous success in sequential decision-making problems across many domains, it still faces key challenges of data inefficiency and a lack of interpretability. Interestingly, many researchers have recently leveraged insights from the causality literature, producing a flourishing body of work that unifies the merits of causality with RL and addresses these challenges well. It is therefore both necessary and significant to collate these causal RL (CRL) works, offer a review of CRL methods, and investigate the potential benefits of causality for RL. In particular, we divide the existing CRL approaches into two categories according to whether their causality-based information is given in advance. We further analyze each category in terms of the formalization of different models, including the Markov decision process (MDP), partially observable MDP (POMDP), multiarmed bandits (MABs), imitation learning (IL), and the dynamic treatment regime (DTR), each of which corresponds to a distinct type of causal graphical illustration. Moreover, we summarize the evaluation metrics and open-source resources, and we discuss emerging applications along with promising prospects for the future development of CRL.
Page(s): 5942 - 5962
Date of Publication: 28 November 2024
PubMed ID: 40030342

