
Beyond exponential utility functions: A variance-adjusted approach for risk-averse reinforcement learning



Abstract:

Utility theory has served as a bedrock for modeling risk in economics. When risk is involved in decision-making, the exponential utility (EU) function has been used in the literature as an objective function for capturing risk-averse behavior in solving Markov decision processes (MDPs). The EU framework uses a so-called risk-averseness coefficient (RAC) that quantifies the risk appetite of the decision-maker. Unfortunately, as we show in this paper, the EU framework suffers from computational deficiencies that prevent it from being useful in practice for solution methods based on reinforcement learning (RL). In particular, the value function becomes very large, typically causing numerical overflow on the computer. We provide a simple example to demonstrate this. Further, we show empirically how a variance-adjusted (VA) approach, which approximates the EU objective for reasonable values of the RAC, can be used in the RL algorithm. The VA framework in a sense has two objectives: maximize expected returns and minimize variance. We conduct empirical studies of a VA-based RL algorithm on the semi-MDP (SMDP), which is a more general version of the MDP. We conclude with a mathematical proof of the boundedness of the iterates in our algorithm.
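The overflow problem and the VA surrogate described above can be illustrated with a small numerical sketch. The reward stream, horizon, and RAC value below are hypothetical choices for illustration, not the paper's experimental setup; the VA objective shown is the standard second-order approximation of the EU certainty equivalent (mean reward penalized by variance), which remains bounded where the exponential term does not.

```python
import math

# Hypothetical reward stream: 1000 steps alternating between 9 and 11
# (illustrative numbers only).
rewards = [9.0, 11.0] * 500
theta = 0.5  # risk-averseness coefficient (RAC); value chosen for illustration

# Exponential-utility-style accumulation: exp(theta * total_reward) exceeds
# double precision once theta * total_reward passes ~709, so the term
# overflows for long horizons.
total = sum(rewards)
try:
    eu_term = math.exp(theta * total)  # overflows here: theta * total = 5000
except OverflowError:
    eu_term = float("inf")

# Variance-adjusted (VA) surrogate: mean reward penalized by variance,
# a Taylor approximation of the EU certainty equivalent for small theta.
# This quantity stays bounded regardless of the horizon.
n = len(rewards)
mean = total / n
var = sum((r - mean) ** 2 for r in rewards) / n
va_objective = mean - (theta / 2.0) * var  # 10 - 0.25 * 1 = 9.75
```

The contrast mirrors the paper's argument: the EU term is unusable numerically for long horizons, while the VA objective trades expected return against variance in a bounded way.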
Date of Conference: 09-12 December 2014
Date Added to IEEE Xplore: 15 January 2015
Electronic ISBN: 978-1-4799-4552-8

Conference Location: Orlando, FL, USA
