
Efficient Preference-Based Reinforcement Learning Using Learned Dynamics Models



Abstract:

Preference-based reinforcement learning (PbRL) can enable robots to learn to perform tasks based on an individual's preferences without requiring a hand-crafted reward function. However, existing approaches either assume access to a high-fidelity simulator or analytic model or take a model-free approach that requires extensive, possibly unsafe online environment interactions. In this paper, we study the benefits and challenges of using a learned dynamics model when performing PbRL. In particular, we provide evidence that a learned dynamics model offers the following benefits when performing PbRL: (1) preference elicitation and policy optimization require significantly fewer environment interactions than model-free PbRL, (2) diverse preference queries can be synthesized safely and efficiently as a byproduct of standard model-based RL, and (3) reward pre-training based on suboptimal demonstrations can be performed without any environment interaction. Our paper provides empirical evidence that learned dynamics models enable robots to learn customized policies based on user preferences in ways that are safer and more sample-efficient than prior preference learning approaches. Supplementary materials and code are available at https://sites.google.com/berkeley.edu/mop-rl.
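
The abstract outlines three benefits of a learned dynamics model for PbRL but, being an abstract, gives no implementation detail. For orientation, the standard machinery such methods build on is a reward network trained with a Bradley-Terry loss over pairs of trajectory segments, where candidate query segments can be rolled out inside the learned model rather than the real environment (benefit 2). The PyTorch sketch below is illustrative only: RewardNet, rollout_in_model, and the stand-in dynamics and policy are hypothetical names, not taken from the authors' code at the linked site.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardNet(nn.Module):
    # Maps a (state, action) pair to a scalar reward estimate.
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def segment_return(reward_net, seg):
    # Sum predicted rewards over a trajectory segment (obs, act tensors).
    obs, act = seg
    return reward_net(obs, act).sum()

def preference_loss(reward_net, seg_a, seg_b, prefer_a):
    # Bradley-Terry model: P(A preferred) = exp(R_A) / (exp(R_A) + exp(R_B)),
    # optimized as a two-class cross-entropy over predicted segment returns.
    logits = torch.stack([segment_return(reward_net, seg_a),
                          segment_return(reward_net, seg_b)]).unsqueeze(0)
    target = torch.tensor([0 if prefer_a else 1])
    return F.cross_entropy(logits, target)

def rollout_in_model(dynamics, policy, obs, horizon):
    # Synthesize a query segment by rolling the policy in the learned
    # dynamics model, so no real environment interaction is needed.
    obs_list, act_list = [], []
    for _ in range(horizon):
        act = policy(obs)
        obs_list.append(obs)
        act_list.append(act)
        obs = dynamics(obs, act)  # learned one-step predictor
    return torch.stack(obs_list), torch.stack(act_list)

# Toy usage with random stand-ins for the learned model and policy.
obs_dim, act_dim, horizon = 4, 2, 10
reward_net = RewardNet(obs_dim, act_dim)
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)
dynamics = lambda o, a: o + 0.1 * torch.randn(obs_dim)
policy = lambda o: torch.randn(act_dim)
seg_a = rollout_in_model(dynamics, policy, torch.zeros(obs_dim), horizon)
seg_b = rollout_in_model(dynamics, policy, torch.zeros(obs_dim), horizon)
loss = preference_loss(reward_net, seg_a, seg_b, prefer_a=True)  # labeler picks A
opt.zero_grad(); loss.backward(); opt.step()

Under the same assumptions, benefit (3) amounts to applying this loss offline: one common scheme ranks suboptimal demonstration segments against (or among) model-generated segments to pre-train the reward network before any real rollouts.
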
Date of Conference: 29 May 2023 - 02 June 2023
Date Added to IEEE Xplore: 04 July 2023
Conference Location: London, United Kingdom

