
Multimodal Parameter-exploring Policy Gradients


Abstract:

Policy Gradients with Parameter-based Exploration (PGPE) is a novel model-free reinforcement learning method that alleviates the problem of high-variance gradient estimates encountered in normal policy gradient methods. It has been shown to drastically speed up convergence for several large-scale reinforcement learning tasks. However, the independent normal distributions used by PGPE to search through parameter space are inadequate for some problems with multimodal reward surfaces. This paper extends the basic PGPE algorithm to use multimodal mixture distributions for each parameter, while remaining efficient. Experimental results on the Rastrigin function and the inverted pendulum benchmark demonstrate the advantages of this modification, with faster convergence to better optima.
Date of Conference: 12-14 December 2010
Date Added to IEEE Xplore: 04 February 2011
Print ISBN: 978-1-4244-9211-4
Conference Location: Washington, DC, USA
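
The abstract only summarizes the method, so below is a minimal, illustrative sketch of the baseline PGPE loop it builds on, applied to the Rastrigin function mentioned in the experiments. The symmetric-sampling updates, step sizes, and moving-average baseline are common PGPE choices assumed here, not taken from this paper, and the paper's actual contribution (replacing each parameter's single Gaussian with a mixture distribution) is not reproduced.

import numpy as np


def rastrigin(x):
    """Rastrigin benchmark (minimization); global optimum 0 at x = 0."""
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))


def pgpe_rastrigin(dim=2, iterations=3000, alpha_mu=0.005, alpha_sigma=0.002, seed=0):
    """Basic PGPE with symmetric sampling and an independent Gaussian per parameter."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(-5.12, 5.12, size=dim)   # mean of the search distribution
    sigma = np.full(dim, 2.0)                 # per-parameter exploration std
    baseline = 0.0                            # moving-average reward baseline

    for t in range(iterations):
        # Symmetric sampling: evaluate mu + eps and mu - eps with one shared perturbation.
        eps = rng.normal(0.0, sigma)
        r_plus = -rastrigin(mu + eps)         # reward = negative cost
        r_minus = -rastrigin(mu - eps)
        r_mean = 0.5 * (r_plus + r_minus)
        baseline = r_mean if t == 0 else 0.9 * baseline + 0.1 * r_mean

        # Move the distribution parameters along the estimated reward gradient.
        mu += alpha_mu * 0.5 * (r_plus - r_minus) * eps
        sigma += alpha_sigma * (r_mean - baseline) * (eps ** 2 - sigma ** 2) / sigma
        sigma = np.clip(sigma, 1e-2, 5.0)     # keep exploration noise positive and bounded

    return mu, rastrigin(mu)


if __name__ == "__main__":
    mu, cost = pgpe_rastrigin()
    print(f"final mean: {mu}, Rastrigin value: {cost:.3f}")

On the highly multimodal Rastrigin surface this single-Gaussian search distribution often settles in a local optimum near its starting point, which is exactly the failure mode the multimodal mixture extension described in the abstract is intended to address.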

