Systems Control With Generalized Probabilistic Fuzzy-Reinforcement Learning

Authors (3): Hinojosa, W.M. (Robot. & Autom. Lab., Univ. of Salford, Manchester, UK); Nefti, S.; Kaymak, U.

Reinforcement learning (RL) is a valuable learning method when a system requires the selection of control actions whose consequences emerge over long periods and for which input-output data are not available. In most combinations of fuzzy systems and RL, the environment is assumed to be deterministic. In many problems, however, the consequence of an action may be uncertain or stochastic in nature. In this paper, we propose a novel RL approach that combines the universal-function-approximation capability of fuzzy systems with explicit consideration of probability distributions over the possible consequences of an action. The proposed generalized probabilistic fuzzy RL (GPFRL) method is a modified version of the actor-critic (AC) learning architecture. Learning is enhanced by the introduction of a probability measure into the learning structure, where an incremental gradient-descent weight-updating algorithm provides convergence. Our results show that the proposed approach is robust under probabilistic uncertainty while also achieving fast learning and good overall performance.
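The abstract builds on the classic actor-critic (AC) architecture: a critic estimates value via temporal-difference (TD) errors, and an actor adjusts its action preferences by gradient ascent on those same errors. The sketch below is a minimal illustration of that baseline AC scheme on a one-state task with stochastic action outcomes; it is not the paper's GPFRL method, and all names and parameters (reward probabilities, learning rates) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 2
theta = np.zeros(n_actions)   # actor: action preferences
v = 0.0                       # critic: value estimate of the single state
alpha_actor, alpha_critic, gamma = 0.1, 0.1, 0.9

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Stochastic consequences: action 1 pays off with probability 0.8,
# action 0 with probability 0.2 -- the kind of probabilistic outcome
# the abstract argues a deterministic fuzzy-RL combination cannot model.
pay_prob = [0.2, 0.8]

for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(n_actions, p=p)
    r = 1.0 if rng.random() < pay_prob[a] else 0.0
    td = r + gamma * v - v             # TD error (same state recurs)
    v += alpha_critic * td             # critic: incremental gradient step
    grad = -p.copy()
    grad[a] += 1.0                     # gradient of log softmax policy
    theta += alpha_actor * td * grad   # actor: preference update

p_final = softmax(theta)               # policy should favor action 1
```

After training, the learned policy concentrates probability on the action with the higher expected payoff; GPFRL extends this idea by attaching probability measures over action consequences to a fuzzy rule base rather than to a single discrete state.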

Published in:

IEEE Transactions on Fuzzy Systems (Volume: 19, Issue: 1)