
Fuzzy rule interpolation and reinforcement learning


Abstract:

Reinforcement Learning (RL) methods became popular decades ago and remain one of the mainstream topics in computational intelligence. Countless RL methods and variants can be found in the literature, each with its own advantages and disadvantages in a given application domain. The revealed knowledge can be represented in several ways depending on the exact RL method, including, e.g., simple discrete Q-tables, fuzzy rule bases, and artificial neural networks. Introducing interpolation within the knowledge base allows the omission of less important, redundant information while still keeping the system functional. FRIQ-learning, a Fuzzy Rule Interpolation-based (FRI) RL method, possesses this feature. By omitting the unimportant, dependent fuzzy rules - thereby emphasizing the cardinal entries of the knowledge representation - FRIQ-learning is also suitable for knowledge extraction. In this paper the fundamental concepts of FRIQ-learning and the associated extensions of the method, along with benchmarks, are discussed.
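The core idea the abstract describes - keeping only a sparse set of cardinal rules and interpolating Q-values for states and actions that fall between them - can be illustrated with a minimal sketch. This is not the paper's actual method: FRIQ-learning uses fuzzy antecedents and an FRI technique (such as the FIVE method) for the interpolation, whereas the toy below uses crisp (state, action) points and simple inverse-distance (Shepard) weighting; the rule base values are made up for illustration.

```python
import math

# Hypothetical sparse rule base: antecedent (state, action) -> consequent Q-value.
# In FRIQ-learning the antecedents would be fuzzy sets and the blending would be
# done by fuzzy rule interpolation; Shepard weighting stands in for it here.
rules = {
    (0.0, 0.0): 1.0,
    (1.0, 0.0): 0.2,
    (0.0, 1.0): 0.5,
    (1.0, 1.0): 0.9,
}

def interpolated_q(state, action, p=2.0):
    """Estimate Q(state, action) from the sparse rule base."""
    num, den = 0.0, 0.0
    for (s, a), q in rules.items():
        d = math.hypot(state - s, action - a)
        if d == 0.0:          # exact rule hit: return its consequent directly
            return q
        w = 1.0 / d ** p      # inverse-distance weight
        num += w * q
        den += w
    return num / den

# A query between the stored rules yields a blended Q-value, which is why
# rules covering "in-between" situations can be omitted from the rule base.
q_mid = interpolated_q(0.5, 0.5)   # weighted blend of the four stored rules
```

Because the estimate degrades gracefully between stored rules, rules whose consequents are already well predicted by their neighbours are redundant and can be dropped - the knowledge-extraction property the abstract refers to.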
Date of Conference: 26-28 January 2017
Date Added to IEEE Xplore: 20 March 2017
Conference Location: Herl'any, Slovakia

