
Q-learning algorithm using an adaptive-sized Q-table


Abstract:

Q-learning is one of the well-established algorithms for reinforcement learning. It directly estimates optimal Q-values for pairs of states and admissible actions. Using these Q-values, agents can obtain optimal actions in controlled Markovian domains without an explicit model of the system. However, the algorithm requires a large number of trial-and-error actions in the early stages of learning. In this paper, a Q-learning algorithm using a Memory Based Learning (MBL) system is proposed. Through the generalization property of the MBL system, the learning effect on one Q-value is spread to adjacent Q-values, reducing the number of trial-and-error actions required. Finally, computer simulation results for the control of inverted pendulums are presented to show the effectiveness of the proposed method.
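
The core idea stated in the abstract, spreading each Q-value update to adjacent entries so that fewer trial-and-error actions are needed, can be illustrated with a small sketch. The snippet below is an assumption-laden illustration, not the authors' method: it uses plain tabular Q-learning on a hypothetical 1-D toy task, with a Gaussian neighbourhood kernel standing in for the paper's MBL system and adaptive-sized Q-table.

```python
# Minimal sketch of "spread the learning effect to adjacent Q-values".
# Assumptions (not from the paper): a 1-D random-walk task, a fixed-size table,
# and a Gaussian weighting over neighbouring states with radius RADIUS.
import numpy as np

N_STATES = 21           # discretized 1-D state space (goal at the right end)
N_ACTIONS = 2           # 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
RADIUS, SIGMA = 2, 1.0  # how far, and how strongly, an update spreads

Q = np.zeros((N_STATES, N_ACTIONS))

def step(s, a):
    """Toy dynamics: move left/right; reward 1 for reaching the last state."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0), s2 == N_STATES - 1

def spread_update(s, a, td_error):
    """Apply the TD update to state s and, with Gaussian weights, to nearby states."""
    for ds in range(-RADIUS, RADIUS + 1):
        s_n = s + ds
        if 0 <= s_n < N_STATES:
            w = np.exp(-0.5 * (ds / SIGMA) ** 2)  # weight 1 at s, smaller for neighbours
            Q[s_n, a] += ALPHA * w * td_error

rng = np.random.default_rng(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        td_error = r + GAMMA * (0.0 if done else np.max(Q[s2])) - Q[s, a]
        spread_update(s, a, td_error)  # generalized update instead of a single-cell one
        s = s2

print(np.argmax(Q, axis=1))  # learned greedy policy over the discretized states
```

Compared with a single-cell update, the neighbourhood update lets one observed transition improve the estimates for nearby, as-yet-unvisited states, which is the mechanism the abstract credits for reducing early-stage trial and error.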
Date of Conference: 07-10 December 1999
Date Added to IEEE Xplore: 06 August 2002
Print ISBN: 0-7803-5250-5
Print ISSN: 0191-2216
Conference Location: Phoenix, AZ, USA

