
An implementation efficient learning algorithm for adaptive control using associative content addressable memory

2 Author(s)
Yendo Hu; Dept. of Electr. & Comput. Eng., University of California, San Diego, La Jolla, CA, USA; Fellman, R.

Three modifications to the Boxes-ASE/ACE reinforcement learning algorithm improve implementation efficiency and performance. A state history queue (SHQ) eliminates computations for temporally insignificant states. A dynamic link table allocates control memory only to states the system traverses. CMAC state association uses previous learning to decrease training time. Simulations show a 4-fold improvement in learning. In a hardware implementation of the pole-cart balancer, the SHQ reduces computation time 11-fold.
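
The abstract names the modifications without giving an algorithm listing. The following is a minimal sketch, not the authors' implementation, of how two of the ideas might look in code: a dynamic link table that allocates a weight entry only when a state is first visited, and a state history queue that restricts credit updates to recently traversed states. The class name, parameters (queue depth, learning rate, eligibility decay), and the bang-bang action rule are assumptions for illustration only.

```python
from collections import deque


class SparseASEController:
    """Illustrative sketch: lazy per-state allocation (dynamic link table)
    plus a bounded state history queue (SHQ) for credit assignment."""

    def __init__(self, queue_len=32, alpha=0.5, decay=0.9):
        self.alpha = alpha              # learning rate (assumed value)
        self.decay = decay              # eligibility decay per SHQ step (assumed)
        # Dynamic link table: entries are created on first visit instead of
        # pre-allocating one weight per quantized state ("box").
        self.weights = {}
        # State history queue: only these states receive reinforcement updates.
        self.shq = deque(maxlen=queue_len)

    def action(self, state):
        # Allocate control memory only for states the system actually traverses.
        w = self.weights.setdefault(state, 0.0)
        self.shq.append(state)
        return 1 if w >= 0.0 else -1    # bang-bang action, as in pole-cart control

    def reinforce(self, r_hat):
        # Update only the states held in the SHQ, most recent first; states
        # that have fallen out of the queue are treated as temporally
        # insignificant and skipped entirely.
        eligibility = 1.0
        for state in reversed(self.shq):
            self.weights[state] += self.alpha * r_hat * eligibility
            eligibility *= self.decay
```

In this sketch the SHQ plays the role of a truncated eligibility trace: limiting updates to a fixed-length queue bounds the per-step work regardless of how many states have ever been visited, which is the kind of saving the abstract attributes to the SHQ in the hardware pole-cart balancer.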

Published in:

IEEE Transactions on Systems, Man, and Cybernetics (Volume: 25, Issue: 4)