The Adaptive Critic Learning Agent (ACLA) algorithm: Towards problem independent neural network based optimizers

Authors: Udhay Ravishankar, Electrical and Computer Engineering Dept., University of Idaho, Idaho Falls, USA; Milos Manic

This paper presents the development of a new neural network based optimizer called the Adaptive Critic Learning Agent (ACLA) algorithm. The ACLA algorithm is based on the traditional Adaptive Critic Design (ACD) algorithm, hence its name. Conventional neural network based optimizers use the principle of Hopfield/Tank Neural Networks (HTNN) to solve unimodal optimization problems, and these networks require structures tailored to the specific optimization problem. The ACLA algorithm presented in this paper instead uses a general, randomly initialized neural network to solve any unimodal optimization problem, achieved by extending the principles of the traditional ACD algorithm. Other attributes of the ACLA algorithm address known issues with swarm based optimizers such as Particle Swarm Optimization (PSO) and Genetic Algorithms (GA): (1) large memory requirements and (2) multiple parameters that must be tuned for convergence performance. The ACLA algorithm resolves these issues by (1) using only one neuron, which reduces memory requirements, and (2) using only a single learning coefficient to tune convergence performance. The ACLA algorithm was tested and compared with three swarm based optimizers on two unimodal benchmark problems typically used for PSO and GA algorithms. Test results showed the ACLA algorithm converging to solutions seven orders of magnitude better than those of the swarm based algorithms. The ACLA algorithm was further tested on two multimodal benchmark problems to demonstrate its capability to converge to the nearest local minima.
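For intuition only, the sketch below illustrates the kind of lightweight optimizer the abstract contrasts with swarm methods: a single state vector and a single learning coefficient driving descent on a unimodal benchmark (the sphere function commonly used in PSO/GA studies). This is not the authors' ACLA algorithm; the function names, the finite-difference update rule, and the parameter values are assumptions made purely for illustration.

```python
# Hypothetical sketch, NOT the ACLA algorithm from the paper: a minimal
# single-state optimizer whose only tunable knob is one learning coefficient.
import numpy as np

def sphere(x):
    """Unimodal benchmark commonly used for PSO/GA comparisons."""
    return float(np.sum(x ** 2))

def single_coefficient_descent(cost, x0, learning_coefficient=0.1,
                               iters=200, eps=1e-6):
    """Minimize `cost` using one state vector and one tunable coefficient.

    A finite-difference gradient estimate stands in for a learned
    evaluation signal; only `learning_coefficient` needs tuning.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        base = cost(x)
        grad = np.zeros_like(x)
        for i in range(x.size):
            step = np.zeros_like(x)
            step[i] = eps
            grad[i] = (cost(x + step) - base) / eps  # forward difference
        x = x - learning_coefficient * grad          # single-coefficient update
    return x, cost(x)

if __name__ == "__main__":
    solution, value = single_coefficient_descent(sphere, x0=np.array([3.0, -2.0]))
    print(f"solution ~ {solution}, cost ~ {value:.3e}")
```

On a unimodal problem such a loop converges to the single minimum; on a multimodal problem it settles into the nearest local minimum, mirroring the behavior the abstract reports for ACLA on the multimodal benchmarks.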

Published in: The 2012 International Joint Conference on Neural Networks (IJCNN)

Date of Conference: 10-15 June 2012