Interaction of Culture-Based Learning and Cooperative Co-Evolution and its Application to Automatic Behavior-Based System Design

Authors: A.-M. Farahmand (Dept. of Electrical & Computer Engineering, University of Tehran, Tehran, Iran), M. N. Ahmadabadi, C. Lucas, and B. N. Araabi

Designing an intelligent situated agent is a difficult task because the designer must see the problem from the agent's viewpoint, considering all its sensors, actuators, and computation systems. In this paper, we introduce a bio-inspired hybridization of reinforcement learning, cooperative co-evolution, and a cultural-inspired memetic algorithm for the automatic development of behavior-based agents. Reinforcement learning is responsible for the individual-level adaptation. Cooperative co-evolution performs at the population level and provides basic decision-making modules for the reinforcement-learning procedure. The culture-based memetic algorithm, which is a new computational interpretation of the meme metaphor, increases the lifetime performance of agents by sharing learning experiences between all agents in the society. In this paper, the design problem is decomposed into two different parts: 1) developing a repertoire of behavior modules and 2) organizing them in the agent's architecture. Our proposed cooperative co-evolutionary approach solves the first problem by evolving behavior modules in their separate genetic pools. We address the problem of relating the fitness of the agent to the fitness of behavior modules by proposing two fitness sharing mechanisms, namely uniform and value-based fitness sharing mechanisms. The organization of behavior modules in the architecture is determined by our structure learning method. A mathematical formulation is provided that shows how to decompose the value of the structure into simpler components. These values are estimated during learning and are used to find the organization of behavior modules during the agent's lifetime. To accelerate the learning process, we introduce a culture-based method based on our new interpretation of the meme metaphor. Our proposed memetic algorithm is a mechanism for sharing learned structures among agents in the society. Lifetime performance of the agent, which is quite important for real-world applications, increases considerably when the memetic algorithm is in action. Finally, we apply our methods to two benchmark problems: an abstract problem and a decentralized multirobot object-lifting task, and we achieve human-competitive architecture designs.
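The abstract describes cooperative co-evolution of behavior modules and the two fitness-sharing schemes only in prose. The sketch below is a minimal, self-contained Python illustration of that general idea, not the authors' implementation: every name in it (random_module, evaluate_agent, share_fitness) is hypothetical, the environment is replaced by a dummy scoring function, and the two crediting branches merely illustrate the distinction between uniform and value-based fitness sharing mentioned above.

# Illustrative sketch (assumed, simplified): each behavior module evolves in its
# own genetic pool; an agent is assembled from one module per pool; the
# agent-level fitness is credited back to its constituent modules either
# uniformly or in proportion to a per-module value estimate.

import random

N_POOLS = 3          # number of behavior-module pools (one per behavior)
POOL_SIZE = 10       # candidate modules per pool
GENOME_LEN = 8       # toy genome: a parameter vector per module


def random_module():
    """A toy behavior module: a random parameter vector plus bookkeeping fields."""
    return {"genome": [random.uniform(-1, 1) for _ in range(GENOME_LEN)],
            "fitness": 0.0}


def evaluate_agent(modules):
    """Stand-in for running the assembled agent in its environment.
    Returns (agent_fitness, per_module_values); both are toy quantities here."""
    values = [sum(m["genome"]) for m in modules]   # hypothetical per-module value estimate
    return sum(values) + random.gauss(0, 0.1), values


def share_fitness(modules, agent_fitness, values, scheme="uniform"):
    """Credit the agent-level fitness back to its constituent modules."""
    if scheme == "uniform":
        for m in modules:
            m["fitness"] += agent_fitness / len(modules)
    else:  # value-based: weight the credit by each module's estimated value
        total = sum(abs(v) for v in values) or 1.0
        for m, v in zip(modules, values):
            m["fitness"] += agent_fitness * abs(v) / total


pools = [[random_module() for _ in range(POOL_SIZE)] for _ in range(N_POOLS)]

for generation in range(20):
    # Assemble and evaluate a batch of agents, drawing one module from each pool.
    for _ in range(POOL_SIZE):
        team = [random.choice(pool) for pool in pools]
        fit, vals = evaluate_agent(team)
        share_fitness(team, fit, vals, scheme="value-based")

    # Simple truncation selection plus Gaussian mutation inside each pool.
    for pool in pools:
        pool.sort(key=lambda m: m["fitness"], reverse=True)
        survivors = pool[: POOL_SIZE // 2]
        children = []
        for parent in survivors:
            child = random_module()
            child["genome"] = [g + random.gauss(0, 0.05) for g in parent["genome"]]
            children.append(child)
        pool[:] = survivors + children
        for m in pool:
            m["fitness"] = 0.0   # reset accumulated credit each generation

In the paper's terms, the "uniform" branch splits an agent's fitness equally among its modules, while the "value-based" branch rewards modules in proportion to how much they are estimated to contribute; the reinforcement-learning and memetic components of the full method are omitted from this toy loop.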

Published in:

IEEE Transactions on Evolutionary Computation (Volume 14, Issue 1)

Date of Publication:

Feb. 2010
