When an agent's environment changes dynamically, knowledge acquired in one environment can become useless in subsequent environments. Agents should therefore be able not only to acquire new knowledge but also to modify old knowledge as they learn. However, modifying all acquired knowledge is not always efficient: knowledge, once acquired, may become useful again when the same (or a similar) environment reappears, and some knowledge can be shared among different environments. To learn efficiently in such situations, we propose a neural network model consisting of four modules: a resource allocating network, a long-term memory, an association buffer, and an environmental change detector. We apply this model to a simple dynamic environment in which several target functions to be approximated are switched in turn.
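As a rough illustration of two of the modules named above, the sketch below implements a toy resource allocating network (using Platt-style allocation: a new radial-basis unit is added when both the prediction error and the distance to the nearest existing unit exceed thresholds, otherwise existing weights get a gradient step) together with a simple moving-average error monitor standing in for the environmental change detector. All class names, thresholds, and the target function are illustrative assumptions, not the paper's actual model; the long-term memory and association buffer are omitted.

```python
import math

class RANSketch:
    """Toy resource allocating network: a 1-D Gaussian RBF net that
    grows a new hidden unit when it sees a novel, poorly-predicted input.
    (Illustrative only; thresholds and widths are arbitrary choices.)"""

    def __init__(self, err_thresh=0.05, dist_thresh=0.25, width=0.2, lr=0.1):
        self.centers, self.weights = [], []
        self.err_thresh, self.dist_thresh = err_thresh, dist_thresh
        self.width, self.lr = width, lr

    def predict(self, x):
        return sum(w * math.exp(-((x - c) ** 2) / (2 * self.width ** 2))
                   for c, w in zip(self.centers, self.weights))

    def train_step(self, x, y):
        err = y - self.predict(x)
        # distance to the nearest existing hidden unit (inf if none yet)
        d = min((abs(x - c) for c in self.centers), default=float("inf"))
        if abs(err) > self.err_thresh and d > self.dist_thresh:
            # novel input, badly predicted -> allocate a new hidden unit
            self.centers.append(x)
            self.weights.append(err)
        else:
            # familiar input -> gradient step on the existing weights
            for i, c in enumerate(self.centers):
                phi = math.exp(-((x - c) ** 2) / (2 * self.width ** 2))
                self.weights[i] += self.lr * err * phi
        return abs(err)

class ChangeDetector:
    """Flags an environmental change when the mean prediction error
    over a recent window jumps above a threshold."""

    def __init__(self, window=3, threshold=0.5):
        self.window, self.threshold, self.errors = window, threshold, []

    def update(self, err):
        self.errors.append(err)
        recent = self.errors[-self.window:]
        return len(recent) == self.window and sum(recent) / len(recent) > self.threshold

# Demo: approximate one target function from a switching family.
net = RANSketch()
xs = [i / 10 for i in range(11)]
target = lambda x: math.sin(2 * math.pi * x)
before = sum(abs(target(x) - net.predict(x)) for x in xs) / len(xs)
for _ in range(200):
    for x in xs:
        net.train_step(x, target(x))
after = sum(abs(target(x) - net.predict(x)) for x in xs) / len(xs)
print(f"hidden units: {len(net.centers)}, error {before:.3f} -> {after:.3f}")
```

In a dynamic environment, a triggered detector would be the cue to consult long-term memory (restoring units for a previously seen environment) rather than overwriting everything, which is the efficiency argument made above.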