We propose a new method for information-theoretic competitive learning that maximizes information about input patterns as well as target patterns. The method is called teacher-directed information maximization, because target information directs networks to produce appropriate outputs. Target information is given in the input layer, so errors need not be back-propagated as in conventional supervised learning methods; this makes it a very efficient supervised learning method. In the new method, we use information-theoretic competitive learning with Gaussian activation functions to simulate competition, because information maximization is accelerated by changing the width of these functions. Teacher information is added by distorting the distance between input patterns and connection weights. We applied the method to two problems, a road classification problem and a voting attitude problem. In both problems, we show that training errors are significantly decreased and better generalization performance is obtained.
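The abstract does not give the paper's equations, but the core mechanism it describes, competitive units whose activations are Gaussian functions of the distance between an input pattern and each unit's connection weights, with mutual information between patterns and units as the objective, can be sketched as follows. This is a minimal illustration under common assumptions (equiprobable input patterns, normalized Gaussian activations interpreted as p(unit | pattern)); the function names, shapes, and the width parameter `sigma` are illustrative, not taken from the paper.

```python
import numpy as np

def competitive_activations(X, W, sigma):
    """Normalized Gaussian activations p(j | s) of competitive units.

    X: (S, n) array of S input patterns; W: (M, n) array of M weight vectors;
    sigma: width of the Gaussian activation function (smaller width sharpens
    competition toward winner-take-all behavior).
    Returns an (S, M) row-stochastic matrix of unit activations.
    """
    # Squared Euclidean distance between every pattern and every weight vector.
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    v = np.exp(-d2 / (2.0 * sigma ** 2))
    return v / v.sum(axis=1, keepdims=True)

def mutual_information(P):
    """Mutual information I(j; s) = H(j) - H(j | s) between competitive
    units j and input patterns s, assuming equiprobable patterns."""
    S = P.shape[0]
    pj = P.mean(axis=0)                          # marginal firing rate p(j)
    H_j = -(pj * np.log(pj + 1e-12)).sum()       # unconditional entropy H(j)
    H_j_s = -(P * np.log(P + 1e-12)).sum() / S   # conditional entropy H(j|s)
    return H_j - H_j_s
```

Shrinking `sigma` makes each pattern concentrate its activation on its nearest unit, which raises the mutual information toward its maximum of log M; this is one plausible reading of the abstract's remark that information maximization is accelerated by changing the width of the activation functions.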
Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (Volume 4)
Date of Conference: 25-29 July 2004