In this paper, we report the generalization performance of a new, efficient learning method called teacher-directed learning. In this method, information on targets, or teachers, is maximized before learning; this teacher information directs input patterns to activate the correct competitive units. Because the connection weights on the teachers are all fixed so as to maximize information, only the small number of connections between input units and competitive units needs to be updated, which makes the method computationally efficient. However, the generalization performance of the new method had not yet been evaluated. In this paper, we use the Iris problem and the voting attitude problem to show that its generalization performance is better than that of conventional methods. In addition, the experimental results reconfirm that the method yields clearer internal representations.
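The mechanism described above can be illustrated with a minimal sketch: teacher-to-competitive weights are held fixed so that each teacher (class) signal drives its own competitive unit, while only the input-to-competitive weights are learned. The variable names, the identity teacher matrix, and the prototype-style update rule below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_classes = 4, 3  # e.g., the Iris problem: 4 features, 3 classes
W_in = rng.normal(scale=0.1, size=(n_classes, n_inputs))  # trainable input weights
W_teacher = np.eye(n_classes)  # fixed: teacher j fully activates competitive unit j

def train_step(x, label, lr=0.1):
    """Update only the input weights of the unit the teacher selects."""
    winner = np.argmax(W_teacher[:, label])   # teacher directs the winning unit
    W_in[winner] += lr * (x - W_in[winner])   # move that unit's prototype toward x
    return winner

def predict(x):
    """At test time no teacher is available; the nearest prototype wins."""
    return int(np.argmin(np.linalg.norm(W_in - x, axis=1)))

# Toy data: three well-separated clusters standing in for three classes.
X = np.vstack([rng.normal(c, 0.05, size=(20, n_inputs))
               for c in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 20)

for _ in range(5):
    for x, lab in zip(X, y):
        train_step(x, lab)

acc = np.mean([predict(x) == lab for x, lab in zip(X, y)])
```

Because the teacher weights never change, each training step touches only one row of `W_in`, which is where the claimed computational efficiency comes from in this sketch.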