Abstract:
Usually, weight changes in neural networks are caused exclusively by some hard-wired learning algorithm with many specific limitations. The author shows that it is in principle possible to let the network run and improve its own weight change algorithm (without significant theoretical limits). The author derives an initial gradient-based supervised sequence learning algorithm for an 'introspective' recurrent network that can 'speak' about its own weight matrix in terms of activations. It uses special subsets of its input and output units for observing its own errors and for explicitly analyzing and manipulating all of its own weights, including those weights responsible for analyzing and manipulating weights. The result is the first 'self-referential' neural network with explicit potential control over all adaptive parameters governing its behavior.
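The abstract describes the architecture only in prose, so the following is a minimal, hypothetical sketch of the core idea: a recurrent net whose output vector contains 'analyzing' units that address one of its own weights (whose current value is then fed back as an input, letting the net 'speak' about its weight matrix) and 'modifying' units that address a weight and add a small delta to it. The class name `SelfReferentialToyNet`, the soft softmax addressing, and all sizes and constants are assumptions of this sketch, which also omits the paper's gradient-based supervised learning algorithm entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes -- chosen for illustration, not taken from the paper.
N_EXT_IN, N_EXT_OUT, N = 2, 1, 8  # external inputs/outputs, hidden state size

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class SelfReferentialToyNet:
    """Toy recurrent net with special I/O units in the spirit of the
    abstract: 'analyzing' outputs address one of the net's own weights,
    whose value is fed back as an input; 'modifying' outputs address a
    weight and add a small delta to it.  The soft addressing scheme and
    all constants are this sketch's assumptions, not the 1993 algorithm."""

    def __init__(self):
        # Recurrent weights: the matrix the net inspects and edits.
        self.W = rng.normal(0.0, 0.1, size=(N, N))
        # Input weights; +2 inputs: addressed-weight value, observed error.
        self.U = rng.normal(0.0, 0.1, size=(N, N_EXT_IN + 2))
        # Readout: task output, read row/col addresses,
        # write row/col addresses, and one write-delta unit.
        self.V = rng.normal(0.0, 0.1, size=(N_EXT_OUT + 4 * N + 1, N))

    def step(self, h, x_ext, observed_error, read_val):
        x = np.concatenate([x_ext, [read_val, observed_error]])
        h = np.tanh(self.W @ h + self.U @ x)
        o = self.V @ h
        y = o[:N_EXT_OUT]                          # ordinary task output
        i = N_EXT_OUT
        ra = softmax(o[i:i + N])                   # read address, rows
        rb = softmax(o[i + N:i + 2 * N])           # read address, columns
        wa = softmax(o[i + 2 * N:i + 3 * N])       # write address, rows
        wb = softmax(o[i + 3 * N:i + 4 * N])       # write address, columns
        delta = 0.01 * np.tanh(o[-1])              # bounded self-modification step
        read_val_next = float(ra @ self.W @ rb)    # net 'speaks' about one weight
        self.W += delta * np.outer(wa, wb)         # net manipulates one weight
        return h, y, read_val_next

# Minimal usage: the error the net observes is its own previous task error.
net = SelfReferentialToyNet()
h, read_val, err = np.zeros(N), 0.0, 0.0
for t in range(10):
    x_ext = rng.normal(size=N_EXT_IN)
    h, y, read_val = net.step(h, x_ext, err, read_val)
    err = float(((y - 0.0) ** 2).sum())            # toy target: all zeros
print("addressed weight value after 10 steps:", read_val)
```

Note that because the modifying units write into W itself, the weights that compute the addresses and delta are among the weights being modified, which is the self-referential property the abstract highlights.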
Date of Conference: 25-27 May 1993
Date Added to IEEE Xplore: 06 August 2002
Print ISBN: 0-85296-573-7
Conference Location: Brighton, UK