Efficient estimation of neural weights by polynomial approximation

1 Author(s)
G. Ritter; Fakultät für Math. und Inf., Passau Univ., Germany

It has been known for some years that the uniform-density problem for feedforward neural networks has a positive answer: any real-valued, continuous function on a compact subset of R^d can be uniformly approximated by a sigmoidal neural network with one hidden layer. We design here algorithms for efficient uniform approximation by a certain class of neural networks with one hidden layer which we call nearly exponential. This class contains, e.g., all networks with the activation functions 1/(1+e^{-t}), tanh(t), or e^t ∧ 1 in their hidden layers. The algorithms flow from a theorem stating that such networks attain the order of approximation O(N^{-1/d}), d being the dimension and N the number of hidden neurons. This theorem, in turn, is a consequence of a close relationship between neural networks of nearly exponential type and multivariate algebraic and exponential polynomials. The algorithms need neither a starting point nor learning parameters; they do not get stuck in local minima, and the gain in execution time relative to the backpropagation algorithm is enormous. The size of the hidden layer can be bounded analytically as a function of the precision required.
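The link between such activations and exponential polynomials can be illustrated numerically. For the logistic sigmoid σ(t) = 1/(1+e^{-t}), the scaled unit (1/ε)·σ(t + log ε) converges to e^t uniformly on compact intervals as ε → 0, so a single hidden neuron can emulate an exponential term. The sketch below (not the paper's algorithm; all names are illustrative) checks this convergence on [-1, 1]:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def exp_via_sigmoid(t, eps):
    # One hidden neuron with weight 1, bias log(eps), output scale 1/eps:
    # (1/eps) * sigmoid(t + log eps) -> e^t uniformly on compacts as eps -> 0.
    return sigmoid(t + math.log(eps)) / eps

def max_error(eps, lo=-1.0, hi=1.0, n=200):
    # Uniform (sup-norm) error against e^t on a grid over [lo, hi].
    ts = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return max(abs(exp_via_sigmoid(t, eps) - math.exp(t)) for t in ts)

print(max_error(1e-3))  # ≈ 7.4e-3 (error shrinks linearly in eps)
print(max_error(1e-6))  # ≈ 7.4e-6
```

The pointwise error is ε·e^t/(ε + e^{-t}) ≈ ε·e^{2t}, so halving ε halves the uniform error on a fixed compact set; this is the sense in which the sigmoid is "nearly exponential".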

Published in:

IEEE Transactions on Information Theory (Volume: 45, Issue: 5)