
A universal learning rule that minimizes well-formed cost functions

Authors: I. Mora-Jiménez and J. Cid-Sueiro, Dept. of Signal Theory and Communications, Univ. Carlos III de Madrid, Leganés (Madrid), Spain

In this paper, we analyze stochastic gradient learning rules for posterior probability estimation using networks with a single layer of weights and a general nonlinear activation function. We provide necessary and sufficient conditions on the learning rules and the activation function to obtain probability estimates. We also extend the concept of well-formed cost functions, proposed by Wittner and Denker, to multiclass problems, and we provide theoretical results showing the advantages of this kind of objective function.
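
As a concrete illustration of the setting the abstract describes, the following is a minimal sketch, not the paper's general construction: a single layer of weights with a softmax activation trained by stochastic gradient descent on the cross-entropy cost, a classical activation/cost pairing whose outputs are known to estimate posterior class probabilities. All variable names, hyperparameters, and the toy Gaussian data here are hypothetical; the paper characterizes which other rule/activation pairings share this posterior-estimation property.

```python
# Hedged sketch: stochastic gradient learning of posterior probabilities
# with a single layer of weights and a softmax activation (assumed example
# pairing, not the paper's general rule). Toy data is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax activation."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy two-class Gaussian data: class 0 centered at -1, class 1 at +1.
n, d, k = 500, 2, 2
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, d)),
               rng.normal(+1.0, 1.0, (n // 2, d))])
y = np.repeat([0, 1], n // 2)

W = np.zeros((k, d))   # the single layer of weights
b = np.zeros(k)
lr = 0.1               # learning rate (arbitrary choice)

for epoch in range(20):
    for i in rng.permutation(n):
        p = softmax(W @ X[i] + b)      # network output, an estimate of P(class | x)
        t = np.eye(k)[y[i]]            # one-hot target
        g = p - t                      # stochastic gradient of cross-entropy
                                       # w.r.t. the pre-activation
        W -= lr * np.outer(g, X[i])    # stochastic gradient update
        b -= lr * g

print("estimated posterior at x = [1, 1]:",
      softmax(W @ np.array([1.0, 1.0]) + b))
```

The update g = p - t is what makes this pairing "well behaved": the gradient of the cross-entropy with respect to the pre-activation reduces to the output error, so the rule pushes the outputs toward the class posteriors rather than merely toward correct classifications.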

Published in: IEEE Transactions on Neural Networks (Volume 16, Issue 4)