
On the probabilistic interpretation of neural network classifiers and discriminative training criteria


Author: H. Ney (Lehrstuhl für Informatik VI, Technische Hochschule Aachen, Germany)

A probabilistic interpretation is presented for two important issues in neural network based classification: the interpretation of discriminative training criteria and of the neural network outputs, and the interpretation of the structure of the neural network. The problem of finding a suitable structure for the neural network can be linked to a number of well-established techniques in statistical pattern recognition. Discriminative training of neural network outputs amounts to approximating the class posterior probabilities of the classical statistical approach. This paper extends these links by introducing and analyzing novel criteria, such as maximizing the class probability and minimizing the smoothed error rate. These criteria are defined in the framework of class-conditional probability density functions. We show that these criteria can be interpreted in terms of weighted maximum likelihood estimation. In particular, this approach covers widely used techniques such as corrective training, learning vector quantization, and linear discriminant analysis.
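The central claim that discriminative training makes network outputs approximate class posterior probabilities can be checked numerically. The following sketch (not from the paper; the Gaussian setup, learning rate, and iteration count are illustrative assumptions) trains a single-neuron "network" with the cross-entropy criterion on two 1-D Gaussian classes, where the Bayes posterior is known in closed form, and compares the learned parameters to the true ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D Gaussian classes with equal priors and known densities
# (illustrative setup, not taken from the paper).
mu0, mu1, sigma = -1.0, 1.0, 1.0
n = 5000
x = np.concatenate([rng.normal(mu0, sigma, n), rng.normal(mu1, sigma, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Single-output "network": p(class 1 | x) = sigmoid(w*x + b),
# trained discriminatively with the cross-entropy criterion.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= lr * np.mean((p - y) * x)   # gradient of cross-entropy w.r.t. w
    b -= lr * np.mean(p - y)         # gradient of cross-entropy w.r.t. b

# Bayes posterior from the true class-conditional densities is also
# a sigmoid of a linear function of x, with these parameters:
w_true = (mu1 - mu0) / sigma**2               # = 2.0
b_true = (mu0**2 - mu1**2) / (2 * sigma**2)   # = 0.0

print(w, b)  # should land near (w_true, b_true) = (2.0, 0.0)
```

Up to sampling noise, the discriminatively trained parameters recover the Bayes posterior, which is the relationship the abstract describes for the general case.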

Published in:

IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 17, Issue: 2)