Online learning using a Bayesian surprise metric

4 Author(s)

Hasanbelliu, E. (Electr. & Comput. Eng. Dept., Univ. of Florida, Gainesville, FL, USA); Kampa, K.; Principe, J.C.; Cobb, J.T.

Abstract:

[…] defines learning as the process of acquiring knowledge. In psychology, learning is defined as the modification of behavior through training. In our work, we combine these definitions to define learning as the modification of a system model to incorporate the knowledge acquired by new observations. During learning, the system creates and modifies a model to improve its performance. As new samples are introduced, the system updates its model based on the new information they provide. However, this update does not necessarily improve the model. We propose a Bayesian surprise metric to differentiate good data (beneficial) from outliers (detrimental), and thus to selectively adapt the model parameters. The surprise metric is calculated from the difference between the prior and the posterior distributions of the model when a new sample is introduced. The metric is useful not only for identifying outlier data, but also for distinguishing data that carry useful information for improving the model from data that carry no new information (redundant). Allowing only the relevant data to update the model speeds up the learning process and prevents the system from overfitting. The method is demonstrated in all three learning procedures: supervised, semi-supervised and unsupervised. The results show the benefit of surprise in both clustering and outlier detection.
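The surprise computation described in the abstract can be illustrated on a deliberately simple model. The paper does not specify the model family used here; the sketch below assumes a conjugate Gaussian model of an unknown mean with known observation noise, so the posterior after one sample is available in closed form, and measures surprise as the KL divergence from the posterior to the prior. The function names (`kl_gaussian`, `surprise_of_sample`) and the thresholding step are illustrative, not from the paper.

```python
import math

def kl_gaussian(m1, s1, m2, s2):
    """KL divergence KL(N(m1, s1^2) || N(m2, s2^2)) between two 1-D Gaussians."""
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

def surprise_of_sample(x, prior_mean, prior_var, noise_var):
    """Bayesian surprise of observation x under a Gaussian mean model.

    Performs the conjugate update of the mean given one observation with
    known noise variance, then returns KL(posterior || prior).
    """
    # Conjugate Gaussian update: precisions add, means combine precision-weighted.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    post_mean = post_var * (prior_mean / prior_var + x / noise_var)
    return kl_gaussian(post_mean, math.sqrt(post_var),
                       prior_mean, math.sqrt(prior_var))

# A sample near the prior mean barely moves the posterior (low surprise,
# i.e. redundant), while a distant sample moves it a lot (high surprise,
# i.e. a candidate outlier). A threshold between the two regimes decides
# whether the model parameters are updated.
low = surprise_of_sample(0.1, prior_mean=0.0, prior_var=1.0, noise_var=1.0)
high = surprise_of_sample(10.0, prior_mean=0.0, prior_var=1.0, noise_var=1.0)
```

Here `low` is small and `high` is large, matching the abstract's use of surprise to separate redundant samples from informative or outlying ones; in practice the accept/reject threshold would be tuned per task.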

Published in:

The 2012 International Joint Conference on Neural Networks (IJCNN)

Date of Conference:

10-15 June 2012