
Boltzmann Machines Reduction by High-Order Decimation

Authors:

E. Farguell (Eng. i Arquitectura La Salle, Univ. Ramon Llull, Barcelona); F. Mazzanti; E. Gomez-Ramirez

Decimation is a common technique in statistical physics that is used in the context of Boltzmann machines (BMs) to drastically reduce the computational cost of the learning stage. Decimation makes it possible to analytically evaluate quantities that would otherwise have to be estimated statistically by means of Monte Carlo (MC) simulations. In its original formulation, however, the method could only be applied to restricted topologies corresponding to sparsely connected neural networks. In this brief, we present a generalization of the decimation process and prove that it can be used on any BM, regardless of its topology and connectivity. We solve the Monk problem with this algorithm and show that it performs as well as the best classification methods currently available.
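For context on the "original formulation" the abstract refers to, the sketch below illustrates the classical serial decimation rule for a sparsely connected BM: a single ±1 hidden unit attached to two other units through weights w1 and w2 (no biases) can be summed out exactly, leaving an effective direct coupling w_eff with tanh(w_eff) = tanh(w1)·tanh(w2) and a spin-independent prefactor. This is only a minimal illustration under those simplifying assumptions; the function names are hypothetical, and the high-order generalization to arbitrary topologies introduced in the paper is not reproduced here.

```python
import numpy as np

# Serial decimation of one +/-1 hidden unit h coupled to units s1 and s2 with
# weights w1 and w2 (no biases). Summing h out of exp(w1*s1*h + w2*s2*h) gives
# A * exp(w_eff * s1 * s2), with tanh(w_eff) = tanh(w1) * tanh(w2) and
# A = 2 * sqrt(cosh(w1 + w2) * cosh(w1 - w2)).

def decimate_serial(w1, w2):
    """Effective coupling and prefactor obtained by decimating the hidden unit."""
    w_eff = np.arctanh(np.tanh(w1) * np.tanh(w2))
    A = 2.0 * np.sqrt(np.cosh(w1 + w2) * np.cosh(w1 - w2))
    return w_eff, A

def check(w1, w2):
    """Brute-force check: the reduced model reproduces the exact marginal for all s1, s2."""
    w_eff, A = decimate_serial(w1, w2)
    for s1 in (-1, 1):
        for s2 in (-1, 1):
            exact = sum(np.exp(w1 * s1 * h + w2 * s2 * h) for h in (-1, 1))
            reduced = A * np.exp(w_eff * s1 * s2)
            assert np.isclose(exact, reduced), (s1, s2, exact, reduced)

check(0.7, -1.3)
print(decimate_serial(0.7, -1.3))
```

Repeatedly applying rules of this kind removes hidden units analytically, which is why decimation avoids the MC estimation step for the topologies where it applies; the paper's contribution is extending this idea beyond such sparsely connected cases.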

Published in:

IEEE Transactions on Neural Networks (Volume 19, Issue 10)