
Extracting stochastic machines from recurrent neural networks trained on complex symbolic sequences

Author(s): Tino, P. (Dept. of Comput. Sci. & Eng., Slovak Acad. of Sci., Bratislava, Slovakia); Vojtek, V.

We train a recurrent neural network on a single, long, complex symbolic sequence with positive entropy. The training process is monitored through information-theoretic performance measures. We show that although the sequence is unpredictable, the network is able to encode the sequence's topological and statistical structure in recurrent neuron activation scenarios. Such scenarios can be compactly represented by stochastic machines extracted from the trained network. The generative models, i.e., the trained recurrent networks and the extracted stochastic machines, are compared using entropy spectra of the sequences they generate. In addition, entropy spectra computed directly from the machines capture the generalization abilities of the extracted machines and are related to the machines' long-term behavior.
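The abstract compresses the extraction method into two steps: quantize the trained network's recurrent activations into a finite set of machine states, and estimate symbol-conditional transition probabilities between those states. Below is a minimal sketch of that recipe, assuming access to the hidden-state trajectory of an already-trained network; the k-means quantizer, the state count, and all function and variable names are illustrative assumptions, not the authors' exact procedure. A simple block-entropy estimator is included to suggest one way the entropy spectra of generated sequences might be compared.

```python
import numpy as np
from collections import Counter

def extract_stochastic_machine(hidden_states, symbols, n_states=8, iters=50, seed=0):
    """Quantize RNN hidden-state trajectories into a stochastic machine.

    hidden_states: (T, d) array of recurrent activations, one per input symbol.
    symbols:       length-T sequence of the symbols that produced those activations.
    Returns (labels, probs), where probs[i, s, j] estimates the probability of
    moving from machine state i to state j while emitting symbol s.
    """
    rng = np.random.default_rng(seed)
    # Plain k-means over the activation space -- a stand-in for whatever
    # quantization of "activation scenarios" the paper actually uses.
    centers = hidden_states[rng.choice(len(hidden_states), n_states, replace=False)]
    for _ in range(iters):
        d2 = ((hidden_states[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for k in range(n_states):
            if (labels == k).any():
                centers[k] = hidden_states[labels == k].mean(axis=0)
    # Count symbol-conditional transitions between quantized states.
    alphabet = sorted(set(symbols))
    sym_id = {s: i for i, s in enumerate(alphabet)}
    counts = np.zeros((n_states, len(alphabet), n_states))
    for t in range(len(symbols) - 1):
        counts[labels[t], sym_id[symbols[t + 1]], labels[t + 1]] += 1
    # Normalize each state's outgoing (symbol, next-state) counts into probabilities.
    totals = counts.sum(axis=(1, 2), keepdims=True)
    probs = np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
    return labels, probs

def block_entropy(symbols, n):
    """Shannon entropy (bits) of length-n blocks; block_entropy(s, n) / n is a
    crude estimate of the entropy rate of the sequence s."""
    blocks = Counter(tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1))
    p = np.array(list(blocks.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())
```

Under these assumptions, comparing block_entropy curves of sequences sampled from the trained network against those sampled from the extracted machine is one concrete realization of the comparison via entropy spectra described in the abstract.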

Published in:

Proceedings of the First International Conference on Knowledge-Based Intelligent Electronic Systems (KES '97), Volume 2

Date of Conference:

21-23 May 1997