Extracting finite-state representations from recurrent neural networks trained on chaotic symbolic sequences

2 Author(s)
P. Tino (Dept. of Comput. Sci. & Eng., Slovak Tech. Univ., Bratislava, Slovakia); M. Koteles

Concerns neural-based modeling of symbolic chaotic time series. We investigate the knowledge induction process associated with training recurrent neural networks (RNNs) on single long chaotic symbolic sequences. Even though training an RNN to predict the next symbol leaves standard performance measures, such as the mean square error on the network output, virtually unchanged, the nets extract a great deal of knowledge. We monitor the knowledge extraction process by considering the nets' stochastic sources and letting them generate sequences, which are then confronted with the training sequence via information-theoretic entropy and cross-entropy measures. We also study the possibility of reformulating the knowledge gained by the RNNs in the compact, easy-to-analyze form of finite-state stochastic machines. The experiments are performed on two sequences of different complexity, measured by the size and state-transition structure of the induced Crutchfield ε-machines (1991, 1994). The extracted machines can achieve comparable or even better entropy and cross-entropy performance. They reflect the training-sequence complexity in their dynamical state representations, which can be reformulated using finite-state means. The findings are confirmed by a much more detailed analysis of model-generated sequences. We also introduce a visual representation of the allowed block structure in the studied sequences that gives illustrative insight into both the RNN training and the finite-state stochastic machine extraction processes.
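The abstract's comparison of model-generated sequences against the training sequence rests on block entropy and cross-entropy estimates. The paper itself does not give code, but a minimal sketch of that style of estimate, using empirical length-n block distributions (all function names here are illustrative, not from the paper), might look like:

```python
from collections import Counter
from math import log2

def block_dist(seq, n):
    """Empirical distribution of length-n blocks in a symbol sequence."""
    blocks = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    total = len(blocks)
    return {b: c / total for b, c in Counter(blocks).items()}

def block_entropy(seq, n):
    """Per-symbol block entropy H_n / n, in bits."""
    p = block_dist(seq, n)
    return -sum(q * log2(q) for q in p.values()) / n

def cross_entropy(train_seq, model_seq, n, eps=1e-12):
    """Per-symbol cross-entropy of the model-generated sequence's
    block distribution against the training sequence's, in bits.
    eps smooths blocks the model never generates, so log2 stays finite."""
    p = block_dist(train_seq, n)
    q = block_dist(model_seq, n)
    return -sum(pb * log2(q.get(b, eps)) for b, pb in p.items()) / n
```

For example, a balanced binary sequence has 1 bit per symbol of block-1 entropy, and a sequence's cross-entropy against itself reduces to its own entropy; large block lengths n would need correspondingly long sequences for reliable estimates.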

Published in:

IEEE Transactions on Neural Networks (Volume: 10, Issue: 2)