
ScaleNet: multiscale neural-network architecture for time series prediction


Author: A. B. Geva (Dept. of Electr. & Comput. Eng., Ben-Gurion Univ. of the Negev, Beer-Sheva, Israel)

The effectiveness of a multiscale neural-network architecture for time series prediction of nonlinear dynamic systems is investigated. The prediction task is simplified by decomposing different scales of past windows into different scales of wavelets and predicting the coefficients of each wavelet scale with a separate multilayer perceptron. The short-term history is decomposed into the lower scales of wavelet coefficients, which are used for detailed analysis and prediction, while the long-term history is decomposed into the higher scales of wavelet coefficients, which are used for the analysis and prediction of slow trends in the time series. These coordinated scales of time and frequency provide an interpretation of the series structures and more information about the history of the series, using fewer coefficients than other methods. The results from the individual scales are combined by another expert perceptron, which learns the weight of each scale in the final prediction of the original time series. Each network is trained by backpropagation. The weights and biases are initialized by a clustering algorithm applied to the temporal patterns of the time series, which improves the prediction results compared to random initialization. The suggested multiscale architecture outperforms the corresponding single-scale architectures. Employing improved learning methods for each of the ScaleNet networks can further improve the prediction results.
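The pipeline described above can be sketched in a few dozen lines. The following is a minimal, hypothetical illustration (not the paper's implementation): it uses a hand-rolled Haar wavelet decomposition, a tiny one-hidden-layer perceptron per scale, an expert perceptron that combines the per-scale outputs, and plain backpropagation with stochastic gradient descent. For simplicity it uses random initialization rather than the paper's clustering-based initialization, and a toy noisy-sine prediction task; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_dwt(x, levels):
    """Split a past window into Haar detail coefficients per scale plus the
    coarsest approximation: low scales carry short-term detail, high scales
    carry the slow trends, as in the abstract."""
    scales, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        scales.append((even - odd) / np.sqrt(2.0))   # detail at this scale
        approx = (even + odd) / np.sqrt(2.0)
    scales.append(approx)                            # long-term trend part
    return scales

class MLP:
    """Tiny one-hidden-layer perceptron with plain backprop/SGD; a
    hypothetical stand-in for each per-scale network and the expert net."""
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def forward(self, x):
        self.x = x
        self.h = np.tanh(self.W1 @ x + self.b1)
        return float(self.W2 @ self.h + self.b2)

    def backward(self, grad, lr=0.05):
        """One SGD step given d(loss)/d(output); returns grad w.r.t. input."""
        dh = grad * self.W2 * (1.0 - self.h ** 2)
        dx = self.W1.T @ dh
        self.W2 -= lr * grad * self.h
        self.b2 -= lr * grad
        self.W1 -= lr * np.outer(dh, self.x)
        self.b1 -= lr * dh
        return dx

# Toy task: one-step-ahead prediction of a noisy sine.
t = np.arange(600)
series = np.sin(2 * np.pi * t / 32) + 0.1 * rng.normal(size=t.size)

window, levels = 16, 3
dims = [len(s) for s in haar_dwt(np.zeros(window), levels)]
experts = [MLP(d, 4) for d in dims]   # one perceptron per wavelet scale
combiner = MLP(len(experts), 4)       # expert net weighting the scales

def predict(win):
    coeffs = haar_dwt(win, levels)
    per_scale = np.array([e.forward(c) for e, c in zip(experts, coeffs)])
    return combiner.forward(per_scale), per_scale

def mse(start, end):
    return np.mean([(predict(series[i - window:i])[0] - series[i]) ** 2
                    for i in range(start, end)])

mse_before = mse(len(series) - 100, len(series))
for epoch in range(5):                       # backpropagation training
    for i in range(window, len(series)):
        yhat, _ = predict(series[i - window:i])
        err = yhat - series[i]               # d(loss)/d(yhat) for 0.5*err^2
        g = combiner.backward(err)           # gradient into each scale input
        for expert, ge in zip(experts, g):
            expert.backward(float(ge))
mse_after = mse(len(series) - 100, len(series))
```

Note how the error signal is backpropagated through the expert combiner first, and the resulting per-scale input gradients then drive each scale's own perceptron update, so the scales are trained jointly toward the final prediction.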

Published in:

IEEE Transactions on Neural Networks (Volume: 9, Issue: 6)