Fast and efficient second-order training of the dynamic neural network paradigm

Authors: Gruber, C. (Passau Univ., Germany); Sick, B.

In many applications, neural networks must process or generate time series, and various network paradigms exist for this purpose. Two prominent examples are time-delay neural networks (TDNN), which are known for their noise suppression capability, and NARX (nonlinear autoregressive models with exogenous inputs) networks, which have a powerful modeling ability (they are at least Turing equivalent). In this article, we suggest a combination of these two approaches, called the dynamic neural network (DYNN), which unifies their particular advantages. Efficient training algorithms are needed to adjust the weights of a DYNN. Here, we describe an algorithm for the computation of first-order information about the error surface: temporal backpropagation through time (TBPTT). Essentially, this algorithm is a combination of temporal backpropagation (used for TDNN) and backpropagation through time (used for NARX). The first-order information is then utilized to apply the scaled conjugate gradient (SCG) learning algorithm, which approximates second-order information using only first-order information. The benefits of this approach are demonstrated on two benchmark data sets, "logistic map" and "building": SCG for DYNN is significantly faster and more accurate than other learning algorithms (e.g., TBPTT, resilient propagation, memoryless quasi-Newton).
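To make the architecture concrete, the following is a minimal sketch of a DYNN-style forward pass. It is not the authors' implementation: the tap counts p and q, the single tanh hidden layer, and all names are illustrative assumptions. At each time step the network sees a tapped delay line over the exogenous input (the TDNN ingredient) concatenated with a tapped delay line over its own fed-back outputs (the NARX ingredient).

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed dimensions (illustrative, not from the paper): p delayed
    # inputs, q fed-back outputs, one hidden tanh layer.
    p, q, n_hidden = 3, 2, 8

    W1 = rng.normal(scale=0.1, size=(n_hidden, p + q))  # input/feedback -> hidden
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=n_hidden)           # hidden -> output
    b2 = 0.0

    def dynn_forward(x):
        # At step t the network input is [x_t, ..., x_{t-p+1}, y_{t-1}, ..., y_{t-q}]:
        # a TDNN-style delay line on the exogenous input plus NARX-style
        # feedback of the network's own past outputs.
        x_taps = np.zeros(p)
        y_taps = np.zeros(q)
        y = np.zeros(len(x))
        for t in range(len(x)):
            x_taps = np.roll(x_taps, 1)
            x_taps[0] = x[t]
            z = np.concatenate([x_taps, y_taps])
            h = np.tanh(W1 @ z + b1)
            y[t] = W2 @ h + b2
            y_taps = np.roll(y_taps, 1)
            y_taps[0] = y[t]
        return y

    # Example: one pass over a noisy sine wave.
    x = np.sin(np.linspace(0.0, 8.0 * np.pi, 200)) + 0.05 * rng.normal(size=200)
    print(dynn_forward(x)[:5])

The claim that SCG approximates second-order information with first-order information refers to the finite-difference trick from Møller's SCG: the Hessian-vector product needed by the conjugate-gradient recursion is estimated from two gradient evaluations instead of an explicit Hessian. A sketch, assuming a hypothetical grad_fn that returns the TBPTT gradient of the error at a given weight vector:

    def curvature_along(grad_fn, w, d, sigma=1e-4):
        # Finite-difference estimate of the Hessian-vector product H @ d,
        # computed from first-order (gradient) information only, as in SCG.
        return (grad_fn(w + sigma * d) - grad_fn(w)) / sigma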

Published in:

Proceedings of the International Joint Conference on Neural Networks (IJCNN), 2003 (Volume 4)

Date of Conference:

20-24 July 2003