In many applications, neural networks must process or generate time series, and various network paradigms exist for this purpose. Two prominent examples are time-delay neural networks (TDNN), which are known for their noise-suppression capability, and NARX networks (nonlinear autoregressive models with exogenous inputs), which have powerful modeling ability (at least Turing equivalence). In this article, we suggest a combination of these two approaches, called the dynamic neural network (DYNN), which unifies their particular advantages. Efficient training algorithms are needed to adjust the weights of a DYNN. Here, we describe an algorithm for computing first-order information about the error surface: temporal backpropagation through time (TBPTT). Essentially, this algorithm is a combination of temporal backpropagation (used for TDNN) and backpropagation through time (used for NARX). The first-order information is then used to apply the scaled conjugate gradient (SCG) learning algorithm, which approximates second-order information using only first-order information. The benefits of this approach are shown by means of two benchmark data sets: "logistic map" and "building". It is shown that SCG for DYNN is significantly faster and more accurate than other learning algorithms (e.g., TBPTT, resilient propagation, memoryless quasi-Newton).
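As a minimal sketch, the "logistic map" benchmark series can be generated by iterating the standard recurrence x_{n+1} = r·x_n·(1 − x_n); the parameter choices below (r = 4.0, the fully chaotic regime, and x0 = 0.2) are illustrative assumptions, since the abstract does not state the paper's exact settings:

```python
def logistic_map(x0, r=4.0, n=100):
    """Generate n values of the logistic map starting from x0.

    r=4.0 and the initial value are assumed parameters, not taken
    from the paper; r=4.0 gives the classic chaotic benchmark series.
    """
    xs = [x0]
    for _ in range(n - 1):
        # Standard logistic-map recurrence: x_{k+1} = r * x_k * (1 - x_k)
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Example: a 500-point series suitable as a one-step-ahead prediction target.
series = logistic_map(0.2, n=500)
```

A time-series network would then typically be trained to predict `series[t+1]` from a window of past values `series[t-d:t+1]`.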