Training neural networks with the global extended Kalman filter (GEKF) technique generally yields excellent performance, but at the expense of a large increase in computational cost, which can become prohibitive even for networks of moderate size. This drawback has previously been addressed by heuristically decoupling some of the network weights. Inevitably, such ad hoc decoupling degrades the quality (accuracy) of the resulting neural networks. In this paper, we present an algorithm that emulates the accuracy of GEKF but avoids constructing the state covariance matrix, the source of the computational bottleneck in GEKF. In the proposed algorithm, all the synaptic weights remain connected, while the amount of computer memory required is comparable to, or lower than, that of the decoupling schemes. We also point out that the new method can be extended to derivative-free nonlinear Kalman filters, such as the unscented Kalman filter and ensemble Kalman filters.
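To make the bottleneck concrete, the following is a minimal sketch of a standard GEKF-style weight update, in which the full network weight vector is treated as the filter state. This is textbook extended Kalman filtering, not the paper's proposed algorithm; all names (`gekf_step`, the toy dimensions, and the Jacobian `H`) are illustrative assumptions. The point is the explicit n-by-n state covariance `P`, whose storage and update cost grow quadratically in the number of weights.

```python
import numpy as np

def gekf_step(w, P, H, y, y_hat, R=1.0):
    """One global EKF update treating all weights as the state.

    w     : (n,)   weight vector (filter state estimate)
    P     : (n, n) state covariance -- the memory/compute bottleneck
    H     : (1, n) Jacobian of the scalar network output w.r.t. w
    y     : scalar training target
    y_hat : scalar network output for the current input
    R     : measurement-noise variance
    """
    S = float(H @ P @ H.T) + R        # innovation covariance (scalar)
    K = P @ H.T / S                   # Kalman gain, shape (n, 1)
    w_new = w + (K * (y - y_hat)).ravel()
    P_new = P - K @ H @ P             # O(n^2) covariance update
    return w_new, P_new

# Toy example: 4 weights, identity prior covariance.
n = 4
w = np.zeros(n)
P = np.eye(n)                         # explicit n x n matrix: memory ~ n^2
H = np.ones((1, n))                   # hypothetical Jacobian for one sample
w, P = gekf_step(w, P, H, y=1.0, y_hat=0.0)
```

For a network with, say, 10^5 weights, `P` alone would require on the order of 10^10 entries, which illustrates why decoupling schemes block-diagonalize `P` and why avoiding its construction altogether is attractive.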