The particle swarm optimisation (PSO) algorithm has been established as a useful global optimisation algorithm for multidimensional search spaces. A practical example is its success in training feed-forward neural networks. Such successes, however, must be judged relative to the complexity of the search space. We show that the effectiveness of the PSO algorithm breaks down when it is extended to high-dimensional "highly convex" search spaces, such as those encountered in training recurrent neural networks. A comparative study of backpropagation methods reveals the importance of an adaptive learning rate to their success. We briefly review the physics of the particle swarm optimiser, and use this view to introduce an analogous adaptive time step. Finally, we demonstrate that the new adaptive algorithm shows improved performance on the recurrent network training problem.
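To make the idea concrete, the sketch below shows one PSO velocity-and-position update written with an explicit time step `dt`. The specific update rule, the constriction-style coefficients, and the `pso_step` interface are illustrative assumptions, not the paper's formulation; the abstract only states that a time step analogous to an adaptive learning rate is introduced. Standard PSO corresponds to `dt = 1`, and an adaptive variant would shrink `dt` when the swarm stops improving.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, dt, w=0.729, c1=1.494, c2=1.494, rng=None):
    """One PSO update with an explicit time step dt (hypothetical formulation).

    x, v       : current particle positions and velocities
    pbest      : each particle's best-known position
    gbest      : the swarm's best-known position
    dt         : time step; dt = 1 recovers the standard update, and
                 reducing dt plays the role of an adaptive learning rate
    w, c1, c2  : commonly used inertia and acceleration coefficients
    """
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # Attraction toward personal and global bests, scaled by the time step.
    v = w * v + dt * (c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    # Position update also advances by dt, so dt -> 0 freezes the swarm.
    x = x + dt * v
    return x, v
```

With `dt = 0` the positions are left unchanged and the velocities simply decay by the inertia weight, which is the limiting behaviour an adaptive scheme exploits when progress stalls.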