A practical way to implement the particle filter (PF) on a massively parallel computer is discussed. Although the PF is a useful tool for sequential Bayesian estimation, it tends to be computationally expensive when applied to high-dimensional problems because an enormous number of particles is required to appropriately approximate a probability density function (PDF). One way to overcome this problem is to exploit the large computing resources of a massively parallel computer. However, in implementing the PF on such a machine, it is crucial to reduce the time cost of data transfer between different processing elements (PEs). In addition, on a parallel computer with a multidimensional torus network topology, it is necessary to avoid data transfers between distant nodes. The present study proposes a strategy in which the PEs in use are divided into small groups and the grouping is changed at each time step. Resampling is carried out within each group in parallel, so data transfers between distant nodes never occur. The time cost of data transfer is therefore greatly reduced, and the efficiency is markedly improved in comparison with the standard PF.
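The grouped-resampling idea can be sketched in a few lines. The following is a minimal single-process illustration, not the authors' implementation: particles standing in for those held by different PEs are partitioned into contiguous groups, systematic resampling runs independently inside each group (so no cross-group, i.e. long-range, communication would be needed), and a step-dependent cyclic shift of the group membership stands in for the changing grouping that mixes particles across steps. The function name `local_resample` and the shift rule are assumptions for illustration.

```python
import numpy as np

def local_resample(particles, weights, n_groups, step, rng=None):
    """Resample within small groups only; rotate group membership each step.

    Hypothetical sketch of group-local resampling. `particles` is (N, d),
    `weights` has length N, and N is assumed divisible by n_groups.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)
    group_size = n // n_groups
    # Step-dependent cyclic shift: a stand-in for "the grouping is
    # changed at each time step" so particles mix across groups over time.
    shift = (step * (group_size // 2 + 1)) % n
    order = np.roll(np.arange(n), shift)

    new_particles = np.empty_like(particles)
    new_weights = np.full(n, 1.0 / n)  # uniform weights after resampling
    for g in range(n_groups):
        idx = order[g * group_size:(g + 1) * group_size]
        w = weights[idx]
        w = w / w.sum()  # normalize locally within the group
        # Systematic resampling using only this group's particles,
        # so no data from outside the group is ever touched.
        u = (rng.random() + np.arange(group_size)) / group_size
        pos = np.minimum(np.searchsorted(np.cumsum(w), u), group_size - 1)
        new_particles[idx] = particles[idx[pos]]
    return new_particles, new_weights
```

In a real distributed setting each group would map to a set of nearby nodes on the torus, and the loop over groups would run concurrently on separate PEs.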