Continuous attractors of recurrent neural networks (RNNs) have attracted extensive interest in recent years. They are often used to describe the encoding of continuous stimuli such as orientation, direction of motion, and the spatial location of objects. This paper studies the dynamic shift mechanism of a class of continuous attractor neural networks. It shows that if the external input is a Gaussian-shaped bump whose center varies over time, then adding a slight shift to the weights destroys the symmetry of the Gaussian weight function. As a result, the activity profile shifts continuously without changing its shape, and the shift speed can be controlled accurately by a given constant. Simulations are presented to illustrate the theory.
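The shift mechanism described above can be sketched in a minimal simulation. This is an illustrative toy model, not the paper's implementation: it assumes a 1-D ring of neurons with Gaussian recurrent weights, and all parameter values (`N`, `a`, `shift`, `tau`) are arbitrary choices. Shifting the center of the Gaussian weight kernel by a small constant breaks its symmetry, and the activity bump then drifts in the direction of the shift.

```python
import numpy as np

# Minimal sketch of the weight-shift mechanism on a 1-D ring of N neurons.
# All parameter values are illustrative and not taken from the paper.
N, a, shift = 128, 0.5, 0.05    # neurons, Gaussian width, weight shift
tau, dt, steps = 1.0, 0.1, 150  # time constant, step size, iterations

x = np.linspace(-np.pi, np.pi, N, endpoint=False)

def gauss(d, width):
    """Gaussian of the wrapped distance d on the ring."""
    d = (d + np.pi) % (2 * np.pi) - np.pi
    return np.exp(-d ** 2 / (2 * width ** 2))

# Symmetric weights would be gauss(x_i - x_j, a); displacing the kernel
# center by `shift` destroys the symmetry and makes the bump travel.
W = gauss(x[:, None] - x[None, :] - shift, a) * (2 * np.pi / N)

u = gauss(x, a)                  # initial Gaussian activity bump at 0
for _ in range(steps):
    r = np.maximum(u, 0.0)       # rectified-linear firing rate
    u += dt / tau * (-u + W @ r)

peak = x[np.argmax(u)]           # bump center, drifted toward +shift
```

In this linearized sketch the drift speed scales with `shift`, matching the claim that the speed is controlled by a given constant; a full continuous attractor model would also include a gain-control (e.g. divisive inhibition) term, omitted here, to keep the bump amplitude bounded over long runs.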