Real-time brain-machine interfaces have estimated either the target of a movement or its kinematics. However, both are encoded in the brain. Moreover, movements are often goal-directed, made to reach a target. Hence, modeling the goal-directed nature of movements and incorporating the target information into the kinematic decoder can increase its accuracy. Using an optimal feedback control design, we develop a recursive Bayesian kinematic decoder that models goal-directed movements and combines the target information with the neural spiking activity during movement. To do so, we build a prior goal-directed state-space model for the movement using an optimal feedback control model of the sensorimotor system that aims to emulate the processes underlying actual motor control and takes sensory feedback into account. Most goal-directed models, however, depend on the movement duration, which is not known a priori to the decoder; this has prevented their real-time implementation. To resolve this duration uncertainty, the decoder discretizes the duration and consists of a bank of parallel point process filters, each combining the prior model for a discretized duration with the neural activity. The kinematics are computed by optimally combining these filter estimates. Using the feedback-controlled model and even a coarse discretization, the decoder significantly reduces the root-mean-square error in estimating reaching movements performed by a monkey.
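The bank-of-filters idea described above can be sketched as follows. This is a minimal illustrative simplification, not the authors' implementation: it uses a scalar position state, Gaussian observations in place of point-process (spiking) likelihoods, and Kalman filters in place of point process filters. The candidate durations, noise variances, and goal-directed pull-toward-target dynamics are all assumptions chosen for illustration.

```python
import numpy as np

# Sketch: a bank of parallel filters, one per candidate movement duration,
# whose estimates are mixed according to each duration model's evidence.
# Hypothetical simplification of the decoder described in the abstract.

rng = np.random.default_rng(0)
target = 1.0                     # known reach target (decoder side information)
durations = [20, 40, 60]         # coarse discretization of possible durations
q, r = 1e-4, 1e-2                # assumed process / observation noise variances

# Simulate a "true" goal-directed reach of duration 40 with noisy observations.
T_true, x = 40, 0.0
obs = []
for k in range(T_true):
    x += (target - x) / (T_true - k) + rng.normal(0, np.sqrt(q))
    obs.append(x + rng.normal(0, np.sqrt(r)))

# One filter per candidate duration; log-weights track each model's evidence.
means = np.zeros(len(durations))
vars_ = np.full(len(durations), 1e-2)
logw = np.zeros(len(durations))

estimates = []
for k, y in enumerate(obs):
    for i, T in enumerate(durations):
        steps_left = max(T - k, 1)
        a = 1.0 - 1.0 / steps_left           # goal-directed pull toward target
        m_pred = a * means[i] + (1 - a) * target
        v_pred = a * a * vars_[i] + q
        # Innovation likelihood updates this duration model's weight.
        s = v_pred + r
        logw[i] += -0.5 * (np.log(2 * np.pi * s) + (y - m_pred) ** 2 / s)
        # Standard Kalman measurement update.
        g = v_pred / s
        means[i] = m_pred + g * (y - m_pred)
        vars_[i] = (1 - g) * v_pred
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(float(w @ means))       # mixture of the filter estimates

print(f"final decoded position: {estimates[-1]:.3f} (target = {target})")
```

Even with this coarse three-duration bank, the weighted mixture tracks the reach without knowing the true duration, which is the key property the decoder exploits.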