Abstract:
Closed-loop decoder adaptation (CLDA) can improve brain-machine interface (BMI) performance. CLDA methods use batches of data to refit the decoder parameters during closed-loop operation. Recently, dynamic state-space algorithms have also been designed to fit the parameters of a point process filter (PPF) decoder. A main design parameter that needs to be selected in any CLDA algorithm is the learning rate, i.e., how fast the decoder parameters should be updated on the basis of new neural observations. So far, the learning rate of CLDA algorithms has been selected empirically using ad hoc methods. Here we develop a principled framework to calibrate the learning rate in adaptive state-space algorithms. The learning rate introduces a trade-off between the convergence rate and the steady-state error covariance of the estimated decoder parameters. Hence, our algorithm first finds an analytical upper bound on the steady-state error covariance as a function of the learning rate. It then finds the inverse mapping to select the optimal learning rate based on the maximum allowable steady-state error. Using numerical BMI experiments, we show that the calibration algorithm selects the optimal learning rate: it meets the requirement on the steady-state error level while achieving the fastest convergence rate possible at that level.
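To make the trade-off and the inverse-mapping step concrete, here is a minimal Python sketch, assuming a toy scalar model rather than the paper's point-process setting: a fixed decoder parameter w is re-estimated with a constant-gain recursion whose steady-state error variance P_ss(mu) = mu*r/(2 - mu) grows monotonically with the learning rate mu, while the per-step convergence factor (1 - mu)^2 improves. The bound, the function names steady_state_error and calibrate_learning_rate, and all parameter values are illustrative assumptions, not the paper's algorithm; the bisection mirrors the "inverse mapping" idea of picking the largest mu whose steady-state error stays below the allowed level.

import numpy as np

# Toy model (an assumption for this sketch, not the paper's bound):
# a fixed scalar decoder parameter w is tracked with the constant-gain
# recursion
#     w_hat[k+1] = w_hat[k] + mu * (y[k] - w_hat[k]),  y[k] = w + v[k],
# where v[k] ~ N(0, r) is observation noise. The error e[k] = w_hat[k] - w
# obeys e[k+1] = (1 - mu) * e[k] + mu * v[k], so its variance converges at
# rate (1 - mu)^2 per step to P_ss(mu) = mu * r / (2 - mu): a larger mu
# gives faster convergence but a larger steady-state error.

def steady_state_error(mu, r=1.0):
    """Steady-state error variance of the toy recursion (assumed bound)."""
    return mu * r / (2.0 - mu)

def calibrate_learning_rate(p_max, r=1.0, tol=1e-10):
    """Largest mu whose steady-state error variance stays below p_max.

    Bisection over mu in (0, 2); this inverse-mapping step works for any
    bound that increases monotonically with the learning rate.
    """
    lo, hi = 0.0, 2.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if steady_state_error(mid, r) <= p_max:
            lo = mid  # error requirement still met: can learn faster
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    r, p_max = 1.0, 0.05
    mu = calibrate_learning_rate(p_max, r)
    print(f"calibrated mu = {mu:.6f}  (closed form: {2 * p_max / (r + p_max):.6f})")

    # Monte Carlo check: run many independent copies of the recursion and
    # compare the empirical steady-state error variance against p_max.
    rng = np.random.default_rng(0)
    w, w_hat = 1.0, np.zeros(5000)
    for _ in range(20000):
        y = w + rng.normal(0.0, np.sqrt(r), size=w_hat.shape)
        w_hat += mu * (y - w_hat)
    print(f"empirical steady-state variance = {np.var(w_hat - w):.4f} "
          f"(allowed: {p_max})")

For this toy bound the inverse is also available in closed form, mu = 2*p_max/(r + p_max); the bisection is shown because the same inversion applies to any monotone bound on the steady-state error.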
Published in: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
Date of Conference: 25-29 August 2015
Date Added to IEEE Xplore: 05 November 2015
PubMed ID: 26736596