
Dataset Information


Optimizing the learning rate for adaptive estimation of neural encoding models.


ABSTRACT: Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, an analytical approach for its selection is currently lacking, and existing signal processing methods largely tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that the learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of the learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of the learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, while smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
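
To make the trade-off concrete, below is a minimal Python sketch, not the authors' code or the paper's analytical calibration algorithm, of adaptive estimation with a fixed learning rate for a scalar linear Gaussian encoding model. All names and values (C_TRUE, NOISE_STD, the eta grid) are illustrative assumptions; the sketch only exposes empirically what the paper predicts analytically: larger learning rates shorten convergence time but raise steady-state parameter error, and smaller learning rates do the reverse.

# Minimal sketch (assumption: NOT the paper's calibration algorithm).
# Adaptively estimate the parameter c of a scalar linear Gaussian
# encoding model y_t = c * x_t + v_t with a fixed learning rate eta,
# then measure convergence time and steady-state error for several etas.
import numpy as np

rng = np.random.default_rng(0)

C_TRUE = 2.0      # true encoding parameter (illustrative)
NOISE_STD = 0.5   # observation noise standard deviation (illustrative)
N_STEPS = 5000    # number of adaptive updates

def adapt(eta):
    """Run an LMS-style stochastic-gradient update with learning rate eta."""
    c_hat = 0.0
    errors = np.empty(N_STEPS)
    for t in range(N_STEPS):
        x = rng.normal()                           # brain-state sample
        y = C_TRUE * x + NOISE_STD * rng.normal()  # neural observation
        c_hat += eta * (y - c_hat * x) * x         # gradient step on squared error
        errors[t] = (c_hat - C_TRUE) ** 2          # squared parameter error
    return errors

for eta in (0.005, 0.05, 0.5):
    err = adapt(eta)
    # Convergence time: first step at which the error falls below 10% of
    # its initial value; steady-state error: mean over the last 10% of steps.
    conv = int(np.argmax(err < 0.1 * err[0]))
    steady = err[-N_STEPS // 10:].mean()
    print(f"eta={eta:5.3f}  convergence step ~{conv:5d}  "
          f"steady-state error ~{steady:.4f}")

Sweeping eta here is a brute-force stand-in for the calibration step: the paper instead derives explicit functions mapping the learning rate to steady-state error covariance and convergence time, so the rate can be chosen analytically rather than by this kind of empirical tuning.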

SUBMITTER: Hsieh HL 

PROVIDER: S-EPMC5993334 | biostudies-literature | 2018 May

REPOSITORIES: biostudies-literature


Publications

Optimizing the learning rate for adaptive estimation of neural encoding models.

Han-Lin Hsieh, Maryam M. Shanechi

PLoS Computational Biology, 2018 May 29; 14(5)


Similar Datasets

| S-EPMC7584453 | biostudies-literature
| S-EPMC5976558 | biostudies-literature
| S-EPMC4762277 | biostudies-literature
| S-EPMC9011806 | biostudies-literature
| S-EPMC8299461 | biostudies-literature
| S-EPMC6248928 | biostudies-literature
| S-EPMC2887627 | biostudies-literature
| S-EPMC3596400 | biostudies-literature
| S-EPMC9377634 | biostudies-literature
| S-EPMC10447473 | biostudies-literature