
Dataset Information


Learning and forgetting using reinforced Bayesian change detection.


ABSTRACT: Agents living in volatile environments must be able to detect changes in contingencies while refraining from adapting to unexpected events that are caused by noise. In Reinforcement Learning (RL) frameworks, this requires learning rates that adapt to the past reliability of the model. The observation that behavioural flexibility in animals tends to decrease following prolonged training in a stable environment provides experimental evidence for such adaptive learning rates. However, in classical RL models, the learning rate is either fixed or scheduled and thus cannot adapt dynamically to environmental changes. Here, we propose a new Bayesian learning model, using variational inference, that achieves adaptive change detection through Stabilized Forgetting, updating its current belief based on a mixture of fixed, initial priors and previous posterior beliefs. The weight given to these two sources is optimized alongside the other parameters, allowing the model to adapt dynamically to changes in environmental volatility and to unexpected observations. This approach is used to implement the "critic" of an actor-critic RL model, while the actor samples the resulting value distributions to choose which action to undertake. We show that our model can emulate different adaptation strategies to contingency changes, depending on its prior assumptions about environmental stability, and that model parameters can be fit to real data with high accuracy. The model also exhibits trade-offs between flexibility and computational costs that mirror those observed in real data. Overall, the proposed method provides a general framework to study learning flexibility and decision making in RL contexts.
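The Stabilized Forgetting idea described in the abstract can be sketched in a simplified setting. The snippet below is a hypothetical illustration, not the paper's variational model: it uses a conjugate Beta-Bernoulli critic whose posterior is mixed with the fixed initial prior before each update, with a fixed mixing weight `w` standing in for the adaptively optimized weight of the full model, and an actor that Thompson-samples the resulting value distributions.

```python
import random

# Fixed initial Beta prior (alpha0, beta0) -- the "initial priors"
# component of the Stabilized Forgetting mixture.
PRIOR = (1.0, 1.0)

def sf_update(post, reward, w=0.9):
    """Stabilized-Forgetting update of Beta parameters (alpha, beta).

    `w` is a stand-in for the mixture weight that the paper's model
    optimizes alongside its other parameters; here it is held fixed.
    """
    # Mix the previous posterior with the fixed initial prior...
    alpha = w * post[0] + (1 - w) * PRIOR[0]
    beta = w * post[1] + (1 - w) * PRIOR[1]
    # ...then apply the ordinary conjugate update for a binary reward.
    return (alpha + reward, beta + (1 - reward))

def choose(posteriors):
    """Actor: Thompson sampling over the critic's value distributions."""
    samples = [random.betavariate(a, b) for a, b in posteriors]
    return samples.index(max(samples))
```

Because the mixture continually pulls beliefs back toward the broad initial prior, the posterior never becomes arbitrarily concentrated; after a contingency reversal the effective learning rate therefore stays high enough for the agent to re-adapt, which is the behaviour the abstract contrasts with fixed or scheduled learning rates.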

SUBMITTER: Moens V 

PROVIDER: S-EPMC6488101 | biostudies-literature | 2019 Apr

REPOSITORIES: biostudies-literature


Publications

Learning and forgetting using reinforced Bayesian change detection.

Vincent Moens, Alexandre Zénon

PLoS computational biology 20190417 4


Similar Datasets

| PRJEB21102 | ENA
| S-EPMC3080825 | biostudies-other
| S-EPMC6104671 | biostudies-literature
| S-EPMC8300051 | biostudies-literature
| S-EPMC3694826 | biostudies-literature
| S-EPMC5557010 | biostudies-other
| S-EPMC2966286 | biostudies-literature
| S-EPMC6767232 | biostudies-literature
| S-EPMC8324515 | biostudies-literature
| S-EPMC4024662 | biostudies-literature