
Dataset Information


Neural correlates of forward planning in a spatial decision task in humans.


ABSTRACT: Although reinforcement learning (RL) theories have been influential in characterizing the mechanisms for reward-guided choice in the brain, the predominant temporal difference (TD) algorithm cannot explain many flexible or goal-directed actions that have been demonstrated behaviorally. We investigate such actions by contrasting an RL algorithm that is model based, in that it relies on learning a map or model of the task and planning within it, to traditional model-free TD learning. To distinguish these approaches in humans, we used functional magnetic resonance imaging in a continuous spatial navigation task, in which frequent changes to the layout of the maze forced subjects continually to relearn their favored routes, thereby exposing the RL mechanisms used. We sought evidence for the neural substrates of such mechanisms by comparing choice behavior and blood oxygen level-dependent (BOLD) signals to decision variables extracted from simulations of either algorithm. Both choices and value-related BOLD signals in striatum, although most often associated with TD learning, were better explained by the model-based theory. Furthermore, predecessor quantities for the model-based value computation were correlated with BOLD signals in the medial temporal lobe and frontal cortex. These results point to a significant extension of both the computational and anatomical substrates for RL in the brain.
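The abstract contrasts model-free temporal difference (TD) learning with model-based planning over a learned map of the task. The sketch below is a minimal, hypothetical illustration of that contrast in a toy corridor maze; it is not the authors' implementation, and the maze layout, learning rate, and discount factor are illustrative assumptions only.

```python
import numpy as np

# Illustrative sketch (not the study's actual model) of the two RL accounts
# compared in the paper: model-free TD(0) value learning versus model-based
# planning over a known transition/reward model. Toy parameters throughout.

n_states, gamma, alpha = 5, 0.9, 0.1
# Deterministic "corridor" maze: state s leads to s+1; reward only at the end.
next_state = {s: min(s + 1, n_states - 1) for s in range(n_states)}
reward = {s: (1.0 if s == n_states - 1 else 0.0) for s in range(n_states)}

# --- Model-free TD(0): values updated incrementally from sampled transitions ---
V_td = np.zeros(n_states)
for _ in range(200):                      # repeated traversals of the corridor
    for s in range(n_states - 1):
        s_next = next_state[s]
        delta = reward[s_next] + gamma * V_td[s_next] - V_td[s]  # TD error
        V_td[s] += alpha * delta

# --- Model-based: plan over the (here, assumed known) model by value iteration ---
V_mb = np.zeros(n_states)
for _ in range(100):
    for s in range(n_states - 1):
        s_next = next_state[s]
        V_mb[s] = reward[s_next] + gamma * V_mb[s_next]

print("TD values:         ", np.round(V_td, 3))
print("Model-based values:", np.round(V_mb, 3))
```

If the maze layout changes (e.g., `next_state` is rerouted), the model-based values can be replanned immediately from the updated model, whereas TD values must be relearned from new experience; this difference in flexibility is the behavioral signature the frequent maze changes in the task were designed to expose.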

SUBMITTER: Simon DA 

PROVIDER: S-EPMC3108440 | biostudies-other | 2011 Apr

REPOSITORIES: biostudies-other


Publications

Neural correlates of forward planning in a spatial decision task in humans.

Dylan Alexander Simon, Nathaniel D Daw

The Journal of Neuroscience (the official journal of the Society for Neuroscience), 2011 Apr 01, issue 14



Similar Datasets

| S-EPMC5025724 | biostudies-literature
| S-EPMC2040441 | biostudies-literature
| S-EPMC5537618 | biostudies-literature
| S-EPMC9949498 | biostudies-literature
| S-EPMC9911144 | biostudies-literature
| S-EPMC10524674 | biostudies-literature
| S-EPMC4350402 | biostudies-literature
| S-EPMC6869080 | biostudies-literature
| S-EPMC6517386 | biostudies-literature
| S-EPMC3195842 | biostudies-literature