Dataset Information

Goal-Directed and Habit-Like Modulations of Stimulus Processing during Reinforcement Learning.


ABSTRACT: Recent research has shown that perceptual processing of stimuli previously associated with high-value rewards is automatically prioritized even when rewards are no longer available. It has been hypothesized that such reward-related modulation of stimulus salience is conceptually similar to an "attentional habit." Recording event-related potentials in humans during a reinforcement learning task, we show strong evidence in favor of this hypothesis. Resistance to outcome devaluation (the defining feature of a habit) was shown by the stimulus-locked P1 component, reflecting activity in the extrastriate visual cortex. Analysis at longer latencies revealed a positive component (corresponding to the P3b, from 550-700 ms) sensitive to outcome devaluation. Therefore, distinct spatiotemporal patterns of brain activity were observed corresponding to habitual and goal-directed processes. These results demonstrate that reinforcement learning engages both attentional habits and goal-directed processes in parallel. Consequences for brain and computational models of reinforcement learning are discussed.

SIGNIFICANCE STATEMENT: The human attentional network adapts to detect stimuli that predict important rewards. A recent hypothesis suggests that the visual cortex automatically prioritizes reward-related stimuli, driven by cached representations of reward value; that is, stimulus-response habits. Alternatively, the neural system may track the current value of the predicted outcome. Our results demonstrate for the first time that visual cortex activity is increased for reward-related stimuli even when the rewarding event is temporarily devalued. In contrast, longer-latency brain activity was specifically sensitive to transient changes in reward value. Therefore, we show that both habit-like attention and goal-directed processes occur in the same learning episode at different latencies. This result has important consequences for computational models of reinforcement learning.
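The contrast drawn in the abstract, between valuation driven by cached stimulus-response values (habit-like, insensitive to devaluation) and valuation that tracks the current worth of the predicted outcome (goal-directed), maps onto the model-free versus model-based distinction in computational reinforcement learning. The short Python sketch below is purely illustrative and is not the authors' model; all parameter names and values are assumptions chosen to show why only the goal-directed signal changes when the outcome is devalued.

```python
# Illustrative sketch (not the authors' model): a cached-value ("habit-like")
# learner versus an outcome-sensitive ("goal-directed") learner under
# outcome devaluation. Parameters below are hypothetical.

ALPHA = 0.1          # learning rate (assumed)
N_TRIALS = 200       # training trials before devaluation (assumed)
REWARD_VALUE = 1.0   # value of the outcome during training

# --- Training: a stimulus reliably predicts the rewarding outcome ----------
cached_value = 0.0            # model-free: value cached directly on the stimulus
outcome_value = REWARD_VALUE  # model-based: current value of the predicted outcome
transition = 1.0              # model-based: learned P(outcome | stimulus), ~1 here

for _ in range(N_TRIALS):
    # Model-free update: the prediction error moves the cached stimulus value
    cached_value += ALPHA * (REWARD_VALUE - cached_value)

# --- Test: the outcome is temporarily devalued ------------------------------
outcome_value = 0.0  # devaluation: the outcome is currently worthless

habit_like_signal = cached_value                   # insensitive to devaluation
goal_directed_signal = transition * outcome_value  # tracks current outcome value

print(f"habit-like (cached) value after devaluation: {habit_like_signal:.2f}")
print(f"goal-directed (outcome-based) value:         {goal_directed_signal:.2f}")
```

Under these assumptions the cached value stays near 1.0 after devaluation while the goal-directed value drops to 0, paralleling the reported dissociation between the devaluation-resistant P1 and the devaluation-sensitive P3b.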

SUBMITTER: Luque D 

PROVIDER: S-EPMC6596732 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

| S-EPMC8218821 | biostudies-literature
| S-EPMC8218820 | biostudies-literature
| S-EPMC7984586 | biostudies-literature
| S-EPMC7533705 | biostudies-literature
| S-EPMC4970812 | biostudies-literature
| S-EPMC4622932 | biostudies-literature
| S-EPMC6753148 | biostudies-literature
| S-EPMC6695345 | biostudies-literature