
Dataset Information


Go and no-go learning in reward and punishment: interactions between affect and effect.


ABSTRACT: Decision-making invokes two fundamental axes of control: affect or valence, spanning reward and punishment, and effect or action, spanning invigoration and inhibition. We studied the acquisition of instrumental responding in healthy human volunteers in a task in which we orthogonalized action requirements and outcome valence. Subjects were much more successful in learning active choices in rewarded conditions, and passive choices in punished conditions. Using computational reinforcement-learning models, we teased apart contributions from putatively instrumental and Pavlovian components in the generation of the observed asymmetry during learning. Moreover, using model-based fMRI, we showed that BOLD signals in striatum and substantia nigra/ventral tegmental area (SN/VTA) correlated with instrumentally learnt action values, but with opposite signs for go and no-go choices. Finally, we showed that successful instrumental learning depends on engagement of bilateral inferior frontal gyrus. Our behavioral and computational data showed that instrumental learning is contingent on overcoming inherent and plastic Pavlovian biases, while our neuronal data showed this learning is linked to unique patterns of brain activity in regions implicated in action and inhibition respectively.
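The abstract describes disentangling instrumental learning from Pavlovian biases in a task that crosses action (go/no-go) with valence (reward/punishment). As a rough illustration of this modeling approach, the sketch below simulates a Q-learning agent whose choice of "go" is biased by a Pavlovian stimulus value. The parameter names (alpha, beta, pav_weight, go_bias), the 80% feedback validity, and the specific update rules are illustrative assumptions, not the authors' fitted model.

```python
# Minimal sketch: instrumental Q-learning plus a Pavlovian bias on "go" responding,
# in the spirit of an orthogonalized go/no-go reward/punishment task.
# All parameter values and rules here are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Four conditions: (required action, valence).
CONDITIONS = [("go", "win"), ("go", "avoid"), ("nogo", "win"), ("nogo", "avoid")]

def simulate(alpha=0.1, beta=3.0, pav_weight=0.5, go_bias=0.2,
             n_trials=60, p_valid=0.8):
    """Simulate one agent; return the mean probability of choosing 'go' per condition."""
    Q = {c: np.zeros(2) for c in CONDITIONS}   # instrumental values: index 0 = nogo, 1 = go
    V = {c: 0.0 for c in CONDITIONS}           # Pavlovian stimulus value
    go_prob = {c: [] for c in CONDITIONS}

    for _ in range(n_trials):
        for cond in CONDITIONS:
            required, valence = cond
            # Pavlovian term boosts "go" for appetitive stimuli and suppresses it
            # for aversive ones; a constant go_bias captures a baseline action tendency.
            w_go = Q[cond][1] + go_bias + pav_weight * V[cond]
            w_nogo = Q[cond][0]
            p_go = 1.0 / (1.0 + np.exp(-beta * (w_go - w_nogo)))
            go_prob[cond].append(p_go)
            action = int(rng.random() < p_go)  # 1 = go, 0 = nogo

            # Probabilistic feedback: the correct action yields the better outcome
            # with probability p_valid (assumed here).
            correct = (action == 1) == (required == "go")
            good = rng.random() < (p_valid if correct else 1.0 - p_valid)
            r = (1.0 if good else 0.0) if valence == "win" else (0.0 if good else -1.0)

            # Rescorla-Wagner updates for instrumental and Pavlovian values.
            Q[cond][action] += alpha * (r - Q[cond][action])
            V[cond] += alpha * (r - V[cond])

    return {c: float(np.mean(p)) for c, p in go_prob.items()}

if __name__ == "__main__":
    for cond, p in simulate().items():
        print(f"{cond}: mean P(go) = {p:.2f}")
```

With a positive Pavlovian weight, the simulated agent learns go-to-win and no-go-to-avoid faster than the incongruent conditions, mirroring the asymmetry reported in the abstract.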

SUBMITTER: Guitart-Masip M 

PROVIDER: S-EPMC3387384 | biostudies-literature | 2012 Aug

REPOSITORIES: biostudies-literature


Publications

Go and no-go learning in reward and punishment: interactions between affect and effect.

Guitart-Masip M, Huys QJM, Fuentemilla L, Dayan P, Duzel E, Dolan RJ

NeuroImage, 2012 Apr 21


Similar Datasets

| S-EPMC2765863 | biostudies-other
| S-EPMC3020386 | biostudies-literature
| S-EPMC3110431 | biostudies-literature
| S-EPMC5308829 | biostudies-literature
| S-EPMC8365707 | biostudies-literature
| S-EPMC5627895 | biostudies-literature
| S-EPMC3859585 | biostudies-literature
| S-EPMC4560823 | biostudies-literature
| S-EPMC9630918 | biostudies-literature
| S-EPMC7264311 | biostudies-literature