Dataset Information

Learning Similar Actions by Reinforcement or Sensory-Prediction Errors Rely on Distinct Physiological Mechanisms.


ABSTRACT: Humans can acquire new motor behavior via different forms of learning. The two forms most commonly studied are the development of internal models based on sensory-prediction errors (error-based learning) and learning driven by success-based feedback (reinforcement learning). Human behavioral studies suggest these are distinct learning processes, though the neurophysiological mechanisms involved have not been characterized. Here, we evaluated physiological markers from the cerebellum and the primary motor cortex (M1) using noninvasive brain stimulation while healthy participants trained on finger-reaching tasks. We manipulated the extent to which subjects relied on error-based or reinforcement learning by providing either vector or binary feedback about task performance. Our results demonstrated a double dissociation: learning the task mainly via error-based mechanisms led to cerebellar plasticity modifications but not long-term potentiation (LTP)-like plasticity changes in M1, whereas learning a similar action via reinforcement mechanisms elicited M1 LTP-like plasticity but not cerebellar plasticity changes. Our findings indicate that learning complex motor behavior is mediated by the interplay of different forms of learning, weighted across distinct neural mechanisms in M1 and the cerebellum. Our study provides insights for designing effective interventions to enhance human motor learning.

SUBMITTER: Uehara S 

PROVIDER: S-EPMC6887949 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

| S-EPMC7732826 | biostudies-literature
| S-EPMC2818688 | biostudies-other
| S-EPMC3277161 | biostudies-literature
| S-EPMC6839916 | biostudies-literature
| S-EPMC8105414 | biostudies-literature
| S-EPMC3576883 | biostudies-literature
| S-EPMC3468605 | biostudies-literature
| S-EPMC3554678 | biostudies-literature
| S-EPMC4583356 | biostudies-literature