Testing simulation theory with cross-modal multivariate classification of fMRI data.
ABSTRACT: The discovery of mirror neurons has suggested a potential neural basis for simulation and common coding theories of action perception, theories which propose that we understand other people's actions because perceiving their actions activates some of our neurons in much the same way as when we perform the actions. We propose testing this model directly in humans with functional magnetic resonance imaging (fMRI) by means of cross-modal classification. Cross-modal classification evaluates whether a classifier that has learned to separate stimuli in the sensory domain can also separate the stimuli in the motor domain. Successful classification provides support for simulation theories because it means that the fMRI signal, and presumably brain activity, is similar when perceiving and performing actions. In this paper we demonstrate the feasibility of the technique by showing that classifiers which have learned to discriminate whether a participant heard a hand or a mouth action, based on the activity patterns in the premotor cortex, can also determine, without additional training, whether the participant executed a hand or mouth action. This provides direct evidence that, while perceiving others' actions, (1) the pattern of activity in premotor voxels with sensory properties is a significant source of information regarding the nature of these actions, and (2) this information shares a common code with motor execution.
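The core logic of cross-modal classification can be sketched in a few lines: fit a classifier on trials from one modality (perception) and evaluate it, with no retraining, on trials from the other (execution). The sketch below is an illustration only, not the authors' actual pipeline; the synthetic voxel patterns, the shared "common code" signal, the noise level, and the nearest-centroid classifier are all assumptions introduced here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40  # hypothetical premotor ROI size and trial count

# A shared "common code": the same voxel pattern distinguishes hand from
# mouth actions in both modalities, on top of independent trial noise.
signal = rng.normal(size=n_voxels)

def make_trials(labels, noise_sd=1.0):
    """Synthesize one activity pattern per trial: +signal for hand (1),
    -signal for mouth (0), plus Gaussian voxel noise."""
    return np.array([(1 if y else -1) * signal
                     + rng.normal(scale=noise_sd, size=n_voxels)
                     for y in labels])

labels = np.array([0, 1] * (n_trials // 2))
X_perceive = make_trials(labels)  # training modality: heard actions
X_execute = make_trials(labels)   # test modality: executed actions

# Nearest-centroid classifier fit on perception trials only.
centroids = np.array([X_perceive[labels == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    # Assign each trial to the closer class centroid (Euclidean distance).
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

within_acc = (predict(X_perceive) == labels).mean()  # same-modality sanity check
cross_acc = (predict(X_execute) == labels).mean()    # cross-modal generalization
print(f"within-modality accuracy: {within_acc:.2f}")
print(f"cross-modal accuracy: {cross_acc:.2f}")
```

Because the synthetic data share a discriminative pattern across modalities, cross-modal accuracy stays well above the 50% chance level; with real fMRI data, above-chance cross-modal accuracy is the evidence for a shared sensory-motor code.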
SUBMITTER: Etzel JA
PROVIDER: S-EPMC2577733 | biostudies-literature | 2008
REPOSITORIES: biostudies-literature