Reading the mind's eye: decoding category information during mental imagery.
ABSTRACT: Category information for visually presented objects can be read out from multi-voxel patterns of fMRI activity in ventral-temporal cortex. What is the nature and reliability of these patterns in the absence of any bottom-up visual input, for example, during visual imagery? Here, we first ask how well category information can be decoded for imagined objects and then compare the representations evoked during imagery and actual viewing. In an fMRI study, four object categories (food, tools, faces, buildings) were either visually presented to subjects or imagined by them. Using pattern classification techniques, we could reliably decode category information (including for the categories without specialized category-selective areas, i.e., food and tools) from ventral-temporal cortex in both conditions, whereas decoding from retinotopic areas was reliable only during actual viewing. Interestingly, in temporal cortex, when the classifier was trained on the viewed condition and tested on the imagery condition, or vice versa, classification performance was comparable to that within the imagery condition. These results held even when information from the specialized category-selective areas was excluded. Thus, the patterns of representation during imagery and actual viewing are surprisingly similar. Consistent with this observation, the maps of "diagnostic voxels" (i.e., the classifier weights) for the perception and imagery classifiers were more similar in ventral-temporal cortex than in retinotopic cortex. These results suggest that, in the absence of any bottom-up input, cortical back projections can selectively re-activate specific patterns of neural activity.
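To illustrate the two analyses described above, the sketch below shows (1) within-condition and cross-condition category decoding from multi-voxel patterns, training on the viewed condition and testing on imagery (and vice versa), and (2) a comparison of "diagnostic voxel" maps via the classifier weights from each condition. This is a minimal, hedged sketch using synthetic data with scikit-learn, not the authors' actual analysis pipeline; the variable names (X_viewed, X_imagery, etc.) and the linear SVM choice are illustrative assumptions.

```python
# Illustrative sketch (synthetic data, not the authors' code) of cross-condition
# MVPA decoding and weight-map comparison.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels, n_categories = 80, 500, 4  # e.g. food, tools, faces, buildings
y = np.repeat(np.arange(n_categories), n_trials // n_categories)

# Synthetic "ventral-temporal" patterns: a shared category structure plus noise,
# standing in for trial-wise response estimates in the viewed and imagined conditions.
templates = rng.normal(size=(n_categories, n_voxels))
X_viewed = templates[y] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
X_imagery = templates[y] + rng.normal(scale=3.0, size=(n_trials, n_voxels))

clf = LinearSVC(dual=False)

# (1a) Within-condition decoding, cross-validated separately in each condition.
acc_viewed = cross_val_score(clf, X_viewed, y, cv=5).mean()
acc_imagery = cross_val_score(clf, X_imagery, y, cv=5).mean()

# (1b) Cross-condition decoding: train on viewing, test on imagery, and vice versa.
acc_view_to_imag = clf.fit(X_viewed, y).score(X_imagery, y)
acc_imag_to_view = clf.fit(X_imagery, y).score(X_viewed, y)

print(f"within viewing:   {acc_viewed:.2f}")     # chance = 0.25 for 4 categories
print(f"within imagery:   {acc_imagery:.2f}")
print(f"viewing->imagery: {acc_view_to_imag:.2f}")
print(f"imagery->viewing: {acc_imag_to_view:.2f}")

# (2) Compare "diagnostic voxel" maps: correlate the classifier weight maps
# obtained from the perception and imagery classifiers.
w_viewed = clf.fit(X_viewed, y).coef_.ravel()
w_imagery = clf.fit(X_imagery, y).coef_.ravel()
print(f"weight-map correlation: {np.corrcoef(w_viewed, w_imagery)[0, 1]:.2f}")
```

In such an analysis, cross-condition accuracy approaching within-imagery accuracy, together with correlated weight maps, would indicate that the same voxels carry category information during viewing and imagery, which is the pattern the abstract reports for ventral-temporal cortex.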
SUBMITTER: Reddy L
PROVIDER: S-EPMC2823980 | biostudies-literature | 2010 Apr
REPOSITORIES: biostudies-literature