Motion-sensitive cortex and motion semantics in American Sign Language.
ABSTRACT: Previous research indicates that motion-sensitive brain regions are engaged when comprehending motion semantics expressed by words or sentences. Using fMRI, we investigated whether such neural modulation can occur when the linguistic signal itself is visually dynamic and motion semantics is expressed by movements of the hands. Deaf and hearing users of American Sign Language (ASL) were presented with signed sentences that either conveyed motion semantics ("The deer walked along the hillside.") or were static, conveying little or no motion ("The deer slept along the hillside."); the two sentence types were matched for the amount of visual motion in the signal. Motion-sensitive visual areas (MT+) were localized individually in each participant, and as a control, the fusiform face area (FFA) was also localized for the deaf participants. The whole-brain analysis revealed that static (locative) sentences engaged regions in left parietal cortex more than motion sentences did, replicating previous results implicating these regions in comprehending spatial language in sign languages. Greater activation was observed in the functionally defined MT+ ROI for motion sentences than for static sentences in both deaf and hearing signers, whereas no modulation of neural activity by sentence type was observed in the FFA. Deafness did not affect the modulation of MT+ by motion semantics, but hearing signers exhibited stronger neural activity in MT+ for both sentence types, perhaps due to differences in exposure to and/or use of ASL. We conclude that top-down modulation of motion-sensitive cortex by linguistic semantics is not disrupted by the visual motion present in sign language sentences.
SUBMITTER: McCullough S
PROVIDER: S-EPMC3429697 | biostudies-literature | 2012 Oct
REPOSITORIES: biostudies-literature