Incongruence effects in crossmodal emotional integration.
ABSTRACT: Emotions are often encountered in a multimodal fashion. Consequently, contextual framing by other modalities can alter the way an emotional facial expression is perceived and lead to emotional conflict. Whole-brain fMRI data were collected while 35 healthy subjects judged emotional expressions in faces and were concurrently exposed to emotional (scream, laughter) or neutral (yawning) sounds. Behaviorally, subjects rated fearful and neutral faces as more fearful when accompanied by screams than by yawns (and, for fearful faces, by laughs). The imaging data revealed that incongruence of emotional valence between faces and sounds led to increased activation in the middle cingulate cortex, right superior frontal cortex, right supplementary motor area, and right temporoparietal junction. Contrary to expectations, no incongruence effects were found in the amygdala. Further analyses revealed that, independent of valence congruency, the left amygdala was consistently activated when the information from both modalities was emotional; when one modality was neutral and the other emotional, left amygdala activation was significantly attenuated. These results indicate that incongruence of emotional valence in audiovisual integration engages a cingulate-fronto-parietal network involved in conflict monitoring and resolution. Furthermore, in audiovisual pairings, amygdala responses appear to signal the absence of any neutral feature rather than only the presence of an emotionally charged one.
SUBMITTER: Muller VI
PROVIDER: S-EPMC8007888 | biostudies-literature
REPOSITORIES: biostudies-literature