
Dataset Information


The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study.


ABSTRACT: The efficacy of audiovisual (AV) integration is reflected in the degree of cross-modal suppression of the auditory event-related potentials (ERPs, P1-N1-P2), while stronger semantic encoding is reflected in enhanced late ERP negativities (e.g., N450). We hypothesized that increasing visual stimulus reliability should lead to more robust AV integration and enhanced semantic prediction, reflected in suppression of auditory ERPs and an enhanced N450, respectively. EEG was acquired while individuals watched and listened to clear and blurred videos of a speaker uttering intact or highly intelligible degraded (vocoded) words and made binary judgments about word meaning (animate or inanimate). We found that intact speech evoked larger negativity between 280 and 527 ms than vocoded speech, suggestive of more robust semantic prediction for the intact signal. For visual reliability, greater cross-modal ERP suppression occurred for clear than for blurred videos prior to sound onset and for the P2 ERP. Additionally, the later semantic-related negativity tended to be larger for clear than for blurred videos. These results suggest that the cross-modal effect is largely confined to suppression of early auditory networks, with a weak effect on networks associated with semantic prediction. However, the semantic-related visual effect on the late negativity may have been tempered by the vocoded signal's high reliability.

SUBMITTER: Shen S 

PROVIDER: S-EPMC7692090 | biostudies-literature | 2020 Nov

REPOSITORIES: biostudies-literature


Publications

The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study.

Stanley Shen, Jess R Kerlin, Heather Bortfeld, Antoine J Shahin

Brain Sciences, 2020 Nov 02; 11


Similar Datasets

| S-EPMC7653187 | biostudies-literature
| S-EPMC4743927 | biostudies-literature
| S-EPMC3187777 | biostudies-literature
| S-EPMC5319989 | biostudies-literature
| S-EPMC5006149 | biostudies-literature
| S-EPMC2639724 | biostudies-literature
| S-EPMC4853386 | biostudies-literature
| S-EPMC6435873 | biostudies-literature
| S-EPMC7839034 | biostudies-literature
| S-EPMC6736834 | biostudies-other