Dataset Information

The recognition of facial expressions of emotion in deaf and hearing individuals.


ABSTRACT: During real-life interactions, facial expressions of emotion are perceived dynamically with multimodal sensory information. In the absence of auditory sensory channel inputs, it is unclear how facial expressions are recognised and internally represented by deaf individuals. Few studies have investigated facial expression recognition in deaf signers using dynamic stimuli, and none have included all six basic facial expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) with stimuli fully controlled for their low-level visual properties, leaving unresolved the question of whether a dynamic advantage for deaf observers exists. In line with the enhancement hypothesis, we hypothesised that the absence of auditory sensory information might have forced the visual system to better process visual (unimodal) signals, and predicted that this greater sensitivity to visual stimuli would result in better recognition performance for dynamic compared to static stimuli, and for deaf signers compared to hearing non-signers in the dynamic condition. To this end, we performed a series of psychophysical studies with deaf signers with early-onset severe-to-profound deafness (dB loss >70) and hearing controls to estimate their ability to recognise the six basic facial expressions of emotion. Using static, dynamic, and shuffled (randomly permuted video frames of an expression) stimuli, we found that deaf observers showed similar categorisation profiles and confusions across expressions compared to hearing controls (e.g., confusing surprise with fear). In contrast to our hypothesis, we found no recognition advantage for dynamic compared to static facial expressions for deaf observers. This observation shows that the decoding of dynamic facial expression emotional signals is not superior even in the deaf expert visual system, suggesting the existence of optimal signals in static facial expressions of emotion at the apex.
Deaf individuals match hearing individuals in the recognition of facial expressions of emotion.

SUBMITTER: Rodger H 

PROVIDER: S-EPMC8141778 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

| S-EPMC4473593 | biostudies-other
| S-EPMC8406528 | biostudies-literature
| S-EPMC3992629 | biostudies-literature
| S-EPMC4661469 | biostudies-literature
| S-EPMC5657455 | biostudies-literature
| S-EPMC7815624 | biostudies-literature
| S-EPMC3358835 | biostudies-other
| S-EPMC7032686 | biostudies-literature
| S-EPMC8483373 | biostudies-literature
| S-EPMC5976168 | biostudies-literature