Dataset Information

The interrelationship between the face and vocal tract configuration during audiovisual speech.


ABSTRACT: It is well established that speech perception is improved when we are able to see the speaker talking along with hearing their voice, especially when the speech is noisy. While we have a good understanding of where speech integration occurs in the brain, it is unclear how visual and auditory cues are combined to improve speech perception. One suggestion is that integration can occur as both visual and auditory cues arise from a common generator: the vocal tract. Here, we investigate whether facial and vocal tract movements are linked during speech production by comparing videos of the face and fast magnetic resonance (MR) image sequences of the vocal tract. The joint variation in the face and vocal tract was extracted using an application of principal components analysis (PCA), and we demonstrate that MR image sequences can be reconstructed with high fidelity using only the facial video and PCA. Reconstruction fidelity was significantly higher when images from the two sequences corresponded in time, and including implicit temporal information by combining contiguous frames also led to a significant increase in fidelity. A "Bubbles" technique was used to identify which areas of the face were important for recovering information about the vocal tract, and vice versa, on a frame-by-frame basis. Our data reveal that there is sufficient information in the face to recover vocal tract shape during speech. In addition, the facial and vocal tract regions that are important for reconstruction are those that are used to generate the acoustic speech signal.
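The joint-PCA reconstruction described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: it uses scikit-learn's PCA on randomly generated stand-in arrays (face_frames and vt_frames are hypothetical names) to show how component scores estimated from a face frame alone can be used to recover the corresponding vocal tract frame.

```python
# Minimal sketch of cross-modal reconstruction via a joint PCA.
# Assumes hypothetical arrays face_frames (n_frames, n_face_pixels) and
# vt_frames (n_frames, n_vt_pixels) holding vectorised video and MR frames
# from the same, temporally aligned recording.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_frames, n_face_pixels, n_vt_pixels = 200, 400, 300
face_frames = rng.standard_normal((n_frames, n_face_pixels))  # stand-in data
vt_frames = rng.standard_normal((n_frames, n_vt_pixels))      # stand-in data

# 1. Fit PCA on the concatenated face + vocal-tract frames to capture
#    their joint variation.
joint = np.hstack([face_frames, vt_frames])
pca = PCA(n_components=20)
pca.fit(joint)
mean = pca.mean_
W = pca.components_                                  # (n_components, n_face + n_vt)
W_face, W_vt = W[:, :n_face_pixels], W[:, n_face_pixels:]

# 2. Given only a face frame, estimate the joint component scores by least
#    squares on the face block of the loadings, then reconstruct the
#    vocal-tract block from those scores.
face_test = face_frames[0]
scores, *_ = np.linalg.lstsq(W_face.T, face_test - mean[:n_face_pixels], rcond=None)
vt_reconstructed = mean[n_face_pixels:] + scores @ W_vt

# 3. Reconstruction fidelity as the correlation with the true MR frame.
fidelity = np.corrcoef(vt_reconstructed, vt_frames[0])[0, 1]
print(f"reconstruction fidelity (correlation): {fidelity:.2f}")
```

Temporal context of the kind the abstract mentions could be added in this framework by concatenating each frame with its neighbouring frames before fitting the PCA.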

SUBMITTER: Scholes C 

PROVIDER: S-EPMC7768679 | biostudies-literature | 2020 Dec

REPOSITORIES: biostudies-literature

Publications

The interrelationship between the face and vocal tract configuration during audiovisual speech.

Chris Scholes, Jeremy I. Skipper, Alan Johnston

Proceedings of the National Academy of Sciences of the United States of America, 2020 Dec 08, issue 51


Similar Datasets

| S-EPMC6347527 | biostudies-literature
| S-EPMC5939209 | biostudies-literature
| S-EPMC6753102 | biostudies-literature
| S-EPMC8323486 | biostudies-literature
| S-EPMC9239541 | biostudies-literature
| S-EPMC2185744 | biostudies-literature
| S-EPMC4711920 | biostudies-literature