I See What You Are Saying: Hearing Infants’ Visual Attention and Social Engagement in Response to Spoken and Sign Language
ABSTRACT: Infants are endowed with a proclivity to acquire language, whether it is presented in the auditory or visual modality. Moreover, in the first months of life, listening to language supports fundamental cognitive capacities, including infants’ facility to form object categories (e.g., dogs and bottles). Recently, we have found that for English-acquiring infants as young as 4 months of age, this precocious interface between language and cognition is sufficiently broad to include not only their native spoken language (English), but also sign language (American Sign Language, ASL). In the current study, we take this work one step further, asking how “sign-naïve” infants—hearing infants with no prior exposure to sign language—deploy their attentional and social strategies in the context of episodes involving either spoken or sign language. We adopted a now-standard categorization task, presenting 4- to 6-month-old infants with a series of exemplars from a single category (e.g., dinosaurs). Each exemplar was introduced by a woman who appeared on the screen together with the object. What varied across conditions was whether this woman introduced the exemplar by speaking (English) or signing (ASL). We coded infants’ visual attentional strategies and their spontaneous vocalizations during this task. Infants’ division of attention and visual switches between the woman and exemplar varied as a function of language modality. In contrast, infants’ spontaneous vocalizations revealed similar patterns across languages. These results, which advance our understanding of how infants allocate attentional resources and engage with communicative partners across distinct modalities, have implications for specifying our theories of language acquisition.
SUBMITTER: Novack M
PROVIDER: S-EPMC9280667 | biostudies-literature |
REPOSITORIES: biostudies-literature