Categorical congruence facilitates multisensory associative learning.
ABSTRACT: Learning about objects often requires making arbitrary associations among multisensory properties, such as the taste and appearance of a food or the face and voice of a person. However, the multisensory properties of individual objects are usually statistically constrained, such that some properties are more likely to co-occur than others, on the basis of their category. For example, male faces are more likely to co-occur with characteristically male voices than with female voices. Here, we report evidence that these natural multisensory statistics play a critical role in the learning of novel, arbitrary associative pairs. In Experiment 1, we found that learning of pairs consisting of human voices and gender-congruent faces was superior to learning of pairs consisting of human voices and gender-incongruent faces, or of pairs consisting of human voices and pictures of inanimate objects (plants and rocks). In Experiment 2, we found that this "categorical congruency" advantage extended to nonhuman stimuli as well, namely to pairs of class-congruent animal pictures and vocalizations (e.g., dogs and barks) versus class-incongruent pairs (e.g., dogs and bird chirps). These findings suggest that associating multisensory properties that are statistically consistent with the various objects that we encounter in our daily lives is a privileged form of learning.
SUBMITTER: Barenholtz E
PROVIDER: S-EPMC6469507 | biostudies-literature | 2014 Oct
REPOSITORIES: biostudies-literature