A computer-generated animated face stimulus set for psychophysiological research.
ABSTRACT: Human faces are fundamentally dynamic, but experimental investigations of face perception have traditionally relied on static images of faces. Although naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this article, we describe a novel set of computer-generated, dynamic face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties: all faces are standardized in size, luminance, location, and the size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether, 222 stimuli were created, spanning three categories of movement: (1) an affective movement (fearful face), (2) a neutral movement (close-lipped, puffed cheeks with open eyes), and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between the expressions, we measured the occipital P100 event-related potential, which is known to reflect differences in early stages of visual processing, and the N170, which reflects structural encoding of faces. We found no differences between the faces at the P100, indicating that the face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic face stimuli, matched on low-level image characteristics, that is applicable to a range of research questions in social perception.
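To illustrate the kind of standardization the abstract describes, the following is a minimal sketch, assuming NumPy and 8-bit grayscale frame sequences, of matching mean luminance and RMS contrast across a stimulus. The function name and target values are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of luminance/contrast standardization for a
# 30-frame grayscale stimulus; targets (mean=128, RMS=32) are
# illustrative assumptions, not values from the published set.
import numpy as np

def standardize_luminance(frames, target_mean=128.0, target_rms=32.0):
    """Rescale a grayscale frame stack to a common mean luminance
    and RMS contrast.

    frames: np.ndarray of shape (n_frames, height, width), dtype uint8
    """
    out = frames.astype(np.float64)
    # Normalize the whole sequence with a single shift and scale so
    # the frame-to-frame dynamics of the expression are preserved.
    out = (out - out.mean()) / out.std() * target_rms + target_mean
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.integers(0, 256, size=(30, 256, 256), dtype=np.uint8)
    matched = standardize_luminance(demo)
    print(matched.mean(), matched.std())  # approximately 128 and 32
```

Applying one transform per sequence, rather than per frame, keeps relative luminance changes across the neutral-to-expression transition intact while equating sequences with one another.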
SUBMITTER: Naples A
PROVIDER: S-EPMC4297263 | biostudies-literature | 2015 Jun
REPOSITORIES: biostudies-literature