Considering Hemispheric Specialization in Emotional Face Processing: An Eye Tracking Study in Left- and Right-Lateralised Semantic Dementia.
ABSTRACT: Face processing relies on a network of occipito-temporal and frontal brain regions. Temporal regions are heavily involved in looking at and processing emotional faces; however, the contribution of each hemisphere to this process remains under debate. Semantic dementia (SD) is a rare neurodegenerative brain condition characterized by anterior temporal lobe atrophy, which is either predominantly left- (left-SD) or right-lateralised (right-SD). This syndrome therefore provides a unique lesion model to understand the role of laterality in emotional face processing. Here, we investigated facial scanning patterns in 10 left-SD and 6 right-SD patients, compared to 22 healthy controls. Eye tracking was recorded via a remote EyeLink 1000 system, while participants passively viewed fearful, happy, and neutral faces over 72 trials. Analyses revealed that right-SD patients had more fixations to the eyes than controls in the Fear (p = 0.04) condition only. Right-SD patients also showed more fixations to the eyes than left-SD patients in all conditions: Fear (p = 0.01), Happy (p = 0.008), and Neutral (p = 0.04). In contrast, no differences between controls and left-SD patients were observed for any emotion. No group differences were observed for fixations to the mouth or the whole face. This study is the first to examine patterns of facial scanning in left- versus right-SD, demonstrating a greater focus on the eyes in right-SD. Neuroimaging analyses showed that degradation of the right superior temporal sulcus was associated with increased fixations to the eyes. Together, these results suggest that right-lateralised brain regions of the face processing network are involved in the ability to efficiently utilise changeable cues from the face.
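For readers less familiar with how fixation measures of this kind are typically derived, the sketch below illustrates one plausible way to count fixations landing inside an "eyes" area of interest (AOI) and to compare groups non-parametrically. It is a minimal illustration under assumed column names and AOI coordinates, not the authors' analysis pipeline.

```python
# Minimal sketch (not the authors' pipeline): count fixations landing in a
# rectangular "eyes" area of interest (AOI) per trial and compare two groups.
# Column names and AOI coordinates are illustrative assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu

EYE_AOI = {"x_min": 260, "x_max": 540, "y_min": 180, "y_max": 300}  # pixels, hypothetical

def in_aoi(fix, aoi):
    """Return True if a fixation's coordinates fall inside the AOI."""
    return (aoi["x_min"] <= fix["x"] <= aoi["x_max"]) and (aoi["y_min"] <= fix["y"] <= aoi["y_max"])

def eye_fixation_counts(fixations: pd.DataFrame) -> pd.DataFrame:
    """Count fixations inside the eye AOI per participant, condition and trial.
    Expects columns ['group', 'participant', 'condition', 'trial', 'x', 'y']."""
    fixations = fixations.copy()
    fixations["in_eyes"] = fixations.apply(lambda row: in_aoi(row, EYE_AOI), axis=1)
    return (fixations.groupby(["group", "participant", "condition", "trial"])["in_eyes"]
            .sum()
            .reset_index(name="n_eye_fixations"))

def compare_groups(counts: pd.DataFrame, condition: str, g1: str, g2: str):
    """Non-parametric comparison of per-participant mean eye-fixation counts."""
    per_subj = (counts[counts["condition"] == condition]
                .groupby(["group", "participant"])["n_eye_fixations"].mean())
    return mannwhitneyu(per_subj.loc[g1], per_subj.loc[g2])
```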
Project description: When actively classifying abstract patterns according to their regularity, alpha-band event-related desynchronization (ERD) becomes right-lateralized over posterior brain areas. This could reflect either a temporary enhancement of contralateral visual inputs (specifically, a shift of attention to the left) or right-hemisphere specialization for regularity discrimination. This study tested these competing hypotheses. Twenty-four participants discriminated between dot patterns containing a reflection or a translation. The direction of the transformation, which mapped one half of each pattern onto the other, was either vertical or horizontal. A strategy of shifting attention to one side of the patterns would not produce lateralized ERD in the horizontal condition. However, right-lateralized ERD was found in all conditions, regardless of orientation. We conclude that right-hemisphere networks incorporating early posterior regions are specialized for regularity discrimination.
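For context on the quantities discussed in this entry, the following sketch shows one common way to express ERD as a percent power change from baseline and to summarise its left/right asymmetry. The formulas and electrode choices used in the study are not specified here, so these are assumptions.

```python
# Hedged sketch of alpha-band ERD and a simple lateralization index;
# not the study's analysis code.
import numpy as np

def erd_percent(task_power: np.ndarray, baseline_power: np.ndarray) -> np.ndarray:
    """ERD as percent change in alpha-band power relative to baseline
    (negative values = desynchronization)."""
    return 100.0 * (task_power - baseline_power) / baseline_power

def lateralization_index(erd_left: float, erd_right: float) -> float:
    """Simple asymmetry index over posterior electrodes: positive values
    indicate stronger (more negative) ERD over the right hemisphere."""
    return (abs(erd_right) - abs(erd_left)) / (abs(erd_right) + abs(erd_left))
```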
Project description: Autism Spectrum Disorder (ASD), Oppositional Defiant Disorder (ODD), and Conduct Disorder (CD) are often associated with emotion recognition difficulties. This is the first eye-tracking study to examine emotional face recognition (i.e., gazing behavior) in a direct comparison of male adolescents with Autism Spectrum Disorder or Oppositional Defiant Disorder/Conduct Disorder and typically developing (TD) individuals. We also investigate the role of psychopathic traits, callous-unemotional (CU) traits, and subtypes of aggressive behavior in emotional face recognition. A total of 122 male adolescents (N = 50 ASD, N = 44 ODD/CD, and N = 28 TD) aged 12-19 years (M = 15.4 years, SD = 1.9) were included in the eye-tracking experiment. Participants were presented with neutral and emotional faces while a Tobii 1750 eye-tracking monitor recorded gaze behavior. Our main dependent eye-tracking variables were (1) fixation duration on the eyes of a face and (2) time to first fixation on the eyes. Since the distributions of the eye-tracking variables were not completely Gaussian, non-parametric tests were chosen to investigate gaze behavior across the Autism Spectrum Disorder, Oppositional Defiant Disorder/Conduct Disorder, and typically developing groups. Furthermore, we used Spearman correlations to investigate the links with psychopathic traits, callous-unemotional traits, and subtypes of aggression as assessed by questionnaires. The relative total fixation duration on the eyes was decreased in both the Autism Spectrum Disorder and Oppositional Defiant Disorder/Conduct Disorder groups for several emotional expressions. In both groups, an increased time to first fixation on the eyes was nominally significant for fearful faces only. The time to first fixation on the eyes was nominally correlated with psychopathic traits and proactive aggression. The current findings do not support strong claims for differential cross-disorder eye-gazing deficits or for a role of shared underlying psychopathic traits, callous-unemotional traits, and aggression subtypes. Our data provide valuable and novel insights into gaze timing distributions when looking at the eyes of a fearful face.
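The analysis described above can be pictured with a short sketch: a Kruskal-Wallis test across the three diagnostic groups and a Spearman correlation between an eye-tracking variable and a questionnaire score. The column names are hypothetical; this is not the study's code.

```python
# Illustrative sketch of the kind of analysis described above:
# non-parametric group comparison plus trait correlations.
import pandas as pd
from scipy.stats import kruskal, spearmanr

def group_comparison(df: pd.DataFrame, dv: str = "rel_fix_duration_eyes"):
    """Compare an eye-tracking variable across ASD, ODD/CD and TD groups
    (assumes a 'group' column with one label per participant)."""
    samples = [g[dv].dropna() for _, g in df.groupby("group")]
    return kruskal(*samples)

def trait_correlation(df: pd.DataFrame, dv: str = "time_to_first_fixation_eyes",
                      trait: str = "cu_traits"):
    """Spearman correlation between an eye-tracking variable and a trait score."""
    sub = df[[dv, trait]].dropna()
    return spearmanr(sub[dv], sub[trait])
```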
Project description: Sex differences in cognitive functions are heavily debated. Recent work suggests that sex differences stem from different processing strategies utilized by men and women. While these processing strategies are likely reflected in different brain networks, so far the link between brain networks and processing strategies remains speculative. In the present study we seek, for the first time, to link sex differences in brain activation patterns to sex differences in processing strategies, using a semantic verbal fluency task in a large sample of 35 men and 35 women, each scanned three times. For verbal fluency, strategies of clustering and switching have been described. Our results show that men exhibit higher activation in the brain network supporting clustering, while women exhibit higher activation in the brain network supporting switching. Furthermore, converging evidence from activation results, lateralization indices, and connectivity analyses suggests that men recruit the right hemisphere more strongly during clustering, whereas women do so during switching. These results may explain findings of differential performance and strategy use in previous behavioral studies.
Project description: Background: Criminal associates such as terrorist members are likely to deny knowing members of their network when questioned by police. Eye-tracking research suggests that lies about familiar faces can be detected by distinct markers of recognition (e.g. fewer fixations and longer fixation durations) across multiple eye fixation parameters. However, the effect of explicit eye movement strategies to conceal recognition on such markers has not been examined. Our aim was to assess the impact of fixed-sequence eye movement strategies (across the forehead, ears, eyes, nose, mouth and chin) on markers of familiar face recognition. Participants were assigned to one of two groups: a standard guilty group, who were simply instructed to conceal knowledge but given no specific instructions on how to do so; and a countermeasures group, who were instructed to look at every familiar and unfamiliar face in the same way by executing a consistent sequence of fixations. Results: In the standard guilty group, lies about recognition of familiar faces showed longer average fixation durations, a lower proportion of fixations to the inner face regions, and proportionately more viewing of the eyes than honest responses to genuinely unknown faces. In the countermeasures condition, familiar face recognition was detected by longer fixation durations, fewer fixations to the inner regions of the face, and fewer interest areas of the face viewed. Longer fixation durations were a consistent marker of recognition across both conditions for most participants; differences were detectable from the first fixation. Conclusion: The results suggest that individuals can exert a degree of executive control over fixation patterns, but that the eyes are particularly attention-grabbing for familiar faces; that the more viewers look around the face, the more they give themselves away; and that attempts to deploy the same fixation patterns to familiar and unfamiliar faces were unsuccessful. The results suggest that the best strategy for concealing recognition might be to keep the eyes fixated in the centre of the screen but, even then, recognition is apparent in longer fixation durations. We discuss potential optimal conditions for detecting concealed knowledge of faces.
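A brief sketch of how the recognition markers above (average fixation duration, proportion of fixations on inner-face regions, number of face regions viewed) might be computed from a fixation table. The region labels and column names are assumptions, not the study's interest-area definitions.

```python
# Hedged sketch of per-trial recognition markers; not the study's code.
import pandas as pd

INNER_REGIONS = {"eyes", "nose", "mouth"}                 # hypothetical AOI labels
ALL_REGIONS = INNER_REGIONS | {"forehead", "ears", "chin"}

def recognition_markers(fixations: pd.DataFrame) -> pd.DataFrame:
    """Compute per-trial markers from a table with columns
    ['participant', 'trial', 'region', 'duration_ms']."""
    def per_trial(g):
        on_face = g[g["region"].isin(ALL_REGIONS)]
        return pd.Series({
            "mean_fix_duration": g["duration_ms"].mean(),
            "prop_inner": on_face["region"].isin(INNER_REGIONS).mean(),
            "n_regions_viewed": on_face["region"].nunique(),
        })
    return fixations.groupby(["participant", "trial"]).apply(per_trial).reset_index()
```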
Project description: Self-face recognition has been shown to be impaired in schizophrenia (SZ), according to studies using cognitively demanding behavioral tasks. Here, we employed an eye-tracking methodology, which is a relevant tool for understanding self-face recognition deficits in SZ because it provides a natural, continuous and online record of face processing. Moreover, it allows identifying the most relevant and informative facial features each individual looks at during self-face recognition. These advantages are especially relevant considering the fundamental role played by patterns of visual exploration in face processing. Thus, this paper aims to investigate self-face recognition deficits in SZ using eye-tracking methodology. Visual scan paths were monitored in 20 patients with SZ and 20 healthy controls. Self, famous, and unknown faces were morphed in steps of 20%. The location, number, and duration of fixations on relevant areas were recorded with an eye-tracking system. Participants performed a passive exploration task (no specific instruction was provided), followed by an active decision-making task (individuals were explicitly requested to recognize the different faces). Results showed that patients with SZ made fewer and longer fixations compared to controls. Nevertheless, both groups focused their attention on relevant facial features in a similar way. No significant difference was found between groups when participants were requested to recognize the faces (active task). In conclusion, using an eye-tracking methodology and two tasks with low cognitive demands, our results suggest that patients with SZ are able to: (1) explore faces and focus on relevant features of the face in a similar way as controls; and (2) recognize their own face.
Project description: Face gaze is a fundamental non-verbal behaviour and can be assessed using eye-tracking glasses. Methodological guidelines are lacking on which measure to use to determine face gaze. To evaluate face gaze patterns, we compared three measures: duration, frequency and dwell time. Furthermore, state-of-the-art face gaze analysis requires time and manual effort. We tested whether face gaze patterns in the first 30, 60 and 120 s predict face gaze patterns in the remaining interaction. We performed secondary analyses of mobile eye-tracking data from 16 internal medicine physicians in consultation with 100 of their patients. Duration and frequency of face gaze were unrelated. The lack of association between duration and frequency suggests that research may yield different results depending on which measure of face gaze is used. Dwell time correlated with both duration and frequency. Face gaze during the first seconds of the consultations predicted face gaze patterns over the remaining consultation time (R² = 0.26 to 0.73). Therefore, face gaze during the first minutes of a consultation can be used to predict face gaze patterns over the complete interaction. Researchers interested in studying face gaze may use these findings to make optimal methodological choices.
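The prediction reported above can be illustrated with a minimal sketch: summarise face gaze as the proportion of time spent on the face in an early window and in the remainder of each consultation, then compute the R² of a simple linear regression. The column names and gaze summary are assumptions, not the study's actual operationalisation.

```python
# Minimal sketch: does face gaze in the first N seconds predict face gaze
# in the rest of the consultation? Not the study's analysis code.
import pandas as pd
from scipy.stats import linregress

def early_vs_rest(samples: pd.DataFrame, window_s: float = 30.0) -> pd.DataFrame:
    """samples: per-sample table with ['consultation', 't_s', 'on_face'] (bool).
    Returns per-consultation face-gaze proportions for the early window and the remainder."""
    early = samples[samples["t_s"] <= window_s].groupby("consultation")["on_face"].mean()
    rest = samples[samples["t_s"] > window_s].groupby("consultation")["on_face"].mean()
    return pd.concat({"early": early, "rest": rest}, axis=1).dropna()

def predictive_r2(samples: pd.DataFrame, window_s: float = 30.0) -> float:
    """R^2 of a simple linear regression predicting the remainder from the early window."""
    d = early_vs_rest(samples, window_s)
    fit = linregress(d["early"], d["rest"])
    return fit.rvalue ** 2
```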
Project description: Adults typically exhibit right hemispheric dominance in the processing of faces. In this cross-sectional study, we investigated age-dependent changes in face processing lateralization from infancy to adulthood (1-48 years old; N = 194). We co-registered anatomical and resting-state functional Magnetic Resonance Imaging (fMRI) scans of toddlers, children, adolescents, and adults into a common space and examined functional connectivity across the face-, place-, and object-selective regions identified in adults. As expected, functional connectivity between core face-selective regions was stronger in the right than in the left hemisphere in adults. Most importantly, the same lateralization was evident in all other age groups (infants, children, adolescents) and appeared only in face-selective regions, not in place- or object-selective regions. These findings suggest that the physiological development of face-selective brain areas may differ from that of object- and place-selective areas. Specifically, the functional connectivity of the core face-selective regions exhibits rightward lateralization from infancy, years before these areas develop mature face-selective responses.
Project description: Many eye tracking studies use facial stimuli presented on a display to investigate attentional processing of social stimuli. To introduce a more realistic approach that allows interaction between two real people, we evaluated a new eye tracking setup in three independent studies in terms of data quality, short-term reliability and feasibility. Study 1 measured the robustness, precision and accuracy for calibration stimuli compared to a classical display-based setup. Study 2 used the identical measures with an independent study sample to compare the data quality for a photograph of a face (2D) and the face of the real person (3D). Study 3 evaluated data quality over the course of a real face-to-face conversation and examined the gaze behavior on the facial features of the conversation partner. Study 1 provides evidence that quality indices for the scene-based setup were comparable to those of a classical display-based setup. Average accuracy was better than 0.4° visual angle. Study 2 demonstrates that eye tracking quality is sufficient for 3D stimuli and robust against short interruptions without re-calibration. Study 3 confirms the long-term stability of tracking accuracy during a face-to-face interaction and demonstrates typical gaze patterns for facial features. Thus, the eye tracking setup presented here seems feasible for studying gaze behavior in dyadic face-to-face interactions. Eye tracking data obtained with this setup achieves an accuracy that is sufficient for investigating behavior such as eye contact in social interactions in a range of populations including clinical conditions, such as autism spectrum and social phobia.
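For context on the accuracy and precision indices referred to above, the sketch below shows one standard way to convert on-screen gaze offsets into visual angle and to compute mean accuracy and RMS precision. The screen-geometry values are placeholders, not the parameters of this setup.

```python
# Illustrative sketch of accuracy/precision indices under assumed screen
# geometry; not the setup's actual validation code.
import numpy as np

def to_visual_angle(offset_px: np.ndarray, px_per_cm: float, distance_cm: float) -> np.ndarray:
    """Convert on-screen offsets (pixels) to visual angle (degrees)."""
    offset_cm = offset_px / px_per_cm
    return np.degrees(2 * np.arctan(offset_cm / (2 * distance_cm)))

def accuracy_deg(gaze_px: np.ndarray, target_px: np.ndarray,
                 px_per_cm: float = 37.8, distance_cm: float = 60.0) -> float:
    """Accuracy: mean angular distance between gaze samples and the target."""
    offsets = np.linalg.norm(gaze_px - target_px, axis=1)
    return float(np.mean(to_visual_angle(offsets, px_per_cm, distance_cm)))

def precision_rms_deg(gaze_px: np.ndarray,
                      px_per_cm: float = 37.8, distance_cm: float = 60.0) -> float:
    """Precision: RMS of sample-to-sample angular distances."""
    steps = np.linalg.norm(np.diff(gaze_px, axis=0), axis=1)
    angles = to_visual_angle(steps, px_per_cm, distance_cm)
    return float(np.sqrt(np.mean(angles ** 2)))
```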
Project description: Previous research in computational linguistics has devoted considerable effort to using language modeling and/or distributional semantic models to predict metrics extracted from eye-tracking data. However, it is not clear whether the two components make distinct contributions, with recent studies claiming that surprisal scores estimated with large-scale, deep learning-based language models subsume the semantic relatedness component. In our study, we propose a regression experiment for estimating different eye-tracking metrics on two English corpora, contrasting the quality of the predictions with and without the surprisal and relatedness components. Different types of relatedness scores, derived from both static and contextual models, were also tested. Our results suggest that both components play a role in the prediction, with semantic relatedness surprisingly contributing also to the prediction of function words. Moreover, they show that when relatedness is computed with the contextual embeddings of the BERT model, it explains a greater amount of variance.
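One way to picture the model comparison described above is as a set of nested regressions: a baseline model for an eye-tracking metric, then models that add surprisal, relatedness, or both, compared on adjusted R². The predictor and metric names below are hypothetical, and this is not the paper's code.

```python
# Sketch of the model-comparison logic: how much variance do surprisal and
# relatedness add over simple baseline predictors of a reading-time metric?
import pandas as pd
import statsmodels.formula.api as smf

BASELINE = "word_length + log_frequency"

def adjusted_r2(df: pd.DataFrame, formula_rhs: str, dv: str = "first_pass_ms") -> float:
    """Fit an OLS model and return its adjusted R^2."""
    model = smf.ols(f"{dv} ~ {formula_rhs}", data=df).fit()
    return model.rsquared_adj

def component_contributions(df: pd.DataFrame) -> dict:
    """Compare baseline, +surprisal, +relatedness, and full models."""
    return {
        "baseline": adjusted_r2(df, BASELINE),
        "+surprisal": adjusted_r2(df, BASELINE + " + surprisal"),
        "+relatedness": adjusted_r2(df, BASELINE + " + relatedness"),
        "full": adjusted_r2(df, BASELINE + " + surprisal + relatedness"),
    }
```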
Project description: Background. The ability to identify faces has been interpreted as a cerebral specialization based on the evolutionary importance of these social stimuli, and a number of studies have shown that this function is mainly lateralized to the right hemisphere. The aim of this study was to assess right-hemispheric specialization in face recognition under unfamiliar circumstances. Methods. Using a divided visual field paradigm, we investigated hemispheric asymmetries in the matching of two subsequent faces, using two types of transformation hindering identity recognition, namely upside-down rotation and spatial "explosion" (female and male faces were fractured into parts so that their mutual spatial relations were left intact), as well as their combination. Results. We confirmed the right-hemispheric superiority in face processing. Moreover, we found a decrease in identity recognition for more extreme "levels of explosion" and for faces presented upside-down (either as sample or target stimuli) compared with faces presented upright, as well as an advantage in the matching of female compared to male faces. Discussion. We conclude that the right-hemispheric superiority for face processing is not an epiphenomenon of our expertise, because we are not often exposed to inverted and "exploded" faces, but rather a robust hemispheric lateralization. We speculate that these results could be attributable to the prevalence of right-handedness in humans and/or to early biases in social interactions.