A computational account of the mechanisms underlying face perception biases in depression.
ABSTRACT: Here, we take a computational approach to understand the mechanisms underlying face perception biases in depression. Thirty participants diagnosed with major depressive disorder and 30 healthy control participants took part in three studies involving recognition of identity and emotion in faces. We used signal detection theory to determine whether any perceptual biases exist in depression over and above decisional biases. We found lower sensitivity to happiness in general, and lower sensitivity to both happiness and sadness with ambiguous stimuli. Our use of highly controlled face stimuli ensures that this asymmetry is truly perceptual in nature, rather than the result of studying expressions with inherently different discriminability. We found no systematic effect of depression on the perceptual interactions between face expression and identity. We also found that the decisional strategies used in our task differed between people with depression and controls, but in a way that was highly specific to the stimulus set presented. We show through simulation that the observed perceptual effects, as well as other biases reported in the literature, can be explained by a computational model in which channels encoding positive expressions are selectively suppressed.
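To make the signal detection analysis above concrete, the sketch below shows how sensitivity (d') and criterion (c) are conventionally computed from hit and false-alarm counts. The yes/no task framing, the function names, and the log-linear correction are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of a signal detection analysis for a yes/no
# emotion-recognition task; names and the log-linear correction are
# illustrative, not the authors' exact procedure.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return sensitivity (d') and criterion (c) from raw trial counts."""
    # Log-linear correction avoids infinite z-scores when rates are 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # perceptual sensitivity
    criterion = -0.5 * (z_hit + z_fa)   # decisional bias
    return d_prime, criterion

# Example: a hypothetical participant detecting happiness in ambiguous faces.
print(sdt_measures(hits=38, misses=12, false_alarms=9, correct_rejections=41))
```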
Project description: We present a novel strategy for deriving a classification system of functional neuroimaging paradigms that relies on hierarchical clustering of experiments archived in the BrainMap database. The goal of our proof-of-concept application was to examine the underlying neural architecture of the face perception literature from a meta-analytic perspective, as these studies span a wide range of tasks. Experiments exhibiting similar activation patterns were grouped together, while tasks activating different brain networks were classified as functionally distinct. We identified four sub-classes of face tasks: (1) Visuospatial Attention and Visuomotor Coordination to Faces, (2) Perception and Recognition of Faces, (3) Social Processing and Episodic Recall of Faces, and (4) Face Naming and Lexical Retrieval. Interpretation of these sub-classes supports extending a well-known model of face perception to include a core system for visual analysis and extended systems for personal information, emotion, and salience processing. Overall, these results demonstrate that a large-scale data-mining approach can inform the evolution of theoretical cognitive models by probing the range of behavioral manipulations across experimental tasks.
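A minimal sketch of the core clustering step described above is given below: experiments are grouped by the similarity of their activation patterns using hierarchical clustering in SciPy. The binary region-by-experiment matrix, the Ward linkage, and the four-cluster cut are assumptions for illustration, not details of the BrainMap workflow.

```python
# Illustrative sketch: group task-based experiments by activation-pattern
# similarity with hierarchical clustering. The binary experiment-by-region
# matrix and Ward linkage are assumptions for demonstration only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Rows = experiments, columns = brain regions (1 = activated in that study).
activation = rng.integers(0, 2, size=(40, 100))

# Ward linkage on Euclidean distances between activation profiles.
tree = linkage(activation, method="ward")

# Cut the dendrogram into four sub-classes, analogous to the four
# face-task groupings described above.
labels = fcluster(tree, t=4, criterion="maxclust")
print(labels)
```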
Project description: Dinucleotide microsatellites are dynamic DNA sequences that affect genome stability. Here, we focused on mature microsatellites, defined as pure repeats of lengths above the threshold and unlikely to mutate below it in a single mutational event. We investigated the prevalence and mutational behavior of these sequences using human genome sequence data, human cells in culture, and purified DNA polymerases. Mature dinucleotides (≥10 units) are present within the exonic sequences of >350 genes, posing a risk to cellular genetic integrity. Mature dinucleotide mutagenesis was examined experimentally using ex vivo and in vitro approaches. We observe an expansion bias for dinucleotide microsatellites up to 20 units in length in somatic human cells, in agreement with previous computational analyses of germ-line biases. Using purified DNA polymerases and human cell lines deficient for mismatch repair (MMR), we show that the expansion bias is caused by functional MMR and is not due to DNA polymerase error biases. Specifically, we observe that the MutSβ and MutLα complexes protect against expansion mutations. Our data support a model wherein different MMR complexes shift the balance of mutations toward deletion or expansion. Finally, we show that replication fork progression is stalled within long dinucleotides, suggesting that mutational mechanisms within long repeats may be distinct from those at shorter lengths, depending on the biochemistry of fork resolution. Our work combines computational and experimental approaches to explain the complex mutational behavior of dinucleotide microsatellites in humans.
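The definition of a mature dinucleotide used above (a pure repeat of at least 10 units) can be illustrated with a simple scan of a DNA sequence. The regex-based sketch below is a toy example, not the authors' genome-analysis pipeline, and all names are hypothetical.

```python
# Sketch of scanning a sequence for "mature" dinucleotide microsatellites,
# here defined (as in the abstract) as pure repeats of >= 10 units.
# The regex approach is illustrative, not the authors' genome pipeline.
import re

MIN_UNITS = 10

def find_mature_dinucleotides(seq, min_units=MIN_UNITS):
    """Yield (start, end, motif) for pure dinucleotide runs of >= min_units."""
    seq = seq.upper()
    # Any two-base unit repeated at least min_units times in total.
    pattern = re.compile(r"([ACGT]{2})\1{%d,}" % (min_units - 1))
    for m in pattern.finditer(seq):
        motif = m.group(1)
        if motif[0] != motif[1]:          # exclude mononucleotide runs (e.g. AA)
            yield m.start(), m.end(), motif

example = "TTGC" + "CA" * 12 + "GGAT" + "AG" * 7 + "C"
print(list(find_mature_dinucleotides(example)))  # only the (CA)12 run qualifies
```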
Project description: The ability to switch between multiple tasks is central to flexible behavior. Although switching between tasks is readily accomplished, a well-established consequence of task switching (TS) is behavioral slowing. The source of this switch cost and the contribution of cognitive control to its resolution remain highly controversial. Here, we tested whether proactive interference arising from memory places fundamental constraints on flexible performance, and whether prefrontal control processes contribute to overcoming these constraints. Event-related functional MRI indexed neural responses during TS. The contributions of cognitive control and interference were made theoretically explicit in a computational model of task performance. Model estimates of two levels of proactive interference, "conceptual conflict" and "response conflict," produced distinct preparation-related profiles. Left ventrolateral prefrontal cortical activation paralleled model estimates of conceptual conflict, dissociating from activation in left inferior parietal cortex, which paralleled model estimates of response conflict. These computationally informed neural measures identify retrieved conceptual representations as a source of conflict during TS and suggest that left ventrolateral prefrontal cortex resolves this conflict to facilitate flexible performance.
Project description: The frequency with which an organism is exposed to a particular type of face influences recognition performance. For example, Asians are better at individuating Asian than Caucasian faces, known as the own-race advantage. Similarly, humans in general are better at individuating human than monkey faces, known as the own-species advantage. It is an open question whether the mechanisms underlying these effects are similar. We hypothesize that these processes are governed by neural plasticity of the face discrimination system, which retains optimal discrimination performance in its environment. Using common face features derived from a set of images from various face classes, we show that selecting a subset of feature dimensions that maximizes the feature variance between different individuals while ensuring minimal variance within individuals yields good discrimination performance on own-class faces. However, the selected subset of features does not necessarily lead to optimal performance on other classes of faces. Thus, the face discrimination system continuously re-optimizes its space-constrained face representation to maintain recognition performance on the current distribution of faces in its environment. This model can account for both the own-race and own-species advantages. We name this approach Space Constraint Optimized Representational Embedding (SCORE).
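The selection principle described above can be illustrated with a short sketch that ranks feature dimensions by the ratio of between-individual to within-individual variance and keeps the top subset. The data shapes, the Fisher-like criterion, and all names are assumptions, not the published SCORE implementation.

```python
# Illustrative sketch of the selection principle described above: keep the
# feature dimensions with the largest ratio of between-individual to
# within-individual variance. Shapes and names are assumptions only.
import numpy as np

def select_discriminative_dims(features, identities, k=10):
    """features: (n_images, n_dims); identities: (n_images,) labels."""
    ids = np.unique(identities)
    grand_mean = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for i in ids:
        group = features[identities == i]
        between += len(group) * (group.mean(axis=0) - grand_mean) ** 2
        within += ((group - group.mean(axis=0)) ** 2).sum(axis=0)
    score = between / (within + 1e-12)   # Fisher-like criterion per dimension
    return np.argsort(score)[::-1][:k]   # indices of the k best dimensions

# Toy usage: 200 images of 20 identities embedded in 50 feature dimensions,
# where only the first 5 dimensions actually carry identity information.
rng = np.random.default_rng(1)
identities = np.repeat(np.arange(20), 10)
features = rng.normal(size=(200, 50)) + identities[:, None] * (np.arange(50) < 5)
print(select_discriminative_dims(features, identities, k=5))
```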
Project description: How do we choose when confronted with many alternatives? There is surprisingly little decision-modelling work with large choice sets, despite their prevalence in everyday life. Moreover, there is an apparent disconnect between research on small choice sets, which supports a process of gaze-driven evidence accumulation, and research on larger choice sets, which argues for models of optimal choice, satisficing, and hybrids of the two. Here, we bridge this divide by developing and comparing different versions of these models in a many-alternative value-based choice experiment with 9, 16, 25, or 36 alternatives. We find that human choices are best explained by models incorporating an active effect of gaze on subjective value. A gaze-driven, probabilistic version of satisficing generally provides slightly better fits to choices and response times, while the gaze-driven evidence accumulation and comparison model provides the best overall account of the data when the empirical relation between gaze allocation and choice is also considered.
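The model class described above can be illustrated with a toy simulation in which the momentarily fixated alternative accumulates its full value while unattended alternatives accumulate a discounted value. Parameter names and values below are assumptions, not the fitted models from the study.

```python
# Toy simulation of gaze-weighted evidence accumulation for a many-alternative
# choice: the fixated item accumulates its full value, unattended items a
# discounted value. Parameters are assumptions, not fitted estimates.
import numpy as np

def simulate_choice(values, gaze_probs, theta=0.3, noise=0.05,
                    threshold=1.0, rng=np.random.default_rng(2)):
    n = len(values)
    evidence = np.zeros(n)
    for t in range(1, 10_000):
        fixated = rng.choice(n, p=gaze_probs)      # which item is looked at
        drift = np.where(np.arange(n) == fixated, values, theta * values)
        evidence += drift * 0.01 + rng.normal(0, noise, n)
        if evidence.max() >= threshold:
            return int(evidence.argmax()), t        # choice and response time
    return int(evidence.argmax()), t

values = np.array([0.4, 0.6, 0.9, 0.5])
gaze = np.array([0.2, 0.2, 0.4, 0.2])               # item 3 receives more gaze
print(simulate_choice(values, gaze))
```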
Project description: Learning to predict action outcomes in morally conflicting situations is essential for social decision-making but poorly understood. Here we tested which forms of reinforcement learning theory capture how participants learn to choose between money for themselves (self-money) and shocks delivered to another person (other-shocks), and how they adapt to changes in contingencies. We find that choices were better described by a reinforcement learning model based on the current value of separately expected outcomes than by one based on the combined historical values of past outcomes. Participants track the expected values of self-money and other-shocks separately, with substantial individual differences in preference reflected in a valuation parameter balancing their relative weight. This valuation parameter also predicted choices in an independent costly helping task. The expectations of self-money and other-shocks were biased toward the favored outcome; fMRI revealed that this bias was reflected in the ventromedial prefrontal cortex, while the pain-observation network represented pain prediction errors independently of individual preferences.
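A minimal sketch of the winning model class is shown below: expected self-money and expected other-shocks are learned with separate prediction errors and combined at choice time by a valuation parameter. The learning rule, softmax choice function, and all numerical values are illustrative assumptions, not the authors' fitted model.

```python
# Sketch of the model class described above: expected self-money and expected
# other-shocks are tracked separately and combined at choice time by a
# valuation parameter w. Learning rates and outcome statistics are assumptions.
import numpy as np

def softmax(x, beta=3.0):
    x = beta * (x - x.max())
    return np.exp(x) / np.exp(x).sum()

n_options, alpha, w = 2, 0.2, 0.6       # w weighs money relative to shocks
q_money = np.zeros(n_options)            # expected money for self
q_shock = np.zeros(n_options)            # expected shocks for the other person

rng = np.random.default_rng(3)
for trial in range(200):
    net_value = w * q_money - (1 - w) * q_shock
    choice = rng.choice(n_options, p=softmax(net_value))
    money = rng.normal([0.2, 0.8][choice], 0.1)      # toy outcome contingencies
    shock = rng.normal([0.1, 0.9][choice], 0.1)
    q_money[choice] += alpha * (money - q_money[choice])   # separate prediction errors
    q_shock[choice] += alpha * (shock - q_shock[choice])

print(q_money, q_shock)
```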
Project description: While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness-related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization.
Project description: Categorization of visual stimuli is an intrinsic aspect of human perception. Whether the cortical mechanisms underlying categorization operate in an all-or-none or a graded fashion remains unclear. In this study, we addressed this issue in the context of the face-specific N170. Specifically, we investigated whether N170 amplitudes grade with the amount of face information available in an image, or whether a full response is generated whenever a face is perceived. We employed linear mixed-effects modeling to inspect the dependency of N170 amplitudes on stimulus properties and duration, and their relationship to participants' subjective perception. Consistent with previous studies, we found a stronger N170 evoked by faces presented for longer durations. However, further analysis with equivalence tests revealed that this duration effect was eliminated when only faces perceived with high confidence were considered. Therefore, previous evidence supporting the graded hypothesis is more likely an artifact of mixing heterogeneous "all" and "none" trial types in signal averaging. These results support the hypothesis that the N170 is generated in an all-or-none manner and, by extension, suggest that the categorization of faces may follow a similar pattern.
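The analysis logic described above can be sketched as a linear mixed-effects model of single-trial N170 amplitude on stimulus duration, followed by a simple two one-sided tests (TOST) equivalence check on the duration effect. The simulated data, column names, degrees-of-freedom approximation, and equivalence bounds below are assumptions, not the study's analysis.

```python
# Sketch: mixed-effects model of single-trial N170 amplitude on stimulus
# duration (random intercepts per participant), then a crude TOST equivalence
# test on the duration slope. Data, bounds, and df approximation are toys.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(20), 60),
    "duration_ms": np.tile(rng.choice([17, 50, 100, 200], 60), 20),
})
df["amplitude"] = -4 + rng.normal(0, 2, len(df))   # toy amplitudes, unrelated to duration

model = smf.mixedlm("amplitude ~ duration_ms", df, groups=df["subject"]).fit()
est, se = model.params["duration_ms"], model.bse["duration_ms"]
dof = len(df) - 2                                   # crude df approximation

# TOST: is the duration slope equivalent to zero within +/- 0.005 uV/ms?
low, high = -0.005, 0.005
p_lower = 1 - stats.t.cdf((est - low) / se, dof)
p_upper = stats.t.cdf((est - high) / se, dof)
print("duration slope:", round(est, 4), "equivalence p:", round(max(p_lower, p_upper), 4))
```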
Project description: This paper studies the effect of income (wealth) inequality on interpersonal trust. We propose a theoretical framework that links trust, trustworthiness, and inequality. The key feature is that agents do not necessarily observe the entire income distribution but base their assessment on reference groups (i.e., they might hold a biased view of reality). In this framework, the negative impact of inequality on interpersonal trust is related to the individual-specific perception of inequality. This has important implications for empirical analyses, since researchers typically do not observe perceptions but only objective measures of inequality (e.g., the Gini coefficient). We show that the use of the latter is appropriate only under restrictive assumptions and in general results in an underestimation of the true effect. An unbiased estimate of the effect of inequality on trust can be obtained with a measure of individual-specific perceptions of inequality. Survey data support our framework: perceptions of higher inequality exert a strong negative effect on trust. Supplementary information: The online version contains supplementary material available at 10.1007/s10888-021-09490-x.
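The underestimation argument can be illustrated with a toy simulation: if trust responds to individually perceived inequality, and perceptions only partially track the objective measure, then regressing trust on the objective measure (e.g., a Gini coefficient) attenuates the estimated effect toward zero. All numbers below are illustrative assumptions, not the paper's model.

```python
# Toy illustration of the econometric point above: when trust responds to
# *perceived* inequality and perceptions only partially track the objective
# measure, a regression on the objective measure underestimates the effect.
import numpy as np

rng = np.random.default_rng(5)
n = 5_000
gini = rng.normal(0.35, 0.05, n)                        # objective inequality
# Perception only partially tracks the objective measure (reference groups);
# the 0.5 tracking coefficient is an arbitrary assumption.
perceived = 0.5 * gini + rng.normal(0.175, 0.04, n)
trust = 1.0 - 2.0 * perceived + rng.normal(0, 0.1, n)    # trust responds to perception

def ols_slope(x, y):
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

print("slope on perceived inequality:", round(ols_slope(perceived, trust), 2))  # ~ -2.0
print("slope on objective Gini:", round(ols_slope(gini, trust), 2))             # ~ -1.0 (attenuated)
```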
Project description: Background: Deep convolutional neural networks (DCNNs), with their strong performance, have attracted the attention of researchers from many disciplines. Studies of DCNNs and of biological neural systems have inspired each other reciprocally: brain-inspired neural networks not only achieve strong performance but also serve as computational models of biological neural systems. Methods: In this study, we trained and tested several typical DCNNs (AlexNet, VGG11, VGG13, VGG16, DenseNet, MobileNet, and EfficientNet) on a face ethnicity categorization task (Experiment 1) and an emotion categorization task (Experiment 2). We measured the performance of the DCNNs by testing them with original and lossy visual inputs (various kinds of image occlusion) and compared their performance with that of human participants. Moreover, the class activation map (CAM) method allowed us to visualize the foci of "attention" of these DCNNs. Results: VGG13 performed best: its performance closely resembled that of human participants in terms of psychophysical measurements, it utilized similar areas of the visual input as humans, and its performance was the most consistent across inputs with various kinds of impairments. Discussion: Overall, we examined the processing mechanisms of DCNNs using a new paradigm and found that VGG13 may be the most human-like DCNN in this task. This study also highlights a possible paradigm for studying and developing DCNNs using human perception as a benchmark.
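One part of the evaluation described above, probing a network with occluded inputs, can be sketched as follows with torchvision. The ImageNet-pretrained VGG13, the input file name, and the simple square occluder are stand-ins; the study's face-trained models and occlusion types are not reproduced here.

```python
# Minimal sketch: probe a pretrained VGG13 with an occluded version of an
# image and compare its predictions. ImageNet weights, "face.jpg", and the
# square occluder are stand-ins for the study's face-trained setup.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.vgg13(weights=models.VGG13_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def top1(img_tensor):
    with torch.no_grad():
        return model(img_tensor.unsqueeze(0)).argmax(dim=1).item()

img = preprocess(Image.open("face.jpg").convert("RGB"))  # hypothetical input file
occluded = img.clone()
occluded[:, 80:144, 80:144] = 0.0        # mask a square region of the (normalized) image

print("original prediction:", top1(img))
print("occluded prediction:", top1(occluded))
```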