Project description:Apparent biological motion is the perception of plausible movement when two alternating images depicting the initial and final phases of an action are presented at specific stimulus onset asynchronies. Here, we show that subjective apparent biological motion perception is weaker when actions are observed from a first-person rather than a third-person visual perspective. These findings are discussed in the context of sensorimotor contributions to body ownership.
Project description:The perspective from which one perceives one's own action affects its speed and accuracy. In the present study, we investigated changes in accuracy and kinematics when subjects threw darts from the first-person perspective and from third-person perspectives with varying angles of view. To model the third-person perspective, subjects viewed themselves and the scene through a virtual reality head-mounted display (VR HMD). The scene was supplied by a video feed from a camera located above and behind the subjects, at 0, 20, or 40 degrees to the right. The 28 subjects wore a motion capture suit that registered right-hand displacement, velocity, and acceleration, as well as torso rotation, during the dart throws. The results indicated that mean accuracy shifted in the direction opposite to the change in camera location along the vertical axis and in the congruent direction along the horizontal axis. Kinematic data revealed a smaller angle of torso rotation to the left in all third-person perspective conditions before and during the throw. Hand amplitude, speed, and acceleration were lower in the third-person conditions than in the first-person condition, both before the peak velocity of the hand moving toward the target and after the peak velocity while lowering the hand. Moreover, just before the time of peak velocity, the hand movement angle was smaller in the third-person conditions with 20- and 40-degree angles of view than in the first-person condition, and this difference between conditions predicted the changes in mean throwing accuracy. Thus, the results of this study indicate that the subject's perceived localization contributes to the transformation of the motor program.
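As an aside, the sketch below shows one plausible way peak hand velocity and acceleration could be derived from motion-capture position samples. The 120 Hz sampling rate, the array shapes, the synthetic data, and the function name hand_kinematics are assumptions made for illustration, not details taken from the study.

```python
import numpy as np

def hand_kinematics(positions, fs=120.0):
    """Illustrative hand kinematics from 3-D position samples.

    positions : (n_samples, 3) array of x, y, z coordinates in metres
    fs        : sampling rate in Hz (120 Hz is an assumption for this sketch)
    """
    dt = 1.0 / fs
    velocity = np.gradient(positions, dt, axis=0)      # per-axis velocity, m/s
    speed = np.linalg.norm(velocity, axis=1)            # scalar hand speed
    acceleration = np.gradient(velocity, dt, axis=0)    # per-axis acceleration, m/s^2
    peak_idx = int(np.argmax(speed))                     # sample index of peak hand speed
    return velocity, acceleration, peak_idx

# Synthetic example: a short accelerating reach along the y axis.
t = np.linspace(0.0, 0.5, 60)
demo = np.column_stack([np.zeros_like(t), t ** 2, np.zeros_like(t)])
vel, acc, peak = hand_kinematics(demo, fs=120.0)
print(f"peak hand speed {np.linalg.norm(vel[peak]):.2f} m/s at sample {peak}")
```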
Project description:It is well established that aesthetic appreciation is related to activity in several different brain regions. Most prior studies have focused on identifying the neural correlates of beauty or liking ratings. Less attention has been paid to the fact that humans are surrounded by objects that leave them aesthetically indifferent or with a negative aesthetic impression. Here we explore the neural substrate of such experiences. Given the neuroimaging techniques used so far, little is known about the temporal features of this brain activity. By means of magnetoencephalography we registered the moment at which brain activity differed while participants viewed images they considered beautiful or not. Results show that the first differential activity appears between 300 and 400 ms after stimulus onset. During this period, activity in the right lateral orbitofrontal cortex (lOFC) was greater when participants rated visual stimuli as not beautiful than when they rated them as beautiful. We argue that this activity is associated with the formation of an initial negative aesthetic impression, driven by the relative hedonic value of stimuli regarded as not beautiful. Additionally, our results contribute to the understanding of the functional roles of the lOFC.
Project description:Episodic autobiographical memories are characterized by a spatial context and an affective component. But how do affective and spatial aspects interact? Does affect modulate the way we encode the spatial context of events? We investigated how one element of affect, namely aesthetic liking, modulates memory for location, in three online experiments (n = 124, 79, and 80). Participants visited a professionally curated virtual art exhibition. They then relocated previously viewed artworks on the museum map and reported how much they liked them. Across all experiments, liking an artwork was associated with increased ability to recall the wall on which it was hung. The effect was not explained by viewing time and appeared to modulate recognition speed. The liking-wall memory effect remained when participants attended to abstractness, rather than liking, and when testing occurred 24 h after the museum visit. Liking also modulated memory for the room where a work of art was hung, but this effect primarily involved reduced room memory for disliked artworks. Further, the liking-wall memory effect remained after controlling for effects of room memory. Recalling the wall requires recalling one's facing direction, so our findings suggest that positive aesthetic experiences enhance first-person spatial representations. More generally, a first-person component of positive affect transfers to wider spatial representation and facilitates the encoding of locations in a subject-centered reference frame. Affect and spatial representations are therefore important, and linked, elements of sentience and subjectivity. Memories of aesthetic experiences are also spatial memories of how we encountered a work of art. This linkage may have implications for museum design.
Project description:When building personal relationships, it is important to select optimal partners, even from a first meeting. This study was inspired by the idea that people who smile are considered more trustworthy and attractive. However, this may not always hold in daily life. Previous studies have used a relatively simple method of judging others by presenting a photograph of a single person's face. To move beyond this approach and examine more complex situations, we presented participants with the faces of two people confronting each other and asked them to judge the pair from a third-person perspective. Across three experiments, participants judged which of the two persons was more appropriate for forming an alliance, more trustworthy, or more attractive, respectively. In all experiments, images were shown for a short (500 ms) or a long (5 s) duration. In all three experiments, participants were more likely to choose persons with happy faces than those with neutral, sad, or angry faces when the presentation was short. In contrast, facial expressions did not affect these judgments when the presentation was long. Instead, judgments correlated with personality estimated from the model's neutral face in a single-person presentation. These results suggest that although facial expressions can affect judgments of others when observing two-person confrontations from a third-person perspective, participants look beyond expressions when they have more time to elaborate their judgments.
Project description:Human beings often observe other people's social interactions without being part of them. Whereas the involvement of some brain regions (e.g. the amygdala) has been extensively examined, the involvement of the precuneus remains to be determined. Here we examined the role of the precuneus in the third-person perspective on social interaction using functional magnetic resonance imaging (fMRI). Participants performed a socially irrelevant task while watching the biological motion of two agents acting in either typical (congruent with social conventions) or atypical (incongruent with social conventions) ways. Compared to typical displays, atypical displays elicited greater activation in the central and posterior bilateral precuneus and in frontoparietal and occipital regions. Whereas the right precuneus also responded with greater activation to upside-down than to upright displays, the left precuneus did not. Correlation and effective connectivity analyses added consistent evidence of an interhemispheric asymmetry between the right and left precuneus. These findings suggest that the precuneus reacts to violations of social expectations and plays a crucial role in the third-person perspective on others' interactions, even when the social context is unattended.
Project description:Humans exhibit colour vision variations due to genetic polymorphisms, with trichromacy being the most common, while some people are classified as dichromats. Whether genetic differences in colour vision affect how people view complex images remains unknown. Here, we used eye-tracking to investigate how people with different colour vision directed their gaze at aesthetic paintings while freely viewing digital renderings of them, and we assessed individual impressions through a decomposition analysis of adjective ratings of the images. Gaze-concentrated areas were more highly correlated among trichromats than among dichromats. However, compared with the effect of a brief dichromatic experience with simulated images, innate colour vision differences had little effect on impressions. These results indicate that chromatic information serves as a cue for guiding attention, whereas each person's impression is generated according to their own sensory experience and normalized through their own colour space.
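For illustration only, the sketch below shows one way gaze-density maps could be built from fixation coordinates and correlated between observer groups. The grid resolution, the random fixation data, and the function names group_gaze_map and map_correlation are assumptions, not the analysis pipeline used in the study.

```python
import numpy as np

def group_gaze_map(fixations, shape=(48, 64)):
    """Bin fixation coordinates into a normalised gaze-density map.

    fixations : (n, 2) array of (row, col) fixation positions in map units
    shape     : grid resolution for the density map (an assumed value)
    """
    heat, _, _ = np.histogram2d(
        fixations[:, 0], fixations[:, 1],
        bins=shape, range=[[0, shape[0]], [0, shape[1]]]
    )
    return heat / heat.sum()

def map_correlation(map_a, map_b):
    """Pearson correlation between two flattened gaze-density maps."""
    return np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]

# Example with random fixations standing in for two observer groups.
rng = np.random.default_rng(0)
tri = group_gaze_map(rng.uniform([0, 0], [48, 64], size=(500, 2)))
dich = group_gaze_map(rng.uniform([0, 0], [48, 64], size=(500, 2)))
print(f"inter-group gaze map correlation: {map_correlation(tri, dich):.2f}")
```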
Project description:Robotic algorithms that augment movement errors have been proposed as promising training strategies to enhance motor learning and neurorehabilitation. However, most research effort has focused on rehabilitation of the upper limbs, probably because large movement errors are especially dangerous during gait training, as they might result in stumbling and falling. Furthermore, systematic large movement errors might limit the participants' motivation during training. In this study, we investigated the effect of training with novel error-modulating strategies, which guarantee a safe training environment, on motivation and learning of a modified asymmetric gait pattern. Thirty healthy young participants walked in the exoskeletal robotic system Lokomat while performing a foot target-tracking task, which required increased hip and knee flexion in the dominant leg. Learning of the asymmetric gait pattern was evaluated under three different strategies: (i) no disturbance: no robot disturbance/guidance was applied, (ii) haptic error amplification: unsafe and discouraging large errors were limited with haptic guidance, while haptic error amplification enhanced awareness of small errors relevant for learning, and (iii) visual error amplification: visually observed errors were amplified in a virtual reality environment. We also evaluated whether increasing movement variability during training, by adding randomly varying haptic disturbances on top of the other training strategies, further enhances learning. We analyzed participants' motor performance and self-reported intrinsic motivation before, during and after training. We found that training with the novel haptic error amplification strategy did not hamper motor adaptation and enhanced transfer of the practiced asymmetric gait pattern to free walking. Training with visual error amplification, on the other hand, increased errors during training and hampered motor learning. Participants who trained with visual error amplification also reported reduced perceived competence. Adding haptic disturbance increased movement variability during training but did not have a significant effect on motor adaptation, probably because training with haptic disturbance on top of visual and haptic error amplification decreased the participants' feelings of competence. The proposed novel haptic error-modulating controller, which amplifies small task-relevant errors while limiting large errors, outperformed visual error augmentation and might provide a promising framework to improve robotic gait training outcomes in neurological patients.
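A minimal sketch of an error-modulating force law of the general kind described (amplifying small task-relevant errors while limiting large ones) is given below. All gains, thresholds, and the function name error_modulated_force are illustrative assumptions; this is not the Lokomat controller used in the study.

```python
import numpy as np

def error_modulated_force(error, k_amp=40.0, k_guide=80.0, e_small=0.02, e_large=0.06):
    """Sketch of an error-modulating force law for one degree of freedom (SI units assumed).

    error   : tracking error in metres (actual minus target foot position)
    k_amp   : gain pushing small errors further from the target (amplification)
    k_guide : gain pulling large errors back toward the target (safety limit)
    e_small : errors below this magnitude are amplified
    e_large : errors above this magnitude are corrected by haptic guidance
    """
    e = np.asarray(error, dtype=float)
    force = np.zeros_like(e)
    small = np.abs(e) < e_small
    large = np.abs(e) > e_large
    force[small] = k_amp * e[small]                                      # make small errors felt
    force[large] = -k_guide * (e[large] - np.sign(e[large]) * e_large)   # limit large errors
    return force  # errors between the two thresholds receive no force

print(error_modulated_force([0.01, 0.04, 0.08]))  # amplified, untouched, limited
```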
Project description:Life experience suggests that visual aesthetic fatigue (VAF) is quite common. However, few academic works have focused on VAF in landscapes, so our understanding of the issue is limited, as is our knowledge of measures that might mitigate it. To address these gaps, this study investigated VAF using 16 photographs taken in urban green spaces in Xuzhou (local landscapes) and Hong Kong (non-local landscapes) as stimuli. The visual aesthetic quality (VAQ) of the 16 photographs was evaluated four times, at one-week intervals, by the same college students. Statistical analysis demonstrated that VAF occurred in urban green spaces. Male respondents showed higher VAF than female respondents. There were no significant differences in VAQ or VAF between local and non-local landscapes. No landscape characteristic significantly correlated with or predicted VAF, implying that it is very difficult to mitigate VAF through the design and management of static landscapes. Supplementary information: the online version contains supplementary material available at 10.1007/s41742-023-00517-x.
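As a hedged illustration, the sketch below computes a simple per-photograph fatigue index as the drop in mean VAQ rating from the first to the last of the four evaluation sessions. The rating scale, sample size, synthetic data, and the function name aesthetic_fatigue are assumptions for the example, not the statistical analysis reported in the study.

```python
import numpy as np

def aesthetic_fatigue(ratings):
    """Illustrative per-photo fatigue index: drop in mean rating
    from the first to the last evaluation session.

    ratings : (n_respondents, n_sessions, n_photos) array of VAQ scores
    """
    session_means = ratings.mean(axis=0)          # (n_sessions, n_photos)
    return session_means[0] - session_means[-1]   # positive values indicate fatigue

# Example: 30 respondents, 4 weekly sessions, 16 photographs (all assumed).
rng = np.random.default_rng(1)
scores = rng.normal(7.0, 1.0, size=(30, 4, 16))
scores -= 0.2 * np.arange(4)[None, :, None]       # simulate a mild rating decline
print(aesthetic_fatigue(scores).round(2))
```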
Project description:In this essay, we support the claim that, at the current level of scientific advancement, (a) some first-person accounts cannot be reduced to their third-person neural and psychophysiological correlates, and (b) these first-person accounts are the only information to rely on when it is necessary to analyse qualia contents. Consequently, for many phenomena, first-person accounts are the only reliable source of information available, and knowledge of their neural and psychophysical correlates does not offer any additional information about them.