Project description: There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object specified by auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related to it, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted the most attention, semantically related objects were also fixated earlier than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that the effect was not stimulus-driven. We discuss the implications for existing models of visual cognition.
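As a rough illustration of the kind of linear mixed-effects analysis this description mentions, the minimal sketch below fits fixation latency against relatedness condition with by-subject random intercepts. The column names (`subject`, `condition`, `latency_ms`) and the placeholder data are illustrative assumptions, not the authors' actual variables or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data standing in for the real trial-level fixation latencies.
rng = np.random.default_rng(0)
n = 300
trials = pd.DataFrame({
    "subject": rng.integers(1, 21, n),
    "condition": rng.choice(["semantic", "visual", "unrelated"], n),
    "latency_ms": rng.normal(600, 120, n),
})

# Fixation latency as a function of relatedness, with the unrelated
# condition as the reference level and random intercepts by subject.
model = smf.mixedlm(
    "latency_ms ~ C(condition, Treatment('unrelated'))",
    data=trials,
    groups=trials["subject"],
)
print(model.fit().summary())
```

In a model of this form, negative coefficients for the semantic and visual conditions relative to the unrelated baseline would correspond to the earlier fixations reported above.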
Project description: Attentional processes are generally assumed to be involved in multiple object tracking (MOT). The attentional capture paradigm is regularly used to study conditions of attentional control, but it has not yet been used to assess the influence of sudden-onset distractor stimuli in MOT. We investigated whether attentional capture occurs in MOT: are onset distractors processed at all in dynamic attentional tasks? We found that sudden-onset distractors were effective in lowering probe detection, thus demonstrating attentional capture. Tracking performance as a dependent measure was not affected. The attentional capture effect persisted under higher tracking load (Experiment 2) and increased dramatically when the onset distractor was presented less frequently (Experiment 3). Tracking performance suffered only when onset distractors were presented serially with very short time gaps in between, effectively disrupting the re-engagement of attention on the tracking set (Experiment 4). We argue that rapid disengagement and re-engagement of attention on target objects, together with a more basic process that continuously provides location information, allow observers to manage strong disruptions of attention during tracking.
Project description: Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: whenever one feature dimension is selected for entry into VWM, the others are extracted as well. Most studies revealing OBE have probed an 'irrelevant-change distracting effect', in which changes to irrelevant features dramatically affect performance on the target feature. However, the presence of irrelevant-feature changes may itself alter how participants process the stimuli, leading to false-positive results. The current study conducted a strict examination of OBE in VWM by probing whether irrelevant features guide the deployment of attention in visual search. Participants memorized an object's colour while ignoring its shape, and concurrently performed a visual-search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, in which there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that task-irrelevant shape was extracted into VWM, supporting OBE in VWM.
Project description: In the field of spatial coding it is well established that we mentally represent objects for action not only relative to ourselves, egocentrically, but also relative to other objects (landmarks), allocentrically. Several factors facilitate allocentric coding, for example, when objects are task-relevant or form stable and reliable spatial configurations. What is unknown, however, is how object semantics facilitates the formation of such spatial configurations and thus allocentric coding. Here we demonstrate that (i) the semantic similarity of objects can be quantified and that (ii) semantically similar objects can serve as a cluster of landmarks that are coded allocentrically. Participants arranged a set of objects based on their semantic similarity, and these arrangements were then entered into a similarity analysis. Based on the results, we created two semantic classes of objects, natural and man-made, which we used in a virtual reality experiment. Participants performed memory-guided reaching movements toward the initial position of a target object in a scene while either semantically congruent or incongruent landmarks were shifted. We found that reaching endpoints systematically deviated in the direction of the landmark shift. Importantly, this effect was stronger for shifts of semantically congruent landmarks. Our findings suggest that object semantics facilitates allocentric coding by creating stable spatial configurations.
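One way to run the similarity analysis this description alludes to is to treat each participant's arrangement as 2-D coordinates, average the pairwise inter-object distances into a single dissimilarity matrix, and cluster it into two classes. The sketch below assumes that data layout; the shapes, placeholder data, and clustering choices are illustrative, not the authors' actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder arrangements: 30 participants, 20 objects, (x, y) positions.
rng = np.random.default_rng(0)
arrangements = [rng.random((20, 2)) for _ in range(30)]

# Pairwise inter-object distances per participant, averaged into one
# dissimilarity matrix: objects placed closer together count as more similar.
mean_dissim = np.mean([pdist(a, metric="euclidean") for a in arrangements], axis=0)

# Cluster objects into two semantic classes (e.g., natural vs. man-made).
labels = fcluster(linkage(mean_dissim, method="average"), t=2, criterion="maxclust")
print(squareform(mean_dissim).shape, labels)
```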
Project description: Humans are highly sensitive to the statistical relationships between features and objects within visual scenes. Inconsistent objects within scenes (e.g., a mailbox in a bedroom) instantly jump out at us and are known to catch our attention. However, it is debated whether such semantic inconsistencies boost memory for the scene, impair it, or have no influence at all. Here, we examined the influence of scene-object consistency on memory representations, measured through drawings made during recall. Participants (N = 30) were eye-tracked while studying 12 real-world scene images, each with an added object that was either semantically consistent or inconsistent. After a 6-minute distractor task, they drew the scenes from memory while their pen movements were tracked electronically. Online scorers (N = 1,725) rated each drawing for diagnosticity, object detail, spatial detail, and memory errors. Inconsistent scenes were recalled more frequently but contained less object detail. Further, inconsistent objects elicited more errors reflecting looser memory binding (e.g., migration across images). These results point to a dual effect in memory: boosted global (scene) but diminished local (object) information. Finally, participants fixated longest on inconsistent objects, but these fixations during study were not correlated with recall performance, time, or drawing order. In sum, these results show a nuanced effect of scene inconsistencies on memory detail during recall.
Project description: Humans can flexibly select locations, features, or objects in a visual scene for prioritized processing. Although it is relatively straightforward to manipulate location- and feature-based attention, it is difficult to isolate object-based selection. Because objects are always composed of features, studies of object-based selection can often be interpreted as the selection of a combination of locations and features. Here we examined the neural representation of attentional priority in a paradigm that isolated object-based selection. Participants viewed two superimposed gratings that continuously changed their color, orientation, and spatial frequency, such that the gratings traversed exactly the same feature values within a trial. Participants were cued at the beginning of each trial to attend to one or the other grating in order to detect a brief luminance increment, while their brain activity was measured with fMRI. Using multi-voxel pattern analysis, we were able to decode the attended grating from activity patterns in a set of frontoparietal areas, including the anterior intraparietal sulcus (IPS), frontal eye field (FEF), and inferior frontal junction (IFJ). Thus, a perceptually varying object can be represented by patterned neural activity in these frontoparietal areas. We suggest that these areas can encode attentional priority for abstract, high-level objects independent of their locations and features.
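A minimal sketch of the multi-voxel pattern analysis named above: a linear classifier trained on trial-wise voxel patterns from a region of interest, cross-validated across scanning runs, with chance at 50%. The data shapes, run structure, and classifier choice are assumptions for illustration, not the study's reported pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Placeholder ROI data: 120 trials x 500 voxels, binary attended-grating labels.
rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500
X = rng.standard_normal((n_trials, n_voxels))  # trial-wise voxel patterns
y = rng.integers(0, 2, n_trials)               # attended grating: 0 or 1
runs = np.repeat(np.arange(10), 12)            # scanning-run labels

# Linear SVM decoding, leave-one-run-out cross-validation.
scores = cross_val_score(SVC(kernel="linear"), X, y,
                         cv=LeaveOneGroupOut(), groups=runs)
print(f"decoding accuracy: {scores.mean():.2f}")
```

Above-chance cross-validated accuracy in an ROI is what licenses the claim that the area carries information about the attended object.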
Project description: The anterior temporal lobe (ATL) is considered a crucial area for the representation of transmodal concepts. Recent evidence suggests that specific regions within the ATL support the representation of individual object concepts, as shown by studies combining multivariate analysis methods with explicit measures of semantic knowledge. This research seeks to further our understanding by probing conceptual representations at a spatially and temporally resolved neural scale. Representational similarity analysis was applied to human intracranial recordings from anatomically defined lateral-to-medial ATL sub-regions. Neural similarity patterns were tested against semantic similarity measures, where semantic similarity was defined by a hybrid corpus-based and feature-based approach. Analyses showed that semantic effects in the perirhinal cortex, in the medial ATL, were significant around 200 to 400 ms and were greater than those in more lateral ATL regions. Further, semantic effects were present in low-frequency (theta and alpha) oscillatory phase signals. These results provide converging support for the view that more medial regions of the ATL support the representation of basic-level visual object concepts within the first 400 ms, and they bridge prior fMRI and MEG work by offering detailed evidence for the presence of conceptual representations within the ATL.
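The time-resolved representational similarity analysis described here can be sketched as follows: build a neural representational dissimilarity matrix (RDM) per time window and rank-correlate it with a model semantic RDM. All shapes, window lengths, and distance metrics below are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Placeholder recordings: 60 object concepts x 32 channels x 500 time samples,
# plus a placeholder semantic model RDM (condensed vector form).
rng = np.random.default_rng(0)
n_concepts, n_channels, n_times = 60, 32, 500
epochs = rng.standard_normal((n_concepts, n_channels, n_times))
semantic_rdm = pdist(rng.random((n_concepts, 10)))

window = 25  # samples per sliding window (assumed)
rsa_timecourse = []
for start in range(0, n_times - window, window):
    # Neural RDM from the spatiotemporal pattern within this window.
    patterns = epochs[:, :, start:start + window].reshape(n_concepts, -1)
    neural_rdm = pdist(patterns, metric="correlation")
    rho, _ = spearmanr(neural_rdm, semantic_rdm)
    rsa_timecourse.append(rho)
print(np.round(rsa_timecourse, 3))
```

A reliable neural-to-model correlation confined to windows around 200 to 400 ms would correspond to the timing of the semantic effects reported above.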
Project description: Visual attention studies have demonstrated that the shape of space-based selection can be governed by salient object contours: when a portion of an enclosed space is cued, the selected region extends to the full enclosure. Although this form of object-based attention (OBA) is well established, a continuing question is whether this selection is obligatory or under voluntary control. We attempted to dissociate these alternatives by interrogating the locus coeruleus-norepinephrine (LC-NE) system, which is known to fluctuate with top-down attention, during a classic two-rectangle paradigm in a sample of healthy human participants (N = 36). An endogenous spatial pre-cue directed voluntary space-based attention (SBA) to one end of a rectangular frame. We manipulated the reliability of the cue, such that targets appearing at an uncued location within the frame occurred at low or moderate frequencies. Phasic pupillary responses time-locked to the cue display served as a noninvasive measure of LC-NE activity, reflecting top-down processing of the spatial cue. If OBA is controlled analogously to SBA, then object selection should emerge only when it is behaviorally expedient and when LC-NE activity reflects a high degree of top-down attention to the cue display. Our results bore this out. We therefore conclude that OBA was voluntarily controlled, and we further show that phasic norepinephrine may modulate attentional strategy.
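Extracting the cue-locked phasic pupillary responses mentioned above typically means epoching the continuous pupil trace around each cue onset and subtracting a pre-cue baseline. The sketch below assumes that standard approach; the sampling rate, window lengths, and placeholder signals are illustrative, not the study's recording parameters.

```python
import numpy as np

# Placeholder continuous pupil trace and cue-onset sample indices.
rng = np.random.default_rng(0)
fs = 250                                    # samples per second (assumed)
pupil = rng.standard_normal(100_000)        # continuous pupil-diameter trace
cue_onsets = np.arange(1_000, 90_000, 2_000)

pre, post = int(0.5 * fs), int(2.0 * fs)    # 500 ms baseline, 2 s response window
epochs = np.stack([pupil[t - pre:t + post] for t in cue_onsets])

# Baseline-correct each epoch by its mean pre-cue pupil size; the mean
# post-cue deflection then indexes the phasic, LC-NE-linked response.
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)
phasic = epochs[:, pre:].mean(axis=1)
print(phasic.shape, phasic.mean())
```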
Project description: Eye-tracking studies using arrays of objects have demonstrated that some high-level processing of object semantics can occur in extra-foveal vision, but its role in the allocation of early overt attention is still unclear. This eye-tracking visual search study contributes novel findings by examining the roles of object-to-object semantic relatedness and visual saliency in search responses and eye-movement behaviour across arrays of increasing size (3, 5, or 7 objects). Our data show that a critical object was looked at earlier and for longer when it was semantically unrelated, rather than related, to the other objects in the display, both when it was the search target (target-present trials) and when it was a target's semantically related competitor (target-absent trials). Semantic relatedness effects manifested already during the very first fixation after array onset, were found consistently across set sizes, and were independent of low-level visual saliency, which played no role. We conclude that object semantics can be extracted early in extra-foveal vision and can capture overt attention from the very first fixation. These findings pose a challenge to models of visual attention that assume overt attention is guided by the visual appearance of stimuli rather than by their semantics.
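The first-fixation measures this description reports can be summarized per condition and set size from trial-level eye-movement records, as in the sketch below. The column names (`relatedness`, `set_size`, `first_fix_latency_ms`, `first_fix_on_critical`) and placeholder data are illustrative assumptions about how such records might be tabulated.

```python
import numpy as np
import pandas as pd

# Placeholder trial-level records standing in for the real eye-tracking data.
rng = np.random.default_rng(1)
n = 600
fix = pd.DataFrame({
    "relatedness": rng.choice(["related", "unrelated"], n),
    "set_size": rng.choice([3, 5, 7], n),
    "first_fix_latency_ms": rng.normal(400, 90, n),
    "first_fix_on_critical": rng.integers(0, 2, n),
})

# Latency to first fixation on the critical object, and the proportion of
# trials on which the very first fixation landed on it, by condition.
summary = fix.groupby(["relatedness", "set_size"]).agg(
    median_latency_ms=("first_fix_latency_ms", "median"),
    p_first_fixation=("first_fix_on_critical", "mean"),
)
print(summary)
```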