Eye-tracking technology in identifying visualizers and verbalizers: data on eye-movement differences and detection accuracy.
ABSTRACT: Data in this article reveal the eye-movement differences between visualizers and verbalizers viewing four pictures-in-text, analyzed through gaze paths and fixation data (fixation duration, fixation count, and average time per fixation). After importing the documents into a Tobii eye tracker, the authors elicited participants' natural reading habits, recorded their eye-movement data, and predicted whether participants were visualizers or verbalizers based on the Felder and Silverman Learning Style Model (FSLSM). By comparing these predictions with self-report results from the Index of Learning Styles (ILS) questionnaire, the authors obtained accuracy results for identifying visualizers and verbalizers with eye-tracking technology. The data reveal the natural preferences of people with different styles and can inform future studies of adaptive learning systems, individual differences, the neuroscience of reading habits, and individualized instruction.
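The fixation measures named above are straightforward to derive once fixations have been segmented. Below is a minimal sketch, assuming fixations arrive as (start_ms, end_ms) pairs; this data layout is hypothetical, not Tobii's actual export format.

```python
# Minimal sketch: compute total fixation duration, fixation count, and
# average time per fixation from (start_ms, end_ms) pairs. The input
# layout is a hypothetical export format, not Tobii's actual API.

def fixation_metrics(fixations):
    """Return total fixation duration, fixation count, and mean duration (ms)."""
    durations = [end - start for start, end in fixations]
    count = len(durations)
    total = sum(durations)
    return {
        "total_ms": total,
        "count": count,
        "mean_ms": total / count if count else 0.0,
    }

# Example: three fixations recorded on one picture-in-text region.
print(fixation_metrics([(0, 220), (300, 480), (650, 905)]))
# -> {'total_ms': 655, 'count': 3, 'mean_ms': 218.33...}
```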
Project description: Eye tracking has been widely used for decades in vision research, language and usability. However, most prior research has focused on large desktop displays, using specialized eye trackers that are expensive and do not scale. Little is known about eye-movement behavior on phones, despite their pervasiveness and the large amount of time people spend on them. We leverage machine learning to demonstrate accurate smartphone-based eye tracking without any additional hardware. We show that the accuracy of our method is comparable to that of state-of-the-art mobile eye trackers that are 100x more expensive. Using data from over 100 opted-in users, we replicate key findings from previous eye-movement research on oculomotor tasks and saliency analyses during natural image viewing. In addition, we demonstrate the utility of smartphone-based gaze for detecting reading-comprehension difficulty. Our results show the potential for scaling eye-movement research by orders of magnitude to thousands of participants (with explicit consent), enabling advances in vision research, accessibility and healthcare.
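The abstract does not detail the model itself, so the sketch below only illustrates the shape of the underlying regression problem: features extracted from the camera's view of the eyes are mapped to 2-D on-screen gaze coordinates. The features, model choice, and data here are stand-ins, not the study's method.

```python
# Illustration only: map per-frame eye-region features to (x, y) gaze
# coordinates with a generic regressor. Features and data are synthetic
# stand-ins; the study itself trains a dedicated ML model on camera images.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                      # stand-in eye-region features
w = rng.normal(size=(8, 2))
y = X @ w + rng.normal(scale=0.1, size=(500, 2))   # synthetic gaze targets

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])                        # "calibration" frames
err = np.linalg.norm(model.predict(X[400:]) - y[400:], axis=1).mean()
print(f"mean gaze error on held-out frames: {err:.3f} (arbitrary units)")
```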
Project description: Eye tracking systems have recently seen a diversity of novel calibration procedures, including smooth-pursuit and vestibulo-ocular-reflex-based calibrations. These approaches allow more data to be collected than the standard 9-point calibration. However, computation of the mapping function that produces planar gaze positions from input pupil features is mostly based on polynomial regression, and little work has investigated alternative approaches. This paper fills that gap by providing a new calibration computation method based on symbolic regression. Instead of making prior assumptions about the polynomial transfer function between input and output records, symbolic regression seeks an optimal model among different types of functions and their combinations, an approach that offers an interesting perspective in terms of flexibility and accuracy. We therefore designed two experiments in which we collected ground-truth data to compare vestibulo-ocular and smooth-pursuit calibrations based on symbolic regression, each using either a marker or a finger as the target, yielding four different calibrations. As a result, we improved calibration accuracy by more than 30%, at reasonable extra computation time.
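As a concrete illustration of the calibration computation, the sketch below fits a symbolic-regression model from pupil features to one gaze axis. The paper does not name a specific tool; gplearn's SymbolicRegressor and the synthetic data here are assumptions for demonstration only.

```python
# Sketch: learn a pupil-feature -> gaze-x mapping by symbolic regression
# instead of a fixed polynomial form. gplearn is one off-the-shelf choice;
# the paper does not prescribe this library, and the data are synthetic.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(1)
pupil = rng.uniform(-1, 1, size=(200, 2))     # (px, py) pupil features
gaze_x = 3.2 * pupil[:, 0] + 0.4 * pupil[:, 0] * pupil[:, 1]  # ground truth

est = SymbolicRegressor(population_size=500, generations=10,
                        function_set=("add", "sub", "mul"),
                        random_state=0)
est.fit(pupil, gaze_x)                        # evolves a transfer function
print(est._program)                           # e.g. add(mul(...), mul(...))
```

A second regressor would be fitted for the vertical gaze axis; the appeal over polynomial regression is that the functional form itself is searched rather than assumed.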
Project description: The quantitative assessment of eye-tracking data quality is critical for ensuring the accuracy and precision of gaze-position measurements. However, researchers often report the manufacturer's optimal specifications rather than empirical data about the accuracy and precision of the eye-tracking data being presented. Indeed, a recent report indicates that fewer than half of the eye-tracking researchers surveyed take the eye tracker's accuracy into account when determining areas of interest for analysis, an oversight that could impact the validity of reported results and conclusions. Accordingly, we designed a calibration verification protocol to support independent quality assessment of eye-tracking data and examined whether accuracy and precision varied between three age groups of participants. We also examined the degree to which our externally quantified quality-assurance metrics aligned with those reported by the manufacturer. We collected data in standard laboratory conditions to demonstrate our method, to illustrate how data quality can vary with participant age, and to give a simple example of the degree to which data quality can differ from manufacturer-reported values. In the sample data we collected, accuracy for adults was within the range advertised by the manufacturer, but for school-aged children, accuracy and precision measures were outside this range. Data from toddlers were less accurate and less precise than data from adults. Based on an a priori inclusion criterion, we determined that we could exclude approximately 20% of toddler participants for poor calibration quality as quantified with our calibration assessment protocol. We recommend implementing and reporting quality-assessment protocols for any eye-tracking task with participants of any age or developmental ability. We conclude with general observations about our data, recommendations for factors to consider when establishing data-inclusion criteria, and suggestions for stimulus design that can help accommodate variability in calibration. The methods outlined here may be particularly useful for developmental psychologists who use eye tracking as a tool but who are not experts in eye tracking per se. The calibration verification stimuli and data-processing scripts that we developed, along with step-by-step instructions, are freely available for other researchers.
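For reference, the two quality metrics at issue have standard operational definitions: accuracy as the mean angular offset between recorded gaze and the known verification target, and precision as the RMS of successive sample-to-sample distances during a steady fixation. A minimal sketch, assuming samples have already been converted to degrees of visual angle:

```python
# Accuracy: mean offset (deg) between gaze samples and the known target.
# Precision: RMS of sample-to-sample distances (deg) during a fixation.
# Assumes gaze samples are already in degrees of visual angle.
import numpy as np

def accuracy_deg(gaze, target):
    return np.linalg.norm(gaze - target, axis=1).mean()

def precision_rms_deg(gaze):
    steps = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
    return np.sqrt((steps ** 2).mean())

gaze = np.array([[0.40, 0.10], [0.50, 0.20], [0.45, 0.15], [0.55, 0.10]])
target = np.array([0.0, 0.0])
print(f"accuracy  = {accuracy_deg(gaze, target):.2f} deg")
print(f"precision = {precision_rms_deg(gaze):.2f} deg")
```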
Project description: Eye trackers are a popular tool for studying cognitive, emotional, and attentional processes in different populations (e.g., clinical and typically developing) and in participants of all ages, from infants to the elderly. This broad range of processes and populations implies many inter- and intra-individual differences that need to be taken into account when analyzing eye-tracking data. Standard parsing algorithms supplied by eye-tracker manufacturers are typically optimized for adults and do not account for these individual differences. This paper presents gazepath, an easy-to-use R package that comes with a graphical user interface (GUI) implemented in Shiny (RStudio Inc 2015). The gazepath package combines solutions from the adult and infant literature to provide an eye-tracking parsing method that accounts for individual differences and differences in data quality. We illustrate the usefulness of gazepath with three example data sets. The first example shows how gazepath performs on free-viewing data from infants and adults, compared with standard EyeLink parsing, and that gazepath controls for spurious correlations between fixation durations and data quality in infant data. The second example shows that gazepath performs well on high-quality reading data from adults. The third and last example shows that gazepath can also be used on noisy infant data collected with a Tobii eye tracker at a low (60 Hz) sampling rate.
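gazepath itself is an R package; as a language-neutral illustration of the parsing step it performs, here is a minimal dispersion-threshold (I-DT) fixation detector. Note that gazepath's actual method additionally adapts its thresholds to each participant's data quality, which this fixed-threshold sketch omits.

```python
# Minimal I-DT fixation parser: grow a window while its spatial dispersion
# stays under max_disp, and emit a fixation if it lasts at least min_dur.
# Thresholds are fixed here; gazepath adapts them per participant.
import numpy as np

def idt_fixations(x, y, t, max_disp=1.0, min_dur=100):
    """Return (start_ms, end_ms) fixations from gaze samples in degrees/ms."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        while j + 1 < n:
            wx, wy = x[i:j + 2], y[i:j + 2]
            if (wx.max() - wx.min()) + (wy.max() - wy.min()) > max_disp:
                break
            j += 1
        if t[j] - t[i] >= min_dur:
            fixations.append((t[i], t[j]))
        i = j + 1
    return fixations
```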
Project description: Importance: Objectively measuring how Mohs defect reconstruction changes casual observer attention has important implications for patients and facial plastic surgeons. Objective: To use eye-tracking technology to objectively measure the ability of Mohs facial defect reconstruction to normalize facial attention. Design, Setting, and Participants: This observational outcomes study was conducted at an academic tertiary referral center from January to June 2016. An eye-tracking system was used to record how 82 casual observers directed attention to photographs of 32 patients with Mohs facial defects of varying sizes and locations before and after reconstruction, as well as 16 control faces with no facial defects. Statistical analysis was performed from November 2018 to January 2019. Main Outcomes and Measures: First, the attentional distraction caused by facial defects was quantified in milliseconds of gaze time using eye tracking. Second, the eye-tracking data were analyzed using mixed-effects linear regression to assess the association of facial defect reconstruction with normalized facial attention. Results: The 82 casual observers (63 women and 19 men; mean [SD] age, 34 [12] years) viewed control faces in a similar and consistent fashion, with most attention (65%; 95% CI, 62%-69%) directed at the central triangle, which includes the eyes, nose, and mouth. The eyes were the most visually important feature, capturing a mean of 60% (95% CI, 57%-64%) of fixation time within the central triangle and 39% (95% CI, 36%-43%) of total observer attention. The presence of Mohs defects was associated with statistically significant alterations in this pattern of normal facial attention. The larger the defect and the more centrally it was located, the more attentional distraction was observed, as measured by increased attention on the defect and decreased attention on the eyes, ranging from 729 (95% CI, 526-931) milliseconds for small peripheral defects to 3693 (95% CI, 3490-3896) milliseconds for large central defects. Reconstructive surgery was associated with improved gaze deviations for all faces and with normalized attention directed to the eyes for all faces except those with large central defects. Conclusions and Relevance: Mohs defects are associated with altered facial perception, diverting attention from valuable features such as the eyes. Reconstructive surgery was associated with normalized attentional distraction for many patients with cutaneous Mohs defects. These data are important to patients who want to know how reconstructive surgery could change the way people look at their face. The data also point to the possibility of outcome prediction based on facial defect size and location before reconstruction. Eye tracking is a valuable research tool for outcomes assessment that lays the foundation for understanding how reconstructive surgery may change perception and normalize facial deformity.
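A hedged sketch of what the mixed-effects analysis could look like in code: gaze time on the defect as the outcome, defect properties as fixed effects, and a random intercept per observer to absorb between-observer variation. All column names and data below are hypothetical stand-ins, not the study's.

```python
# Sketch of a mixed-effects model for gaze time, with a random intercept
# per observer. Variable names and data are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "observer_id": rng.integers(0, 20, n),       # 20 casual observers
    "defect_size": rng.uniform(0.5, 4.0, n),     # arbitrary size scale
    "central": rng.integers(0, 2, n),            # 1 = central-triangle defect
})
df["defect_gaze_ms"] = (400 + 500 * df["defect_size"] + 800 * df["central"]
                        + rng.normal(0, 150, n))

model = smf.mixedlm("defect_gaze_ms ~ defect_size + central",
                    data=df, groups=df["observer_id"])
print(model.fit().summary())
```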
Project description: Eye tracking provides a quantitative measure of eye movements during different activities. We report the results of a bibliometric analysis investigating trends in eye-tracking research applied to the study of different medical conditions. We searched the Web of Science Core Collection (WoS) database and analyzed the resulting dataset of 2456 articles using VOSviewer and the Bibliometrix R package. The most represented area was psychiatry (503, 20.5%), followed by neuroscience (465, 18.9%) and developmental psychology (337, 13.7%). Annual scientific production grew by 11.14% and showed exponential growth with three main peaks, in 2011, 2015 and 2017. Extensive collaboration networks were identified between the three countries with the highest scientific production: the USA (35.3%), the UK (9.5%) and Germany (7.3%). Based on term co-occurrence maps and analyses of article sources, we identified autism spectrum disorders as the most investigated condition and conducted specific analyses on the 638 articles related to this topic, which showed an annual scientific production growth of 16.52%. The majority of autism studies used eye tracking to investigate gaze patterns with regard to stimuli related to social interaction. Our analysis highlights the widespread and increasing use of eye tracking in the study of different neurological and psychiatric conditions.
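The "annual scientific production growth" figure reported by Bibliometrix is a compound annual growth rate over yearly publication counts. A small sketch of the arithmetic, with made-up counts chosen only to reproduce the reported 11.14% (these are not the study's data):

```python
# Compound annual growth rate over publication counts. The example counts
# are invented to illustrate the arithmetic behind the reported 11.14%.
def annual_growth_pct(first_count, last_count, n_years):
    return ((last_count / first_count) ** (1 / n_years) - 1) * 100

# e.g., 40 articles in the first year growing to 115 articles 10 years later:
print(f"{annual_growth_pct(40, 115, 10):.2f}% per year")  # ~11.14% per year
```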
Project description: A growing number of virtual reality devices now include eye-tracking technology, which can facilitate oculomotor and cognitive research in VR and enable use cases such as foveated rendering. These applications require different levels of tracking performance, often measured as spatial accuracy and precision. While manufacturers report data-quality estimates for their devices, these typically represent ideal performance and may not reflect real-world data quality. Additionally, it is unclear how accuracy and precision change across sessions within the same participant or between devices, and how performance is influenced by vision correction. Here, we measured the spatial accuracy and precision of the Vive Pro Eye built-in eye tracker across a range of 30 visual degrees horizontally and vertically. Participants completed ten measurement sessions over multiple days, allowing us to evaluate calibration reliability. Accuracy and precision were highest for central gaze and decreased with greater eccentricity on both axes. Calibration was successful in all participants, including those wearing contacts or glasses, but glasses yielded significantly lower performance. We further found differences in accuracy (but not precision) between two Vive Pro Eye headsets, and estimated participants' inter-pupillary distance. Our metrics suggest high calibration reliability and can serve as a baseline for expected eye-tracking performance in VR experiments.
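One way to express the eccentricity analysis in code is to bin angular gaze errors by each target's angular distance from straight ahead. The sketch below uses synthetic placeholders, not the Vive Pro Eye data.

```python
# Bin angular gaze error by target eccentricity to show the center-to-
# periphery falloff. Targets and errors are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
targets = rng.uniform(-15, 15, size=(300, 2))        # target (x, y) in degrees
ecc = np.linalg.norm(targets, axis=1)                # eccentricity from center
errors = 0.5 + 0.05 * ecc + rng.normal(0, 0.2, 300)  # synthetic angular error

edges = [0, 5, 10, 15, 25]
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (ecc >= lo) & (ecc < hi)
    print(f"{lo:>2}-{hi:<2} deg: mean error {errors[sel].mean():.2f} deg")
```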
Project description: Intralingual translation has long been peripheral to empirical studies of translation. Considering its many similarities with interlingual translation (often described as translation proper), we adopted eye-tracking technology to investigate the cognitive processes involved in translation and paraphrase, an exemplification of intralingual translation. Twenty-four postgraduate students performed four types of tasks (Chinese paraphrase, English-Chinese translation, English paraphrase, Chinese-English translation) on source texts (ST) of different genres. Their eye movements were recorded to analyze cognitive effort and attention distribution patterns. The results demonstrated that: (1) translation elicited significantly greater cognitive effort than paraphrase; (2) differences between translation and paraphrase in cognitive effort were modulated by text genre and target language; and (3) translation and paraphrase did not differ strikingly in attention distribution. This process-oriented study confirmed the higher cognitive effort of interlingual translation, likely due to the additional complexity of bilingual transfer, and revealed significant modulating effects of text genre and target language.
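As a sketch of how the main task contrast could be tested, the code below runs a paired comparison of each participant's mean fixation duration in translation versus paraphrase trials; the study's full design also crosses genre and target language, which this simplification ignores. Data are synthetic.

```python
# Paired comparison of per-participant mean fixation durations (ms) between
# translation and paraphrase tasks. Synthetic data; the real design also
# crosses text genre and target language.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
paraphrase = rng.normal(230, 25, 24)               # 24 postgraduate students
translation = paraphrase + rng.normal(35, 20, 24)  # translation costs more

t, p = ttest_rel(translation, paraphrase)
print(f"t(23) = {t:.2f}, p = {p:.4f}")
```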
Project description: The present study extends eye-tracking-while-reading research toward less-studied languages of different typological classes (polysynthetic Adyghe vs. synthetic Russian) that use a Cyrillic script. In corpus reading data from the two languages, we confirmed the widely studied effects of word frequency and word length on eye movements in Adyghe-Russian bilinguals for both languages. We also confirmed morphological effects in Adyghe reading (part-of-speech class and the number of lexical affixes) previously shown in some morphologically rich languages. Importantly, we demonstrated that bilinguals' reading in Adyghe differs from their reading in Russian both quantitatively (an effect of language on reading times) and qualitatively (different effects of landing position and of the previous/upcoming word on eye movements within the current word).
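The frequency and length effects referred to here are conventionally estimated by regressing a word's reading-time measure on its log frequency and length. A minimal sketch with synthetic data and hypothetical column names (real corpus analyses typically use mixed models with participant and item random effects rather than plain OLS):

```python
# Regress gaze duration on log word frequency and word length; synthetic
# data, hypothetical column names. Real analyses usually use (G)LMMs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1000
words = pd.DataFrame({
    "log_freq": rng.normal(3.0, 1.0, n),
    "length": rng.integers(2, 12, n),
})
words["gaze_ms"] = (260 - 20 * words["log_freq"] + 12 * words["length"]
                    + rng.normal(0, 40, n))

fit = smf.ols("gaze_ms ~ log_freq + length", data=words).fit()
print(fit.params)   # expect a negative frequency slope, positive length slope
```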
Project description: The question of how an ambiguous word is processed in context has long been studied in psycholinguistics, and the present study examined it further by investigating the spoken-word recognition of Cantonese homophones (a common type of ambiguous word) in context. Sixty native Cantonese listeners participated in an eye-tracking experiment. Listeners were instructed to listen carefully to a sentence ending with a Cantonese homophone while looking at visual probes (either Chinese characters or line-drawing pictures) presented simultaneously on a computer screen. Two findings emerged. First, sentence context exerted an early effect on homophone processing. Second, visual probes serving as phonological competitors had only a weak effect on spoken-word recognition. Consistent with previous studies, the eye-movement patterns appeared to support an interactive-processing account of homophone recognition.
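Visual-world studies like this one are typically analyzed as the proportion of fixations on each probe type in successive time bins after word onset. A minimal sketch with a hypothetical sample layout (one row per gaze sample, labeled with the probe currently fixated):

```python
# Proportion of gaze samples on each probe type per time bin after word
# onset. The sample layout (time/probe columns) is hypothetical.
import pandas as pd

def fixation_proportions(samples, bin_ms=50):
    """Proportion of samples on each probe type within bin_ms time bins."""
    binned = samples.assign(bin=(samples["time_ms"] // bin_ms) * bin_ms)
    counts = binned.groupby(["bin", "probe"]).size().unstack(fill_value=0)
    return counts.div(counts.sum(axis=1), axis=0)

samples = pd.DataFrame({
    "time_ms": [20, 70, 70, 120, 130, 180],
    "probe": ["competitor", "competitor", "target",
              "target", "target", "target"],
})
print(fixation_proportions(samples))
```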