1. Visual interpretation of clinical images among dental students using eye-tracking technology. J Dent Educ 2024; 88:606-613. PMID: 38445708. DOI: 10.1002/jdd.13482.
Abstract
BACKGROUND Tele-consultations are increasingly used for screening and diagnosis, yet only a few studies have assessed dental students' visual attention to clinical images. AIM To (i) determine dental students' gaze behavior, visual fixations, and diagnostic competence while viewing clinical images, and (ii) explore opportunities to strengthen teaching-learning approaches. DESIGN In this cross-sectional study, a Tobii Pro Nano device captured eye-tracking data from 65 dental undergraduate students. Predetermined areas of interest (AOI) for all five clinical photographs were uploaded onto the Tobii software. All participants used a think-aloud protocol with no restriction on viewing time. RESULTS A total of 325 clinical pictures were analyzed, and the average viewing time was 189.25 ± 76.90 s. Most participants started at the center of the image (three frontal photos), spent a significant share of their viewing time on prominent findings, did not follow a systematic pattern, and showed limited diagnostic competence. For the remaining two pictures, most participants followed a "Z" viewing pattern (oscillating movement from left to right). CONCLUSIONS Participants frequently fixated on the prominent AOI but failed to make the correct diagnosis, and their viewing patterns revealed no sequential viewing. Emphasizing knowledge of common dental abnormalities and encouraging full coverage of clinical pictures can therefore improve dental students' diagnostic competence and viewing patterns.
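The AOI analysis described above reduces, at its core, to summing fixation durations inside predefined image regions. A minimal sketch of that computation; the function name, rectangular AOI format, and millisecond units are illustrative assumptions, not the study's actual pipeline (which used the Tobii software):

```python
def dwell_share(fixations, aois):
    """Fraction of total fixation time spent inside each named AOI.

    fixations: iterable of (x, y, duration_ms) tuples
    aois: dict mapping AOI name -> (x0, y0, x1, y1) bounding box
    """
    total = sum(dur for _, _, dur in fixations)
    share = {name: 0.0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                share[name] += dur
    return {name: t / total for name, t in share.items()}
```

For example, three fixations of 300, 100, and 100 ms, with the first and third landing inside a hypothetical "lesion" AOI, give that AOI a dwell share of 0.8.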

2. Autobiographical Implicit Association Test and eye movements: fixations topography enables detection of autobiographical memories. Front Psychol 2024; 15:1268256. PMID: 38348265. PMCID: PMC10859496. DOI: 10.3389/fpsyg.2024.1268256.
Abstract
Introduction Autobiographical memory is the capacity to recollect memories of personally experienced events. The detection of such memories plays a key role in criminal trials. Among behavioral memory-detection methods, the autobiographical Implicit Association Test (aIAT) has gained popularity for its flexibility and suitability for forensic applications. The aIAT is a reaction time-based methodology aiming to assess whether information about an event is encoded in the respondent's mind. Here, we introduced the eye-D index, a measure based on the topography of fixations while performing the aIAT, as an additional measure to detect autobiographical memories covertly. Methods In this study, participants were involved in a mock-crime experiment in which they could act as Guilty or Innocent. One week later all participants underwent the aIAT combined with eye-tracking to investigate the presence of the crime-related memory. Results Guilty participants showed a higher number of fixations towards the category labels in the block in which true sentences shared the same response key with crime-related sentences, as compared to the block in which true sentences were paired with sentences describing an alternative version. Innocent participants showed the opposite pattern. This unbalanced allocation of attention to the category labels was quantified by the eye-D index and was found to be highly correlated to the standard aIAT-D index. Discussion This suggests that more fixations to the category labels could indicate increased cognitive load and monitoring of response conflicts. These preliminary results highlight eye-tracking as a tool to detect autobiographical memories covertly while performing the aIAT.

3. Visual Exploration While Walking With and Without Visual Cues in Parkinson's Disease: Freezer Versus Non-Freezer. Neurorehabil Neural Repair 2023; 37:734-743. PMID: 37772512. PMCID: PMC10666478. DOI: 10.1177/15459683231201149.
Abstract
BACKGROUND Visual cues can improve gait in Parkinson's disease (PD), including those experiencing freezing of gait (FOG). However, responses are variable and underpinning mechanisms remain unclear. Visuo-cognitive processing (measured through visual exploration) has been implicated in cue response, but this has not been comprehensively examined. OBJECTIVE To examine visual exploration and gait with and without visual cues in PD who do and do not self-report FOG, and healthy controls (HC). METHODS 17 HC, 21 PD without FOG, and 22 PD with FOG walked with and without visual cues, under single and dual-task conditions. Visual exploration (ie, saccade frequency, duration, peak velocity, amplitude, and fixation duration) was measured via mobile eye-tracking and gait (ie, gait speed, stride length, foot strike angle, stride time, and stride time variability) with inertial sensors. RESULTS PD had impaired gait compared to HC, and dual-tasking made gait variables worse across groups (all P < .01). Visual cues improved stride length, foot strike angle, and stride time in all groups (P < .01). Visual cueing also increased saccade frequency, but reduced saccade peak velocity and amplitude in all groups (P < .01). Gait improvement related to changes in visual exploration with visual cues in PD but not HC, with relationships dependent on group (FOG vs non-FOG) and task (single vs dual). CONCLUSION Visual cues improved visual exploration and gait outcomes in HC and PD, with similar responses in freezers and non-freezers. Freezer and non-freezer specific associations between cue-related changes in visual exploration and gait indicate different underlying visuo-cognitive processing within these subgroups for cue response.

4. Extracting decision-making features from the unstructured eye movements of clinicians on glaucoma OCT reports and developing AI models to classify expertise. Front Med (Lausanne) 2023; 10:1251183. PMID: 37841006. PMCID: PMC10571140. DOI: 10.3389/fmed.2023.1251183.
Abstract
This study investigated the eye movement patterns of ophthalmologists with varying expertise levels during the assessment of optical coherence tomography (OCT) reports for glaucoma detection. Objectives included evaluating eye gaze metrics and patterns as a function of ophthalmic education, deriving novel features from eye tracking, and developing binary classification models for disease detection and expertise differentiation. Thirteen ophthalmology residents, fellows, and clinicians specializing in glaucoma participated. Junior residents had less than 1 year of experience, senior residents had 2-3 years, and the expert group consisted of fellows and faculty with 3 to 30+ years of experience. Each participant was presented with a set of 20 Topcon OCT reports (10 healthy and 10 glaucomatous) and was asked to determine the presence or absence of glaucoma and to rate their diagnostic confidence. Eye movements were recorded with a Pupil Labs Core eye tracker while participants diagnosed the reports. Expert ophthalmologists exhibited more refined and focused fixations, particularly on specific regions of the OCT reports such as the retinal nerve fiber layer (RNFL) probability map and the circumpapillary RNFL B-scan. Binary classification models built on the derived features reached accuracies of up to 94.0% in differentiating expert from novice clinicians. The derived features and trained models hold promise for improving the accuracy of glaucoma detection and for distinguishing between expert and novice ophthalmologists, with implications for ophthalmic education and the development of effective diagnostic tools.

5. Predicting Visual Fixations. Annu Rev Vis Sci 2023; 9:269-291. PMID: 37419107. DOI: 10.1146/annurev-vision-120822-072528.
Abstract
As we navigate and behave in the world, we are constantly deciding, a few times per second, where to look next. The outcomes of these decisions in response to visual input are comparatively easy to measure as trajectories of eye movements, offering insight into many unconscious and conscious visual and cognitive processes. In this article, we review recent advances in predicting where we look. We focus on evaluating and comparing models: How can we consistently measure how well models predict eye movements, and how can we judge the contribution of different mechanisms? Probabilistic models facilitate a unified approach to fixation prediction that allows us to use information gain to compare different models across different settings, such as static and video saliency, as well as scanpath prediction. We review how the large variety of saliency-map and scanpath models can be translated into this unifying framework, how much different factors contribute, and how we can select the most informative examples for model comparison. We conclude that the universal scale of information gain offers a powerful tool for inspecting candidate mechanisms and experimental designs, helping us understand the continual decision-making process that determines where we look.
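The information-gain comparison discussed in this review can be illustrated with a toy computation: the average log-likelihood advantage, in bits per fixation, of a probabilistic saliency model over a baseline density, evaluated at the fixated pixels. A sketch under the assumption that both density arrays are normalized over the image (all names here are illustrative):

```python
import numpy as np

def information_gain(model_density, baseline_density, fixations):
    """Mean log-likelihood difference (bits/fixation) of a saliency model
    over a baseline, evaluated at the fixated pixels. Both density arrays
    must be nonnegative and sum to 1 over the image."""
    rows, cols = zip(*fixations)
    rows, cols = np.asarray(rows), np.asarray(cols)
    gain = np.log2(model_density[rows, cols]) - np.log2(baseline_density[rows, cols])
    return float(gain.mean())
```

On a 2x2 "image" with a uniform baseline (0.25 per pixel), a model assigning probability 0.7 to the fixated pixel gains log2(0.7/0.25), about 1.49 bits per fixation; fixations landing where the model assigns low probability yield a negative gain.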

6. Peripheral Visual Information Halves Attentional Choice Biases. Psychol Sci 2023; 34:984-998. PMID: 37470671. DOI: 10.1177/09567976231184878.
Abstract
A growing body of research has shown that simple choices involve the construction and comparison of values at the time of decision. These processes are modulated by attention in a way that leaves decision makers susceptible to attentional biases. Here, we studied the role of peripheral visual information in the choice process and in attentional choice biases. We used an eye-tracking experiment in which participants (N = 50 adults) made binary choices between food items that were displayed in marked screen "shelves" in two conditions: (a) where both items were displayed, and (b) where items were displayed only when participants fixated within their shelves. We found that removing the nonfixated option approximately doubled the size of the attentional biases. The results show that peripheral visual information is crucial in facilitating good decisions and suggest that individuals may be unduly influenced by settings in which only one item is shown at a time, such as e-commerce.

7. Optical and motor changes associated with lighting and near vision tasks in electronic devices. J Eye Mov Res 2023; 16. PMID: 38035033. PMCID: PMC10684330. DOI: 10.16910/jemr.16.2.3.
Abstract
Purpose: To assess optical and motor changes associated with near-vision reading under different controlled lighting conditions on two types of electronic screens. Methods: Twenty-four healthy subjects with a mean age of 22.9 ± 2.3 years (range 18-33) participated in this study. An iPad and an e-ink reader were used to present calibrated text; each task lasted 5 minutes, and both the ambient illuminance level and the luminance of the screens were evaluated. Results: Eye-tracker data revealed a higher number of saccadic eye movements under minimum luminance than under maximum luminance, with statistically significant differences for both the iPad (p = 0.016) and the e-ink reader (p = 0.002). Saccade length was also greater at the minimum luminance level for both devices: 6.2 ± 2.8 mm vs. 8.2 ± 4.2 mm (e-ink, max vs. min) and 6.8 ± 2.9 mm vs. 7.6 ± 3.6 mm (iPad, max vs. min), and blink rate increased significantly under lower lighting conditions. Conclusions: Reading on electronic devices is strongly influenced by both screen configuration and ambient lighting, whereas only small, transient differences in visual quality were found in healthy young people.

8. 'Looking before and after': Can simple eye tracking patterns distinguish poetic from prosaic texts? Front Psychol 2023; 14:1066303. PMID: 36777211. PMCID: PMC9909270. DOI: 10.3389/fpsyg.2023.1066303.
Abstract
Introduction The study of 'serious' literature has recently developed into an emerging field called neurocognitive poetics, which applies cognitive-neuroscientific techniques to examine how we understand and appreciate poetry. The current research used eye-tracking on a small sample of young adults to see if and how the reading of short pieces of poetry differed from the reading of matched prosaic texts. Methods With 'proof of concept' intentions reflecting arguments first proposed by 19th-century literary figures, the focus was on differences between the reading of poetry and prose in terms of the number and frequency of fixations and of regressive eye movements back and forth within the texts, in a two-by-two experimental design (poetry vs. prose × need vs. no need for final-line reappraisal). Results Poetic pieces were associated with more and longer fixations and more regressive eye movements throughout the text than prosaic pieces. The need to reappraise meaning at the prompt of a final line was significantly associated only with more regressive eye movements. Comparisons of the four text conditions (poetic reappraisal, poetic non-reappraisal, prosaic reappraisal, and prosaic non-reappraisal) showed that the poetic reappraisal condition involved significantly more regressive eye movements and longer fixations than the prosaic non-reappraisal condition. No significant correlations were found between self-reported literary familiarity and eye-tracking patterns. Discussion Despite limitations, this proof-of-concept study provides insights into reading patterns that help to define objectively the nature of poetic material: it requires slower reading, characterised by more and longer fixations and more backward eye movements through the text, compared with the faster, more linear reading of prose. Future research using these and other psychophysiological metrics can begin to unpack the putative cognitive benefits of reading literary material.

9. Review and Evaluation of Eye Movement Event Detection Algorithms. Sensors (Basel) 2022; 22:8810. PMID: 36433407. PMCID: PMC9699548. DOI: 10.3390/s22228810.
Abstract
Eye tracking is a technology aimed at understanding the direction of the human gaze. Event detection is the process of detecting and classifying eye movements, which fall into several types. Nowadays, event detection is almost exclusively done by applying a detection algorithm to the raw recorded eye-tracking data. However, because there is no standard procedure for performing evaluations, evaluating and comparing detection algorithms on eye-tracking signals is very challenging. In this paper, we used data from an SMI HiSpeed 1250 high-speed eye tracker and compared event detection performance. The evaluation focused on the classification of fixations, saccades, and post-saccadic oscillations, using sample-by-sample comparisons between algorithms and measuring agreement between algorithms and human coders. The impact of varying threshold values on threshold-based algorithms was examined and optimum threshold values were determined. Unlike previous evaluations, this one used the same dataset to evaluate both the event detection algorithms and the human coders. We evaluated and compared threshold-based, machine-learning, and deep-learning event detection algorithms. The results show that all methods perform well for fixation and saccade detection; however, there are substantial differences in the classification results. Generally, CNN (convolutional neural network) and RF (random forest) algorithms outperform threshold-based methods.
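As a concrete instance of the threshold-based family evaluated in this paper, an identification-by-velocity-threshold (I-VT) classifier labels each sample by comparing point-to-point gaze speed against a single cutoff. A minimal sketch; the function name and the 30 deg/s default are illustrative assumptions, not the paper's implementation:

```python
import math

def classify_ivt(samples, sampling_rate_hz, velocity_threshold=30.0):
    """Label each gaze sample 'fixation' or 'saccade' with the I-VT rule:
    point-to-point angular speed (deg/s) above the threshold -> saccade.

    samples: list of (x, y) gaze positions in degrees of visual angle
    """
    labels = ["fixation"]  # no velocity estimate for the first sample
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        speed = math.hypot(x1 - x0, y1 - y0) * sampling_rate_hz
        labels.append("saccade" if speed > velocity_threshold else "fixation")
    return labels
```

Real detectors add smoothing, minimum-duration constraints, and post-saccadic-oscillation handling, which is precisely where the algorithms compared in the paper diverge.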

10.
Abstract
Central and peripheral vision carry out different functions during scene processing. The ambient mode of visual processing is more likely to involve peripheral visual processes, whereas the focal mode of visual processing is more likely to involve central visual processes. Although the ambient mode is responsible for navigating space and comprehending scene layout, the focal mode gathers detailed information as central vision is oriented to salient areas of the visual field. Previous work suggests that during the time course of scene viewing, there is a transition from ambient processing during the first few seconds to focal processing during later time intervals, characterized by longer fixations and shorter saccades. In this study, we identify the influence of central and peripheral vision on changes in eye movements and the transition from ambient to focal processing during the time course of scene processing. Using a gaze-contingent protocol, we restricted the visual field to central or peripheral vision while participants freely viewed scenes for 20 seconds. Results indicated that fixation durations are shorter when vision is restricted to central vision compared to normal vision. During late visual processing, fixations in peripheral vision were longer than those in central vision. We show that a transition from more ambient to more focal processing during scene viewing will occur even when vision is restricted to only central vision or peripheral vision.

11. Altered eye movements during reading under degraded viewing conditions: Background luminance, text blur, and text contrast. J Vis 2022; 22:4. PMID: 36069942. PMCID: PMC9465940. DOI: 10.1167/jov.22.10.4.
Abstract
Degraded viewing conditions caused by either natural environments or visual disorders lead to slow reading. Here, we systematically investigated how eye movement patterns during reading are affected by degraded viewing conditions in terms of spatial resolution, contrast, and background luminance. Using a high-speed eye tracker, binocular eye movements were obtained from 14 young normally sighted adults. Images of text passages were manipulated with varying degrees of background luminance (1.3-265 cd/m2), text blur (severe blur to no blur), or text contrast (2.6%-100%). We analyzed changes in key eye movement features, such as saccades, microsaccades, regressive saccades, fixations, and return-sweeps, across the different viewing conditions. No significant changes were observed over the range of tested background luminance values. However, with increasing text blur and decreasing text contrast, we observed a significant decrease in saccade amplitude and velocity, as well as a significant increase in fixation duration, number of fixations, proportion of regressive saccades, microsaccade rate, and duration of return-sweeps. Saccade amplitude, fixation duration, and proportion of regressive saccades turned out to be the most significant contributors to reading speed, together accounting for 90% of the variance in reading speed. Together, our results showed that eye movement patterns during reading are altered accordingly under degraded viewing conditions. These findings suggest that the seemingly deviant eye movements observed in individuals with visual impairments may in part result from active and optimal information-acquisition strategies adopted when visual sensory input is substantially degraded.
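The "90% of variance" figure comes from an ordinary least-squares fit of reading speed on the eye-movement features. A self-contained sketch of that computation on synthetic data; the coefficients, units, and noise level below are invented for illustration and do not reproduce the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors analogous to the three reported features
saccade_amp = rng.normal(2.0, 0.5, n)        # degrees
fixation_dur = rng.normal(250.0, 40.0, n)    # ms
prop_regressive = rng.uniform(0.05, 0.40, n)

# Invented generative model for reading speed (words per minute)
speed = (200.0 + 40.0 * saccade_amp - 0.5 * fixation_dur
         - 100.0 * prop_regressive + rng.normal(0.0, 5.0, n))

# OLS fit and proportion of variance explained (R^2)
X = np.column_stack([np.ones(n), saccade_amp, fixation_dur, prop_regressive])
beta, *_ = np.linalg.lstsq(X, speed, rcond=None)
r_squared = 1.0 - np.var(speed - X @ beta) / np.var(speed)
```

With this synthetic setup the three features recover most of the variance in the simulated speeds; in the study the analogous statistic was 90%.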

12. Change Blindness in Adolescents With Attention-Deficit/Hyperactivity Disorder: Use of Eye-Tracking. Front Psychiatry 2022; 13:770921. PMID: 35295775. PMCID: PMC8918561. DOI: 10.3389/fpsyt.2022.770921.
Abstract
OBJECTIVE This study investigated detection of changes of central or marginal interest in images, using a change-blindness paradigm with eye tracking. METHOD Eighty-four drug-naïve adolescents [44 with attention-deficit/hyperactivity disorder (ADHD), 40 controls with typical development] searched for a change in 36 pairs of original and modified images, presented in rapid alternation, with an item of central or marginal interest present or absent. Collected measures were detection rate, response time, and gaze fixation duration, latency, and dispersion. RESULTS Both groups' change-detection times were similar, with no speed-accuracy trade-off. No between-group differences were found in time to first fixation, fixation duration, or scan paths. Both groups performed better for items of central interest. The ADHD group showed greater fixation dispersion in scan paths for both central- and marginal-interest items. CONCLUSION The greater gaze dispersion may lead to greater fatigue in tasks that require sustained attention.

13. Eye movements in frontotemporal dementia: Abnormalities of fixation, saccades and anti-saccades. Alzheimers Dement (N Y) 2021; 7:e12218. PMID: 35005203. PMCID: PMC8719345. DOI: 10.1002/trc2.12218.
Abstract
INTRODUCTION Oculomotor function has not been systematically studied in frontotemporal dementia (FTD) and yet may offer a simple target to monitor disease activity. METHODS We assessed fixation stability, smooth pursuit, pro-saccades, and anti-saccades using the Eyelink 1000-plus eye-tracker in 19 individuals with behavioral variant FTD (bvFTD) and 22 controls. Neuroanatomical correlates were assessed using a region of interest magnetic resonance imaging (MRI) analysis. RESULTS Measures of fixation stability were impaired in the bvFTD group compared with controls. However, performance did not differ from controls in the pro-saccade tasks except in the vertical overlap condition. The bvFTD group performed worse in the anti-saccade task, which correlated strongly with executive function. Neural correlates included the orbitofrontal and ventromedial prefrontal cortices and striatum for fixation stability, and the dorsolateral prefrontal and parietal cortices and striatum for anti-saccades. DISCUSSION Overall, oculomotor function is abnormal in bvFTD, with performance likely related to impairment of inhibitory control and executive dysfunction.

14. Combining fMRI and Eye-tracking for the Study of Social Cognition. Neurosci Insights 2021; 16:26331055211065497. PMID: 34950876. PMCID: PMC8689432. DOI: 10.1177/26331055211065497.
Abstract
The study of social cognition with functional magnetic resonance imaging (fMRI) affords the use of complex stimulus material. Visual attention to distinct aspects of these stimuli can result in the involvement of remarkably different neural systems. Usually, the influence of gaze on the neural signal is either disregarded or dealt with by controlling participants' gaze through instructions or tasks. However, behavioral restrictions like this limit a study's ecological validity. It would therefore be preferable if participants looked freely at the stimuli while their gaze traces are measured, yet several impediments hamper the combination of fMRI and eye-tracking. In our recent work on neural Theory of Mind processes in alexithymia, we propose a simple way of integrating dwell time on specific stimulus features into general linear models of fMRI data. By parametrically modeling fixations, we were able to distinguish neural processes associated with the specific stimulus features looked at. Here, I discuss the opportunities and obstacles of this approach in more detail. My goal is to motivate wider use of parametric models, which are implemented in common fMRI software packages, to combine fMRI and eye-tracking data.
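The parametric-modulation idea sketched in this abstract (scaling each event by its dwell time before convolution with a haemodynamic response function) can be illustrated in a few lines. The double-gamma HRF shape is a generic textbook choice and all names are illustrative, not the authors' actual pipeline:

```python
import math
import numpy as np

def gamma_pdf(t, shape):
    """Gamma density with unit scale, used to build a double-gamma HRF."""
    return t ** (shape - 1) * np.exp(-t) / math.gamma(shape)

def hrf(tr, length=32.0):
    """Canonical-style double-gamma haemodynamic response function."""
    t = np.arange(0.0, length, tr)
    return gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0

def dwell_time_regressor(onsets_s, dwell_s, n_scans, tr):
    """Events scaled by mean-centered dwell time, convolved with the HRF.

    Mean-centering makes the modulated regressor capture dwell-time
    variation over and above the unmodulated event regressor."""
    modulator = np.asarray(dwell_s, float)
    modulator = modulator - modulator.mean()
    ts = np.zeros(n_scans)
    for onset, m in zip(onsets_s, modulator):
        ts[int(round(onset / tr))] += m
    return np.convolve(ts, hrf(tr))[:n_scans]
```

An event viewed longer than average then produces a positive HRF-shaped bump in the regressor, and a shorter-than-average event a negative one, so the fitted weight reflects how the BOLD response scales with dwell time.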

15. Sensitivity of the Spatial Distribution of Fixations to Variations in the Type of Task Demand and Its Relation to Visual Entropy. Front Hum Neurosci 2021; 15:642535. PMID: 34168543. PMCID: PMC8217447. DOI: 10.3389/fnhum.2021.642535.
Abstract
Ocular activity is known to be sensitive to variations in mental workload, and recent studies have successfully related the distribution of eye fixations to mental load. This study aimed to verify the effectiveness of the spatial distribution of fixations as a measure of mental workload and its sensitivity to different types of demand imposed by a task: mental, temporal, and physical. Two experiments were run. Experiment 1 evaluated the sensitivity of an index of spatial distribution, the Nearest Neighbor Index (NNI), to changes in workload: 30 participants completed a within-subject design in which different types of task demand (mental, temporal, physical) were applied to the game Tetris. Experiment 2 investigated the accuracy of the index through the analysis of 1-min epochs during a visual-spatial task (the "spot the differences" puzzle game); in addition, the NNI was compared to a better-known ocular workload index, the entropy rate. The analysis showed a relation between the NNI and the workload levels imposed by the tasks. In Experiment 1, increased difficulty due to higher temporal demand led to a more dispersed fixation pattern relative to baseline, whereas higher mental demand led to a more grouped pattern. In Experiment 2, the entropy rate and the NNI showed a similar pattern over time, indicating high mental workload after the first minute of activity: the NNI captures the greater presence of fixation clusters, while the entropy rate indicates a more regular and orderly scanpath. Both indices are sensitive to changes in workload and appear to anticipate drops in performance. However, the entropy rate requires predefined areas of interest, making it impossible to apply in dynamic contexts, whereas the NNI works on the entire scanpath and is sensitive to different types of task demand. These results confirm the NNI as a measure applicable to different contexts and support its potential use as a trigger in adaptive systems deployed in high-risk settings, such as control rooms and transportation systems.
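The NNI used here is the Clark-Evans ratio: the mean observed nearest-neighbor distance divided by the distance expected for a spatially random (Poisson) pattern of the same density. A minimal sketch; the function name and unit-free coordinates are illustrative:

```python
import math

def nearest_neighbor_index(points, area):
    """Clark-Evans nearest neighbor index for a set of fixation points.

    NNI < 1 indicates clustered (grouped) fixations, NNI > 1 a more
    dispersed pattern; values near 1 are consistent with randomness.
    """
    n = len(points)
    observed = sum(
        min(math.dist(points[i], points[j]) for j in range(n) if j != i)
        for i in range(n)
    ) / n
    expected = 0.5 * math.sqrt(area / n)  # expectation under a Poisson pattern
    return observed / expected
```

For example, three fixations packed into one corner of a unit viewing area give an index well below 1, matching the "more grouped pattern" the study reports under mental demand.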

16. Beyond the tracked line of sight - Evaluation of the peripheral usable field of view in a simulator setting. J Eye Mov Res 2021; 12. PMID: 34122743. PMCID: PMC8183303. DOI: 10.16910/jemr.12.3.9.
Abstract
Combining advanced gaze tracking systems with the latest vehicle environment sensors opens up new fields of application for driver assistance. Gaze tracking enables researchers to determine the location of a fixation and, taking the visual saliency of the scene into account, to predict visual perception of objects. The perceptual limits for stimulus identification found in the literature have mostly been determined under laboratory conditions, using isolated stimuli with a fixed gaze point on a single screen covering a limited field of view, and they are usually reported as hard limits. Such commonly used limits are therefore not applicable to settings with a wide field of view, natural viewing behavior, and multiple stimuli. Because handling sudden, potentially critical driving maneuvers relies heavily on peripheral vision, peripheral limits for feature perception need to be included in the determined perceptual limits. To analyze human visual perception of different, simultaneously occurring object changes (shape, color, movement), we conducted a driving-simulator study with 50 participants, and we propose a novel way to determine perceptual limits that is more applicable to driving scenarios.

17. Validation of virtual reality system based on eye-tracking technologies to support clinical assessment of glaucoma. Eur J Ophthalmol 2020; 31:3080-3086. PMID: 33233935. DOI: 10.1177/1120672120976047.
Abstract
AIM The aim was to develop and implement a virtual reality tool based on eye-tracking technologies that allows the gaze patterns of glaucoma patients to be characterised, in order to better understand the limitations these patients experience in daily life. SETTING This study took place in the Ophthalmology Department of Hospital Clínico San Carlos, Madrid, Spain. METHODS In total, 56 participants collaborated in the study. They were divided into two groups: 33 glaucoma patients selected by the Ophthalmology Department and a control group of 23 healthy individuals. Both groups completed two virtual tasks while their gaze was monitored. The first, "static" task consisted of two exercises based on the observation of images. The second, "dynamic" task consisted of a virtual driving simulator. Number of fixations, fixation duration, saccade amplitude and velocity, fixations/saccades ratio, total execution time, and task-specific metrics were measured: the total search time for the second exercise of the static task and the number of collisions for the dynamic task. In addition, the dispersion of fixations was analysed. RESULTS For the two exercises of the static task, patients showed significant differences in number of fixations (p = 0.012 in the free-observation exercise), mean saccadic velocity (p = 0.023 and 0.017), fixations/saccades ratio (p = 0.035 and 0.04), and the search and total execution times of the visual-search exercise (p = 0.004 and 0.027, respectively). For the dynamic task, significant differences were found in average saccade amplitude (p = 0.02), average saccade velocity (p = 0.03), and number of collisions (p = 0.02). CONCLUSION The results show that eye-tracking technologies can be used to evaluate the gaze patterns of glaucoma patients and to differentiate them from healthy individuals. However, further studies with a larger cohort of participants and additional tasks are needed.
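The static-task metrics reported here (fixation count, fixation duration, saccade amplitude and velocity, fixations/saccades ratio) are straightforward to derive once raw gaze samples have been parsed into events. A minimal illustrative sketch, assuming events have already been classified; the event types and field names below are ours, not the study's:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    duration_ms: float        # dwell time of the fixation

@dataclass
class Saccade:
    amplitude_deg: float      # angular distance covered
    peak_velocity_dps: float  # degrees per second

def summarize(fixations, saccades):
    """Summary metrics of the kind compared between groups (illustrative)."""
    n_fix, n_sac = len(fixations), len(saccades)
    return {
        "n_fixations": n_fix,
        "mean_fix_duration_ms": sum(f.duration_ms for f in fixations) / n_fix,
        "mean_sac_amplitude_deg": sum(s.amplitude_deg for s in saccades) / n_sac,
        "mean_sac_velocity_dps": sum(s.peak_velocity_dps for s in saccades) / n_sac,
        "fix_sac_ratio": n_fix / n_sac,
    }
```

Group comparisons (e.g. the reported Mann-Whitney or t-type p-values) would then be run on these per-participant summaries.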
Collapse
|
18
|
Absorbing the gaze, scattering looks: Klimt's distinctive style and its two-fold effect on the eye of the beholder. J Eye Mov Res 2020; 13. [PMID: 33828789 PMCID: PMC7962790 DOI: 10.16910/jemr.13.2.8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
Among the most renowned painters of the early twentieth century, Gustav Klimt is often associated, by experts and laymen alike, with a distinctive style of representation: the visual juxtaposition of realistic features and flattened ornamental patterns. Art historical writing suggests that this juxtaposition allows a two-fold experience: the perception of both the realm of art and the realm of life. While Klimt adopted a variety of stylistic choices in his career, this one popularised his work and was hardly ever used by other artists. The following study was designed to observe whether Klimt’s distinctive style causes a specific behaviour in the viewer, at the level of eye movements. Twenty-one portraits were shown to thirty viewers while their eye movements were recorded. The pictures included artworks by Klimt in both his distinctive and non-distinctive styles, as well as by other artists of the same historical period. The recorded data show that only Klimt’s distinctive paintings induce a specific eye-movement pattern with alternating longer (“absorbed”) and shorter (“scattered”) fixations. We therefore claim that there is a behavioural correspondence to what art historical interpretations have so far asserted: the perception of “Klimt’s style” can be described as two-fold also at a physiological level.
Collapse
|
19
|
Microsaccades during high speed continuous visual search. J Eye Mov Res 2020; 13:10.16910/jemr.13.5.4. [PMID: 33828809 PMCID: PMC8009256 DOI: 10.16910/jemr.13.5.4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Here, we provide an analysis of the microsaccades that occurred during continuous visual search and targeting of small faces that we pasted either into cluttered background photos or into a simple gray background. Subjects continuously used their eyes to target singular 3-degree upright or inverted faces in changing scenes. As soon as the participant's gaze reached the target face, a new face was displayed in a different and random location. Regardless of the experimental context (e.g. background scene, no background scene), or target eccentricity (from 4 to 20 degrees of visual angle), we found that the microsaccade rate dropped to near zero levels within only 12 milliseconds after stimulus onset. There were almost never any microsaccades after stimulus onset and before the first saccade to the face. One subject completed 118 consecutive trials without a single microsaccade. However, in about 20% of the trials, there was a single microsaccade that occurred almost immediately after the preceding saccade's offset. These microsaccades were task oriented because their facial landmark targeting distributions matched those of saccades within both the upright and inverted face conditions. Our findings show that a single feedforward pass through the visual hierarchy for each stimulus is likely all that is needed to effectuate prolonged continuous visual search. In addition, we provide evidence that microsaccades can serve perceptual functions like correcting saccades or effectuating task-oriented goals during continuous visual search.
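Microsaccade detection in studies like this is conventionally done with a velocity-threshold algorithm in the spirit of Engbert and Kliegl (2003): thresholds are set as multiples of a robust, median-based velocity spread per axis. The sketch below is our simplified rendering of that standard approach, not the authors' code; the parameters `lam`, `min_samples`, and `max_amp_deg` are conventional choices, not values from the paper:

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3, max_amp_deg=1.0):
    """Velocity-threshold microsaccade detection (simplified Engbert-Kliegl).
    x, y: gaze positions in degrees; fs: sampling rate in Hz.
    Returns a list of (start_index, end_index, amplitude_deg) events."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    # Sample-to-sample velocities in deg/s (central differences)
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    # Robust, median-based velocity spread per axis
    sx = np.sqrt(max(np.median(vx**2) - np.median(vx)**2, 0.0))
    sy = np.sqrt(max(np.median(vy**2) - np.median(vy)**2, 0.0))
    sx = max(sx, 1e-9); sy = max(sy, 1e-9)
    # Candidate samples exceed an elliptic threshold of lam robust SDs
    hot = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
    # Keep runs of candidates that are long enough and small enough in amplitude
    events, start = [], None
    for i, h in enumerate(np.append(hot, False)):
        if h and start is None:
            start = i
        elif not h and start is not None:
            if i - start >= min_samples:
                amp = float(np.hypot(x[i - 1] - x[start], y[i - 1] - y[start]))
                if amp <= max_amp_deg:
                    events.append((start, i - 1, amp))
            start = None
    return events
```

The amplitude ceiling is what separates microsaccades from the large targeting saccades that dominate this continuous-search task.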
Collapse
|
20
|
Abstract
Eye movement of a species reflects the visual behavior strategy that it has adapted to during its evolution. What are eye movements of domestic dogs (Canis lupus familiaris) like? Investigations of dog eye movements per se have not been done, despite the increasing number of visuo-cognitive studies in dogs using eye-tracking systems. To fill this gap, we have recorded dog eye movements using a video-based eye-tracking system, and compared the dog data to that of humans. We found dog saccades follow the systematic relationships between saccade metrics previously shown in humans and other animal species. Yet, the details of the relationships, and the quantities of each metric of dog saccades and fixations differed from those of humans. Overall, dog saccades were slower and fixations were longer than those of humans. We hope our findings contribute to existing comparative analyses of eye movement across animal species, and also to improvement of algorithms used for classifying eye movement data of dogs.
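The "systematic relationships between saccade metrics" referred to here are the main-sequence regularities, e.g. peak velocity growing lawfully with amplitude. One common way to quantify such a relationship, sketched as an assumption about the analysis rather than the authors' actual code, is a power-law fit in log-log space:

```python
import numpy as np

def fit_main_sequence(amplitudes_deg, peak_velocities_dps):
    """Fit the 'main sequence' relating saccade amplitude A to peak velocity V
    as a power law V = k * A**p, via least squares in log-log space.
    Returns (k, p); species differences show up in both parameters."""
    logA = np.log(np.asarray(amplitudes_deg, float))
    logV = np.log(np.asarray(peak_velocities_dps, float))
    p, logk = np.polyfit(logA, logV, 1)  # slope = exponent, intercept = log k
    return float(np.exp(logk)), float(p)
```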
Collapse
|
21
|
Reading English-language haiku: An eye-movement study of the 'cut effect'. J Eye Mov Res 2020; 13. [PMID: 33828786 PMCID: PMC7882062 DOI: 10.16910/jemr.13.2.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The current study, set within the larger enterprise of Neuro-Cognitive Poetics, was designed to examine how readers deal with the ‘cut’ – a more or less sharp semantic-conceptual break – in normative, three-line English-language haiku poems (ELH). Readers were presented with three-line haiku that consisted of two (seemingly) disparate parts, a (two-line) ‘phrase’ image and a one-line ‘fragment’ image, in order to determine how they process the conceptual gap between these images when constructing the poem’s meaning – as reflected in their patterns of reading eye movements. In addition to replicating the basic ‘cut effect’, i.e., the extended fixation dwell time on the fragment line relative to the other lines, the present study examined (a) how this effect is influenced by whether the cut is purely implicit or explicitly marked by punctuation, and (b) whether the effect pattern could be delineated against a control condition of ‘uncut’, one-image haiku. For ‘cut’ vs. ‘uncut’ haiku, the results revealed the distribution of fixations across the poems to be modulated by the position of the cut (after line 1 vs. after line 2), the presence vs. absence of a cut marker, and the semantic-conceptual distance between the two images (context–action vs. juxtaposition haiku). These formal-structural and conceptual-semantic properties were associated with systematic changes in how individual poem lines were scanned at first reading and then (selectively) re-sampled in second- and third-pass reading to construct and check global meaning. No such effects were found for one-image (control) haiku. We attribute this pattern to the operation of different meaning resolution processes during the comprehension of two-image haiku, which are invoked by both form- and meaning-related features of the poems.
Collapse
|
22
|
From lab-based studies to eye-tracking in virtual and real worlds: conceptual and methodological problems and solutions. Symposium 4 at the 20th European Conference on Eye Movement Research (ECEM) in Alicante, 20.8.2019. J Eye Mov Res 2019; 12. [PMID: 33828764 PMCID: PMC7917479 DOI: 10.16910/jemr.12.7.8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Wearable mobile eye trackers have great potential as they allow the measurement of eye movements during daily activities such as driving, navigating the world and doing groceries. Although mobile eye trackers have been around for some time, developing and operating these eye trackers was generally a highly technical affair. As such, mobile eye-tracking research was not feasible for most labs. Nowadays, many mobile eye trackers are available from eye-tracking manufacturers (e.g. Tobii, Pupil Labs, SMI, Ergoneers) and various implementations in virtual/augmented reality have recently been released. The wide availability has caused the number of publications using a mobile eye tracker to increase quickly. Mobile eye tracking is now applied in vision science, educational science, developmental psychology, marketing research (using virtual and real supermarkets), clinical psychology, usability, architecture, medicine, and more. Yet, transitioning from lab-based studies where eye trackers are fixed to the world to studies where eye trackers are fixed to the head presents researchers with a number of problems. These problems range from the conceptual frameworks used in world-fixed and head-fixed eye tracking and how they relate to each other, to the lack of data-quality comparisons and field tests of the different mobile eye trackers, and to how the gaze signal can be classified or mapped to the visual stimulus. Such problems need to be addressed in order to understand how world-fixed and head-fixed eye-tracking research can be compared and to understand the full potential and limits of what mobile eye tracking can deliver. In this symposium, we bring together presenting researchers from five different institutions (Lund University, Utrecht University, Clemson University, Birkbeck University of London and Rochester Institute of Technology) addressing problems and innovative solutions across the entire breadth of mobile eye-tracking research. Hooge, presenting Hessels et al.'s paper, focuses on the definitions of fixations and saccades held by researchers in the eye-movement field and argues that they need to be clarified in order to allow comparisons between world-fixed and head-fixed eye-tracking research. Diaz et al. introduce machine-learning techniques for classifying the gaze signal in mobile eye-tracking contexts where head and body are unrestrained. Niehorster et al. compare data quality of mobile eye trackers during natural behavior and discuss the application range of these eye trackers. Duchowski et al. introduce a method for automatically mapping gaze to faces using computer vision techniques. Pelz et al. employ state-of-the-art techniques to map fixations to objects of interest in the scene video and align grasp and eye-movement data in the same reference frame to investigate the guidance of eye movements during manual interaction. Video stream: https://vimeo.com/357473408
Collapse
|
23
|
What Color Was It? A Psychophysical Paradigm for Tracking Subjective Progress in Continuous Tasks. Perception 2019; 49:21-38. [PMID: 31690183 DOI: 10.1177/0301006619886247] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
When making a sequence of fixations, how does the timing of visual experience compare with the timing of fixation onsets? Previous studies have tracked shifts of attention or perceived gaze direction using self-report methods. We used a similar method, a dynamic color technique, to measure subjective timing in continuous tasks involving fixation sequences. Does the time that observers report reading a word coincide with their fixation on it, or is there an asynchrony, and does this relationship depend on the observer’s task? Observers read sentences that continuously changed in hue and identified the color of a word at the time that they read it using a color palette. We compared responses with a nonreading condition, where observers reproduced their fixations, but viewed nonword stimuli. Results showed a delay between the color of stimuli at fixation onset and the reported color during perception. For nonword tasks, the delay was constant. However, in the reading task, the delay was larger for earlier compared with later words in the sentence. Our results offer a new method for measuring awareness or subjective progress within fixation sequences, which can be extended to other continuous tasks.
Collapse
|
24
|
Development of Open-source Software and Gaze Data Repositories for Performance Evaluation of Eye Tracking Systems. Vision (Basel) 2019; 3:vision3040055. [PMID: 31735856 PMCID: PMC6969935 DOI: 10.3390/vision3040055] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2019] [Revised: 10/10/2019] [Accepted: 10/15/2019] [Indexed: 11/17/2022] Open
Abstract
In this paper, a range of open-source tools, datasets, and software that have been developed for quantitative and in-depth evaluation of eye gaze data quality are presented. Eye tracking systems in contemporary vision research and applications face major challenges due to variable operating conditions such as user distance, head pose, and movements of the eye tracker platform. However, there is a lack of open-source tools and datasets that could be used for quantitatively evaluating an eye tracker’s data quality, comparing performance of multiple trackers, or studying the impact of various operating conditions on a tracker’s accuracy. To address these issues, an open-source code repository named GazeVisual-Lib is developed that contains a number of algorithms, visualizations, and software tools for detailed and quantitative analysis of an eye tracker’s performance and data quality. In addition, a new labelled eye gaze dataset that is collected from multiple user platforms and operating conditions is presented in an open data repository for benchmark comparison of gaze data from different eye tracking systems. The paper presents the concept, development, and organization of these two repositories that are envisioned to improve the performance analysis and reliability of eye tracking systems.
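At the core of such evaluation tools is gaze accuracy expressed as angular error between the reported gaze point and the true target. A minimal sketch of that conventional computation follows; this is not GazeVisual-Lib's actual code, and the simplified geometry assumes the eye lies on the screen normal through the screen origin:

```python
import numpy as np

def angular_error_deg(gaze_px, target_px, screen_px_per_mm, viewing_dist_mm):
    """Visual angle (degrees) between a measured gaze point and the true
    target, both in screen pixels, for a viewer at viewing_dist_mm whose
    eye is on the normal through the screen origin (illustrative geometry)."""
    def to_vec(p):
        mm = np.asarray(p, float) / screen_px_per_mm  # pixels -> millimetres
        return np.array([mm[0], mm[1], viewing_dist_mm])
    g, t = to_vec(gaze_px), to_vec(target_px)
    cosang = (g @ t) / (np.linalg.norm(g) * np.linalg.norm(t))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

Averaging this error over validation targets gives accuracy; its spread over repeated samples gives precision, the other standard data-quality measure.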
Collapse
|
25
|
Abstract
The point of interest in three-dimensional space in eye tracking is often computed by intersecting the lines of sight with scene geometry, or by finding the point closest to the two lines of sight. We first present a theoretical analysis based on synthetic simulations. We show that the mean point of vergence is generally biased for centrally symmetric errors and that the bias depends on the horizontal vs. vertical noise distribution of the tracked eye positions. Our analysis continues with an evaluation on real experimental data. The estimated mean vergence points contain different errors across individuals, but they generally show the same bias towards the observer, which tends to be larger at increased viewing distances. We also provide a recipe to minimize the bias, which applies to general computations of gaze estimation under projection. These findings not only have implications for choosing the calibration method in eye tracking experiments and for interpreting observed eye movement data; they also suggest that the mathematical models of calibration should be considered part of the experiment.
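The "point closest to the two lines of sight" is the midpoint of the common perpendicular between two generally skew 3-D rays. A minimal sketch of that computation (variable names ours):

```python
import numpy as np

def vergence_point(p1, d1, p2, d2):
    """Point closest to two (generally skew) lines of sight.
    p1, p2: eye positions; d1, d2: gaze direction vectors.
    Returns the midpoint of the common perpendicular, or None if parallel."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    p1 = np.asarray(p1, float); p2 = np.asarray(p2, float)
    b = d1 @ d2                     # cosine of the angle between the rays
    w = p1 - p2
    denom = 1.0 - b * b
    if denom < 1e-12:               # parallel gaze rays: no unique solution
        return None
    # Parameters minimizing |(p1 + t1*d1) - (p2 + t2*d2)|
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    return (p1 + t1 * d1 + p2 + t2 * d2) / 2.0
```

Note that the denominator 1 − (d1·d2)² shrinks as the rays approach parallel (i.e. at larger viewing distances), amplifying the effect of angular noise; this is consistent with the distance-dependent bias the study reports.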
Collapse
|
26
|
Visual training could be useful for improving reading capabilities in dyslexia. APPLIED NEUROPSYCHOLOGY-CHILD 2019; 10:199-208. [PMID: 31407599 DOI: 10.1080/21622965.2019.1646649] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
The term dyslexia originated in 1887 when an ophthalmologist described the difficulty of learning to read. After more than a century of research, we still do not know the etiology of this pathology. Several hypotheses have been suggested to explain dyslexia, and in the present article we describe in detail the visual attentional deficits reported in dyslexia. Reading is a complex cognitive process during which several mechanisms are involved (visual perception, eye movements such as saccades and fixations, and semantic and linguistic abilities); consequently, a deficit in any one of these components could impair reading acquisition. In children with dyslexia, abnormal oculomotor patterns are observed during reading: frequent saccades of small amplitude, long fixations, a high number of saccades to the left (retro-saccades), and poor binocular coordination during and after saccades. These results suggest a deficit of visual information processing as well as an immaturity of the interaction between the saccade and vergence systems. In the present review, we discuss different methods that use short periods of visual rehabilitation or text manipulation and that use an eye tracker to obtain objective information on eye-movement performance during reading, in order to improve the reading performance of dyslexic children.
Collapse
|
27
|
Abstract
Previous research indicates that dance expertise affects eye-movement behaviour—dance experts tend to have faster saccades and more tightly clustered fixations than novices when observing dance, suggesting that experts are able to predict movements and process choreographic information more quickly. Relating to this, the present study aimed to explore (1) the effects of expertise on eye movements (as a proxy for attentional focus and the existence of movement-dance schemas) in Indian Bharatanatyam dance, and (2) narrative dance, which is an important component of Bharatanatyam. Fixation durations, dwell times, and fixation-position dispersions were recorded for novices and experts in Bharatanatyam (N = 28) while they observed videos of narrative and non-narrative Bharatanatyam dance. Consistent with previous research, experts had shorter fixation durations and more tightly clustered fixations than novices. Tighter clustering of fixations was also found for narrative dance versus non-narrative. Our results are discussed in relation to previous dance and eye-tracking research.
Collapse
|
28
|
Abstract
The purpose of this study was to examine whether prior biped tennis playing experience results in different visual search strategies compared with no prior biped playing experience. A total of 32 wheelchair (WC) tennis players, 17 males and 15 females, ranked between 1 and 16 on the International Tennis Federation rankings participated in this study. Half the players had prior experience playing tennis as a biped player, and half had no prior experience in biped tennis. The athletes viewed 18 different serves from an expert WC player while their gaze was monitored using eye tracking. Results revealed significant differences between the groups in fixation duration and number of fixations. Differences were also found in fixation locations and durations across biomechanical phases of the serve. The WC-only players had more fixations of shorter duration than the WC-with-biped players in the ritual phase. In the preparatory and execution phases, however, the WC-only players had fewer fixations of longer duration than the WC-with-biped players. Results are discussed in terms of long-term memory structures, learning, and considerations when coaching and training WC tennis players.
Collapse
|
29
|
Processing Information During Regressions: An Application of the Reverse Boundary-Change Paradigm. Front Psychol 2018; 9:1630. [PMID: 30233466 PMCID: PMC6132172 DOI: 10.3389/fpsyg.2018.01630] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2018] [Accepted: 08/14/2018] [Indexed: 11/23/2022] Open
Abstract
Although 10–15% of eye-movements during reading are regressions, we still know little about the information that is processed during regressive episodes. Here, we report an eye-movement study that uses what we call the reverse boundary change technique to examine the processing of lexical-semantic information during regressions, and to establish the role of this information during recovery from processing difficulty. In the critical condition of the experiment, an initially implausible sentence (e.g., There was an old house that John had ridden when he was a boy) was rendered plausible by changing a context word (house) to a lexical neighbor (horse) using a gaze-contingent display change, at the point where the reader's gaze crossed an invisible boundary further on in the sentence. Due to the initial implausibility of the sentence, readers often launched regressions from the later part of the sentence. However, despite this initial processing difficulty, reading was facilitated, relative to a condition where the display change did not occur (i.e., the word house remained on screen throughout the trial). This result implies that the relevant lexical semantic information was processed during the regression, and was used to aid recovery from the initial processing difficulty.
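The gaze-contingent change at the heart of the paradigm can be stated very compactly: once the reader's gaze crosses the invisible boundary, the critical word is swapped and never swapped back. A schematic sketch of that logic (our simplification; real experiments trigger on the eye tracker's sample stream, and the field and word names here are illustrative):

```python
def update_display(gaze_x_px, boundary_x_px, state):
    """Gaze-contingent reverse boundary change: when gaze passes the
    invisible boundary, swap the critical word ('house' -> 'horse') once.
    state is a mutable dict with keys 'word' and 'changed' (illustrative)."""
    if not state["changed"] and gaze_x_px > boundary_x_px:
        state["word"] = "horse"   # render the plausible lexical neighbor
        state["changed"] = True   # the change is permanent for this trial
    return state
```

The control condition simply never calls the swap, so `house` remains on screen throughout the trial.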
Collapse
|
30
|
[Oculomotor activity as an indicator of disturbances in perception and programming in patients with schizotypal disorder]. Zh Nevrol Psikhiatr Im S S Korsakova 2018; 118:50-57. [PMID: 30040801 DOI: 10.17116/jnevro20181186150] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
AIM To consider, from psychopathological and psychological points of view, the patterns of stimulus analysis and task execution under conditions of voluntary and involuntary attention in people with schizotypal disorder. MATERIAL AND METHODS A total of 40 subjects were examined, including 16 patients diagnosed with schizotypal disorder, 13 with personality disorders, and 11 without psychiatric pathology (controls). In the first part of the study, subjects viewed various static images without any instructions, followed by an antisaccade task in vertical and horizontal directions; the last task duplicated the first, but the subjects were asked to remember the presented images as well as possible. RESULTS In the group of patients with schizotypal disorder, an increase in mean and maximum fixation variance, greater variation of this variance between images, and a decrease in scanpath length were found. These differences were associated with the intentional focus of attention and with several indicators of antisaccade task performance. CONCLUSION The changes in fixational eye movements observed in individuals with schizotypal disorder indicate impaired selectivity of information processing and impaired construction of an image of future action. It is suggested that the patients' perceptual strategy is inadequate for simultaneous perception, and that sense-bearing (motivational) disturbances are primary in the programming of saccadic activity.
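Two of the indices used in this study, scanpath length and fixation-position variance, reduce to simple geometry over fixation coordinates. A sketch under stated assumptions: the paper does not define its variance measure precisely, so here it is taken across fixation centres (names and units illustrative):

```python
import numpy as np

def scanpath_metrics(fix_xy):
    """Scanpath length (summed distance between successive fixation centres)
    and fixation-position variance (summed x- and y-variance about the mean).
    fix_xy: (n, 2) array of fixation coordinates."""
    fix_xy = np.asarray(fix_xy, float)
    steps = np.diff(fix_xy, axis=0)
    length = float(np.hypot(steps[:, 0], steps[:, 1]).sum())
    variance = float(fix_xy.var(axis=0).sum())
    return length, variance
```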
Collapse
|
31
|
The impact of music and stretched time on pupillary responses and eye movements in slow-motion film scenes. J Eye Mov Res 2018; 11. [PMID: 33828693 PMCID: PMC7887771 DOI: 10.16910/jemr.11.2.10] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
This study investigated the effects of music and playback speed on arousal and visual perception in slow-motion scenes taken from commercial films. Slow-motion scenes are a ubiquitous and highly popular film technique. Yet the psychological effects of mediated time-stretching compared to real-time motion have not been empirically investigated. We hypothesised that music affects arousal and attentional processes. Furthermore, we assumed that playback speed influences viewers’ visual perception, resulting in a higher number of eye movements and larger gaze dispersion. Thirty-nine participants watched three film excerpts in a repeated-measures design in conditions with or without music and in slow motion vs. adapted real-time motion (both visual-only). Results show that music in slow-motion film scenes leads to higher arousal compared to no music, as indicated by larger pupil diameters in the former. There was no systematic effect of music on visual perception in terms of eye movements. Playback speed influenced visual perception in eye movement parameters such that slow motion resulted in more and shorter fixations as well as more saccades compared to adapted real-time motion. Furthermore, in slow motion there was a higher gaze dispersion and a smaller centre bias, indicating that individuals attended to more detail in slow-motion scenes.
Collapse
|
32
|
Abstract
This work presents a study of an extensive set of 101 categories of eye movement features from three types of eye movement events: fixations, saccades, and post-saccadic oscillations. We present a unified framework of methods for the extraction of features that describe the temporal, positional and dynamic characteristics of eye movements. We perform statistical analysis of feature values by employing eye movement data from a normative population of 298 subjects, recorded during a text reading task. We present overall measures for the central tendency and variability of feature values, and we quantify the test-retest reliability of features using either the Intraclass Correlation Coefficient (for normally distributed and normalized features) or Kendall’s coefficient of concordance (for non-normally distributed features). Finally, for the case of normally distributed and normalized features we additionally perform factor analysis and provide interpretations of the resulting factors. The presented methods and analysis can provide a valuable tool for researchers in various fields that explore eye movements, such as in behavioral studies, attention and cognition research, medical research, biometric recognition, and human-computer interaction.
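Of the two reliability statistics named here, Kendall's coefficient of concordance (W) has a compact closed form. An illustrative sketch for an m-sessions × n-subjects score matrix, assuming no tied scores within a session (the helper and variable names are ours):

```python
import numpy as np

def _ranks(row):
    """1-based ranks of the values in row (assumes no ties)."""
    order = np.argsort(row)
    r = np.empty(len(row))
    r[order] = np.arange(1, len(row) + 1)
    return r

def kendalls_w(ratings):
    """Kendall's coefficient of concordance for an m x n score matrix
    (m repeated sessions, n subjects) for one eye-movement feature.
    W = 1 means perfect test-retest agreement; W = 0 means none."""
    ratings = np.asarray(ratings, float)
    m, n = ratings.shape
    rank_sums = np.vstack([_ranks(row) for row in ratings]).sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # spread of rank totals
    return 12.0 * s / (m ** 2 * (n ** 3 - n))
```

Because W works on within-session ranks, it is insensitive to the non-normal marginal distributions that rule out the ICC for those features.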
Collapse
|
33
|
Comparison of three fixation methods in treatment of tibial fracture in adolescents. ANZ J Surg 2017; 88:E480-E485. [PMID: 29159851 DOI: 10.1111/ans.14258] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Revised: 08/23/2017] [Accepted: 09/05/2017] [Indexed: 11/28/2022]
Abstract
BACKGROUND Tibial fractures are the most common musculoskeletal injury in adolescents, and their optimal management in this age group is controversial. In this study, we compared the outcomes, including complications, of three fixation methods for tibial fractures in adolescents and explored the factors associated with the complications. METHODS A retrospective cohort study of 83 diaphyseal tibial fractures in 79 children and adolescents treated with plate fixation (PF), elastic stable intramedullary nail fixation (ESINF), or external fixation (EF) was conducted. After adjustment for age, weight, energy of the injury, polytrauma, fracture level and pattern, and extent of comminution, treatment outcomes were compared with respect to length of hospital stay, time to union, and complication rates. RESULTS The mean age of the patients was 13.4 years, and their mean weight was 44.2 kg. There was a loss of reduction in two of 33 fractures treated with ESINF and four of 13 treated with EF (P < 0.001). At the time of final follow-up, three patients (two treated with EF and one treated with ESINF) had ≥2.0 cm of shortening. Four of the 32 patients (33 fractures) treated with ESINF underwent a reoperation (two due to loss of reduction and one each because of delayed union and nonunion). Six patients treated with EF required a reoperation (four due to loss of reduction, one for malunion, and one for replacement of a pin complicated by infection). Two fractures treated with PF required refixation owing to nonunion and malunion. A multivariate analysis with adjustment for baseline differences showed that EF was associated with a 7.56-times (95% confidence interval 3.74-29.87) greater risk of loss of reduction and/or malunion than ESINF. CONCLUSIONS All three treatments had satisfactory outcomes, and EF was associated with the highest rate of complications in our series of adolescents treated for a tibial fracture. However, we cannot currently recommend ESINF for all fractures. The choice of fixation will remain influenced by surgeon expertise and experience, patient and fracture characteristics, and patient and family preferences.
Collapse
|
34
|
Gender Classification Based on Eye Movements: A Processing Effect During Passive Face Viewing. Adv Cogn Psychol 2017; 13:232-240. [PMID: 29071007 PMCID: PMC5648518 DOI: 10.5709/acp-0223-1] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2016] [Accepted: 06/19/2017] [Indexed: 01/10/2023] Open
Abstract
Studies have revealed superior face recognition skills in females, partially due to their different eye movement strategies when encoding faces. In the current study, we utilized these slight but important differences and proposed a model that estimates the gender of viewers and classifies them into two subgroups, males and females. An eye tracker recorded participants' eye movements while they viewed images of faces. Regions of interest (ROIs) were defined for each face. Results showed that the gender dissimilarity in eye movements was not due to differences in the frequency of fixations in the ROIs per se. Instead, it was caused by dissimilarity in saccade paths between the ROIs. The difference was enhanced when saccades were directed towards the eyes: females showed a significant increase in transitions from other ROIs to the eyes. Consequently, the extraction of temporal transient information of saccade paths through a transition probability matrix, similar to a first-order Markov chain model, significantly improved the accuracy of the gender classification results.
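The transition probability matrix described here (a first-order Markov model over ROI-to-ROI saccades) can be estimated by simple counting. An illustrative sketch, not the authors' implementation; ROI labels are assumed to be coded as integers:

```python
import numpy as np

def transition_matrix(fixation_rois, n_rois):
    """First-order Markov transition probabilities between ROIs, estimated
    from a sequence of ROI-labelled fixations (ROIs coded 0..n_rois-1).
    Row i gives P(next ROI = j | current ROI = i)."""
    counts = np.zeros((n_rois, n_rois))
    for a, b in zip(fixation_rois, fixation_rois[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows for never-visited ROIs stay all-zero instead of dividing by zero
    return counts / np.maximum(row_sums, 1)
```

A classifier of the kind the study proposes would then use the flattened matrices (per viewer) as feature vectors, with the eyes-directed rows carrying most of the gender signal.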
Collapse
|
35
|
Differential vergence movements in reading Chinese and English: Greater fixation-initial binocular disparity is advantageous in reading the denser orthography. Q J Exp Psychol (Hove) 2017; 71:1-33. [PMID: 28695758 DOI: 10.1080/17470218.2017.1350866] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
We explore two aspects of exovergence: we test whether smaller binocular fixation disparities accompany the shorter saccades and longer fixations observed in reading Chinese; we test whether potentially advantageous psychophysical effects of exovergence (cf. Arnold & Schindel, 2010; Kersten & Murray, 2010) transfer to text reading. We report differential exovergence in reading Chinese and English: Chinese readers begin fixations with more binocular disparity, but end fixations with a disparity closely similar to that of the English readers. We conclude that greater fixation-initial binocular fixation disparity can be adaptive in the reading of visually and cognitively denser text.
Collapse
|
36
|
Human Eye Movements After Viewpoint Shifts in Edited Dynamic Scenes are Under Cognitive Control. Adv Cogn Psychol 2017; 13:128-139. [PMID: 28702093 PMCID: PMC5502321 DOI: 10.5709/acp-0213-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2016] [Accepted: 03/09/2017] [Indexed: 11/23/2022] Open
Abstract
We tested whether viewers have cognitive control over their eye movements after cuts in videos of real-world scenes. In the critical conditions, scene cuts constituted panoramic view shifts: Half of the view following a cut matched the view on the same scene before the cut. We manipulated the viewing task between two groups of participants. The main experimental group judged whether the scene following a cut was a continuation of the scene before the cut. Results showed that following view shifts, fixations were determined by the task from 250 ms until 1.5 s: Participants made more and earlier fixations on scene regions that matched across cuts, compared to nonmatching scene regions. This was evident in comparison to a control group of participants that performed a task that did not require judging scene continuity across cuts, and did not show the preference for matching scene regions. Our results illustrate that viewing intentions can have robust and consistent effects on gaze behavior in dynamic scenes, immediately after cuts.
Collapse
|
37
|
Dissociating Attention and Eye Movements in a Quantitative Analysis of Attention Allocation. Front Psychol 2017; 8:715. [PMID: 28567024 PMCID: PMC5434143 DOI: 10.3389/fpsyg.2017.00715] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2016] [Accepted: 04/21/2017] [Indexed: 11/13/2022] Open
Abstract
In a recent paper, we introduced a method and equation for inferring the allocation of attention on a continuous scale. The size of the stimuli, the estimated size of the fovea, and the pattern of results implied that the subjects' responses reflected shifts in covert attention rather than shifts in eye movements. This report describes an experiment that tests this implication. We measured eye movements. The monitor briefly displayed (e.g., 130 ms) two small stimuli (≈1.0° × 1.2°), situated one atop another. When the stimuli were close together, as in the previous study, fixations that supported correct responses at one stimulus also supported correct responses at the other stimulus, as measured over the entire session. Yet, on any particular trial, correct responses were limited to just one stimulus. This pattern suggests that the constraints on responding within a trial were due to limits on cognitive processing, whereas the ability to respond correctly to either stimulus on different trials must have entailed shifts in attention (that were not accompanied by eye movements). In contrast, when the stimuli were far apart, fixations that had a high probability of supporting correct responses at one stimulus had a low probability of supporting correct responses at the other stimulus. Thus, conditions could be arranged so that correct responses depended on eye movements, whereas in the "standard" procedure, correct responses were independent of eye movements. The results dissociate covert and overt attention and support the claim that our procedure measures covert attention.
Collapse
|
38
|
Abstract
Commonly conflated with sleepiness, fatigue is a distinct multidimensional condition with physical and mental effects. Fatigue in health care providers and any secondary effects on patient care are an important societal concern. As medical image interpretation is highly dependent on visual input, visual fatigue is of particular interest to radiologists. Humans analyze their surroundings with rapid eye movements called saccades, and fatigue decreases saccadic velocity. Oculomotor parameters may, therefore, be an objective and reproducible metric of fatigue and eye movement analysis can provide valuable insight into the etiology of fatigue-related error.
Collapse
|
39
|
A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment. J Neuroeng Rehabil 2016; 13:10. [PMID: 26812907 PMCID: PMC4728792 DOI: 10.1186/s12984-015-0107-4] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2015] [Accepted: 12/08/2015] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. RESULTS Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. CONCLUSIONS The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. 
Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
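The velocity-based thresholding described above belongs to the I-VT (velocity-threshold identification) family of classifiers. A minimal generic sketch, not the paper's exact algorithm, simplified to one angular dimension and with an assumed 30 deg/s threshold, might look like:

```python
def classify_ivt(angles_deg, timestamps_s, saccade_thresh=30.0):
    """Label each inter-sample interval as 'saccade' or 'fixation' from
    angular gaze velocity (1-D simplification of I-VT classification).
    `saccade_thresh` is in deg/s; 30 deg/s is a common but arbitrary choice."""
    labels = []
    for i in range(1, len(angles_deg)):
        dt = timestamps_s[i] - timestamps_s[i - 1]
        velocity = abs(angles_deg[i] - angles_deg[i - 1]) / dt
        labels.append("saccade" if velocity > saccade_thresh else "fixation")
    return labels

# Hypothetical 250 Hz samples: stable gaze, a rapid 5-degree shift, stable gaze.
ts = [i * 0.004 for i in range(6)]
gaze = [10.0, 10.02, 10.01, 15.0, 15.02, 15.01]
labels = classify_ivt(gaze, ts)
```

A full implementation would additionally smooth the velocity signal, handle the vergence component the paper addresses, and merge adjacent same-label intervals into discrete gaze events.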
Collapse
|
40
|
Abstract
The way the cognitive system scans the visual environment for relevant information - visual search in short - has been a long-standing central topic in vision science. From its inception as a research topic, and despite a number of promising alternative perspectives, the study of visual search has been governed by the assumption that a search proceeds on the basis of individual items (whether processed in parallel or not). This has led to the additional assumptions that shallow search slopes (at most a few tens of milliseconds per item for target-present trials) are most informative about the underlying process, and that eye movements are an epiphenomenon that can be safely ignored. We argue that the evidence now overwhelmingly favours an approach that takes fixations, not individual items, as its central unit. Within fixations, items are processed in parallel, and the functional field of view determines how many fixations are needed. In this type of theoretical framework, there is a direct connection between target discrimination difficulty, fixations, and reaction time (RT) measures. It therefore promises a more fundamental understanding of visual search by offering a unified account of both eye movement and manual response behaviour across the entire range of observed search efficiency, and provides new directions for research. A high-level conceptual simulation with just one free and four fixed parameters shows the viability of this approach.
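The fixation-based account sketched above can be illustrated with a toy simulation; the parameter values below are purely illustrative assumptions, not the paper's fitted parameters:

```python
import math

def predicted_rt(set_size, ffov_items, t_fix=0.25, t_base=0.35):
    """Toy fixation-based search model: items are processed in parallel
    within each fixation's functional field of view (FFOV), so reaction
    time grows with the number of fixations needed, not with individual
    items. All parameter values (seconds, items) are illustrative."""
    n_fixations = math.ceil(set_size / ffov_items)
    return t_base + n_fixations * t_fix

# An easy target (large FFOV) yields a shallow RT-by-set-size slope;
# a hard target (small FFOV) yields a steep one.
easy = [predicted_rt(n, ffov_items=8) for n in (4, 8, 16)]
hard = [predicted_rt(n, ffov_items=2) for n in (4, 8, 16)]
```

This reproduces the qualitative point of the abstract: the whole continuum from "flat" to "steep" search slopes falls out of a single mechanism once the functional field of view, rather than the individual item, is the unit of processing.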
Collapse
|
41
|
The influence of object shape and center of mass on grasp and gaze. Front Psychol 2015; 6:1537. [PMID: 26528207 PMCID: PMC4607879 DOI: 10.3389/fpsyg.2015.01537] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2015] [Accepted: 09/22/2015] [Indexed: 11/13/2022] Open
Abstract
Recent experiments examining where participants look when grasping an object found that fixations favor the eventual index finger landing position on the object. Even though the act of picking up an object must involve complex high-level computations such as the visual analysis of object contours, surface properties, knowledge of an object's function and center of mass (COM) location, these investigations have generally used simple symmetrical objects - where COM and horizontal midline overlap. Less research has been aimed at how variations in object properties, such as differences in curvature and changes in COM location, affect visual and motor control. The purpose of this study was to examine grasp and fixation locations when grasping objects whose COM was positioned to the left or right of the objects' horizontal midline (Experiment 1) and objects whose COM was moved progressively further from the midline by altering the object's shape (Experiment 2). Results from Experiment 1 showed that object COM position influenced fixation locations and grasp locations differently, with fixations not as tightly linked to index finger grasp locations as was previously reported with symmetrical objects. Fixation positions were also found to be more central on the non-symmetrical objects. This difference in gaze position may provide a more holistic view, which would allow both index finger and thumb positions to be monitored while grasping. Finally, manipulations of COM distance (Experiment 2) exerted marked effects on the visual analysis of the objects compared to their influence on grasp locations, with fixation locations more sensitive to these manipulations. Together, these findings demonstrate how object features differentially influence gaze vs. grasp positions during object interaction.
Collapse
|
43
|
Emotional signals from faces, bodies and scenes influence observers' face expressions, fixations and pupil-size. Front Hum Neurosci 2013; 7:810. [PMID: 24391567 PMCID: PMC3866922 DOI: 10.3389/fnhum.2013.00810] [Citation(s) in RCA: 92] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2013] [Accepted: 11/07/2013] [Indexed: 11/13/2022] Open
Abstract
We receive emotional signals from different sources, including the face, the whole body, and the natural scene. Previous research has shown the importance of context provided by the whole body and the scene on the recognition of facial expressions. This study measured physiological responses to face-body-scene combinations. Participants freely viewed emotionally congruent and incongruent face-body and body-scene pairs whilst eye fixations, pupil-size, and electromyography (EMG) responses were recorded. Participants attended more to angry and fearful vs. happy or neutral cues, independent of the source and relatively independently of whether the face-body and body-scene combinations were emotionally congruent. Moreover, angry faces combined with angry bodies and angry bodies viewed in aggressive social scenes elicited the greatest pupil dilation. Participants' face expressions matched the valence of the stimuli, but when face-body compounds were shown, the observed facial expression influenced EMG responses more than the posture. Together, our results show that the perception of emotional signals from faces, bodies and scenes depends on the natural context, but when threatening cues are presented, these threats attract attention, induce arousal, and evoke congruent facial reactions.
Collapse
|
44
|
What are you looking at? Moving toward an attentional timeline in insomnia: a novel semantic eye tracking study. Sleep 2013; 36:1491-9. [PMID: 24082308 DOI: 10.5665/sleep.3042] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022] Open
Abstract
STUDY OBJECTIVES To date, cognitive probe paradigms have been used in different guises to obtain reaction time measurements suggestive of an attention bias towards sleep in insomnia. This study adopts a methodology that is novel to sleep research to obtain a continual record of where the eyes, and therefore attention, are being allocated with regard to sleep and neutral stimuli. DESIGN A head-mounted eye tracker (EyeLink II, SR Research, Ontario, Canada) was used to monitor eye movements with respect to two words presented on a computer screen, with one word being a sleep-positive, sleep-negative, or neutral word above or below a second distracter pseudoword. Probability and reaction times were the outcome measures. PARTICIPANTS Sleep group classification was determined by screening interview and PSQI score (> 8 = insomnia, < 3 = good sleeper). MEASUREMENTS AND RESULTS Those individuals with insomnia took longer to fixate on the target word and remained fixated for less time than the good-sleep controls. Word saliency had an effect, with longer first fixations on positive and negative sleep words in both sleep groups and the largest effect sizes in the insomnia group. CONCLUSIONS This overall delay in those with insomnia with regard to vigilance and maintaining attention on the target words moves away from previous attention bias work showing a bias towards sleep stimuli, particularly negative ones, and is instead suggestive of a neurocognitive deficit in line with recent research.
Collapse
|
45
|
Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes. J Vis 2013; 13:13.8.16. [PMID: 23863509 DOI: 10.1167/13.8.16] [Citation(s) in RCA: 70] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion.
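Attentional synchrony of the kind measured above is commonly quantified as the dispersion of multiple viewers' gaze points on each frame; lower dispersion means the viewers' gaze clusters more tightly. The sketch below is a generic illustration with hypothetical gaze coordinates, not the paper's exact measure:

```python
import math

def gaze_dispersion(points):
    """Mean Euclidean distance of viewers' gaze points from their
    centroid for one frame; lower dispersion indicates higher
    attentional synchrony across viewers."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)

# Hypothetical gaze samples (screen pixels) for five viewers on one frame:
# tightly clustered gaze vs. gaze spread across the screen.
clustered = [(640, 360), (645, 355), (638, 362), (642, 358), (641, 361)]
spread = [(100, 100), (1180, 620), (640, 50), (200, 600), (1000, 300)]
```

Comparing the per-frame dispersion time series between free-viewing and task conditions gives the kind of synchrony contrast the abstract reports.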
Collapse
|
46
|
Abstract
Previous studies have demonstrated eye movement abnormalities during smooth pursuit and antisaccade tasks in schizophrenia. However, eye movements have not been investigated during reading. The purpose of this study was to determine whether schizophrenic subjects and their nonsymptomatic first-degree relatives show eye movement abnormalities during reading. Reading rate, number of saccades per line, amplitudes of saccades, percentage of regressions (reverse saccades), and fixation durations were measured using an eye tracker (EyeLink, SensoMotoric Instruments, Germany) in 38 schizophrenic volunteers, 14 nonaffected first-degree relatives, and 57 control volunteers matched for age and National Adult Reading Test scores. Parameters were examined when volunteers read full pages of text and when text was limited to progressively smaller viewing areas around the point of fixation using a gaze-contingent window. Schizophrenic volunteers showed significantly slower reading rates (P = .004), an increase in total number of saccades (P ≤ .001), and a decrease in saccadic amplitude (P = .025) while reading. Relatives showed a significant increase in total number of saccades (P = .013) and a decrease in saccadic amplitude (P = .020). Limiting parafoveal information by reducing the number of visible characters did not change the reading rate of schizophrenic volunteers, but controls showed a significant decrease in reading rate with reduced parafoveal information (P < .001). Eye movement abnormalities during reading in schizophrenic volunteers and their first-degree relatives suggest that visual integration of foveal and parafoveal information may be reduced in schizophrenia. Reading abnormalities in relatives suggest a genetic influence on reading ability in schizophrenia and rule out confounding effects of medication.
Collapse
|
47
|
Perception of face and body expressions using electromyography, pupillometry and gaze measures. Front Psychol 2013; 4:28. [PMID: 23403886 PMCID: PMC3567353 DOI: 10.3389/fpsyg.2013.00028] [Citation(s) in RCA: 101] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2012] [Accepted: 01/11/2013] [Indexed: 11/18/2022] Open
Abstract
Traditional emotion theories stress the importance of the face in the expression of emotions, but bodily expressions are becoming increasingly important as well. In these experiments we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and from emotionally congruent and incongruent face-body compounds. Participants' fixations were measured and their pupil size recorded with eye-tracking equipment, and their facial reactions were measured with electromyography. The results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and, importantly, vice versa. From their facial expressions, it appeared that observers reacted with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus activity) to happy facial expressions. As predicted, we found that angry and fearful cues from the face or the body attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion as facial expressions and that congruency between the emotional signals from the face and body facilitates recognition of the emotion.
Collapse
|
48
|
Coding of saliency by ensemble bursting in the amygdala of primates. Front Behav Neurosci 2012; 6:38. [PMID: 22848193 PMCID: PMC3404502 DOI: 10.3389/fnbeh.2012.00038] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2012] [Accepted: 06/19/2012] [Indexed: 11/28/2022] Open
Abstract
Salient parts of a visual scene attract longer and earlier fixations of the eyes. Saliency is driven by bottom-up (image-dependent) factors and top-down factors such as behavioral relevance, goals, and expertise. It is currently assumed that a saliency map defining eye fixation priorities is stored in neural structures that remain to be determined. Lesion studies support a role for the amygdala in detecting saliency. Here we show that neurons in the amygdala of primates fire differentially when the eyes approach or fixate on behaviorally relevant parts of visual scenes. Ensemble bursting in the amygdala accurately predicts main fixations during the free viewing of natural images. However, fixation prediction is significantly better for faces—where a bottom-up computational saliency model fails—compared to unfamiliar objects and landscapes. On this basis we propose the amygdala as a locus for a saliency map and ensemble bursting as a saliency coding mechanism.
Collapse
|
49
|
Is it reliable to assess visual attention of drivers affected by Parkinson's disease from the backseat? A simulator study. Emerg Health Threats J 2012; 5:EHTJ-5-15343. [PMID: 22461850 PMCID: PMC3290114 DOI: 10.3402/ehtj.v5i0.15343] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/12/2011] [Revised: 01/29/2012] [Accepted: 01/31/2012] [Indexed: 11/26/2022]
Abstract
BACKGROUND Current methods of determining licence retention or cancellation rely on on-road driving tests. Previous research has shown that occupational therapists frequently assess drivers' visual attention while sitting in the back seat on the opposite side of the driver. Since the eyes of the driver are not always visible, assessment by eye contact becomes problematic. Such procedural drawbacks may challenge the validity and reliability of visual attention assessments. The aim of the study was to establish the accuracy, in terms of correctly classified attention, and the inter-rater reliability of back-seat driving assessments of visual attention. Furthermore, by establishing eye contact between the assessor and the driver through an additional mirror on the windscreen, the present study aimed to establish how much such an intervention would enhance the accuracy of the visual attention assessment. METHODS Two drivers with Parkinson's disease (PD) and six control drivers drove a fixed route in a driving simulator while wearing a head-mounted eye tracker. The eye-tracker data showed where foveal visual attention was actually directed. These data were time-stamped and compared with the simultaneous manual scoring of the drivers' visual attention. For four of the drivers, one with Parkinson's disease, a mirror on the windscreen was set up to allow eye contact between the driver and the assessor. Inter-rater reliability was assessed with one of the Parkinson drivers driving, but without the mirror. RESULTS Without the mirror, the overall accuracy was 56% when assessing the three control drivers; with the mirror it was 83%. However, for the PD driver without the mirror the accuracy was 94%, whereas for the PD driver with the mirror the accuracy was 90%. With respect to inter-rater reliability, a 73% agreement was found.
CONCLUSION If the final outcome of a driving assessment depends on the subcategory of a protocol assessing visual attention, we suggest the use of an additional mirror to establish eye contact between the assessor and the driver. Clinicians' on-road observations should not be a standalone component of driving assessments. Instead, eye trackers should be employed for further analysis and correlation in cases where there is doubt about a driver's attention.
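The accuracy and inter-rater figures above are percent-agreement scores over per-interval attention labels; a minimal sketch of the computation follows (the labels below are hypothetical, not the study's data):

```python
def percent_agreement(rater_a, rater_b):
    """Share (in %) of observation intervals on which two raters, or a
    rater and eye-tracker ground truth, assign the same attention label."""
    if len(rater_a) != len(rater_b):
        raise ValueError("label sequences must be the same length")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Hypothetical per-interval labels: where was the driver attending?
tracker  = ["ahead", "left", "ahead", "right", "ahead", "left", "ahead", "ahead"]
assessor = ["ahead", "left", "ahead", "ahead", "ahead", "left", "right", "ahead"]
rate = percent_agreement(tracker, assessor)  # 6 of 8 intervals match: 75.0
```

Note that raw percent agreement does not correct for chance; a statistic such as Cohen's kappa would be the usual next step for inter-rater reliability.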
Collapse
|
50
|
Abstract
Recent advances in the technology affording eye movement recordings carry the risk of neglecting past achievements. Without the assistance of this modern armoury, great strides were made in describing the ways the eyes move. For Aristotle the fundamental features of eye movements were binocular, and he described the combined functions of the eyes. This was later given support using simple procedures like placing a finger over the eyelid of the closed eye and culminated in Hering's law of equal innervation. However, the overriding concern in the 19th century was with eye position rather than eye movements. Appreciating discontinuities of eye movements arose from studies of vertigo. The characteristics of nystagmus were recorded before those of saccades and fixations. Eye movements during reading were described by Hering and by Lamare in 1879; both used similar techniques of listening to sounds made during contractions of the extraocular muscles. Photographic records of eye movements during reading were made by Dodge early in the 20th century, and this stimulated research using a wider array of patterns. In the mid-20th century attention shifted to the stability of the eyes during fixation, with the emphasis on involuntary movements. The contributions of pioneers from Aristotle to Yarbus are outlined.
Collapse
|