1
Specian Junior FC, Litchfield D, Sandars J, Cecilio-Fernandes D. Use of eye tracking in medical education. Medical Teacher 2024; 46:1502-1509. [PMID: 38382474 DOI: 10.1080/0142159x.2024.2316863] [Received: 08/08/2023] [Accepted: 02/06/2024] [Indexed: 02/23/2024]
Abstract
Eye tracking has become increasingly applied in medical education research for studying the cognitive processes that occur during the performance of a task, such as image interpretation and surgical skills development. However, analysis and interpretation of the large amount of data obtained by eye tracking can be confusing. In this article, our intention is to clarify the analysis and interpretation of the data obtained from eye tracking. Understanding the relationship between eye tracking metrics (such as gaze, pupil and blink rate) and cognitive processes (such as visual attention, perception, memory and cognitive workload) is essential. The importance of calibration, and how the limitations of eye tracking can be overcome, are also highlighted.
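Many of the gaze metrics mentioned in this abstract start from segmenting raw samples into fixations. As a rough, illustrative sketch (not from the article), a dispersion-threshold algorithm in Python might look like this; the thresholds and sample data are invented:

```python
# Minimal dispersion-threshold (I-DT-style) fixation detection sketch.
# Thresholds, sampling assumptions, and data are illustrative only.

def dispersion(window):
    """Horizontal plus vertical extent of a window of (x, y) gaze points."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_duration=3):
    """samples: list of (x, y) gaze points in degrees at a fixed rate.
    Returns (start_index, end_index) pairs of detected fixations."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i + min_duration
        if j > n:
            break
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the points stay tightly clustered.
            while j < n and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

gaze = [(0.0, 0.0), (0.1, 0.1), (0.05, 0.0), (0.1, 0.05),  # stable cluster
        (5.0, 5.0),                                          # saccade landing
        (5.1, 5.0), (5.0, 5.1), (5.05, 5.05)]                # stable cluster
print(detect_fixations(gaze))  # → [(0, 3), (4, 7)]
```

Blink rate and pupil measures are typically computed per fixation or per trial on top of a segmentation like this one.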
Affiliation(s)
- John Sandars
- Health Research Institute, Edge Hill University, Ormskirk, UK
- Dario Cecilio-Fernandes
- Department of Medical Psychology and Psychiatry, School of Medical Sciences, University of Campinas, Campinas, São Paulo, Brazil
2
van Boxtel WS, Linge M, Manning R, Haven LN, Lee J. Online Eye Tracking for Aphasia: A Feasibility Study Comparing Web and Lab Tracking and Implications for Clinical Use. Brain Behav 2024; 14:e70112. [PMID: 39469815 PMCID: PMC11519703 DOI: 10.1002/brb3.70112] [Received: 12/06/2023] [Revised: 10/01/2024] [Accepted: 10/05/2024] [Indexed: 10/30/2024]
Abstract
BACKGROUND & AIMS Studies using eye-tracking methodology have made important contributions to the study of language disorders such as aphasia. Nevertheless, in clinical groups especially, eye-tracking studies often include small sample sizes, limiting the generalizability of reported findings. Online, webcam-based tracking offers a potential solution to this issue, but web-based tracking has not previously been compared with in-lab tracking and has never been attempted in groups with language impairments. MATERIALS & METHODS Patients with post-stroke aphasia (n = 16) and age-matched controls (n = 16) completed identical sentence-picture matching tasks in the lab (using an EyeLink system) and on the web (using WebGazer.js), with the order of sessions counterbalanced. We examined whether web-based eye tracking is as sensitive as in-lab eye tracking in detecting group differences in sentence processing. RESULTS Patients were less accurate and slower to respond to all sentence types than controls. Proportions of gazes to the target and foil pictures were computed in 100 ms increments, which showed that the two modes of tracking were comparably sensitive to overall group differences across sentence types. In most analyses, web tracking showed fluctuations in gaze proportions to target pictures comparable to lab tracking, although web data lagged lab data by approximately 500-800 ms. DISCUSSION & CONCLUSIONS Web-based eye tracking is feasible for studying impaired language processing in aphasia and is sensitive enough to detect most group differences between controls and patients. Given that validations of webcam-based tracking are in their infancy and how transformative this method could be for several disciplines, much more testing is warranted.
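The time-binned gaze-proportion analysis described here can be sketched as follows; this is an illustrative stand-in (function name, region labels, and data are invented, not the authors' code):

```python
# Sketch: proportion of gaze samples on the target picture in 100 ms bins,
# mirroring the analysis described in the abstract. Data are illustrative.

def gaze_proportions(samples, bin_ms=100):
    """samples: list of (timestamp_ms, region) tuples, where region is
    'target', 'foil', or None (elsewhere). Returns a dict mapping each
    bin start time to the proportion of in-bin samples on the target."""
    bins = {}
    for t, region in samples:
        start = (t // bin_ms) * bin_ms
        hits, total = bins.get(start, (0, 0))
        bins[start] = (hits + (region == "target"), total + 1)
    return {start: hits / total
            for start, (hits, total) in sorted(bins.items())}

trial = [(0, "foil"), (20, "foil"), (40, "target"), (60, "target"),
         (80, "target"), (100, "target"), (120, "target"), (140, "foil"),
         (160, "target"), (180, "target")]
print(gaze_proportions(trial))  # → {0: 0.6, 100: 0.8}
```

Per-group curves of these proportions over time are what the lab-versus-web delay comparison would be run on.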
Affiliation(s)
- Willem S. van Boxtel
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Department of Communication Sciences and Disorders, Louisiana State University, Baton Rouge, Louisiana, USA
- Michael Linge
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Rylee Manning
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Lily N. Haven
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Jiyeon Lee
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
3
Hilbert LP, Noordewier MK, Seck L, van Dijk WW. Financial scarcity and financial avoidance: an eye-tracking and behavioral experiment. Psychological Research 2024; 88:2211-2220. [PMID: 39158712 PMCID: PMC11522046 DOI: 10.1007/s00426-024-02019-7] [Received: 05/30/2024] [Accepted: 08/05/2024] [Indexed: 08/20/2024]
Abstract
When having less money than needed, people experience financial scarcity. Here, we conducted a laboratory experiment to investigate whether financial scarcity increases financial avoidance - the tendency to avoid dealing with one's finances. Participants completed an incentivized task in which they managed the finances of a household by earning income and paying expenses across multiple rounds. We manipulated participants' financial situation such that they had either sufficient (financial abundance) or insufficient (financial scarcity) financial resources. At the end of each round, participants received an additional expense in the form of a letter. To measure financial avoidance in the form of attentional disengagement, we used an eye tracker and assessed whether participants in the financial scarcity condition avoided looking at the expense letters. As a behavioral measure of financial avoidance, participants had the option to delay the payment of these expenses until the end of the experiment at no additional cost. Results showed no effect of financial scarcity on the eye-tracking measure, but there was an effect on the behavioral measure: participants who experienced financial scarcity were more likely to delay payments. The behavioral finding corroborates the notion that financial scarcity can lead to financial avoidance. We explore potential reasons for the null effect on the eye-tracking measure and discuss how future research can build upon our findings.
Affiliation(s)
- Leon P Hilbert
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands.
- Knowledge Centre Psychology and Economic Behaviour, Leiden, The Netherlands.
- Marret K Noordewier
- Knowledge Centre Psychology and Economic Behaviour, Leiden, The Netherlands
- Department of Social, Economic and Organisational Psychology, Leiden University, Leiden, The Netherlands
- Lisa Seck
- Department of Human Resource Management, Universität Duisburg-Essen Mercator School of Management, Duisburg, Germany
- Wilco W van Dijk
- Knowledge Centre Psychology and Economic Behaviour, Leiden, The Netherlands
- Department of Social, Economic and Organisational Psychology, Leiden University, Leiden, The Netherlands
4
Benedi-Garcia C, Concepcion-Grande P, Chamorro E, Cleva JM, Alonso J. Experimental Method for Identifying Regions of Use of a Progressive Power Lens Using an Eye-Tracker: Validation Study. Life (Basel) 2024; 14:1178. [PMID: 39337961 PMCID: PMC11433045 DOI: 10.3390/life14091178] [Received: 07/30/2024] [Revised: 09/11/2024] [Accepted: 09/13/2024] [Indexed: 09/30/2024]
Abstract
The power distribution of a progressive power lens defines its theoretically usable regions. However, recent studies have demonstrated that these regions are not always used as predicted for certain tasks. This work determines the regions of the lens actually used and compares them with the theoretically located regions. The pupil position of 26 subjects was recorded using an eye-tracking system (Tobii Pro Glasses 3) during distance- and near-reading tasks while they wore a general-use progressive power lens. Subjects were asked to read aloud a text shown on a screen placed at 5.25 m and at 37 cm while looking through the central and lateral regions of the lens. The pupil position was projected onto the back surface of the lens to obtain the actual region of use for each fixation. Results showed that the actual regions of use matched the theoretically located ones. On average, the concordance between the actual and theoretical regions of use was 85% for the distance-reading task and 73% for the near-reading task. In conclusion, the proposed method effectively located the actual regions of lens use, revealing how users' posture affects lens usage. This insight enables the design of more customized progressive lenses based on the areas used during vision-based tasks.
5
Dunn MJ, Alexander RG, Amiebenomo OM, Arblaster G, Atan D, Erichsen JT, Ettinger U, Giardini ME, Gilchrist ID, Hamilton R, Hessels RS, Hodgins S, Hooge ITC, Jackson BS, Lee H, Macknik SL, Martinez-Conde S, Mcilreavy L, Muratori LM, Niehorster DC, Nyström M, Otero-Millan J, Schlüssel MM, Self JE, Singh T, Smyrnis N, Sprenger A. Minimal reporting guideline for research involving eye tracking (2023 edition). Behav Res Methods 2024; 56:4351-4357. [PMID: 37507649 PMCID: PMC11225961 DOI: 10.3758/s13428-023-02187-1] [Accepted: 06/28/2023] [Indexed: 07/30/2023]
Abstract
A guideline is proposed that comprises the minimum items to be reported in research studies involving an eye tracker and human or non-human primate participant(s). This guideline was developed over a 3-year period using a consensus-based process via an open invitation to the international eye tracking community. This guideline will be reviewed at maximum intervals of 4 years.
Affiliation(s)
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Robert G Alexander
- Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Gemma Arblaster
- Health Sciences School, University of Sheffield, Sheffield, UK
- Orthoptic Department, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Denize Atan
- Bristol Medical School, University of Bristol, Bristol, UK
- Mario E Giardini
- Department of Biomedical Engineering, University of Strathclyde, Glasgow, UK
- Iain D Gilchrist
- School of Psychological Science, University of Bristol, Bristol, UK
- Ruth Hamilton
- Department of Clinical Physics & Bioengineering, Royal Hospital for Children, NHS Greater Glasgow & Clyde, Glasgow, UK
- College of Medical, Veterinary & Life Sciences, University of Glasgow, Glasgow, UK
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Brooke S Jackson
- Department of Psychology, University of Georgia, Athens, GA, USA
- Helena Lee
- Clinical and Experimental Sciences, University of Southampton, Southampton, UK
- Stephen L Macknik
- Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Susana Martinez-Conde
- Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Lee Mcilreavy
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Lisa M Muratori
- Department of Physical Therapy, School of Health Professions, Stony Brook University, Stony Brook, NY, USA
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Department of Neurology, Johns Hopkins University, Baltimore, MD, USA
- Michael M Schlüssel
- UK EQUATOR Centre, Centre for Statistics in Medicine (CSM), Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS), University of Oxford, Oxford, UK
- Jay E Self
- Clinical and Experimental Sciences, University of Southampton, Southampton, UK
- Tarkeshwar Singh
- Department of Kinesiology, Pennsylvania State University, University Park, PA, USA
- Nikolaos Smyrnis
- 2nd Department of Psychiatry, National and Kapodistrian University of Athens, Medical School, General University Hospital Attikon, Athens, Greece
- Andreas Sprenger
- Department of Neurology and Institute of Psychology II, Center of Brain, Behavior and Metabolism (CBBM), University of Luebeck, Luebeck, Germany
6
Miljković N, Sodnik J. Effectiveness of a time to fixate for fitness to drive evaluation in neurological patients. Behav Res Methods 2024; 56:4277-4292. [PMID: 37488465 PMCID: PMC11289031 DOI: 10.3758/s13428-023-02177-3] [Accepted: 06/16/2023] [Indexed: 07/26/2023]
Abstract
We present a method to automatically calculate time to fixate (TTF) from eye-tracker data in subjects with neurological impairment using a driving simulator. TTF is the time interval between a stimulus first appearing and the person noticing it; specifically, we measured the time from when a child started to cross the street until the driver directed their gaze to the child. Of the 108 neurological patients recruited for the study, TTF was analysed in 56 to compare fit-, conditionally-fit-, and unfit-to-drive patients. The results showed that the proposed method, based on the YOLO (you only look once) object detector, is efficient for computing TTFs from eye-tracker data. We obtained discriminative results for fit-to-drive patients by applying Tukey's honest significant difference post hoc test (p < 0.01), while no difference was observed between the conditionally-fit and unfit-to-drive groups (p = 0.542). Moreover, we show that time-to-collision (TTC), initial gaze distance (IGD) from pedestrians, and speed at hazard onset did not influence the result, while the only significant interaction on TTF is among fitness, IGD, and TTC. The obtained TTFs are also compared with perception response times (PRT) calculated independently of the eye-tracker data and YOLO. Although we reached statistically significant results that support the method's possible application for assessing fitness to drive, we provide detailed directions for future driving-simulation-based evaluation and propose a processing workflow to secure reliable TTF calculation and its possible application in, for example, psychology and neuroscience.
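The core TTF computation described above — the time from hazard onset to the first gaze sample landing inside the detected pedestrian's bounding box — can be sketched as follows. This is an illustrative stand-in: the box format, timestamps, and data are invented, and the object detector (YOLO in the article) is replaced by a precomputed dictionary of boxes:

```python
# Sketch: time to fixate (TTF) as the interval between hazard onset and the
# first gaze sample falling inside the pedestrian's bounding box, which a
# detector such as YOLO would supply per frame. Data are illustrative.

def time_to_fixate(gaze, hazard_onset_ms, boxes):
    """gaze: list of (timestamp_ms, x, y) samples in screen pixels.
    boxes: dict mapping timestamp_ms -> (x0, y0, x1, y1) pedestrian box.
    Returns TTF in ms, or None if the pedestrian was never fixated."""
    for t, x, y in gaze:
        if t < hazard_onset_ms:
            continue  # hazard not yet visible
        box = boxes.get(t)
        if box and box[0] <= x <= box[2] and box[1] <= y <= box[3]:
            return t - hazard_onset_ms
    return None

gaze = [(1000, 10, 10), (1050, 200, 150), (1100, 310, 210)]
boxes = {t: (300, 200, 360, 260) for t in (1000, 1050, 1100)}
print(time_to_fixate(gaze, 1000, boxes))  # → 100
```

In practice, the per-frame boxes would come from running the detector on the simulator's scene video, synchronized with the eye-tracker clock.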
Affiliation(s)
- Nadica Miljković
- University of Belgrade - School of Electrical Engineering, Bulevar kralja Aleksandra 73, 11000, Belgrade, Serbia.
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, 1000, Ljubljana, Slovenia.
- Jaka Sodnik
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, 1000, Ljubljana, Slovenia
7
Onkhar V, Dodou D, de Winter JCF. Evaluating the Tobii Pro Glasses 2 and 3 in static and dynamic conditions. Behav Res Methods 2024; 56:4221-4238. [PMID: 37550466 PMCID: PMC11289326 DOI: 10.3758/s13428-023-02173-7] [Accepted: 06/15/2023] [Indexed: 08/09/2023]
Abstract
Over the past few decades, there have been significant developments in eye-tracking technology, particularly in the domain of mobile, head-mounted devices. Nevertheless, questions remain regarding the accuracy of these eye-trackers during static and dynamic tasks. In light of this, we evaluated the performance of two widely used devices: Tobii Pro Glasses 2 and Tobii Pro Glasses 3. A total of 36 participants engaged in tasks under three dynamicity conditions. In the "seated with a chinrest" trial, only the eyes could be moved; in the "seated without a chinrest" trial, both the head and the eyes were free to move; and during the walking trial, participants walked along a straight path. During the seated trials, participants' gaze was directed towards dots on a wall by means of audio instructions, whereas in the walking trial, participants maintained their gaze on a bullseye while walking towards it. Eye-tracker accuracy was determined using computer vision techniques to identify the target within the scene camera image. The findings showed that Tobii 3 outperformed Tobii 2 in terms of accuracy during the walking trials. Moreover, the results suggest that employing a chinrest in the case of head-mounted eye-trackers is counterproductive, as it necessitates larger eye eccentricities for target fixation, thereby compromising accuracy compared to not using a chinrest, which allows for head movement. Lastly, it was found that participants who reported higher workload demonstrated poorer eye-tracking accuracy. The current findings may be useful in the design of experiments that involve head-mounted eye-trackers.
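Head-mounted eye-tracker accuracy of the kind reported here is typically expressed as the angular error between the gaze direction and the target direction. A minimal sketch, not the authors' pipeline (their targets were recovered from the scene camera image); the vectors and values below are illustrative:

```python
# Sketch: angular error in degrees between a gaze direction and a target
# direction, both expressed as 3-D vectors from the eye. Illustrative only.
import math

def angular_error_deg(gaze_vec, target_vec):
    """Angle between two direction vectors, in degrees."""
    dot = sum(g * t for g, t in zip(gaze_vec, target_vec))
    ng = math.sqrt(sum(g * g for g in gaze_vec))
    nt = math.sqrt(sum(t * t for t in target_vec))
    cos = max(-1.0, min(1.0, dot / (ng * nt)))  # clamp rounding noise
    return math.degrees(math.acos(cos))

# A 0.1-unit lateral offset at unit viewing distance ≈ 5.7 degrees of error.
print(round(angular_error_deg((0.0, 0.0, 1.0), (0.1, 0.0, 1.0)), 2))
```

Averaging this error over validation targets gives the per-condition accuracy figures that such evaluations compare across devices.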
Affiliation(s)
- V Onkhar
- Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
- D Dodou
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands
- J C F de Winter
- Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands.
8
Postuma EMJL, Heutink J, Tol S, Jansen JL, Koopman J, Cornelissen FW, de Haan GA. A systematic review on visual scanning behaviour in hemianopia considering task specificity, performance improvement, spontaneous and training-induced adaptations. Disabil Rehabil 2024; 46:3221-3242. [PMID: 37563867 PMCID: PMC11259206 DOI: 10.1080/09638288.2023.2243590] [Received: 12/23/2022] [Accepted: 07/29/2023] [Indexed: 08/12/2023]
Abstract
PURPOSE People with homonymous hemianopia (HH) benefit from applying compensatory scanning behaviour that limits the consequences of HH in a specific task. The aims of this study are to (i) review the current literature on task-specific scanning behaviour that improves performance and (ii) identify differences between this performance-enhancing scanning behaviour and scanning behaviour that is spontaneously adopted or acquired through training. MATERIALS AND METHODS The databases PsycInfo, Medline, and Web of Science were searched for articles on scanning behaviour in people with HH. RESULTS The final sample contained 60 articles, reporting on three main tasks, i.e., search (N = 17), reading (N = 16) and mobility (N = 14), and other tasks (N = 18). Five articles reported on two different tasks. Specific scanning behaviour was related to task performance in search, reading, and mobility tasks. In search and reading tasks, spontaneous adaptations differed from this performance-enhancing scanning behaviour. Training could induce adaptations in scanning behaviour, enhancing performance in these two tasks. For mobility tasks, limited to no information was found on spontaneous and training-induced adaptations of scanning behaviour. CONCLUSIONS Performance-enhancing scanning behaviour is mainly task-specific. Spontaneous development of such scanning behaviour is rare. Fortunately, current compensatory scanning training programs can induce such scanning behaviour, which confirms the importance of providing scanning training.
IMPLICATIONS FOR REHABILITATION
- Scanning behaviour that improves performance in people with homonymous hemianopia (HH) is task-specific.
- Most people with HH do not spontaneously adopt scanning behaviour that improves performance.
- Compensatory scanning training can induce performance-enhancing scanning behaviour.
Affiliation(s)
- Eva M. J. L. Postuma
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, Rijksuniversiteit Groningen, Groningen, The Netherlands
- Joost Heutink
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, Rijksuniversiteit Groningen, Groningen, The Netherlands
- Royal Dutch Visio, Centre of Expertise for Blind and Partially Sighted People, Huizen, The Netherlands
- Sarah Tol
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, Rijksuniversiteit Groningen, Groningen, The Netherlands
- Josephien L. Jansen
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, Rijksuniversiteit Groningen, Groningen, The Netherlands
- Jan Koopman
- Royal Dutch Visio, Centre of Expertise for Blind and Partially Sighted People, Huizen, The Netherlands
- Frans W. Cornelissen
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Gera A. de Haan
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, Rijksuniversiteit Groningen, Groningen, The Netherlands
- Royal Dutch Visio, Centre of Expertise for Blind and Partially Sighted People, Huizen, The Netherlands
9
Murari J, Gautier J, Daout J, Krafft L, Senée P, Mecê P, Grieve K, Seiple W, Sheynikhovich D, Meimon S, Paques M, Arleo A. Foveolar Drusen Decrease Fixation Stability in Pre-Symptomatic AMD. Invest Ophthalmol Vis Sci 2024; 65:13. [PMID: 38975944 PMCID: PMC11232898 DOI: 10.1167/iovs.65.8.13] [Indexed: 07/09/2024]
Abstract
Purpose This study aims to link subtle changes in fixational eye movements (FEM) in controls and in patients with foveal drusen, using adaptive optics retinal imaging, in order to find anatomo-functional markers for pre-symptomatic age-related macular degeneration (AMD). Methods We recruited 7 young controls, 4 older controls, and 16 patients with pre-symptomatic AMD with foveal drusen from the Silversight Cohort. A high-speed research-grade adaptive optics flood illumination ophthalmoscope (AO-FIO) was used for monocular retinal tracking of fixational eye movements. The system provides sub-arcminute resolution and high-speed, distortion-free imaging of the foveal area. Foveal drusen position and size were documented using gaze-dependent imaging on a clinical-grade AO-FIO. Results FEM were measured with high precision (RMS-S2S = 0.0015 degrees on human eyes) and small foveal drusen (median diameter = 60 µm) were detected with high-contrast imaging. Microsaccade amplitude, drift diffusion coefficient, and ISOline area (ISOA) were significantly larger for patients with foveal drusen compared with controls. Among the drusen participants, microsaccade amplitude was correlated with drusen eccentricity from the center of the fovea. Conclusions A novel high-speed, high-precision retinal tracking technique allowed for the characterization of FEM at the microscopic level. Foveal drusen altered fixation stability, resulting in compensatory FEM changes. In particular, drusen at the foveolar level seemed to have a stronger impact on microsaccade amplitudes and ISOA. The unexpected anatomo-functional link between small foveal drusen and fixation stability opens up a new perspective on detecting oculomotor signatures of eye diseases at the pre-symptomatic stage.
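As a toy illustration of one of the reported FEM metrics, a drift diffusion coefficient can be estimated from the lag-1 mean squared displacement (MSD) of a drift trace, assuming 2-D Brownian-like motion (MSD ≈ 4Dτ). The article's exact estimator is not specified here; the function name, data, and sampling rate are invented:

```python
# Sketch: drift diffusion coefficient D from a 2-D ocular drift trace via
# the lag-1 mean squared displacement, assuming MSD ≈ 4·D·τ for 2-D motion.
# A toy stand-in for the article's estimator; data are illustrative.

def diffusion_coefficient(trace, dt):
    """trace: list of (x, y) positions in degrees; dt: sample interval in s.
    Returns D in deg^2/s, estimated from the lag-1 MSD."""
    sq_disp = [(x2 - x1) ** 2 + (y2 - y1) ** 2
               for (x1, y1), (x2, y2) in zip(trace, trace[1:])]
    msd = sum(sq_disp) / len(sq_disp)
    return msd / (4 * dt)

# Toy trace sampled every 1 ms, stepping 0.01 degrees in x per sample.
trace = [(0.01 * i, 0.0) for i in range(5)]
print(diffusion_coefficient(trace, dt=0.001))
```

Real drift analyses would fit the MSD over a range of lags rather than a single one, but the single-lag version shows the quantity being estimated.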
Affiliation(s)
- Jimmy Murari
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Josselin Gautier
- CHNO des Quinze-Vingts, INSERM-DGOS CIC, Paris, France
- LTSI, Inserm UMR 1099, University of Rennes, Rennes, France
- Joël Daout
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Léa Krafft
- Office National d'Etudes et de Recherches Aérospatiales (ONERA), Hauts-de-Seine, France
- Pierre Senée
- Office National d'Etudes et de Recherches Aérospatiales (ONERA), Hauts-de-Seine, France
- Quantel Medical SA, Cournon d'Auvergne, France
- Pedro Mecê
- Office National d'Etudes et de Recherches Aérospatiales (ONERA), Hauts-de-Seine, France
- Institut Langevin, CNRS, ESPCI, Paris, France
- Kate Grieve
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- CHNO des Quinze-Vingts, INSERM-DGOS CIC, Paris, France
- Serge Meimon
- Office National d'Etudes et de Recherches Aérospatiales (ONERA), Hauts-de-Seine, France
- Michel Paques
- CHNO des Quinze-Vingts, INSERM-DGOS CIC, Paris, France
- Angelo Arleo
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
10
Cade A, Turnbull PRK. Classification of short and long term mild traumatic brain injury using computerized eye tracking. Sci Rep 2024; 14:12686. [PMID: 38830966 PMCID: PMC11148176 DOI: 10.1038/s41598-024-63540-8] [Received: 02/27/2024] [Accepted: 05/29/2024] [Indexed: 06/05/2024]
Abstract
Accurate and objective diagnosis of brain injury remains challenging. This study evaluated the usability and reliability of computerized eye-tracker assessments (CEAs) designed to assess oculomotor function, visual attention/processing, and selective attention in recent mild traumatic brain injury (mTBI), persistent post-concussion syndrome (PPCS), and controls. Tests included egocentric localisation, fixation stability, smooth pursuit, saccades, Stroop, and the vestibulo-ocular reflex (VOR). Thirty-five healthy adults performed the CEA battery twice to assess usability and test-retest reliability. In separate experiments, CEA data from 55 healthy, 20 mTBI, and 40 PPCS adults were used to train a machine learning model to categorize participants into control, mTBI, or PPCS classes. Intraclass correlation coefficients demonstrated moderate (ICC > .50) to excellent (ICC > .98) reliability (p < .05) and satisfactory CEA compliance. The machine learning model categorizing participants into control, mTBI, and PPCS groups performed reasonably well (balanced accuracy control: 0.83, mTBI: 0.66, and PPCS: 0.76; AUC-ROC: 0.82). Key outcomes were the VOR (gaze stability), fixation (vertical error), and pursuit (total error, vertical gain, and number of saccades). The CEA battery was reliable and able to differentiate healthy, mTBI, and PPCS patients reasonably well. While promising, the diagnostic model's accuracy should be improved with a larger training dataset before use in clinical environments.
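The balanced accuracy reported for the three-class model is the mean per-class recall. A minimal sketch (the labels and predictions below are invented, not the study's data):

```python
# Sketch: balanced accuracy (mean per-class recall) for a three-class
# classifier, matching the metric quoted in the abstract. Illustrative data.

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, robust to class imbalance."""
    recalls = []
    for c in sorted(set(y_true)):
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

y_true = ["control"] * 4 + ["mTBI"] * 4 + ["PPCS"] * 4
y_pred = ["control", "control", "control", "mTBI",
          "mTBI", "mTBI", "PPCS", "PPCS",
          "PPCS", "PPCS", "PPCS", "control"]
print(balanced_accuracy(y_true, y_pred))  # recalls 0.75, 0.5, 0.75 → 2/3
```

Unlike plain accuracy, this metric does not reward a model for favouring the largest class, which matters when patient groups are smaller than the control group.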
Affiliation(s)
- Alice Cade
- School of Optometry and Vision Science, The University of Auckland, Private Bag 92019, Auckland, 1023, New Zealand.
- New Zealand College of Chiropractic, Auckland, New Zealand.
- Philip R K Turnbull
- School of Optometry and Vision Science, The University of Auckland, Private Bag 92019, Auckland, 1023, New Zealand
11
Stark P, Hasenbein L, Kasneci E, Göllner R. Gaze-based attention network analysis in a virtual reality classroom. MethodsX 2024; 12:102662. [PMID: 38577409 PMCID: PMC10993185 DOI: 10.1016/j.mex.2024.102662] [Received: 06/21/2023] [Accepted: 03/11/2024] [Indexed: 04/06/2024]
Abstract
This article provides a step-by-step guideline for measuring and analyzing visual attention in 3D virtual reality (VR) environments based on eye-tracking data. We propose a solution to the challenges of obtaining relevant eye-tracking information in a dynamic 3D virtual environment and calculating interpretable indicators of learning and social behavior. With a method called "gaze-ray casting," we simulated 3D-gaze movements to obtain information about the gazed objects. This information was used to create graphical models of visual attention, establishing attention networks. These networks represented participants' gaze transitions between different entities in the VR environment over time. Measures of centrality, distribution, and interconnectedness of the networks were calculated to describe the network structure. The measures, derived from graph theory, allowed for statistical inference testing and the interpretation of participants' visual attention in 3D VR environments. Our method provides useful insights when analyzing students' learning in a VR classroom, as reported in a corresponding evaluation article with N = 274 participants.
- Guidelines on implementing gaze-ray casting in VR using the Unreal Engine and the HTC VIVE Pro Eye.
- Creating gaze-based attention networks and analyzing their network structure.
- Implementation tutorials and the open-source software code are provided via OSF: https://osf.io/pxjrc/?view_only=1b6da45eb93e4f9eb7a138697b941198
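The attention networks described above can be sketched as a count of directed gaze transitions between entities, with a simple graph measure computed on the result. This pure-Python stand-in invents the entity names and uses weighted out-degree in place of the article's full set of graph-theoretic measures:

```python
# Sketch: build a gaze-transition "attention network" from a sequence of
# gazed entities (as produced by gaze-ray casting) and compute weighted
# out-degree. Entity names and data are illustrative.
from collections import Counter

def attention_network(gaze_sequence):
    """Collapse consecutive duplicate gazes, then count directed
    transitions (src, dst) -> weight."""
    collapsed = [gaze_sequence[0]]
    for g in gaze_sequence[1:]:
        if g != collapsed[-1]:
            collapsed.append(g)
    return Counter(zip(collapsed, collapsed[1:]))

def out_degree(network):
    """Weighted out-degree per node: total outgoing transitions."""
    deg = Counter()
    for (src, _dst), weight in network.items():
        deg[src] += weight
    return dict(deg)

seq = ["teacher", "teacher", "screen", "peer", "screen", "teacher", "screen"]
net = attention_network(seq)
print(net[("teacher", "screen")], out_degree(net))
```

Centrality, distribution, and interconnectedness measures of the kind the article reports would be computed on the same weighted edge list.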
Affiliation(s)
- Philipp Stark
- University of Tübingen, Hector Research Institute, Europastraße 6, 72072 Tübingen, Germany
- Lisa Hasenbein
- University of Tübingen, Hector Research Institute, Europastraße 6, 72072 Tübingen, Germany
- Enkelejda Kasneci
- Technical University of Munich, Chair for Human-Centered Technologies for Learning, Arcisstraße 21, 80333 München, Germany
- Richard Göllner
- University of Tübingen, Hector Research Institute, Europastraße 6, 72072 Tübingen, Germany
- University of Regensburg, Institute of Educational Science, Universitätsstraße 31, 93053 Regensburg, Germany
12
Garces P, Antoniades CA, Sobanska A, Kovacs N, Ying SH, Gupta AS, Perlman S, Szmulewicz DJ, Pane C, Németh AH, Jardim LB, Coarelli G, Dankova M, Traschütz A, Tarnutzer AA. Quantitative Oculomotor Assessment in Hereditary Ataxia: Systematic Review and Consensus by the Ataxia Global Initiative Working Group on Digital-motor Biomarkers. Cerebellum 2024; 23:896-911. [PMID: 37117990 PMCID: PMC11102387 DOI: 10.1007/s12311-023-01559-9] [Accepted: 04/18/2023] [Indexed: 04/30/2023]
Abstract
Oculomotor deficits are common in hereditary ataxia, but disproportionately neglected in clinical ataxia scales and as outcome measures for interventional trials. Quantitative assessment of oculomotor function has become increasingly available and thus applicable in multicenter trials, and offers the opportunity to capture the severity and progression of oculomotor impairment in a sensitive and reliable manner. In this consensus paper of the Ataxia Global Initiative Working Group On Digital Oculomotor Biomarkers, based on a systematic literature review, we propose harmonized methodology and measurement parameters for the quantitative assessment of oculomotor function in natural-history studies and clinical trials in hereditary ataxia. MEDLINE was searched for articles reporting on oculomotor/vestibular properties in ataxia patients and a study-tailored quality assessment was performed. One hundred and seventeen articles reporting on subjects with genetically confirmed (n=1134) or suspected hereditary ataxia (n=198), and degenerative ataxias with sporadic presentation (n=480), were included and subject to data extraction. Based on robust discrimination from controls, correlation with disease severity, sensitivity to change, and feasibility in international multicenter settings as a prerequisite for clinical trials, we prioritize a core set of five eye-movement types: (i) pursuit eye movements, (ii) saccadic eye movements, (iii) fixation, (iv) eccentric gaze holding, and (v) rotational vestibulo-ocular reflex. We provide detailed guidelines for their acquisition, and recommendations on the quantitative parameters to extract. Limitations include low study quality, heterogeneity in patient populations, and lack of longitudinal studies.
Standardization of quantitative oculomotor assessments will facilitate their implementation, interpretation, and validation in clinical trials, and ultimately advance our understanding of the evolution of oculomotor network dysfunction in hereditary ataxias.
Affiliation(s)
- Pilar Garces: Roche Pharma Research and Early Development, Neuroscience and Rare Diseases, Roche Innovation Center Basel, Basel, Switzerland
- Chrystalina A Antoniades: NeuroMetrology Lab, Nuffield Department of Clinical Neurosciences, Clinical Neurology, Medical Sciences Division, University of Oxford, Oxford, OX3 9DU, UK
- Anna Sobanska: Institute of Psychiatry and Neurology, Warsaw, Poland
- Norbert Kovacs: Department of Neurology, University of Pécs, Medical School, Pécs, Hungary
- Sarah H Ying: Department of Otology and Laryngology and Department of Neurology, Harvard Medical School, Boston, MA, USA
- Anoopum S Gupta: Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Susan Perlman: University of California Los Angeles, Los Angeles, California, USA
- David J Szmulewicz: Balance Disorders and Ataxia Service, Royal Victoria Eye and Ear Hospital, East Melbourne, Melbourne, VIC, 3002, Australia; The Florey Institute of Neuroscience and Mental Health, Parkville, Melbourne, VIC, 3052, Australia
- Chiara Pane: Department of Neurosciences and Reproductive and Odontostomatological Sciences, University of Naples "Federico II", Naples, Italy
- Andrea H Németh: Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Oxford Centre for Genomic Medicine, Oxford University Hospitals NHS Trust, Oxford, UK
- Laura B Jardim: Departamento de Medicina Interna, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil; Serviço de Genética Médica/Centro de Pesquisa Clínica e Experimental, Hospital de Clínicas de Porto Alegre, Porto Alegre, Brazil
- Giulia Coarelli: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm U1127, CNRS UMR7225, Paris, France; Department of Genetics, Neurogene National Reference Centre for Rare Diseases, Pitié-Salpêtrière University Hospital, Assistance Publique, Hôpitaux de Paris, Paris, France
- Michaela Dankova: Department of Neurology, Centre of Hereditary Ataxias, 2nd Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- Andreas Traschütz: Research Division "Translational Genomics of Neurodegenerative Diseases", Hertie-Institute for Clinical Brain Research and Center of Neurology, University of Tübingen, Tübingen, Germany; German Center for Neurodegenerative Diseases (DZNE), University of Tübingen, Tübingen, Germany
- Alexander A Tarnutzer: Neurology, Cantonal Hospital of Baden, 5404, Baden, Switzerland; Faculty of Medicine, University of Zurich, Zurich, Switzerland
13
Pradeep V, Jayachandra AB, Askar SS, Abouhawwash M. Hyperparameter tuning using Lévy flight and interactive crossover-based reptile search algorithm for eye movement event classification. Front Physiol 2024; 15:1366910. [PMID: 38812881 PMCID: PMC11134024 DOI: 10.3389/fphys.2024.1366910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2024] [Accepted: 04/10/2024] [Indexed: 05/31/2024] Open
Abstract
Introduction: Eye movement is one of the cues used in human-machine interface technologies for predicting the intention of users. A developing application of eye movement event detection is the creation of assistive technologies for paralyzed patients. However, developing an effective classifier is one of the main issues in eye movement event detection. Methods: In this paper, a bidirectional long short-term memory (BILSTM) network is proposed, along with hyperparameter tuning, for achieving effective eye movement event classification. The Lévy flight and interactive crossover-based reptile search algorithm (LICRSA) is used for optimizing the hyperparameters of the BILSTM. Issues related to overfitting are avoided by using fuzzy data augmentation (FDA), and a deep neural network, namely VGG-19, is used for extracting features from eye movements. The optimization of hyperparameters using LICRSA therefore enhances the classification of eye movement events with the BILSTM. Results and Discussion: The proposed BILSTM-LICRSA is evaluated using accuracy, precision, sensitivity, F1-score, area under the receiver operating characteristic curve (AUROC), and area under the precision-recall curve (AUPRC) on four datasets, namely Lund2013, a collected dataset, GazeBaseR, and UTMultiView. gazeNet, human manual classification (HMC), and a multi-source information-embedded approach (MSIEA) are used for comparison with BILSTM-LICRSA. The F1-score of BILSTM-LICRSA on the GazeBaseR dataset is 98.99%, which is higher than that of the MSIEA.
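The metrics reported in this abstract (precision, sensitivity/recall, F1-score) are standard classification measures rather than anything specific to BILSTM-LICRSA. As a generic, hypothetical sketch of how per-class F1 is computed for labeled eye movement events:

```python
def f1_score(y_true, y_pred, positive):
    """Precision, recall (sensitivity), and F1 for one event class,
    e.g. scoring 'saccade' labels in an eye movement event stream."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical frame-level labels: 'fix' = fixation, 'sac' = saccade.
p, r, f = f1_score(["fix", "sac", "fix", "sac"], ["fix", "sac", "sac", "sac"], "sac")
```

F1 is the harmonic mean of precision and recall, which is why it is often the headline figure for imbalanced event classes such as saccades.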
Affiliation(s)
- V. Pradeep: Department of Information Science and Engineering, Alva’s Institute of Engineering and Technology, Mangaluru, India
- Ananda Babu Jayachandra: Department of Information Science and Engineering, Malnad College of Engineering, Hassan, India
- S. S. Askar: Department of Statistics and Operations Research, College of Science, King Saud University, Riyadh, Saudi Arabia
- Mohamed Abouhawwash: Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, Egypt
14
Osanami Törngren S, Schütze C, Van Belle E, Nyström M. "We choose this CV because we choose diversity" - What do eye movements say about the choices recruiters make? FRONTIERS IN SOCIOLOGY 2024; 9:1222850. [PMID: 38515653 PMCID: PMC10954785 DOI: 10.3389/fsoc.2024.1222850] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Accepted: 02/02/2024] [Indexed: 03/23/2024]
Abstract
Introduction A large body of research has established a consensus that racial discrimination in CV screening occurs and persists. Nevertheless, we still know very little about how recruiters look at a CV and how this is connected to discriminatory patterns. This article examines the way recruiters view and select CVs and how they reason about their CV selection choices, as a first step in unpacking the patterns of hiring discrimination. Specifically, we explore how race and ethnicity signaled through the CV matter, and how recruiters reason about the choices they make. Methods We recorded data from 40 respondents (20 pairs) who are real-life recruiters with experience in the recruitment of diverse employees at three large Swedish-based firms in the finance and retail sectors in two large cities. The participating firms all value diversity, equity, and inclusion in their recruitment. Their task was to individually rate 10 fictitious CVs in which race (signaled by a face image) and ethnicity (signaled by a name) were systematically manipulated, select the top three candidates, and then discuss their choices in pairs to decide on a single top candidate. We examined whether respondents' choices were associated with the parts of the CV they looked at, and how they reasoned about and justified their choices through dialog. Results Our results show that non-White CVs were rated higher than White CVs. While we do not observe any statistically significant differences in the ratings between different racial groups, we see a statistically significant preference for Chinese over Iraqi names. There were no significant differences in time spent looking at the CV across different racial groups, but respondents looked longer at Polish names compared to Swedish names when presented next to a White face.
The dialog data reveal how respondents assess different CVs by making assumptions about the candidates' job and organizational fit through limited information on the CVs, especially when the qualifications of the candidates are evaluated to be equal.
Affiliation(s)
- Sayaka Osanami Törngren: Department of Global Political Studies, Malmö Institute for Studies of Migration, Diversity, and Welfare, Malmö University, Malmö, Sweden
- Carolin Schütze: Department of Global Political Studies, Malmö Institute for Studies of Migration, Diversity, and Welfare, Malmö University, Malmö, Sweden
- Eva Van Belle: Brussels Institute for Social and Population Studies (BRISPO), Vrije Universiteit Brussel, Brussels, Belgium
15
Valtakari NV, Hessels RS, Niehorster DC, Viktorsson C, Nyström P, Falck-Ytter T, Kemner C, Hooge ITC. A field test of computer-vision-based gaze estimation in psychology. Behav Res Methods 2024; 56:1900-1915. [PMID: 37101100 PMCID: PMC10990994 DOI: 10.3758/s13428-023-02125-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/07/2023] [Indexed: 04/28/2023]
Abstract
Computer-vision-based gaze estimation refers to techniques that estimate gaze direction directly from video recordings of the eyes or face without the need for an eye tracker. Although many such methods exist, their validation is often found in the technical literature (e.g., computer science conference papers). We aimed to (1) identify which computer-vision-based gaze estimation methods are usable by the average researcher in fields such as psychology or education, and (2) evaluate these methods. We searched for methods that do not require calibration and have clear documentation. Two toolkits, OpenFace and OpenGaze, were found to fulfill these criteria. First, we present an experiment where adult participants fixated on nine stimulus points on a computer screen. We filmed their face with a camera and processed the recorded videos with OpenFace and OpenGaze. We conclude that OpenGaze is accurate and precise enough to be used in screen-based experiments with stimuli separated by at least 11 degrees of gaze angle. OpenFace was not sufficiently accurate for such situations but can potentially be used in sparser environments. We then examined whether OpenFace could be used with horizontally separated stimuli in a sparse environment with infant participants. We compared dwell measures based on OpenFace estimates to the same measures based on manual coding. We conclude that OpenFace gaze estimates may potentially be used with measures such as relative total dwell time to sparse, horizontally separated areas of interest, but should not be used to draw conclusions about measures such as dwell duration.
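The "11 degrees of gaze angle" criterion translates into an on-screen distance only once the viewing distance is fixed. A small geometric sketch with invented numbers (not taken from the study):

```python
import math

def visual_angle_deg(p1, p2, viewing_distance_mm):
    """Angular separation (degrees) of two on-screen points (mm coordinates),
    treating the eye as centred on the screen normal."""
    separation = math.dist(p1, p2)  # straight-line distance on the screen
    return math.degrees(2 * math.atan(separation / (2 * viewing_distance_mm)))

# Two stimuli 100 mm apart viewed from 500 mm subtend roughly 11.4 degrees,
# i.e. just above the separation OpenGaze reportedly requires.
angle = visual_angle_deg((0, 0), (100, 0), 500)
```

This makes explicit why a method that is too coarse for a dense screen layout may still work in a sparser environment: the same stimuli, viewed from closer or spaced further apart, subtend larger angles.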
Affiliation(s)
- Niilo V Valtakari: Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands
- Diederick C Niehorster: Lund University Humanities Lab, Lund University, Lund, Sweden; Department of Psychology, Lund University, Lund, Sweden
- Charlotte Viktorsson: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Pär Nyström: Uppsala Child and Baby Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Terje Falck-Ytter: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden; Karolinska Institutet Center of Neurodevelopmental Disorders (KIND), Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
- Chantal Kemner: Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands
16
Niehorster DC, Hessels RS, Benjamins JS, Nyström M, Hooge ITC. GlassesValidator: A data quality tool for eye tracking glasses. Behav Res Methods 2024; 56:1476-1484. [PMID: 37326770 PMCID: PMC10991001 DOI: 10.3758/s13428-023-02105-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/03/2023] [Indexed: 06/17/2023]
Abstract
According to the proposal for a minimum reporting guideline for an eye tracking study by Holmqvist et al. (2022), the accuracy (in degrees) of eye tracking data should be reported. Currently, there is no easy way to determine accuracy for wearable eye tracking recordings. To enable determining the accuracy quickly and easily, we have produced a simple validation procedure using a printable poster and accompanying Python software. We tested the poster and procedure with 61 participants using one wearable eye tracker. In addition, the software was tested with six different wearable eye trackers. We found that the validation procedure can be administered within a minute per participant and provides measures of accuracy and precision. Calculating the eye-tracking data quality measures can be done offline on a simple computer and requires no advanced computer skills.
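Accuracy (systematic angular offset from a known target) and precision (sample-to-sample scatter) are the two quantities such a validation procedure yields. A minimal sketch, assuming gaze samples are already converted to angular (x, y) coordinates; this is not the glassesValidator code itself:

```python
import math

def accuracy_deg(gaze_deg, target_deg):
    """Mean angular offset between gaze samples and the validation target."""
    tx, ty = target_deg
    offsets = [math.hypot(gx - tx, gy - ty) for gx, gy in gaze_deg]
    return sum(offsets) / len(offsets)

def precision_rms_s2s_deg(gaze_deg):
    """Root-mean-square of sample-to-sample angular distances."""
    sq = [math.hypot(x2 - x1, y2 - y1) ** 2
          for (x1, y1), (x2, y2) in zip(gaze_deg, gaze_deg[1:])]
    return math.sqrt(sum(sq) / len(sq))
```

A perfectly steady but offset recording has a non-zero accuracy error and zero RMS precision error, which is why both measures are reported.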
Affiliation(s)
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Jeroen S Benjamins: Experimental Psychology, Helmholtz Institute & Social, Health and Organisational Psychology, Utrecht University, Utrecht, Netherlands
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
17
Yuan XF, Ji YQ, Zhang TX, Xiang HB, Ye ZY, Ye Q. Effects of Exercise Habits and Gender on Sports e-Learning Behavior: Evidence from an Eye-Tracking Study. Psychol Res Behav Manag 2024; 17:813-826. [PMID: 38434961 PMCID: PMC10909329 DOI: 10.2147/prbm.s442863] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2023] [Accepted: 02/20/2024] [Indexed: 03/05/2024] Open
Abstract
Background/Objective In the post-epidemic era, an increasing number of individuals have become accustomed to learning sports and physical activity knowledge online to meet fitness and health demands. However, most previous studies have examined the influence of e-learning materials and resources on learners and have neglected intrinsic factors such as experience and physiological characteristics. Therefore, we conducted a study to investigate the effect of exercise habits and gender on sports e-learning behavior via eye-tracking technology. Methods We recruited a sample of 60 undergraduate students (mean age = 19.6 years) from a university in Nanjing, China. They were randomly assigned to 4 groups based on 2 genders × 2 exercise habits. Their gaze behavior was collected by an eye-tracking device during the experiment. A cognitive load test and a learning effect test were conducted at the end of each individual experiment. Results (1) Compared to the non-exercise habit group, the exercise habit group had a higher fixation count (P<0.05), a shorter average fixation duration (P<0.05), a smaller average pupil diameter (P<0.05), a lower subjective cognitive load (P<0.05), and a better learning outcome (P<0.05). (2) Male participants showed a greater tendency to process information from the video area of interest (AOI), and had lower subjective cognitive load (P<0.05) and better learning outcomes (P<0.05). (3) There was no interaction effect between exercise habits and gender for any of the indicators (P>0.05). Conclusion Our results indicate that exercise habits effectively enhance sports e-learning outcomes and reduce cognitive load, with the exercise habit group showing significant differences in fixation count, average fixation duration, and average pupil diameter. Furthermore, male subjects exhibited superior learning outcomes, experienced lower cognitive load, and demonstrated greater attentiveness to dynamic visual information.
These conclusions are expected to improve sports e-learning success and address educational inequality.
Affiliation(s)
- Xu-Fu Yuan: School of Sports Training, Nanjing Sport Institute, Nanjing, Jiangsu, People’s Republic of China
- Yu-Qin Ji: School of Sport and Human Science, Nanjing Sport Institute, Nanjing, Jiangsu, People’s Republic of China
- Teng-Xiao Zhang: School of Physical Education and Humanities, Nanjing Sport Institute, Nanjing, Jiangsu, People’s Republic of China
- Hong-Bin Xiang: School of Sport, Exercise and Health Sciences, Loughborough University, Leicestershire, UK
- Zhuo-Yan Ye: Nanjing Foreign Language School Xianlin Campus, Nanjing, Jiangsu, People’s Republic of China
- Qiang Ye: School of Physical Education and Humanities, Nanjing Sport Institute, Nanjing, Jiangsu, People’s Republic of China
18
Zeng G, Simpson EA, Paukner A. Maximizing valid eye-tracking data in human and macaque infants by optimizing calibration and adjusting areas of interest. Behav Res Methods 2024; 56:881-907. [PMID: 36890330 DOI: 10.3758/s13428-022-02056-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/24/2022] [Indexed: 03/10/2023]
Abstract
Remote eye tracking with automated corneal reflection provides insights into the emergence and development of cognitive, social, and emotional functions in human infants and non-human primates. However, because most eye-tracking systems were designed for use in human adults, the accuracy of eye-tracking data collected in other populations is unclear, as are potential approaches to minimize measurement error. For instance, data quality may differ across species or ages, which are necessary considerations for comparative and developmental studies. Here we examined how the calibration method and adjustments to areas of interest (AOIs) of the Tobii TX300 changed the mapping of fixations to AOIs in a cross-species longitudinal study. We tested humans (N = 119) at 2, 4, 6, 8, and 14 months of age and macaques (Macaca mulatta; N = 21) at 2 weeks, 3 weeks, and 6 months of age. In all groups, we found improvement in the proportion of AOI hits detected as the number of successful calibration points increased, suggesting calibration approaches with more points may be advantageous. Spatially enlarging and temporally prolonging AOIs increased the number of fixation-AOI mappings, suggesting improvements in capturing infants' gaze behaviors; however, these benefits varied across age groups and species, suggesting different parameters may be ideal, depending on the population studied. In sum, to maximize usable sessions and minimize measurement error, eye-tracking data collection and extraction approaches may need adjustments for the age groups and species studied. Doing so may make it easier to standardize and replicate eye-tracking research findings.
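Spatially enlarging an AOI before mapping fixations, as the study describes, amounts to padding the AOI rectangle. A sketch with hypothetical coordinates (not the authors' Tobii pipeline):

```python
def count_aoi_hits(fixations, aoi, margin=0.0):
    """Count fixations (x, y) that land inside a rectangular AOI
    (left, top, right, bottom) enlarged by `margin` pixels per side."""
    left, top, right, bottom = aoi
    return sum(1 for x, y in fixations
               if left - margin <= x <= right + margin
               and top - margin <= y <= bottom + margin)

face_aoi = (0, 0, 100, 100)                       # invented AOI bounds
fixes = [(50, 50), (105, 50), (200, 200)]         # invented fixation centres
strict = count_aoi_hits(fixes, face_aoi)              # 1 hit
relaxed = count_aoi_hits(fixes, face_aoi, margin=10)  # near-miss now counts: 2
```

The trade-off the abstract hints at is visible here: a larger margin recovers fixations displaced by calibration error, but margins tuned for one age group or species may over-capture for another.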
Affiliation(s)
- Guangyu Zeng: Department of Psychology, University of Miami, Coral Gables, FL, USA
- Annika Paukner: Department of Psychology, Nottingham Trent University, Nottingham, UK
19
Arsiwala-Scheppach LT, Castner NJ, Rohrer C, Mertens S, Kasneci E, Cejudo Grano de Oro JE, Schwendicke F. Impact of artificial intelligence on dentists' gaze during caries detection: A randomized controlled trial. J Dent 2024; 140:104793. [PMID: 38016620 DOI: 10.1016/j.jdent.2023.104793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Revised: 11/15/2023] [Accepted: 11/24/2023] [Indexed: 11/30/2023] Open
Abstract
OBJECTIVES We aimed to understand how artificial intelligence (AI) influences dentists by comparing their gaze behavior when using versus not using AI software to detect primary proximal carious lesions on bitewing radiographs. METHODS 22 dentists assessed a median of 18 bitewing images, resulting in 170 datasets from dentists without AI and 179 datasets from dentists with AI, after excluding data with poor gaze recording quality. We compared time to first fixation, fixation count, average fixation duration, and fixation frequency between both trial groups. Analyses were performed for the entire image and stratified by (1) presence of carious lesions and/or restorations and (2) lesion depth (E1/2: outer/inner enamel; D1-3: outer-inner third of dentin). We also compared the transitional pattern of the dentists' gaze between the trial groups. RESULTS Median time to first fixation was shorter in all groups of teeth for dentists with AI versus without AI, although p>0.05. Dentists with AI had more fixations (median=68, IQR=31, 116) on teeth with restorations compared to dentists without AI (median=47, IQR=19, 100), p=0.01. In turn, average fixation duration was longer on teeth with caries for dentists with AI than for those without AI, although p>0.05. The visual search strategy employed by dentists with AI was less systematic, with a lower proportion of lateral tooth-wise transitions compared to dentists without AI. CONCLUSIONS Dentists with AI exhibited more efficient viewing behavior compared to dentists without AI, e.g., less time taken to notice caries and/or restorations, more fixations on teeth with restorations, and shorter fixation durations on teeth without carious lesions and/or restorations. CLINICAL SIGNIFICANCE Analysis of dentists' gaze patterns while using AI-generated annotations of carious lesions demonstrates how AI influences their data extraction methods for dental images. 
Such insights can be exploited to improve, and even customize, AI-based diagnostic tools, thus reducing the dentists' extraneous attentional processing and allowing for more thorough examination of other image areas.
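The gaze measures compared in this trial (time to first fixation, fixation count, and average fixation duration) all derive from a per-AOI list of fixation onsets and durations. A generic sketch with invented numbers, not the study's analysis code:

```python
def gaze_metrics(fixations_ms):
    """Summarise (onset_ms, duration_ms) fixation records on one AOI."""
    if not fixations_ms:
        return None  # the AOI was never fixated
    onsets = [onset for onset, _ in fixations_ms]
    durations = [dur for _, dur in fixations_ms]
    return {
        "time_to_first_fixation_ms": min(onsets),
        "fixation_count": len(fixations_ms),
        "average_fixation_duration_ms": sum(durations) / len(durations),
    }

# Two invented fixations on a tooth AOI: onsets 120 ms and 400 ms.
metrics = gaze_metrics([(120, 200), (400, 300)])
```

Computing all three from the same record list is what makes the stratified comparisons (by lesion presence and depth) straightforward: the stratification only changes which fixations enter the list.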
Affiliation(s)
- Lubaina T Arsiwala-Scheppach: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Aßmannshauser Straße 4-6, 14197, Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, Germany
- Csaba Rohrer: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Aßmannshauser Straße 4-6, 14197, Berlin, Germany
- Sarah Mertens: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Aßmannshauser Straße 4-6, 14197, Berlin, Germany
- Enkelejda Kasneci: Human-Centered Technologies for Learning, Technical University Munich, Germany
- Jose Eduardo Cejudo Grano de Oro: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Aßmannshauser Straße 4-6, 14197, Berlin, Germany
- Falk Schwendicke: Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Aßmannshauser Straße 4-6, 14197, Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, Germany
20
Velisar A, Shanidze NM. Noise estimation for head-mounted 3D binocular eye tracking using Pupil Core eye-tracking goggles. Behav Res Methods 2024; 56:53-79. [PMID: 37369939 PMCID: PMC11062346 DOI: 10.3758/s13428-023-02150-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/23/2023] [Indexed: 06/29/2023]
Abstract
Head-mounted, video-based eye tracking is becoming increasingly common and has promise in a range of applications. Here, we provide a practical and systematic assessment of the sources of measurement uncertainty for one such device - the Pupil Core - in three eye-tracking domains: (1) the 2D scene camera image; (2) the physical rotation of the eye relative to the scene camera 3D space; and (3) the external projection of the estimated gaze point location onto the target plane or in relation to world coordinates. We also assess eye camera motion during active tasks relative to the eye and the scene camera, an important consideration as the rigid arrangement of eye and scene camera is essential for proper alignment of the detected gaze. We find that eye camera motion, improper gaze point depth estimation, and erroneous eye models can all lead to added noise that must be considered in the experimental design. Further, while calibration accuracy and precision estimates can help assess data quality in the scene camera image, they may not be reflective of errors and variability in gaze point estimation. These findings support the importance of eye model constancy for comparisons across experimental conditions and suggest additional assessments of data reliability may be warranted for experiments that require the gaze point or measure eye movements relative to the external world.
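The distinction the paper draws between added high-frequency noise and slower error sources maps onto two standard dispersion measures: sample-to-sample RMS (sensitive to fast tracker noise) and standard deviation (which also absorbs drift). A one-dimensional sketch, not the authors' analysis code:

```python
import math
import statistics

def rms_s2s(samples):
    """Root-mean-square of successive differences: a fast-noise measure."""
    diffs = [(b - a) ** 2 for a, b in zip(samples, samples[1:])]
    return math.sqrt(sum(diffs) / len(diffs))

def dispersion_std(samples):
    """Population standard deviation: also captures slow drift."""
    return statistics.pstdev(samples)

# A rapidly alternating signal has high RMS-S2S relative to its SD,
# whereas a slow ramp shows the opposite pattern.
jitter = [0, 1, 0, 1, 0]
ramp = [0, 0.25, 0.5, 0.75, 1.0]
```

Reporting both helps separate what the paper calls tracker-added noise from, say, gradual eye camera motion during an active task.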
Affiliation(s)
- Anca Velisar: The Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
- Natela M Shanidze: The Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
21
Lotze A, Love K, Velisar A, Shanidze NM. A low-cost robotic oculomotor simulator for assessing eye tracking accuracy in health and disease. Behav Res Methods 2024; 56:80-92. [PMID: 35948762 PMCID: PMC9911554 DOI: 10.3758/s13428-022-01938-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/21/2022] [Indexed: 12/24/2022]
Abstract
Eye tracking accuracy is affected in individuals with vision and oculomotor deficits, impeding our ability to answer important scientific and clinical questions about these disorders. It is difficult to disambiguate decreases in eye movement accuracy from changes in the accuracy of the eye tracking itself. We propose the EyeRobot, a low-cost robotic oculomotor simulator capable of emulating healthy and compromised eye movements, to provide ground-truth assessment of eye tracker performance and of how different aspects of oculomotor deficits might affect tracking accuracy. The device can operate with eccentric optical axes or large deviations between the eyes, as well as simulate oculomotor pathologies such as large fixational instabilities. We find that our design can provide accurate eye movements for both central and eccentric viewing conditions, which can be tracked using a head-mounted eye tracker, the Pupil Core. As proof of concept, we examine the effects of eccentric fixation on calibration accuracy and find that Pupil Core's existing eye tracking algorithm is robust to large fixation offsets. In addition, we demonstrate that the EyeRobot can simulate realistic eye movements, such as saccades and smooth pursuit, that can be tracked using video-based eye tracking. These tests suggest that the EyeRobot, an easy-to-build and flexible tool, can aid eye tracking validation and future algorithm development in healthy and compromised vision.
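Large fixational instability of the kind the EyeRobot can emulate is often modeled in software as a random walk of gaze angle. A purely illustrative sketch (the EyeRobot itself is hardware, and every parameter here is invented):

```python
import random

def simulate_fixational_instability(n_samples, step_sd_deg, seed=0):
    """Gaze-angle trace (degrees) for a fixation at (0, 0) with a
    Gaussian random-walk instability superimposed."""
    rng = random.Random(seed)  # seeded so traces are reproducible
    x = y = 0.0
    trace = []
    for _ in range(n_samples):
        x += rng.gauss(0.0, step_sd_deg)
        y += rng.gauss(0.0, step_sd_deg)
        trace.append((x, y))
    return trace

# 500 samples with a 0.05-degree step SD (hypothetical values).
trace = simulate_fixational_instability(n_samples=500, step_sd_deg=0.05)
```

Feeding such a trace to a physical or software simulator gives a known ground truth against which a tracker's recovered gaze can be compared.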
Affiliation(s)
- Al Lotze: Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
- Anca Velisar: Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
- Natela M Shanidze: Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA, 94115, USA
22
Rodwell V, Patil M, Kuht HJ, Neuhauss SCF, Norton WHJ, Thomas MG. Zebrafish Optokinetic Reflex: Minimal Reporting Guidelines and Recommendations. BIOLOGY 2023; 13:4. [PMID: 38275725 PMCID: PMC10813647 DOI: 10.3390/biology13010004] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2023] [Revised: 12/14/2023] [Accepted: 12/18/2023] [Indexed: 01/27/2024]
Abstract
Optokinetic reflex (OKR) assays in zebrafish models are a valuable tool for studying a diverse range of ophthalmological and neurological conditions. Despite its increasing popularity in recent years, there are no clear reporting guidelines for the assay. Following reporting guidelines in research enhances reproducibility, reduces bias, and mitigates underreporting and poor methodologies in published works. To better understand optimal reporting standards for an OKR assay in zebrafish, we performed a systematic literature review exploring the animal, environmental, and technical factors that should be considered. Using search criteria from three online databases, a total of 109 research papers were selected for review. Multiple crucial factors were identified, including larval characteristics, sample size, fixing method, OKR set-up, distance of stimulus, detailed stimulus parameters, eye recording, and eye movement analysis. The outcome of the literature analysis highlighted the insufficient information provided in past research papers and the lack of a systematic way to present the parameters related to each of the experimental factors. To circumvent any future errors and champion robust transparent research, we have created the zebrafish optokinetic (ZOK) reflex minimal reporting guideline.
Affiliation(s)
- Vanessa Rodwell: Ulverscroft Eye Unit, School of Psychology and Vision Sciences, University of Leicester, Leicester LE1 7RH, UK
- Manjiri Patil: Ulverscroft Eye Unit, School of Psychology and Vision Sciences, University of Leicester, Leicester LE1 7RH, UK
- Helen J. Kuht: Ulverscroft Eye Unit, School of Psychology and Vision Sciences, University of Leicester, Leicester LE1 7RH, UK
- William H. J. Norton: Department of Genetics and Genome Biology, University of Leicester, Leicester LE1 7RH, UK
- Mervyn G. Thomas: Ulverscroft Eye Unit, School of Psychology and Vision Sciences, University of Leicester, Leicester LE1 7RH, UK; Department of Ophthalmology, University Hospitals of Leicester NHS Trust, Leicester Royal Infirmary, Leicester LE1 5WW, UK; Department of Clinical Genetics, University Hospitals of Leicester NHS Trust, Leicester Royal Infirmary, Leicester LE1 5WW, UK
23
Franco FO, Oliveira JS, Portolese J, Sumiya FM, Silva AF, Machado-Lima A, Nunes FLS, Brentani H. Computer-aided autism diagnosis using visual attention models and eye-tracking: replication and improvement proposal. BMC Med Inform Decis Mak 2023; 23:285. [PMID: 38098001 PMCID: PMC10722824 DOI: 10.1186/s12911-023-02389-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Accepted: 12/04/2023] [Indexed: 12/17/2023] Open
Abstract
BACKGROUND Autism Spectrum Disorder (ASD) diagnosis can be aided by approaches based on eye-tracking signals. Recently, the feasibility of building Visual Attention Models (VAMs) from features extracted from visual stimuli, and of using them to classify cases and controls, has been demonstrated using Neural Networks and Support Vector Machines. The present work has three aims: 1) to evaluate whether the trained classifier from the previous study was general enough to classify new samples with a new stimulus; 2) to replicate the previous approach by training a new classifier with a new dataset; 3) to evaluate the performance of classifiers obtained with a new classification algorithm (Random Forest) on both the previous and the current datasets. METHODS The previous approach was replicated with a new stimulus and a new sample: 44 participants from the Typical Development group and 33 from the ASD group. After the replication, a Random Forest classifier was tested as a substitute for the Neural Network algorithm. RESULTS The test with the previously trained classifier reached an AUC of 0.56, suggesting that the VAMs require retraining when the stimulus changes. The replication reached an AUC of 0.71, indicating the approach's potential to generalize for aiding ASD diagnosis, as long as the stimulus is similar to the one originally proposed. The results achieved with Random Forest were superior to those achieved with the original approach, with an average AUC of 0.95 for the previous dataset and 0.74 for the new dataset. CONCLUSION In summary, the results of the replication experiment were satisfactory, which suggests the robustness of the approach and the feasibility of VAM-based approaches to aid in ASD diagnosis. The proposed change of method improved the classification performance. Some limitations are discussed and additional studies are encouraged to test other conditions and scenarios.
Affiliation(s)
- Felipe O Franco
- Interunit PostGraduate Program on Bioinformatics, Institute of Mathematics and Statistics (IME), University of São Paulo (USP), 05508-090, São Paulo, SP, Brazil
- Department of Psychiatry, University of São Paulo's School of Medicine (FMUSP), 05403-903, São Paulo, SP, Brazil
- Jessica S Oliveira
- School of Arts, Sciences and Humanities (EACH), University of São Paulo (USP), 03828-000, São Paulo, SP, Brazil
- Joana Portolese
- Department of Psychiatry, University of São Paulo's School of Medicine (FMUSP), 05403-903, São Paulo, SP, Brazil
- Fernando M Sumiya
- Department of Psychiatry, University of São Paulo's School of Medicine (FMUSP), 05403-903, São Paulo, SP, Brazil
- Andréia F Silva
- Department of Psychiatry, University of São Paulo's School of Medicine (FMUSP), 05403-903, São Paulo, SP, Brazil
- Ariane Machado-Lima
- School of Arts, Sciences and Humanities (EACH), University of São Paulo (USP), 03828-000, São Paulo, SP, Brazil
- Fatima L S Nunes
- School of Arts, Sciences and Humanities (EACH), University of São Paulo (USP), 03828-000, São Paulo, SP, Brazil
- Helena Brentani
- Department of Psychiatry, University of São Paulo's School of Medicine (FMUSP), 05403-903, São Paulo, SP, Brazil

24
Jenner LA, Farran EK, Welham A, Jones C, Moss J. The use of eye-tracking technology as a tool to evaluate social cognition in people with an intellectual disability: a systematic review and meta-analysis. J Neurodev Disord 2023; 15:42. [PMID: 38044457 PMCID: PMC10694880 DOI: 10.1186/s11689-023-09506-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/24/2023] [Accepted: 10/25/2023] [Indexed: 12/05/2023]
Abstract
BACKGROUND Relatively little is known about social cognition in people with intellectual disability (ID), and how this may support understanding of co-occurring autism. A limitation of previous research is that traditional social-cognitive tasks place a demand on domain-general cognition and language abilities. These tasks are not suitable for people with ID and lack the sensitivity to detect subtle social-cognitive processes. In autism research, eye-tracking technology has offered an effective method of evaluating social cognition-indicating associations between visual social attention and autism characteristics. The present systematic review synthesised research which has used eye-tracking technology to study social cognition in ID. A meta-analysis was used to explore whether visual attention on socially salient regions (SSRs) of stimuli during these tasks correlated with degree of autism characteristics presented on clinical assessment tools. METHOD Searches were conducted using four databases, research mailing lists, and citation tracking. Following in-depth screening and exclusion of studies with low methodological quality, 49 articles were included in the review. A correlational meta-analysis was run on Pearson's r values obtained from twelve studies, reporting the relationship between visual attention on SSRs and autism characteristics. RESULTS AND CONCLUSIONS Eye-tracking technology was used to measure different social-cognitive abilities across a range of syndromic and non-syndromic ID groups. Restricted scan paths and eye-region avoidance appeared to impact people's ability to make explicit inferences about mental states and social cues. Readiness to attend to social stimuli also varied depending on social content and degree of familiarity. A meta-analysis using a random effects model revealed a significant negative correlation (r = -.28, [95% CI -.47, -.08]) between visual attention on SSRs and autism characteristics across ID groups. 
Together, these findings highlight how eye-tracking can be used as an accessible tool to measure more subtle social-cognitive processes, which appear to reflect variability in observable behaviour. Further research is needed to explore additional covariates (e.g. ID severity, ADHD, anxiety) that may be related to visual attention on SSRs, to different degrees within syndromic and non-syndromic ID groups, in order to determine the specificity of the association with autism characteristics.
Affiliation(s)
- L A Jenner
- School of Psychology, University of Surrey, Surrey, UK
- E K Farran
- School of Psychology, University of Surrey, Surrey, UK
- A Welham
- School of Psychology, University of Birmingham, Birmingham, UK
- C Jones
- School of Psychology, University of Birmingham, Birmingham, UK
- J Moss
- School of Psychology, University of Surrey, Surrey, UK

25
Hooge ITC, Niehorster DC, Hessels RS, Benjamins JS, Nyström M. How robust are wearable eye trackers to slow and fast head and body movements? Behav Res Methods 2023; 55:4128-4142. [PMID: 36326998 PMCID: PMC10700439 DOI: 10.3758/s13428-022-02010-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Accepted: 10/11/2022] [Indexed: 06/16/2023]
Abstract
How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8°. However, most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.
Affiliation(s)
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, and Social, Health and Organisational Psychology, Utrecht University, Utrecht, The Netherlands
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden

26
Fella A, Loizou M, Christoforou C, Papadopoulos TC. Eye Movement Evidence for Simultaneous Cognitive Processing in Reading. CHILDREN (BASEL, SWITZERLAND) 2023; 10:1855. [PMID: 38136057 PMCID: PMC10741511 DOI: 10.3390/children10121855] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/06/2023] [Revised: 11/08/2023] [Accepted: 11/24/2023] [Indexed: 12/24/2023]
Abstract
Measuring simultaneous processing, a reliable predictor of reading development and reading difficulties (RDs), has traditionally involved cognitive tasks that test reaction or response time, which only capture the efficiency at the output processing stage and neglect the internal stages of information processing. However, with eye-tracking methodology, we can reveal the underlying temporal and spatial processes involved in simultaneous processing and investigate whether these processes are equivalent across chronological or reading age groups. This study used eye-tracking to investigate the simultaneous processing abilities of 15 Grade 6 and 15 Grade 3 children with RDs and their chronological-age (CA) controls (15 in each Grade). The Grade 3 typical readers were used as reading-level (RL) controls for the Grade 6 RD group. Participants were required to listen to a question and then point to a picture among four competing illustrations demonstrating the spatial relationship raised in the question. Two eye movements (fixations and saccades) were recorded using the EyeLink 1000 Plus eye-tracking system. The results showed that the Grade 3 RD group produced more and longer fixations than their CA controls, indicating that the pattern of eye movements of young children with RD is deficient compared to that of their typically developing counterparts when processing verbal and spatial stimuli simultaneously. However, no differences were observed between the Grade 6 groups in eye movement measures. Notably, the Grade 6 RD group outperformed the RL-matched Grade 3 group, yielding significantly fewer and shorter fixations. The discussion centers on the role of the eye-tracking method as a reliable means of deciphering the simultaneous cognitive processing involved in learning.
Affiliation(s)
- Argyro Fella
- School of Education, University of Nicosia, Nicosia 1700, Cyprus
- Maria Loizou
- Ministry of Education, Sport, and Youth, Nicosia 1434, Cyprus
- Christoforos Christoforou
- Division of Computer Science, Mathematics and Science, St. John’s University, New York, NY 11439, USA
- Timothy C. Papadopoulos
- Department of Psychology, Center for Applied Neuroscience, University of Cyprus, Nicosia 1678, Cyprus

27
Tahri Sqalli M, Aslonov B, Gafurov M, Mukhammadiev N, Sqalli Houssaini Y. Eye tracking technology in medical practice: a perspective on its diverse applications. FRONTIERS IN MEDICAL TECHNOLOGY 2023; 5:1253001. [PMID: 38045887 PMCID: PMC10691255 DOI: 10.3389/fmedt.2023.1253001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/04/2023] [Accepted: 11/06/2023] [Indexed: 12/05/2023]
Abstract
Eye tracking technology has emerged as a valuable tool in the field of medicine, offering a wide range of applications across various disciplines. This perspective article aims to provide a comprehensive overview of the diverse applications of eye tracking technology in medical practice. By summarizing the latest research findings, this article explores the potential of eye tracking technology in enhancing diagnostic accuracy, assessing and improving medical performance, as well as improving rehabilitation outcomes. Additionally, it highlights the role of eye tracking in neurology, cardiology, pathology, surgery, as well as rehabilitation, offering objective measures for various medical conditions. Furthermore, the article discusses the utility of eye tracking in autism spectrum disorders, attention-deficit/hyperactivity disorder (ADHD), and human-computer interaction in medical simulations and training. Ultimately, this perspective article underscores the transformative impact of eye tracking technology on medical practice and suggests future directions for its continued development and integration.
Affiliation(s)
- Mohammed Tahri Sqalli
- Department of Economics, School of Foreign Services, Georgetown University in Qatar, Doha, Qatar
- Department of Engineering, New York University, Abu Dhabi, United Arab Emirates
- Begali Aslonov
- Department of Control and Computer Engineering, Polytechnic University of Turin, Turin, Italy
- Mukhammadjon Gafurov
- Department of Business Administration, Carnegie Mellon University in Qatar, Doha, Qatar
- Yahya Sqalli Houssaini
- Department of Medicine, Faculty of Medicine and Pharmacy, Mohammed V University, Rabat, Morocco

28
Le Floch A, Ropars G. Hebbian Control of Fixations in a Dyslexic Reader: A Case Report. Brain Sci 2023; 13:1478. [PMID: 37891845 PMCID: PMC10605338 DOI: 10.3390/brainsci13101478] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/28/2023] [Revised: 09/23/2023] [Accepted: 10/15/2023] [Indexed: 10/29/2023]
Abstract
When reading, dyslexic readers exhibit more and longer fixations than normal readers. However, there is no significant difference when dyslexic and control readers perform only visual tasks on a string of letters, showing the importance of cognitive processes in reading. This linguistic and cognitive processing requirement in reading is often perturbed for dyslexic readers by perceived additional letters and mirror images of words superposed on the primary images in the primary cortex, inducing internal visual crowding. Here, we show that while for a normal reader the number and duration of fixations remain invariant whatever the nature of the lighting, the excess of fixations and the total reading duration can be controlled for a dyslexic reader by using Hebbian mechanisms to erase the extra images under optimized pulse-width lighting. In this case, the number of fixations can be reduced by a factor of about 1.8, recovering near-normal reading.
Affiliation(s)
- Albert Le Floch
- Laser Physics Laboratory, University of Rennes, 35042 Rennes CEDEX, France
- Quantum Electronics and Chiralities Laboratory, 20 Square Marcel Bouget, 35700 Rennes, France
- Guy Ropars
- Laser Physics Laboratory, University of Rennes, 35042 Rennes CEDEX, France
- Unité de Formation et de Recherche Sciences et Propriétés de la Matière, University of Rennes, 35042 Rennes CEDEX, France

29
Chana K, Mikuni J, Schnebel A, Leder H. Reading in the city: mobile eye-tracking and evaluation of text in an everyday setting. Front Psychol 2023; 14:1205913. [PMID: 37928598 PMCID: PMC10622808 DOI: 10.3389/fpsyg.2023.1205913] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/14/2023] [Accepted: 08/21/2023] [Indexed: 11/07/2023]
Abstract
Reading is often regarded as a mundane aspect of everyday life. However, little is known about natural reading experiences in daily activities. To fill this gap, this study presents two field studies (N = 39 and 26, respectively), in which we describe how people explore visual environments and divide their attention toward text elements in highly ecological settings, i.e., urban street environments, using mobile eye-tracking glasses. Further, attention toward the text elements (i.e., shop signs), as well as their memorability, measured via a follow-up recognition test, were analysed in relation to their aesthetic quality, which is assumed to be key for attracting visual attention and memorability. Our results revealed that, within these urban streets, text elements were looked at most, and looking behaviour was strongly directed, especially toward shop signs, across both street contexts; however, aesthetic values were not correlated either with the most looked at signs or with the viewing time for the signs. Aesthetic ratings did, however, have an effect on memorability, with signs rated higher being better recognised. The results are discussed in terms of aesthetic reading experiences and implications for future field studies.
Affiliation(s)
- Kirren Chana
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Department of Foreign Languages and Literatures, University of Verona, Verona, Italy
- Jan Mikuni
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Alina Schnebel
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Helmut Leder
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria

30
Misthos LM, Krassanakis V, Merlemis N, Kesidis AL. Modeling the Visual Landscape: A Review on Approaches, Methods and Techniques. SENSORS (BASEL, SWITZERLAND) 2023; 23:8135. [PMID: 37836966 PMCID: PMC10574952 DOI: 10.3390/s23198135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/30/2023] [Revised: 08/14/2023] [Accepted: 09/21/2023] [Indexed: 10/15/2023]
Abstract
Modeling the perception and evaluation of landscapes from the human perspective is a desirable goal for several scientific domains and applications. Human vision is the dominant sense, and human eyes are the sensors for apperceiving the environmental stimuli of our surroundings. Therefore, exploring the experimental recording and measurement of the visual landscape can reveal crucial aspects of human visual perception responses while viewing natural or man-made landscapes. Landscape evaluation (or assessment) is another dimension that refers mainly to preferences for the visual landscape, involving human cognition as well, in ways that are often unpredictable. Yet, landscape can be approached from both egocentric (i.e., human view) and exocentric (i.e., bird's eye view) perspectives. The overarching approach of this review article lies in systematically presenting the different ways of modeling and quantifying the two 'modalities' of human perception and evaluation, under the two geometric perspectives, suggesting integrative approaches to these two 'diverging' dualities. To this end, several pertinent traditions/approaches, sensor-based experimental methods and techniques (e.g., eye tracking, fMRI, and EEG), and metrics are adduced and described. Essentially, this review article acts as a 'guide-map' for delineating the different activities related to landscape experience and/or management, and the valid or potentially suitable types of stimuli, sensor techniques, and metrics for each activity. Throughout our work, two main research directions are identified: (1) one that attempts to transfer the visual landscape experience/management from one perspective to the other (and vice versa); (2) another that aims to anticipate the visual perception of different landscapes and establish connections between perceptual processes and landscape preferences. As it appears, research in the field is rapidly growing.
In our opinion, it can be greatly advanced and enriched using integrative, interdisciplinary approaches in order to better understand the concepts and the mechanisms by which the visual landscape, as a complex set of stimuli, influences visual perception, potentially leading to more elaborate outcomes such as the anticipation of landscape preferences. As an effect, such approaches can support a rigorous, evidence-based, and socially just framework towards landscape management, protection, and decision making, based on a wide spectrum of well-suited and advanced sensor-based technologies.
Affiliation(s)
- Loukas-Moysis Misthos
- Department of Surveying and Geoinformatics Engineering, University of West Attica, GR-12243 Athens, Greece
- Department of Public and One Health, University of Thessaly, GR-43100 Karditsa, Greece
- Vassilios Krassanakis
- Department of Surveying and Geoinformatics Engineering, University of West Attica, GR-12243 Athens, Greece
- Nikolaos Merlemis
- Department of Surveying and Geoinformatics Engineering, University of West Attica, GR-12243 Athens, Greece
- Anastasios L. Kesidis
- Department of Surveying and Geoinformatics Engineering, University of West Attica, GR-12243 Athens, Greece

31
Geangu E, Smith WAP, Mason HT, Martinez-Cedillo AP, Hunter D, Knight MI, Liang H, del Carmen Garcia de Soria Bazan M, Tse ZTH, Rowland T, Corpuz D, Hunter J, Singh N, Vuong QC, Abdelgayed MRS, Mullineaux DR, Smith S, Muller BR. EgoActive: Integrated Wireless Wearable Sensors for Capturing Infant Egocentric Auditory-Visual Statistics and Autonomic Nervous System Function 'in the Wild'. SENSORS (BASEL, SWITZERLAND) 2023; 23:7930. [PMID: 37765987 PMCID: PMC10534696 DOI: 10.3390/s23187930] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 07/28/2023] [Revised: 08/25/2023] [Accepted: 09/11/2023] [Indexed: 09/29/2023]
Abstract
There have been sustained efforts toward using naturalistic methods in developmental science to measure infant behaviors in the real world from an egocentric perspective because statistical regularities in the environment can shape and be shaped by the developing infant. However, there is no user-friendly and unobtrusive technology to densely and reliably sample life in the wild. To address this gap, we present the design, implementation and validation of the EgoActive platform, which addresses limitations of existing wearable technologies for developmental research. EgoActive records the active infants' egocentric perspective of the world via a miniature wireless head-mounted camera concurrently with their physiological responses to this input via a lightweight, wireless ECG/acceleration sensor. We also provide software tools to facilitate data analyses. Our validation studies showed that the cameras and body sensors performed well. Families also reported that the platform was comfortable, easy to use and operate, and did not interfere with daily activities. The synchronized multimodal data from the EgoActive platform can help tease apart complex processes that are important for child development to further our understanding of areas ranging from executive function to emotion processing and social learning.
Affiliation(s)
- Elena Geangu
- Psychology Department, University of York, York YO10 5DD, UK
- William A. P. Smith
- Department of Computer Science, University of York, York YO10 5DD, UK
- Harry T. Mason
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- David Hunter
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- Marina I. Knight
- Department of Mathematics, University of York, York YO10 5DD, UK
- Haipeng Liang
- School of Engineering and Materials Science, Queen Mary University of London, London E1 2AT, UK
- Zion Tsz Ho Tse
- School of Engineering and Materials Science, Queen Mary University of London, London E1 2AT, UK
- Thomas Rowland
- Protolabs, Halesfield 8, Telford TF7 4QN, UK
- Dom Corpuz
- Protolabs, Halesfield 8, Telford TF7 4QN, UK
- Josh Hunter
- Department of Computer Science, University of York, York YO10 5DD, UK
- Nishant Singh
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- Quoc C. Vuong
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Mona Ragab Sayed Abdelgayed
- Department of Computer Science, University of York, York YO10 5DD, UK
- David R. Mullineaux
- Department of Mathematics, University of York, York YO10 5DD, UK
- Stephen Smith
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- Bruce R. Muller
- Department of Computer Science, University of York, York YO10 5DD, UK

32
Darici D, Reissner C, Missler M. Webcam-based eye-tracking to measure visual expertise of medical students during online histology training. GMS JOURNAL FOR MEDICAL EDUCATION 2023; 40:Doc60. [PMID: 37881524 PMCID: PMC10594038 DOI: 10.3205/zma001642] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 09/30/2022] [Revised: 06/06/2023] [Accepted: 07/07/2023] [Indexed: 10/27/2023]
Abstract
Objectives Visual expertise is essential for image-based tasks that rely on visual cues, such as in radiology or histology. Studies suggest that eye movements are related to visual expertise and can be measured by near-infrared eye-tracking. With the popularity of device-embedded webcam eye-tracking technology, cost-effective use in educational contexts has recently become feasible. This study investigated the feasibility of such methodology in a curricular online-only histology course during the 2021 summer term. Methods At two timepoints (t1 and t2), third-semester medical students were asked to diagnose a series of histological slides while their eye movements were recorded. Students' eye metrics, performance and behavioral measures were analyzed using variance analyses and multiple regression models. Results First, webcam eye-tracking provided eye movement data of satisfactory quality (mean accuracy = 115.7 px ± 31.1). Second, the eye movement metrics reflected the students' proficiency in finding relevant image sections (fixation count on relevant areas = 6.96 ± 1.56 vs. irrelevant areas = 4.50 ± 1.25). Third, students' eye movement metrics successfully predicted their performance (R2adj = 0.39, p < 0.001). Conclusion This study supports the use of webcam eye-tracking, expanding the range of educational tools available in the (digital) classroom. As the students' interest in using webcam eye-tracking was high, possible areas of implementation are discussed.
Affiliation(s)
- Dogus Darici
- Westfälische-Wilhelms-University, Institute of Anatomy and Neurobiology, Münster, Germany
- Carsten Reissner
- Westfälische-Wilhelms-University, Institute of Anatomy and Neurobiology, Münster, Germany
- Markus Missler
- Westfälische-Wilhelms-University, Institute of Anatomy and Neurobiology, Münster, Germany

33
Zafar A, Martin Calderon C, Yeboah AM, Dalton K, Irving E, Niechwiej-Szwedo E. Investigation of Camera-Free Eye-Tracking Glasses Compared to a Video-Based System. SENSORS (BASEL, SWITZERLAND) 2023; 23:7753. [PMID: 37765810 PMCID: PMC10535734 DOI: 10.3390/s23187753] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/15/2023] [Revised: 09/03/2023] [Accepted: 09/06/2023] [Indexed: 09/29/2023]
Abstract
Technological advances in eye-tracking have resulted in lightweight, portable solutions that are capable of capturing eye movements beyond laboratory settings. Eye-tracking devices have typically relied on heavier, video-based systems to detect pupil and corneal reflections. Advances in mobile eye-tracking technology could facilitate research and its application in ecological settings, allowing more traditional laboratory research methods to be modified and transferred to real-world scenarios. One recent technology, the AdHawk MindLink, introduced a novel camera-free system embedded in typical eyeglass frames. This paper evaluates the AdHawk MindLink by comparing its eye-tracking recordings with a research "gold standard", the EyeLink II. By concurrently capturing data from both eyes, we compare the capability of each eye tracker to quantify metrics from fixation, saccade, and smooth pursuit tasks (typical elements in eye movement research) across a sample of 13 adults. The MindLink system was capable of capturing fixation stability within a radius of less than 0.5°, estimating horizontal saccade amplitudes with an accuracy of 0.04° ± 2.3°, vertical saccade amplitudes with an accuracy of 0.32° ± 2.3°, and smooth pursuit speeds with an accuracy of 0.5 to 3°/s, depending on the pursuit speed. While the performance of the MindLink system in measuring fixation stability, saccade amplitude, and smooth pursuit eye movements was slightly inferior to that of the video-based system, MindLink provides sufficient gaze-tracking capabilities for dynamic settings and experiments.
Affiliation(s)
- Abdullah Zafar
- Department of Kinesiology & Health Sciences, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Claudia Martin Calderon
- Department of Kinesiology & Health Sciences, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Anne Marie Yeboah
- School of Optometry & Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Kristine Dalton
- School of Optometry & Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Elizabeth Irving
- School of Optometry & Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Ewa Niechwiej-Szwedo
- Department of Kinesiology & Health Sciences, University of Waterloo, Waterloo, ON N2L 3G1, Canada

34
Manley CE, Walter K, Micheletti S, Tietjen M, Cantillon E, Fazzi EM, Bex PJ, Merabet LB. Object identification in cerebral visual impairment characterized by gaze behavior and image saliency analysis. Brain Dev 2023; 45:432-444. [PMID: 37188548 PMCID: PMC10524860 DOI: 10.1016/j.braindev.2023.05.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 01/31/2023] [Revised: 04/26/2023] [Accepted: 05/01/2023] [Indexed: 05/17/2023]
Abstract
Individuals with cerebral visual impairment (CVI) have difficulties identifying common objects, especially when presented as cartoons or abstract images. In this study, participants were shown a series of images of ten common objects, each from five possible categories ranging from abstract black & white line drawings to color photographs. Fifty individuals with CVI and 50 neurotypical controls verbally identified each object and success rates and reaction times were collected. Visual gaze behavior was recorded using an eye tracker to quantify the extent of visual search area explored and number of fixations. A receiver operating characteristic (ROC) analysis was also carried out to compare the degree of alignment between the distribution of individual eye gaze patterns and image saliency features computed by the graph-based visual saliency (GBVS) model. Compared to controls, CVI participants showed significantly lower success rates and longer reaction times when identifying objects. In the CVI group, success rate improved moving from abstract black & white images to color photographs, suggesting that object form (as defined by outlines and contours) and color are important cues for correct identification. Eye tracking data revealed that the CVI group showed significantly greater visual search areas and number of fixations per image, and the distribution of eye gaze patterns in the CVI group was less aligned with the high saliency features of the image compared to controls. These results have important implications in helping to understand the complex profile of visual perceptual difficulties associated with CVI.
Collapse
Affiliation(s)
- Claire E Manley
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, USA
| | - Kerri Walter
- Translational Vision Lab, Department of Psychology, Northeastern University, Boston, MA, USA
| | - Serena Micheletti
- Unit of Child Neurology and Psychiatry, ASST Spedali Civili of Brescia, Brescia, Italy; Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy
| | - Matthew Tietjen
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, USA
| | - Emily Cantillon
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, USA
| | - Elisa M Fazzi
- Unit of Child Neurology and Psychiatry, ASST Spedali Civili of Brescia, Brescia, Italy; Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy
| | - Peter J Bex
- Translational Vision Lab, Department of Psychology, Northeastern University, Boston, MA, USA
| | - Lotfi B Merabet
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, USA.
| |
Collapse
|
35
|
Fernandes F, Barbalho I, Bispo Júnior A, Alves L, Nagem D, Lins H, Arrais Júnior E, Coutinho KD, Morais AHF, Santos JPQ, Machado GM, Henriques J, Teixeira C, Dourado Júnior MET, Lindquist ARR, Valentim RAM. Digital Alternative Communication for Individuals with Amyotrophic Lateral Sclerosis: What We Have. J Clin Med 2023; 12:5235. [PMID: 37629277 PMCID: PMC10455505 DOI: 10.3390/jcm12165235] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Revised: 08/05/2023] [Accepted: 08/09/2023] [Indexed: 08/27/2023] Open
Abstract
Amyotrophic Lateral Sclerosis is a disease that compromises the motor system and the functional abilities of the person in an irreversible way, causing the progressive loss of the ability to communicate. Tools based on Augmentative and Alternative Communication are essential for promoting autonomy and improving communication, quality of life, and survival. This Systematic Literature Review aimed to provide evidence on eye-image-based Human-Computer Interaction approaches for the Augmentative and Alternative Communication of people with Amyotrophic Lateral Sclerosis. The Systematic Literature Review was conducted and guided following a protocol consisting of search questions, inclusion and exclusion criteria, and quality assessment, to select primary studies published between 2010 and 2021 in six repositories: Science Direct, Web of Science, Springer, IEEE Xplore, ACM Digital Library, and PubMed. After the screening, 25 primary studies were evaluated. These studies showcased four low-cost, non-invasive Human-Computer Interaction strategies employed for Augmentative and Alternative Communication in people with Amyotrophic Lateral Sclerosis. The strategies included Eye-Gaze, which featured in 36% of the studies; Eye-Blink and Eye-Tracking, each accounting for 28% of the approaches; and the Hybrid strategy, employed in 8% of the studies. For these approaches, several computational techniques were identified. For a better understanding, a workflow containing the development phases and the respective methods used by each strategy was generated. The results indicate the possibility and feasibility of developing Human-Computer Interaction resources based on eye images for Augmentative and Alternative Communication in a control group. The absence of experimental testing in people with Amyotrophic Lateral Sclerosis reiterates the challenges related to the scalability, efficiency, and usability of these technologies for people with the disease.
Although challenges still exist, the findings represent important advances in the fields of health sciences and technology, promoting a promising future with possibilities for a better quality of life.
Collapse
Affiliation(s)
- Felipe Fernandes
- Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil; (I.B.); (A.B.J.); (L.A.); (D.N.); (H.L.); (E.A.J.); (K.D.C.); (M.E.T.D.J.); (A.R.R.L.); (R.A.M.V.)
| | - Ingridy Barbalho
- Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil; (I.B.); (A.B.J.); (L.A.); (D.N.); (H.L.); (E.A.J.); (K.D.C.); (M.E.T.D.J.); (A.R.R.L.); (R.A.M.V.)
| | - Arnaldo Bispo Júnior
- Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil; (I.B.); (A.B.J.); (L.A.); (D.N.); (H.L.); (E.A.J.); (K.D.C.); (M.E.T.D.J.); (A.R.R.L.); (R.A.M.V.)
| | - Luca Alves
- Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil; (I.B.); (A.B.J.); (L.A.); (D.N.); (H.L.); (E.A.J.); (K.D.C.); (M.E.T.D.J.); (A.R.R.L.); (R.A.M.V.)
| | - Danilo Nagem
- Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil; (I.B.); (A.B.J.); (L.A.); (D.N.); (H.L.); (E.A.J.); (K.D.C.); (M.E.T.D.J.); (A.R.R.L.); (R.A.M.V.)
| | - Hertz Lins
- Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil; (I.B.); (A.B.J.); (L.A.); (D.N.); (H.L.); (E.A.J.); (K.D.C.); (M.E.T.D.J.); (A.R.R.L.); (R.A.M.V.)
| | - Ernano Arrais Júnior
- Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil; (I.B.); (A.B.J.); (L.A.); (D.N.); (H.L.); (E.A.J.); (K.D.C.); (M.E.T.D.J.); (A.R.R.L.); (R.A.M.V.)
| | - Karilany D. Coutinho
- Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil; (I.B.); (A.B.J.); (L.A.); (D.N.); (H.L.); (E.A.J.); (K.D.C.); (M.E.T.D.J.); (A.R.R.L.); (R.A.M.V.)
| | - Antônio H. F. Morais
- Advanced Nucleus of Technological Innovation (NAVI), Federal Institute of Rio Grande do Norte (IFRN), Natal 59015-000, Brazil; (A.H.F.M.); (J.P.Q.S.)
| | - João Paulo Q. Santos
- Advanced Nucleus of Technological Innovation (NAVI), Federal Institute of Rio Grande do Norte (IFRN), Natal 59015-000, Brazil; (A.H.F.M.); (J.P.Q.S.)
| | | | - Jorge Henriques
- Department of Informatics Engineering, Center for Informatics and Systems of the University of Coimbra, Universidade de Coimbra, 3030-788 Coimbra, Portugal; (J.H.); (C.T.)
| | - César Teixeira
- Department of Informatics Engineering, Center for Informatics and Systems of the University of Coimbra, Universidade de Coimbra, 3030-788 Coimbra, Portugal; (J.H.); (C.T.)
| | - Mário E. T. Dourado Júnior
- Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil; (I.B.); (A.B.J.); (L.A.); (D.N.); (H.L.); (E.A.J.); (K.D.C.); (M.E.T.D.J.); (A.R.R.L.); (R.A.M.V.)
- Department of Integrated Medicine, Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil
| | - Ana R. R. Lindquist
- Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil; (I.B.); (A.B.J.); (L.A.); (D.N.); (H.L.); (E.A.J.); (K.D.C.); (M.E.T.D.J.); (A.R.R.L.); (R.A.M.V.)
| | - Ricardo A. M. Valentim
- Laboratory of Technological Innovation in Health (LAIS), Federal University of Rio Grande do Norte (UFRN), Natal 59010-090, Brazil; (I.B.); (A.B.J.); (L.A.); (D.N.); (H.L.); (E.A.J.); (K.D.C.); (M.E.T.D.J.); (A.R.R.L.); (R.A.M.V.)
| |
Collapse
|
36
|
Nebe S, Reutter M, Baker DH, Bölte J, Domes G, Gamer M, Gärtner A, Gießing C, Gurr C, Hilger K, Jawinski P, Kulke L, Lischke A, Markett S, Meier M, Merz CJ, Popov T, Puhlmann LMC, Quintana DS, Schäfer T, Schubert AL, Sperl MFJ, Vehlen A, Lonsdorf TB, Feld GB. Enhancing precision in human neuroscience. eLife 2023; 12:e85980. [PMID: 37555830 PMCID: PMC10411974 DOI: 10.7554/elife.85980] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Accepted: 07/23/2023] [Indexed: 08/10/2023] Open
Abstract
Human neuroscience has always been pushing the boundary of what is measurable. During the last decade, concerns about statistical power and replicability - in science in general, but also specifically in human neuroscience - have fueled an extensive debate. One important insight from this discourse is the need for larger samples, which naturally increases statistical power. An alternative is to increase the precision of measurements, which is the focus of this review. This option is often overlooked, even though statistical power benefits from increasing precision as much as from increasing sample size. Nonetheless, precision has always been at the heart of good scientific practice in human neuroscience, with researchers relying on lab traditions or rules of thumb to ensure sufficient precision for their studies. In this review, we encourage a more systematic approach to precision. We start by introducing measurement precision and its importance for well-powered studies in human neuroscience. Then, determinants for precision in a range of neuroscientific methods (MRI, M/EEG, EDA, Eye-Tracking, and Endocrinology) are elaborated. We end by discussing how a more systematic evaluation of precision and the application of respective insights can lead to an increase in reproducibility in human neuroscience.
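The review's central claim, that statistical power benefits from precision as much as from sample size, follows from the textbook decomposition of observed variance into true between-subject variance plus measurement error variance. The sketch below is a generic illustration of that decomposition, not code from the paper; the `se_of_mean` helper is an assumption:

```python
import math

def se_of_mean(true_sd, error_sd, n):
    """Standard error of a group mean when each measurement is noisy.

    Observed variance = between-subject (true) variance + measurement
    error variance, so shrinking the error term tightens the estimate
    just as recruiting extra participants would.
    """
    return math.sqrt((true_sd ** 2 + error_sd ** 2) / n)
```

For example, when measurement noise matches the true between-subject spread, eliminating it buys the same standard error as doubling the sample: `se_of_mean(1.0, 1.0, 50)` equals `se_of_mean(1.0, 0.0, 25)`.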
Collapse
Affiliation(s)
- Stephan Nebe
- Zurich Center for Neuroeconomics, Department of Economics, University of ZurichZurichSwitzerland
| | - Mario Reutter
- Department of Psychology, Julius-Maximilians-UniversityWürzburgGermany
| | - Daniel H Baker
- Department of Psychology and York Biomedical Research Institute, University of YorkYorkUnited Kingdom
| | - Jens Bölte
- Institute for Psychology, University of Münster, Otto-Creuzfeldt Center for Cognitive and Behavioral NeuroscienceMünsterGermany
| | - Gregor Domes
- Department of Biological and Clinical Psychology, University of TrierTrierGermany
- Institute for Cognitive and Affective NeuroscienceTrierGermany
| | - Matthias Gamer
- Department of Psychology, Julius-Maximilians-UniversityWürzburgGermany
| | - Anne Gärtner
- Faculty of Psychology, Technische Universität DresdenDresdenGermany
| | - Carsten Gießing
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky University of OldenburgOldenburgGermany
| | - Caroline Gurr
- Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University Hospital, Goethe UniversityFrankfurtGermany
- Brain Imaging Center, Goethe UniversityFrankfurtGermany
| | - Kirsten Hilger
- Department of Psychology, Julius-Maximilians-UniversityWürzburgGermany
- Department of Psychology, Psychological Diagnostics and Intervention, Catholic University of Eichstätt-IngolstadtEichstättGermany
| | - Philippe Jawinski
- Department of Psychology, Humboldt-Universität zu BerlinBerlinGermany
| | - Louisa Kulke
- Department of Developmental with Educational Psychology, University of BremenBremenGermany
| | - Alexander Lischke
- Department of Psychology, Medical School HamburgHamburgGermany
- Institute of Clinical Psychology and Psychotherapy, Medical School HamburgHamburgGermany
| | - Sebastian Markett
- Department of Psychology, Humboldt-Universität zu BerlinBerlinGermany
| | - Maria Meier
- Department of Psychology, University of KonstanzKonstanzGermany
- University Psychiatric Hospitals, Child and Adolescent Psychiatric Research Department (UPKKJ), University of BaselBaselSwitzerland
| | - Christian J Merz
- Department of Cognitive Psychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University BochumBochumGermany
| | - Tzvetan Popov
- Department of Psychology, Methods of Plasticity Research, University of ZurichZurichSwitzerland
| | - Lara MC Puhlmann
- Leibniz Institute for Resilience ResearchMainzGermany
- Max Planck Institute for Human Cognitive and Brain SciencesLeipzigGermany
| | - Daniel S Quintana
- Max Planck Institute for Human Cognitive and Brain SciencesLeipzigGermany
- NevSom, Department of Rare Disorders & Disabilities, Oslo University HospitalOsloNorway
- KG Jebsen Centre for Neurodevelopmental Disorders, University of OsloOsloNorway
- Norwegian Centre for Mental Disorders Research (NORMENT), University of OsloOsloNorway
| | - Tim Schäfer
- Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University Hospital, Goethe UniversityFrankfurtGermany
- Brain Imaging Center, Goethe UniversityFrankfurtGermany
| | | | - Matthias FJ Sperl
- Department of Clinical Psychology and Psychotherapy, University of GiessenGiessenGermany
- Center for Mind, Brain and Behavior, Universities of Marburg and GiessenGiessenGermany
| | - Antonia Vehlen
- Department of Biological and Clinical Psychology, University of TrierTrierGermany
| | - Tina B Lonsdorf
- Department of Systems Neuroscience, University Medical Center Hamburg-EppendorfHamburgGermany
- Department of Psychology, Biological Psychology and Cognitive Neuroscience, University of BielefeldBielefeldGermany
| | - Gordon B Feld
- Department of Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg UniversityMannheimGermany
- Department of Psychology, Heidelberg UniversityHeidelbergGermany
- Department of Addiction Behavior and Addiction Medicine, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg UniversityMannheimGermany
- Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg UniversityMannheimGermany
| |
Collapse
|
37
|
Viktorsson C, Valtakari NV, Falck-Ytter T, Hooge ITC, Rudling M, Hessels RS. Stable eye versus mouth preference in a live speech-processing task. Sci Rep 2023; 13:12878. [PMID: 37553414 PMCID: PMC10409748 DOI: 10.1038/s41598-023-40017-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 08/03/2023] [Indexed: 08/10/2023] Open
Abstract
Looking at the mouth region is thought to be a useful strategy for speech-perception tasks. The tendency to look at the eyes versus the mouth of another person during speech processing has thus far mainly been studied using screen-based paradigms. In this study, we estimated the eye-mouth-index (EMI) of 38 adult participants in a live setting. Participants were seated across the table from an experimenter, who read sentences out loud for the participant to remember in both a familiar (English) and unfamiliar (Finnish) language. No statistically significant difference in the EMI between the familiar and the unfamiliar languages was observed. Total relative looking time at the mouth also did not predict the number of correctly identified sentences. Instead, we found that the EMI was higher during an instruction phase than during the speech-processing task. Moreover, we observed high intra-individual correlations in the EMI across the languages and different phases of the experiment. We conclude that there are stable individual differences in looking at the eyes versus the mouth of another person. Furthermore, this behavior appears to be flexible and dependent on the requirements of the situation (speech processing or not).
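The abstract does not spell out how the eye-mouth-index (EMI) is computed; a common convention, assumed here for illustration only, is the share of combined eye-plus-mouth dwell time spent on the eyes:

```python
def eye_mouth_index(eye_dwell_ms, mouth_dwell_ms):
    """Eye-mouth index (EMI) as a dwell-time ratio (an assumed
    convention): 1.0 = looking only at the eyes, 0.0 = only at the
    mouth, 0.5 = equal dwell time on both regions."""
    total = eye_dwell_ms + mouth_dwell_ms
    if total == 0:
        raise ValueError("no dwell time recorded on either region")
    return eye_dwell_ms / total
```

On this definition, the stable individual differences reported above would appear as each participant's EMI staying close to their own baseline across languages and task phases.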
Collapse
Affiliation(s)
- Charlotte Viktorsson
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden.
| | - Niilo V Valtakari
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| | - Terje Falck-Ytter
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Center of Neurodevelopmental Disorders (KIND), Division of Neuropsychiatry, Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
| | - Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| | - Maja Rudling
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
| | - Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| |
Collapse
|
38
|
Körner HM, Faul F, Nuthmann A. Revisiting the role of attention in the "weapon focus effect": Do weapons draw gaze away from the perpetrator under naturalistic viewing conditions? Atten Percept Psychophys 2023; 85:1868-1887. [PMID: 36725782 PMCID: PMC10545598 DOI: 10.3758/s13414-022-02643-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/19/2022] [Indexed: 02/03/2023]
Abstract
The presence of a weapon in a scene has been found to attract observers' attention and to impair their memory of the person holding the weapon. Here, we examined the role of attention in this weapon focus effect (WFE) under different viewing conditions. German participants viewed stimuli in which a man committed a robbery while holding a gun or a cell phone. The stimuli were based on material used in a recent U.S. study reporting large memory effects. Recording eye movements allowed us to test whether observers' attention in the gun condition shifted away from the perpetrator towards the gun, compared with the phone condition. When using videos (Experiment 1), weapon presence did not appear to modulate the viewing time for the perpetrator, whereas the evidence concerning the critical object remained inconclusive. When using slide shows (Experiment 2), the gun attracted more gaze than the phone, replicating previous research. However, the attentional shift towards the weapon did not come at a cost of viewing time on the perpetrator. In both experiments, observers focused their attention predominantly on the depicted people and much less on the gun or phone. The presence of a weapon did not cause participants to recall fewer details about the perpetrator's appearance in either experiment. This null effect was replicated in an online study using the original videos and testing more participants. The results seem at odds with the attention-shift explanation of the WFE. Moreover, the results indicate that the WFE is not a universal phenomenon.
Collapse
Affiliation(s)
- Hannes M Körner
- Institute of Psychology, Kiel University, Olshausenstr. 62, 24118, Kiel, Germany.
| | - Franz Faul
- Institute of Psychology, Kiel University, Olshausenstr. 62, 24118, Kiel, Germany
| | - Antje Nuthmann
- Institute of Psychology, Kiel University, Olshausenstr. 62, 24118, Kiel, Germany
| |
Collapse
|
39
|
Arsiwala-Scheppach LT, Castner N, Rohrer C, Mertens S, Kasneci E, Cejudo Grano de Oro JE, Krois J, Schwendicke F. Gaze patterns of dentists while evaluating bitewing radiographs. J Dent 2023; 135:104585. [PMID: 37301462 DOI: 10.1016/j.jdent.2023.104585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 05/15/2023] [Accepted: 06/07/2023] [Indexed: 06/12/2023] Open
Abstract
OBJECTIVES Understanding dentists' gaze patterns on radiographs may allow us to unravel sources of their limited accuracy and develop strategies to mitigate them. We conducted an eye tracking experiment to characterize dentists' scanpaths and thus their gaze patterns when assessing bitewing radiographs to detect primary proximal carious lesions. METHODS 22 dentists assessed a median of nine bitewing images each, resulting in 170 datasets after excluding data with poor quality of gaze recording. Fixation was defined as an area of attentional focus related to visual stimuli. We calculated time to first fixation, fixation count, average fixation duration, and fixation frequency. Analyses were performed for the entire image and stratified by (1) presence of carious lesions and/or restorations and (2) lesion depth (E1/2: outer/inner enamel; D1-3: outer-inner third of dentin). We also examined the transitional nature of the dentists' gaze. RESULTS Dentists had more fixations on teeth with lesions and/or restorations (median=138 [interquartile range=87, 204]) than teeth without them (32 [15, 66]), p<0.001. Notably, teeth with lesions had longer fixation durations (407 milliseconds [242, 591]) than those with restorations (289 milliseconds [216, 337]), p<0.001. Time to first fixation was longer for teeth with E1 lesions (17,128 milliseconds [8813, 21,540]) than lesions of other depths (p = 0.049). The highest number of fixations was on teeth with D2 lesions (43 [20, 51]) and the lowest on teeth with E1 lesions (5 [1, 37]), p<0.001. Generally, a systematic tooth-by-tooth gaze pattern was observed. CONCLUSIONS As hypothesized, while visually inspecting bitewing radiographic images, dentists employed a heightened focus on certain image features/areas, relevant to the assigned task. Also, they generally examined the entire image in a systematic tooth-by-tooth pattern.
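The four measures named in the methods can all be derived from a per-trial list of fixations. The sketch below is an illustration, not the study's analysis code; the tuple layout and the `aoi_fixation_metrics` name are assumptions:

```python
def aoi_fixation_metrics(fixations, aoi, trial_ms):
    """Summarize gaze on one area of interest (AOI).

    fixations: list of (onset_ms, duration_ms, aoi_label) in trial order.
    Returns time to first fixation (None if the AOI was never fixated),
    fixation count, average fixation duration (ms), and fixation
    frequency (fixations per second of trial time).
    """
    hits = [(onset, dur) for onset, dur, label in fixations if label == aoi]
    if not hits:
        return {"ttff_ms": None, "count": 0, "mean_dur_ms": 0.0, "freq_hz": 0.0}
    return {
        "ttff_ms": hits[0][0],
        "count": len(hits),
        "mean_dur_ms": sum(dur for _, dur in hits) / len(hits),
        "freq_hz": len(hits) / (trial_ms / 1000.0),
    }
```

Stratifying such per-AOI summaries by lesion depth or restoration status is then a matter of grouping the AOI labels, as done in the study's analyses.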
Collapse
Affiliation(s)
- Lubaina T Arsiwala-Scheppach
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, Switzerland.
| | - Nora Castner
- Department of Computer Science, University of Tuebingen, Tuebingen, Germany
| | - Csaba Rohrer
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany
| | - Sarah Mertens
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany
| | - Enkelejda Kasneci
- Department of Computer Science, Technical University of Munich, Germany
| | - Jose Eduardo Cejudo Grano de Oro
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany
| | - Joachim Krois
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, Switzerland
| | - Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, Switzerland
| |
Collapse
|
40
|
Linka M, Sensoy Ö, Karimpur H, Schwarzer G, de Haas B. Free viewing biases for complex scenes in preschoolers and adults. Sci Rep 2023; 13:11803. [PMID: 37479760 PMCID: PMC10362043 DOI: 10.1038/s41598-023-38854-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 07/16/2023] [Indexed: 07/23/2023] Open
Abstract
Adult gaze behaviour towards naturalistic scenes is highly biased towards semantic object classes. Little is known about the ontogenetic development of these biases, nor about group-level differences in gaze behaviour between adults and preschoolers. Here, we let preschoolers (n = 34, age 5 years) and adults (n = 42, age 18-59 years) freely view 40 complex scenes containing objects with different semantic attributes to compare their fixation behaviour. Results show that preschool children allocate a significantly smaller proportion of dwell time and first fixations on Text and instead fixate Faces, Touched objects, Hands and Bodies more. A predictive model of object fixations controlling for a range of potential confounds suggests that most of these differences can be explained by drastically reduced text salience in preschoolers and that this effect is independent of low-level salience. These findings are in line with a developmental attentional antagonism between text and body parts (touched objects and hands in particular), which resonates with recent findings regarding 'cortical recycling'. We discuss this and other potential mechanisms driving salience differences between children and adults.
Collapse
Affiliation(s)
- Marcel Linka
- Department of Experimental Psychology, Justus Liebig University Giessen, 35394, Giessen, Germany.
| | - Özlem Sensoy
- Department of Developmental Psychology, Justus Liebig University Giessen, 35394, Giessen, Germany
| | - Harun Karimpur
- Department of Experimental Psychology, Justus Liebig University Giessen, 35394, Giessen, Germany
| | - Gudrun Schwarzer
- Department of Developmental Psychology, Justus Liebig University Giessen, 35394, Giessen, Germany
| | - Benjamin de Haas
- Department of Experimental Psychology, Justus Liebig University Giessen, 35394, Giessen, Germany
| |
Collapse
|
41
|
Wolf A, Tripanpitak K, Umeda S, Otake-Matsuura M. Eye-tracking paradigms for the assessment of mild cognitive impairment: a systematic review. Front Psychol 2023; 14:1197567. [PMID: 37546488 PMCID: PMC10399700 DOI: 10.3389/fpsyg.2023.1197567] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Accepted: 06/19/2023] [Indexed: 08/08/2023] Open
Abstract
Mild cognitive impairment (MCI), representing the 'transitional zone' between normal cognition and dementia, has become a novel topic in clinical research. Although early detection is crucial, it remains logistically challenging at the same time. While traditional pen-and-paper tests require in-depth training to ensure standardized administration and accurate interpretation of findings, significant technological advancements are leading to the development of procedures for the early detection of Alzheimer's disease (AD) and facilitating the diagnostic process. Some of the diagnostic protocols, however, show significant limitations that hamper their widespread adoption. Concerns about the social and economic implications of the increasing incidence of AD underline the need for reliable, non-invasive, cost-effective, and timely cognitive scoring methodologies. For instance, modern clinical studies report significant oculomotor impairments among patients with MCI, who perform poorly in visual paired-comparison tasks by allocating fewer attentional resources to novel stimuli. To accelerate the Global Action Plan on the Public Health Response to Dementia 2017-2025, this work provides an overview of research on saccadic and exploratory eye-movement deficits among older adults with MCI. The review protocol was drafted based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Electronic databases were systematically searched to identify peer-reviewed articles published between 2017 and 2022 that examined visual processing in older adults with MCI and reported gaze parameters as potential biomarkers. Moreover, following the contemporary trend for remote healthcare technologies, we reviewed studies that implemented non-commercial eye-tracking instrumentation in order to detect information processing impairments among the MCI population.
Based on the gathered literature, eye-tracking-based paradigms may ameliorate the screening limitations of traditional cognitive assessments and contribute to early AD detection. However, in order to translate the findings pertaining to abnormal gaze behavior into clinical applications, it is imperative to conduct longitudinal investigations in both laboratory-based and ecologically valid settings.
Collapse
Affiliation(s)
- Alexandra Wolf
- Cognitive Behavioral Assistive Technology (CBAT), Goal-Oriented Technology Group, RIKEN Center for Advanced Intelligence Project (AIP), Tokyo, Japan
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
| | - Kornkanok Tripanpitak
- Cognitive Behavioral Assistive Technology (CBAT), Goal-Oriented Technology Group, RIKEN Center for Advanced Intelligence Project (AIP), Tokyo, Japan
| | - Satoshi Umeda
- Department of Psychology, Keio University, Tokyo, Japan
| | - Mihoko Otake-Matsuura
- Cognitive Behavioral Assistive Technology (CBAT), Goal-Oriented Technology Group, RIKEN Center for Advanced Intelligence Project (AIP), Tokyo, Japan
| |
Collapse
|
42
|
Josupeit J. Let's get it started: Eye Tracking in VR with the Pupil Labs Eye Tracking Add-On for the HTC Vive. J Eye Mov Res 2023; 15:10.16910/jemr.15.3.10. [PMID: 39139654 PMCID: PMC11321899 DOI: 10.16910/jemr.15.3.10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/15/2024] Open
Abstract
Combining eye tracking and virtual reality (VR) is a promising approach to tackle various applied research questions. As this approach is relatively new, routines are not established yet and the first steps can be full of potential pitfalls. The present paper gives a practical example to lower the barriers to getting started. More specifically, I focus on an affordable add-on technology, the Pupil Labs eye tracking add-on for the HTC Vive. As add-on technology with all relevant source code available on GitHub, a high degree of freedom in preprocessing, visualizing, and analyzing eye tracking data in VR can be achieved. At the same time, some extra preparatory steps for the setup of hardware and software are necessary. Therefore, specifics of eye tracking in VR from unboxing, software integration, and procedures to analyzing the data and maintaining the hardware will be addressed. The Pupil Labs eye tracking add-on for the HTC Vive represents a highly transparent approach compared to existing alternatives. Characteristics of eye tracking in VR in contrast to other head-mounted and remote eye trackers applied in the physical world will be discussed. In conclusion, the paper contributes to the idea of open science in two ways: First, by making the necessary routines transparent and therefore reproducible. Second, by stressing the benefits of using open source software.
Collapse
|
43
|
Kredel R, Hernandez J, Hossner EJ, Zahno S. Eye-tracking technology and the dynamics of natural gaze behavior in sports: an update 2016-2022. Front Psychol 2023; 14:1130051. [PMID: 37359890 PMCID: PMC10286576 DOI: 10.3389/fpsyg.2023.1130051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 05/16/2023] [Indexed: 06/28/2023] Open
Abstract
Updating and complementing a previous review on eye-tracking technology and the dynamics of natural gaze behavior in sports, this short review focuses on the progress concerning researched sports tasks, applied methods of gaze data collection and analysis as well as derived gaze measures for the time interval of 2016-2022. To that end, a systematic review according to the PRISMA guidelines was conducted, searching Web of Science, PubMed Central, SPORTDiscus, and ScienceDirect for the keywords: eye tracking, gaze behavio*r, eye movement, and visual search. Thirty-one studies were identified for the review. On the one hand, a generally increased research interest and a wider range of researched sports, with a particular increase in studies of officials' gaze behavior, were diagnosed. On the other hand, a general lack of progress concerning sample sizes, amounts of trials, employed eye-tracking technology and gaze analysis procedures must be acknowledged. Nevertheless, first attempts at automated gaze-cue-allocation (GCA) in mobile eye-tracking studies were seen, potentially enhancing objectivity and alleviating the burden of manual workload inherently associated with conventional gaze analyses. Reinforcing the claims of the previous review, this review concludes by describing four distinct technological approaches to automating GCA, some of which are specifically suited to tackle the validity and generalizability issues associated with the current limitations of mobile eye-tracking studies on natural gaze behavior in sports.
Collapse
|
44
|
Park SY, Holmqvist K, Niehorster DC, Huber L, Virányi Z. How to improve data quality in dog eye tracking. Behav Res Methods 2023; 55:1513-1536. [PMID: 35680764 PMCID: PMC10250523 DOI: 10.3758/s13428-022-01788-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/02/2022] [Indexed: 11/08/2022]
Abstract
Pupil-corneal reflection (P-CR) eye tracking has gained a prominent role in studying dog visual cognition, despite methodological challenges that often lead to lower-quality data than when recording from humans. In the current study, we investigated if and how the morphology of dogs might interfere with tracking by P-CR systems, and to what extent such interference, possibly in combination with dog-unique eye-movement characteristics, may undermine data quality and affect eye-movement classification when processed through algorithms. To this end, we conducted an eye-tracking experiment with dogs and humans, investigated incidences of tracking interference, compared how they blinked, and examined how the differential quality of dog and human data affected the detection and classification of eye-movement events. Our results show that the morphology of dogs' faces and eyes can interfere with the systems' tracking methods, and that dogs blink less often but their blinks are longer. Importantly, the lower quality of dog data led to larger differences in how two different event-detection algorithms classified fixations, indicating that the results of key dependent variables are more susceptible to the choice of algorithm in dog than in human data. Further, two measures of the Nyström & Holmqvist (Behavior Research Methods, 42(4), 188-204, 2010) algorithm showed that dog fixations are less stable and dog data have more trials with extreme levels of noise. Our findings call for analyses better adjusted to the characteristics of dog eye-tracking data, and our recommendations will help future dog eye-tracking studies acquire quality data to enable robust comparisons of visual cognition between dogs and humans.
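The abstract above turns on how event-detection algorithms classify raw gaze samples into fixations, and why noisy data makes the classification algorithm-dependent. As a hedged illustration only, the sketch below implements a simple dispersion-threshold (I-DT) classifier; the study itself used the Nyström & Holmqvist adaptive velocity-based algorithm, and all thresholds here are assumed example values, not the paper's settings.

```python
import numpy as np

def detect_fixations(x, y, t, max_dispersion=1.0, min_duration=0.06):
    """Classify fixations with a simple dispersion-threshold (I-DT) scheme.

    x, y           : gaze coordinates (e.g., degrees of visual angle)
    t              : sample timestamps in seconds
    max_dispersion : max (x-range + y-range) allowed within a fixation
    min_duration   : minimum fixation duration in seconds
    Returns a list of (t_start, t_end, mean_x, mean_y) tuples.
    """
    def dispersion(s, e):
        # Horizontal extent plus vertical extent of the sample window
        return (x[s:e + 1].max() - x[s:e + 1].min()
                + y[s:e + 1].max() - y[s:e + 1].min())

    fixations = []
    start, n = 0, len(t)
    while start < n:
        # Grow a window until it spans at least min_duration
        end = start
        while end < n and t[end] - t[start] < min_duration:
            end += 1
        if end >= n:
            break
        if dispersion(start, end) <= max_dispersion:
            # Extend the window while dispersion stays under threshold
            while end + 1 < n and dispersion(start, end + 1) <= max_dispersion:
                end += 1
            fixations.append((t[start], t[end],
                              float(x[start:end + 1].mean()),
                              float(y[start:end + 1].mean())))
            start = end + 1
        else:
            start += 1  # no fixation here; slide the window forward
    return fixations
```

Because a noisier trace inflates the dispersion of every window, the same thresholds yield fewer or more fragmented fixations on low-quality data, which is the algorithm-sensitivity problem the study quantifies.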
Collapse
Affiliation(s)
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria.
- Medical University Vienna, Vienna, Austria.
- University of Vienna, Vienna, Austria.
| | - Kenneth Holmqvist
- Institute of Psychology, Nicolaus Copernicus University in Torun, Torun, Poland
- Department of Psychology, Regensburg University, Regensburg, Germany
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
| | - Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
| | - Ludwig Huber
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria
- Medical University Vienna, Vienna, Austria
- University of Vienna, Vienna, Austria
| | - Zsófia Virányi
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria
- Medical University Vienna, Vienna, Austria
- University of Vienna, Vienna, Austria
| |
Collapse
|
45
|
Dubourg L, Kojovic N, Eliez S, Schaer M, Schneider M. Visual processing of complex social scenes in 22q11.2 deletion syndrome: Relevance for negative symptoms. Psychiatry Res 2023; 321:115074. [PMID: 36706559 DOI: 10.1016/j.psychres.2023.115074] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/03/2022] [Revised: 01/15/2023] [Accepted: 01/22/2023] [Indexed: 01/25/2023]
Abstract
Current explanatory models of negative symptoms in schizophrenia have suggested a role for social cognition in symptom formation and maintenance. This study examined a core aspect of social cognition, namely social perception, and its association with clinical manifestations in 22q11.2 deletion syndrome (22q11DS), a genetic model of schizophrenia. We used an eye-tracking device to analyze developmental trajectories of the exploration of complex and dynamic social scenes in 58 participants with 22q11DS compared to 79 typically developing controls. Participants with 22q11DS showed divergent patterns of social scene exploration compared to healthy individuals from childhood to adulthood. We observed a more scattered gaze pattern and a lower number of shared gaze foci compared to healthy controls. Associations with negative symptoms, anxiety level, and face recognition were observed. Findings reveal abnormal visual exploration of complex social information from childhood to adulthood in 22q11DS. Atypical gaze patterns appear related to clinical manifestations in this syndrome.
Collapse
Affiliation(s)
- Lydia Dubourg
- Developmental Imaging and Psychopathology Lab, Department of Psychiatry, Faculty of Medicine, University of Geneva, Geneva, Switzerland
| | - Nada Kojovic
- Autism Brain & Behavior Lab, Department of Psychiatry, Faculty of Medicine, University of Geneva, Geneva, Switzerland
| | - Stephan Eliez
- Developmental Imaging and Psychopathology Lab, Department of Psychiatry, Faculty of Medicine, University of Geneva, Geneva, Switzerland; Department of Genetic Medicines and Development, School of Medicine, University of Geneva, Geneva, Switzerland
| | - Marie Schaer
- Autism Brain & Behavior Lab, Department of Psychiatry, Faculty of Medicine, University of Geneva, Geneva, Switzerland
| | - Maude Schneider
- Developmental Imaging and Psychopathology Lab, Department of Psychiatry, Faculty of Medicine, University of Geneva, Geneva, Switzerland; Clinical Psychology Unit for Intellectual and Developmental Disabilities, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland.
| |
Collapse
|
46
|
Examining the role of attentional allocation in working memory precision with pupillometry in children and adults. J Exp Child Psychol 2023; 231:105655. [PMID: 36863172 DOI: 10.1016/j.jecp.2023.105655] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 02/02/2023] [Accepted: 02/06/2023] [Indexed: 03/04/2023]
Abstract
Working memory (WM) precision, or the fidelity with which items can be remembered, is an important aspect of WM capacity that increases over childhood. Why individuals are more or less precise from moment to moment, and why WM becomes more stable with age, are not yet fully understood. Here, we examined the role of attentional allocation in visual WM precision in children aged 8 to 13 years and young adults aged 18 to 27 years, as measured by fluctuations in pupil dilation during stimulus encoding and maintenance. Using mixed models, we examined intraindividual links between change in pupil diameter and WM precision across trials and the role of developmental differences in these associations. Through probabilistic modeling of error distributions and the inclusion of a visuomotor control task, we isolated mnemonic precision from other cognitive processes. We found an age-related increase in mnemonic precision that was independent of guessing behavior, serial position effects, fatigue or loss of motivation across the experiment, and visuomotor processes. Trial-by-trial analyses showed that trials with smaller changes in pupil diameter during encoding and maintenance predicted more precise responses than trials with larger changes in pupil diameter within individuals. At encoding, this relationship was stronger for older participants. Furthermore, the pupil-performance coupling grew across the delay period, particularly or exclusively for adults. These results suggest a functional link between pupil fluctuations and WM precision that grows over development; visual details may be stored more faithfully when attention is allocated efficiently to a sequence of objects at encoding and throughout a delay period.
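Trial-by-trial pupillometry analyses of the kind described above start from a per-trial dilation measure: each trial's pupil trace is baseline-corrected against a short pre-stimulus window, then averaged over an analysis window. The sketch below shows that standard preprocessing step under assumed window choices; the actual windows and mixed-model pipeline of the study are not specified here.

```python
import numpy as np

def baseline_correct(pupil, t, baseline_window=(-0.2, 0.0)):
    """Subtractive baseline correction for one trial's pupil trace.

    pupil : pupil diameter samples (e.g., mm), shape (n_samples,)
    t     : sample times relative to stimulus onset, in seconds
    Returns the trace minus the mean diameter in the baseline window.
    """
    mask = (t >= baseline_window[0]) & (t < baseline_window[1])
    return pupil - pupil[mask].mean()

def trial_dilation(pupil, t, window=(0.0, 2.0)):
    """Mean baseline-corrected dilation in the analysis window.

    Yields one value per trial, suitable as a trial-level predictor
    of response precision in a mixed model.
    """
    corrected = baseline_correct(pupil, t)
    mask = (t >= window[0]) & (t < window[1])
    return float(corrected[mask].mean())
```

These per-trial values can then be entered as within-subject predictors of precision (e.g., with random slopes per participant), which is the intraindividual coupling the abstract reports.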
Collapse
|
47
|
Clear Aligners and Smart Eye Tracking Technology as a New Communication Strategy between Ethical and Legal Issues. Life (Basel) 2023; 13:life13020297. [PMID: 36836654 PMCID: PMC9967915 DOI: 10.3390/life13020297] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 01/16/2023] [Accepted: 01/17/2023] [Indexed: 01/26/2023] Open
Abstract
Smart eye-tracking technology (SEET) that determines visual attention using smartphones can be used to determine the aesthetic perception of different types of clear aligners. Its value as a communication and comprehension tool, in addition to the ethical and legal concerns it entails, can be assessed. One hundred subjects (50 F, 50 M; age range 15-70) were equally distributed into non-orthodontic (A) and orthodontic (B) groups. A smartphone-based SEET app assessed their knowledge of and opinions on aligners. Subjects evaluated images of smiles not wearing aligners, with/without attachments and with straight/scalloped gingival margins, as a guided calibration step which formed the image control group. Subsequently, the subjects rated the same smiles, this time wearing aligners (experimental image group). Questionnaire data and average values for each group of patients and images, relating to fixation times and overall star scores, were analyzed using these tests: chi-square, t-test, Mann-Whitney U, Spearman's rho, and Wilcoxon (p < 0.05). One-way ANOVA and related post-hoc tests were also applied. Orthodontic patients were found to be better informed than non-orthodontic patients. Aesthetic perception could be swayed by several factors. Attachments scored lower in aesthetic evaluation. Lips distracted attention from attachments and improved evaluations. Attachment-free aligners were better rated overall. A more thorough understanding of patients' opinions, expectations and aesthetic perception of aligners can improve communication with them. Mobile SEET is remarkably promising, although it does require careful medicolegal risk-benefit assessment for responsible and professional use.
Collapse
|
48
|
Zhang R, Cao L, Xu Z, Zhang Y, Zhang L, Hu Y, Chen M, Yao D. Improving AR-SSVEP Recognition Accuracy Under High Ambient Brightness Through Iterative Learning. IEEE Trans Neural Syst Rehabil Eng 2023; 31:1796-1806. [PMID: 37030737 DOI: 10.1109/tnsre.2023.3260842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/29/2023]
Abstract
Augmented reality-based brain-computer interface (AR-BCI) systems are an important route for taking BCI technology outside the laboratory due to their portability and mobility, but their performance in real-world scenarios has not been fully studied. In the current study, we first investigated the effect of ambient brightness on AR-BCI performance. Five different light intensities were set as experimental conditions to simulate typical brightness in real scenes, while the same steady-state visual evoked potential (SSVEP) stimulus was displayed in the AR glasses. The data analysis showed that SSVEPs could be evoked under all five light intensities, but the response became weaker as brightness increased. The recognition accuracies of AR-SSVEP were negatively correlated with light intensity; the highest accuracies were 89.35% with FBCCA and 83.33% with CCA under 0 lux light intensity, while they decreased to 62.53% and 49.24% under 1200 lux. To address the accuracy loss at high ambient brightness, we further designed an SSVEP recognition algorithm with iterative learning capability, named ensemble online adaptive CCA (eOACCA). The main strategy is to provide initial filters for high-intensity data by iteratively learning low-light-intensity AR-SSVEP data. The experimental results showed that the eOACCA algorithm had significant advantages under higher light intensities (≥600 lux). Compared with FBCCA, the accuracy of eOACCA under 1200 lux was increased by 13.91%. In conclusion, the current study contributes to an in-depth understanding of AR-BCI performance variations under different lighting conditions and helps promote AR-BCI applications in complex lighting environments.
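The CCA baseline that eOACCA and FBCCA build on correlates multichannel EEG with sine/cosine reference templates at each candidate stimulation frequency and picks the frequency with the highest canonical correlation. The following is a minimal sketch of that standard CCA approach, not the paper's eOACCA algorithm; the harmonic count and frequencies are assumed example values.

```python
import numpy as np

def canonical_corr(X, Y):
    """First canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)  # orthonormal basis of each column space
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(s[0])  # largest singular value = first canonical correlation

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference template for one stimulus frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def classify_ssvep(eeg, freqs, fs):
    """Return the index of the candidate frequency best matching the EEG.

    eeg   : (n_samples, n_channels) array of one SSVEP trial
    freqs : candidate stimulation frequencies in Hz
    """
    n = eeg.shape[0]
    scores = [canonical_corr(eeg, ssvep_reference(f, fs, n)) for f in freqs]
    return int(np.argmax(scores))
```

eOACCA's contribution, per the abstract, is to initialize the spatial filters for bright-environment data from filters learned iteratively on low-light data, rather than computing each trial's correlation from scratch as this baseline does.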
Collapse
|
49
|
Huber L, Lonardo L, Völter CJ. Eye Tracking in Dogs: Achievements and Challenges. COMPARATIVE COGNITION & BEHAVIOR REVIEWS 2023; 18:33-58. [PMID: 39045221 PMCID: PMC7616291 DOI: 10.3819/ccbr.2023.180005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/25/2024] Open
Abstract
In this article, we review eye-tracking studies with dogs (Canis familiaris) with a threefold goal: we highlight the achievements in the field of canine perception and cognition using eye tracking, then discuss the challenges that arise in applying a technology developed in human psychophysics, and finally propose new avenues in dog eye-tracking research. For the first goal, we present studies that investigated dogs' perception of humans, mainly faces, but also hands, gaze, emotions, communicative signals, goal-directed movements, and social interactions, as well as the perception of animations representing possible and impossible physical processes and animacy cues. We then discuss the present challenges of eye tracking with dogs, such as doubtful picture-object equivalence, extensive training, small sample sizes, difficult calibration, and artificial stimuli and settings. We suggest possible improvements and solutions for these problems in order to achieve better stimulus and data quality. Finally, we propose the use of dynamic stimuli, pupillometry, arrival time analyses, mobile eye tracking, and combinations with behavioral and neuroimaging methods to further advance canine research and open up new scientific fields in this highly dynamic branch of comparative cognition.
Collapse
Affiliation(s)
- Ludwig Huber
- Messerli Research Institute, Unit of Comparative Cognition, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna
| | - Lucrezia Lonardo
- Messerli Research Institute, Unit of Comparative Cognition, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna
| | - Christoph J Völter
- Messerli Research Institute, Unit of Comparative Cognition, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna
| |
Collapse
|
50
|
Pérez Roche MT, Yam JC, Liu H, Gutierrez D, Pham C, Balasanyan V, García G, Cedillo Ley M, de Fernando S, Ortín M, Pueyo V. Visual Acuity and Contrast Sensitivity in Preterm and Full-Term Children Using a Novel Digital Test. CHILDREN (BASEL, SWITZERLAND) 2022; 10:children10010087. [PMID: 36670638 PMCID: PMC9856886 DOI: 10.3390/children10010087] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 12/08/2022] [Accepted: 12/21/2022] [Indexed: 01/03/2023]
Abstract
Visual assessment in preverbal children mostly relies on the preferential looking paradigm. It requires an experienced observer to interpret the child's responses to a stimulus. DIVE (Device for an Integral Visual Examination) is a digital tool with an integrated eye tracker (ET) that lifts this requirement and automates the process. The aim of our study was to assess the development of two visual functions, visual acuity (VA) and contrast sensitivity (CS), with DIVE in a large sample of children from 6 months to 14 years of age, and to compare the results of preterm and full-term children. Participants were recruited in clinical settings in five countries. A total of 2208 children were tested, 609 of whom were born preterm. Both VA and CS improved throughout childhood, with the maximum increase during the first 5 years of age. Gestational age, refractive error and age had an impact on VA results, while CS values were influenced only by age. With this study we report normative reference outcomes for VA and CS throughout childhood and validate the DIVE tests as a useful tool to measure basic visual functions in children.
Collapse
Affiliation(s)
- María Teresa Pérez Roche
- Ophthalmology Department, Miguel Servet University Hospital, 50009 Zaragoza, Spain
- Aragon Institute of Health Research (IIS Aragón), 50009 Zaragoza, Spain
| | | | - Hu Liu
- The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
| | - Diego Gutierrez
- I3A Institute for Research in Engineering, Universidad de Zaragoza, 50009 Zaragoza, Spain
| | - Chau Pham
- National Institute of Ophthalmology, Hanoi 100000, Vietnam
| | | | - Gerardo García
- Strabismus and Pediatric Ophthalmology Department, Hospital de la Ceguera, APEC, Ciudad de Mexico 04030, Mexico
| | - Mauricio Cedillo Ley
- Strabismus and Pediatric Ophthalmology Department, Hospital de la Ceguera, APEC, Ciudad de Mexico 04030, Mexico
| | - Sandra de Fernando
- Ophthalmology Department, Cruces University Hospital, 48903 Barakaldo, Spain
| | | | - Victoria Pueyo
- Ophthalmology Department, Miguel Servet University Hospital, 50009 Zaragoza, Spain
- Aragon Institute of Health Research (IIS Aragón), 50009 Zaragoza, Spain
- Correspondence:
| | | |
Collapse
|