1. Cacciamani L, Tomer D, Mylod-Vargas MG, Selcov A, Peterson GA, Oseguera CI, Barbieux A. HD-tDCS to the lateral occipital complex improves haptic object recognition. Exp Brain Res 2024. PMID: 38970654. DOI: 10.1007/s00221-024-06888-7.
Abstract
High-definition transcranial direct current stimulation (HD-tDCS) is a non-invasive brain stimulation technique that has been shown to be safe and effective in modulating neuronal activity. The present study investigates the effect of anodal HD-tDCS on haptic object perception and memory through stimulation of the lateral occipital complex (LOC), a structure that has been shown to be involved in both visual and haptic object recognition. In this single-blind, sham-controlled, between-subjects study, blindfolded healthy, sighted participants used their right (dominant) hand to perform haptic discrimination and recognition tasks with 3D-printed, novel objects called "Greebles" while receiving 20 min of 2 milliamp (mA) anodal stimulation (or sham) to the left or right LOC. Compared to sham, those who received left LOC stimulation (contralateral to the hand used) showed an improvement in haptic object recognition but not discrimination, a finding that was evident from the start of the behavioral tasks. A second experiment showed that this effect was not observed with right LOC stimulation (ipsilateral to the hand used). These results suggest that HD-tDCS to the left LOC can improve recognition of objects perceived via touch. Overall, this work sheds light on the LOC as a multimodal structure that plays a key role in object recognition in both the visual and haptic modalities.
Affiliation(s)
- Laura Cacciamani, Daniel Tomer, Mary Grace Mylod-Vargas, Aaron Selcov, Grace A Peterson, Christopher I Oseguera, and Aidan Barbieux: Department of Psychology and Child Development, California Polytechnic State University, 1 Grand Ave., San Luis Obispo, CA 93407, USA
2. Garces P, Antoniades CA, Sobanska A, Kovacs N, Ying SH, Gupta AS, Perlman S, Szmulewicz DJ, Pane C, Németh AH, Jardim LB, Coarelli G, Dankova M, Traschütz A, Tarnutzer AA. Quantitative Oculomotor Assessment in Hereditary Ataxia: Systematic Review and Consensus by the Ataxia Global Initiative Working Group on Digital-motor Biomarkers. Cerebellum 2024;23:896-911. PMID: 37117990. PMCID: PMC11102387. DOI: 10.1007/s12311-023-01559-9.
Abstract
Oculomotor deficits are common in hereditary ataxia, but disproportionately neglected in clinical ataxia scales and as outcome measures for interventional trials. Quantitative assessment of oculomotor function has become increasingly available, is thus applicable in multicenter trials, and offers the opportunity to capture the severity and progression of oculomotor impairment in a sensitive and reliable manner. In this consensus paper of the Ataxia Global Initiative Working Group on Digital Oculomotor Biomarkers, based on a systematic literature review, we propose harmonized methodology and measurement parameters for the quantitative assessment of oculomotor function in natural-history studies and clinical trials in hereditary ataxia. MEDLINE was searched for articles reporting on oculomotor/vestibular properties in ataxia patients, and a study-tailored quality assessment was performed. One hundred and seventeen articles reporting on subjects with genetically confirmed (n=1134) or suspected hereditary ataxia (n=198), and degenerative ataxias with sporadic presentation (n=480), were included and subjected to data extraction. Based on robust discrimination from controls, correlation with disease severity, sensitivity to change, and feasibility in international multicenter settings as prerequisites for clinical trials, we prioritize a core set of five eye-movement types: (i) pursuit eye movements, (ii) saccadic eye movements, (iii) fixation, (iv) eccentric gaze holding, and (v) rotational vestibulo-ocular reflex. We provide detailed guidelines for their acquisition, and recommendations on the quantitative parameters to extract. Limitations include low study quality, heterogeneity in patient populations, and a lack of longitudinal studies. Standardization of quantitative oculomotor assessments will facilitate their implementation, interpretation, and validation in clinical trials, and ultimately advance our understanding of the evolution of oculomotor network dysfunction in hereditary ataxias.
Affiliation(s)
- Pilar Garces: Roche Pharma Research and Early Development, Neuroscience and Rare Diseases, Roche Innovation Center Basel, Basel, Switzerland
- Chrystalina A Antoniades: NeuroMetrology Lab, Nuffield Department of Clinical Neurosciences, Clinical Neurology, Medical Sciences Division, University of Oxford, Oxford, OX3 9DU, UK
- Anna Sobanska: Institute of Psychiatry and Neurology, Warsaw, Poland
- Norbert Kovacs: Department of Neurology, University of Pécs, Medical School, Pécs, Hungary
- Sarah H Ying: Department of Otology and Laryngology and Department of Neurology, Harvard Medical School, Boston, MA, USA
- Anoopum S Gupta: Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Susan Perlman: University of California Los Angeles, Los Angeles, California, USA
- David J Szmulewicz: Balance Disorders and Ataxia Service, Royal Victoria Eye and Ear Hospital, East Melbourne, Melbourne, VIC, 3002, Australia; The Florey Institute of Neuroscience and Mental Health, Parkville, Melbourne, VIC, 3052, Australia
- Chiara Pane: Department of Neurosciences and Reproductive and Odontostomatological Sciences, University of Naples "Federico II", Naples, Italy
- Andrea H Németh: Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Oxford Centre for Genomic Medicine, Oxford University Hospitals NHS Trust, Oxford, UK
- Laura B Jardim: Departamento de Medicina Interna, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil; Serviço de Genética Médica/Centro de Pesquisa Clínica e Experimental, Hospital de Clínicas de Porto Alegre, Porto Alegre, Brazil
- Giulia Coarelli: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm U1127, CNRS UMR7225, Paris, France; Department of Genetics, Neurogene National Reference Centre for Rare Diseases, Pitié-Salpêtrière University Hospital, Assistance Publique, Hôpitaux de Paris, Paris, France
- Michaela Dankova: Department of Neurology, Centre of Hereditary Ataxias, 2nd Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- Andreas Traschütz: Research Division "Translational Genomics of Neurodegenerative Diseases", Hertie-Institute for Clinical Brain Research and Center of Neurology, University of Tübingen, Tübingen, Germany; German Center for Neurodegenerative Diseases (DZNE), University of Tübingen, Tübingen, Germany
- Alexander A Tarnutzer: Neurology, Cantonal Hospital of Baden, 5404 Baden, Switzerland; Faculty of Medicine, University of Zurich, Zurich, Switzerland
3. Kandel M, Snedeker J. Assessing two methods of webcam-based eye-tracking for child language research. J Child Lang 2024:1-34. PMID: 38712583. DOI: 10.1017/s0305000924000175.
Abstract
We assess the feasibility of conducting web-based eye-tracking experiments with children using two methods of webcam-based eye-tracking: automatic gaze estimation with the WebGazer.js algorithm and hand annotation of gaze direction from recorded webcam videos. Experiment 1 directly compares the two methods in a visual-world language task with five- to six-year-old children. Experiment 2 more precisely investigates WebGazer.js's spatiotemporal resolution with four- to twelve-year-old children in a visual-fixation task. We find that it is possible to conduct web-based eye-tracking experiments with children in both supervised (Experiment 1) and unsupervised (Experiment 2) settings; however, the webcam eye-tracking methods differ in their sensitivity and accuracy. Webcam video annotation is well suited to detecting fine-grained looking effects relevant to child language researchers. In contrast, WebGazer.js gaze estimates appear noisier and less temporally precise. We discuss the advantages and disadvantages of each method and provide recommendations for researchers conducting child eye-tracking studies online.
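The sensitivity comparison above comes down to how often automatic and hand-coded gaze codes agree over time. As a minimal sketch (the data structures and function name here are illustrative, not the authors' pipeline), frame-by-frame agreement between two time-aligned binary target-looking codes can be computed like this:

```python
def frame_agreement(auto_codes, hand_codes):
    """Proportion of time-locked frames on which two binary looking codes
    (1 = looking at target, 0 = looking elsewhere) agree."""
    if len(auto_codes) != len(hand_codes):
        raise ValueError("code sequences must be time-aligned and equal in length")
    matches = sum(a == h for a, h in zip(auto_codes, hand_codes))
    return matches / len(auto_codes)

# Hypothetical example: automatic codes vs. hand annotation over 8 frames
agreement = frame_agreement([1, 1, 0, 0, 1, 1, 1, 0],
                            [1, 0, 0, 0, 1, 1, 1, 0])
```

In practice one would first bin raw gaze samples into frames and decide how to treat missing data; this only illustrates the agreement metric itself.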
4. Hagihara H, Zaadnoordijk L, Cusack R, Kimura N, Tsuji S. Exploration of factors affecting webcam-based automated gaze coding. Behav Res Methods 2024. PMID: 38693440. DOI: 10.3758/s13428-024-02424-1.
Abstract
Online experiments have been transforming the field of behavioral research, enabling researchers to increase sample sizes, access diverse populations, lower the costs of data collection, and promote reproducibility. The field of developmental psychology increasingly exploits such online testing approaches. Since infants cannot give explicit behavioral responses, one key outcome measure is infants' gaze behavior. In the absence of automated eye trackers in participants' homes, automatic gaze classification from webcam data would make it possible to avoid painstaking manual coding. However, the lack of a controlled experimental environment may lead to various noise factors impeding automatic face detection or gaze classification. We created an adult webcam dataset that systematically reproduced noise factors from infant webcam studies that might affect automated gaze coding accuracy. We varied participants' left-right offset, distance to the camera, facial rotation, and the direction of the lighting source. Running two state-of-the-art classification algorithms (iCatcher+ and OWLET) revealed that face detection performance was particularly affected by the lighting source, while gaze coding accuracy was consistently affected by the distance to the camera and lighting source. Morphing participants' faces to be unidentifiable did not generally affect the results, suggesting facial anonymization could be used when making online video data publicly available for purposes of further study and transparency. Our findings will guide improvements in study design for infant and adult participants during online experiments. Moreover, training algorithms on our dataset will allow researchers to improve robustness and allow developmental psychologists to leverage online testing more efficiently.
Affiliation(s)
- Hiromichi Hagihara: Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita-shi, Osaka, 565-0871, Japan; International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan; Japan Society for the Promotion of Science, 5-3-1 Kojimachi, Chiyoda-ku, Tokyo, 102-0083, Japan; The Institute for AI and Beyond, The University of Tokyo, 2-11-16 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan
- Lorijn Zaadnoordijk: Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, College Green, Dublin 2, Ireland
- Rhodri Cusack: Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, College Green, Dublin 2, Ireland
- Nanako Kimura: Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Sho Tsuji: International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan; The Institute for AI and Beyond, The University of Tokyo, 2-11-16 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan
5. Fu X, Platt E, Shic F, Bradshaw J. Infant Social Attention Associated with Elevated Likelihood for Autism Spectrum Disorder: A Multi-Method Comparison. J Autism Dev Disord 2024. PMID: 38678515. DOI: 10.1007/s10803-024-06360-z.
Abstract
PURPOSE: The study aimed to compare eye tracking (ET) and manual coding (MC) measures of attention to social and nonsocial information in infants with elevated familial likelihood (EL) of autism spectrum disorder (ASD) and low likelihood of ASD (LL). ET provides a temporally and spatially sensitive tool for measuring gaze allocation. Existing evidence suggests that ET is a promising tool for detecting distinct social attention patterns that may serve as a biomarker for ASD. However, ET is prone to data loss, especially in young EL infants. METHODS: To increase evidence for ET as a viable tool for capturing atypical social attention in EL infants, the current prospective, longitudinal study obtained ET and MC measures of social and nonsocial attention in 25 EL and 47 LL infants at several time points between 3 and 24 months of age. RESULTS: ET data were obtained with a satisfactory success rate of 95.83%, albeit with a higher degree of data loss compared to MC. Infant age and ASD likelihood status did not impact the extent of ET or MC data loss. There was a significant positive association between the ET and MC measures of attention, and separate analyses of attention using ET and MC measures yielded comparable findings. These analyses indicated group differences (EL vs. LL) in age-related change in attention to social vs. nonsocial information. CONCLUSION: Together, the findings support infant ET as a promising approach for identifying very early markers associated with ASD likelihood.
Affiliation(s)
- Xiaoxue Fu: Department of Psychology, University of South Carolina, Columbia, SC, USA
- Emma Platt: Department of Psychology, University of South Carolina, Columbia, SC, USA
- Frederick Shic: Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA; Department of Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Jessica Bradshaw: Department of Psychology, University of South Carolina, Columbia, SC, USA
6. Valtakari NV, Hessels RS, Niehorster DC, Viktorsson C, Nyström P, Falck-Ytter T, Kemner C, Hooge ITC. A field test of computer-vision-based gaze estimation in psychology. Behav Res Methods 2024;56:1900-1915. PMID: 37101100. PMCID: PMC10990994. DOI: 10.3758/s13428-023-02125-1.
Abstract
Computer-vision-based gaze estimation refers to techniques that estimate gaze direction directly from video recordings of the eyes or face without the need for an eye tracker. Although many such methods exist, their validation is often found in the technical literature (e.g., computer science conference papers). We aimed to (1) identify which computer-vision-based gaze estimation methods are usable by the average researcher in fields such as psychology or education, and (2) evaluate these methods. We searched for methods that do not require calibration and have clear documentation. Two toolkits, OpenFace and OpenGaze, were found to fulfill these criteria. First, we present an experiment where adult participants fixated on nine stimulus points on a computer screen. We filmed their face with a camera and processed the recorded videos with OpenFace and OpenGaze. We conclude that OpenGaze is accurate and precise enough to be used in screen-based experiments with stimuli separated by at least 11 degrees of gaze angle. OpenFace was not sufficiently accurate for such situations but can potentially be used in sparser environments. We then examined whether OpenFace could be used with horizontally separated stimuli in a sparse environment with infant participants. We compared dwell measures based on OpenFace estimates to the same measures based on manual coding. We conclude that OpenFace gaze estimates may potentially be used with measures such as relative total dwell time to sparse, horizontally separated areas of interest, but should not be used to draw conclusions about measures such as dwell duration.
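Accuracy thresholds like the 11 degrees of gaze angle quoted above depend on viewing geometry. As a rough, illustrative conversion (the function and the example numbers are ours, not from the paper), the visual angle subtended by an on-screen separation can be computed from its size and the viewing distance:

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (in degrees) subtended by an on-screen extent of
    size_cm viewed from distance_cm, using the standard 2*atan formula."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Hypothetical example: stimuli 12 cm apart viewed from 60 cm
# subtend roughly 11 degrees of gaze angle
separation = visual_angle_deg(12, 60)
```

A separation requirement in degrees therefore translates into different on-screen distances depending on how far the participant sits from the display.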
Affiliation(s)
- Niilo V Valtakari: Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, the Netherlands
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, the Netherlands
- Diederick C Niehorster: Lund University Humanities Lab, Lund University, Lund, Sweden; Department of Psychology, Lund University, Lund, Sweden
- Charlotte Viktorsson: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Pär Nyström: Uppsala Child and Baby Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Terje Falck-Ytter: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden; Karolinska Institutet Center of Neurodevelopmental Disorders (KIND), Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
- Chantal Kemner: Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, the Netherlands
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, the Netherlands
7. Zeng G, Simpson EA, Paukner A. Maximizing valid eye-tracking data in human and macaque infants by optimizing calibration and adjusting areas of interest. Behav Res Methods 2024;56:881-907. PMID: 36890330. DOI: 10.3758/s13428-022-02056-3.
Abstract
Remote eye tracking with automated corneal reflection provides insights into the emergence and development of cognitive, social, and emotional functions in human infants and non-human primates. However, because most eye-tracking systems were designed for use in human adults, the accuracy of eye-tracking data collected in other populations is unclear, as are potential approaches to minimize measurement error. For instance, data quality may differ across species or ages, which are necessary considerations for comparative and developmental studies. Here we examined how the calibration method and adjustments to areas of interest (AOIs) of the Tobii TX300 changed the mapping of fixations to AOIs in a cross-species longitudinal study. We tested humans (N = 119) at 2, 4, 6, 8, and 14 months of age and macaques (Macaca mulatta; N = 21) at 2 weeks, 3 weeks, and 6 months of age. In all groups, we found improvement in the proportion of AOI hits detected as the number of successful calibration points increased, suggesting calibration approaches with more points may be advantageous. Spatially enlarging and temporally prolonging AOIs increased the number of fixation-AOI mappings, suggesting improvements in capturing infants' gaze behaviors; however, these benefits varied across age groups and species, suggesting different parameters may be ideal, depending on the population studied. In sum, to maximize usable sessions and minimize measurement error, eye-tracking data collection and extraction approaches may need adjustments for the age groups and species studied. Doing so may make it easier to standardize and replicate eye-tracking research findings.
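The AOI adjustments examined above, spatial enlargement and temporal prolongation, amount to relaxing a fixation-to-AOI hit test. A minimal sketch (the AOI representation, function, and parameter names are illustrative, not the authors' implementation):

```python
def aoi_hit(x, y, t, aoi, margin_px=0.0, pad_ms=0.0):
    """Return True if a fixation at screen position (x, y) occurring at
    time t (ms) falls inside an AOI given as a dict with spatial bounds
    'x0','y0','x1','y1' (pixels) and temporal bounds 't0','t1' (ms),
    after enlarging the AOI by margin_px and prolonging it by pad_ms."""
    in_space = (aoi['x0'] - margin_px <= x <= aoi['x1'] + margin_px and
                aoi['y0'] - margin_px <= y <= aoi['y1'] + margin_px)
    in_time = aoi['t0'] - pad_ms <= t <= aoi['t1'] + pad_ms
    return in_space and in_time

# Hypothetical face AOI shown for the first two seconds of a trial
face = {'x0': 100, 'y0': 100, 'x1': 300, 'y1': 300, 't0': 0, 't1': 2000}
```

Increasing margin_px or pad_ms buys tolerance to calibration error and drift at the cost of a higher risk of mapping fixations to the wrong AOI, which is the trade-off the study quantifies across ages and species.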
Affiliation(s)
- Guangyu Zeng: Department of Psychology, University of Miami, Coral Gables, FL, USA
- Annika Paukner: Department of Psychology, Nottingham Trent University, Nottingham, UK
8. Steffan A, Zimmer L, Arias-Trejo N, Bohn M, Dal Ben R, Flores-Coronado MA, Franchin L, Garbisch I, Wiesmann CG, Hamlin JK, Havron N, Hay JF, Hermansen TK, Jakobsen KV, Kalinke S, Ko ES, Kulke L, Mayor J, Meristo M, Moreau D, Mun S, Prein J, Rakoczy H, Rothmaler K, Oliveira DS, Simpson EA, Sirois S, Smith ES, Strid K, Tebbe AL, Thiele M, Yuen F, Schuwerk T. Validation of an open source, remote web-based eye-tracking method (WebGazer) for research in early childhood. Infancy 2024;29:31-55. PMID: 37850726. PMCID: PMC10841511. DOI: 10.1111/infa.12564.
Abstract
Measuring eye movements remotely via the participant's webcam promises to be an attractive methodological addition to in-person eye-tracking in the lab. However, there is a lack of systematic research comparing remote web-based eye-tracking with in-lab eye-tracking in young children. We report a multi-lab study that compared these two measures in an anticipatory looking task with toddlers using WebGazer.js and jsPsych. Results of our remotely tested sample of 18-27-month-old toddlers (N = 125) revealed that web-based eye-tracking successfully captured goal-based action predictions, although the proportion of goal-directed anticipatory looking was lower compared to the in-lab sample (N = 70). As expected, the attrition rate was substantially higher in the web-based (42%) than in the in-lab sample (10%). Excluding trials based on visual inspection of the match between time-locked gaze coordinates and the participant's webcam video overlaid on the stimuli was an important preprocessing step to reduce noise in the data. We discuss the use of this remote web-based method in comparison with other current methodological innovations. Our study demonstrates that remote web-based eye-tracking can be a useful tool for testing toddlers, facilitating recruitment of larger and more diverse samples; a caveat to consider is the larger drop-out rate.
Affiliation(s)
- Adrian Steffan: Department of Psychology, Ludwig-Maximilians-Universität München
- Lucie Zimmer: Department of Psychology, Ludwig-Maximilians-Universität München
- Manuel Bohn: Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology; Institute of Psychology, Leuphana University Lüneburg
- Laura Franchin: Department of Psychology and Cognitive Science, University of Trento
- Isa Garbisch: Department of Developmental Psychology, University of Göttingen
- Charlotte Grosse Wiesmann: Research Group Milestones of Early Cognitive Development, Max Planck Institute for Human Cognitive and Brain Sciences
- J. Kiley Hamlin: Department of Psychology, The University of British Columbia
- Naomi Havron: School of Psychological Sciences & Center for the Study of Child Development, University of Haifa
- Steven Kalinke: Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology
- Eon-Suk Ko: Department of English Language and Literature, Chosun University
- Louisa Kulke: Developmental Psychology with Educational Psychology, University of Bremen
- David Moreau: School of Psychology and Centre for Brain Research, University of Auckland
- Seongmin Mun: Department of English Language and Literature, Chosun University
- Julia Prein: Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology
- Hannes Rakoczy: Department of Developmental Psychology, University of Göttingen
- Katrin Rothmaler: Research Group Milestones of Early Cognitive Development, Max Planck Institute for Human Cognitive and Brain Sciences
- Sylvain Sirois: Department of Psychology, Université du Québec à Trois-Rivières
- Karin Strid: Department of Psychology, University of Gothenburg
- Anna-Lena Tebbe: Research Group Milestones of Early Cognitive Development, Max Planck Institute for Human Cognitive and Brain Sciences
- Maleen Thiele: Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology
- Francis Yuen: Department of Psychology, The University of British Columbia
- Tobias Schuwerk: Department of Psychology, Ludwig-Maximilians-Universität München
9. Hooge ITC, Niehorster DC, Hessels RS, Benjamins JS, Nyström M. How robust are wearable eye trackers to slow and fast head and body movements? Behav Res Methods 2023;55:4128-4142. PMID: 36326998. PMCID: PMC10700439. DOI: 10.3758/s13428-022-02010-3.
Abstract
How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8°. However, most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.
Affiliation(s)
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Jeroen S Benjamins: Experimental Psychology, Helmholtz Institute, and Social, Health and Organisational Psychology, Utrecht University, Utrecht, The Netherlands
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
10. Cho VY, Loh XH, Abbott L, Mohd-Isa NA, Anthonappa RP. Reporting Eye-tracking Studies In DEntistry (RESIDE) checklist. J Dent 2023;129:104359. PMID: 36403692. DOI: 10.1016/j.jdent.2022.104359.
Abstract
OBJECTIVES: To (i) provide a scoping review of eye-tracking studies in dentistry, and (ii) propose a "Reporting Eye-tracking Studies In DEntistry" (RESIDE) checklist to facilitate standard reporting of eye-tracking studies. DATA: A comprehensive search of six distinct electronic databases was undertaken. SOURCES: PubMed, OVID, Web of Knowledge, Scopus, Cochrane and Google Scholar were used to identify studies that applied eye-tracking technology to dentistry. STUDY SELECTION: 42 studies met the inclusion criteria. Most studies exhibited several inconsistencies or failed to report on the appropriate items in the RESIDE checklist. These essential components include ethical approval, sample size calculation, location and setting, eye-tracking device attributes, participant calibration, sequence of events, and eye-tracking metrics (quantitative, qualitative and data details). CONCLUSIONS: Evaluation of the published eye-tracking studies in this scoping review provides empirical data, highlighting the inconsistencies and limitations. Importantly, it illustrates the applicability of the RESIDE checklist, which provides a comprehensive list of reporting elements to assist authors and reviewers of eye-tracking studies in dentistry. RESIDE also provides a framework for overcoming critical issues to ensure high-quality scientific publications. CLINICAL SIGNIFICANCE: A minimum reporting threshold should be applied before accepting eye-tracking studies for publication in the future. The RESIDE checklist promotes transparent and reproducible scientific communication about eye-tracking applications in dentistry and provides a comprehensive list of reporting elements to assist authors and reviewers in ensuring high-quality scientific publications.
Affiliation(s)
- Vanessa Y Cho, Xin Hui Loh, Lyndon Abbott, Nur Anisah Mohd-Isa, and Robert P Anthonappa: Dental School, The University of Western Australia, 17 Monash Ave, Nedlands 6009, Perth, Australia
11. Capparini C, To MPS, Dardenne C, Reid VM. Offline Calibration for Infant Gaze and Head Tracking across a Wide Horizontal Visual Field. Sensors (Basel) 2023;23:972. PMID: 36679775. PMCID: PMC9866781. DOI: 10.3390/s23020972.
Abstract
Most well-established eye-tracking research paradigms adopt remote systems, which typically feature regular flat screens of limited width. Limitations of current eye-tracking methods over a wide area include calibration, the significant loss of data due to head movements, and the reduction of data quality over the course of an experimental session. Here, we introduced a novel method of tracking gaze and head movements that combines the possibility of investigating a wide field of view and an offline calibration procedure to enhance the accuracy of measurements. A 4-camera Smart Eye Pro system was adapted for infant research to detect gaze movements across 126° of the horizontal meridian. To accurately track this visual area, an online system calibration was combined with a new offline gaze calibration procedure. Results revealed that the proposed system successfully tracked infants' head and gaze beyond the average screen size. The implementation of an offline calibration procedure improved the validity and spatial accuracy of measures by correcting a systematic top-right error (1.38° mean horizontal error and 1.46° mean vertical error). This approach could be critical for deriving accurate physiological measures from the eye and represents a substantial methodological advance for tracking looking behaviour across both central and peripheral regions. The offline calibration is particularly useful for work with developing populations, such as infants, and for people who may have difficulties in following instructions.
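An offline correction of the kind reported above, which removes a systematic directional error, can be sketched as subtracting the mean offset measured at validation points from every gaze sample. The data structures and the example values below are illustrative, not taken from the study's pipeline:

```python
def offline_offset_correction(samples, validation_errors):
    """Subtract the mean (dx, dy) error observed at validation points
    from each (x, y) gaze sample; all values in degrees."""
    n = len(validation_errors)
    mean_dx = sum(dx for dx, _ in validation_errors) / n
    mean_dy = sum(dy for _, dy in validation_errors) / n
    return [(x - mean_dx, y - mean_dy) for x, y in samples]

# Hypothetical example: a systematic top-right error of about
# (1.4, 1.5) degrees measured at two validation points
corrected = offline_offset_correction(
    samples=[(10.0, 5.0), (0.0, 0.0)],
    validation_errors=[(1.4, 1.5), (1.4, 1.5)])
```

A constant-offset correction like this only handles the systematic component of the error; session-specific drift or scaling errors would require a richer calibration model.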
Affiliation(s)
- Chiara Capparini
- Center for Research in Cognition & Neuroscience (CRCN), Université Libre de Bruxelles, 1050 Brussels, Belgium
- Department of Psychology, Lancaster University, Lancaster LA1 4YF, UK
- Michelle P. S. To
- Department of Psychology, Lancaster University, Lancaster LA1 4YF, UK
- Vincent M. Reid
- School of Psychology, University of Waikato, Hamilton 3240, New Zealand
12
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023; 55:364-416. [PMID: 35384605 PMCID: PMC9535040 DOI: 10.3758/s13428-021-01762-8] [Citation(s) in RCA: 45] [Impact Index Per Article: 45.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/29/2021] [Indexed: 11/08/2022]
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Affiliation(s)
- Kenneth Holmqvist
- Department of Psychology, Nicolaus Copernicus University, Torun, Poland
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Department of Psychology, Regensburg University, Regensburg, Germany
- Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang
- Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany
- Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe
- School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler
- Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham
- Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen
- Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci
- Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok
- Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee
- University of Southampton, Southampton, UK
- Joy Yeonjoo Lee
- School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen
- Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta
- TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin
- Department of Management, Aarhus University, Aarhus, Denmark
- Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka
- Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz
- Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif
- School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
- Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman
- Eyeviation Systems, Herzliya, Israel
- Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij
- Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
13
Kucharský Š, Zaharieva M, Raijmakers M, Visser I. Habituation, part II. Rethinking the habituation paradigm. INFANT AND CHILD DEVELOPMENT 2022. [DOI: 10.1002/icd.2383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Šimon Kucharský
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, the Netherlands
- Department of Psychological Methods, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, the Netherlands
- Martina Zaharieva
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, the Netherlands
- Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, the Netherlands
- Maartje Raijmakers
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, the Netherlands
- Department of Educational Studies and Learn!, Faculty of Behavioral and Movement Sciences, Free University Amsterdam, Amsterdam, the Netherlands
- Ingmar Visser
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, the Netherlands
- Amsterdam Brain & Cognition (ABC), University of Amsterdam, Amsterdam, the Netherlands
14
Meyer J, Smeeton NJ, Fasold F, Schul K, Schön T, Klatt S. Shot deception in basketball: Gaze and anticipation strategy in defence. Hum Mov Sci 2022; 84:102975. [PMID: 35820258 DOI: 10.1016/j.humov.2022.102975] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 06/24/2022] [Accepted: 06/28/2022] [Indexed: 11/25/2022]
Abstract
Anticipation of teammates and opponents is a critical factor in many sports played in interactive environments. Deceptive actions are used in sports such as basketball to counteract an opponent's anticipation. In this study, we investigated the effects of shot deception on players' anticipation behaviour in basketball. Thirty-one basketball players (15 experts, 16 novices) watched life-sized videos of basketball players performing real shots or shot fakes aimed at the basket. Four different shot outcomes were presented in the video stimuli: a head fake, a ball fake, a high shot fake, and a genuine shot. The videos were temporally occluded at three different time points (-160 ms, -80 ms, and 0 ms relative to ball release) during the shooting motion. The participants had to perform a basketball-related response action to either shots or shot fakes. Response accuracy, response time, and decision confidence were recorded along with gaze behaviour. Anticipation accuracy was reduced at later occlusion points for fake shooting actions; for expert athletes, this effect occurred at later occlusion points than for novices. The gaze analysis of successful and unsuccessful shot anticipations revealed more fixations towards the hip and legs in successful anticipations, whereas more fixations towards the ball and the head were found in unsuccessfully anticipated shots. It is proposed that the hip and leg regions may contain causal information concerning the shooter's vertical trajectory, and that identifying this information may be important for perceiving genuine and deceptive shots in basketball.
Affiliation(s)
- Johannes Meyer
- Institute of Exercise Training and Sport Informatics, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany
- Nicholas J Smeeton
- Sport and Exercise Science and Medicine Research Group, University of Brighton, Mithras House, Lewes Road, Brighton BN2 4AT, United Kingdom
- Frowin Fasold
- Institute of Exercise Training and Sport Informatics, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany
- Karsten Schul
- Institute of Exercise Training and Sport Informatics, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany
- Timo Schön
- Institute of Exercise Training and Sport Informatics, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany
- Stefanie Klatt
- Institute of Exercise Training and Sport Informatics, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany; Sport and Exercise Science and Medicine Research Group, University of Brighton, Mithras House, Lewes Road, Brighton BN2 4AT, United Kingdom
15
Presseller EK, Patarinski AGG, Fan SC, Lampe EW, Juarascio AS. Sensor technology in eating disorders research: A systematic review. Int J Eat Disord 2022; 55:573-624. [PMID: 35489036 DOI: 10.1002/eat.23715] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/20/2021] [Revised: 04/02/2022] [Accepted: 04/03/2022] [Indexed: 12/14/2022]
Abstract
OBJECTIVE Sensor technologies offer exciting potential to objectively measure psychopathological correlates of eating pathology, and eating disorder (ED) research utilizing sensors has rapidly proliferated in the past several years. The aims of the present review are to: (1) characterize the types of sensors that have been utilized in ED research; (2) identify the psychopathological factors relevant to EDs that have been assessed using sensors; (3) describe the data supporting the validity and reliability of these sensors; (4) discuss limitations associated with these sensors; and (5) identify gaps that persist within the ED literature with regard to the use of sensor technologies. METHOD A systematic search was conducted of PubMed, PsycINFO, Web of Science, ProQuest, and "gray" literature sources. Eligible publications were empirical studies that utilized sensors to measure at least one psychological variable among clinical ED populations. RESULTS Sensors have been utilized with ED samples to measure eating behaviors, physical activity, sleep, autonomic nervous system activity, eyeblink startle response, visual attention, and visual-haptic object integration. The reliability and validity of these sensors vary widely, and a number of significant gaps remain in the literature with regard to the types of sensors utilized, the contexts in which sensors have been used, and the populations studied. DISCUSSION The existing literature utilizing sensors within ED research largely supports the feasibility and acceptability of these tools. Sensors should continue to be utilized within the field, with a specific focus on examining the reliability and validity of these tools within ED samples and increasing the diversity of samples studied. PUBLIC SIGNIFICANCE STATEMENT Sensor technologies, such as those included in modern smartwatches, offer new opportunities to measure factors that may maintain or contribute to symptoms of eating disorders. This article describes the types of sensors that have been used in eating disorders research and the challenges that may arise in using them, and discusses new applications of these sensors that may be pursued in future research.
Affiliation(s)
- Emily K Presseller
- Department of Psychological and Brain Sciences, Drexel University, Philadelphia, Pennsylvania, USA
- Center for Weight, Eating, and Lifestyle Science, Drexel University, Philadelphia, Pennsylvania, USA
- Stephanie C Fan
- Department of Psychological and Brain Sciences, Drexel University, Philadelphia, Pennsylvania, USA
- Elizabeth W Lampe
- Department of Psychological and Brain Sciences, Drexel University, Philadelphia, Pennsylvania, USA
- Center for Weight, Eating, and Lifestyle Science, Drexel University, Philadelphia, Pennsylvania, USA
- Adrienne S Juarascio
- Department of Psychological and Brain Sciences, Drexel University, Philadelphia, Pennsylvania, USA
- Center for Weight, Eating, and Lifestyle Science, Drexel University, Philadelphia, Pennsylvania, USA
16
Vehlen A, Standard W, Domes G. How to choose the size of facial areas of interest in interactive eye tracking. PLoS One 2022; 17:e0263594. [PMID: 35120188 PMCID: PMC8815978 DOI: 10.1371/journal.pone.0263594] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Accepted: 01/21/2022] [Indexed: 11/18/2022] Open
Abstract
Advances in eye tracking technology have enabled the development of interactive experimental setups to study social attention. Since these setups differ substantially from the eye tracker manufacturer's test conditions, validation is essential with regard to the quality of gaze data and other factors potentially threatening the validity of this signal. In this study, we evaluated the impact of accuracy and areas of interest (AOIs) size on the classification of simulated gaze (fixation) data. We defined AOIs of different sizes using the Limited-Radius Voronoi-Tessellation (LRVT) method, and simulated gaze data for facial target points with varying accuracy. As hypothesized, we found that accuracy and AOI size had strong effects on gaze classification. In addition, these effects were not independent and differed in falsely classified gaze inside AOIs (Type I errors; false alarms) and falsely classified gaze outside the predefined AOIs (Type II errors; misses). Our results indicate that smaller AOIs generally minimize false classifications as long as accuracy is good enough. For studies with lower accuracy, Type II errors can still be compensated to some extent by using larger AOIs, but at the cost of more probable Type I errors. Proper estimation of accuracy is therefore essential for making informed decisions regarding the size of AOIs in eye tracking research.
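The accuracy/AOI-size trade-off described above can be illustrated with a small simulation (an assumed sketch, not the authors' LRVT-based code): gaze samples aimed at facial landmarks are jittered by Gaussian measurement error and then classified into circular AOIs. The `face` layout and all numbers are hypothetical.

```python
import math
import random

def simulate(aoi_centers, radius_deg, accuracy_deg, n=20000, seed=7):
    """Simulate n fixations aimed at randomly chosen AOI centers, add
    Gaussian measurement error (std = accuracy_deg), and classify each
    sample into the nearest circular AOI of radius radius_deg.
    Returns (correct, type1, type2) as proportions:
      correct - sample lands in its intended AOI
      type1   - sample lands in the *wrong* AOI (false alarm)
      type2   - sample lands in no AOI at all (miss)."""
    rng = random.Random(seed)
    correct = type1 = type2 = 0
    for _ in range(n):
        i = rng.randrange(len(aoi_centers))        # intended landmark
        tx, ty = aoi_centers[i]
        gx = tx + rng.gauss(0.0, accuracy_deg)     # measured gaze
        gy = ty + rng.gauss(0.0, accuracy_deg)
        # nearest AOI center and its distance
        d, j = min((math.hypot(gx - cx, gy - cy), k)
                   for k, (cx, cy) in enumerate(aoi_centers))
        if d > radius_deg:
            type2 += 1
        elif j == i:
            correct += 1
        else:
            type1 += 1
    return correct / n, type1 / n, type2 / n

# Hypothetical schematic face: two eyes and a mouth, in degrees
face = [(-2.0, 1.0), (2.0, 1.0), (0.0, -2.0)]
for radius in (1.0, 2.0, 3.0):
    print(radius, simulate(face, radius, accuracy_deg=1.0))
```

Running this reproduces the qualitative pattern in the abstract: enlarging the AOIs reduces misses (Type II) at the cost of a growing chance of hits on the wrong AOI (Type I), and the balance depends on the accuracy value.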
Affiliation(s)
- Antonia Vehlen
- Department of Psychology, Biological and Clinical Psychology, University of Trier, Trier, Germany
- William Standard
- Department of Psychology, Biological and Clinical Psychology, University of Trier, Trier, Germany
- Gregor Domes
- Department of Psychology, Biological and Clinical Psychology, University of Trier, Trier, Germany
17
Bánki A, de Eccher M, Falschlehner L, Hoehl S, Markova G. Comparing Online Webcam- and Laboratory-Based Eye-Tracking for the Assessment of Infants' Audio-Visual Synchrony Perception. Front Psychol 2022; 12:733933. [PMID: 35087442 PMCID: PMC8787048 DOI: 10.3389/fpsyg.2021.733933] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 12/06/2021] [Indexed: 11/13/2022] Open
Abstract
Online data collection with infants raises special opportunities and challenges for developmental research. One of the most prevalent methods in infancy research is eye-tracking, which has been widely applied in laboratory settings to assess cognitive development. Technological advances now allow conducting eye-tracking online with various populations, including infants. However, the accuracy and reliability of online infant eye-tracking remain to be comprehensively evaluated. No research to date has directly compared webcam-based and in-lab eye-tracking data from infants, as has been done with adults. The present study provides a direct comparison of in-lab and webcam-based eye-tracking data from infants who completed an identical looking time paradigm in two different settings (in the laboratory or online at home). We assessed 4-6-month-old infants (n = 38) in an eye-tracking task that measured the detection of audio-visual asynchrony. Webcam-based and in-lab eye-tracking data were compared on eye-tracking and video data quality, infants' viewing behavior, and experimental effects. Results revealed no differences between the in-lab and online setting in the frequency of technical issues and participant attrition rates. Video data quality was comparable between settings in terms of completeness and brightness, despite lower frame rate and resolution online. Eye-tracking data quality was higher in the laboratory than online, except in the case of relative sample loss. Gaze data quantity recorded by eye-tracking was significantly lower than by video in both settings. In valid trials, eye-tracking and video data captured infants' viewing behavior uniformly, irrespective of setting. Despite the common challenges of infant eye-tracking across experimental settings, our results point toward the necessity to further improve the precision of online eye-tracking with infants. Taken together, online eye-tracking is a promising tool to assess infants' gaze behavior but requires careful data quality control. The demographic composition of both samples differed from the general population in caregiver education: our samples comprised caregivers with higher-than-average education levels, challenging the notion that online studies will per se reach more diverse populations.
Affiliation(s)
- Anna Bánki
- Department of Developmental and Educational Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Martina de Eccher
- Department for Psychology of Language, Georg-Elias-Müller-Institut für Psychologie, Georg-August-Universität Göttingen, Göttingen, Germany
- Lilith Falschlehner
- Department of Developmental and Educational Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Stefanie Hoehl
- Department of Developmental and Educational Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Gabriela Markova
- Department of Developmental and Educational Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
18
De Kloe YJR, Hooge ITC, Kemner C, Niehorster DC, Nyström M, Hessels RS. Replacing eye trackers in ongoing studies: A comparison of eye-tracking data quality between the Tobii Pro TX300 and the Tobii Pro Spectrum. INFANCY 2021; 27:25-45. [PMID: 34687142 DOI: 10.1111/infa.12441] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Revised: 09/06/2021] [Accepted: 09/20/2021] [Indexed: 11/26/2022]
Abstract
The Tobii Pro TX300 is a popular eye tracker in developmental eye-tracking research, yet it is no longer manufactured. If a TX300 breaks down, it may have to be replaced. The data quality of the replacement eye tracker may differ from that of the TX300, which may affect the experimental outcome measures. This is problematic for longitudinal and multi-site studies, and for researchers replacing eye trackers between studies. We, therefore, ask how the TX300 and its successor, the Tobii Pro Spectrum, compare in terms of eye-tracking data quality. Data quality-operationalized through precision, accuracy, and data loss-was compared between eye trackers for three age groups (around 5-months, 10-months, and 3-years). Precision was better for all gaze position signals obtained with the Spectrum in comparison to the TX300. Accuracy of the Spectrum was higher for the 5-month-old and 10-month-old children. For the three-year-old children, accuracy was similar across both eye trackers. Gaze position signals from the Spectrum exhibited lower proportions of data loss, and the duration of the data loss periods tended to be shorter. In conclusion, the Spectrum produces gaze position signals with higher data quality, especially for the younger infants. Implications for data analysis are discussed.
Affiliation(s)
- Yentl J R De Kloe
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Chantal Kemner
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
19
Abstract
Eye-tracking and recording of physiological signals are increasingly used in research within cognitive science and human–computer interaction. For example, gaze position and measures of autonomic arousal, including pupil dilation, skin conductance (SC), and heart rate (HR), provide an indicator of cognitive and physiological processes. The growing popularity of these techniques is partially driven by the emergence of low-cost recording equipment and the proliferation of open-source software for data collection and analysis of such signals. However, the use of new technology requires investigation of its reliability and validation with respect to real-world usage and against established technologies. Accordingly, in two experiments (total N = 69), we assessed the Gazepoint GP3-HD eye-tracker and Gazepoint Biometrics (GPB) system from Gazepoint. We show that the accuracy, precision, and robustness of the eye-tracker are comparable to competing systems. While fixation and saccade events can be reliably extracted, the study of saccade kinematics is affected by the low sampling rate. The GP3-HD is also able to capture psychological effects on pupil dilation in addition to the well-defined pupillary light reflex. Finally, moderate-to-strong correlations between physiological recordings and derived metrics of SC and HR between the GPB and the well-established BIOPAC MP160 support its validity. However, low amplitude of the SC signal obtained from the GPB may reduce sensitivity when separating phasic and tonic components. Similarly, data loss in pulse monitoring may pose difficulties for certain HR variability analyses.
20
Assessing how visual search entropy and engagement predict performance in a multiple-objects tracking air traffic control task. COMPUTERS IN HUMAN BEHAVIOR REPORTS 2021. [DOI: 10.1016/j.chbr.2021.100127] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
21
Avoiding potential pitfalls in visual search and eye-movement experiments: A tutorial review. Atten Percept Psychophys 2021; 83:2753-2783. [PMID: 34089167 PMCID: PMC8460493 DOI: 10.3758/s13414-021-02326-w] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/03/2021] [Indexed: 12/15/2022]
Abstract
Examining eye-movement behavior during visual search is an increasingly popular approach for gaining insights into the moment-to-moment processing that takes place when we look for targets in our environment. In this tutorial review, we describe a set of pitfalls and considerations that are important for researchers – both experienced and new to the field – when engaging in eye-movement and visual search experiments. We walk the reader through the research cycle of a visual search and eye-movement experiment, from choosing the right predictions, through to data collection, reporting of methodology, analytic approaches, the different dependent variables to analyze, and drawing conclusions from patterns of results. Overall, our hope is that this review can serve as a guide, a talking point, a reflection on the practices and potential problems with the current literature on this topic, and ultimately a first step towards standardizing research practices in the field.
22
Koochaki F, Najafizadeh L. A Data-Driven Framework for Intention Prediction via Eye Movement With Applications to Assistive Systems. IEEE Trans Neural Syst Rehabil Eng 2021; 29:974-984. [PMID: 34038364 DOI: 10.1109/tnsre.2021.3083815] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Fast and accurate human intention prediction can significantly advance the performance of assistive devices for patients with limited motor or communication abilities. Among available modalities, eye movement can be valuable for inferring the user's intention, as it can be tracked non-invasively. However, existing limited studies in this domain do not provide the level of accuracy required for the reliable operation of assistive systems. By taking a data-driven approach, this paper presents a new framework that utilizes the spatial and temporal patterns of eye movement along with deep learning to predict the user's intention. In the proposed framework, the spatial patterns of gaze are identified by clustering the gaze points based on their density over displayed images in order to find the regions of interest (ROIs). The temporal patterns of gaze are identified via hidden Markov models (HMMs) to find the transition sequence between ROIs. Transfer learning is utilized to identify the objects of interest in the displayed images. Finally, models are developed to predict the user's intention after completing the task as well as at early stages of the task. The proposed framework is evaluated in an experiment involving predicting intended daily-life activities. Results indicate that an average classification accuracy of 97.42% is achieved, which is considerably higher than existing gaze-based intention prediction studies.
23
Using Brain Activity Patterns to Differentiate Real and Virtual Attended Targets during Augmented Reality Scenarios. INFORMATION 2021. [DOI: 10.3390/info12060226] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023] Open
Abstract
Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this target is real or virtual by using machine learning techniques to classify electroencephalographic (EEG) and eye tracking data collected in augmented reality scenarios. A shallow convolutional neural net classified 3 second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% if the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late fusion approach that included the recorded eye tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
24
The time course of attentional biases in pain: a meta-analysis of eye-tracking studies. Pain 2021; 162:687-701. [PMID: 32960534 DOI: 10.1097/j.pain.0000000000002083] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Accepted: 08/17/2020] [Indexed: 01/15/2023]
Abstract
Previous meta-analyses investigating attentional biases towards pain have used reaction time measures. Eye-tracking methods have been adopted to more directly and reliably assess biases, but this literature has not been synthesized in relation to pain. This meta-analysis aimed to investigate the nature and time course of attentional biases to pain-related stimuli in participants of all ages with and without chronic pain using eye-tracking studies and determine the role of task parameters and theoretically relevant moderators. After screening, 24 studies were included with a total sample of 1425 participants. Between-group analyses revealed no significant overall group differences for people with and without chronic pain on biases to pain-related stimuli. Results indicated significant attentional biases towards pain-related words or pictures across both groups on probability of first fixation (k = 21, g = 0.43, 95% confidence interval [CI] 0.15-0.71, P = 0.002), how long participants looked at each picture in the first 500 ms (500-ms epoch dwell: k = 5, g = 0.69, 95% CI 0.034-1.35, P = 0.039), and how long participants looked at each picture overall (total dwell time: k = 25, g = 0.44, 95% CI 0.15-0.72, P = 0.003). Follow-up analyses revealed significant attentional biases on probability of first fixation, latency to first fixation and dwell time for facial stimuli, and number of fixations for sensory word stimuli. Moderator analyses revealed substantial influence of task parameters and some influence of threat status and study quality. Findings support biases in both vigilance and attentional maintenance for pain-related stimuli but suggest attentional biases towards pain are ubiquitous and not related to pain status.
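The pooled effects reported above are Hedges' g values with 95% confidence intervals. As a worked sketch of how such an effect size is obtained from two group summaries (the input numbers below are illustrative, not data from the meta-analysis):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)
    with an approximate 95% confidence interval."""
    # pooled standard deviation
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled                     # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)              # small-sample correction
    g = j * d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Illustrative group summaries only (mean, SD, n per group):
g, ci = hedges_g(0.62, 0.30, 40, 0.50, 0.28, 40)
print(round(g, 3), tuple(round(x, 3) for x in ci))
```

In the meta-analysis itself these per-study values would then be pooled (e.g., with a random-effects model) to give the aggregate g and CI quoted in the abstract.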
25
Neurogastronomy as a Tool for Evaluating Emotions and Visual Preferences of Selected Food Served in Different Ways. Foods 2021; 10:foods10020354. [PMID: 33562287 PMCID: PMC7914587 DOI: 10.3390/foods10020354] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 01/28/2021] [Accepted: 02/04/2021] [Indexed: 01/22/2023] Open
Abstract
The appearance of food creates expectations about the harmony of taste, delicacy, and overall quality, which in turn affects not only intake itself but also many other aspects of the behavior of customers of catering facilities. The main goal of this article is to determine what effect the visual presentation of food (waffles) prepared from the same ingredients and served in three different ways (on a stone plate, street-food style, and on a classic white plate) has on consumer preferences. In addition to classic tablet-assisted personal interview (TAPI) tools, biometric methods such as eye tracking and face reading were used to obtain unconscious feedback. During testing, the air quality in the room was monitored with an Extech device, and the influence of the food's visual design on the perception of its smell was checked. At the end of the paper, we point out the value of combining classical feedback-collection techniques (TAPI) with measurement of subconscious reactions based on the respondents' eye movements and facial expressions, which provides a new perspective on the perception of visual design and food serving, as well as more effective targeting and use of corporate resources.
26
Freschl J, Melcher D, Carter A, Kaldy Z, Blaser E. Seeing a Page in a Flipbook: Shorter Visual Temporal Integration Windows in 2-Year-Old Toddlers with Autism Spectrum Disorder. Autism Res 2020; 14:946-958. [PMID: 33174396 DOI: 10.1002/aur.2430] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 09/02/2020] [Accepted: 10/26/2020] [Indexed: 12/20/2022]
Abstract
Individuals with autism spectrum disorder (ASD) experience differences in visual temporal processing, the part of vision responsible for parsing continuous input into discrete objects and events. Here we investigated temporal processing in 2-year-old toddlers diagnosed with ASD and age-matched typically developing (TD) toddlers. We used a visual search task where the visibility of the target was determined by the pace of a display sequence. On integration trials, each display viewed alone had no visible target, but if integrated over time, the target became visible. On segmentation trials, the target became visible only when displays were perceptually segmented. We measured the percent of trials when participants fixated the target as a function of the stimulus onset asynchrony (SOA) between displays. We computed the crossover point of the integration and segmentation performance functions for each group, an estimate of the temporal integration window (TIW), the period in which visual input is combined. We found that both groups of toddlers had significantly longer TIWs (125 ms) than adults (65 ms) from previous studies using the same paradigm, and that toddlers with ASD had significantly shorter TIWs (108 ms) than chronologically age-matched TD controls (142 ms). LAY SUMMARY: We investigated how young children, with and without autism, organize dynamic visual information across time, using a visual search paradigm. We found that toddlers with autism had higher temporal resolution than typically developing (TD) toddlers of the same age - that is, they are more likely to be able to detect rapid change across time, relative to TD toddlers. These differences in visual temporal processing can impact how one sees, interprets, and interacts with the world. Autism Res 2021, 14: 946-958. © 2020 International Society for Autism Research and Wiley Periodicals LLC.
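The crossover estimate described above can be illustrated by fitting performance as a function of SOA separately for integration and segmentation trials and solving for the SOA at which the two fits intersect. The linear fits below are a simplifying assumption for illustration; the study's actual fitting procedure may differ, and the function names are hypothetical.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def crossover_soa(soa, perf_integration, perf_segmentation):
    """SOA (ms) where the integration and segmentation fits intersect,
    taken here as an estimate of the temporal integration window."""
    a1, b1 = linear_fit(soa, perf_integration)
    a2, b2 = linear_fit(soa, perf_segmentation)
    return (a2 - a1) / (b1 - b2)
```

Because integration performance falls with SOA while segmentation performance rises, the two fitted lines have opposite slopes and a unique intersection.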
Affiliation(s)
- Julie Freschl
- University of Massachusetts Boston, Department of Psychology, Boston, Massachusetts, USA
- David Melcher
- University of Massachusetts Boston, Department of Psychology, Boston, Massachusetts, USA; University of Trento, Center for Mind/Brain Sciences (CIMeC), Rovereto, Italy
- Alice Carter
- University of Massachusetts Boston, Department of Psychology, Boston, Massachusetts, USA
- Zsuzsa Kaldy
- University of Massachusetts Boston, Department of Psychology, Boston, Massachusetts, USA
- Erik Blaser
- University of Massachusetts Boston, Department of Psychology, Boston, Massachusetts, USA
27
Vrabič N, Juroš B, Tekavčič Pompe M. Automated Visual Acuity Evaluation Based on Preferential Looking Technique and Controlled with Remote Eye Tracking. Ophthalmic Res 2020; 64:389-397. [PMID: 33080607 DOI: 10.1159/000512395] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2020] [Accepted: 10/17/2020] [Indexed: 11/19/2022]
Abstract
OBJECTIVE To establish an automated visual acuity test (AVAT) for infants, based on preferential looking technique and controlled with remote eye tracking. To validate the AVAT in a group of healthy children. To compare AVAT visual acuity (VA) values with corresponding VA values acquired with standard tests (ST). METHODS ST, adapted for age (Keeler Acuity Cards in preverbal children and LEA symbols in verbal children), was performed to obtain monocular VA in a group of 36 healthy children. During AVAT, 9 different stimuli with grating circles that matched spatial frequencies of 9 Keeler Acuity Cards (ranging between 0.29 and 14.5 cycles per degree) were projected on a screen. Three repetitions of each stimulus were shown during 9-s intervals, interchanging with an attention grabber. The remote eye tracker was used to evaluate the proportion of time a child spent looking at each grating circle compared to a homogeneous gray background that matched the grating stimuli in average luminance. From this proportion of time, child's binocular VA was evaluated. RESULTS Ninety-seven percent (35/36) of healthy children successfully completed ST and AVAT. There was an agreement between the results of an ST and AVAT, with Lin's concordance coefficient being 0.53 (95% CI = 0.31-0.72). A tendency was observed toward VA overestimation on AVAT for children with VA >0.4 logMAR on ST and toward VA underestimation on AVAT for children with VA ≤0.4 logMAR on ST. CONCLUSIONS AVAT requires a minimally skilled investigator. The evaluation of better eye monocular VA on ST and binocular VA on AVAT was comparable for healthy children.
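Lin's concordance correlation coefficient, reported above, quantifies agreement between paired measurements, penalizing both weak correlation and systematic offset between the two methods. A minimal sketch of the standard formula (not the study's analysis code; the function name is hypothetical):

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measures."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx2 = sum((xi - mx) ** 2 for xi in x) / n      # biased variances, as in Lin (1989)
    sy2 = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

Perfect agreement gives 1.0; a constant offset between the two measures lowers the coefficient even when the correlation is perfect.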
Affiliation(s)
- Nika Vrabič
- Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia
- Manca Tekavčič Pompe
- Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia; University Eye Clinic, University Medical Centre Ljubljana, Ljubljana, Slovenia
28
Stallworthy IC, Sifre R, Berry D, Lasch C, Smith TJ, Elison JT. Infants' gaze exhibits a fractal structure that varies by age and stimulus salience. Sci Rep 2020; 10:17216. [PMID: 33057030 PMCID: PMC7560596 DOI: 10.1038/s41598-020-73187-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2019] [Accepted: 09/11/2020] [Indexed: 02/01/2023] Open
Abstract
The development of selective visual attention is critical for effectively engaging with an ever-changing world. Its optimal deployment depends upon interactions between neural, motor, and sensory systems across multiple timescales and neurocognitive loci. Previous work illustrates the spatio-temporal dynamics of these processes in adults, but less is known about this emergent phenomenon early in life. Using data (n = 190; 421 visits) collected between 3 and 35 months of age, we examined the spatio-temporal complexity of young children's gaze patterns as they viewed stimuli varying in semantic salience. Specifically, we used detrended fluctuation analysis (DFA) to quantify the extent to which infants' gaze patterns exhibited scale invariant patterns of nested variability, an organizational feature thought to reflect self-organized and optimally flexible system dynamics that are not overly rigid or random. Results indicated that gaze patterns of even the youngest infants exhibited fractal organization that increased with age. Further, fractal organization was greater when children (a) viewed social stimuli compared to stimuli with degraded social information and (b) when they spontaneously gazed at faces. These findings suggest that selective attention is well-organized in infancy, particularly toward social information, and indicate noteworthy growth in these processes across the first years of life.
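Detrended fluctuation analysis, the method named above, integrates the mean-centered series, removes a linear trend within windows of increasing size, and reads the scaling exponent off the slope of log fluctuation versus log window size; exponents near 0.5 indicate uncorrelated noise, while larger values indicate persistent, fractal-like structure. The sketch below is a minimal illustration (the study's pipeline is more involved, and the function name and scale choices are assumptions):

```python
import math

def dfa_alpha(x, scales):
    """Detrended fluctuation analysis: scaling exponent alpha from
    the slope of log F(n) versus log n over the given window sizes."""
    # integrate the mean-centered series (the "profile")
    mean = sum(x) / len(x)
    y, running = [], 0.0
    for xi in x:
        running += xi - mean
        y.append(running)
    logn, logf = [], []
    for n in scales:
        f2, count = 0.0, 0
        for start in range(0, len(y) - n + 1, n):
            seg = y[start:start + n]
            # least-squares linear detrend within the window
            t = range(n)
            tm, sm = (n - 1) / 2.0, sum(seg) / n
            b = (sum((ti - tm) * (si - sm) for ti, si in zip(t, seg))
                 / sum((ti - tm) ** 2 for ti in t))
            a = sm - b * tm
            f2 += sum((si - (a + b * ti)) ** 2 for ti, si in zip(t, seg))
            count += n
        logn.append(math.log(n))
        logf.append(0.5 * math.log(f2 / count))   # log of RMS fluctuation
    # slope of the log-log fit is the scaling exponent
    k = len(logn)
    lm, fm = sum(logn) / k, sum(logf) / k
    return (sum((l - lm) * (f - fm) for l, f in zip(logn, logf))
            / sum((l - lm) ** 2 for l in logn))
```

As a sanity check, a pure linear trend in the input yields an exponent near 2, the signature of strongly persistent structure.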
Affiliation(s)
- Robin Sifre
- Institute of Child Development, University of Minnesota, Minneapolis, USA
- Daniel Berry
- Institute of Child Development, University of Minnesota, Minneapolis, USA
- Carolyn Lasch
- Institute of Child Development, University of Minnesota, Minneapolis, USA
- Tim J Smith
- Department of Psychological Sciences, Birkbeck University of London, London, UK
- Jed T Elison
- Institute of Child Development, University of Minnesota, Minneapolis, USA
- Department of Pediatrics, University of Minnesota, Minneapolis, USA
29
Carter BT, Luke SG. Best practices in eye tracking research. Int J Psychophysiol 2020; 155:49-62. [PMID: 32504653 DOI: 10.1016/j.ijpsycho.2020.05.010] [Citation(s) in RCA: 61] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Revised: 05/26/2020] [Accepted: 05/27/2020] [Indexed: 12/14/2022]
Abstract
This guide describes best practices in using eye tracking technology for research in a variety of disciplines. A basic outline of the anatomy and physiology of the eyes and of eye movements is provided, along with a description of the sorts of research questions eye tracking can address. We then explain how eye tracking technology works and what sorts of data it generates, and provide guidance on how to select and use an eye tracker as well as selecting appropriate eye tracking measures. Challenges to the validity of eye tracking studies are described, along with recommendations for overcoming these challenges. We then outline correct reporting standards for eye tracking studies.
30
Adhanom IB, Lee SC, Folmer E, MacNeilage P. GazeMetrics: An Open-Source Tool for Measuring the Data Quality of HMD-based Eye Trackers. Proceedings. Eye Tracking Research & Applications Symposium 2020; 2020. [PMID: 33791686 DOI: 10.1145/3379156.3391374] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
As virtual reality (VR) garners more attention for eye tracking research, knowledge of accuracy and precision of head-mounted display (HMD) based eye trackers becomes increasingly necessary. It is tempting to rely on manufacturer-provided information about the accuracy and precision of an eye tracker. However, unless data is collected under ideal conditions, these values seldom align with on-site metrics. Therefore, best practices dictate that accuracy and precision should be measured and reported for each study. To address this issue, we provide a novel open-source suite for rigorously measuring accuracy and precision for use with a variety of HMD-based eye trackers. This tool is customizable without having to alter the source code, but changes to the code allow for further alteration. The outputs are available in real time and easy to interpret, making eye tracking with VR more approachable for all users.
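In this context, accuracy is conventionally the mean angular offset between gaze samples and a known fixation target, and precision the RMS of sample-to-sample angular distances. The sketch below illustrates these definitions under a small-angle (planar) approximation; it is not GazeMetrics' actual implementation, and the function name and tuple-based data layout are assumptions.

```python
import math

def accuracy_precision(gaze, target):
    """gaze: (azimuth, elevation) samples in degrees during a fixation;
    target: (azimuth, elevation) of the known fixation point.
    Small-angle approximation: angular coordinates treated as planar."""
    # accuracy: mean angular offset between gaze samples and the target
    offsets = [math.hypot(gx - target[0], gy - target[1]) for gx, gy in gaze]
    accuracy = sum(offsets) / len(offsets)
    # precision: RMS of successive sample-to-sample angular distances
    steps = [math.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(gaze, gaze[1:])]
    precision = math.sqrt(sum(s * s for s in steps) / len(steps))
    return accuracy, precision
```

Note that a perfectly stable signal with a constant offset has poor accuracy but perfect precision, which is why both metrics must be reported.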
31
Voloh B, Watson MR, König S, Womelsdorf T. MAD saccade: statistically robust saccade threshold estimation via the median absolute deviation. J Eye Mov Res 2020; 12:10.16910/jemr.12.8.3. [PMID: 33828776 PMCID: PMC7881893 DOI: 10.16910/jemr.12.8.3] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Saccade detection is a critical step in the analysis of gaze data. A common method for saccade detection is to use a simple threshold for velocity or acceleration values, which can be estimated from the data using the mean and standard deviation. However, this method has the downside of being influenced by the very signal it is trying to detect, the outlying velocities or accelerations that occur during saccades. We propose instead to use the median absolute deviation (MAD), a robust estimator of dispersion that is not influenced by outliers. We modify an algorithm proposed by Nyström and colleagues, and quantify saccade detection performance in both simulated and human data. Our modified algorithm shows a significant and marked improvement in saccade detection - showing both more true positives and fewer false negatives - especially under higher noise levels. We conclude that robust estimators can be widely adopted in other common, automatic gaze classification algorithms due to their ease of implementation.
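The core idea can be sketched in a few lines: estimate the velocity threshold from the median and MAD rather than the mean and standard deviation, so that saccadic outliers cannot inflate the estimate. This is an illustrative sketch only, not the published algorithm (which iterates the threshold within the adaptive framework of Nyström and colleagues); the multiplier k and the function name are assumptions.

```python
import statistics

def mad_velocity_threshold(velocities, k=3.0):
    """Saccade velocity threshold from the median absolute deviation,
    a dispersion estimate that outlying saccadic velocities cannot inflate."""
    med = statistics.median(velocities)
    mad = statistics.median(abs(v - med) for v in velocities)
    # 1.4826 scales the MAD to the SD of a normal distribution
    return med + k * 1.4826 * mad
```

Unlike mean + k*SD, this threshold barely moves when a handful of extreme saccadic velocities are added to the sample.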
32
Park SY, Bacelar CE, Holmqvist K. Dog eye movements are slower than human eye movements. J Eye Mov Res 2020; 12:10.16910/jemr.12.8.4. [PMID: 33828775 PMCID: PMC7881887 DOI: 10.16910/jemr.12.8.4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Eye movement of a species reflects the visual behavior strategy that it has adapted to during its evolution. What are eye movements of domestic dogs (Canis lupus familiaris) like? Investigations of dog eye movements per se have not been done, despite the increasing number of visuo-cognitive studies in dogs using eye-tracking systems. To fill this gap, we have recorded dog eye movements using a video-based eye-tracking system, and compared the dog data to that of humans. We found dog saccades follow the systematic relationships between saccade metrics previously shown in humans and other animal species. Yet, the details of the relationships, and the quantities of each metric of dog saccades and fixations differed from those of humans. Overall, dog saccades were slower and fixations were longer than those of humans. We hope our findings contribute to existing comparative analyses of eye movement across animal species, and also to improvement of algorithms used for classifying eye movement data of dogs.
Affiliation(s)
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna, Austria
- Catarina Espanca Bacelar
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna, Austria
33
Venker CE, Pomper R, Mahr T, Edwards J, Saffran J, Ellis Weismer S. Comparing Automatic Eye Tracking and Manual Gaze Coding Methods in Young Children with Autism Spectrum Disorder. Autism Res 2020; 13:271-283. [PMID: 31622050 PMCID: PMC7359753 DOI: 10.1002/aur.2225] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2019] [Revised: 09/16/2019] [Accepted: 09/22/2019] [Indexed: 01/12/2023]
Abstract
Eye-gaze methods offer numerous advantages for studying cognitive processes in children with autism spectrum disorder (ASD), but data loss may threaten the validity and generalizability of results. Some eye-gaze systems may be more vulnerable to data loss than others, but to our knowledge, this issue has not been empirically investigated. In the current study, we asked whether automatic eye-tracking and manual gaze coding produce different rates of data loss or different results in a group of 51 toddlers with ASD. Data from both systems were gathered (from the same children) simultaneously, during the same experimental sessions. As predicted, manual gaze coding produced significantly less data loss than automatic eye tracking, as indicated by the number of usable trials and the proportion of looks to the images per trial. In addition, automatic eye-tracking and manual gaze coding produced different patterns of results, suggesting that the eye-gaze system used to address a particular research question could alter a study's findings and the scientific conclusions that follow. It is our hope that the information from this and future methodological studies will help researchers to select the eye-gaze measurement system that best fits their research questions and target population, as well as help consumers of autism research to interpret the findings from studies that utilize eye-gaze methods with children with ASD. Autism Res 2020, 13: 271-283. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: The current study found that automatic eye-tracking and manual gaze coding produced different rates of data loss and different overall patterns of results in young children with ASD. These findings show that the choice of eye-gaze system may impact the findings of a study-important information for both researchers and consumers of autism research.
Affiliation(s)
- Ron Pomper
- Waisman Center and Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin
- Tristan Mahr
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin
- Jan Edwards
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin
- Jenny Saffran
- Waisman Center and Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin
- Susan Ellis Weismer
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin
34
Hessels RS, Hooge ITC. Eye tracking in developmental cognitive neuroscience - The good, the bad and the ugly. Dev Cogn Neurosci 2019; 40:100710. [PMID: 31593909 PMCID: PMC6974897 DOI: 10.1016/j.dcn.2019.100710] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2019] [Revised: 07/31/2019] [Accepted: 09/10/2019] [Indexed: 02/07/2023] Open
Abstract
Eye tracking is a popular research tool in developmental cognitive neuroscience for studying the development of perceptual and cognitive processes. However, eye tracking in the context of development is also challenging. In this paper, we ask how knowledge on eye-tracking data quality can be used to improve eye-tracking recordings and analyses in longitudinal research so that valid conclusions about child development may be drawn. We answer this question by adopting the data-quality perspective and surveying the eye-tracking setup, training protocols, and data analysis of the YOUth study (investigating neurocognitive development of 6000 children). We first show how our eye-tracking setup has been optimized for recording high-quality eye-tracking data. Second, we show that eye-tracking data quality can be operator-dependent even after a thorough training protocol. Finally, we report distributions of eye-tracking data quality measures for four age groups (5 months, 10 months, 3 years, and 9 years), based on 1531 recordings. We end with advice for (prospective) developmental eye-tracking researchers and generalizations to other methodologies.
Affiliation(s)
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Developmental Psychology, Utrecht University, Utrecht, The Netherlands.
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
35
Schlegelmilch K, Wertz AE. The Effects of Calibration Target, Screen Location, and Movement Type on Infant Eye-Tracking Data Quality. INFANCY 2019; 24:636-662. [PMID: 32677249 DOI: 10.1111/infa.12294] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Revised: 03/31/2019] [Accepted: 04/03/2019] [Indexed: 11/29/2022]
Abstract
During infant eye-tracking, fussiness caused by the repetition of calibration stimuli and body movements during testing are frequent constraints on measurement quality. Here, we systematically investigated these constraints with infants and adults using EyeLink 1000 Plus. We compared looking time and dispersion of gaze points elicited by stimuli resembling commonly used calibration animations. The adult group additionally performed body movements during gaze recording that were equivalent to movements infants spontaneously produce during testing. In our results, infants' preference for a particular calibration target did not predict data quality elicited by that stimulus, but targets exhibiting the strongest contrasts in their center or targets with globally distributed complexity resulted in the highest accuracy. Our gaze measures from the adult movement tasks were differentially affected by the type of movement as well as the location where the target appeared on the screen. These heterogeneous effects of movement on measures should be taken into account when planning infant eye-tracking experiments. Additionally, to improve data quality, infants' tolerance for repeated calibrations can be facilitated by alternating between precise calibration targets.
Affiliation(s)
- Karola Schlegelmilch
- Max Planck Institute for Human Development, Max Planck Research Group Naturalistic Social Cognition
- Annie E Wertz
- Max Planck Institute for Human Development, Max Planck Research Group Naturalistic Social Cognition
36
Cantrell LM, Kanjlia S, Harrison M, Luck SJ, Oakes LM. Cues to individuation facilitate 6-month-old infants' visual short-term memory. Dev Psychol 2019; 55:905-919. [PMID: 30702312 PMCID: PMC6542570 DOI: 10.1037/dev0000683] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Infants' ability to perform visual short-term memory (VSTM) tasks develops rapidly between 6 and 8 months. Here we tested the hypothesis that infants' VSTM performance is influenced by their ability to individuate simultaneously presented objects. We used a one-shot change detection task to ask whether 6-month-old infants (N = 47) would detect a change in the color of 1 item in a 2-item array when the stimulus context facilitated individuation of the items. In Experiment 1 the 2 items in the display differed in shape and color and in Experiment 2 the onset and offset times of the 2 items differed. In both experiments, 6-month-old infants detected a change, contrasting with previous results. Thus, young infants' encoding of information about individual items in multiple-item arrays is related to their ability to individuate those items. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Affiliation(s)
- Steven J. Luck
- Center for Mind and Brain, UC Davis
- Department of Psychology, UC Davis
- Lisa M. Oakes
- Center for Mind and Brain, UC Davis
- Department of Psychology, UC Davis
37
Dalrymple KA, Jiang M, Zhao Q, Elison JT. Machine learning accurately classifies age of toddlers based on eye tracking. Sci Rep 2019; 9:6255. [PMID: 31000762 PMCID: PMC6472500 DOI: 10.1038/s41598-019-42764-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2018] [Accepted: 04/05/2019] [Indexed: 11/16/2022] Open
Abstract
How people extract visual information from complex scenes provides important information about cognitive processes. Eye tracking studies that have used naturalistic, rather than highly controlled experimental stimuli, reveal that variability in looking behavior is determined by bottom-up image properties such as intensity, color, and orientation, top-down factors such as task instructions and semantic information, and individual differences in genetics, cognitive function and social functioning. These differences are often revealed using areas of interest that are chosen by the experimenter or other human observers. In contrast, we adopted a data-driven approach by using machine learning (Support Vector Machine (SVM) and Deep Learning (DL)) to elucidate factors that contribute to age-related variability in gaze patterns. These models classified the infants by age with a high degree of accuracy, and identified meaningful features distinguishing the age groups. Our results demonstrate that machine learning is an effective tool for understanding how looking patterns vary according to age, providing insight into how toddlers allocate attention and how that changes with development. This sensitivity for detecting differences in exploratory gaze behavior in toddlers highlights the utility of machine learning for characterizing a variety of developmental capacities.
Affiliation(s)
- Ming Jiang
- Computer Science and Engineering, University of Minnesota, Minneapolis, USA
- Qi Zhao
- Computer Science and Engineering, University of Minnesota, Minneapolis, USA
- Jed T Elison
- Institute of Child Development, University of Minnesota, Minneapolis, USA
- Department of Pediatrics, University of Minnesota, Minneapolis, USA