1
Paparelli A, Sokhn N, Stacchi L, Coutrot A, Richoz AR, Caldara R. Idiosyncratic fixation patterns generalize across dynamic and static facial expression recognition. Sci Rep 2024;14:16193. PMID: 39003314; PMCID: PMC11246522; DOI: 10.1038/s41598-024-66619-4.
Abstract
Facial expression recognition (FER) is crucial for understanding the emotional state of others during human social interactions. It has been assumed that humans share universal visual sampling strategies to achieve this task. However, recent studies in face identification have revealed striking idiosyncratic fixation patterns, questioning the universality of face processing. More importantly, very little is known about whether such idiosyncrasies extend to the biologically relevant recognition of static and dynamic facial expressions of emotion (FEEs). To clarify this issue, we tracked observers' eye movements while they categorized static and ecologically valid dynamic faces displaying the six basic FEEs, all normalized for presentation time (1 s), contrast, and global luminance across exposure time. We then used robust data-driven analyses combining statistical fixation maps with hidden Markov models to explore eye movements across FEEs and stimulus modalities. Our data revealed three spatially and temporally distinct, equally occurring face-scanning strategies during FER. Crucially, these visual sampling strategies were mostly comparable in effectiveness for FER and highly consistent across FEEs and modalities. Our findings show that spatiotemporal idiosyncratic gaze strategies also occur for the biologically relevant recognition of FEEs, further questioning the universality of FER and, more generally, of face processing.
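Hidden-Markov-model analyses of gaze, as used in this study, model a fixation sequence probabilistically and score it under competing models. The published work fits models from data with dedicated toolboxes; as a rough, self-contained illustration only (the ROI coding, transition/emission probabilities, and the two candidate "strategies" below are all hypothetical), a scaled forward algorithm over discrete ROI fixations can indicate which scanning strategy better explains a gaze sequence:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]               # initial step
    loglik = 0.0
    for t, o in enumerate(obs):
        if t > 0:
            alpha = (alpha @ A) * B[:, o]   # predict + update
        c = alpha.sum()                     # scaling constant
        loglik += np.log(c)
        alpha /= c                          # normalize to avoid underflow
    return loglik

# Hypothetical ROI coding: 0 = left eye, 1 = right eye, 2 = nose, 3 = mouth
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])                      # sticky two-state dynamics
B_eyes = np.array([[0.45, 0.45, 0.05, 0.05],    # "eyes-focused" strategy
                   [0.10, 0.10, 0.40, 0.40]])
B_mouth = np.array([[0.05, 0.05, 0.45, 0.45],   # "mouth-focused" strategy
                    [0.25, 0.25, 0.25, 0.25]])

seq = [0, 1, 0, 0, 1, 2, 0, 1, 1, 0]            # a mostly eye-directed scanpath
ll_eyes = forward_loglik(seq, pi, A, B_eyes)
ll_mouth = forward_loglik(seq, pi, A, B_mouth)
```

An observer whose sequences consistently score higher under one model would be assigned to that strategy cluster; the actual analyses additionally learn the models from the data rather than fixing them by hand.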
Affiliation(s)
- Anita Paparelli: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland
- Nayla Sokhn: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland
- Lisa Stacchi: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland
- Antoine Coutrot: Laboratoire d'Informatique en Image et Systèmes d'information, French Centre National de la Recherche Scientifique, University of Lyon, Lyon, France
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland
2
Qi R, Zheng Y, Yang Y, Cao CC, Hsiao JH. Explanation strategies in humans versus current explainable artificial intelligence: Insights from image classification. Br J Psychol 2024. PMID: 38858823; DOI: 10.1111/bjop.12714.
Abstract
Explainable AI (XAI) methods provide explanations of AI models, but our understanding of how they compare with human explanations remains limited. Here, we used eye tracking to examine participants' attention strategies when classifying images and when explaining how they classified them, and compared these strategies with saliency-based explanations from current XAI methods. Humans adopted more explorative attention strategies for the explanation task than for the classification task itself. Clustering identified two representative explanation strategies: one involved focused visual scanning of foreground objects together with more conceptual explanations, which contained more specific information for inferring class labels; the other involved explorative scanning together with more visual explanations, which were rated as more effective for early category learning. Interestingly, XAI saliency-map explanations were most similar to the explorative attention strategy in humans, and explanations highlighting discriminative features derived from observable causality through perturbation were more similar to human strategies than those highlighting internal features associated with a higher class score. Thus, humans use both visual and conceptual information during explanation, each serving different purposes, and XAI methods that highlight features informing observable causality match human explanations better and are potentially more accessible to users.
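The perturbation-based saliency methods compared in this entry estimate feature importance by occluding parts of the input and measuring the drop in the classifier's score. A minimal sketch of that idea (the image, patch size, and toy scoring function below are invented for illustration, not taken from the study):

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=2):
    """Slide a zero-valued occluder over the image; the saliency of a pixel
    accumulates the score drop of every patch that covers it."""
    base = score_fn(image)
    H, W = image.shape
    sal = np.zeros((H, W))
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            sal[i:i + patch, j:j + patch] += base - score_fn(occluded)
    return sal

img = np.ones((6, 6))
# Toy "classifier": its score depends only on the centre 2x2 region,
# so the saliency map should light up there and stay flat elsewhere.
score = lambda im: im[2:4, 2:4].sum()
sal = occlusion_saliency(img, score)
```

Because the score drop is an observable causal effect of removing input content, maps like this are the kind of "observable causality" explanation the study found to match human strategies more closely.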
Affiliation(s)
- Ruoxi Qi: Department of Psychology, University of Hong Kong, Hong Kong SAR, China
- Yueyuan Zheng: Department of Psychology, University of Hong Kong, Hong Kong SAR, China; Huawei Research Hong Kong, Hong Kong SAR, China
- Yi Yang: Huawei Research Hong Kong, Hong Kong SAR, China
- Caleb Chen Cao: Huawei Research Hong Kong, Hong Kong SAR, China; Big Data Institute, Hong Kong University of Science and Technology, Hong Kong SAR, China
- Janet H Hsiao: Division of Social Science, Hong Kong University of Science and Technology, Hong Kong SAR, China
3
Butcher N, Bennetts RJ, Sexton L, Barbanta A, Lander K. Eye movement differences when recognising and learning moving and static faces. Q J Exp Psychol (Hove) 2024:17470218241252145. PMID: 38644390; DOI: 10.1177/17470218241252145.
Abstract
Seeing a face in motion can help subsequent face recognition. Several explanations have been proposed for this "motion advantage," but other factors that might play a role have received less attention. For example, facial movement might enhance recognition by attracting attention to the internal facial features, thereby facilitating identification. However, there is no direct evidence that motion increases attention to the regions of the face that facilitate identification (i.e., the internal features) relative to static faces. We tested this hypothesis by recording participants' eye movements while they completed famous face recognition (Experiment 1, N = 32) and face-learning (Experiment 2, N = 60; Experiment 3, N = 68) tasks, with presentation style manipulated (moving or static). Across all three experiments, a motion advantage was found, and participants directed a higher proportion of fixations to the internal features (i.e., eyes, nose, and mouth) of moving than of static faces. Conversely, the proportion of fixations to the internal non-feature areas (i.e., cheeks, forehead, chin) and the external area (Experiment 3) was significantly reduced for moving compared with static faces (all ps < .05). The results suggest that during both familiar and unfamiliar face recognition, facial motion is associated with increased attention to internal facial features, but only during familiar face recognition is the magnitude of the motion advantage significantly related to the proportion of fixations directed to the internal features.
Affiliation(s)
- Natalie Butcher: Department of Psychology, Teesside University, Middlesbrough, UK
- Laura Sexton: Department of Psychology, Teesside University, Middlesbrough, UK; School of Psychology, Faculty of Health Sciences and Wellbeing, University of Sunderland, Sunderland, UK
- Karen Lander: Division of Psychology, Communication and Human Neuroscience, University of Manchester, Manchester, UK
4
Xu K. Insights into the relationship between eye movements and personality traits in restricted visual fields. Sci Rep 2024;14:10261. PMID: 38704441; PMCID: PMC11069522; DOI: 10.1038/s41598-024-60992-w.
Abstract
Previous studies have suggested that behavioral patterns such as visual attention and eye movements relate to individual personality traits. However, these studies have mainly used free-viewing tasks, and the impact of visual field restriction remains inadequately understood. The primary objective of this study was to elucidate the patterns of conscious eye movements induced by visual field restriction and to examine how these patterns relate to individual personality traits. Building on previous research, we sought new insights through two behavioral experiments probing the relationship between visual behaviors and individual personality traits. Both Experiment 1 and Experiment 2 revealed differences in eye movements between free observation and visual field restriction. In particular, simulations based on the analyzed data showed clear distinctions in eye movements between the free observation and visual field restriction conditions, suggesting that eye movements during free observation involve a mixture of conscious and unconscious components. Furthermore, we observed significant correlations between conscious eye movements and personality traits, with more pronounced effects under the visual field restriction used in Experiment 2 than in Experiment 1. These findings provide a novel perspective on human cognitive processes through visual perception.
Affiliation(s)
- Kuangzhe Xu: Institute for Promotion of Higher Education, Hirosaki University, Aomori 036-8560, Japan
5
Zuo F, Jing P, Sun J, Duan J, Ji Y, Liu Y. Deep Learning-Based Eye-Tracking Analysis for Diagnosis of Alzheimer's Disease Using 3D Comprehensive Visual Stimuli. IEEE J Biomed Health Inform 2024;28:2781-2793. PMID: 38349825; DOI: 10.1109/jbhi.2024.3365172.
Abstract
Alzheimer's disease (AD) is a neurodegenerative disorder that causes a continuous decline in cognitive functions and eventually results in death. Early AD diagnosis is important for taking active measures to slow its deterioration. Traditional diagnoses are usually based on clinical experience, which is limited by several practical factors. In this paper, we focus on exploiting deep learning techniques to diagnose AD based on eye-tracking behaviors. Visual attention, as a typical eye-tracking behavior, is of great clinical value for detecting cognitive abnormalities in AD patients. To better analyze the differences in visual attention between AD patients and healthy controls, we first conducted a 3D comprehensive visual task on a noninvasive eye-tracking system to collect visual attention heatmaps. We then propose a multilayered comparison convolutional neural network (MC-CNN) to distinguish the visual attention differences between AD patients and controls. In MC-CNN, multilayered feature representations of the heatmaps are obtained by hierarchical residual blocks to better encode eye-movement behaviors, and are further integrated into a distance vector to benefit the comprehensive visual task. In evaluation, MC-CNN distinguished AD patients from controls with 0.84 accuracy, 0.86 recall, 0.82 precision, 0.83 F1-score, and 0.90 area under the curve (AUC). These results demonstrate the effectiveness of the proposed MC-CNN for AD diagnosis based on the comprehensive 3D visual task.
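The evaluation metrics reported in this entry (accuracy, recall, precision, F1, AUC) are the standard binary-classification measures; the study's own numbers cannot be reproduced here, but the definitions can be sketched from scratch on toy labels and scores (all data below are made up for illustration):

```python
import numpy as np

def binary_metrics(y_true, y_pred, y_score):
    """Accuracy, precision, recall and F1 from the confusion counts,
    plus AUC via the Mann-Whitney (rank-sum) formulation."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / y_true.size
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    scores = np.asarray(y_score, dtype=float)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    # AUC = P(a random positive scores above a random negative); ties count 1/2
    auc = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])
    return acc, prec, rec, f1, auc

acc, prec, rec, f1, auc = binary_metrics(
    y_true=[1, 1, 1, 0, 0],
    y_pred=[1, 1, 0, 0, 1],
    y_score=[0.9, 0.8, 0.4, 0.3, 0.6],
)
```

Reporting recall alongside precision matters here because the clinical costs of missing a patient (false negative) and of a false alarm differ.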
6
Xu K, Matsuka T. Conscious observational behavior in recognizing landmarks in facial expressions. PLoS One 2023;18:e0291735. PMID: 37792713; PMCID: PMC10550163; DOI: 10.1371/journal.pone.0291735.
Abstract
The present study investigated (1) how well humans can recognize facial expressions represented by a small set of landmarks, a technique commonly used in machine-learning facial recognition, and (2) how conscious observational behaviors differ when recognizing different types of expressions. Our video stimuli consisted of facial expressions represented by 68 landmark points. Conscious observational behaviors were measured by movements of the mouse cursor, with only a small area around the cursor visible to participants. We constructed Bayesian models to analyze how personality traits and observational behaviors influenced how participants recognized different facial expressions. We found that humans could recognize positive expressions with high accuracy, similar to machine learning, even when faces were represented by a small set of landmarks. Although humans fared better than machine learning, recognition of negative expressions was not as accurate as for positive ones. Our results also showed that personality traits and conscious observational behaviors significantly influenced the recognition of facial expressions. For example, people high in agreeableness could correctly recognize faces expressing happiness by observing several areas of the face without focusing on any specific part for very long. These results suggest a mechanism whereby personality traits lead to different conscious observational behaviors, and recognition of facial expressions is based on the information obtained through those behaviors.
Affiliation(s)
- Kuangzhe Xu: Institute for Promotion of Higher Education, Hirosaki University, Aomori, Japan
- Toshihiko Matsuka: Department of Cognitive and Information Science, Chiba University, Chiba, Japan
7
Rodger H, Sokhn N, Lao J, Liu Y, Caldara R. Developmental eye movement strategies for decoding facial expressions of emotion. J Exp Child Psychol 2023;229:105622. PMID: 36641829; DOI: 10.1016/j.jecp.2022.105622.
Abstract
In our daily lives, we routinely look at the faces of others to try to understand how they are feeling. Few studies have examined the perceptual strategies that are used to recognize facial expressions of emotion, and none have attempted to isolate visual information use with eye movements throughout development. Therefore, we recorded the eye movements of children from 5 years of age up to adulthood during recognition of the six "basic emotions" to investigate when perceptual strategies for emotion recognition become mature (i.e., most adult-like). Using iMap4, we identified the eye movement fixation patterns for recognition of the six emotions across age groups in natural viewing and gaze-contingent (i.e., expanding spotlight) conditions. While univariate analyses failed to reveal significant differences in fixation patterns, more sensitive multivariate distance analyses revealed a U-shaped developmental trajectory with the eye movement strategies of the 17- to 18-year-old group most similar to adults for all expressions. A developmental dip in strategy similarity was found for each emotional expression revealing which age group had the most distinct eye movement strategy from the adult group: the 13- to 14-year-olds for sadness recognition; the 11- to 12-year-olds for fear, anger, surprise, and disgust; and the 7- to 8-year-olds for happiness. Recognition performance for happy, angry, and sad expressions did not differ significantly across age groups, but the eye movement strategies for these expressions diverged for each group. Therefore, a unique strategy was not a prerequisite for optimal recognition performance for these expressions. Our data provide novel insights into the developmental trajectories underlying facial expression recognition, a critical ability for adaptive social relations.
Affiliation(s)
- Helen Rodger: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Nayla Sokhn: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Junpeng Lao: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Yingdi Liu: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
8
Doidy F, Desaunay P, Rebillard C, Clochon P, Lambrechts A, Wantzen P, Guénolé F, Baleyte JM, Eustache F, Bowler DM, Lebreton K, Guillery-Girard B. How scene encoding affects memory discrimination: Analysing eye movements data using data driven methods. Visual Cognition 2023. DOI: 10.1080/13506285.2023.2188335.
Affiliation(s)
- F. Doidy: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- P. Desaunay: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France; Service de Psychiatrie de l'enfant et de l'adolescent, CHU de Caen, Caen, France
- C. Rebillard: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- P. Clochon: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- A. Lambrechts: Autism Research Group, Department of Psychology, City, University of London, London, UK
- P. Wantzen: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- F. Guénolé: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France; Service de Psychiatrie de l'enfant et de l'adolescent, CHU de Caen, Caen, France
- J. M. Baleyte: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France; Service de Psychiatrie de l'enfant et de l'adolescent, Centre Hospitalier Interuniversitaire de Créteil, Créteil, France
- F. Eustache: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- D. M. Bowler: Autism Research Group, Department of Psychology, City, University of London, London, UK
- K. Lebreton: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- B. Guillery-Girard: Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
9
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023;55:364-416. PMID: 35384605; PMCID: PMC9535040; DOI: 10.3758/s13428-021-01762-8.
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
Affiliation(s)
- Kenneth Holmqvist: Department of Psychology, Nicolaus Copernicus University, Torun, Poland; Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa; Department of Psychology, Regensburg University, Regensburg, Germany
- Saga Lee Örbom: Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander: Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut: Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang: Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany; Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe: School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn: School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler: Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham: Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen: Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci: Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox: Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok: Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands; Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee: University of Southampton, Southampton, UK
- Joy Yeonjoo Lee: School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen: Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik: Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta: TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde: Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann: Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin: Department of Management, Aarhus University, Aarhus, Denmark; Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan: Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park: Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka: Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock: The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz: Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda: Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif: School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic: Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA; Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman: Eyeviation Systems, Herzliya, Israel; Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas: The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij: Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
10
Rossion B. Twenty years of investigation with the case of prosopagnosia PS to understand human face identity recognition. Part I: Function. Neuropsychologia 2022;173:108278. DOI: 10.1016/j.neuropsychologia.2022.108278.
11
Franceschiello B, Noto TD, Bourgeois A, Murray MM, Minier A, Pouget P, Richiardi J, Bartolomeo P, Anselmi F. Machine learning algorithms on eye tracking trajectories to classify patients with spatial neglect. Comput Methods Programs Biomed 2022;221:106929. PMID: 35675721; DOI: 10.1016/j.cmpb.2022.106929.
Abstract
BACKGROUND AND OBJECTIVE: Eye-movement trajectories are rich behavioral data, providing a window on how the brain processes information. We address the challenge of characterizing signs of visuo-spatial neglect from saccadic eye trajectories recorded in brain-damaged patients with spatial neglect as well as in healthy controls during a visual search task.
METHODS: We establish a standardized pre-processing pipeline adaptable to other task-based eye-tracker measurements. We use traditional machine learning algorithms together with deep convolutional networks (both 1D and 2D) to automatically analyze eye trajectories.
RESULTS: Our top-performing machine learning models classified neglect patients vs. healthy individuals with an area under the ROC curve (AUC) ranging from 0.83 to 0.86. Moreover, the 1D convolutional neural network scores correlated with the degree of severity of neglect behavior as estimated with standardized paper-and-pencil tests and with the integrity of white matter tracts measured from diffusion tensor imaging (DTI). Interestingly, the latter showed a clear correlation with the third branch of the superior longitudinal fasciculus (SLF), which is especially damaged in neglect.
CONCLUSIONS: The study introduces new methods for both the pre-processing and the classification of eye-movement trajectories in patients with neglect syndrome. The proposed methods can likely be applied to other types of neurological diseases, opening the possibility of new computer-aided, precise, sensitive and non-invasive diagnostic tools.
Affiliation(s)
- Benedetta Franceschiello: The LINE (Laboratory for Investigative Neurophysiology), Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; CIBM Center for Biomedical Imaging, Lausanne, Switzerland; Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; The Sense Innovation and Research Center, Lausanne and Sion, Switzerland; School of Engineering, Institute of Systems Engineering, HES-SO Valais-Wallis, Route de L'industrie 23, Sion, Switzerland
- Tommaso Di Noto: Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Alexia Bourgeois: Laboratory of Cognitive Neurorehabilitation, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Micah M Murray: The LINE (Laboratory for Investigative Neurophysiology), Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles and University of Lausanne, Lausanne, Switzerland; CIBM Center for Biomedical Imaging, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA; The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Astrid Minier: The LINE (Laboratory for Investigative Neurophysiology), Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles and University of Lausanne, Lausanne, Switzerland
- Pierre Pouget: Laboratory of Cognitive Neurorehabilitation, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Jonas Richiardi: Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Paolo Bartolomeo: Sorbonne Université, Inserm, CNRS, Institut du Cerveau - Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, Paris, France
- Fabio Anselmi: Center for Neuroscience and Artificial Intelligence, Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; Center for Brains, Minds, and Machines, McGovern Institute for Brain Research at MIT, Cambridge, MA, USA
12
Cho VY, Hsiao JH, Chan AB, Ngo HC, King NM, Anthonappa RP. Eye movement analysis of children's attention for midline diastema. Sci Rep 2022; 12:7462. [PMID: 35523808 PMCID: PMC9076614 DOI: 10.1038/s41598-022-11174-z]
Abstract
No previous studies have investigated eye-movement patterns to show how children process information while viewing clinical images. This study therefore explored children's and their educators' perception of a midline diastema by applying eye movement analysis with hidden Markov models (EMHMM). A total of 155 children between 2.5 and 5.5 years of age and their educators (n = 34) viewed pictures with and without a midline diastema while a Tobii Pro Nano eye tracker recorded their eye movements. Fixation data were analysed with EMHMM using both data-driven and fixed regions-of-interest (ROI) approaches. Two different eye-movement patterns were identified: an explorative pattern (76%), in which children's ROIs were predominantly around the nose and mouth, and a focused pattern (26%), in which children's ROIs were precise, located on the teeth with and without a diastema, and fixations transited among the ROIs with similar frequencies. Females showed a significantly higher eye-movement preference than males for the image without a diastema. Comparisons between age groups showed a statistically significant difference in overall entropies: the 3.6-4.5-year age group exhibited higher entropies, indicating lower eye-movement consistency. In addition, children and their educators exhibited two specific eye-movement patterns: children with the explorative pattern looked at the midline diastema more often, whereas their educators focused on the image without a diastema. Thus, EMHMM is valuable for analysing eye-movement patterns in children and adults.
Affiliation(s)
- Vanessa Y Cho
- UWA Dental School, The University of Western Australia, 17 Monash Avenue, Nedlands, WA, 6009, Australia
- Janet H Hsiao
- Department of Psychology, University of Hong Kong, Pok Fu Lam, Hong Kong SAR
- Antoni B Chan
- Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong SAR
- Hien C Ngo
- UWA Dental School, The University of Western Australia, 17 Monash Avenue, Nedlands, WA, 6009, Australia
- Nigel M King
- UWA Dental School, The University of Western Australia, 17 Monash Avenue, Nedlands, WA, 6009, Australia
- Robert P Anthonappa
- UWA Dental School, The University of Western Australia, 17 Monash Avenue, Nedlands, WA, 6009, Australia
13
Eye movements in Parkinson's disease during visual search. J Neurol Sci 2022; 440:120299. [DOI: 10.1016/j.jns.2022.120299]
14
Masulli P, Galazka M, Eberhard D, Johnels JÅ, Gillberg C, Billstedt E, Hadjikhani N, Andersen TS. Data-driven analysis of gaze patterns in face perception: Methodological and clinical contributions. Cortex 2021; 147:9-23. [PMID: 34998084 DOI: 10.1016/j.cortex.2021.11.011]
Abstract
Gaze patterns during face perception have been shown to relate to psychiatric symptoms. Standard analysis of gaze behavior includes calculating fixations within arbitrarily predetermined areas of interest. In contrast to this approach, we present an objective, data-driven method for the analysis of gaze patterns and their relation to diagnostic test scores. This method was applied to data acquired in an adult sample (N = 111) of psychiatry outpatients while they freely looked at images of human faces. Dimensional symptom scores of autism, attention deficit, and depression were collected. A linear regression model based on Principal Component Analysis coefficients computed for each participant was used to model symptom scores. We found that specific components of gaze patterns predicted autistic traits as well as depression symptoms. Gaze patterns shifted away from the eyes with increasing autism traits, a well-known effect. Additionally, the model revealed a lateralization component, with a reduction of the left visual field bias increasing with both autistic traits and depression symptoms independently. Taken together, our model provides a data-driven alternative for gaze data analysis, which can be applied to dimensionally, rather than categorically, defined clinical subgroups within a variety of contexts. The methodological and clinical contributions of this approach are discussed.
Affiliation(s)
- Paolo Masulli
- Department of Applied Mathematics and Computer Science, DTU Compute, Section of Cognitive Systems, Technical University of Denmark, Kgs. Lyngby, Denmark; iMotions A/S, Copenhagen V, Denmark
- Martyna Galazka
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden
- David Eberhard
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden
- Eva Billstedt
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden
- Nouchine Hadjikhani
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden; Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Tobias S Andersen
- Department of Applied Mathematics and Computer Science, DTU Compute, Section of Cognitive Systems, Technical University of Denmark, Kgs. Lyngby, Denmark
15
Liu W, Li M, Zou X, Raj B. Discriminative Dictionary Learning for Autism Spectrum Disorder Identification. Front Comput Neurosci 2021; 15:662401. [PMID: 34819846 PMCID: PMC8606656 DOI: 10.3389/fncom.2021.662401]
Abstract
Autism Spectrum Disorder (ASD) is a group of lifelong neurodevelopmental disorders with complicated causes. A key symptom of ASD patients is their impaired interpersonal communication ability. Recent studies show that the face scanning patterns of individuals with ASD often differ from those of typically developing (TD) individuals. Such abnormality motivates us to study the feasibility of identifying ASD children based on their face scanning patterns with machine learning methods. In this paper, we consider using the bag-of-words (BoW) model to encode face scanning patterns, and propose a novel dictionary learning method based on dual mode seeking for better BoW representation. Unlike k-means, which is broadly used in conventional BoW models to learn dictionaries, the proposed method captures discriminative information by finding atoms that maximize both the purity and coverage of their member samples within one class. Compared to the rich literature of ASD studies in psychology and neuroscience, our work marks one of the relatively few attempts to directly identify high-functioning ASD children with machine learning methods. Experiments demonstrate the superior performance of our method, with considerable gains over several baselines. Although the proposed work is as yet too preliminary to directly replace existing autism diagnostic observation schedules in clinical practice, it sheds light on future applications of machine learning methods in early screening for ASD.
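To make the encoding step concrete, here is a minimal sketch of how a scanpath can be turned into a bag-of-words histogram over a fixation dictionary. The dictionary atoms and fixation format are illustrative assumptions; the paper's dual-mode-seeking dictionary learning itself is not reproduced here.

```python
def bow_encode(fixations, atoms):
    """Encode a scanpath as a bag-of-words histogram: assign each (x, y)
    fixation to its nearest dictionary atom and count the assignments."""
    hist = [0] * len(atoms)
    for fx, fy in fixations:
        # Nearest atom by squared Euclidean distance.
        best = min(range(len(atoms)),
                   key=lambda i: (fx - atoms[i][0]) ** 2 + (fy - atoms[i][1]) ** 2)
        hist[best] += 1
    # Normalize so scanpaths of different lengths are comparable.
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

The resulting fixed-length vectors can then be fed to any standard classifier; the discriminative contribution of the paper lies in how the atoms themselves are chosen, not in this encoding step.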
Affiliation(s)
- Wenbo Liu
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, United States
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Ming Li
- Data Science Research Center, Duke Kunshan University, Suzhou, China
- School of Computer Science, Wuhan University, Wuhan, China
- Xiaobing Zou
- The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Bhiksha Raj
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, United States
- Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA, United States
16
Onwuegbusi T, Hermens F, Hogue T. Data-driven group comparisons of eye fixations to dynamic stimuli. Q J Exp Psychol (Hove) 2021; 75:989-1003. [PMID: 34507503 PMCID: PMC9016662 DOI: 10.1177/17470218211048060]
Abstract
Recent advances in software and hardware have allowed eye tracking to move away from static images to more ecologically relevant video streams. The analysis of eye tracking data for such dynamic stimuli, however, is not without challenges. The frame-by-frame coding of regions of interest (ROIs) is labour-intensive and computer vision techniques to automatically code such ROIs are not yet mainstream, restricting the use of such stimuli. Combined with the more general problem of defining relevant ROIs for video frames, methods are needed that facilitate data analysis. Here, we present a first evaluation of an easy-to-implement data-driven method with the potential to address these issues. To test the new method, we examined the differences in eye movements of self-reported politically left- or right-wing leaning participants to video clips of left- and right-wing politicians. The results show that our method can accurately predict group membership on the basis of eye movement patterns, isolate video clips that best distinguish people on the political left-right spectrum, and reveal the section of each video clip with the largest group differences. Our methodology thereby aids the understanding of group differences in gaze behaviour, and the identification of critical stimuli for follow-up studies or for use in saccade diagnosis.
Affiliation(s)
- Frouke Hermens
- School of Psychology, University of Lincoln, Lincoln, UK
- Todd Hogue
- School of Psychology, University of Lincoln, Lincoln, UK
17
Liu H, Hu X, Ren Y, Wang L, Guo L, Guo CC, Han J. Neural Correlates of Interobserver Visual Congruency in Free-Viewing Condition. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2020.3002765]
18
Saliency-Based Gaze Visualization for Eye Movement Analysis. Sensors 2021; 21:5178. [PMID: 34372413 PMCID: PMC8348507 DOI: 10.3390/s21155178]
Abstract
Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. During these analyses, eye movement data and the saliency map are presented to analysts as separate or merged views. However, analysts become frustrated when they need to memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand the visual attention of an observer.
19
Prunty JE, Keemink JR, Kelly DJ. Infants scan static and dynamic facial expressions differently. Infancy 2021; 26:831-856. [PMID: 34288344 DOI: 10.1111/infa.12426]
Abstract
Despite facial expressions being inherently dynamic phenomena, much of our understanding of how infants attend to and scan them is based on static face stimuli. Here we investigate how six-, nine-, and twelve-month-old infants allocate their visual attention toward dynamic-interactive videos of the six basic emotional expressions, and compare their responses with static images of the same stimuli. We find that infants show clear differences in how they attend to and scan dynamic and static expressions, looking longer toward the dynamic-face and lower-face regions. Infants across all age groups show differential interest in expressions, and show precise scanning of regions "diagnostic" for emotion recognition. These data also indicate that infants' attention toward dynamic expressions develops over the first year of life, including relative increases in interest and scanning precision toward some negative facial expressions (e.g., anger, fear, and disgust).
Affiliation(s)
- David J Kelly
- School of Psychology, University of Kent, Canterbury, UK
20
Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels. Sensors 2021; 21:4686. [PMID: 34300425 PMCID: PMC8309511 DOI: 10.3390/s21144686]
Abstract
Many gaze data visualization techniques intuitively show eye movement together with the visual stimuli. An eye tracker records a large number of eye movements within a short period, so visualizing raw gaze data on top of the visual stimulus appears complicated and cluttered, making it difficult to gain insight from the visualization. To avoid this complication, fixation identification algorithms are often employed to produce more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with the scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing, yet it is difficult to determine how these algorithms affect gaze movement pattern visualizations, and scientists often spend much time manually adjusting the algorithms' parameters. In this paper, we propose a gaze behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and the machine learning-based, behavior-based models on various visualizations at each abstraction level, such as the attention map, scanpath, and abstract gaze movement visualization.
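As a concrete illustration of the preprocessing step this entry compares, here is a minimal sketch of dispersion-based fixation identification (I-DT) in Python. The thresholds and the (t, x, y) sample format are illustrative assumptions, not values taken from the paper.

```python
def dispersion(points):
    """Dispersion of a window of (t, x, y) samples: x-range plus y-range."""
    xs = [p[1] for p in points]
    ys = [p[2] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """Return fixations as (t_start, t_end, cx, cy) tuples.
    samples: time-ordered (t, x, y) tuples; thresholds are illustrative."""
    fixations = []
    i = 0
    while i < len(samples):
        # Start with the smallest window spanning min_duration.
        j = i
        while j < len(samples) and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= len(samples):
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Grow the window while dispersion stays under threshold.
            while j + 1 < len(samples) and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            window = samples[i:j + 1]
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((samples[i][0], samples[j][0], cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations
```

The entry's point is visible even in this sketch: the fixation sequence (and hence any abstract visualization built from it) depends directly on `max_dispersion` and `min_duration`, which is why parameter tuning and algorithm choice matter so much downstream.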
21
Efficient calculations of NSS-based gaze similarity for time-dependent stimuli. Behav Res Methods 2021; 54:94-116. [PMID: 34109561 DOI: 10.3758/s13428-021-01562-0]
Abstract
The degree of spatial similarity between the gaze of participants viewing dynamic stimuli such as videos has previously been measured using metrics based on the NSS (Normalized Scanpath Saliency). Methods currently used to calculate this metric rely upon a numerical grid, which can be computationally prohibitive for a variety of otherwise useful applications such as Monte Carlo analyses. In the present work we derive a new analytical calculation method for the same metric that yields equally or more accurate results, but with speeds that can be orders of magnitude faster (depending on parameters). Our analytical method scales well with dimensionality and could also be of use for other applications. The drawback is that it can become very slow if the number of participants in the study is very large or if the gaze sampling rate is high. We provide performance benchmarks for a Fortran implementation of our method, and make the source code available.
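For context, this is a minimal sketch of the standard grid-based NSS computation that the paper's analytical method accelerates: the map is z-scored, and the z-values at fixated grid cells are averaged. The map construction and indexing scheme here are illustrative assumptions, not the paper's formulation.

```python
import math

def nss_score(saliency, fixations):
    """Normalized Scanpath Saliency on a grid: z-score the saliency map,
    then average the z-values at fixated cells.
    saliency: 2D list of floats; fixations: (row, col) index pairs."""
    values = [v for row in saliency for v in row]
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    if std == 0:
        return 0.0  # flat map carries no information
    return sum((saliency[r][c] - mean) / std for r, c in fixations) / len(fixations)
```

In the gaze-similarity use case, `saliency` would itself be built from other participants' gaze (e.g., a smoothed fixation map), which is exactly where the grid resolution becomes the computational bottleneck the paper addresses.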
22
Wang X, Geng X, Wang J, Tamura S. A comparative research on G-HMM and TSS technologies for eye movement tracking analysis. J Mech Med Biol 2021. [DOI: 10.1142/s0219519421400236]
Abstract
Eye movement analysis provides a new way for disease screening, quantification, and assessment. In order to track and analyze eye movement scanpaths under different conditions, this paper proposed a Gaussian mixture hidden Markov model (G-HMM) of the eye movement scanpath during saccades, combined with the Time-Shifting Segmentation (TSS) method for model optimization; Linear Discriminant Analysis (LDA) was then used to perform recognition and evaluation tasks based on the multi-dimensional features. In the experiments, datasets of eye-movement sequences over 800 real-scene images were used. The experimental results show that the G-HMM method has high specificity for free searching tasks and high sensitivity for prompted object search tasks, while TSS strengthens the differences between eye movement characteristics, which is conducive to eye movement pattern recognition, especially for search tasks.
Affiliation(s)
- Xiaowei Wang
- Rongcheng College, Harbin University of Science and Technology, Rongcheng 264300, P. R. China
- Xiaoxu Geng
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, P. R. China
- Jinke Wang
- Rongcheng College, Harbin University of Science and Technology, Rongcheng 264300, P. R. China
- Shinichi Tamura
- The Institute of Scientific and Industrial Research, Osaka University, Ibaraki 567-0047, Japan
23
Stelter M, Rommel M, Degner J. (Eye-) Tracking the Other-Race Effect: Comparison of Eye Movements During Encoding and Recognition of Ingroup Faces With Proximal and Distant Outgroup Faces. Soc Cogn 2021. [DOI: 10.1521/soco.2021.39.3.366]
Abstract
People experience difficulties recognizing faces of ethnic outgroups, known as the other-race effect. The present eye-tracking study investigates if this effect is related to differences in visual attention to ingroup and outgroup faces. We measured gaze fixations to specific facial features and overall eye-movement activity level during an old/new recognition task comparing ingroup faces with proximal and distal ethnic outgroup faces. Recognition was best for ingroup faces and decreased gradually for proximal and distal outgroup faces. Participants attended more to the eyes of ingroup faces than outgroup faces, but this effect was unrelated to recognition performance. Ingroup-outgroup differences in eye-movement activity level did not emerge during the study phase, but during the recognition phase, with ingroup-outgroup differences varying as a function of recognition accuracy and old/new effects. Overall, ingroup-outgroup effects on recognition performance and eye movements were more pronounced for recognition of new items, emphasizing the role of retrieval processes.
24
Rim NW, Choe KW, Scrivner C, Berman MG. Introducing Point-of-Interest as an alternative to Area-of-Interest for fixation duration analysis. PLoS One 2021; 16:e0250170. [PMID: 33970920 PMCID: PMC8109773 DOI: 10.1371/journal.pone.0250170]
Abstract
Many eye-tracking data analyses rely on the Area-of-Interest (AOI) methodology, which utilizes AOIs to analyze metrics such as fixations. However, AOI-based methods have some inherent limitations, including variability and subjectivity in the shape, size, and location of AOIs. In this article, we propose an alternative approach to the traditional AOI dwell time analysis: Weighted Sum Durations (WSD). This approach decreases the subjectivity of AOI definitions by using Points-of-Interest (POIs) while maintaining interpretability. In WSD, the durations of fixations toward each POI are weighted by their distance from the POI and summed together to generate a metric comparable to AOI dwell time. To validate WSD, we reanalyzed data from a previously published eye-tracking study (n = 90). The re-analysis replicated the original findings that people gaze less toward faces and more toward points of contact when viewing violent social interactions.
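The WSD idea described above can be sketched in a few lines. The Gaussian distance fall-off and the `sigma` parameter below are illustrative assumptions, not necessarily the paper's exact weighting function.

```python
import math

def weighted_sum_durations(fixations, poi, sigma=50.0):
    """WSD-style metric: each fixation's duration is weighted by its
    distance to the Point-of-Interest and the results are summed.
    fixations: (x, y, duration) tuples; poi: (x, y); sigma in pixels."""
    total = 0.0
    for x, y, dur in fixations:
        d = math.hypot(x - poi[0], y - poi[1])
        # Gaussian fall-off: weight 1 at the POI, approaching 0 far away.
        total += dur * math.exp(-(d * d) / (2.0 * sigma * sigma))
    return total
```

Because the weight decays smoothly with distance, there is no hard boundary to draw: the analyst chooses a point and a scale rather than the shape, size, and location of an AOI, which is the subjectivity reduction the abstract describes.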
Affiliation(s)
- Nak Won Rim
- Masters in Computational Social Science, The University of Chicago, Chicago, Illinois, United States of America
- Kyoung Whan Choe
- Department of Psychology, The University of Chicago, Chicago, Illinois, United States of America
- Mansueto Institute for Urban Innovation, The University of Chicago, Chicago, Illinois, United States of America
- Coltan Scrivner
- Department of Comparative Human Development, The University of Chicago, Chicago, Illinois, United States of America
- Institute for Mind and Biology, The University of Chicago, Chicago, Illinois, United States of America
- Marc G. Berman
- Department of Psychology, The University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, The University of Chicago, Chicago, Illinois, United States of America
25
Abstract
The eye movement analysis with hidden Markov models (EMHMM) method provides quantitative measures of individual differences in eye-movement patterns. However, it is limited to tasks where stimuli have the same feature layout (e.g., faces). Here we proposed combining EMHMM with the data mining technique co-clustering to discover participant groups with consistent eye-movement patterns across stimuli for tasks involving stimuli with different feature layouts. By applying this method to eye movements in scene perception, we discovered explorative (switching between foreground and background information, or different regions of interest) and focused (mainly looking at the foreground, with less switching) eye-movement patterns among Asian participants. Higher similarity to the explorative pattern predicted better foreground object recognition performance, whereas higher similarity to the focused pattern was associated with better feature integration in the flanker task. These results have important implications for using eye tracking as a window into individual differences in cognitive abilities and styles. Thus, EMHMM with co-clustering provides quantitative assessments of eye-movement patterns across stimuli and tasks. It can be applied to many other real-life visual tasks, making a significant impact on the use of eye tracking to study cognitive behavior across disciplines.
26
Hsiao JH, An J, Zheng Y, Chan AB. Do portrait artists have enhanced face processing abilities? Evidence from hidden Markov modeling of eye movements. Cognition 2021; 211:104616. [PMID: 33592393 DOI: 10.1016/j.cognition.2021.104616]
Abstract
Recent research has suggested the importance of part-based information in face recognition in addition to global, whole-face information. Nevertheless, face drawing experience was reported to enhance selective attention to the eyes but did not improve face recognition performance, leading to speculations about limited plasticity in adult face recognition. Here we examined the mechanism underlying the limited advantage of face drawing experience in face recognition through the Eye Movement analysis with Hidden Markov Models (EMHMM) approach. We found that portrait artists showed more eyes-focused eye movement patterns and outperformed novices in face matching, and participants' drawing rating was correlated with both eye movement pattern and performance. In contrast, portrait artists did not outperform novices and did not differ from novices in eye movement pattern in either the face recognition or part-whole tasks, although the eyes-focused pattern was associated with better recognition performance and longer response times in the whole condition relative to the part condition. Interestingly, in contrast to the face recognition and part-whole tasks, participants' performance in face matching was predicted by their drawing rating but not eye movement pattern. These results suggested that artists' advantage in face processing is specific to tasks similar to their drawing experience such as face matching, and may be related to their better ability in extracting identity-invariant information between two faces rather than more eyes-focused eye movement patterns.
Affiliation(s)
- Janet H Hsiao
- Department of Psychology, University of Hong Kong, Hong Kong Special Administrative Region; The State Key Laboratory of Brain and Cognitive Sciences, University of Hong Kong, Hong Kong Special Administrative Region
- Jeehye An
- Department of Psychology, University of Hong Kong, Hong Kong Special Administrative Region
- Yueyuan Zheng
- Department of Psychology, University of Hong Kong, Hong Kong Special Administrative Region
- Antoni B Chan
- Department of Computer Science, City University of Hong Kong, Hong Kong Special Administrative Region
27
Hilton C, Miellet S, Slattery TJ, Wiener J. Are age-related deficits in route learning related to control of visual attention? Psychol Res 2020; 84:1473-1484. [PMID: 30850875 PMCID: PMC7387378 DOI: 10.1007/s00426-019-01159-5]
Abstract
Typically aged adults show reduced ability to learn a route compared to younger adults. In this experiment, we investigate the role of visual attention through eye-tracking and engagement of attentional resources in age-related route learning deficits. Participants were shown a route through a realistic virtual environment before being tested on their route knowledge. Younger and older adults were compared on their gaze behaviour during route learning and on their reaction time to a secondary probe task as a measure of attentional engagement. Behavioural results show a performance deficit in route knowledge for older adults compared to younger adults, which is consistent with previous research. We replicated previous findings showing that reaction times to the secondary probe task were longer at decision points than non-decision points, indicating stronger attentional engagement at navigationally relevant locations. However, we found no differences in attentional engagement and no differences for a range of gaze measures between age groups. We conclude that age-related changes in route learning ability are not reflected in changes in control of visual attention or regulation of attentional engagement.
Affiliation(s)
- Christopher Hilton
- Department of Psychology, Bournemouth University, Poole House, Talbot Campus, Fern Barrow, Poole, Dorset, BH12 5BB, UK
- Sebastien Miellet
- Active Vision Lab, School of Psychology, University of Wollongong, Northfields Ave, Wollongong, NSW, 2522, Australia
- Timothy J Slattery
- Department of Psychology, Bournemouth University, Poole House, Talbot Campus, Fern Barrow, Poole, Dorset, BH12 5BB, UK
- Jan Wiener
- Department of Psychology, Bournemouth University, Poole House, Talbot Campus, Fern Barrow, Poole, Dorset, BH12 5BB, UK
28
Chan FH, Suen H, Jackson T, Vlaeyen JW, Barry TJ. Pain-related attentional processes: A systematic review of eye-tracking research. Clin Psychol Rev 2020; 80:101884. [DOI: 10.1016/j.cpr.2020.101884]
29
Wang C, Haponenko H, Liu X, Sun H, Zhao G. How Attentional Guidance and Response Selection Boost Contextual Learning: Evidence from Eye Movement. Adv Cogn Psychol 2020; 15:265-275. [PMID: 32477438 PMCID: PMC7246933 DOI: 10.5709/acp-0274-2]
Abstract
The contextual cueing effect (CCE) refers to a learned association between a predictive configuration and a target location, which speeds up response times to targets. Previous studies have examined the underlying processes of the CCE (initial perceptual processing, attentional guidance, and response selection) but have not reached a general consensus on their contributions. In the present study, we used eye tracking to address this question by analyzing the oculomotor correlates of context-guided learning in visual search and eliminating uncontrolled response factors during response priming. The results show that both attentional guidance and response selection contribute to contextual learning.
Affiliation(s)
- Chao Wang
- Faculty of Psychology, Tianjin Normal University, Tianjin, Tianjin, China, 300387
- Hanna Haponenko
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, L8S 4K1, Canada
- Xingze Liu
- Medical Psychological Center, Second Xiangya Hospital of Central South University, Hunan, China, 410011
- Hongjin Sun
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, L8S 4K1, Canada
- Guang Zhao
- Faculty of Psychology, Tianjin Normal University, Tianjin, Tianjin, China, 300387
30
Haensel JX, Danvers M, Ishikawa M, Itakura S, Tucciarelli R, Smith TJ, Senju A. Culture modulates face scanning during dyadic social interactions. Sci Rep 2020; 10:1958. [PMID: 32029826 PMCID: PMC7005015 DOI: 10.1038/s41598-020-58802-0] [Citation(s) in RCA: 14]
Abstract
Recent studies have revealed significant cultural modulations on face scanning strategies, thereby challenging the notion of universality in face perception. Current findings are based on screen-based paradigms, which offer high degrees of experimental control, but lack critical characteristics common to social interactions (e.g., social presence, dynamic visual saliency), and complementary approaches are required. The current study used head-mounted eye tracking techniques to investigate the visual strategies for face scanning in British/Irish (in the UK) and Japanese adults (in Japan) who were engaged in dyadic social interactions with a local research assistant. We developed novel computational data pre-processing tools and data-driven analysis techniques based on Monte Carlo permutation testing. The results revealed significant cultural differences in face scanning during social interactions for the first time, with British/Irish participants showing increased mouth scanning and the Japanese group engaging in greater eye and central face looking. Both cultural groups further showed more face orienting during periods of listening relative to speaking, and during the introduction task compared to a storytelling game, thereby replicating previous studies testing Western populations. Altogether, these findings point to the significant role of postnatal social experience in specialised face perception and highlight the adaptive nature of the face processing system.
Affiliation(s)
- Jennifer X Haensel
- Birkbeck, University of London, Department of Psychological Sciences, London, WC1E 7HX, United Kingdom
- Matthew Danvers
- Birkbeck, University of London, Department of Psychological Sciences, London, WC1E 7HX, United Kingdom
- Shoji Itakura
- Kyoto University, Department of Psychology, Kyoto, 606-8501, Japan
- Raffaele Tucciarelli
- Birkbeck, University of London, Department of Psychological Sciences, London, WC1E 7HX, United Kingdom
- Tim J Smith
- Birkbeck, University of London, Department of Psychological Sciences, London, WC1E 7HX, United Kingdom
- Atsushi Senju
- Birkbeck, University of London, Department of Psychological Sciences, London, WC1E 7HX, United Kingdom
31
Mao R, Li G, Hildre HP, Zhang H. Analysis and Evaluation of Eye Behavior for Marine Operation Training - A Pilot Study. J Eye Mov Res 2019; 12:10.16910/jemr.12.3.5. [PMID: 33828734 PMCID: PMC7880139 DOI: 10.16910/jemr.12.3.6] [Citation(s) in RCA: 3]
Abstract
This paper presents a new analysis approach for evaluating situation awareness in marine operation training. Using eye-tracking technology, the situation awareness reflected by visual attention can be visualized and analyzed. A scanpath similarity comparison method that allows group-wise comparisons is proposed. The term 'expert zone' is introduced to evaluate the performance of novice operators against expert operators' eye movements during a given segment of a marine operation. A pilot study of a crane-lifting experiment was carried out. Two target stages of the operation, covering the load's descent until total immersion to the seabed, were selected and analyzed for both novice and expert operators. The group-wise evaluation method proved able to assess operator performance. In addition, analysis of fixation-related measures and scanpaths revealed the similarities and dissimilarities in eye behavior between novice and expert operators in the target segments.
Affiliation(s)
- Runze Mao
- Norwegian University of Science and Technology, Alesund, Norway
- Guoyuan Li
- Norwegian University of Science and Technology, Alesund, Norway
- Houxiang Zhang
- Norwegian University of Science and Technology, Alesund, Norway
32
A Novel Eye Movement Data Transformation Technique that Preserves Temporal Information: A Demonstration in a Face Processing Task. Sensors 2019; 19:s19102377. [PMID: 31126117 PMCID: PMC6567129 DOI: 10.3390/s19102377] [Citation(s) in RCA: 3]
Abstract
Existing research has shown that human eye-movement data conveys rich information about underlying mental processes, and that the latter may be inferred from the former. However, most related studies rely on spatial information about which different areas of visual stimuli were looked at, without considering the order in which this occurred. Although powerful algorithms for making pairwise comparisons between eye-movement sequences (scanpaths) exist, the problem is how to compare two groups of scanpaths, e.g., those registered with vs. without an experimental manipulation in place, rather than individual scanpaths. Here, we propose that the problem might be solved by projecting a scanpath similarity matrix, obtained via a pairwise comparison algorithm, to a lower-dimensional space (the comparison and dimensionality-reduction techniques we use are ScanMatch and t-SNE). The resulting distributions of low-dimensional vectors representing individual scanpaths can be statistically compared. To assess if the differences result from temporal scanpath features, we propose to statistically compare the cross-validated accuracies of two classifiers predicting group membership: (1) based exclusively on spatial metrics; (2) based additionally on the obtained scanpath representation vectors. To illustrate, we compare autistic vs. typically-developing individuals looking at human faces during a lab experiment and find significant differences in temporal scanpath features.
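To make the pipeline concrete: the group comparison starts from a pairwise scanpath similarity matrix. The sketch below is only a rough, illustrative stand-in for ScanMatch (which uses substitution matrices and gap penalties, and would be followed by t-SNE projection); it uses plain edit distance over AOI-letter scanpath strings, and all names are ours, not the authors':

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance between two
    # AOI-letter scanpath strings (e.g. "ABBA": A = eyes, B = mouth, ...).
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                          # deletion
                         cur[j - 1] + 1,                       # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1])) # substitution
        prev = cur
    return prev[n]

def similarity_matrix(scanpaths):
    # Normalised pairwise similarity in [0, 1]; 1 means identical sequences.
    n = len(scanpaths)
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            a, b = scanpaths[i], scanpaths[j]
            sim[i][j] = 1.0 - edit_distance(a, b) / max(len(a), len(b), 1)
    return sim
```

The resulting matrix is what would then be fed to a dimensionality-reduction step before the statistical group comparison.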
33
Developing attentional control in naturalistic dynamic road crossing situations. Sci Rep 2019; 9:4176. [PMID: 30862845 PMCID: PMC6414534 DOI: 10.1038/s41598-019-39737-7] [Citation(s) in RCA: 9]
Abstract
In the last 20 years, there has been increasing interest in studying visual attentional processes under more natural conditions. In the present study, we aimed to determine the critical age at which children show adult-like performance and attentional control in a visually guided task, in a naturalistic, dynamic and socially relevant context: road crossing. We monitored visual exploration and crossing decisions in adults and children aged between 5 and 15 while they watched road traffic videos containing a range of traffic densities, with or without pedestrians. 5–10 year old (y/o) children showed less systematic gaze patterns. More specifically, adults and 11–15 y/o children looked mainly at the vehicles’ appearing point, which is an optimal location for sampling diagnostic information for the task. In contrast, 5–10 y/os looked more at socially relevant stimuli and attended to moving vehicles further down the trajectory when traffic density was high. Critically, 5–10 y/o children also made more crossing decisions than 11–15 y/os and adults. Our findings reveal a critical shift around 10 y/o in attentional control and crossing decisions in a road-crossing task.
34
Luisier AC, Petitpierre G, Bérod AC, Richoz AR, Lao J, Caldara R, Bensafi M. Visual and Hedonic Perception of Food Stimuli in Children with Autism Spectrum Disorders and their Relationship to Food Neophobia. Perception 2019; 48:197-213. [PMID: 30758252 DOI: 10.1177/0301006619828300] [Citation(s) in RCA: 8]
Abstract
The present study examined whether children with autism spectrum disorder (ASD) and typically developing (TD) children differ in the visual perception of food stimuli at both sensorimotor and affective levels. A potential link between visual perception and food neophobia was also investigated. To these aims, 11 children with ASD and 11 TD children were tested. Pictures of food were used as stimuli, and food neophobia was assessed by the parents. Results revealed that children with ASD visually explored the food stimuli for longer than TD children. Complementary analyses revealed that whereas TD children explored multiple-item dishes more (vs. simple-item dishes), children with ASD explored all dishes in a similar way. In addition, children with ASD gave more negative appreciations in general. Moreover, hedonic ratings were negatively correlated with food neophobia scores in children with ASD, but not in TD children. In sum, we show here that children with ASD have more difficulty than TD children in liking a food when it is presented visually. Our findings also suggest that a prominent factor that needs to be considered is time management during the food choice process. They also provide new ways of measuring and understanding food neophobia in children with ASD.
Affiliation(s)
- Anne-Claude Luisier
- Research Center in Neurosciences of Lyon, Claude Bernard University Lyon 1, France; Institute of Special Education, University of Fribourg, Switzerland; Brocoli Factory, Sion, Switzerland
- Junpeng Lao
- Department of Psychology, University of Fribourg, Switzerland
- Roberto Caldara
- Department of Psychology, University of Fribourg, Switzerland
- Moustafa Bensafi
- Research Center in Neurosciences of Lyon, Claude Bernard University Lyon 1, France
35
Arizpe JM, Noles DL, Tsao JW, Chan AWY. Eye Movement Dynamics Differ between Encoding and Recognition of Faces. Vision (Basel) 2019; 3:vision3010009. [PMID: 31735810 PMCID: PMC6802769 DOI: 10.3390/vision3010009] [Citation(s) in RCA: 5]
Abstract
Facial recognition is widely thought to involve a holistic perceptual process, and optimal recognition performance can be rapidly achieved within two fixations. However, is facial identity encoding likewise holistic and rapid, and how do gaze dynamics during encoding relate to recognition? While having eye movements tracked, participants completed an encoding ("study") phase and subsequent recognition ("test") phase, each divided into blocks of one- or five-second stimulus presentation time conditions to distinguish the influences of experimental phase (encoding/recognition) and stimulus presentation time (short/long). Within the first two fixations, several differences between encoding and recognition were evident in the temporal and spatial dynamics of the eye-movements. Most importantly, in behavior, the long study phase presentation time alone caused improved recognition performance (i.e., longer time at recognition did not improve performance), revealing that encoding is not as rapid as recognition, since longer sequences of eye-movements are functionally required to achieve optimal encoding than to achieve optimal recognition. Together, these results are inconsistent with a scan path replay hypothesis. Rather, feature information seems to have been gradually integrated over many fixations during encoding, enabling recognition that could subsequently occur rapidly and holistically within a small number of fixations.
Affiliation(s)
- Joseph M. Arizpe
- Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA
- Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA
- Science Applications International Corporation (SAIC), Fort Sam Houston, TX 78234, USA
- Danielle L. Noles
- Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA
- Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA
- School of Medicine, University of Tennessee Health Science Center, Memphis, TN 38163, USA
- Jack W. Tsao
- Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA
- Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA
- Department of Anatomy & Neurobiology, University of Tennessee Health Science Center, Memphis, TN 38163, USA
- Memphis Veterans Affairs Medical Center, Memphis, TN 38104, USA
- Annie W.-Y. Chan
- Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA
- Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA
- Department of Radiology, University of Tennessee Health Science Center, Memphis, TN 38163, USA
- Department of Life Sciences, Centre for Cognitive Neuroscience, Division of Psychology, Brunel University London, London, UB8 3PH, UK
36
Eye-movement patterns in face recognition are associated with cognitive decline in older adults. Psychon Bull Rev 2019; 25:2200-2207. [PMID: 29313315 DOI: 10.3758/s13423-017-1419-0] [Citation(s) in RCA: 30]
Abstract
The Hidden Markov Modeling approach to eye-movement data analysis can quantitatively assess differences and similarities among individual patterns. Here we applied this approach to examine the relationships between eye-movement patterns in face recognition and age-related cognitive decline. We found that significantly more older than young adults adopted "holistic" patterns, in which most eye fixations landed around the face center, as opposed to "analytic" patterns, in which eye movements switched among the two eyes and the face center. Participants showing analytic patterns performed better than those with holistic patterns, regardless of age. Interestingly, among older adults, lower cognitive status (as assessed by the Montreal Cognitive Assessment), particularly in executive and visual attention functioning (as assessed by the Tower of London and Trail Making Tests), was associated with a higher likelihood of holistic patterns. This result suggests the possibility of using eye movements as an easily deployable screening assessment for cognitive decline in older adults.
37
He Y, Su Q, Wang L, He W, Tan C, Zhang H, Ng ML, Yan N, Chen Y. The Characteristics of Intelligence Profile and Eye Gaze in Facial Emotion Recognition in Mild and Moderate Preschoolers With Autism Spectrum Disorder. Front Psychiatry 2019; 10:402. [PMID: 31281268 PMCID: PMC6596453 DOI: 10.3389/fpsyt.2019.00402] [Citation(s) in RCA: 13]
Abstract
Childhood autism spectrum disorder (ASD) can easily be misdiagnosed, due to the nonspecific social and communication deficits associated with the disorder. The present study attempted to profile the mental development and visual attention toward emotion of preschool children with mild or moderate ASD who were attending mainstream kindergartens. A total of 21 children (17 boys and 4 girls) diagnosed with mild or moderate ASD, selected from 5,178 kindergarteners in the city of Xi'an, were recruited. Another group of 21 typically developing (TD) children, matched for age, gender, and class, served as controls. All children were assessed using the Griffiths Mental Development Scales-Chinese (GDS-C), and their social visual attention was assessed with eye tracking while they watched 20 ecologically valid film scenes. The results showed that ASD children had lower mental development scores on the Locomotor, Personal-Social, Language, Performance, and Practical Reasoning subscales than their TD peers. Moreover, deficits in recognizing emotions from facial expressions, based on naturalistic scene stimuli with voice, were found for ASD children. In the ASD group, these deficits were significantly correlated with ability in social interaction and development quotient. ASD children showed an atypical eye-gaze pattern compared to TD children during the facial emotion expression task, with reduced visual attention to facial emotion expression, especially for the eye region. The findings confirm the deficits of ASD children in real-life, multimodal emotion recognition and their atypical eye-gaze pattern during emotion recognition. Parents and teachers of children with mild or moderate ASD should make informed educational decisions according to the children's level of mental development. In addition, eye tracking might help provide clinical evidence for diagnosing children with mild or moderate ASD.
Affiliation(s)
- Yuying He
- Department of Pediatrics, Xi'an Jiaotong University Health Science Center, Xi'an, China; Child Healthcare Department, Xi'an Maternal and Child Health Hospital, Xi'an, China
- Qi Su
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lan Wang
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Guangdong Provincial Key Laboratory of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wenxiang He
- Department of Pediatrics, Shaanxi University of Chinese Medicine, Xianyang, China
- Chuanxue Tan
- Child Healthcare Department, Xi'an Children's Hospital, Xi'an, China
- Haiqing Zhang
- Department of Pediatrics, Shaanxi University of Chinese Medicine, Xianyang, China
- Manwa L Ng
- Speech Science Laboratory, Division of Speech and Hearing Sciences, University of Hong Kong, Hong Kong, China
- Nan Yan
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Guangdong Provincial Key Laboratory of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yanni Chen
- Department of Pediatrics, Xi'an Jiaotong University Health Science Center, Xi'an, China; Department of Pediatrics, Shaanxi University of Chinese Medicine, Xianyang, China; Child Healthcare Department, Xi'an Children's Hospital, Xi'an, China
38
Feng S, Wang X, Wang Q, Fang J, Wu Y, Yi L, Wei K. The uncanny valley effect in typically developing children and its absence in children with autism spectrum disorders. PLoS One 2018; 13:e0206343. [PMID: 30383848 PMCID: PMC6211702 DOI: 10.1371/journal.pone.0206343] [Citation(s) in RCA: 18]
Abstract
Robots and virtual reality are gaining popularity in interventions for children with autism spectrum disorder (ASD). To shed light on children’s attitudes towards robots and characters in virtual reality, this study examined whether children with ASD show the uncanny valley effect. We varied the realism of facial appearance by morphing a cartoon face into a human face, and induced perceptual mismatch by enlarging the eyes, which has previously been shown to be an effective method for inducing the uncanny valley effect in adults. Children with ASD and typically developing (TD) children participated in a two-alternative forced-choice task that asked them to choose the one they liked more from two images presented on the screen. We found that TD children showed the effect, i.e., the enlargement of eye size and increasing realism reduced their preference. In contrast, children with ASD did not show the uncanny valley effect. Our findings in TD children help resolve the controversy in the literature about the existence of the uncanny valley effect among young children. Meanwhile, the absence of the uncanny valley effect in children with ASD might be attributed to their reduced sensitivity to subtle changes in face features and their limited visual experience with faces caused by diminished social motivation. Lastly, our findings provide practical implications for designing robots and virtual characters for interventions for children with ASD.
Affiliation(s)
- Shuyuan Feng
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Xueqin Wang
- Department of Statistical Science, School of Mathematics and Computational Science, Sun Yat-sen University, Guangzhou, Guangdong, China
- Southern China Research Center of Statistical Science, Sun Yat-sen University, Guangzhou, Guangdong, China
- Qiandong Wang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Jing Fang
- Qingdao Autism Research Institute, Qingdao, Shandong, China
- Yaxue Wu
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Li Yi
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Kunlin Wei
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
39
Costa M, Gomez A, Barat E, Lio G, Duhamel JR, Sirigu A. Implicit preference for human trustworthy faces in macaque monkeys. Nat Commun 2018; 9:4529. [PMID: 30375399 PMCID: PMC6207650 DOI: 10.1038/s41467-018-06987-4] [Citation(s) in RCA: 6]
Abstract
It has been shown that human judgements of trustworthiness are based on subtle processing of specific facial features. However, it is not known whether this ability is a specifically human function or whether it is shared among primates. Here we report that macaque monkeys (Macaca mulatta and Macaca fascicularis), like humans, display preferential attention to trustworthiness-associated facial cues in computer-generated human faces. Monkeys looked significantly longer at faces categorized a priori as trustworthy compared to untrustworthy ones. In addition, spatial sequential analysis of monkeys’ initial saccades revealed an upward shift, with attention moving to the eye region for trustworthy faces, while no change was observed for untrustworthy ones. Finally, we found significant correlations between facial width-to-height ratio, a morphometric feature that predicts trustworthiness judgments in humans, and looking time in both species. These findings suggest the presence of common mechanisms among primates for first impressions of trustworthiness. Humans infer the trustworthiness of others based on subtle facial features such as the facial width-to-height ratio, but it is not known whether other primates are sensitive to these cues. Here, the authors show that macaque monkeys prefer to look at human faces which appear trustworthy to humans.
Affiliation(s)
- Manuela Costa
- Institut des Sciences Cognitives Marc Jeannerod, CNRS, UCBL, Lyon 1, 67, boulevard Pinel, 69675, Bron, Cedex, France
- Alice Gomez
- Institut des Sciences Cognitives Marc Jeannerod, CNRS, UCBL, Lyon 1, 67, boulevard Pinel, 69675, Bron, Cedex, France
- Elodie Barat
- Institut des Sciences Cognitives Marc Jeannerod, CNRS, UCBL, Lyon 1, 67, boulevard Pinel, 69675, Bron, Cedex, France
- Guillaume Lio
- Institut des Sciences Cognitives Marc Jeannerod, CNRS, UCBL, Lyon 1, 67, boulevard Pinel, 69675, Bron, Cedex, France
- Jean-René Duhamel
- Institut des Sciences Cognitives Marc Jeannerod, CNRS, UCBL, Lyon 1, 67, boulevard Pinel, 69675, Bron, Cedex, France
- Angela Sirigu
- Institut des Sciences Cognitives Marc Jeannerod, CNRS, UCBL, Lyon 1, 67, boulevard Pinel, 69675, Bron, Cedex, France
40
Birmingham E, Svärd J, Kanan C, Fischer H. Exploring emotional expression recognition in aging adults using the Moving Window Technique. PLoS One 2018; 13:e0205341. [PMID: 30335767 PMCID: PMC6193651 DOI: 10.1371/journal.pone.0205341] [Citation(s) in RCA: 9]
Abstract
Adult aging is associated with difficulties in recognizing negative facial expressions such as fear and anger. However, happiness and disgust recognition is generally found to be less affected. Eye-tracking studies indicate that the diagnostic features of fearful and angry faces are situated in the upper regions of the face (the eyes), and for happy and disgusted faces in the lower regions (nose and mouth). These studies also indicate age-differences in visual scanning behavior, suggesting a role for attention in emotion recognition deficits in older adults. However, because facial features can be processed extrafoveally, and expression recognition occurs rapidly, eye-tracking has been questioned as a measure of attention during emotion recognition. In this study, the Moving Window Technique (MWT) was used as an alternative to the conventional eye-tracking technology. By restricting the visual field to a moveable window, this technique provides a more direct measure of attention. We found a strong bias to explore the mouth across both age groups. Relative to young adults, older adults focused less on the left eye, and marginally more on the mouth and nose. Despite these different exploration patterns, older adults were most impaired in recognition accuracy for disgusted expressions. Correlation analysis revealed that among older adults, more mouth exploration was associated with faster recognition of both disgusted and happy expressions. As a whole, these findings suggest that in aging there are both attentional differences and perceptual deficits contributing to less accurate emotion recognition.
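The core idea of the Moving Window Technique is that only a small movable region of the stimulus is visible at any moment. As a toy sketch of that masking step only (not the authors' implementation; the 2-D-list image representation, function name, and parameters are ours):

```python
def moving_window(image, center, radius, blank=0):
    """Return a copy of `image` (a 2-D list of pixel values) with every
    pixel outside a square window around `center` = (row, col) replaced
    by `blank`, mimicking the restricted visible region of the MWT."""
    cy, cx = center
    return [
        [v if abs(r - cy) <= radius and abs(c - cx) <= radius else blank
         for c, v in enumerate(row)]
        for r, row in enumerate(image)
    ]
```

In an actual experiment this mask would be re-applied every frame at the participant-controlled window position, so exploration itself becomes the attention measure.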
Affiliation(s)
- Elina Birmingham
- Faculty of Education, Simon Fraser University, Burnaby, BC, Canada
- Joakim Svärd
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Christopher Kanan
- Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, United States of America
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden
41
Hermens F, Golubickis M, Macrae CN. Eye movements while judging faces for trustworthiness and dominance. PeerJ 2018; 6:e5702. [PMID: 30324015 PMCID: PMC6186410 DOI: 10.7717/peerj.5702] [Citation(s) in RCA: 0]
Abstract
Past studies examining how people judge faces for trustworthiness and dominance have suggested that they use particular facial features (e.g. mouth features for trustworthiness ratings, eyebrow and cheek features for dominance ratings) to complete the task. Here, we examine whether eye movements during the task reflect the importance of these features. We compared eye movements for trustworthiness and dominance ratings of face images under three stimulus configurations: small images (mimicking large viewing distances), large images (mimicking face-to-face viewing), and a moving window condition (removing extrafoveal information). Whereas the first area fixated, dwell times, and number of fixations depended on the size of the stimuli and the availability of extrafoveal vision, and varied substantially across participants, no clear task differences were found. These results indicate that gaze patterns for face stimuli are highly individual, do not vary between trustworthiness and dominance ratings, but are influenced by the size of the stimuli and the availability of extrafoveal vision.
Affiliation(s)
- Frouke Hermens
- School of Psychology, University of Lincoln, Lincoln, Lincolnshire, UK
- C. Neil Macrae
- School of Psychology, University of Aberdeen, Aberdeen, UK
42
Hessels RS, Benjamins JS, Cornelissen THW, Hooge ITC. A Validation of Automatically-Generated Areas-of-Interest in Videos of a Face for Eye-Tracking Research. Front Psychol 2018; 9:1367. [PMID: 30123168 PMCID: PMC6085555 DOI: 10.3389/fpsyg.2018.01367] [Citation(s) in RCA: 16]
Abstract
When mapping eye-movement behavior to the visual information presented to an observer, Areas of Interest (AOIs) are commonly employed. For static stimuli (screen without moving elements), this requires that one AOI set is constructed for each stimulus, a possibility in most eye-tracker manufacturers' software. For moving stimuli (screens with moving elements), however, it is often a time-consuming process, as AOIs have to be constructed for each video frame. A popular use-case for such moving AOIs is to study gaze behavior to moving faces. Although it is technically possible to construct AOIs automatically, the standard in this field is still manual AOI construction. This is likely due to the fact that automatic AOI-construction methods are (1) technically complex, or (2) not effective enough for empirical research. To aid researchers in this field, we present and validate a method that automatically achieves AOI construction for videos containing a face. The fully-automatic method uses an open-source toolbox for facial landmark detection, and a Voronoi-based AOI-construction method. We compared the position of AOIs obtained using our new method, and the eye-tracking measures derived from it, to a recently published semi-automatic method. The differences between the two methods were negligible. The presented method is therefore both effective (as effective as previous methods), and efficient; no researcher time is needed for AOI construction. The software is freely available from https://osf.io/zgmch/.
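The Voronoi step of such a method is compact: a fixation belongs to the AOI whose landmark seed is nearest, because a point lies in the Voronoi cell of its nearest seed. A minimal sketch, assuming landmark coordinates are already available from a detector; the seed positions and names below are hypothetical, not values from the paper:

```python
import math

# Hypothetical facial-landmark seed points for one video frame, as
# (x, y) pixel coordinates; in the published pipeline these would come
# from automatic facial-landmark detection on each frame.
SEEDS = {
    "left_eye": (120, 90),
    "right_eye": (200, 90),
    "nose": (160, 150),
    "mouth": (160, 210),
}

def assign_aoi(fixation, seeds=SEEDS):
    """Return the label of the AOI containing `fixation` = (x, y):
    nearest-seed lookup is exactly the Voronoi cell assignment."""
    x, y = fixation
    return min(seeds, key=lambda name: math.hypot(x - seeds[name][0],
                                                  y - seeds[name][1]))
```

Because the seeds are recomputed per frame, the AOIs automatically follow the moving face with no manual annotation.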
Collapse
Affiliation(s)
- Roy S. Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Developmental Psychology, Utrecht University, Utrecht, Netherlands
| | - Jeroen S. Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Social, Health and Organisational Psychology, Utrecht University, Utrecht, Netherlands
| | - Tim H. W. Cornelissen
- Scene Grammar Lab, Department of Cognitive Psychology, Goethe University Frankfurt, Frankfurt, Germany
| | - Ignace T. C. Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
| |
Collapse
|
43
|
The nature of individual face recognition in preschool children: Insights from a gaze-contingent paradigm. COGNITIVE DEVELOPMENT 2018. [DOI: 10.1016/j.cogdev.2018.06.007] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
44
|
|
45
|
Abstract
How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies showed that the scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., task at hand) and stimuli-related (e.g., image semantic category) information. However, eye movements are complex signals and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. Firstly, we use fixations recorded while viewing 800 static natural scene images, and infer an observer-related characteristic: the task at hand. We achieve an average of 55.9% correct classification rate (chance = 33%). We show that correct classification rates positively correlate with the number of salient regions present in the stimuli. Secondly, we use eye positions recorded while viewing 15 conversational videos, and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average 81.2% correct classification rate (chance = 50%). HMMs allow the integration of bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for simple quantification of gazing behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement.
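The classification step can be illustrated with a much-simplified sketch: one discrete HMM per class, with a scanpath assigned to the class under which its forward-algorithm likelihood is highest. The paper fits variational HMMs and adds discriminant analysis; here the two HMMs' parameters are hand-set, and the ROI coding (0 = eyes, 1 = mouth) and class names are hypothetical.

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of an observation sequence via the scaled forward algorithm."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    loglik = 0.0
    for o in obs[1:]:
        total = sum(alpha)
        loglik += math.log(total)
        alpha = [a / total for a in alpha]  # rescale to avoid underflow
        alpha = [sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t][o]
                 for t in range(n)]
    return loglik + math.log(sum(alpha))

# Two 2-state HMMs over 2 observation symbols (0 = eyes, 1 = mouth),
# as (start, transition, emission) tuples.
models = {
    "eyes-focused":  ([0.8, 0.2], [[0.9, 0.1], [0.3, 0.7]], [[0.9, 0.1], [0.4, 0.6]]),
    "mouth-focused": ([0.2, 0.8], [[0.7, 0.3], [0.1, 0.9]], [[0.6, 0.4], [0.1, 0.9]]),
}

def classify(obs):
    """Assign a discretised scanpath to the maximum-likelihood class."""
    return max(models, key=lambda c: forward_loglik(obs, *models[c]))

print(classify([0, 0, 1, 0, 0]))  # eyes-focused
print(classify([1, 1, 1, 0, 1]))  # mouth-focused
```

In practice the per-class HMMs would be learned from training scanpaths rather than hand-set.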
Collapse
Affiliation(s)
| | - Janet H Hsiao
- Department of Psychology, The University of Hong Kong, Pok Fu Lam, Hong Kong
| | - Antoni B Chan
- Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong
| |
Collapse
|
46
|
SubsMatch 2.0: Scanpath comparison and classification based on subsequence frequencies. Behav Res Methods 2018; 49:1048-1064. [PMID: 27443354 DOI: 10.3758/s13428-016-0765-6] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Our eye movements are driven by a continuous trade-off between the need for detailed examination of objects of interest and the necessity to keep an overview of our surroundings. Consequently, behavioral patterns that are characteristic for our actions and their planning are typically manifested in the way we move our eyes to interact with our environment. Identifying such patterns from individual eye movement measurements is, however, highly challenging. In this work, we tackle the challenge of quantifying the influence of experimental factors on eye movement sequences. We introduce an algorithm for extracting sequence-sensitive features from eye movements and for the classification of eye movements based on the frequencies of small subsequences. Our approach is evaluated against the state-of-the-art on a novel and very rich collection of eye movement data derived from four experimental settings, from static viewing tasks to highly dynamic outdoor settings. Our results show that the proposed method is able to classify eye movement sequences over a variety of experimental designs. The choice of parameters is discussed in detail with special focus on highlighting different aspects of general scanpath shape. Algorithms and evaluation data are available at: http://www.ti.uni-tuebingen.de/scanpathcomparison.html .
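The subsequence-frequency idea can be sketched as follows: encode each fixation as an AOI letter, describe a scanpath by the relative frequencies of its short contiguous subsequences, and compare scanpaths by the distance between those frequency vectors. This is a minimal illustration of the principle, not the SubsMatch 2.0 implementation; the example strings and AOI labels are hypothetical.

```python
from collections import Counter

def subsequence_features(scanpath, k=2):
    """Relative frequencies of contiguous length-k subsequences.

    `scanpath` is a string with one AOI letter per fixation, so the
    features describe how often each short transition pattern occurs.
    """
    grams = [scanpath[i:i + k] for i in range(len(scanpath) - k + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def distance(f1, f2):
    """L1 distance between two subsequence-frequency dictionaries."""
    keys = set(f1) | set(f2)
    return sum(abs(f1.get(g, 0.0) - f2.get(g, 0.0)) for g in keys)

alternating = subsequence_features("ABABABAB")  # regular back-and-forth gaze
dwelling = subsequence_features("AABBCCAA")     # runs of repeated fixations
probe = subsequence_features("ABABAB")
print(distance(probe, alternating) < distance(probe, dwelling))  # True
```

A classifier would then operate on these feature dictionaries, e.g. nearest-neighbor by this distance.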
Collapse
|
47
|
iMap4: An open source toolbox for the statistical fixation mapping of eye movement data with linear mixed modeling. Behav Res Methods 2017; 49:559-575. [PMID: 27142836 DOI: 10.3758/s13428-016-0737-x] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
A major challenge in modern eye movement research is to statistically map where observers are looking, by isolating the significant differences between groups and conditions. Compared to the signals from contemporary neuroscience measures, such as magneto/electroencephalography and functional magnetic resonance imaging, eye movement data are sparser, with much larger variations in space across trials and participants. As a result, the implementation of a conventional linear modeling approach on two-dimensional fixation distributions often returns unstable estimates and underpowered results, leaving this statistical problem unresolved (Liversedge, Gilchrist, & Everling, 2011). Here, we present a new version of the iMap toolbox (Caldara & Miellet, 2011) that tackles this issue by implementing a statistical framework comparable to those developed in state-of-the-art neuroimaging data-processing toolboxes. iMap4 uses univariate, pixel-wise linear mixed models on smoothed fixation data, with the flexibility of coding for multiple between- and within-subjects comparisons and performing all possible linear contrasts for the fixed effects (main effects, interactions, etc.). Importantly, we also introduced novel nonparametric tests based on resampling, to assess statistical significance. Finally, we validated this approach by using both experimental and Monte Carlo simulation data. iMap4 is a freely available MATLAB open source toolbox for the statistical fixation mapping of eye movement data, with a user-friendly interface providing straightforward, easy-to-interpret statistical graphical outputs. iMap4 matches the standards of robust statistical neuroimaging methods and represents an important step in the data-driven processing of eye movement fixation data, an important field of vision sciences.
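The pixel-wise mapping idea can be illustrated on a toy scale: smooth each participant's fixations into a density map, then compute a statistic at every pixel across the two groups' maps. iMap4 itself fits linear mixed models with resampling-based tests in MATLAB; the sketch below substitutes a plain two-sample t statistic on a tiny grid, purely to show the smoothed-map-then-pixel-wise-test structure.

```python
import math
import statistics

SIZE = 8  # tiny SIZE x SIZE "image" for illustration

def fixation_map(fixations, sigma=1.0):
    """Smooth a list of (x, y) fixations into a SIZE x SIZE density map
    by summing a Gaussian kernel centred on each fixation."""
    grid = [[0.0] * SIZE for _ in range(SIZE)]
    for fx, fy in fixations:
        for y in range(SIZE):
            for x in range(SIZE):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    return grid

def pixel_t(maps_a, maps_b, x, y):
    """Two-sample t statistic at one pixel across participants' maps."""
    a = [m[y][x] for m in maps_a]
    b = [m[y][x] for m in maps_b]
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b)) or 1e-12
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical groups: A fixates near (2, 2), B near (5, 5), with jitter.
maps_a = [fixation_map([(2 + i * 0.1, 2)]) for i in range(4)]
maps_b = [fixation_map([(5, 5 + i * 0.1)]) for i in range(4)]
print(pixel_t(maps_a, maps_b, 2, 2) > 0)  # True: group A looks more at (2, 2)
```

The real toolbox replaces the t statistic with mixed-model contrasts and assesses significance by resampling over the whole map rather than pixel by pixel.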
Collapse
|
48
|
Hutson JP, Smith TJ, Magliano JP, Loschky LC. What is the role of the film viewer? The effects of narrative comprehension and viewing task on gaze control in film. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2017; 2:46. [PMID: 29214207 PMCID: PMC5698392 DOI: 10.1186/s41235-017-0080-5] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2016] [Accepted: 10/04/2017] [Indexed: 11/23/2022]
Abstract
Film is ubiquitous, but the processes that guide viewers’ attention while viewing film narratives are poorly understood. In fact, many film theorists and practitioners disagree on whether the film stimulus (bottom-up) or the viewer (top-down) is more important in determining how we watch movies. Reading research has shown a strong connection between eye movements and comprehension, and scene perception studies have shown strong effects of viewing tasks on eye movements, but such idiosyncratic top-down control of gaze in film would be anathema to the universal control mainstream filmmakers typically aim for. Thus, in two experiments we tested whether the eye movements and comprehension relationship similarly held in a classic film example, the famous opening scene of Orson Welles’ Touch of Evil (Welles & Zugsmith, Touch of Evil, 1958). Comprehension differences were compared with more volitionally controlled task-based effects on eye movements. To investigate the effects of comprehension on eye movements during film viewing, we manipulated viewers’ comprehension by starting participants at different points in a film, and then tracked their eyes. Overall, the manipulation created large differences in comprehension, but only produced modest differences in eye movements. To amplify top-down effects on eye movements, a task manipulation was designed to prioritize peripheral scene features: a map task. This task manipulation created large differences in eye movements when compared to participants freely viewing the clip for comprehension. Thus, to allow for strong, volitional top-down control of eye movements in film, task manipulations need to make features that are important to narrative comprehension irrelevant to the viewing task. 
The evidence provided by this experimental case study suggests that filmmakers can indeed create systematic gaze behavior across viewers, but that this similarity in gaze does not guarantee universally similar comprehension of the film narrative.
Collapse
Affiliation(s)
- John P Hutson
- Department of Psychological Sciences, Kansas State University, 492 Bluemont Hall, 1100 Mid-campus Dr, Manhattan, KS 66506 USA
| | - Tim J Smith
- Department of Psychological Sciences, Birkbeck, University of London, Malet St, London, WC1E 7HX UK
| | - Joseph P Magliano
- Department of Psychology, Northern Illinois University, 361 Psychology-Computer Science Building, DeKalb, IL 60115 USA
| | - Lester C Loschky
- Department of Psychological Sciences, Kansas State University, 492 Bluemont Hall, 1100 Mid-campus Dr, Manhattan, KS 66506 USA
| |
Collapse
|
49
|
Bodala IP, Abbasi NI, Bezerianos A, Al-Nashash H, Thakor NV. Measuring vigilance decrement using computer vision assisted eye tracking in dynamic naturalistic environments. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2017:2478-2481. [PMID: 29060401 DOI: 10.1109/embc.2017.8037359] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Eye tracking offers a practical solution for monitoring cognitive performance in real world tasks. However, eye tracking in dynamic environments is difficult due to high spatial and temporal variation of stimuli, and requires further, thorough investigation. In this paper, we study the possibility of developing a novel computer vision-assisted eye-tracking analysis using fixations. Eye movement data was obtained from a long duration naturalistic driving experiment. The scale-invariant feature transform (SIFT) algorithm was implemented using the VLFeat toolbox to identify multiple areas of interest (AOIs). A new measure called 'fixation score' was defined to understand the dynamics of fixation position between the target AOI and the non-target AOIs. The fixation score is maximal when the subjects focus on the target AOI and diminishes when they gaze at the non-target AOIs. A statistically significant negative correlation was found between fixation score and reaction time data (r = -0.2253, p < 0.05). This implies that with vigilance decrement, the fixation score decreases as visual attention shifts away from the target objects, resulting in an increase in reaction time.
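The abstract defines the fixation score only qualitatively (maximal on the target AOI, diminishing on non-target AOIs), so the simple window-based scoring below is a hypothetical illustration, not the paper's formula; the AOI labels are likewise invented.

```python
def fixation_score(fixation_aois, target="road"):
    """Fraction of fixation samples on the target AOI within a time window.

    Score is 1.0 when all fixations are on the target AOI and falls
    toward 0.0 as gaze shifts to non-target AOIs.
    """
    on_target = sum(1 for aoi in fixation_aois if aoi == target)
    return on_target / len(fixation_aois)

window = ["road", "road", "mirror", "road", "billboard"]
print(fixation_score(window))  # 0.6
```

Correlating such per-window scores with reaction times would then index vigilance decrement over the drive.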
Collapse
|
50
|
Mega LF, Volz KG. Intuitive Face Judgments Rely on Holistic Eye Movement Pattern. Front Psychol 2017; 8:1005. [PMID: 28676773 PMCID: PMC5476727 DOI: 10.3389/fpsyg.2017.01005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2017] [Accepted: 05/31/2017] [Indexed: 12/02/2022] Open
Abstract
Non-verbal signals such as facial expressions are of paramount importance for social encounters. Their perception predominantly occurs without conscious awareness and is effortlessly integrated into social interactions. In other words, face perception is intuitive. Contrary to classical intuition tasks, this work investigates intuitive processes in the realm of everyday social judgments. Two differently instructed groups of participants judged the authenticity of emotional facial expressions, while their eye movements were recorded: an ‘intuitive group,’ instructed to rely on their “gut feeling” for the authenticity judgments, and a ‘deliberative group,’ instructed to make their judgments after careful analysis of the face. Pixel-wise statistical maps of the resulting eye movements revealed a differential viewing pattern, wherein the intuitive judgments relied on fewer, longer and more centrally located fixations. These markers have been associated with a global/holistic viewing strategy. The holistic pattern of intuitive face judgments is in line with evidence showing that intuition is related to processing the “gestalt” of an object, rather than focusing on details. Our work thereby provides further evidence that intuitive processes are characterized by holistic perception, in an understudied, real-world domain of intuition research.
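The three markers named above (fewer, longer, more centrally located fixations) are straightforward to compute from a fixation list. The coordinates, durations, and group data below are hypothetical, intended only to show what "holistic" vs. "analytic" viewing looks like in these summary statistics.

```python
import math

CENTER = (0.5, 0.5)  # hypothetical normalised face centre

def gaze_markers(fixations):
    """Markers contrasting holistic vs. analytic viewing: fixation count,
    mean fixation duration (ms), and mean distance from the face centre.
    Each fixation is (x, y, duration_ms) in normalised face coordinates."""
    n = len(fixations)
    mean_dur = sum(d for _, _, d in fixations) / n
    mean_dist = sum(math.dist((x, y), CENTER) for x, y, _ in fixations) / n
    return n, mean_dur, mean_dist

# Hypothetical data: intuitive viewers show fewer, longer, central fixations.
intuitive = [(0.5, 0.48, 600), (0.52, 0.5, 550)]
deliberative = [(0.3, 0.3, 220), (0.7, 0.3, 250), (0.5, 0.7, 230), (0.3, 0.6, 240)]
print(gaze_markers(intuitive))
print(gaze_markers(deliberative))
```

In the study, such markers were assessed with pixel-wise statistical maps rather than per-trial summaries.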
Collapse
Affiliation(s)
- Laura F Mega
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany
- University of Tübingen, Tübingen, Germany
| | - Kirsten G Volz
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany
- University of Tübingen, Tübingen, Germany
| |
Collapse
|