1
Galazka MA, Thorsson M, Lundin Kleberg J, Hadjikhani N, Åsberg Johnels J. Pupil contagion variation with gaze, arousal, and autistic traits. Sci Rep 2024; 14:18282. [PMID: 39112540; PMCID: PMC11306570; DOI: 10.1038/s41598-024-68670-7]
Abstract
Pupillary contagion occurs when one's pupil size unconsciously adapts to the pupil size of an observed individual and is presumed to reflect the transfer of arousal. Importantly, when estimating pupil contagion, low-level stimulus properties need to be controlled for, to ensure that observed pupillary changes are due to internal changes in arousal rather than to external differences between stimuli. Here, naturalistic images of children's faces depicting either small or large pupils were presented to a group of children and adolescents with a wide range of autistic traits, a third of whom had been diagnosed with autism. We examined the extent to which pupillary contagion reflects an autonomic nervous system reaction, through pupil size change, heart rate, and skin conductance response. Our second aim was to determine the association between arousal reaction to stimuli and degree of autistic traits. Results show that pupil contagion and concomitant heart rate change, but not skin conductance change, were evident when gaze was restricted to the eye region of face stimuli. A positive association was also observed between pupillary contagion and autistic traits when participants' gaze was constrained to the eye region. These findings add to a broader understanding of the mechanisms underlying pupillary contagion and its association with autism.
Affiliation(s)
- Martyna A Galazka
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden.
- Division of Cognition and Communication, Department of Applied Information Technology, University of Gothenburg, Gothenburg, Sweden.
- Max Thorsson
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Johan Lundin Kleberg
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Department of Clinical Neuroscience, Centre for Psychiatry Research, Karolinska Institute, Stockholm, Sweden
- Nouchine Hadjikhani
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jakob Åsberg Johnels
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Section for Speech and Language Pathology, University of Gothenburg, Gothenburg, Sweden
- Child Neuropsychiatric Clinic, Queen Silvia Children's Hospital, Västra Götalandsregionen, Gothenburg, Sweden
2
Kvasova D, Coll L, Stewart T, Soto-Faraco S. Crossmodal semantic congruence guides spontaneous orienting in real-life scenes. Psychol Res 2024. [PMID: 39105825; DOI: 10.1007/s00426-024-02018-8]
Abstract
In real-world scenes, objects and events are often interconnected within a rich web of semantic relationships. These semantic links help parse information efficiently and make sense of the sensory environment. It has been shown that, during goal-directed search, hearing the characteristic sound of an everyday object helps find the corresponding object both in artificial visual search arrays and in naturalistic, real-life video clips. However, whether crossmodal semantic congruence also triggers orienting during spontaneous, non-goal-directed observation is unknown. Here, we investigated whether crossmodal semantic congruence can attract spontaneous, overt visual attention when viewing naturalistic, dynamic scenes. We used eye tracking whilst participants (N = 45) watched video clips presented alongside sounds of varying semantic relatedness with objects present within the scene. We found that characteristic sounds increased the probability of looking at, the number of fixations to, and the total dwell time on semantically corresponding visual objects, compared to when the same scenes were presented with semantically neutral sounds or with background noise only. Interestingly, hearing object sounds that had no matching object in the scene led to increased visual exploration. These results suggest that crossmodal semantic information impacts spontaneous gaze on realistic scenes, and therefore how information is sampled. Our findings extend beyond known effects of object-based crossmodal interactions with simple stimulus arrays and shed new light on the role that audio-visual semantic relationships play in the perception of everyday-life scenarios.
Affiliation(s)
- Daria Kvasova
- Center for Brain and Cognition, Department of Communication and Information Technologies, Universitat Pompeu Fabra, Carrer de Ramón Trias i Fargas 25-27, Barcelona, 08005, Spain
- Llucia Coll
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Travis Stewart
- Center for Brain and Cognition, Department of Communication and Information Technologies, Universitat Pompeu Fabra, Carrer de Ramón Trias i Fargas 25-27, Barcelona, 08005, Spain
- Salvador Soto-Faraco
- Center for Brain and Cognition, Department of Communication and Information Technologies, Universitat Pompeu Fabra, Carrer de Ramón Trias i Fargas 25-27, Barcelona, 08005, Spain.
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig de Lluís Companys, 23, Barcelona, 08010, Spain.
3
Mao Y, Han Y, Li P, Si C, Wu D. Performance and eye movement patterns of industrial design students reading sustainable design articles. Sci Rep 2024; 14:16267. [PMID: 39009746; PMCID: PMC11251013; DOI: 10.1038/s41598-024-67223-2]
Abstract
Sustainable design education plays a crucial role in cultivating sustainability awareness and competencies among students studying industrial design. This research investigates their sustainability levels, reading performance when engaging with articles, and fixation patterns during reading. Sixty industrial design students participated in the study. We evaluated their sustainability levels using the Sustainable Consumption Measurement Scale. After reading both a theoretical article and a case article, they completed tests assessing their recall and perspective scores. We collected eye-tracking data to analyze fixation duration and conducted lag sequential analysis on fixation transitions. Students were categorized into higher and lower sustainability groups based on their sustainability scores. Female students demonstrated higher sustainability levels, and students with design experience performed better in the higher sustainability group. While recall scores did not differ significantly, the higher sustainability group exhibited elevated perspective scores on the theory article. Perspective scores were generally higher for the case article than for the theory article. The higher sustainability group exhibited longer fixation durations on the theory article, while the case article elicited longer fixation durations on images. Fixation transition patterns also differed: the theoretical article featured transitions from images to text, whereas the case article featured transitions between images. This study provides valuable insights into sustainable design education for students studying industrial design.
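The lag sequential analysis of fixation transitions reported above rests on counting first-order (lag-1) transitions between areas of interest. A minimal sketch of that counting step, assuming AOI labels have already been assigned to each fixation (the AOI names and the example sequence below are hypothetical, not the study's data):

```python
from collections import Counter

def transition_matrix(fixation_sequence, aois):
    """Count first-order (lag-1) transitions between consecutive AOI labels."""
    pairs = Counter(zip(fixation_sequence, fixation_sequence[1:]))
    return {a: {b: pairs.get((a, b), 0) for b in aois} for a in aois}

# Hypothetical fixation sequence over image and text AOIs
seq = ["image", "text", "text", "image", "image", "text"]
m = transition_matrix(seq, ["image", "text"])
```

In a full lag sequential analysis, these raw counts would then be tested against the transition frequencies expected by chance (e.g., via adjusted residuals).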
Affiliation(s)
- Yongchun Mao
- School of Arts and Design, Qilu University of Technology (Shandong Academy of Sciences), Jinan, 250353, China.
- School of Distance Education, Universiti Sains Malaysia, 11800, Penang, Malaysia.
- Yanjun Han
- School of Modern Logistics, Hengyang Technician College, Hengyang, 421101, China.
- Puhong Li
- School of Arts and Design, Qilu University of Technology (Shandong Academy of Sciences), Jinan, 250353, China.
- Chengming Si
- Jining Experimental High School, Jining, 272000, China
- Dan Wu
- Jining Experimental High School, Jining, 272000, China
4
Kawagoe T, Teramoto W. The center of a face catches the eye in face perception. Exp Brain Res 2024; 242:1339-1348. [PMID: 38563980; DOI: 10.1007/s00221-024-06822-x]
Abstract
Using the "Don't look" (DL) paradigm, wherein participants are asked not to look at a specific facial feature (i.e., eye, nose, or mouth), we previously documented that Eastern observers struggled to completely avoid fixating on the eyes and nose. The mechanisms underlying the attractiveness of these features may differ, because fixations on the eyes were triggered only reflexively, whereas fixations on the nose were consistently elicited. In this study, we focused primarily on the nose, where the center-of-gravity (CoG) effect, which refers to a person's tendency to look near an object's CoG, could be a confound. Full-frontal and mid-profile faces were used because the latter's CoG does not correspond to the nose location. Although we hypothesized that these two effects are independent, the results, in addition to successfully replicating previous studies, indicated that the CoG effect explains the nose-attracting effect. This study not only reveals this explanation but also raises a question regarding the CoG effect in Eastern participants.
Affiliation(s)
- Toshikazu Kawagoe
- School of Humanities and Science, Tokai University, Kumamoto Campus, Toroku 9- 1-1, Kumamoto City, Kumamoto, 862-8652, Japan.
- Wataru Teramoto
- Faculty of Humanities and Social Sciences, Kumamoto University, Kurokami 2-40-1, Kumamoto City, 860-8555, Japan
5
Urbanus E, Swaab H, Tartaglia N, van Rijn S. Social Communication in Young Children With Sex Chromosome Trisomy (XXY, XXX, XYY): A Study With Eye Tracking and Heart Rate Measures. Arch Clin Neuropsychol 2024; 39:482-497. [PMID: 37987192; PMCID: PMC11110620; DOI: 10.1093/arclin/acad088]
Abstract
OBJECTIVE Children with sex chromosome trisomy (SCT) have an increased risk for suboptimal development. Difficulties with language are frequently reported, start from a very young age, and encompass various domains. This cross-sectional study examined social orientation with eye tracking and physiological arousal responses to gain more knowledge on how children perceive and respond to communicative bids and evaluated the associations between social orientation and language outcomes, concurrently and 1 year later. METHOD In total, 107 children with SCT (33 XXX, 50 XXY, and 24 XYY) and 102 controls (58 girls and 44 boys) aged between 1 and 7 years were included. Assessments took place in the USA and Western Europe. A communicative bids eye tracking paradigm, physiological arousal measures, and receptive and expressive language outcomes were used. RESULTS Compared to controls, children with SCT showed reduced attention to the face and eyes of the on-screen interaction partner and reduced physiological arousal sensitivity in response to direct versus averted gaze. In addition, social orientation to the mouth was related to concurrent receptive and expressive language abilities in 1-year-old children with SCT. CONCLUSIONS Children with SCT may experience difficulties with social communication that extend past the well-recognized risk for early language delays. These difficulties may underlie social-behavioral problems that have been described in the SCT population and are an important target for early monitoring and support.
Affiliation(s)
- Evelien Urbanus
- Department of Clinical Neurodevelopmental Sciences, Leiden University, Leiden, The Netherlands
- TRIXY Center of Expertise, Leiden University Treatment and Expertise Centre (LUBEC), Leiden, The Netherlands
- Department of Clinical, Neuro, and Developmental Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Hanna Swaab
- Department of Clinical Neurodevelopmental Sciences, Leiden University, Leiden, The Netherlands
- TRIXY Center of Expertise, Leiden University Treatment and Expertise Centre (LUBEC), Leiden, The Netherlands
- Nicole Tartaglia
- eXtraordinarY Kids Clinic, Developmental Pediatrics, Children's Hospital Colorado, Aurora, CO, USA
- Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
- Sophie van Rijn
- Department of Clinical Neurodevelopmental Sciences, Leiden University, Leiden, The Netherlands
- TRIXY Center of Expertise, Leiden University Treatment and Expertise Centre (LUBEC), Leiden, The Netherlands
6
Hagenaar DA, Bindels-de Heus KGCB, van Gils MM, van den Berg L, Ten Hoopen LW, Affourtit P, Pel JJM, Joosten KFM, Hillegers MHJ, Moll HA, de Wit MCY, Dieleman GC, Mous SE. Outcome measures in Angelman syndrome. J Neurodev Disord 2024; 16:6. [PMID: 38429713; PMCID: PMC10905876; DOI: 10.1186/s11689-024-09516-1]
Abstract
BACKGROUND Angelman syndrome (AS) is a rare neurodevelopmental disorder characterized by severe intellectual disability, little to no expressive speech, visual and motor problems, emotional/behavioral challenges, and a tendency towards hyperphagia and weight gain. The characteristics of AS make it difficult to measure these children's functioning with standard clinical tests. Feasible outcome measures are needed to measure current functioning and change over time, in clinical practice and clinical trials. AIM Our first aim is to assess the feasibility of several functional tests. We target domains of neurocognitive functioning and physical growth using the following measurement methods: eye tracking, functional near-infrared spectroscopy (fNIRS), indirect calorimetry, bio-impedance analysis (BIA), and BOD POD (air-displacement plethysmography). Our second aim is to explore the results of the above measures, in order to better understand the AS phenotype. METHODS The study sample consisted of 28 children with AS aged 2-18 years. We defined an outcome measure as feasible when (1) at least 70% of participants successfully finished the measurement and (2) at least 60% of those participants had acceptable data quality. Adaptations to the test procedure and reasons for early termination were noted. Parents rated acceptability and importance and were invited to make recommendations to increase feasibility. The results of the measures were explored. RESULTS Outcome measures obtained with eye tracking and BOD POD met the definition of feasibility, while fNIRS, indirect calorimetry, and BIA did not. The most important reasons for early termination of measurements were signs of protest, inability to sit still, and poor or failed calibration (eye-tracking specific). Post-calibration was often applied to obtain valid eye-tracking results. Parents rated the BOD POD as the most acceptable and fNIRS as the least acceptable for their child. All outcome measures were rated as important. Exploratory results indicated longer reaction times to highly salient visual stimuli (eye tracking) as well as a high body fat percentage (BOD POD). CONCLUSIONS Eye tracking and BOD POD are feasible measurement methods for children with AS. Eye tracking was successfully used to assess visual orienting functions in the current study and (with some practical adaptations) can potentially be used to assess other outcomes as well. BOD POD was successfully used to examine body composition. TRIAL REGISTRATION Registered on 23-04-2020 under number 'NL8550' in the Dutch Trial Register: https://onderzoekmetmensen.nl/en/trial/23075.
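The study's two-part feasibility rule (at least 70% of participants finish the measurement, and at least 60% of those finishers yield acceptable data quality) can be expressed directly. The counts in the usage example below are hypothetical, not the study's per-measure numbers:

```python
def is_feasible(n_enrolled, n_finished, n_acceptable_quality):
    """Feasibility rule from the study: >=70% of enrolled participants
    finish the measurement AND >=60% of finishers have acceptable data."""
    if n_enrolled == 0 or n_finished == 0:
        return False
    completion = n_finished / n_enrolled
    quality = n_acceptable_quality / n_finished
    return completion >= 0.70 and quality >= 0.60

# Hypothetical counts for one measure: 28 enrolled, 22 finished, 15 acceptable
feasible = is_feasible(28, 22, 15)
```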
Affiliation(s)
- Doesjka A Hagenaar
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands.
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands.
- Department of Paediatrics, Erasmus MC, Rotterdam, The Netherlands.
- Karen G C B Bindels-de Heus
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Paediatrics, Erasmus MC, Rotterdam, The Netherlands
- Maud M van Gils
- Vestibular and Oculomotor Research Group, Department of Neuroscience, Erasmus Medical Center, Rotterdam, The Netherlands
- Louise van den Berg
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands
- Leontine W Ten Hoopen
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands
- Philine Affourtit
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Dietetics, Erasmus MC, Rotterdam, The Netherlands
- Johan J M Pel
- Vestibular and Oculomotor Research Group, Department of Neuroscience, Erasmus Medical Center, Rotterdam, The Netherlands
- Koen F M Joosten
- Division of Pediatric ICU, Department of Neonatal and Pediatric ICU, Erasmus MC, Rotterdam, The Netherlands
- Manon H J Hillegers
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands
- Henriëtte A Moll
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Paediatrics, Erasmus MC, Rotterdam, The Netherlands
- Marie-Claire Y de Wit
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Neurology and Paediatric Neurology, Erasmus MC, Rotterdam, The Netherlands
- Gwen C Dieleman
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands
- Sabine E Mous
- ENCORE Expertise Centre for Neurodevelopmental Disorders, Erasmus MC, Rotterdam, The Netherlands
- Department of Child- and Adolescent Psychiatry/Psychology, Erasmus MC, Rotterdam, The Netherlands
7
Zeng G, Simpson EA, Paukner A. Maximizing valid eye-tracking data in human and macaque infants by optimizing calibration and adjusting areas of interest. Behav Res Methods 2024; 56:881-907. [PMID: 36890330; DOI: 10.3758/s13428-022-02056-3]
Abstract
Remote eye tracking with automated corneal reflection provides insights into the emergence and development of cognitive, social, and emotional functions in human infants and non-human primates. However, because most eye-tracking systems were designed for use in human adults, the accuracy of eye-tracking data collected in other populations is unclear, as are potential approaches to minimize measurement error. For instance, data quality may differ across species or ages, which are necessary considerations for comparative and developmental studies. Here we examined how the calibration method and adjustments to areas of interest (AOIs) of the Tobii TX300 changed the mapping of fixations to AOIs in a cross-species longitudinal study. We tested humans (N = 119) at 2, 4, 6, 8, and 14 months of age and macaques (Macaca mulatta; N = 21) at 2 weeks, 3 weeks, and 6 months of age. In all groups, we found improvement in the proportion of AOI hits detected as the number of successful calibration points increased, suggesting calibration approaches with more points may be advantageous. Spatially enlarging and temporally prolonging AOIs increased the number of fixation-AOI mappings, suggesting improvements in capturing infants' gaze behaviors; however, these benefits varied across age groups and species, suggesting different parameters may be ideal, depending on the population studied. In sum, to maximize usable sessions and minimize measurement error, eye-tracking data collection and extraction approaches may need adjustments for the age groups and species studied. Doing so may make it easier to standardize and replicate eye-tracking research findings.
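The AOI adjustment the authors describe, spatially enlarging an AOI to capture more fixation-AOI mappings, amounts to testing each gaze sample against a rectangle grown by a margin on every side. A minimal sketch; the AOI bounds, margin, and sample coordinates below are hypothetical screen-pixel values, not the study's parameters:

```python
def in_aoi(x, y, aoi, margin=0.0):
    """True if a gaze sample falls inside a rectangular AOI,
    optionally enlarged by `margin` pixels on every side."""
    left, top, right, bottom = aoi
    return (left - margin <= x <= right + margin and
            top - margin <= y <= bottom + margin)

def aoi_hit_proportion(samples, aoi, margin=0.0):
    """Proportion of gaze samples mapped to the (enlarged) AOI."""
    hits = sum(in_aoi(x, y, aoi, margin) for x, y in samples)
    return hits / len(samples)

# Hypothetical AOI (left, top, right, bottom) and gaze samples
aoi = (100.0, 100.0, 200.0, 200.0)
samples = [(150, 150), (95, 150), (300, 300)]
```

Enlarging the margin trades precision for recall: samples that fell just outside the AOI due to calibration error are recovered, at the risk of capturing looks that were genuinely elsewhere.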
Affiliation(s)
- Guangyu Zeng
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Annika Paukner
- Department of Psychology, Nottingham Trent University, Nottingham, UK
8
Hooge ITC, Niehorster DC, Hessels RS, Benjamins JS, Nyström M. How robust are wearable eye trackers to slow and fast head and body movements? Behav Res Methods 2023; 55:4128-4142. [PMID: 36326998; PMCID: PMC10700439; DOI: 10.3758/s13428-022-02010-3]
Abstract
How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8°. However, most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.
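Accuracy figures like the 5.8° above are angular offsets between the measured gaze direction and the direction from the eye to the fixated target. A hedged sketch of that computation from 3D direction vectors (the vectors below are illustrative; this is not the paper's processing pipeline):

```python
import math

def angular_error_deg(gaze, target):
    """Angle in degrees between a gaze direction vector and the
    direction vector toward the fixated target."""
    dot = sum(g * t for g, t in zip(gaze, target))
    ng = math.sqrt(sum(g * g for g in gaze))
    nt = math.sqrt(sum(t * t for t in target))
    # Clamp to [-1, 1] to guard against floating-point drift
    cos = max(-1.0, min(1.0, dot / (ng * nt)))
    return math.degrees(math.acos(cos))
```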
Affiliation(s)
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands.
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, and Social, Health and Organisational Psychology, Utrecht University, Utrecht, The Netherlands
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
9
Sander-Montant A, López Pérez M, Byers-Heinlein K. The more they hear the more they learn? Using data from bilinguals to test models of early lexical development. Cognition 2023; 238:105525. [PMID: 37402336; DOI: 10.1016/j.cognition.2023.105525]
Abstract
Children have an early ability to learn and comprehend words, a skill that develops as they age. A critical question remains regarding what drives this development. Maturation-based theories emphasise cognitive maturity as a driver of comprehension, while accumulator theories emphasise children's accumulation of language experience over time. In this study we used archival looking-while-listening data from 155 children aged 14-48 months with a range of exposure to the target languages (from 10% to 100%) to evaluate the relative contributions of maturation and experience. We compared four statistical models of noun learning: maturation-only, experience-only, additive (maturation plus experience), and accumulator (maturation times experience). The best-fitting model was the additive model in which both maturation (age) and experience were independent contributors to noun comprehension: older children as well as children who had more experience with the target language were more accurate and looked faster to the target in the looking-while-listening task. A 25% change in relative language exposure was equivalent to a 4 month change in age, and age effects were stronger at younger than at older ages. Whereas accumulator models predict that the lexical development of children with less exposure to a language (as is typical in bilinguals) should fall further and further behind children with more exposure to a language (such as monolinguals), our results indicate that bilinguals are buffered against effects of reduced exposure in each language. This study shows that continuous-level measures from individual children's looking-while-listening data, gathered from children with a range of language experience, provide a powerful window into lexical development.
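The model comparison above, additive (age plus exposure) versus accumulator (age times exposure), can be sketched as fitting both designs by ordinary least squares and comparing AIC. The data below are synthetic and generated under the additive hypothesis, so this illustrates only the comparison machinery, not the study's actual models (which were fit to looking-while-listening accuracy) or its data:

```python
import numpy as np

def fit_aic(X, y):
    """OLS fit; return AIC = n*ln(RSS/n) + 2k (smaller is better)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
age = rng.uniform(14, 48, 200)           # months (hypothetical sample)
exposure = rng.uniform(0.10, 1.00, 200)  # proportion of language input
# Synthetic accuracy generated under the additive hypothesis
acc = 0.3 + 0.005 * age + 0.2 * exposure + rng.normal(0, 0.02, 200)

ones = np.ones_like(age)
aic_additive = fit_aic(np.column_stack([ones, age, exposure]), acc)
aic_product = fit_aic(np.column_stack([ones, age * exposure]), acc)
```

With additively generated data, the additive design recovers the structure and yields the lower AIC, mirroring the form of comparison the study performed.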
Affiliation(s)
- Andrea Sander-Montant
- Concordia Infant Research Lab, Department of Psychology, Concordia University (Canada), 7141 Sherbrooke St. West, PY-033, Montréal, Québec H4B 1R6, Canada.
- Melanie López Pérez
- Concordia Infant Research Lab, Department of Psychology, Concordia University (Canada), 7141 Sherbrooke St. West, PY-033, Montréal, Québec H4B 1R6, Canada.
- Krista Byers-Heinlein
- Concordia Infant Research Lab, Department of Psychology, Concordia University (Canada), 7141 Sherbrooke St. West, PY-033, Montréal, Québec H4B 1R6, Canada.
10
Tzamaras HM, Wu HL, Moore JZ, Miller SR. Shifting Perspectives: A proposed framework for analyzing head-mounted eye-tracking data with dynamic areas of interest and dynamic scenes. Proc Hum Factors Ergon Soc Annu Meet 2023; 67:953-958. [PMID: 38450120; PMCID: PMC10914345; DOI: 10.1177/21695067231192929]
Abstract
Eye-tracking is a valuable research method for understanding human cognition and is readily employed in human factors research, including human factors in healthcare. While wearable mobile eye trackers have become more readily available, there are no existing analysis methods for accurately and efficiently mapping dynamic gaze data on dynamic areas of interest (AOIs), which limits their utility in human factors research. The purpose of this paper was to outline a proposed framework for automating the analysis of dynamic areas of interest by integrating computer vision and machine learning (CVML). The framework is then tested using a use-case of a Central Venous Catheterization trainer with six dynamic AOIs. While the results of the validity trial indicate there is room for improvement in the CVML method proposed, the framework provides direction and guidance for human factors researchers using dynamic AOIs.
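Once a computer-vision pipeline has produced per-frame bounding boxes for the tracked objects, the core of mapping dynamic gaze to dynamic AOIs is a point-in-box lookup for each frame. A minimal sketch of that final step; the AOI name, box coordinates, and gaze points are hypothetical and this is not the paper's CVML framework itself:

```python
def map_gaze_to_dynamic_aois(gaze_by_frame, boxes_by_frame):
    """For each video frame, label the gaze point with the tracked AOI
    (bounding box) that contains it, or None if it hits no AOI."""
    labels = []
    for frame, (x, y) in enumerate(gaze_by_frame):
        label = None
        for name, (left, top, right, bottom) in boxes_by_frame[frame].items():
            if left <= x <= right and top <= y <= bottom:
                label = name
                break
        labels.append(label)
    return labels

# Hypothetical: an AOI that moves between frames while gaze stays put
boxes = [{"needle": (0, 0, 10, 10)}, {"needle": (20, 20, 30, 30)}]
gaze = [(5, 5), (5, 5)]
labels = map_gaze_to_dynamic_aois(gaze, boxes)
```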
Affiliation(s)
- Hang-Ling Wu
- Pennsylvania State University Mechanical Engineering
- Jason Z Moore
- Pennsylvania State University Mechanical Engineering
11
Viktorsson C, Valtakari NV, Falck-Ytter T, Hooge ITC, Rudling M, Hessels RS. Stable eye versus mouth preference in a live speech-processing task. Sci Rep 2023; 13:12878. [PMID: 37553414; PMCID: PMC10409748; DOI: 10.1038/s41598-023-40017-8]
Abstract
Looking at the mouth region is thought to be a useful strategy for speech-perception tasks. The tendency to look at the eyes versus the mouth of another person during speech processing has thus far mainly been studied using screen-based paradigms. In this study, we estimated the eye-mouth-index (EMI) of 38 adult participants in a live setting. Participants were seated across the table from an experimenter, who read sentences out loud for the participant to remember in both a familiar (English) and unfamiliar (Finnish) language. No statistically significant difference in the EMI between the familiar and the unfamiliar languages was observed. Total relative looking time at the mouth also did not predict the number of correctly identified sentences. Instead, we found that the EMI was higher during an instruction phase than during the speech-processing task. Moreover, we observed high intra-individual correlations in the EMI across the languages and different phases of the experiment. We conclude that there are stable individual differences in looking at the eyes versus the mouth of another person. Furthermore, this behavior appears to be flexible and dependent on the requirements of the situation (speech processing or not).
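The abstract does not spell out the exact formula, but an eye-mouth index of this kind is commonly computed as eye dwell time divided by combined eye-and-mouth dwell time, so that 1.0 means exclusively eye-looking and 0.0 exclusively mouth-looking. A sketch under that assumption, not necessarily the authors' definition:

```python
def eye_mouth_index(t_eyes, t_mouth):
    """EMI = eye dwell time / (eye + mouth dwell time).
    Returns NaN if neither region was looked at."""
    total = t_eyes + t_mouth
    if total == 0:
        return float("nan")
    return t_eyes / total

# Hypothetical dwell times in seconds: 3 s on eyes, 1 s on mouth
emi = eye_mouth_index(3.0, 1.0)
```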
Affiliation(s)
- Charlotte Viktorsson
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden.
- Niilo V Valtakari
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Terje Falck-Ytter
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Center of Neurodevelopmental Disorders (KIND), Division of Neuropsychiatry, Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Maja Rudling
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
12
Curtis PR, Estabrook R, Roberts MY, Weisleder A. Sensitivity to Semantic Relationships in U.S. Monolingual English-Speaking Typical Talkers and Late Talkers. J Speech Lang Hear Res 2023; 66:2404-2420. [PMID: 37339002; PMCID: PMC10468120; DOI: 10.1044/2023_jslhr-22-00563]
Abstract
PURPOSE Late talkers (LTs) are a group of children who exhibit delays in language development without a known cause. Although a hallmark of LTs is a reduced expressive vocabulary, little is known about LTs' processing of semantic relations among words in their emerging vocabularies. This study uses an eye-tracking task to compare 2-year-old LTs' and typical talkers' (TTs') sensitivity to semantic relationships among early acquired words. METHOD U.S. monolingual English-speaking LTs (n = 21) and TTs (n = 24) completed a looking-while-listening task in which they viewed two images on a screen (e.g., a shirt and a pizza), while they heard words that referred to one of the images (e.g., Look! Shirt!; target-present condition) or a semantically related item (e.g., Look! Hat!; target-absent condition). Children's eye movements (i.e., looks to the target) were monitored to assess their sensitivity to these semantic relationships. RESULTS Both LTs and TTs looked longer at the semantically related image than the unrelated image on target-absent trials, demonstrating sensitivity to the taxonomic relationships used in the experiment. There was no significant group difference between LTs and TTs. Both groups also looked more to the target in the target-present condition than in the target-absent condition. CONCLUSIONS These results reveal that, despite possessing smaller expressive vocabularies, LTs have encoded semantic relationships in their receptive vocabularies and activate these during real-time language comprehension. This study furthers our understanding of LTs' emerging linguistic systems and language processing skills. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.23303987.
Affiliation(s)
- Philip R. Curtis
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
- Ryne Estabrook
- Department of Psychology, University of Illinois Chicago
- Megan Y. Roberts
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
- Adriana Weisleder
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
13
Krauze L, Delesa-Velina M, Pladere T, Krumina G. Why 2D layout in 3D images matters: evidence from visual search and eyetracking. J Eye Mov Res 2023; 16:10.16910/jemr.16.1.4. [PMID: 37965285 PMCID: PMC10643048 DOI: 10.16910/jemr.16.1.4]
Abstract
Precise perception of three-dimensional (3D) images is crucial for a rewarding experience when using novel displays. However, the capability of the human visual system to perceive binocular disparities varies across the visual field, meaning that depth perception may be affected by the two-dimensional (2D) layout of items on the screen. Nevertheless, potential difficulties in perceiving 3D images during free viewing have received little attention so far, limiting opportunities to enhance the visual effectiveness of information presentation. The aim of this study was to elucidate how the 2D layout of items in 3D images affects visual search and the distribution of sustained attention, based on analysis of the viewer's gaze. Participants searched for a target projected one plane closer to the viewer than the distractors on a multi-plane display. The 2D layout was manipulated by varying the item distance from the center of the display plane from 2° to 8°. Targets were identified correctly when items were displayed close to the center of the display plane, but the number of errors grew as this distance increased. Moreover, correct responses were given more often when subjects paid more attention to targets than to the other items on the screen, whereas a more balanced distribution of attention over time across all items was characteristic of incorrectly completed trials. Our results therefore suggest that items should be displayed close to each other in the 2D layout to facilitate precise perception of 3D images, and that eye-tracking-based assessment of the distribution of sustained attention may be useful for the objective evaluation of user experience with novel displays.
14
Vehlen A, Kellner A, Normann C, Heinrichs M, Domes G. Reduced eye gaze during facial emotion recognition in chronic depression: Effects of intranasal oxytocin. J Psychiatr Res 2023; 159:50-56. [PMID: 36657314 DOI: 10.1016/j.jpsychires.2023.01.016]
Abstract
Chronic depression disorders (CDD) are characterized by impaired social cognitive functioning. Visual attention during social perception is altered in clinical depression and is known to be sensitive to intranasal treatment with oxytocin (OT). The present study therefore investigated potential alterations in gaze preferences during a standardized facial emotion recognition (FER) task, using remote eye tracking, in patients with CDD, and the effect of a single dose of intranasal OT (compared to placebo). In emotion recognition, CDD patients were no more impaired than healthy controls, and there was no OT effect. However, CDD patients (under placebo) demonstrated less attentional preference for the eye region during FER than healthy controls, a difference that was not apparent in the CDD group after OT treatment. Our results suggest that despite largely preserved basic facial emotion recognition, attention in social perception may be altered in CDD, and that this bias may be sensitive to OT treatment. These findings highlight OT's potential as a means of augmenting psychotherapy.
Affiliation(s)
- Antonia Vehlen
- Department of Biological and Clinical Psychology, University of Trier, Germany
- Antonia Kellner
- Department of Psychiatry and Psychotherapy, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
- Claus Normann
- Department of Psychiatry and Psychotherapy, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany; Center for Neuromodulation, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Markus Heinrichs
- Department of Psychology, Laboratory for Biological Psychology, Clinical Psychology and Psychotherapy, University of Freiburg, Germany
- Gregor Domes
- Department of Biological and Clinical Psychology, University of Trier, Germany
15
Lin Z, Yang Z, Ye X. Immersive Experience and Climate Change Monitoring in Digital Landscapes: Evidence from Somatosensory Sense and Comfort. Int J Environ Res Public Health 2023; 20:3332. [PMID: 36834034 PMCID: PMC9966150 DOI: 10.3390/ijerph20043332]
Abstract
In this study, virtual engine software (Unity 2019; Unity Software Inc., San Francisco, CA, USA) was used to generate a digital landscape model, forming a virtual immersive environment. Through field investigation and emotional preference experiments, the ancient tree ecological area and the sunlight-exposed area were each monitored, and a somatosensory comfort evaluation model was established. Subjects showed the highest degree of interest in the ancient tree ecological area after the landscape roaming experience, with a mean variance in skin conductance (SC) fluctuation of 13.23% in the experiments. Subjects were in a low arousal state and showed a significant degree of interest in the digital landscape roaming scene; there was a significant correlation between positive emotion, somatosensory comfort, and the Rating of Perceived Exertion index, and somatosensory comfort in the ancient tree ecological area was higher than in the sunlight-exposed area. Somatosensory comfort level was also found to effectively distinguish the comfort of the ancient tree ecological area from that of the sunlight-exposed area, providing an important basis for monitoring extreme heat. This study concludes that, in pursuit of harmonious coexistence between humans and nature, the somatosensory comfort evaluation model can help reduce people's adverse views of extreme weather conditions.
Affiliation(s)
- Zhengsong Lin
- Virtual Landscape Design Lab, School of Art and Design, Wuhan Institute of Technology, Wuhan 430205, China
- Ziqian Yang
- Virtual Landscape Design Lab, School of Art and Design, Wuhan Institute of Technology, Wuhan 430205, China
- Xinyue Ye
- Department of Landscape Architecture and Urban Planning, Center for Geospatial Sciences, Applications and Technology, TAMIDS Design and Analytics Lab for Urban Artificial Intelligence, Texas A&M University, College Station, TX 77840, USA
16
Holmqvist K, Örbom SL, Hooge ITC, Niehorster DC, Alexander RG, Andersson R, Benjamins JS, Blignaut P, Brouwer AM, Chuang LL, Dalrymple KA, Drieghe D, Dunn MJ, Ettinger U, Fiedler S, Foulsham T, van der Geest JN, Hansen DW, Hutton SB, Kasneci E, Kingstone A, Knox PC, Kok EM, Lee H, Lee JY, Leppänen JM, Macknik S, Majaranta P, Martinez-Conde S, Nuthmann A, Nyström M, Orquin JL, Otero-Millan J, Park SY, Popelka S, Proudlock F, Renkewitz F, Roorda A, Schulte-Mecklenbeck M, Sharif B, Shic F, Shovman M, Thomas MG, Venrooij W, Zemblys R, Hessels RS. Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2023; 55:364-416. [PMID: 35384605 PMCID: PMC9535040 DOI: 10.3758/s13428-021-01762-8]
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").
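Two of the data-quality measures such reporting guidelines ask for are accuracy (systematic offset from a known validation target) and RMS sample-to-sample precision. A minimal sketch of both, assuming gaze samples already expressed in degrees of visual angle (the sample values below are hypothetical):

```python
import math

def accuracy_deg(gaze, target):
    """Mean Euclidean offset (in degrees) between gaze samples and a
    known validation target: the 'accuracy' figure typically reported."""
    return sum(math.hypot(x - target[0], y - target[1]) for x, y in gaze) / len(gaze)

def rms_s2s_deg(gaze):
    """Root-mean-square sample-to-sample distance (in degrees): a
    common precision measure for eye-tracking data."""
    d2 = [(x2 - x1) ** 2 + (y2 - y1) ** 2
          for (x1, y1), (x2, y2) in zip(gaze, gaze[1:])]
    return math.sqrt(sum(d2) / len(d2))

# Hypothetical fixation samples (deg) recorded while a participant
# looked at a validation target at (0, 0):
gaze = [(0.10, -0.05), (0.12, -0.04), (0.09, -0.06), (0.11, -0.05), (0.10, -0.05)]
offset = accuracy_deg(gaze, (0.0, 0.0))   # systematic offset, ~0.12 deg
noise = rms_s2s_deg(gaze)                 # sample-to-sample noise, much smaller
```

Note that the two measures are independent: a tracker can be precise (low RMS-S2S) yet inaccurate (large constant offset), which is why guidelines ask for both.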
Affiliation(s)
- Kenneth Holmqvist
- Department of Psychology, Nicolaus Copernicus University, Torun, Poland
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Department of Psychology, Regensburg University, Regensburg, Germany
- Saga Lee Örbom
- Department of Psychology, Regensburg University, Regensburg, Germany
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Social, Health and Organizational Psychology, Utrecht University, Utrecht, The Netherlands
- Pieter Blignaut
- Department of Computer Science and Informatics, University of the Free State, Bloemfontein, South Africa
- Lewis L Chuang
- Department of Ergonomics, Leibniz Institute for Working Environments and Human Factors, Dortmund, Germany
- Institute of Informatics, LMU Munich, Munich, Germany
- Denis Drieghe
- School of Psychology, University of Southampton, Southampton, UK
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Susann Fiedler
- Vienna University of Economics and Business, Vienna, Austria
- Tom Foulsham
- Department of Psychology, University of Essex, Essex, UK
- Dan Witzner Hansen
- Machine Learning Group, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Enkelejda Kasneci
- Human-Computer Interaction, University of Tübingen, Tübingen, Germany
- Paul C Knox
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Ellen M Kok
- Department of Education and Pedagogy, Division Education, Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
- Department of Online Learning and Instruction, Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, The Netherlands
- Helena Lee
- University of Southampton, Southampton, UK
- Joy Yeonjoo Lee
- School of Health Professions Education, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jukka M Leppänen
- Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- Stephen Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Päivi Majaranta
- TAUCHI Research Center, Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Kiel, Germany
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Jacob L Orquin
- Department of Management, Aarhus University, Aarhus, Denmark
- Center for Research in Marketing and Consumer Psychology, Reykjavik University, Reykjavik, Iceland
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, Vienna, Austria
- Stanislav Popelka
- Department of Geoinformatics, Palacký University Olomouc, Olomouc, Czech Republic
- Frank Proudlock
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Frank Renkewitz
- Department of Psychology, University of Erfurt, Erfurt, Germany
- Austin Roorda
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Bonita Sharif
- School of Computing, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
- Frederick Shic
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA
- Department of General Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Mark Shovman
- Eyeviation Systems, Herzliya, Israel
- Department of Industrial Design, Bezalel Academy of Arts and Design, Jerusalem, Israel
- Mervyn G Thomas
- The University of Leicester Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, UK
- Ward Venrooij
- Electrical Engineering, Mathematics and Computer Science (EEMCS), University of Twente, Enschede, The Netherlands
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
17
Jou YT, Mariñas KAA. Developing inclusive lateral layouts for students with dyslexia - Chinese reading materials as an example. Res Dev Disabil 2023; 132:104389. [PMID: 36508778 DOI: 10.1016/j.ridd.2022.104389]
Abstract
BACKGROUND/AIM Students with learning disabilities have difficulties with reading; however, their IQ is no lower than that of ordinary students of the same age. In this study, the authors and schoolteachers developed three articles as reading materials: Article A with a standard layout, Article B with keywords in various font sizes, and Article C with a related illustration. METHODS Eye-movement data and reading tests were collected from thirty students, 15 of whom have dyslexia. An eye-tracking methodology was employed to assess the dyslexic students' reading patterns and behavior. RESULTS ANOVA shows differences in reading test performance among students for Article A with the usual layout [F(1, 28) = 133.16, p < .001], but no significant differences for the other two articles. Based on the gaze map analysis, Article C (with illustration) improved the reading completeness of the dyslexic students more than Article A and Article B (eight of the fifteen dyslexic students completed the reading during our experiment). CONCLUSION The results affirm that special layouts and narrative writing styles can improve the reading attention of students with dyslexia. These results and conclusions can serve as a reference for future teaching materials or lesson preparation using lateral layouts for people with dyslexia.
Affiliation(s)
- Yung-Tsan Jou
- Department of Industrial and Systems Engineering, Chung Yuan Christian University, Taoyuan, Taiwan
- Klint Allen A Mariñas
- Department of Industrial and Systems Engineering, Chung Yuan Christian University, Taoyuan, Taiwan; School of Industrial Engineering and Engineering Management, Mapua University, Manila, the Philippines
18
Tönsing D, Schiller B, Vehlen A, Spenthof I, Domes G, Heinrichs M. No evidence that gaze anxiety predicts gaze avoidance behavior during face-to-face social interaction. Sci Rep 2022; 12:21332. [PMID: 36494411 PMCID: PMC9734162 DOI: 10.1038/s41598-022-25189-z]
Abstract
Eye contact is an indispensable social signal, yet for some individuals it is also a source of discomfort they fear and avoid. However, it is still unknown whether gaze anxiety actually produces avoidant gaze behavior in naturalistic, face-to-face interactions. Here, we relied on a novel dual eye-tracking setup that allowed us to assess interactive gaze behavior. To investigate the effect of gaze anxiety on gaze behavior, we a priori created groups of participants reporting high or low levels of gaze anxiety. These participants (n = 51) then performed a semi-standardized interaction with a previously unknown individual reporting a medium level of gaze anxiety. The gaze behavior of the two groups did not differ in either classical one-way eye-tracking parameters (e.g., unilateral eye gaze) or interactive two-way ones (e.g., mutual gaze). Furthermore, participants' subjective ratings of the interaction did not differ between groups. Gaze-anxious individuals thus seem to exhibit normal gaze behavior that does not hamper the perceived quality of interactions in a naturalistic face-to-face setup. Our findings point to the existence of cognitive distortions in gaze-anxious individuals, whose outward behavior may be less affected by their anxiety than they fear.
Affiliation(s)
- Daniel Tönsing
- Laboratory for Biological Psychology, Clinical Psychology and Psychotherapy, Department of Psychology, Albert-Ludwigs University of Freiburg, Stefan-Meier-Straße 8, 79104 Freiburg, Germany
- Bastian Schiller
- Laboratory for Biological Psychology, Clinical Psychology and Psychotherapy, Department of Psychology, Albert-Ludwigs University of Freiburg, Stefan-Meier-Straße 8, 79104 Freiburg, Germany; Freiburg Brain Imaging Center, University Medical Center, Albert-Ludwigs University of Freiburg, Freiburg, Germany
- Antonia Vehlen
- Department of Biological and Clinical Psychology, University of Trier, Trier, Germany
- Ines Spenthof
- Laboratory for Biological Psychology, Clinical Psychology and Psychotherapy, Department of Psychology, Albert-Ludwigs University of Freiburg, Stefan-Meier-Straße 8, 79104 Freiburg, Germany
- Gregor Domes
- Department of Biological and Clinical Psychology, University of Trier, Trier, Germany
- Markus Heinrichs
- Laboratory for Biological Psychology, Clinical Psychology and Psychotherapy, Department of Psychology, Albert-Ludwigs University of Freiburg, Stefan-Meier-Straße 8, 79104 Freiburg, Germany; Freiburg Brain Imaging Center, University Medical Center, Albert-Ludwigs University of Freiburg, Freiburg, Germany
19
Vargas-Alvarez MA, Al-Sehaim H, Brunstrom JM, Castelnuovo G, Navas-Carretero S, Martínez JA, Almiron-Roig E. Development and validation of a new methodological platform to measure behavioral, cognitive, and physiological responses to food interventions in real time. Behav Res Methods 2022; 54:2777-2801. [PMID: 35102518 PMCID: PMC8802991 DOI: 10.3758/s13428-021-01745-9]
Abstract
To fully understand the causes and mechanisms involved in overeating and obesity, measures of both cognitive and physiological determinants of eating behavior need to be integrated. Effectively synchronizing behavioral measures such as meal micro-structure (e.g., eating speed), cognitive processing of sensory stimuli, and metabolic parameters can be complex. However, this step is central to understanding the impact of food interventions on body weight. In this paper, we provide an overview of the existing gaps in eating behavior research and describe the development and validation of a new methodological platform to address some of these issues. As part of a controlled trial, 76 men and women self-served and consumed food from a buffet on two different days, in random order, using either a portion-control plate with visual stimuli for appropriate amounts of the main food groups or a conventional plate. In both sessions participants completed behavioral and cognitive tests using a novel methodological platform that measured gaze movement (as a proxy for visual attention), eating rate and bite size, memory for portion sizes, subjective appetite, and portion-size perceptions. In a sub-sample of women, hormonal secretion in response to the meal was also measured. The novel platform showed a significant improvement in meal micro-structure measures over published data (13% vs. 33% failure rate) and high comparability between an automated gaze-mapping protocol and manual coding for eye-tracking studies involving an eating test (ICC between methods 0.85; 90% CI 0.74, 0.92). This trial was registered at ClinicalTrials.gov with identifier NCT03610776.
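The agreement figure above (ICC 0.85 between automated and manual gaze coding) is an intraclass correlation. The abstract does not state which ICC form was used, so the sketch below assumes the two-way, absolute-agreement, single-measures form, ICC(A,1), computed from the standard ANOVA mean squares:

```python
def icc_a1(scores):
    """ICC(A,1): two-way model, absolute agreement, single measures.
    `scores` is a list of per-item rows with one column per coder,
    e.g. [[auto, manual], ...] for an automated-vs-manual comparison."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_total = sum((v - grand) ** 2 for row in scores for v in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between items
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between coders
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # mean square, rows (items)
    msc = ss_cols / (k - 1)                 # mean square, columns (coders)
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

For perfectly agreeing coders the statistic is exactly 1; unlike a Pearson correlation, it is also penalized by any systematic offset between the two coders, which is why it suits method-comparison studies like this one.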
Affiliation(s)
- M A Vargas-Alvarez
- Center for Nutrition Research, University of Navarra, 31008, Pamplona, Spain
- Department of Nutrition, Food Science and Physiology, Faculty of Pharmacy and Nutrition, University of Navarra, Pamplona, Spain
- H Al-Sehaim
- School of Biological and Health Sciences, Technological University Dublin, Dublin, Ireland
- J M Brunstrom
- School of Psychological Science, University of Bristol, Bristol, UK
- G Castelnuovo
- Center for Nutrition Research, University of Navarra, 31008, Pamplona, Spain
- S Navas-Carretero
- Center for Nutrition Research, University of Navarra, 31008, Pamplona, Spain
- Department of Nutrition, Food Science and Physiology, Faculty of Pharmacy and Nutrition, University of Navarra, Pamplona, Spain
- Spanish Biomedical Research Centre in Physiopathology of Obesity and Nutrition (CIBERobn), Institute of Health Carlos III, Madrid, Spain
- Navarra Institute for Health Research (IdiSNa), Pamplona, Spain
- J A Martínez
- Department of Nutrition, Food Science and Physiology, Faculty of Pharmacy and Nutrition, University of Navarra, Pamplona, Spain
- Spanish Biomedical Research Centre in Physiopathology of Obesity and Nutrition (CIBERobn), Institute of Health Carlos III, Madrid, Spain
- E Almiron-Roig
- Center for Nutrition Research, University of Navarra, 31008, Pamplona, Spain
- Department of Nutrition, Food Science and Physiology, Faculty of Pharmacy and Nutrition, University of Navarra, Pamplona, Spain
- Navarra Institute for Health Research (IdiSNa), Pamplona, Spain
20
Seyedi S, Jiang Z, Levey A, Clifford GD. An investigation of privacy preservation in deep learning-based eye-tracking. Biomed Eng Online 2022; 21:67. [PMID: 36100851 PMCID: PMC9469631 DOI: 10.1186/s12938-022-01035-1]
Abstract
Background The expanding use of complex machine learning methods such as deep learning has led to an explosion in human activity recognition, particularly as applied to health. However, complex models that handle private, and sometimes protected, data raise concerns about the potential leakage of identifiable data. In this work, we focus on the case of a deep network model trained on images of individual faces. Materials and methods A previously published deep learning model, trained to estimate gaze from full-face image sequences, was stress-tested for personal information leakage by a white-box membership inference attack. Full-face video recordings of 493 individuals undergoing an eye-tracking-based evaluation of neurological function were used. Outputs, gradients, intermediate layer outputs, loss, and labels were used as inputs for a deep network with an added support vector machine emission layer to recognize membership in the training data. Results The inference attack method and the associated mathematical analysis indicate a low likelihood of unintended memorization of facial features in the deep learning model. Conclusions This study shows that the model preserves the integrity of its training data with reasonable confidence. The same process can be applied under similar conditions to other models.
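The principle behind a membership inference attack is that a model that has memorized its training data behaves measurably differently on members than on non-members, most simply by assigning them lower loss. The paper's attack is far richer (a deep network with an SVM emission layer over outputs, gradients, and intermediate activations); the sketch below only illustrates the idea with the simplest loss-threshold variant, on hypothetical per-sample losses:

```python
import random

def threshold_attack(member_losses, nonmember_losses):
    """Loss-threshold membership inference: members of the training set
    tend to have lower loss, so sweep a threshold over all observed
    losses and keep the one that best separates the two groups."""
    best_thr, best_acc = None, 0.0
    n = len(member_losses) + len(nonmember_losses)
    for thr in sorted(member_losses + nonmember_losses):
        correct = (sum(l <= thr for l in member_losses)
                   + sum(l > thr for l in nonmember_losses))
        if correct / n > best_acc:
            best_thr, best_acc = thr, correct / n
    return best_thr, best_acc

random.seed(0)
# Hypothetical per-sample losses for a leaky model that fits its
# training members much better than unseen faces:
members = [random.gauss(0.2, 0.05) for _ in range(200)]
nonmembers = [random.gauss(0.6, 0.1) for _ in range(200)]
thr, acc = threshold_attack(members, nonmembers)
```

High attack accuracy signals memorization, whereas accuracy near 0.5 (chance) is the outcome that supports a conclusion of low leakage, as reported in the paper.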
Affiliation(s)
- Salman Seyedi
- Biomedical Informatics, School of Medicine, Emory, Atlanta, USA
- Zifan Jiang
- Biomedical Informatics, School of Medicine, Emory, Atlanta, USA; Biomedical Engineering, Georgia Institute of Technology, Atlanta, USA
- Allan Levey
- Neurology, School of Medicine, Emory, Atlanta, USA
- Gari D Clifford
- Biomedical Informatics, School of Medicine, Emory, Atlanta, USA; Biomedical Engineering, Georgia Institute of Technology, Atlanta, USA
21
Hermens F, Zdravković S. Visual attention in change blindness for objects and shadows. Perception 2022; 51:605-623. [PMID: 35971314 PMCID: PMC9434251 DOI: 10.1177/03010066221109936]
Abstract
Studies have found that observers pay less attention to cast shadows in images than to better illuminated regions. In line with such observations, a recent study has suggested stronger change blindness for shadows than for objects (Ehinger et al., 2016). We here examine the role of (overt) visual attention in these findings by recording participants' eye movements. Participants first viewed all original images (without changes). They then performed a change detection task on a subset of the images with changes in objects or shadows. During both tasks, their eye movements were recorded. In line with the original study, objects (subject to change in the change detection task) were fixated more often than shadows. In contrast to the previous study, better change detection was found for shadows than for objects. The improved change detection for shadows may be explained by the balancing of trials with object and shadow changes in the present study. Eye movements during change detection indicated that participants searched the bottom half of the images. Shadows were more often present in this region, which may explain why they were easier to find.
Affiliation(s)
- Frouke Hermens
- Department of Computer Science, Open University of the Netherlands, the Netherlands
- Sunčica Zdravković
- Laboratory for Experimental Psychology, Psychology Department, University of Novi Sad, Serbia; Laboratory for Experimental Psychology, University of Belgrade, Serbia
22
Bouw N, Swaab H, Tartaglia N, Cordeiro L, van Rijn S. The impact of sex chromosome trisomies (XXX, XXY, XYY) on gaze towards faces and affect recognition: a cross-sectional eye tracking study. J Neurodev Disord 2022; 14:44. [PMID: 35918661 PMCID: PMC9347080 DOI: 10.1186/s11689-022-09453-x]
Abstract
BACKGROUND About 1 in 650-1000 children are born with an extra X or Y chromosome (47,XXX; 47,XXY; 47,XYY), resulting in a sex chromosome trisomy (SCT). This international cross-sectional study was designed to investigate gaze towards faces and affect recognition during early life in children with SCT, with the aim of finding indicators for support and treatment. METHODS A group of 101 children with SCT (aged 1-7 years; Mage = 3.7 years) was included in this study, as well as a population-based sample of 98 children without SCT (Mage = 3.7 years). Eye gaze patterns to faces were measured using an eye-tracking method that quantifies first fixations and fixation durations on the eyes of static faces, and fixation durations on eyes and faces in a dynamic paradigm (with two conditions: single face and multiple faces). Affect recognition was measured using the Affect Recognition subtest of the NEPSY-II neuropsychological test battery. Recruitment and assessment took place in the Netherlands and the USA. RESULTS Eye-tracking results reveal that children with SCT show a lower proportion of fixation duration on faces from the age of 3 years, compared to children without SCT. Impairments in the clinical range for affect recognition were also found (32.2% of the SCT group scored in the well-below-average range). CONCLUSIONS These results highlight the importance of further exploring the development of social cognitive skills in children with SCT in a longitudinal design, of monitoring affect recognition skills, and of implementing (preventive) interventions to support the development of attention to socially important information and affect recognition.
Affiliation(s)
- Nienke Bouw
- Clinical Neurodevelopmental Sciences, Faculty of Social and Behavioral Sciences, Leiden University, PO Box 9500, 2300 RA, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition, Leiden, The Netherlands
- Hanna Swaab
- Clinical Neurodevelopmental Sciences, Faculty of Social and Behavioral Sciences, Leiden University, PO Box 9500, 2300 RA, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition, Leiden, The Netherlands
- Nicole Tartaglia
- Developmental Pediatrics, University of Colorado School of Medicine, Children's Hospital Colorado, Aurora, CO, USA
- Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
- Lisa Cordeiro
- Developmental Pediatrics, University of Colorado School of Medicine, Children's Hospital Colorado, Aurora, CO, USA
- Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
- Sophie van Rijn
- Clinical Neurodevelopmental Sciences, Faculty of Social and Behavioral Sciences, Leiden University, PO Box 9500, 2300 RA, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition, Leiden, The Netherlands
23
Nayar K, Shic F, Winston M, Losh M. A constellation of eye-tracking measures reveals social attention differences in ASD and the broad autism phenotype. Mol Autism 2022; 13:18. [PMID: 35509089 PMCID: PMC9069739 DOI: 10.1186/s13229-022-00490-w]
Abstract
Background Social attention differences, expressed through gaze patterns, have been documented in autism spectrum disorder (ASD), with subtle differences also reported among first-degree relatives, suggesting a shared genetic link. Findings have mostly been derived from standard eye-tracking methods (total fixation count or total fixation duration). Given the dynamics of visual attention, these standard methods may obscure subtle, yet core, differences in visual attention mechanisms, particularly those presenting sub-clinically. This study applied a constellation of eye-tracking analyses to gaze data from individuals with ASD and their parents. Methods The study included 156 participants: an ASD group (n = 24), a control group (n = 32), parents of individuals with ASD (n = 61), and control parents (n = 39). A complex scene with social and non-social elements was displayed and gaze was tracked via an eye tracker. Eleven analytic measures from the following categories were examined: (1) standard variables, (2) temporal dynamics (e.g., gaze over time), (3) fixation patterns (e.g., perseverative or regressive fixations), (4) first fixations, and (5) distribution patterns. MANOVAs, growth curve analyses, and Chi-squared tests were applied to examine group differences. Finally, group differences were examined on component scores derived from a principal component analysis (PCA) that reduced variables to distinct dimensions. Results No group differences emerged among standard, first fixation, and distribution pattern variables. Both the ASD and ASD parent groups demonstrated on average reduced social attention over time and atypical perseverative fixations. Lower social attention factor scores derived from the PCA strongly differentiated the ASD and ASD parent groups from controls, with parent findings driven by the subset of parents demonstrating the broad autism phenotype.
Limitations To generalize these findings, larger sample sizes, extended viewing contexts (e.g., dynamic stimuli), and additional eye-tracking analytical methods are needed. Conclusions Fixations over time and perseverative fixations differentiated the ASD and ASD parent groups from controls, with the PCA most robustly capturing social attention differences. Findings highlight the methodological utility of these measures in studies of the (broad) autism spectrum for capturing nuanced visual attention differences that may relate to clinical symptoms in ASD and reflect genetic liability in clinically unaffected relatives. This proof-of-concept study may inform future studies using eye tracking across populations where social attention is impacted. Supplementary Information The online version contains supplementary material available at 10.1186/s13229-022-00490-w.
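Of the less standard measures above, perseverative and regressive fixations are defined over the ordered sequence of AOI visits rather than aggregate totals. As a hedged sketch of one possible operationalisation (the paper's exact definitions may differ), a fixation can be counted as regressive when gaze returns to an AOI visited earlier:

```python
def count_regressions(aoi_sequence):
    """Count fixations that return to an AOI visited earlier,
    after gaze has moved elsewhere (one simple reading of a
    'regressive' fixation; study definitions vary)."""
    seen, regressions, previous = set(), 0, None
    for aoi in aoi_sequence:
        if aoi != previous and aoi in seen:
            regressions += 1
        seen.add(aoi)
        previous = aoi
    return regressions

scanpath = ["eyes", "mouth", "eyes", "nose", "mouth"]  # hypothetical scanpath
```

Here the two returns (to "eyes", then to "mouth") count as regressions; consecutive fixations within the same AOI do not.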
Affiliation(s)
- Kritika Nayar
- Neurodevelopmental Disabilities Lab, Roxelyn and Richard Pepper, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Frederick Shic
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA; Department of Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Molly Winston
- Neurodevelopmental Disabilities Lab, Roxelyn and Richard Pepper, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Molly Losh
- Neurodevelopmental Disabilities Lab, Roxelyn and Richard Pepper, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA.
24
Liu C, Sharma C, Xu Q, Gonzalez Viejo C, Fuentes S, Torrico DD. Influence of Label Design and Country of Origin Information in Wines on Consumers' Visual, Sensory, and Emotional Responses. Sensors (Basel) 2022; 22:2158. [PMID: 35336334 PMCID: PMC8949006 DOI: 10.3390/s22062158]
Abstract
This study aimed to evaluate, using eye tracking, the influence of origin information on Pinot Noir wine labels and its associations with purchase intent and with hedonic and subconscious emotional responses. Two studies were carried out with untrained university staff and students aged 20-60 years. Study 1 assessed consumers' (n = 55; 55% males, 45% females) self-reported and subconscious responses towards four label designs (with and without a New Zealand origin name/script or origin logo), using eye tracking and video analysis to evaluate participants' emotions. In study 2, participants (n = 72; 56% males, 44% females) blind-tasted the same wine sample presented under the different labels while recording their self-reported responses. In study 1, no significant differences were found in fixations between the origin name/script and the origin logo; however, participants paid more attention to the image and the brand name on the wine labels. In study 2, no significant effects on emotional responses were found with or without the origin name/script or logo. Nonetheless, a multiple factor analysis showed either negative or no associations between the baseline (wine with no label) and the samples showing the different labels, even though the taste of the wine samples was the same, confirming an influence of the label on wine appreciation. Across studies 1 and 2, origin information affected purchase intent and hedonic responses only marginally. These findings can be used to design wine labels for e-commerce.
Affiliation(s)
- Chang Liu
- Centre of Excellence—Food for Future Consumers, Department of Wine, Food and Molecular Biosciences, Faculty of Agriculture and Life Sciences, Lincoln University, Lincoln 7647, New Zealand; (C.L.); (C.S.); (Q.X.)
- Chetan Sharma
- Centre of Excellence—Food for Future Consumers, Department of Wine, Food and Molecular Biosciences, Faculty of Agriculture and Life Sciences, Lincoln University, Lincoln 7647, New Zealand; (C.L.); (C.S.); (Q.X.)
- Qiqi Xu
- Centre of Excellence—Food for Future Consumers, Department of Wine, Food and Molecular Biosciences, Faculty of Agriculture and Life Sciences, Lincoln University, Lincoln 7647, New Zealand; (C.L.); (C.S.); (Q.X.)
- Claudia Gonzalez Viejo
- Digital Agriculture Food and Wine Group, School of Agriculture and Food, Faculty of Veterinary and Agricultural Sciences, University of Melbourne, Parkville, VIC 3010, Australia; (C.G.V.); (S.F.)
- Sigfredo Fuentes
- Digital Agriculture Food and Wine Group, School of Agriculture and Food, Faculty of Veterinary and Agricultural Sciences, University of Melbourne, Parkville, VIC 3010, Australia; (C.G.V.); (S.F.)
- Damir D. Torrico
- Centre of Excellence—Food for Future Consumers, Department of Wine, Food and Molecular Biosciences, Faculty of Agriculture and Life Sciences, Lincoln University, Lincoln 7647, New Zealand; (C.L.); (C.S.); (Q.X.)
25
Bouw N, Swaab H, van Rijn S. Early Preventive Intervention for Young Children With Sex Chromosome Trisomies (XXX, XXY, XYY): Supporting Social Cognitive Development Using a Neurocognitive Training Program Targeting Facial Emotion Understanding. Front Psychiatry 2022; 13:807793. [PMID: 35280174 PMCID: PMC8913493 DOI: 10.3389/fpsyt.2022.807793]
Abstract
Background Sex Chromosome Trisomies (SCTs; XXX, XXY, XYY) are genetic conditions associated with increased risk for neurodevelopmental problems and psychopathology. There is a great need for early preventive intervention programs to optimize outcome, especially considering the increase in prenatal diagnoses due to recent advances in non-invasive prenatal screening. This study is the first to evaluate the efficacy of a neurocognitive training in children with SCT. As social behavioral problems have been identified as among the key areas of vulnerability, the training targeted a core aspect of social cognition: the understanding of social cues from facial expressions. Methods Participants were 24 children with SCT and 18 typically developing children, aged 4-8 years. Children with SCT were assigned to a training (n = 13) or waiting-list (no-training) group (n = 11). Children in the training group completed a neurocognitive training program (The Transporters) aimed at increasing understanding of facial emotions. Participants were tested before and after the training on facial emotion recognition and Theory of Mind abilities (NEPSY-II), and on social orienting (eye-tracking paradigm). The SCT no-training group and the typically developing control group were also assessed twice over the same interval without any training. Feasibility of the training was evaluated with the Social Validity Questionnaire filled out by the parents and by children's ratings on a Visual Analog Scale. Results The SCT training group improved significantly more than the SCT no-training and TD no-training groups on facial emotion recognition (large effect size; ηp² = 0.28), performing comparably to typical controls after completing the training program. There were no training effects on ToM abilities or social orienting. Both children and parents expressed satisfaction with the feasibility of the training.
Conclusions The significant improvement in facial emotion recognition, with a large effect size, suggests that there are opportunities for positively supporting the development of social cognition in children with an extra X or Y chromosome, already at a very young age. This evidence-based support is of great importance given the need for preventive and early training programs in children with SCT, aimed at minimizing neurodevelopmental impact.
Affiliation(s)
- Nienke Bouw
- Clinical Neurodevelopmental Sciences, Department of Education and Child Studies, Leiden University, Leiden, Netherlands
- Leiden Institute for Brain and Cognition, Leiden, Netherlands
- Hanna Swaab
- Clinical Neurodevelopmental Sciences, Department of Education and Child Studies, Leiden University, Leiden, Netherlands
- Leiden Institute for Brain and Cognition, Leiden, Netherlands
- Sophie van Rijn
- Clinical Neurodevelopmental Sciences, Department of Education and Child Studies, Leiden University, Leiden, Netherlands
- Leiden Institute for Brain and Cognition, Leiden, Netherlands
26
Vehlen A, Standard W, Domes G. How to choose the size of facial areas of interest in interactive eye tracking. PLoS One 2022; 17:e0263594. [PMID: 35120188 PMCID: PMC8815978 DOI: 10.1371/journal.pone.0263594]
Abstract
Advances in eye tracking technology have enabled the development of interactive experimental setups to study social attention. Since these setups differ substantially from the eye tracker manufacturer's test conditions, validation is essential with regard to the quality of gaze data and other factors potentially threatening the validity of this signal. In this study, we evaluated the impact of accuracy and areas of interest (AOIs) size on the classification of simulated gaze (fixation) data. We defined AOIs of different sizes using the Limited-Radius Voronoi-Tessellation (LRVT) method, and simulated gaze data for facial target points with varying accuracy. As hypothesized, we found that accuracy and AOI size had strong effects on gaze classification. In addition, these effects were not independent and differed in falsely classified gaze inside AOIs (Type I errors; false alarms) and falsely classified gaze outside the predefined AOIs (Type II errors; misses). Our results indicate that smaller AOIs generally minimize false classifications as long as accuracy is good enough. For studies with lower accuracy, Type II errors can still be compensated to some extent by using larger AOIs, but at the cost of more probable Type I errors. Proper estimation of accuracy is therefore essential for making informed decisions regarding the size of AOIs in eye tracking research.
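For nearest-landmark gaze classification, membership in a limited-radius Voronoi cell is equivalent to lying within the radius of the closest landmark. A minimal sketch of that rule; the landmark coordinates and the 50 px radius below are hypothetical, not taken from the paper:

```python
import math

def classify_gaze(gaze, landmarks, radius):
    """Assign a gaze sample to the AOI of the nearest facial
    landmark, but only if it lies within `radius` pixels of
    that landmark; otherwise return None (unclassified)."""
    label, dist = min(
        ((name, math.hypot(gaze[0] - x, gaze[1] - y))
         for name, (x, y) in landmarks.items()),
        key=lambda item: item[1],
    )
    return label if dist <= radius else None

landmarks = {"left_eye": (320, 200), "right_eye": (400, 200), "mouth": (360, 320)}
```

Enlarging `radius` trades Type II errors (misses) for Type I errors (false alarms), mirroring the accuracy-by-AOI-size interaction reported above.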
Affiliation(s)
- Antonia Vehlen
- Department of Psychology, Biological and Clinical Psychology, University of Trier, Trier, Germany
- William Standard
- Department of Psychology, Biological and Clinical Psychology, University of Trier, Trier, Germany
- Gregor Domes
- Department of Psychology, Biological and Clinical Psychology, University of Trier, Trier, Germany
27
Bouw N, Swaab H, Tartaglia N, van Rijn S. The Impact of Sex Chromosome Trisomies (XXX, XXY, XYY) on Early Social Cognition: Social Orienting, Joint Attention, and Theory of Mind. Arch Clin Neuropsychol 2022; 37:63-77. [PMID: 34101798 PMCID: PMC8763088 DOI: 10.1093/arclin/acab042]
Abstract
OBJECTIVE About 1 in 650-1,000 children are born with an extra X or Y chromosome (XXX; XXY; XYY), resulting in a sex chromosome trisomy (SCT). This study aims to cross-sectionally investigate the impact of SCT on early social cognitive skills. Basic orienting toward social cues, joint attention, and theory of mind (ToM) in young children with SCT were evaluated. METHOD A total of 105 children with SCT (aged 1-7 years) were included in this study, as well as 96 age-matched nonclinical controls. Eye-tracking paradigms were used to investigate eye gaze patterns indicative of joint attention skills and orienting to social interactions. ToM abilities were measured using the ToM subtest of the NEPSY-II (Developmental NEuroPSYchological Assessment, Second Edition) neuropsychological test battery. Recruitment and assessment took place in the Netherlands and in the United States. RESULTS Eye-tracking results revealed difficulties in social orienting in children with SCT. These difficulties were more pronounced in children aged 3 years and older, and in boys with 47,XYY. Difficulties in joint attention were found across all age groups and karyotypes. Children with SCT showed impairments in ToM (26.3% at the [well] below expected level), increasing with age. These impairments did not differ between karyotypes. CONCLUSIONS An impact of SCT on social cognitive abilities was found already at an early age, indicating the need for early monitoring and support of social cognition. Future research should explore the longitudinal trajectories of social development in order to evaluate the predictive relationships between social cognition and outcome later in life in terms of social functioning and the risk for psychopathology.
Affiliation(s)
- S van Rijn
- Corresponding author at: Wassenaarseweg 52, 2333 AK, Leiden, The Netherlands. Tel: +31 71 527 1846; E-mail address: (S. van Rijn)
28
Mahanama B, Jayawardana Y, Rengarajan S, Jayawardena G, Chukoskie L, Snider J, Jayarathna S. Eye Movement and Pupil Measures: A Review. Front Comput Sci 2022. [DOI: 10.3389/fcomp.2021.733531]
Abstract
Our subjective visual experiences involve a complex interaction between our eyes, our brain, and the surrounding world. This interaction gives us the sense of sight, color, stereopsis, distance, pattern recognition, motor coordination, and more. The increasing ubiquity of gaze-aware technology brings with it the ability to track gaze and pupil measures with varying degrees of fidelity. With this in mind, a review that considers the various gaze measures becomes increasingly relevant, especially considering our ability to make sense of these signals given different spatio-temporal sampling capacities. In this paper, we selectively review prior work on eye movements and pupil measures. We first describe the main oculomotor events studied in the literature, and the characteristics they exhibit that are exploited by different measures. Next, we review various eye movement and pupil measures from prior literature. Finally, we discuss our observations based on applications of these measures, the benefits and practical challenges involving these measures, and our recommendations on future eye-tracking research directions.
29
Masedu F, Vagnetti R, Pino MC, Valenti M, Mazza M. Comparison of Visual Fixation Trajectories in Toddlers with Autism Spectrum Disorder and Typical Development: A Markov Chain Model. Brain Sci 2021; 12:10. [PMID: 35053753 PMCID: PMC8773751 DOI: 10.3390/brainsci12010010]
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental condition in which visual attention and visual search strategies are altered. Eye-tracking paradigms have been used to detect these changes. In our study, 18 toddlers with ASD and 18 toddlers with typical development (TD; age range 12-36 months) underwent an eye-tracking paradigm in which a face was shown together with a series of objects. Eye gaze was coded according to three areas of interest (AOIs) indicating where the toddlers' gaze was directed: 'Face', 'Object', and 'No-stimulus fixation'. The fixation sequence for the ASD and TD groups was modelled with a Markov chain model, yielding transition probabilities between AOIs. Our results indicate that the transitions between AOIs could differentiate toddlers with ASD from those with TD, highlighting different visual exploration patterns between the groups. The sequence of exploration is strictly conditioned on previous fixations, among which 'No-stimulus fixation' has a critical role in differentiating the two groups. Furthermore, our analyses underline the difficulties individuals with ASD have in engaging in stimulus exploration. These results could improve clinical and interventional practice by incorporating this dimension into the evaluation process.
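The first-order Markov model amounts to estimating, from each toddler's ordered fixation sequence, the probability of moving from one AOI to another. A minimal sketch with a hypothetical sequence (not data from the study):

```python
from collections import defaultdict

def transition_probabilities(fixation_sequence):
    """Maximum-likelihood estimate of first-order Markov
    transition probabilities between AOI labels, from an
    ordered sequence of fixated AOIs."""
    counts = defaultdict(lambda: defaultdict(int))
    for src, dst in zip(fixation_sequence, fixation_sequence[1:]):
        counts[src][dst] += 1
    return {
        src: {dst: n / sum(row.values()) for dst, n in row.items()}
        for src, row in counts.items()
    }

sequence = ["Face", "Object", "Face", "Face", "No-stimulus", "Face"]
P = transition_probabilities(sequence)
```

Group-level transition matrices for the ASD and TD samples can then be compared entry by entry.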
Affiliation(s)
- Francesco Masedu
- Department of Applied Clinical Sciences and Biotechnology, University of L’Aquila, 67100 L’Aquila, Italy; (F.M.); (M.C.P.); (M.V.); (M.M.)
- Roberto Vagnetti
- Department of Applied Clinical Sciences and Biotechnology, University of L’Aquila, 67100 L’Aquila, Italy; (F.M.); (M.C.P.); (M.V.); (M.M.)
- Maria Chiara Pino
- Department of Applied Clinical Sciences and Biotechnology, University of L’Aquila, 67100 L’Aquila, Italy; (F.M.); (M.C.P.); (M.V.); (M.M.)
- Regional Reference Centre for Autism of the Abruzzo Region, Local Health Unit ASL 1, 67100 L’Aquila, Italy
- Marco Valenti
- Department of Applied Clinical Sciences and Biotechnology, University of L’Aquila, 67100 L’Aquila, Italy; (F.M.); (M.C.P.); (M.V.); (M.M.)
- Regional Reference Centre for Autism of the Abruzzo Region, Local Health Unit ASL 1, 67100 L’Aquila, Italy
- Monica Mazza
- Department of Applied Clinical Sciences and Biotechnology, University of L’Aquila, 67100 L’Aquila, Italy; (F.M.); (M.C.P.); (M.V.); (M.M.)
- Regional Reference Centre for Autism of the Abruzzo Region, Local Health Unit ASL 1, 67100 L’Aquila, Italy
30
Masulli P, Galazka M, Eberhard D, Johnels JÅ, Gillberg C, Billstedt E, Hadjikhani N, Andersen TS. Data-driven analysis of gaze patterns in face perception: Methodological and clinical contributions. Cortex 2021; 147:9-23. [PMID: 34998084 DOI: 10.1016/j.cortex.2021.11.011]
Abstract
Gaze patterns during face perception have been shown to relate to psychiatric symptoms. Standard analysis of gaze behavior includes calculating fixations within arbitrarily predetermined areas of interest. In contrast to this approach, we present an objective, data-driven method for the analysis of gaze patterns and their relation to diagnostic test scores. This method was applied to data acquired in an adult sample (N = 111) of psychiatry outpatients while they freely looked at images of human faces. Dimensional symptom scores of autism, attention deficit, and depression were collected. A linear regression model based on Principal Component Analysis coefficients computed for each participant was used to model symptom scores. We found that specific components of gaze patterns predicted autistic traits as well as depression symptoms. Gaze patterns shifted away from the eyes with increasing autism traits, a well-known effect. Additionally, the model revealed a lateralization component, with a reduction of the left visual field bias increasing with both autistic traits and depression symptoms independently. Taken together, our model provides a data-driven alternative for gaze data analysis, which can be applied to dimensionally-, rather than categorically-defined clinical subgroups within a variety of contexts. Methodological and clinical contributions of this approach are discussed.
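The pipeline described (per-participant PCA coefficients entering a linear regression on symptom scores) can be sketched generically. This is not the authors' code: the data below are synthetic, and the feature dimension and component count are arbitrary placeholders:

```python
import numpy as np

def pca_scores(X, n_components):
    """Project mean-centred per-participant gaze-pattern vectors
    onto their leading principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(111, 50))       # e.g. flattened per-participant gaze maps
scores = pca_scores(X, 3)            # component scores, shape (111, 3)

# Symptom scores regressed on the component scores (plus intercept):
y = rng.normal(size=111)             # synthetic symptom scores
design = np.c_[np.ones(len(y)), scores]
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
```

Each fitted coefficient then indexes how strongly one gaze-pattern component tracks the symptom dimension.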
Affiliation(s)
- Paolo Masulli
- Department of Applied Mathematics and Computer Science DTU Compute, Section of Cognitive Systems, Technical University of Denmark, Kgs. Lyngby, Denmark; iMotions A/S, Copenhagen V, Denmark
- Martyna Galazka
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden
- David Eberhard
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden.
- Eva Billstedt
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden
- Nouchine Hadjikhani
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden; Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, USA.
- Tobias S Andersen
- Department of Applied Mathematics and Computer Science DTU Compute, Section of Cognitive Systems, Technical University of Denmark, Kgs. Lyngby, Denmark
31
Major S, Isaev D, Grapel J, Calnan T, Tenenbaum E, Carpenter K, Franz L, Howard J, Vermeer S, Sapiro G, Murias M, Dawson G. Shorter average look durations to dynamic social stimuli are associated with higher levels of autism symptoms in young autistic children. Autism 2021; 26:1451-1459. [PMID: 34903084 PMCID: PMC9192829 DOI: 10.1177/13623613211056427]
Abstract
LAY ABSTRACT Many studies of autism examine differences in how autistic research participants look at certain types of images. These studies often focus on where participants are looking within the image, but that does not tell us everything about how much they are paying attention. It could be useful to know more about how well autistic research participants can focus on an image with people in it, because those who can look at images of people for longer durations without stopping may more easily learn other skills that help them interact with people. We measured how long autistic research participants watched a video without breaking their attention. The video sometimes had a person speaking, and at other times had toys moving and making sounds. We measured the typical amount of time participants could look at the video before looking away. We found that participants with more severe autism tended to look at the video for shorter amounts of time. The ability to focus without stopping may be related to social skills in autistic people.
32
Duran N, Atkinson AP. Foveal processing of emotion-informative facial features. PLoS One 2021; 16:e0260814. [PMID: 34855898 PMCID: PMC8638924 DOI: 10.1371/journal.pone.0260814]
Abstract
Certain facial features provide useful information for the recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations that ensured foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye, a cheek, or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combinations of emotions used. There was no consistent evidence that reflexive first saccades targeted emotion-relevant features; instead, they targeted the feature closest to the initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. The duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust recognition accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features contributes to emotion recognition, but such features are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.
Affiliation(s)
- Nazire Duran
- Department of Psychology, Durham University, Durham, United Kingdom
- Anthony P. Atkinson
- Department of Psychology, Durham University, Durham, United Kingdom
33
Holleman GA, Hooge ITC, Huijding J, Deković M, Kemner C, Hessels RS. Gaze and speech behavior in parent–child interactions: The role of conflict and cooperation. Curr Psychol 2021. [DOI: 10.1007/s12144-021-02532-7]
Abstract
A primary mode of human social behavior is face-to-face interaction. In this study, we investigated the characteristics of gaze and its relation to speech behavior during video-mediated face-to-face interactions between parents and their preadolescent children. A total of 81 parent–child dyads engaged in conversations about cooperative and conflictive family topics. We used a dual-eye-tracking setup capable of concurrently recording eye movements, frontal video, and audio from two conversational partners. Our results show that children spoke more in the cooperation scenario whereas parents spoke more in the conflict scenario. Parents gazed slightly more at the eyes of their children in the conflict scenario compared to the cooperation scenario. Both parents and children looked more at the other's mouth region while listening than while speaking. Results are discussed in terms of the roles that parents and children take during cooperative and conflictive interactions and how gaze behavior may support and coordinate such interactions.
34
Dahl M, Tryding M, Heckler A, Nyström M. Quiet Eye and Computerized Precision Tasks in First-Person Shooter Perspective Esport Games. Front Psychol 2021; 12:676591. [PMID: 34819892 PMCID: PMC8606425 DOI: 10.3389/fpsyg.2021.676591]
Abstract
Gaze behavior in sports and other applied settings has been studied for more than 20 years. A common finding is the “quiet eye” (QE) effect: the duration of the last fixation before a critical event is associated with higher performance. Unlike previous studies conducted in applied settings with mobile eye trackers, we investigate the QE in a context similar to esport, in which participants click the mouse to hit targets presented on a computer screen under different levels of cognitive load. Simultaneously, eye and mouse movements were tracked using a high-end remote eye tracker at 300 Hz. Consistent with previous studies, we found that longer QE fixations were associated with higher performance. Increasing the cognitive load delayed the onset of the QE fixation, but had no significant influence on the QE duration. We discuss the implications of our results in the context of how the QE is defined, the quality of the eye-tracker data, and the type of analysis applied to QE data.
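Operationally, the QE is often taken as the duration of the final fixation beginning before the critical event; exact definitions vary (e.g. requiring the fixation to be on the target, or measuring relative to movement onset), which is part of the definitional discussion above. A simplified sketch under that reading, with hypothetical timestamps:

```python
def quiet_eye_duration(fixations, event_time):
    """Duration of the last fixation whose onset precedes the
    critical event (e.g. the mouse click). `fixations` is a
    time-ordered list of (onset_ms, offset_ms) tuples."""
    before = [f for f in fixations if f[0] < event_time]
    if not before:
        return None
    onset, offset = before[-1]
    return offset - onset

fixations = [(0, 180), (200, 450), (470, 900)]
qe = quiet_eye_duration(fixations, event_time=800)  # last onset before the click is 470 ms
```

The QE onset delay reported above would correspond to the `onset` of that final fixation relative to the event.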
Affiliation(s)
- Mats Dahl
- Department of Psychology, Lund University, Lund, Sweden
35
Straka O, Portešová Š, Halámková D, Jabůrek M. Metacognitive monitoring and metacognitive strategies of gifted and average children on dealing with deductive reasoning task. J Eye Mov Res 2021; 14. [PMID: 34729133 PMCID: PMC8559419 DOI: 10.16910/jemr.14.4.1]
Abstract
In this paper, we inquire into possible differences between children with exceptionally high intellectual abilities and their average peers as regards metacognitive monitoring and related metacognitive strategies. The question of whether gifted children surpass their typically developing peers not only in intellectual abilities but also in their level of metacognitive skills has not been convincingly answered so far. We sought to examine indicators of metacognitive behavior by means of eye-tracking technology and to compare these findings with the participants' subjective confidence ratings. Eye-movement data of gifted and average students attending the final grades of primary school (4th and 5th grades) were recorded while they dealt with a deductive reasoning task, and four metrics assumed to bear on metacognitive skills were analyzed: overall trial duration, mean fixation duration, number of regressions, and normalized gaze transition entropy. No significant differences between gifted and average children were found in normalized gaze transition entropy, in mean fixation duration, nor, after controlling for trial duration, in the number of regressions. The two groups differed in the time devoted to solving the task. They also differed significantly in the association between that time and subjective confidence ratings: only the gifted children tended to devote more time when they felt less confident. Several implications of these findings are discussed.
36
Laskowitz S, Griffin JW, Geier CF, Scherf KS. Cracking the Code of Live Human Social Interactions in Autism: A Review of the Eye-Tracking Literature. Proceedings of Machine Learning Research 2021; 173:242-264. [PMID: 36540356 PMCID: PMC9762806]
Abstract
Human social interaction involves a complex, dynamic exchange of verbal and non-verbal information. Over the last decade, eye-tracking technology has afforded unique insight into the way eye gaze information, including both holding gaze and shifting gaze, organizes live human interactions. For example, while playing a social game together, speakers end their turn by directing gaze at the listener, who begins to speak with averted gaze (Ho et al., 2015). These findings reflect how eye gaze can be used to signal important turn-taking transitions in social interactions. Deficits in conversational turn-taking are a core feature of autism spectrum disorders. Individuals on the autism spectrum also have notable difficulties processing eye gaze information (Griffin & Scherf, 2020). A central hypothesis in the literature is that the difficulties in processing eye gaze information are foundational to the social communication deficits that make social interactions so challenging for individuals on the autism spectrum. Although eye-tracking technology has been used extensively to assess the way individuals on the spectrum attend to stimuli presented on computer screens (for review see Papagiannopoulou et al., 2014), it has rarely been used to evaluate the critical question of whether and how autistic individuals process non-verbal social cues from their partners during live social interactions. Here, we review this emerging literature with a focus on characterizing the experimental paradigms and eye-tracking procedures to understand the scope (and limitations) of research questions and findings. We discuss the theoretical implications of the findings from this review and provide recommendations for future work that will be essential to understand whether and how fundamental difficulties in perceiving and processing information about eye gaze cues interfere with social communication skills in autism.
37
Virtual reality facial emotion recognition in social environments: An eye-tracking study. Internet Interv 2021; 25:100432. [PMID: 34401391 PMCID: PMC8350588 DOI: 10.1016/j.invent.2021.100432]
Abstract
BACKGROUND Virtual reality (VR) enables the administration of realistic and dynamic stimuli within a social context for the assessment and training of emotion recognition. We tested a novel VR emotion recognition task by comparing emotion recognition across a VR, video and photo task, investigating covariates of recognition and exploring visual attention in VR. METHODS Healthy individuals (n = 100) completed three emotion recognition tasks; a photo, video and VR task. During the VR task, emotions of virtual characters (avatars) in a VR street environment were rated, and eye-tracking was recorded in VR. RESULTS Recognition accuracy in VR (overall 75%) was comparable to the photo and video task. However, there were some differences; disgust and happiness had lower accuracy rates in VR, and better accuracy was achieved for surprise and anger in VR compared to the video task. Participants spent more time identifying disgust, fear and sadness than surprise and happiness. In general, attention was directed longer to the eye and nose areas than the mouth. DISCUSSION Immersive VR tasks can be used for training and assessment of emotion recognition. VR enables easily controllable avatars within environments relevant for daily life. Validated emotional expressions and tasks will be of relevance for clinical applications.
38
Wang FS, Wolf J, Farshad M, Meboldt M, Lohmeyer Q. Object-Gaze Distance: Quantifying Near-Peripheral Gaze Behavior in Real-World Applications. J Eye Mov Res 2021; 14. [PMID: 34122747 PMCID: PMC8189527 DOI: 10.16910/jemr.14.1.5]
Abstract
Eye tracking (ET) has been shown to reveal the wearer's cognitive processes via measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to take into account the wearer's use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area of interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time series. Based on an evaluation of two AOIs in a real surgical procedure, the results show a considerable increase of interpretable fixation data, from 23.8% to 78.3% for the AOI screw and from 4.5% to 67.2% for the AOI screwdriver, when incorporating the near-peripheral field of vision. Additionally, the evaluation of a multi-OGD time-series representation has shown the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments.
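The core of the OGD metric, the minimal 2D Euclidean pixel distance between the gaze point and a detected AOI, can be sketched as follows (assuming the AOI mask is already available from an upstream detector; the paper's ML-based detection is not reproduced):

```python
import math

def object_gaze_distance(gaze_xy, aoi_pixels):
    """Minimal 2D Euclidean pixel distance from a gaze point to an AOI.

    gaze_xy: (x, y) gaze position in the scene-video frame.
    aoi_pixels: iterable of (x, y) pixels belonging to the detected object,
    e.g. from a segmentation mask. Names are illustrative assumptions."""
    gx, gy = gaze_xy
    return min(math.hypot(gx - px, gy - py) for px, py in aoi_pixels)

aoi = [(10, 10), (11, 10), (10, 11)]
print(object_gaze_distance((14, 14), aoi))  # 5.0 (closest pixel is (11, 10): a 3-4-5 triangle)
print(object_gaze_distance((10, 10), aoi))  # 0.0: gaze lies on the object itself
```

Computed per video frame, this yields the continuous gaze-based time series the abstract describes: 0 while gaze is on the object, growing as gaze moves into the near periphery.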
39
Visual Neuroscience Methods for Marmosets: Efficient Receptive Field Mapping and Head-Free Eye Tracking. eNeuro 2021; 8:ENEURO.0489-20.2021. [PMID: 33863782 PMCID: PMC8143020 DOI: 10.1523/eneuro.0489-20.2021]
Abstract
The marmoset has emerged as a promising primate model system, in particular for visual neuroscience. Many common experimental paradigms rely on head fixation and an extended period of eye fixation during the presentation of salient visual stimuli. Both of these behavioral requirements can be challenging for marmosets. Here, we present two methodological developments, each addressing one of these difficulties. First, we show that it is possible to use a standard eye-tracking system without head fixation to assess visual behavior in the marmoset. Eye-tracking quality from head-free animals is sufficient to obtain precise psychometric functions from a visual acuity task. Second, we introduce a novel method for efficient receptive field (RF) mapping that does not rely on moving stimuli but uses fast flashing annuli and wedges. We present data recorded during head-fixation in areas V1 and V6 and show that RF locations are readily obtained within a short period of recording time. Thus, the methodological advancements presented in this work will contribute to establish the marmoset as a valuable model in neuroscience.
40
Rim NW, Choe KW, Scrivner C, Berman MG. Introducing Point-of-Interest as an alternative to Area-of-Interest for fixation duration analysis. PLoS One 2021; 16:e0250170. [PMID: 33970920 PMCID: PMC8109773 DOI: 10.1371/journal.pone.0250170]
Abstract
Many eye-tracking data analyses rely on the Area-of-Interest (AOI) methodology, which utilizes AOIs to analyze metrics such as fixations. However, AOI-based methods have some inherent limitations including variability and subjectivity in shape, size, and location of AOIs. In this article, we propose an alternative approach to the traditional AOI dwell time analysis: Weighted Sum Durations (WSD). This approach decreases the subjectivity of AOI definitions by using Points-of-Interest (POI) while maintaining interpretability. In WSD, the durations of fixations toward each POI are weighted by the distance from the POI and summed together to generate a metric comparable to AOI dwell time. To validate WSD, we reanalyzed data from a previously published eye-tracking study (n = 90). The reanalysis replicated the original findings that people gaze less toward faces and more toward points of contact when viewing violent social interactions.
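The WSD computation described above can be sketched as follows. The choice of distance-weighting kernel here (a Gaussian with an illustrative sigma) is an assumption for the sketch; the published method defines its own weighting:

```python
import math

def weighted_sum_durations(fixations, poi, sigma=100.0):
    """Weighted Sum Durations (WSD) toward one Point-of-Interest.

    fixations: iterable of (x, y, duration) tuples (screen pixels, ms).
    poi: (x, y) of the point of interest.
    sigma: width (px) of the distance weighting; a Gaussian kernel is used
    here for illustration, and the published weighting may differ."""
    px, py = poi
    total = 0.0
    for x, y, dur in fixations:
        d = math.hypot(x - px, y - py)
        weight = math.exp(-(d * d) / (2 * sigma * sigma))  # 1 at the POI, falls off with distance
        total += weight * dur
    return total

print(weighted_sum_durations([(0, 0, 200), (5000, 0, 200)], (0, 0)))  # 200.0: only the on-POI fixation contributes
```

Because every fixation contributes in proportion to its distance, no hard AOI boundary has to be drawn, which is the subjectivity the method removes.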
Affiliation(s)
- Nak Won Rim
- Masters in Computational Social Science, The University of Chicago, Chicago, Illinois, United States of America
- Kyoung Whan Choe
- Department of Psychology, The University of Chicago, Chicago, Illinois, United States of America
- Mansueto Institute for Urban Innovation, The University of Chicago, Chicago, Illinois, United States of America
- Coltan Scrivner
- Department of Comparative Human Development, The University of Chicago, Chicago, Illinois, United States of America
- Institute for Mind and Biology, The University of Chicago, Chicago, Illinois, United States of America
- Marc G. Berman
- Department of Psychology, The University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, The University of Chicago, Chicago, Illinois, United States of America
41
Alexithymia explains atypical spatiotemporal dynamics of eye gaze in autism. Cognition 2021; 212:104710. [PMID: 33862441 DOI: 10.1016/j.cognition.2021.104710]
Abstract
Recognition of emotional facial expressions is considered to be atypical in autism. This difficulty is thought to be due to the way that facial expressions are visually explored. Evidence for atypical visual exploration of emotional faces in autism is, however, equivocal. We propose that, where observed, atypical visual exploration of emotional facial expressions is due to alexithymia, a distinct but frequently co-occurring condition. In this eye-tracking study we tested the alexithymia hypothesis using a number of recent methodological advances to study eye gaze during several emotion processing tasks (emotion recognition, intensity judgements, free gaze), in 25 adults with, and 45 without, autism. A multilevel polynomial modelling strategy was used to describe the spatiotemporal dynamics of eye gaze to emotional facial expressions. Converging evidence from traditional and novel analysis methods revealed that atypical gaze to the eyes is best predicted by alexithymia in both autistic and non-autistic individuals. Information theoretic analyses also revealed differential effects of task on gaze patterns as a function of alexithymia, but not autism. These findings highlight factors underlying atypical emotion processing in autistic individuals, with wide-ranging implications for emotion research.
42
Eye-tracking glasses in face-to-face interactions: Manual versus automated assessment of areas-of-interest. Behav Res Methods 2021; 53:2037-2048. [PMID: 33742418 PMCID: PMC8516759 DOI: 10.3758/s13428-021-01544-2]
Abstract
The assessment of gaze behaviour is essential for understanding the psychology of communication. Mobile eye-tracking glasses are useful to measure gaze behaviour during dynamic interactions. Eye-tracking data can be analysed by using manually annotated areas-of-interest. Computer vision algorithms may alternatively be used to reduce the amount of manual effort, but also the subjectivity and complexity of these analyses. Using additional re-identification (Re-ID) algorithms, different participants in the interaction can be distinguished. The aim of this study was to compare the results of manual annotation of mobile eye-tracking data with the results of a computer vision algorithm. We selected the first minute of seven randomly selected eye-tracking videos of consultations between physicians and patients in a Dutch Internal Medicine out-patient clinic. Three human annotators and a computer vision algorithm annotated mobile eye-tracking data, after which interrater reliability was assessed between the areas-of-interest annotated by the annotators and the computer vision algorithm. Additionally, we explored interrater reliability when using lengthy videos and different area-of-interest shapes. In total, we analysed more than 65 min of eye-tracking videos manually and with the algorithm. Overall, the absolute normalized difference between the manual and the algorithm annotations of face-gaze was less than 2%. Our results show high interrater agreements between human annotators and the algorithm with Cohen’s kappa ranging from 0.85 to 0.98. We conclude that computer vision algorithms produce comparable results to those of human annotators. Analyses by the algorithm are not subject to annotator fatigue or subjectivity and can therefore advance eye-tracking analyses.
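The reported interrater agreement is Cohen's kappa between parallel AOI annotations; as a reference point, a minimal sketch of that statistic (function and label names are illustrative, not from the paper):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two parallel categorical annotation sequences,
    e.g. per-frame AOI labels from a human annotator vs. an algorithm."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: summed products of the marginal label proportions
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    if expected == 1.0:
        return 1.0  # both raters used a single identical label throughout
    return (observed - expected) / (1 - expected)

print(cohens_kappa(["face", "face", "other"], ["face", "face", "other"]))  # 1.0: perfect agreement
```

Kappa corrects raw percent agreement for agreement expected by chance, which is why it is preferred over simple accuracy for annotation comparisons like this one.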
43
Van der Donck S, Vettori S, Dzhelyova M, Mahdi SS, Claes P, Steyaert J, Boets B. Investigating automatic emotion processing in boys with autism via eye tracking and facial mimicry recordings. Autism Res 2021; 14:1404-1420. [PMID: 33704930 DOI: 10.1002/aur.2490]
Abstract
Difficulties in automatic emotion processing in individuals with autism spectrum disorder (ASD) might remain concealed in behavioral studies due to compensatory strategies. To gain more insight into the mechanisms underlying facial emotion recognition, we recorded eye tracking and facial mimicry data of 20 school-aged boys with ASD and 20 matched typically developing controls while performing an explicit emotion recognition task. Proportional looking times to specific face regions (eyes, nose, and mouth) and face exploration dynamics were analyzed. In addition, facial mimicry was assessed. Boys with ASD and controls were equally capable of recognizing expressions and did not differ in proportional looking times, or in number and duration of fixations. Yet, specific facial expressions elicited particular gaze patterns, especially within the control group. Both groups showed similar face scanning dynamics, although boys with ASD demonstrated smaller saccadic amplitudes. Regarding facial mimicry, we found no emotion-specific facial responses and no group differences in the responses to the displayed facial expressions. Our results indicate that boys with and without ASD employ similar eye gaze strategies to recognize facial expressions. Smaller saccadic amplitudes in boys with ASD might indicate a less exploratory face processing strategy. Yet, this slightly more persistent visual scanning behavior in boys with ASD does not imply less efficient emotion information processing, given the similar behavioral performance. Results on the facial mimicry data indicate similar facial responses to emotional faces in boys with and without ASD. LAY SUMMARY: We investigated (i) whether boys with and without autism apply different face exploration strategies when recognizing facial expressions and (ii) whether they mimic the displayed facial expression to a similar extent.
We found that boys with and without ASD recognize facial expressions equally well, and that both groups show similar facial reactions to the displayed facial emotions. Yet, boys with ASD visually explored the faces slightly less than the boys without ASD.
Affiliation(s)
- Stephanie Van der Donck
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Sofie Vettori
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Milena Dzhelyova
- Institute of Research in Psychological Sciences, Institute of Neuroscience, Université de Louvain, Louvain-La-Neuve, Belgium
- Soha Sadat Mahdi
- Medical Imaging Research Center, MIRC, Leuven, Belgium
- Department of Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Peter Claes
- Medical Imaging Research Center, MIRC, Leuven, Belgium
- Department of Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Department of Human Genetics, KU Leuven, Leuven, Belgium
- Jean Steyaert
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Bart Boets
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
44
Vehlen A, Spenthof I, Tönsing D, Heinrichs M, Domes G. Evaluation of an eye tracking setup for studying visual attention in face-to-face conversations. Sci Rep 2021; 11:2661. [PMID: 33514767 PMCID: PMC7846602 DOI: 10.1038/s41598-021-81987-x]
Abstract
Many eye tracking studies use facial stimuli presented on a display to investigate attentional processing of social stimuli. To introduce a more realistic approach that allows interaction between two real people, we evaluated a new eye tracking setup in three independent studies in terms of data quality, short-term reliability and feasibility. Study 1 measured the robustness, precision and accuracy for calibration stimuli compared to a classical display-based setup. Study 2 used the identical measures with an independent study sample to compare the data quality for a photograph of a face (2D) and the face of the real person (3D). Study 3 evaluated data quality over the course of a real face-to-face conversation and examined the gaze behavior on the facial features of the conversation partner. Study 1 provides evidence that quality indices for the scene-based setup were comparable to those of a classical display-based setup. Average accuracy was better than 0.4° visual angle. Study 2 demonstrates that eye tracking quality is sufficient for 3D stimuli and robust against short interruptions without re-calibration. Study 3 confirms the long-term stability of tracking accuracy during a face-to-face interaction and demonstrates typical gaze patterns for facial features. Thus, the eye tracking setup presented here seems feasible for studying gaze behavior in dyadic face-to-face interactions. Eye tracking data obtained with this setup achieves an accuracy that is sufficient for investigating behavior such as eye contact in social interactions in a range of populations including clinical conditions, such as autism spectrum and social phobia.
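Accuracy figures such as the 0.4° above express on-screen gaze error in degrees of visual angle, which depends on display geometry and viewing distance. A minimal conversion sketch (function and parameter names are illustrative assumptions, not from the paper):

```python
import math

def pixel_error_to_visual_angle(error_px, px_per_cm, viewing_distance_cm):
    """Convert an on-screen gaze error in pixels to degrees of visual angle.

    Uses the standard geometry angle = 2 * atan(size / (2 * distance));
    px_per_cm comes from the monitor's resolution and physical size."""
    error_cm = error_px / px_per_cm
    return math.degrees(2 * math.atan(error_cm / (2 * viewing_distance_cm)))

# A 1 cm error (38 px at 38 px/cm) viewed from 60 cm is just under 1 degree:
print(round(pixel_error_to_visual_angle(38, 38, 60), 3))  # 0.955
```

The same relation applies in reverse when checking whether a reported accuracy (e.g. 0.4°) is small enough to separate facial features at a given viewing distance.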
Affiliation(s)
- Antonia Vehlen
- Department of Biological and Clinical Psychology, University of Trier, Johanniterufer 15, 54290, Trier, Germany
- Ines Spenthof
- Department of Psychology, Laboratory for Biological and Personality Psychology, Albert-Ludwigs-University of Freiburg, Stefan-Meier-Str. 8, 79104, Freiburg, Germany
- Daniel Tönsing
- Department of Psychology, Laboratory for Biological and Personality Psychology, Albert-Ludwigs-University of Freiburg, Stefan-Meier-Str. 8, 79104, Freiburg, Germany
- Markus Heinrichs
- Department of Psychology, Laboratory for Biological and Personality Psychology, Albert-Ludwigs-University of Freiburg, Stefan-Meier-Str. 8, 79104, Freiburg, Germany
- Gregor Domes
- Department of Biological and Clinical Psychology, University of Trier, Johanniterufer 15, 54290, Trier, Germany
45
The Effects of the Content Elements of Online Banner Ads on Visual Attention: Evidence from an Eye-Tracking Study. Future Internet 2021. [DOI: 10.3390/fi13010018]
Abstract
The aim of this paper is to examine the influence of the content elements of online banner ads on customers’ visual attention, and to evaluate the impacts of gender, discount rate and brand familiarity on this issue. An eye-tracking study with 34 participants (18 male and 16 female) was conducted, in which the participants were presented with eight types of online banner ads comprising three content elements—namely brand, discount rate and image—while their eye movements were recorded. The results showed that the image was the most attractive area among the three main content elements. Furthermore, the middle areas of the banners were noticed first, and areas located on the left side were mostly noticed earlier than those on the right side. The results also indicated that the discount areas of banners with higher discount rates were more attractive and eye-catching compared to those of banners with lower discount rates. In addition to these, the participants who were familiar with the brand mostly concentrated on the discount area, while those who were unfamiliar with the brand mostly paid attention to the image area. The findings from this study will assist marketers in creating more effective and efficient online banner ads that appeal to customers, ultimately fostering positive attitudes towards the advertisement.
46
Winter M, Pryss R, Probst T, Reichert M. Applying Eye Movement Modeling Examples to Guide Novices' Attention in the Comprehension of Process Models. Brain Sci 2021; 11:brainsci11010072. [PMID: 33430418 PMCID: PMC7827780 DOI: 10.3390/brainsci11010072]
Abstract
Process models are crucial artifacts in many domains, and hence, their proper comprehension is of importance. Process models mediate a plethora of aspects that need to be comprehended correctly. Novices especially face difficulties in the comprehension of process models, since correct comprehension of such models requires process modeling expertise and visual observation capabilities to interpret these models correctly. Research from other domains has demonstrated that the visual observation capabilities of experts can be conveyed to novices. In order to evaluate the latter in the context of process model comprehension, this paper presents the results from ongoing research, in which gaze data from experts are used as Eye Movement Modeling Examples (EMMEs) to convey visual observation capabilities to novices. Compared to prior results, the application of EMMEs improves process model comprehension significantly for novices; in some cases, novices achieved comprehension performance similar to that of experts. The study's insights highlight the positive effect of EMMEs on fostering the comprehension of process models.
Collapse
Affiliation(s)
- Michael Winter
- Institute of Databases and Information Systems, Ulm University, 89081 Ulm, Germany
- Rüdiger Pryss
- Institute of Clinical Epidemiology and Biometry, University of Würzburg, 97070 Würzburg, Germany
- Thomas Probst
- Department for Psychotherapy and Biopsychological Health, Danube University Krems, 3500 Krems, Austria
- Manfred Reichert
- Institute of Databases and Information Systems, Ulm University, 89081 Ulm, Germany
47
Abstract
There is a long history of interest in looking behavior during human interaction. With the advance of (wearable) video-based eye trackers, it has become possible to measure gaze during many different interactions. We outline the different types of eye-tracking setups that currently exist to investigate gaze during interaction. The setups differ mainly with regard to the nature of the eye-tracking signal (head- or world-centered) and the freedom of movement allowed for the participants. These features place constraints on the research questions that can be answered about human interaction. We end with a decision tree to help researchers judge the appropriateness of specific setups.
48
Stein N. A Comparison of Eye Tracking Latencies Among Several Commercial Head-Mounted Displays. Iperception 2021; 12:2041669520983338. [PMID: 33628410 PMCID: PMC7883159 DOI: 10.1177/2041669520983338]
Abstract
A number of virtual reality head-mounted displays (HMDs) with integrated eye trackers have recently become commercially available. If their eye tracking latency is low and reliable enough for gaze-contingent rendering, this may open up many interesting opportunities for researchers. We measured eye tracking latencies for the Fove-0, the Varjo VR-1, and the High Tech Computer Corporation (HTC) Vive Pro Eye using simultaneous electrooculography measurements. We determined the time from the occurrence of an eye position change to its availability as a data sample from the eye tracker (delay) and the time from an eye position change to the earliest possible change of the display content (latency). For each test and each device, participants performed 60 saccades between two targets 20° of visual angle apart. The targets were continuously visible in the HMD, and the saccades were instructed by an auditory cue. Data collection and eye tracking calibration were done using the recommended scripts for each device in Unity3D. The Vive Pro Eye was recorded twice, once using the SteamVR SDK and once using the Tobii XR SDK. Our results show clear differences between the HMDs. Delays ranged from 15 ms to 52 ms, and the latencies ranged from 45 ms to 81 ms. The Fove-0 appears to be the fastest device and best suited for gaze-contingent rendering.
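The delay measure described above (time from an eye position change to its availability as a tracker sample) can be estimated by pairing saccade onsets detected in the EOG reference with the corresponding onsets in the tracker stream. A minimal sketch under a simplified matching assumption; function and parameter names are not from the paper:

```python
import bisect

def median_tracker_delay(eog_onsets_ms, tracker_onsets_ms):
    """Median per-saccade delay: time from a saccade onset in the EOG
    reference signal to the corresponding onset in the eye-tracker stream.
    Both inputs are sorted timestamps (ms) on a shared clock. Matching each
    EOG onset to the next tracker onset is a simplification of the study's
    procedure."""
    delays = []
    for t in eog_onsets_ms:
        i = bisect.bisect_left(tracker_onsets_ms, t)
        if i < len(tracker_onsets_ms):
            delays.append(tracker_onsets_ms[i] - t)
    if not delays:
        return None
    delays.sort()
    return delays[len(delays) // 2]

print(median_tracker_delay([100, 600, 1100], [118, 652, 1141]))  # 41: per-saccade delays are 18, 52, 41 ms
```

End-to-end latency additionally includes rendering and display scan-out time on top of this tracker delay, which is why the reported latencies (45-81 ms) exceed the delays (15-52 ms).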
Affiliation(s)
- Niklas Stein
- Institute for Psychology, University of Muenster, Muenster, Germany
49
Abstract
Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant's head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs' Pupil in 3D mode, and (iv) Pupil-Labs' Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1° increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial expressions tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of their characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
50
Vettori S, Van der Donck S, Nys J, Moors P, Van Wesemael T, Steyaert J, Rossion B, Dzhelyova M, Boets B. Combined frequency-tagging EEG and eye-tracking measures provide no support for the "excess mouth/diminished eye attention" hypothesis in autism. Mol Autism 2020; 11:94. [PMID: 33228763 PMCID: PMC7686749 DOI: 10.1186/s13229-020-00396-5]
Abstract
BACKGROUND Scanning faces is important for social interactions. Difficulty with the social use of eye contact constitutes one of the clinical symptoms of autism spectrum disorder (ASD). It has been suggested that individuals with ASD look less at the eyes and more at the mouth than typically developing (TD) individuals, possibly due to gaze aversion or gaze indifference. However, eye-tracking evidence for this hypothesis is mixed. While gaze patterns convey information about overt orienting processes, it is unclear how this is manifested at the neural level and how relative covert attention to the eyes and mouth of faces might be affected in ASD. METHODS We used frequency-tagging EEG in combination with eye tracking, while participants watched fast flickering faces for 1-min stimulation sequences. The upper and lower halves of the faces were presented at 6 Hz and 7.5 Hz or vice versa in different stimulation sequences, allowing us to objectively disentangle the neural saliency of the eyes versus mouth region of a perceived face. We tested 21 boys with ASD (8-12 years old) and 21 TD control boys, matched for age and IQ. RESULTS Both groups looked longer at the eyes than the mouth, without any group difference in relative fixation duration to these features. TD boys looked significantly more to the nose, while the ASD boys looked more outside the face. EEG neural saliency data partly followed this pattern: neural responses to the upper or lower face half were not different between groups, but in the TD group, neural responses to the lower face halves were larger than responses to the upper part. Face exploration dynamics showed that TD individuals mostly maintained fixations within the same facial region, whereas individuals with ASD switched more often between the face parts. LIMITATIONS Replication in large and independent samples may be needed to validate exploratory results.
CONCLUSIONS Combined eye-tracking and frequency-tagged neural responses provide no support for the excess mouth/diminished eye gaze hypothesis in ASD. The more exploratory face-scanning style observed in ASD might be related to an increased feature-based face processing style.
Collapse
Affiliation(s)
- Sofie Vettori
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium.
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium.
- Stephanie Van der Donck
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Jannes Nys
- Department of Physics and Astronomy, Ghent University, Ghent, Belgium
- IDLab - Department of Computer Science, University of Antwerp - IMEC, Antwerp, Belgium
- Pieter Moors
- Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Tim Van Wesemael
- Department of Electrical Engineering (ESAT), Stadius Center for Dynamical Systems, Signal Processing and Data Analytics, Leuven, Belgium
- Jean Steyaert
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Bruno Rossion
- Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
- CNRS, CRAN - UMR 7039, Université de Lorraine, 54000, Nancy, France
- CHRU-Nancy, Service de Neurologie, Université de Lorraine, 54000, Nancy, France
- Milena Dzhelyova
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
- Institute of Research in Psychological Science, Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
- Bart Boets
- Center for Developmental Psychiatry, Department of Neurosciences, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Autism Research (LAuRes), University of Leuven (KU Leuven), Leuven, Belgium
Collapse