1
Bast N, Mason L, Ecker C, Baumeister S, Banaschewski T, Jones EJH, Murphy DGM, Buitelaar JK, Loth E, Pandina G, Freitag CM; the EU-AIMS LEAP Group. Sensory salience processing moderates attenuated gazes on faces in autism spectrum disorder: a case-control study. Mol Autism 2023; 14:5. [PMID: 36759875 PMCID: PMC9912590 DOI: 10.1186/s13229-023-00537-6]
Abstract
BACKGROUND Attenuated social attention is a key marker of autism spectrum disorder (ASD). Recent neuroimaging findings also emphasize altered processing of sensory salience in ASD. The locus coeruleus-norepinephrine (LC-NE) system has been established as a modulator of this sensory salience processing (SSP). We tested the hypothesis that altered LC-NE functioning contributes to different SSP and results in diverging social attention in ASD. METHODS We analyzed baseline eye-tracking data of the EU-AIMS Longitudinal European Autism Project (LEAP) for subgroups of autistic participants (ASD; n = 166, age = 6-30 years, IQ = 61-138, gender [female/male] = 41/125) and neurotypically developing participants (TD; n = 166, age = 6-30 years, IQ = 63-138, gender [female/male] = 49/117), matched for demographic variables and data quality. Participants watched brief movie scenes (k = 85) depicting humans in social situations (human) or scenes without humans (non-human). SSP was estimated by gazes on physical and motion salience and by a corresponding pupillary response that indexes phasic activity of the LC-NE system. Social attention was estimated by gazes on faces via manually defined areas of interest. SSP was compared between groups and related to social attention in linear mixed models that account for temporal dynamics within scenes. Models were controlled for comorbid psychopathology, gaze behavior, and luminance. RESULTS We found no group differences in gazes on salience, whereas pupillary responses were associated with altered gazes on physical and motion salience. In ASD compared to TD, pupillary responses were higher for non-human scenes and lower for human scenes. In ASD, gazes on faces were lower across the duration of the scenes. Crucially, this difference in social attention was influenced by gazes on physical salience and moderated by pupillary responses.
LIMITATIONS The naturalistic study design precluded experimental manipulation and stimulus control, and effect sizes were small to moderate. Covariate effects of age and IQ indicate that the findings differ between age and developmental subgroups. CONCLUSIONS Pupillary responses, as a proxy of phasic LC-NE activity during visual attention, are suggested to modulate sensory salience processing and to contribute to attenuated social attention in ASD.
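The AOI-based estimate of social attention described in the methods reduces, computationally, to a point-in-rectangle test per gaze sample. A minimal Python sketch; the AOI coordinates, sample data, and function names are illustrative, not taken from the LEAP pipeline:

```python
def in_aoi(x, y, aoi):
    """Return True if gaze point (x, y) falls inside a rectangular AOI."""
    left, top, right, bottom = aoi
    return left <= x <= right and top <= y <= bottom

def face_gaze_proportion(samples, face_aois):
    """Proportion of valid gaze samples landing on any face AOI.

    samples: list of (x, y) gaze coordinates; None marks lost samples.
    face_aois: list of (left, top, right, bottom) rectangles.
    """
    valid = [s for s in samples if s is not None]
    if not valid:
        return 0.0
    hits = sum(1 for (x, y) in valid
               if any(in_aoi(x, y, aoi) for aoi in face_aois))
    return hits / len(valid)

# Illustrative data: one face AOI and six gaze samples, one of them lost.
face = [(100, 100, 200, 220)]
gaze = [(150, 160), (150, 170), (400, 300), None, (120, 110), (500, 50)]
prop = face_gaze_proportion(gaze, face)  # 3 of 5 valid samples on the face
```

In practice, AOIs would be defined per video frame and the proportion aggregated per scene before entering the mixed models.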
Affiliation(s)
- Nico Bast
- Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Autism Research and Intervention Center of Excellence, University Hospital Frankfurt, Goethe-University, Deutschordenstraße 50, 60528, Frankfurt Am Main, Germany.
- Luke Mason
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, Malet Street, London, UK
- Christine Ecker
- Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Autism Research and Intervention Center of Excellence, University Hospital Frankfurt, Goethe-University, Deutschordenstraße 50, 60528 Frankfurt am Main, Germany
- Sarah Baumeister
- Department of Child and Adolescent Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Tobias Banaschewski
- Department of Child and Adolescent Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Emily J. H. Jones
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, Malet Street, London, UK
- Declan G. M. Murphy
- Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Jan K. Buitelaar
- Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands
- Eva Loth
- Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Gahan Pandina
- Janssen Research & Development, 1125 Trenton Harbourton Road, Titusville, NJ 08560, USA
- Christine M. Freitag
- Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Autism Research and Intervention Center of Excellence, University Hospital Frankfurt, Goethe-University, Deutschordenstraße 50, 60528 Frankfurt am Main, Germany
2
Abstract
Significance: The capacity to sense interoceptive signals is thought to be fundamental to broad functions including, but not limited to, homeostasis and the experience of the self. While neuroanatomical evidence suggests that nonhuman animals (namely, nonhuman primates) may possess features necessary for interoceptive processing in a way that is similar to humans, behavioral evidence of this capacity is slim. We presented macaques with audiovisual stimuli that were either synchronous or asynchronous with their heartbeat and demonstrated that they view asynchronous stimuli, whether faster or slower, for a significantly longer period than they do synchronous stimuli.
3
Friedman L, Hanson T, Komogortsev OV. Multimodality During Fixation - Part II: Evidence for Multimodality in Spatial Precision-Related Distributions and Impact on Precision Estimates. J Eye Mov Res 2021; 14(3). [PMID: 34745443 PMCID: PMC8566061 DOI: 10.16910/jemr.14.3.4]
Abstract
This paper is a follow-on to our earlier paper (7), which focused on the multimodality of angular offsets. This paper applies the same analysis to the measurement of spatial precision. Following the literature, we refer to these measurements as estimates of device precision, although subject characteristics clearly affect the measurements. One typical measure of the spatial precision of an eye-tracking device is the standard deviation (SD) of the position signals (horizontal and vertical) during a fixation. The SD is a highly interpretable measure of spread if the underlying error distribution is unimodal and normal; for an underlying multimodal distribution, however, the SD is much less interpretable. We present evidence that the majority of such distributions are multimodal (68-70% strongly multimodal); only 21-23% of position distributions were unimodal. We present an alternative method for measuring precision that is appropriate for both unimodal and multimodal distributions, and this alternative method produces precision estimates that are substantially smaller than classic measures. We present illustrations of both unimodality and multimodality with either drift or a microsaccade present during fixation. At present, these observations apply only to the EyeLink 1000 and the subjects evaluated herein.
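The core point, that the SD overstates imprecision when the position distribution is multimodal, can be illustrated with a toy simulation; the mode separation, noise level, and sample counts below are illustrative, not the paper's data:

```python
import random
from statistics import stdev

random.seed(1)

# Unimodal case: samples scattered around a single fixation locus.
unimodal = [random.gauss(0.0, 0.05) for _ in range(500)]

# Bimodal case: the eye sits at two loci 0.5 deg apart (e.g., before and
# after a microsaccade), each with the same 0.05 deg within-mode spread.
mode_a = [random.gauss(0.0, 0.05) for _ in range(250)]
mode_b = [random.gauss(0.5, 0.05) for _ in range(250)]
bimodal = mode_a + mode_b

sd_unimodal = stdev(unimodal)                    # near 0.05 deg: interpretable
sd_bimodal = stdev(bimodal)                      # inflated by the mode gap
sd_within = (stdev(mode_a) + stdev(mode_b)) / 2  # per-mode spread stays small
```

The pooled SD of the bimodal trace is dominated by the distance between modes rather than by the spread a user would experience within either mode, which is the motivation for a mode-aware precision estimate.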
4
Hedger N, Chakrabarti B. Autistic differences in the temporal dynamics of social attention. Autism 2021; 25:1615-1626. [PMID: 33706553 PMCID: PMC8323332 DOI: 10.1177/1362361321998573]
Abstract
LAY ABSTRACT One behaviour often observed in individuals with autism is that they tend to look less towards social stimuli relative to neurotypical individuals. For instance, many eye-tracking studies have shown that individuals with autism will look less towards people and more towards objects in scenes. However, we currently know very little about how these behaviours change over time. Tracking these moment-to-moment changes in looking behaviour in individuals with autism can more clearly illustrate how they respond to social stimuli. In this study, adults with and without autism were presented with displays of social and non-social stimuli, while looking behaviours were measured by eye-tracking. We found large differences in how the two groups looked towards social stimuli over time. Neurotypical individuals initially showed a high probability of looking towards social stimuli, then a decline in probability, and a subsequent increase in probability after prolonged viewing. By contrast, individuals with autism showed an initial increase in probability, followed by a continuous decline in probability that did not recover. This pattern of results may indicate that individuals with autism exhibit reduced responsivity to the reward value of social stimuli. Moreover, our data suggest that exploring the temporal nature of gaze behaviours can lead to more precise explanatory theories of attention in autism.
Affiliation(s)
- Nicholas Hedger
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, UK
- Bhismadev Chakrabarti
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, UK
5
Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels. Sensors 2021; 21:4686. [PMID: 34300425 PMCID: PMC8309511 DOI: 10.3390/s21144686]
Abstract
Many gaze data visualization techniques intuitively show eye movements together with the visual stimulus. However, an eye tracker records a large number of eye movements within a short period, so visualizing raw gaze data over the stimulus appears complicated and obscured, making it difficult to gain insight from the visualization. To avoid this complication, fixation identification algorithms are often employed to enable more abstract visualizations. Researchers have typically abstracted gaze data with attention maps and analyzed detailed gaze movement patterns with scanpath visualizations. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing, yet it is difficult to determine how these algorithms affect gaze movement pattern visualizations, and scientists often spend much time manually adjusting the algorithms' parameters. In this paper, we propose a gaze behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and the machine-learning-based behavior model on various visualizations at each abstraction level, such as the attention map, scanpath, and abstract gaze movement visualization.
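The classical identification algorithms named above are short enough to sketch. Below is a minimal dispersion-based (I-DT) implementation in Python; the threshold, minimum-window length, and synthetic gaze trace are illustrative, not values from the paper:

```python
def dispersion(window):
    """Dispersion of a gaze window: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(points, threshold, min_samples):
    """I-DT: grow windows whose dispersion stays under the threshold.

    Returns (start, end) sample-index ranges, end exclusive.
    """
    fixations = []
    i, n = 0, len(points)
    while i + min_samples <= n:
        if dispersion(points[i:i + min_samples]) <= threshold:
            j = i + min_samples
            while j < n and dispersion(points[i:j + 1]) <= threshold:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

# Illustrative trace: one fixation, a 4-sample saccade, a second fixation.
fix_a = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)] * 4             # samples 0-11
saccade = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0), (4.0, 4.0)]   # samples 12-15
fix_b = [(5.0, 5.0), (5.1, 5.0)] * 5                          # samples 16-25
fixations = idt_fixations(fix_a + saccade + fix_b, 1.0, 5)
# Two fixations recovered: samples 0-11 and 16-25.
```

As the abstract notes, the resulting abstraction is sensitive to the threshold and window parameters, which is exactly the tuning burden the proposed behavior-based method aims to remove.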
6
Fló A. Evidence of ordinal position encoding of sequences extracted from continuous speech. Cognition 2021; 213:104646. [PMID: 33707004 DOI: 10.1016/j.cognition.2021.104646]
Abstract
Infants' capacity to extract statistical regularities from sequential information is impressive and well documented. However, the mechanism underlying statistical learning remains mostly unknown, and its role in language acquisition is still under debate. To shed light on these issues, we address the question of which information human subjects extract and encode after familiarisation with a continuous sequence of stimuli, and how this depends on the type of segmentation cues and on stimulus modality. Specifically, we investigate whether adults and 5-month-old infants learn the syllables' co-occurrence in the stream or generate a representation of the Words that includes the syllables' ordinal position. We test whether subtle pauses signalling word boundaries change the encoding and, in adults, whether it varies across modalities. In six behavioural experiments, we show that: (i) adults and infants learn the streams' statistical structure; (ii) ordinal encoding emerges in the auditory modality, and pauses enhance it; however, (iii) ordinal encoding seems to depend on the learning stage and not on pauses marking Words' edges; and, interestingly, (iv) for visual presentation of orthographic syllables, we find no evidence of ordinal encoding in adults. Our results support the emergence, in the auditory modality, of a Word representation whose constituents are associated with an ordinal position already early in life, bringing new insights into speech processing and language acquisition. Additionally, we successfully use pupillometry for the first time in an infant segmentation task.
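The pupillometry measure mentioned in the final sentence typically relies on subtractive baseline correction: each post-stimulus pupil sample is expressed relative to the mean of a short pre-stimulus window. A minimal sketch, with an illustrative window length and sample values that are not the paper's data:

```python
from statistics import mean

def baseline_correct(pupil, baseline_n):
    """Subtract the mean of the first baseline_n samples from every sample.

    pupil: pupil-diameter samples for one trial, baseline samples first.
    Returns the baseline-corrected trace (same length as the input).
    """
    baseline = mean(pupil[:baseline_n])
    return [p - baseline for p in pupil]

# Illustrative trial: 4 baseline samples, then a dilation response (mm).
trial = [3.0, 3.1, 3.0, 2.9, 3.2, 3.5, 3.6, 3.4]
corrected = baseline_correct(trial, baseline_n=4)
response_peak = max(corrected[4:])  # peak evoked dilation relative to baseline
```

Baseline correction makes evoked responses comparable across trials and participants despite large individual differences in absolute pupil size, which matters especially for infant data.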
Affiliation(s)
- Ana Fló
- Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy; Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, Commissariat à l'Energie Atomique et aux énergies alternatives, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France.
7
Bast N, Mason L, Freitag CM, Smith T, Portugal AM, Poustka L, Banaschewski T, Johnson M. Saccade dysmetria indicates attenuated visual exploration in autism spectrum disorder. J Child Psychol Psychiatry 2021; 62:149-159. [PMID: 32449956 DOI: 10.1111/jcpp.13267]
Abstract
BACKGROUND Visual exploration in autism spectrum disorder (ASD) is characterized by attenuated social attention. The underlying oculomotor function during visual exploration is understudied, whereas oculomotor function during restricted viewing has suggested saccade dysmetria in ASD via altered pontocerebellar motor modulation. METHODS Oculomotor function was recorded using remote eye tracking in 142 ASD participants and 142 matched neurotypical controls during free viewing of naturalistic videos with and without human content. The sample was heterogeneous concerning age (6-30 years), cognitive ability (IQ 60-140), and male/female ratio (3:1). Oculomotor function was defined as saccade, fixation, and pupil-dilation features that were compared between groups in linear mixed models. Oculomotor function was also investigated as an ASD classifier, and features were correlated with clinical measures. RESULTS We observed decreased saccade duration (∆M = -0.50, CI [-0.21, -0.78]) and amplitude (∆M = -0.42, CI [-0.12, -0.72]), independent of human video content. We observed null findings for fixation and pupil-dilation features (power = .81). Oculomotor function is a valid ASD classifier, comparable to social attention in discriminative power. Within ASD, saccade features correlated with measures of restricted and repetitive behavior. CONCLUSIONS We conclude that saccade dysmetria is an ASD oculomotor phenotype relevant to visual exploration. Decreased saccade amplitude and duration indicate spatially clustered fixations that attenuate visual exploration and emphasize endogenous over exogenous attention. We propose altered pontocerebellar motor modulation as an underlying mechanism that contributes to atypical (oculo-)motor coordination and attention function in ASD.
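Saccade features such as those compared here (duration, amplitude) are commonly derived with a velocity-threshold classifier (I-VT): consecutive samples whose point-to-point velocity exceeds a threshold are grouped into one saccade. The sketch below is a generic illustration, not the study's pipeline; the sampling rate, threshold, and synthetic trace are assumptions:

```python
import math

def ivt_saccades(samples, fs, velocity_threshold):
    """Group supra-threshold samples into saccades and return their metrics.

    samples: (x, y) gaze positions in degrees; fs: sampling rate in Hz.
    Returns a list of dicts with duration (s) and amplitude (deg).
    """
    dt = 1.0 / fs
    fast = [False]  # the first sample has no velocity estimate
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        velocity = math.hypot(x1 - x0, y1 - y0) / dt  # deg/s
        fast.append(velocity > velocity_threshold)
    saccades, start = [], None
    for i, is_fast in enumerate(fast + [False]):  # sentinel closes a final run
        if is_fast and start is None:
            start = i
        elif not is_fast and start is not None:
            x0, y0 = samples[start - 1]   # position before the saccade
            x1, y1 = samples[i - 1]       # position at the saccade's end
            saccades.append({
                "duration": (i - start) * dt,
                "amplitude": math.hypot(x1 - x0, y1 - y0),
            })
            start = None
    return saccades

# Illustrative 500 Hz trace: fixation at 0 deg, 3-sample saccade to 5 deg.
trace = ([(0.0, 0.0)] * 5
         + [(1.5, 0.0), (3.5, 0.0), (5.0, 0.0)]
         + [(5.0, 0.0)] * 5)
result = ivt_saccades(trace, fs=500, velocity_threshold=100.0)
```

Per-trial features like these would then be averaged per participant and condition before entering the group-level mixed models.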
Affiliation(s)
- Nico Bast
- Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University Hospital, Goethe University Frankfurt am Main, Frankfurt, Germany
- Luke Mason
- Center for Brain and Cognitive Development, Birkbeck College, University of London, London, UK
- Christine M Freitag
- Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University Hospital, Goethe University Frankfurt am Main, Frankfurt, Germany
- Tim Smith
- Center for Brain and Cognitive Development, Birkbeck College, University of London, London, UK
- Ana Maria Portugal
- Center for Brain and Cognitive Development, Birkbeck College, University of London, London, UK
- Luise Poustka
- Department of Child and Adolescent Psychiatry/Psychotherapy, University Medical Center Göttingen, Medical University of Göttingen, Göttingen, Germany
- Tobias Banaschewski
- Department of Child and Adolescent Psychiatry and Psychotherapy, Central Institute of Mental Health, Heidelberg University, Heidelberg, Germany
- Mark Johnson
- Center for Brain and Cognitive Development, Birkbeck College, University of London, London, UK; Department of Psychology, University of Cambridge, Cambridge, UK
8
De Anda S, Friend M. Lexical-Semantic Development in Bilingual Toddlers at 18 and 24 Months. Front Psychol 2020; 11:508363. [PMID: 33391064 PMCID: PMC7773918 DOI: 10.3389/fpsyg.2020.508363]
Abstract
An important question in early bilingual first language acquisition concerns the development of lexical-semantic associations within and across two languages. The present study investigates the earliest emergence of lexical-semantic priming at 18 and 24 months in Spanish-English bilinguals (N = 32) and its relation to vocabulary knowledge within and across languages. Results indicate a remarkably similar pattern of development between monolingual and bilingual children, such that lexical-semantic development begins at 18 months and strengthens by 24 months. Further, measures of cross-language lexical knowledge are stronger predictors of children's lexical-semantic processing skill than measures that capture single-language knowledge only. This suggests that children make use of both languages when processing semantic information. Together these findings inform the understanding of the relation between lexical-semantic breadth and organization in the context of dual language learners in early development.
Affiliation(s)
- Stephanie De Anda
- Department of Special Education and Clinical Sciences, University of Oregon, Eugene, OR, United States
- Margaret Friend
- Department of Psychology, San Diego State University, San Diego, CA, United States
9
Haensel JX, Ishikawa M, Itakura S, Smith TJ, Senju A. Cultural influences on face scanning are consistent across infancy and adulthood. Infant Behav Dev 2020; 61:101503. [PMID: 33190091 PMCID: PMC7768814 DOI: 10.1016/j.infbeh.2020.101503]
Abstract
The emergence of cultural differences in face scanning is thought to be shaped by social experience. However, previous studies mainly investigated eye movements of adults and little is known about early development. The current study recorded eye movements of British and Japanese infants (aged 10 and 16 months) and adults, who were presented with static and dynamic faces on screen. Cultural differences were observed across all age groups, with British participants exhibiting more mouth scanning, and Japanese individuals showing increased central face (nose) scanning for dynamic stimuli. Age-related influences independent of culture were also revealed, with a shift from eye to mouth scanning between 10 and 16 months, while adults distributed their gaze more flexibly. Against our prediction, no age-related increases in cultural differences were observed, suggesting the possibility that cultural differences are largely manifest by 10 months of age. Overall, the findings suggest that individuals adopt visual strategies in line with their cultural background from early in infancy, pointing to the development of a highly adaptive face processing system that is shaped by early sociocultural experience.
Affiliation(s)
- Jennifer X Haensel
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, United Kingdom.
- Mitsuhiko Ishikawa
- Department of Psychology, Graduate School of Letters, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan
- Shoji Itakura
- Center for Baby Science, Doshisha University, 4-1-1 Kizugawadai, Kizugawa, Kyoto, 619-0225, Japan
- Tim J Smith
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, United Kingdom
- Atsushi Senju
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, United Kingdom
10
Hedger N, Chakrabarti B. To covet what we see: Autistic traits modulate the relationship between looking and choosing. Autism Res 2020; 14:289-300. [PMID: 32686920 DOI: 10.1002/aur.2349]
Abstract
Behavioral studies indicate that autistic traits predict reduced gaze toward social stimuli. Moreover, experiments that require participants to make an explicit choice between stimuli indicate reduced preferences for social stimuli in individuals with high autistic traits. These observations, in combination, fit with the idea that gaze is actively involved in the formation of choices: gaze toward a stimulus increases the likelihood of its subsequent selection. Although these aspects of gaze and choice behavior have been well characterized separately, it remains unclear how autistic traits affect the relationship between gaze and socially relevant choices. In a choice-based eye-tracking paradigm, we observed that autistic traits predict less frequent and delayed selection of social stimuli. Critically, eye tracking revealed novel phenomena underlying these choice behaviors. First, the relationship between gaze and choice behavior was weaker in individuals with high autistic traits: an increase in gaze to a stimulus was associated with a smaller increase in choice probability. Second, time-series analyses revealed that gaze became predictive of choice behaviors at longer latencies in observers with high autistic traits. This dissociation between gaze and choice in individuals with high autistic traits may reflect wider atypicalities in value coding. Such atypicalities may predict the development of atypical social behaviors associated with the autism phenotype. LAY SUMMARY When presented with multiple stimuli to choose from, we tend to look more toward the stimuli we later choose. Here, we found that this relationship between looking and choosing was reduced in individuals with high autistic traits. These data indicate that autistic traits may be associated with atypical processing of value, which may contribute to the reduced preferences for social stimuli exhibited by individuals with autism.
Affiliation(s)
- Nicholas Hedger
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Bhismadev Chakrabarti
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
11
Carter BT, Luke SG. Best practices in eye tracking research. Int J Psychophysiol 2020; 155:49-62. [PMID: 32504653 DOI: 10.1016/j.ijpsycho.2020.05.010]
Abstract
This guide describes best practices in using eye tracking technology for research in a variety of disciplines. A basic outline of the anatomy and physiology of the eyes and of eye movements is provided, along with a description of the sorts of research questions eye tracking can address. We then explain how eye tracking technology works and what sorts of data it generates, and provide guidance on how to select and use an eye tracker as well as selecting appropriate eye tracking measures. Challenges to the validity of eye tracking studies are described, along with recommendations for overcoming these challenges. We then outline correct reporting standards for eye tracking studies.
12
Hendry A, Johnson MH, Holmboe K. Early Development of Visual Attention: Change, Stability, and Longitudinal Associations. Annu Rev Dev Psychol 2019. [DOI: 10.1146/annurev-devpsych-121318-085114]
Abstract
Visual attention is a basic mechanism of information gathering and environment selection and consequently plays a fundamental role in influencing developmental trajectories. Here, we highlight evidence for predictive associations from early visual attention to emotion regulation, executive function, language and broader cognitive ability, mathematics and literacy skills, and neurodevelopmental conditions. Development of visual attention is also multifaceted and nonlinear. In daily life, core functions such as orienting, selective filtering, and processing of visual inputs are intertwined and influenced by many other cognitive components. Furthermore, the demands of an attention task vary according to the experience, motivation, and cognitive and physical constraints of participants, while the mechanisms underlying performance may change with development. Thus, markers of attention may need to be interpreted differently across development and between populations. We summarize research that has combined multiple measurements and techniques to further our understanding of visual attention development and highlight possibilities for the future.
Collapse
Affiliation(s)
- Alexandra Hendry
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom
- Mark H. Johnson
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Karla Holmboe
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom
13
Hessels RS, Hooge ITC. Eye tracking in developmental cognitive neuroscience - The good, the bad and the ugly. Dev Cogn Neurosci 2019; 40:100710. [PMID: 31593909 PMCID: PMC6974897 DOI: 10.1016/j.dcn.2019.100710]
Abstract
Eye tracking is a popular research tool in developmental cognitive neuroscience for studying the development of perceptual and cognitive processes. However, eye tracking in the context of development is also challenging. In this paper, we ask how knowledge on eye-tracking data quality can be used to improve eye-tracking recordings and analyses in longitudinal research so that valid conclusions about child development may be drawn. We answer this question by adopting the data-quality perspective and surveying the eye-tracking setup, training protocols, and data analysis of the YOUth study (investigating neurocognitive development of 6000 children). We first show how our eye-tracking setup has been optimized for recording high-quality eye-tracking data. Second, we show that eye-tracking data quality can be operator-dependent even after a thorough training protocol. Finally, we report distributions of eye-tracking data quality measures for four age groups (5 months, 10 months, 3 years, and 9 years), based on 1531 recordings. We end with advice for (prospective) developmental eye-tracking researchers and generalizations to other methodologies.
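The data-quality measures surveyed here are usually operationalized as three per-recording numbers: accuracy (mean offset from a known target), precision (often the RMS of sample-to-sample distances), and data loss (proportion of invalid samples). A minimal sketch with illustrative values; this is not the YOUth study's actual operationalization:

```python
import math

def accuracy(samples, target):
    """Mean offset of valid gaze samples from a known validation target."""
    valid = [s for s in samples if s is not None]
    return sum(math.hypot(x - target[0], y - target[1])
               for x, y in valid) / len(valid)

def rms_s2s_precision(samples):
    """Root-mean-square of sample-to-sample distances (valid pairs only)."""
    pairs = [(a, b) for a, b in zip(samples, samples[1:])
             if a is not None and b is not None]
    sq = [(b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2 for a, b in pairs]
    return math.sqrt(sum(sq) / len(sq))

def data_loss(samples):
    """Proportion of samples flagged invalid by the tracker."""
    return sum(1 for s in samples if s is None) / len(samples)

# Illustrative validation segment (deg): target at (0, 0), one lost sample.
seg = [(0.1, 0.0), (0.1, 0.1), None, (0.2, 0.0), (0.2, 0.1)]
loss = data_loss(seg)
acc = accuracy(seg, (0.0, 0.0))
```

Reporting these three numbers per recording is what makes cross-operator and cross-age-group comparisons like those in the paper possible.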
Affiliation(s)
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Developmental Psychology, Utrecht University, Utrecht, The Netherlands.
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
14
Liu J, Xue L. Visual Development of Chinese Children, Studied with Eye-Tracking Technology. Visual Anthropology 2019. [DOI: 10.1080/08949468.2019.1603033]
15
Abstract
Eye-trackers are a popular tool for studying cognitive, emotional, and attentional processes in different populations (e.g., clinical and typically developing) and participants of all ages, ranging from infants to the elderly. This broad range of processes and populations implies that there are many inter- and intra-individual differences that need to be taken into account when analyzing eye-tracking data. Standard parsing algorithms supplied by the eye-tracker manufacturers are typically optimized for adults and do not account for these individual differences. This paper presents gazepath, an easy-to-use R-package that comes with a graphical user interface (GUI) implemented in Shiny (RStudio Inc 2015). The gazepath R-package combines solutions from the adult and infant literature to provide an eye-tracking parsing method that accounts for individual differences and differences in data quality. We illustrate the usefulness of gazepath with three examples of different data sets. The first example shows how gazepath performs on free-viewing data of infants and adults, compared to standard EyeLink parsing. We show that gazepath controls for spurious correlations between fixation durations and data quality in infant data. The second example shows that gazepath performs well in high-quality reading data of adults. The third and last example shows that gazepath can also be used on noisy infant data collected with a Tobii eye-tracker and low (60 Hz) sampling rate.
16
Slone LK, Abney DH, Borjon JI, Chen CH, Franchak JM, Pearcy D, Suarez-Rivera C, Xu TL, Zhang Y, Smith LB, Yu C. Gaze in Action: Head-mounted Eye Tracking of Children's Dynamic Visual Attention During Naturalistic Behavior. J Vis Exp 2018. [PMID: 30507907 DOI: 10.3791/58496]
Abstract
Young children's visual environments are dynamic, changing moment-by-moment as children physically and visually explore spaces and objects and interact with people around them. Head-mounted eye tracking offers a unique opportunity to capture children's dynamic egocentric views and how they allocate visual attention within those views. This protocol provides guiding principles and practical recommendations for researchers using head-mounted eye trackers in both laboratory and more naturalistic settings. Head-mounted eye tracking complements other experimental methods by enhancing opportunities for data collection in more ecologically valid contexts, through increased portability and freedom of head and body movements compared to screen-based eye tracking. This protocol can also be integrated with other technologies, such as motion tracking and heart-rate monitoring, to provide a higher-density multimodal dataset for examining natural behavior, learning, and development than was previously possible. This paper illustrates the types of data generated from head-mounted eye tracking in a study designed to investigate visual attention in one natural context for toddlers: free-flowing toy play with a parent. Successful use of this protocol will allow researchers to collect data that can be used to answer questions not only about visual attention, but also about a broad range of other perceptual, cognitive, and social skills and their development.
Affiliation(s)
- Lauren K Slone
- Department of Psychological and Brain Sciences, Indiana University
- Drew H Abney
- Department of Psychological and Brain Sciences, Indiana University
- Jeremy I Borjon
- Department of Psychological and Brain Sciences, Indiana University
- Chi-Hsin Chen
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University
- John M Franchak
- Department of Psychology, University of California, Riverside
- Daniel Pearcy
- Department of Psychological and Brain Sciences, Indiana University
- Tian Linger Xu
- Department of Psychological and Brain Sciences, Indiana University
- Yayun Zhang
- Department of Psychological and Brain Sciences, Indiana University
- Linda B Smith
- Department of Psychological and Brain Sciences, Indiana University
- Chen Yu
- Department of Psychological and Brain Sciences, Indiana University
17
Abstract
The present study evaluates the quality of gaze data produced by a low-cost eye tracker (The Eye Tribe©, The Eye Tribe, Copenhagen, Denmark) in order to verify its suitability for scientific research. An integrated methodological framework, based on artificial-eye measurements and human eye-tracking data, is proposed for implementing the experimental process. The obtained results are used to remove the modeled noise through manual filtering and during fixation detection. The outcomes aim to serve as a robust reference for verifying the validity of low-cost solutions, as well as a guide for selecting appropriate fixation parameters for analyzing experimental data from this low-cost device. The results show higher deviation values for the real test persons than for the artificial eyes, but these are still acceptable for use in a scientific setting.
18
Havy M, Zesiger P. Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children. Front Psychol 2017; 8:2122. [PMID: 29276493 PMCID: PMC5727082 DOI: 10.3389/fpsyg.2017.02122]
Abstract
From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new words. Here, we ask whether 30-month-old children are able to learn new words based solely on visible speech information, and whether information from both auditory and visual modalities is available after learning in only one modality. To test this, children were taught new lexical mappings. One group of children experienced the words in the auditory modality (i.e., acoustic form of the word with no accompanying face). Another group experienced the words in the visual modality (seeing a silent talking face). Lexical recognition was tested in either the learning modality or in the other modality. Results revealed successful word learning in either modality. Results further showed cross-modal recognition following an auditory-only, but not a visual-only, experience of the words. Together, these findings suggest that visible speech becomes increasingly informative for the purpose of lexical learning, but that an auditory-only experience evokes a cross-modal representation of the words.
Affiliation(s)
- Mélanie Havy
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
19
Hessels RS, Niehorster DC, Kemner C, Hooge ITC. Noise-robust fixation detection in eye movement data: Identification by two-means clustering (I2MC). Behav Res Methods 2017; 49:1802-1823. [PMID: 27800582 PMCID: PMC5628191 DOI: 10.3758/s13428-016-0822-1]
Abstract
Eye-tracking research in infants and older children has gained a lot of momentum over the last decades. Although eye-tracking research in these participant groups has become easier with the advance of the remote eye-tracker, this often comes at the cost of poorer data quality than in research with well-trained adults (Hessels, Andersson, Hooge, Nyström, & Kemner Infancy, 20, 601-633, 2015; Wass, Forssman, & Leppänen Infancy, 19, 427-460, 2014). Current fixation detection algorithms are not built for data from infants and young children. As a result, some researchers have even turned to hand correction of fixation detections (Saez de Urabain, Johnson, & Smith Behavior Research Methods, 47, 53-72, 2015). Here we introduce a fixation detection algorithm, identification by two-means clustering (I2MC), built specifically for data across a wide range of noise levels and when periods of data loss may occur. We evaluated the I2MC algorithm against seven state-of-the-art event detection algorithms, and report that the I2MC algorithm's output is the most robust to high noise and data loss levels. The algorithm is automatic, works offline, and is suitable for eye-tracking data recorded with remote or tower-mounted eye-trackers using static stimuli. In addition to application of the I2MC algorithm in eye-tracking research with infants, school children, and certain patient groups, the I2MC algorithm also may be useful when the noise and data loss levels are markedly different between trials, participants, or time points (e.g., longitudinal research).
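The core two-means idea can be sketched as follows. This is a hypothetical, heavily simplified Python illustration of clustering-based saccade weighting, not the published I2MC implementation (which additionally interpolates gaps, combines multiple downsampled signals, and applies further thresholding):

```python
import numpy as np

def two_means(seg, iters=10):
    """Minimal 2-means clustering of 2-D gaze samples."""
    c = seg[[0, -1]].astype(float)  # seed with first and last sample
    labels = np.zeros(len(seg), dtype=int)
    for _ in range(iters):
        d0 = np.linalg.norm(seg - c[0], axis=1)
        d1 = np.linalg.norm(seg - c[1], axis=1)
        labels = (d1 < d0).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = seg[labels == k].mean(axis=0)
    return labels

def saccade_weights(x, y, fs=300, win_ms=200):
    """Per-sample saccade weights from 2-means cluster transitions
    in half-overlapping windows; weight peaks suggest fixation
    boundaries even when the signal is noisy."""
    pts = np.column_stack([x, y]).astype(float)
    n, win = len(pts), int(win_ms / 1000 * fs)
    weights, counts = np.zeros(n), np.zeros(n)
    for start in range(0, n - win + 1, win // 2):
        labels = two_means(pts[start:start + win])
        # samples where the cluster label flips mark candidate saccades
        trans = np.flatnonzero(np.diff(labels) != 0) + 1
        weights[start + trans] += 1.0
        counts[start:start + win] += 1.0
    return weights / np.maximum(counts, 1.0)
```

Averaging transition evidence over overlapping windows is what makes this family of methods robust: a single noisy sample rarely produces a consistent cluster split across windows, whereas a true saccade does.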
Affiliation(s)
- Roy S Hessels
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Department of Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Humanities Laboratory and Department of Psychology, Lund University, Lund, Sweden
- Institute for Psychology, University of Muenster, Muenster, Germany
- Chantal Kemner
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Department of Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Brain Center Rudolf Magnus, University Medical Centre Utrecht, Utrecht, The Netherlands
- Ignace T C Hooge
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
20
Saez de Urabain IR, Nuthmann A, Johnson MH, Smith TJ. Disentangling the mechanisms underlying infant fixation durations in scene perception: A computational account. Vision Res 2017; 134:43-59. [PMID: 28159609 DOI: 10.1016/j.visres.2016.10.015]
Abstract
The goal of this article is to investigate the unexplored mechanisms underlying the development of saccadic control in infancy by determining the generalizability and potential limitations of extending the CRISP theoretical framework and computational model of fixation durations (FDs) in adult scene-viewing to infants. The CRISP model was used to investigate the underlying mechanisms modulating FDs in 6-month-olds by applying the model to empirical eye-movement data gathered from groups of infants and adults during free-viewing of naturalistic and semi-naturalistic videos. Participants also performed a gap-overlap task to measure their disengagement abilities. Results confirmed the CRISP model's applicability to infant data. Specifically, model simulations support the view that infant saccade programming is completed in two stages: an initial labile stage, followed by a non-labile stage. Moreover, results from the empirical data and simulation studies highlighted the influence of the material viewed on the FD distributions in infants and adults, as well as the impact that the developmental state of the oculomotor system can have on saccade programming and execution at 6 months. The present work suggests that infant FDs reflect on-line perceptual and cognitive activity in a similar way to adults, but that the individual developmental state of the oculomotor system affects this relationship at 6 months. Furthermore, computational modeling filled the gaps of psychophysical studies and allowed the effects of these two factors on FDs to be simulated in infant data, providing greater insights into the development of oculomotor and attentional control than can be gained from behavioral results alone.
Affiliation(s)
- Antje Nuthmann
- School of Philosophy, Psychology and Language Sciences, Psychology Department, University of Edinburgh, UK
- Mark H Johnson
- Centre for Brain and Cognitive Development, Birkbeck, University of London, UK
- Tim J Smith
- Centre for Brain and Cognitive Development, Birkbeck, University of London, UK
21
Abstract
Recent years have witnessed a remarkable growth in the way mathematics, informatics, and computer science can process data. In disciplines such as machine learning, pattern recognition, computer vision, computational neurology, molecular biology, information retrieval, etc., many new methods have been developed to cope with the ever-increasing amount and complexity of the data. These new methods offer interesting possibilities for processing, classifying, and interpreting eye-tracking data. The present paper exemplifies the application of topological arguments to improve the evaluation of eye-tracking data. The task of classifying raw eye-tracking data into saccades and fixations, with a single, simple, and intuitive argument, described as coherence of spacetime, is discussed, and the hierarchical ordering of the fixations into dwells is shown. The method, namely identification by topological characteristics (ITop), is parameter-free and needs no pre-processing or post-processing of the raw data. The general and robust topological argument is easy to expand to complex settings of higher visual tasks, making it possible to identify visual strategies.
Affiliation(s)
- Oliver Hein
- Neurological University Clinic Hamburg UKE, Germany
22
Müller N, Baumeister S, Dziobek I, Banaschewski T, Poustka L. Validation of the Movie for the Assessment of Social Cognition in Adolescents with ASD: Fixation Duration and Pupil Dilation as Predictors of Performance. J Autism Dev Disord 2016; 46:2831-44. [DOI: 10.1007/s10803-016-2828-z]
23
Galeazzi JM, Navajas J, Mender BMW, Quian Quiroga R, Minini L, Stringer SM. The visual development of hand-centered receptive fields in a neural network model of the primate visual system trained with experimentally recorded human gaze changes. Network (Bristol, England) 2016; 27:29-51. [PMID: 27253452 PMCID: PMC4926791 DOI: 10.1080/0954898x.2016.1187311]
Abstract
Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant's gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views.
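The trace learning mechanism the abstract describes can be sketched in its standard form: the postsynaptic activity trace blends the current response with its own recent past, so inputs that occur close together in time strengthen the same output weights. This is a generic, hypothetical illustration of a trace rule, not code from the VisNet model, and the learning-rate and decay values are arbitrary:

```python
import numpy as np

def trace_learning_step(w, x, y_trace_prev, eta=0.05, delta=0.8):
    """One trace-rule update on weight matrix w (outputs x inputs).

    y_trace mixes current activity with its own past (decay delta),
    so temporally adjacent inputs are bound onto the same outputs.
    """
    y = w @ x                            # postsynaptic activation
    y_trace = (1 - delta) * y + delta * y_trace_prev
    w += eta * np.outer(y_trace, x)      # Hebbian update on the trace
    return w, y_trace
```

Presenting two different input patterns in quick succession leaves the unit driven by the first pattern with nonzero weights onto the second pattern's inputs, which is the temporal-binding effect the abstract relies on.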
Affiliation(s)
- Juan M. Galeazzi
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxford, UK
- Joaquín Navajas
- Institute of Cognitive Neuroscience, University College London, London, UK
- Centre for Systems Neuroscience, University of Leicester, Leicester, UK
- Bedeho M. W. Mender
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxford, UK
- Loredana Minini
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxford, UK
- Simon M. Stringer
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxford, UK
24
Qualitative tests of remote eyetracker recovery and performance during head rotation. Behav Res Methods 2016; 47:848-59. [PMID: 25033759 DOI: 10.3758/s13428-014-0507-6]
Abstract
What are the decision criteria for choosing an eyetracker? Often the choice is based on specifications by the manufacturer of the validity (accuracy) and reliability (precision) of measurements that can be achieved using a particular eyetracker. These specifications are mostly achieved under optimal conditions, for example by using an artificial eye or trained participants fixed in a chinrest. Research, however, does not always take place in optimal conditions: For instance, when investigating eye movements in infants, school children, and patient groups with disorders such as attention-deficit hyperactivity disorder, it is practically impossible to restrict movement. We modeled movements often seen in infant research in two behaviors: (1) looking away from and back to the screen, to investigate eyetracker recovery, and (2) head orientations, to investigate eyetracker performance with nonoptimal orientations of the eyes. We investigated how eight eyetracking setups by three manufacturers (SMI, Tobii, and LC Technologies) coped with these modeled behaviors in adults. We report that the tested SMI eyetrackers dropped in sampling frequency when the eyes were not visible to the eyetracker, whereas the other systems did not, and discuss the potential consequences thereof. Furthermore, we report that the tested eyetrackers varied in their rates of data loss and systematic offsets during shifted head orientations. We conclude that (prospective) eye-movement researchers who cannot restrict movement or nonoptimal head orientations in their participants might benefit from testing their eyetracker in nonoptimal conditions. Additionally, researchers should be aware of the data loss and inaccuracies that might result from nonoptimal head orientations.
25
Papageorgiou KA, Smith TJ, Wu R, Johnson MH, Kirkham NZ, Ronald A. Individual Differences in Infant Fixation Duration Relate to Attention and Behavioral Control in Childhood. Psychol Sci 2014; 25:1371-9. [DOI: 10.1177/0956797614531295]
Abstract
Individual differences in fixation duration are considered a reliable measure of attentional control in adults. However, the degree to which individual differences in fixation duration in infancy (0–12 months) relate to temperament and behavior in childhood is largely unknown. In the present study, data were examined from 120 infants (mean age = 7.69 months, SD = 1.90) who previously participated in an eye-tracking study. At follow-up, parents completed age-appropriate questionnaires about their child’s temperament and behavior (mean age of children = 41.59 months, SD = 9.83). Mean fixation duration in infancy was positively associated with effortful control (β = 0.20, R2 = .02, p = .04) and negatively with surgency (β = −0.37, R2 = .07, p = .003) and hyperactivity-inattention (β = −0.35, R2 = .06, p = .005) in childhood. These findings suggest that individual differences in mean fixation duration in infancy are linked to attentional and behavioral control in childhood.
Affiliation(s)
- Kostas A. Papageorgiou
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London
- Tim J. Smith
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London
- Rachel Wu
- Brain and Cognitive Sciences, University of Rochester
- Mark H. Johnson
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London
- Natasha Z. Kirkham
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London
- Angelica Ronald
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London