1
Lazar R, Degen J, Fiechter AS, Monticelli A, Spitschan M. Regulation of pupil size in natural vision across the human lifespan. R Soc Open Sci 2024; 11:191613. [PMID: 39100191] [PMCID: PMC11295891] [DOI: 10.1098/rsos.191613]
Abstract
Vision is mediated by light passing through the pupil, which changes in diameter from approximately 2 to 8 mm between bright and dark illumination. With age, mean pupil size declines. In laboratory experiments, factors affecting pupil size can be experimentally controlled. How the pupil reflects the change in retinal input from the visual environment under natural viewing conditions is unclear. We address this question in a field experiment (N = 83, 43 female, 18-87 years) using a custom-made wearable video-based eye tracker with a spectroradiometer measuring near-corneal spectral irradiance. Participants moved in and between indoor and outdoor environments varying in spectrum and engaged in a range of everyday tasks. Our data confirm that light-adapted pupil size is determined by light level, with a better model fit of melanopic over photopic units, and that it decreased with increasing age, yielding steeper slopes at lower light levels. We found no indication that sex, iris colour or reported caffeine consumption affects pupil size. Our exploratory results point to a role of photoreceptor integration in controlling steady-state pupil size. The data provide evidence for considering age in personalized lighting solutions and against the use of photopic illuminance alone to assess the impact of real-world lighting conditions.
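A minimal sketch of the kind of relationship the abstract describes, assuming per-sample pupil diameter, melanopic illuminance, and age are available in a table. The column names, input file, and the simple OLS model are hypothetical; the study's own analysis is more elaborate.

```python
# Illustrative sketch only: relate pupil diameter to log10 melanopic
# illuminance and age, with an interaction term so the age effect can differ
# across light levels. Column names and the CSV file are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("field_pupil_samples.csv")                # hypothetical input
df["log_mEDI"] = np.log10(df["mEDI_lux"].clip(lower=0.1))  # avoid log of zero

# Plain OLS for illustration; repeated measures per participant would call
# for a mixed-effects model in a real analysis.
model = smf.ols("pupil_mm ~ log_mEDI * age_years", data=df).fit()
print(model.summary())
```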
Affiliation(s)
- Rafael Lazar
- Centre for Chronobiology, Psychiatric Hospital of the University of Basel, Switzerland
- Research Cluster Molecular and Cognitive Neurosciences, University of Basel, Switzerland
- Department of Biomedicine, University of Basel, Switzerland
- Josefine Degen
- Centre for Chronobiology, Psychiatric Hospital of the University of Basel, Switzerland
- Ann-Sophie Fiechter
- Centre for Chronobiology, Psychiatric Hospital of the University of Basel, Switzerland
- Aurora Monticelli
- Centre for Chronobiology, Psychiatric Hospital of the University of Basel, Switzerland
- Manuel Spitschan
- Max Planck Institute for Biological Cybernetics, Translational Sensory & Circadian Neuroscience, Tübingen, Germany
- TUM School of Medicine & Health, Chronobiology & Health, Technical University of Munich, Munich, Germany
- TUM Institute for Advanced Study (TUM-IAS), Technical University of Munich, Garching, Germany
2
Niehorster DC, Hessels RS, Benjamins JS, Nyström M, Hooge ITC. GlassesValidator: A data quality tool for eye tracking glasses. Behav Res Methods 2024; 56:1476-1484. [PMID: 37326770] [PMCID: PMC10991001] [DOI: 10.3758/s13428-023-02105-5]
Abstract
According to the proposal for a minimum reporting guideline for an eye tracking study by Holmqvist et al. (2022), the accuracy (in degrees) of eye tracking data should be reported. Currently, there is no easy way to determine accuracy for wearable eye tracking recordings. To enable determining the accuracy quickly and easily, we have produced a simple validation procedure using a printable poster and accompanying Python software. We tested the poster and procedure with 61 participants using one wearable eye tracker. In addition, the software was tested with six different wearable eye trackers. We found that the validation procedure can be administered within a minute per participant and provides measures of accuracy and precision. Calculating the eye-tracking data quality measures can be done offline on a simple computer and requires no advanced computer skills.
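As a rough illustration of the two data-quality measures mentioned above (not GlassesValidator's own code), accuracy can be taken as the mean angular offset from a fixated validation target and precision as the RMS of sample-to-sample angular distances:

```python
# Hedged sketch: accuracy and RMS precision from gaze azimuth/elevation
# samples (in degrees) recorded while a known validation target was fixated.
import numpy as np

def angular_distance(az1, el1, az2, el2):
    """Great-circle angle in degrees between two gaze directions."""
    a1, e1 = np.radians(az1), np.radians(el1)
    a2, e2 = np.radians(az2), np.radians(el2)
    cos_d = np.sin(e1) * np.sin(e2) + np.cos(e1) * np.cos(e2) * np.cos(a1 - a2)
    return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

def accuracy(gaze_az, gaze_el, target_az, target_el):
    """Mean angular offset between gaze samples and the target position."""
    return np.mean(angular_distance(gaze_az, gaze_el, target_az, target_el))

def precision_rms(gaze_az, gaze_el):
    """RMS of successive sample-to-sample angular distances."""
    d = angular_distance(gaze_az[:-1], gaze_el[:-1], gaze_az[1:], gaze_el[1:])
    return np.sqrt(np.mean(d ** 2))
```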
Affiliation(s)
- Diederick C Niehorster
- Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden.
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Jeroen S Benjamins
- Experimental Psychology, Helmholtz Institute & Social, Health and Organisational Psychology, Utrecht University, Utrecht, Netherlands
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
3
Takahashi M, Veale R. Pathways for Naturalistic Looking Behavior in Primate I: Behavioral Characteristics and Brainstem Circuits. Neuroscience 2023; 532:133-163. [PMID: 37776945] [DOI: 10.1016/j.neuroscience.2023.09.009]
Abstract
Organisms control their visual worlds by moving their eyes, heads, and bodies. This control of "gaze" or "looking" is key to survival and intelligence, but our investigation of the underlying neural mechanisms in natural conditions is hindered by technical limitations. Recent advances have enabled measurement of both brain and behavior in freely moving animals in complex environments, expanding on historical head-fixed laboratory investigations. We juxtapose looking behavior as traditionally measured in the laboratory against looking behavior in naturalistic conditions, finding that behavior changes when animals are free to move or when stimuli have depth or sound. We specifically focus on the brainstem circuits driving gaze shifts and gaze stabilization. The overarching goal of this review is to reconcile historical understanding of the differential neural circuits for different "classes" of gaze shift with two inconvenient truths: (1) "classes" of gaze behavior are artificial, and (2) the neural circuits historically identified to control each "class" of behavior do not operate in isolation during natural behavior. Instead, multiple pathways combine adaptively and non-linearly depending on individual experience. While the neural circuits for reflexive and voluntary gaze behaviors traverse somewhat independent brainstem and spinal cord circuits, both can be modulated by feedback, meaning that most gaze behaviors are learned rather than hardcoded. Despite this flexibility, there are broadly enumerable neural pathways commonly adopted among primate gaze systems. Parallel pathways carrying simultaneous evolutionary and homeostatic drives converge in the superior colliculus, a layered midbrain structure which integrates and relays these volitional signals to brainstem gaze-control circuits.
Affiliation(s)
- Mayu Takahashi
- Department of Systems Neurophysiology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Japan
- Richard Veale
- Department of Neurobiology, Graduate School of Medicine, Kyoto University, Japan
4
Deane O, Toth E, Yeo SH. Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data. Behav Res Methods 2023; 55:1372-1391. [PMID: 35650384] [PMCID: PMC10126076] [DOI: 10.3758/s13428-022-01833-4]
Abstract
With continued advancements in portable eye-tracker technology liberating experimenters from the restraints of artificial laboratory designs, research can now collect gaze data from real-world, natural navigation. However, the field lacks a robust method for achieving this, as past approaches relied upon the time-consuming manual annotation of eye-tracking data, while previous attempts at automation lack the necessary versatility for in-the-wild navigation trials consisting of complex and dynamic scenes. Here, we propose a system capable of informing researchers of where and what a user's gaze is focused upon at any one time. The system achieves this by first running footage recorded on a head-mounted camera through a deep-learning-based object detection algorithm called Masked Region-based Convolutional Neural Network (Mask R-CNN). The algorithm's output is combined with frame-by-frame gaze coordinates measured by an eye-tracking device synchronized with the head-mounted camera to detect and annotate, without any manual intervention, what a user looked at for each frame of the provided footage. The effectiveness of the presented methodology was assessed by comparing the system's output with that of manual coders. High levels of agreement between the two validated the system as a preferable data collection technique, as it processed data at a significantly faster rate than its human counterparts. Support for the system's practicality was then further demonstrated via a case study exploring the mediatory effects of gaze behaviors on an environment-driven attentional bias.
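A minimal sketch of the core annotation step the abstract describes, assuming per-frame object detections (e.g., from Mask R-CNN) and a synchronized gaze coordinate are already available; the names and data structures below are hypothetical, not the Deep-SAGA implementation:

```python
# Hedged sketch: label each frame's gaze point with the detected object that
# contains it, or "background" if no detection does.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str            # e.g. "person", "bicycle"
    score: float          # detector confidence
    box: tuple            # (x_min, y_min, x_max, y_max) in pixels

def annotate_gaze(gaze_xy, detections, min_score=0.5):
    """Return the label of the highest-confidence detection containing the
    gaze point for this frame, or 'background' if none contains it."""
    x, y = gaze_xy
    hits = [d for d in detections
            if d.score >= min_score
            and d.box[0] <= x <= d.box[2] and d.box[1] <= y <= d.box[3]]
    return max(hits, key=lambda d: d.score).label if hits else "background"

# Example for one frame: gaze at pixel (412, 300) with two detections.
frame_dets = [Detection("person", 0.92, (380, 120, 520, 600)),
              Detection("dog", 0.71, (600, 400, 760, 560))]
print(annotate_gaze((412, 300), frame_dets))   # -> "person"
```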
Affiliation(s)
- Oliver Deane
- School of Sport, Exercise and Rehabilitation Sciences, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
- Eszter Toth
- School of Psychology, The University of Birmingham, Birmingham, UK
- Sang-Hoon Yeo
- School of Sport, Exercise and Rehabilitation Sciences, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
5
Lu Z, Pesarakli H. Seeing Is Believing: Using Eye-Tracking Devices in Environmental Research. HERD: Health Environments Research & Design Journal 2023; 16:15-52. [PMID: 36254371] [DOI: 10.1177/19375867221130806]
Abstract
OBJECTIVES This article aims to provide methodological guidance for research that uses eye-tracking devices (ETDs) to study environment and behavior relationships. BACKGROUND Vision is an important human sense through which people acquire a large amount of environmental information. ETDs are tools for detecting eye/gaze behaviors, facilitating better understanding of how people collect visual information and how such information is related to emotions and psychological states. However, there is a lack of guidance for the application of ETDs to environment and behavior studies. METHODS A literature review was conducted on articles reporting empirical studies that used ETDs. The data were extracted and compiled, including information such as research questions, research design, types of ETDs, variables measured, types of physical environment (or visual stimuli), stimulus durations, and data analysis methods. RESULTS Fifty articles were identified. The main research topics were related to urban and landscape environments, and to architecture and interior spaces. Most research designs were experimental or quasi-experimental, with a few cross-sectional studies. The most common type of ETD was the screen-based ETD, followed by mobile ETDs (glasses). The main variables were gaze fixations, fixation durations, and scan paths. Typical types of stimuli included images, videos, virtual reality, and real environments and/or objects. CONCLUSIONS Guidance for eye-tracking research on environment and behavior was developed based on the literature review results, to provide direction for determining research questions, selecting appropriate research designs, establishing participant inclusion and/or exclusion criteria, collecting and analyzing data, and interpreting research results.
Affiliation(s)
- Zhipeng Lu
- Department of Architecture, Texas A&M University, College Station, TX, USA
- Homa Pesarakli
- Department of Architecture, Texas A&M University, College Station, TX, USA
6
Matthis JS, Muller KS, Bonnen KL, Hayhoe MM. Retinal optic flow during natural locomotion. PLoS Comput Biol 2022; 18:e1009575. [PMID: 35192614] [PMCID: PMC8896712] [DOI: 10.1371/journal.pcbi.1009575]
Abstract
We examine the structure of the visual motion projected on the retina during natural locomotion in real-world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR), a walker's visually specified heading would vary dramatically throughout the gait cycle. The act of fixation on stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher-order structure that is informative about the stabilized trajectory of the eye through space. We measure this structure by applying curl and divergence operations to the retinal flow velocity vector fields and find features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specify the body's trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker's instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine body position relative to gaze direction, these time-varying retinotopic cues for the body's momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering towards a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of the role of optic flow in the control of action during natural behavior.
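A minimal sketch of the flow-field measures described above: 2-D curl and divergence computed on a flow field sampled on a regular grid. The synthetic expanding flow below is illustrative only, not the study's data or code.

```python
# Hedged sketch: curl (dv/dx - du/dy) and divergence (du/dx + dv/dy) of a
# 2-D flow field given as horizontal (u) and vertical (v) velocity arrays.
import numpy as np

def curl_divergence(u, v, spacing=1.0):
    """u, v: 2-D arrays of the same shape. Returns (curl, divergence)."""
    du_dy, du_dx = np.gradient(u, spacing)   # axis 0 = rows (y), axis 1 = x
    dv_dy, dv_dx = np.gradient(v, spacing)
    return dv_dx - du_dy, du_dx + dv_dy

# Synthetic outflow centred on the point of fixation (pure expansion):
y, x = np.mgrid[-50:51, -50:51].astype(float)
curl, div = curl_divergence(x, y)
print(div[50, 50], curl[50, 50])   # positive divergence, ~zero curl at the focus
```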
Affiliation(s)
- Jonathan Samir Matthis
- Department of Biology, Northeastern University, Boston, Massachusetts, United States of America
- Karl S. Muller
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Kathryn L. Bonnen
- School of Optometry, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Mary M. Hayhoe
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
7
Automatic Visual Attention Detection for Mobile Eye Tracking Using Pre-Trained Computer Vision Models and Human Gaze. Sensors 2021; 21:s21124143. [PMID: 34208736] [PMCID: PMC8235043] [DOI: 10.3390/s21124143]
Abstract
Processing visual stimuli in a scene is essential for the human brain to make situation-aware decisions. These stimuli, which are prevalent subjects of diagnostic eye tracking studies, are commonly encoded as rectangular areas of interest (AOIs) per frame. Because it is a tedious manual annotation task, the automatic detection and annotation of visual attention to AOIs can accelerate and objectify eye tracking research, in particular for mobile eye tracking with egocentric video feeds. In this work, we implement two methods to automatically detect visual attention to AOIs using pre-trained deep learning models for image classification and object detection. Furthermore, we develop an evaluation framework based on the VISUS dataset and well-known performance metrics from the field of activity recognition. We systematically evaluate our methods within this framework, discuss potentials and limitations, and propose ways to improve the performance of future automatic visual attention detection methods.
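As a hedged illustration of such a frame-wise evaluation (not the paper's exact framework), predicted and ground-truth AOI labels per video frame can be scored with standard classification metrics borrowed from activity recognition:

```python
# Hedged sketch: frame-wise evaluation of automatic visual attention
# detection. Each frame carries one ground-truth AOI label and one predicted
# label ("none" when no AOI is attended); labels here are hypothetical.
from sklearn.metrics import precision_recall_fscore_support

ground_truth = ["door", "door", "none", "sign", "sign", "sign", "none"]
predicted    = ["door", "none", "none", "sign", "sign", "door", "none"]

precision, recall, f1, _ = precision_recall_fscore_support(
    ground_truth, predicted, average="macro", zero_division=0)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```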
8
Papinutto M, Lao J, Lalanne D, Caldara R. Watchers do not follow the eye movements of Walkers. Vision Res 2020; 176:130-140. [PMID: 32882595] [DOI: 10.1016/j.visres.2020.08.001]
Abstract
Eye movements are a functional signature of how the visual system effectively decodes and adapts to the environment. However, scientific knowledge of eye movements mostly arises from studies conducted in laboratories, with well-controlled stimuli presented in constrained, unnatural settings. Only a few studies have attempted to directly compare and assess whether eye movement data acquired in the real world generalize to those acquired in laboratory settings with the same visual inputs. However, none of these studies controlled for both the auditory signals typical of real-world settings and the top-down task effects across conditions, leaving this question unresolved. To minimize this inherent gap across conditions, we compared the eye movements recorded from observers during ecological spatial navigation in the wild (the Walkers) with those recorded in the laboratory (the Watchers) on the same visual and auditory inputs, with both groups performing the very same active cognitive task. We derived robust data-driven statistical saliency and motion maps. The Walkers and Watchers differed in terms of eye movement characteristics: fixation number and duration, and saccade amplitude. The Watchers relied significantly more on saliency and motion than the Walkers. Interestingly, both groups exhibited similar fixation patterns towards social agents and objects. Altogether, our data show that eye movement patterns obtained in the laboratory do not fully generalize to the real world, even when task and auditory information are controlled. These observations invite caution when generalizing eye movements obtained in the laboratory to those of ecological spatial navigation.
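One generic ingredient of such comparisons, shown here as a sketch rather than the authors' data-driven statistical mapping, is a fixation density map built by accumulating fixation locations and smoothing with a Gaussian kernel:

```python
# Hedged sketch: normalized fixation density ("heat") map from a list of
# fixation positions in image pixels; sigma_px is a hypothetical smoothing
# width (roughly one degree of visual angle for a typical setup).
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fix_xy, width, height, sigma_px=30):
    """fix_xy: iterable of (x, y) fixation positions in pixels."""
    counts = np.zeros((height, width))
    for x, y in fix_xy:
        if 0 <= int(y) < height and 0 <= int(x) < width:
            counts[int(y), int(x)] += 1
    density = gaussian_filter(counts, sigma_px)
    return density / density.max() if density.max() > 0 else density

heatmap = fixation_map([(320, 240), (340, 250), (900, 500)], 1280, 720)
```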
Affiliation(s)
- M Papinutto
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Switzerland; Human-IST Institute, Department of Informatics, University of Fribourg, Switzerland.
- J Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Switzerland
- D Lalanne
- Human-IST Institute, Department of Informatics, University of Fribourg, Switzerland
- R Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Switzerland
9
Simpson J, Freeth M, Simpson KJ, Thwaites K. Visual engagement with urban street edges: insights using mobile eye-tracking. J Urbanism 2018. [DOI: 10.1080/17549175.2018.1552884]
Affiliation(s)
- James Simpson
- Department of Landscape Architecture, University of Sheffield, Sheffield, UK
- Megan Freeth
- Department of Psychology, University of Sheffield, Sheffield, UK
- Kevin Thwaites
- Department of Landscape Architecture, University of Sheffield, Sheffield, UK
10
Hassoumi A, Peysakhovich V, Hurter C. Uncertainty visualization of gaze estimation to support operator-controlled calibration. J Eye Mov Res 2018; 10:10.16910/jemr.10.5.6. [PMID: 33828671] [PMCID: PMC7141080] [DOI: 10.16910/jemr.10.5.6]
Abstract
In this paper, we investigate how visualization assets can support the qualitative evaluation of gaze estimation uncertainty. Although eye tracking data are commonly available, little has been done to visually investigate the uncertainty of recorded gaze information. This paper tries to fill this gap by using innovative uncertainty computation and visualization. Given a gaze processing pipeline, we estimate the location of the gaze position in the world camera image. To do so, we developed our own gaze data processing pipeline, which gives us access to every stage of the data transformation and thus to the uncertainty computation. To validate our gaze estimation pipeline, we designed an experiment with 12 participants and showed that the correction methods we propose reduced the Mean Angular Error by about 1.32 cm, aggregating all 12 participants' results. The Mean Angular Error is 0.25° (SD = 0.15°) after correction of the estimated gaze. Next, to support the qualitative assessment of these data, we provide a map which encodes the actual uncertainty from the user's point of view.
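One hedged illustration of how uncertainty might be propagated through a gaze pipeline, assuming for the sake of the sketch that the eye-to-scene mapping is a plane homography obtained during calibration (an assumption for illustration, not the authors' method): pupil-centre noise is sampled and pushed through the mapping, and the spread of the mapped points summarizes gaze uncertainty in the scene image, which could then drive an uncertainty overlay.

```python
# Hedged sketch: Monte-Carlo propagation of pupil-centre noise through a
# hypothetical calibration homography H, yielding a gaze point and a 2x2
# covariance in scene-camera pixels.
import numpy as np

def map_through_homography(H, pts):
    """Apply a 3x3 homography H to an Nx2 array of points."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def gaze_uncertainty(H, pupil_xy, sigma_px=1.0, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    samples = pupil_xy + rng.normal(0.0, sigma_px, size=(n_samples, 2))
    scene_pts = map_through_homography(H, samples)
    return scene_pts.mean(axis=0), np.cov(scene_pts.T)   # centre + covariance

H = np.array([[2.0, 0.1, 50.0], [0.0, 2.1, 30.0], [0.0, 0.0, 1.0]])  # made-up
centre, cov = gaze_uncertainty(H, np.array([320.0, 240.0]))
```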
11
Abstract
The present article shows that infant and dyad differences in hand-eye coordination predict dyad differences in joint attention (JA). In the study reported here, 51 toddlers ranging in age from 11 to 24 months and their parents wore head-mounted eye trackers as they played with objects together. We found that physically active toddlers aligned their looking behavior with their parent and achieved a substantial proportion of time spent jointly attending to the same object. However, JA did not arise through gaze following but rather through the coordination of gaze with manual actions on objects as both infants and parents attended to their partner's object manipulations. Moreover, dyad differences in JA were associated with dyad differences in hand following.
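A minimal sketch of the joint-attention measure described above, using hypothetical per-frame object labels for the two gaze streams (real streams would come from annotated head-mounted eye-tracking video):

```python
# Hedged sketch: proportion of frames in which infant and parent gaze land on
# the same object ("none" marks frames with no object fixated).
import numpy as np

infant = np.array(["cup", "cup", "ball", "ball", "none", "ball"])
parent = np.array(["cup", "ball", "ball", "ball", "ball", "none"])

on_object = (infant != "none") & (parent != "none")
joint = on_object & (infant == parent)
ja_proportion = joint.sum() / len(infant)
print(f"Proportion of frames in joint attention: {ja_proportion:.2f}")  # 0.50
```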