1
Morton MP, Denagamage S, Blume IJ, Reynolds JH, Jadi MP, Nandy AS. Brain state and cortical layer-specific mechanisms underlying perception at threshold. eLife 2024; 12:RP91722. [PMID: 39556415] [PMCID: PMC11573349] [DOI: 10.7554/elife.91722]
Abstract
Identical stimuli can be perceived or go unnoticed across successive presentations, producing divergent behavioral outcomes despite similarities in sensory input. We sought to understand how fluctuations in behavioral state and cortical layer and cell class-specific neural activity underlie this perceptual variability. We analyzed physiological measurements of state and laminar electrophysiological activity in visual area V4 while monkeys were rewarded for correctly reporting a stimulus change at perceptual threshold. Hit trials were characterized by a behavioral state with heightened arousal, greater eye position stability, and enhanced decoding performance of stimulus identity from neural activity. Target stimuli evoked stronger responses in V4 in hit trials, and excitatory neurons in the superficial layers, the primary feed-forward output of the cortical column, exhibited lower variability. Feed-forward interlaminar population correlations were stronger on hits. Hit trials were further characterized by greater synchrony between the output layers of the cortex during spontaneous activity, while the stimulus-evoked period showed elevated synchrony in the feed-forward pathway. Taken together, these results suggest that a state of elevated arousal and stable retinal images allow enhanced processing of sensory stimuli, which contributes to hits at perceptual threshold.
Affiliation(s)
- Mitchell P Morton
- Department of Neuroscience, Yale University, New Haven, United States
- Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Sachira Denagamage
- Department of Neuroscience, Yale University, New Haven, United States
- Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Isabel J Blume
- Department of Neuroscience, Yale University, New Haven, United States
- John H Reynolds
- Systems Neurobiology Laboratories, The Salk Institute for Biological Studies, La Jolla, United States
- Monika P Jadi
- Department of Neuroscience, Yale University, New Haven, United States
- Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Department of Psychiatry, Yale University, New Haven, United States
- Wu Tsai Institute, Yale University, New Haven, United States
- Anirvan S Nandy
- Department of Neuroscience, Yale University, New Haven, United States
- Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Wu Tsai Institute, Yale University, New Haven, United States
- Department of Psychology, Yale University, New Haven, United States
- Kavli Institute for Neuroscience, Yale University, New Haven, United States
2
Taore A, Tiang M, Dakin SC. (The limits of) eye-tracking with iPads. J Vis 2024; 24:1. [PMID: 38953861] [PMCID: PMC11223623] [DOI: 10.1167/jov.24.7.1]
Abstract
Applications for eye-tracking, particularly in the clinic, are limited by a reliance on dedicated hardware. Here we compare eye-tracking implemented on an Apple iPad Pro 11" (third generation), using the device's infrared head-tracking and front-facing camera, with a Tobii 4c infrared eye-tracker. We estimated gaze location using both systems while 28 observers performed a variety of tasks. For estimating fixation, gaze position estimates from the iPad were less accurate and precise than the Tobii (mean absolute error of 3.2° ± 2.0° compared with 0.75° ± 0.43°), but fixation stability estimates were correlated across devices (r = 0.44, p < 0.05). For tasks eliciting saccades >1.5°, estimated saccade counts (r = 0.4-0.73, all p < 0.05) were moderately correlated across devices. For tasks eliciting saccades >8° we observed moderate correlations in estimated saccade speed and amplitude (r = 0.4-0.53, all p < 0.05). We did, however, note considerable variation in the vertical component of estimated smooth pursuit speed from the iPad and a catastrophic failure of tracking on the iPad in 5% to 20% of observers (depending on the test). Our findings sound a note of caution to researchers seeking to use iPads for eye-tracking and emphasize the need to properly examine their eye-tracking data to remove artifacts and outliers.
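For readers unfamiliar with the accuracy and precision metrics compared in this kind of study, the sketch below is our own illustration (not the authors' analysis code); it assumes gaze samples already converted to degrees of visual angle, and the function name is ours.

```python
import numpy as np

def fixation_accuracy_precision(gaze_xy_deg, target_xy_deg):
    """Accuracy: mean absolute (Euclidean) error from a known fixation
    target. Precision: RMS of sample-to-sample displacements during
    the fixation. Inputs are in degrees of visual angle."""
    gaze = np.asarray(gaze_xy_deg, dtype=float)          # shape (N, 2)
    offsets = gaze - np.asarray(target_xy_deg, dtype=float)
    accuracy = np.mean(np.linalg.norm(offsets, axis=1))
    precision = np.sqrt(np.mean(np.sum(np.diff(gaze, axis=0) ** 2, axis=1)))
    return accuracy, precision
```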
Affiliation(s)
- Aryaman Taore
- School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand
- Michelle Tiang
- School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand
- Steven C Dakin
- School of Optometry & Vision Science, The University of Auckland, Auckland, New Zealand
- UCL Institute of Ophthalmology, University College London, London, United Kingdom
3
Xiao W, Sharma S, Kreiman G, Livingstone MS. Feature-selective responses in macaque visual cortex follow eye movements during natural vision. Nat Neurosci 2024; 27:1157-1166. [PMID: 38684892] [PMCID: PMC11156562] [DOI: 10.1038/s41593-024-01631-5]
Abstract
In natural vision, primates actively move their eyes several times per second via saccades. It remains unclear whether, during this active looking, visual neurons exhibit classical retinotopic properties, anticipate gaze shifts or mirror the stable quality of perception, especially in complex natural scenes. Here, we let 13 monkeys freely view thousands of natural images across 4.6 million fixations, recorded 883 h of neuronal responses in six areas spanning primary visual to anterior inferior temporal cortex and analyzed spatial, temporal and featural selectivity in these responses. Face neurons tracked their receptive field contents, indicated by category-selective responses. Self-consistency analysis showed that general feature-selective responses also followed eye movements and remained gaze-dependent over seconds of viewing the same image. Computational models of feature-selective responses located retinotopic receptive fields during free viewing. We found limited evidence for feature-selective predictive remapping and no viewing-history integration. Thus, ventral visual neurons represent the world in a predominantly eye-centered reference frame during natural vision.
Affiliation(s)
- Will Xiao
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Saloni Sharma
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Gabriel Kreiman
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
4
He X, Selesnick I, Zhu M. Research Progress of Eye Movement Analyses and its Detection Algorithms in Alzheimer's Disease. Curr Alzheimer Res 2024; 21:91-100. [PMID: 38661033] [DOI: 10.2174/0115672050300564240416074025]
Abstract
Alzheimer's disease (AD) is considered one of the most challenging forms of dementia, and the earlier patients are diagnosed, the easier it is to find an effective treatment. Building on earlier literature summarizing research on the relationship between eye movements and AD before 2013, this paper reviews 34 original eye-movement studies closely related to AD published in the past ten years. Prosaccade (4 papers) and antisaccade (5 papers) tasks, reading tasks (3 papers), and visual search tasks (3 papers) remain common objects of study, while some researchers have turned to King-Devick tasks (2 papers) and special tasks (8 papers), and have begun to combine different saccade tasks to probe the relationship between eye movements and AD, which had not been done before. These trends reflect the diversity of eye-movement tasks and the complexity of the relationship between eye movements and AD. On this basis, current methods for processing and analyzing eye-movement datasets are discussed in detail, and we note that key data that may be especially important for the early diagnosis of AD must not be misclassified as noise and removed. Finally, we argue that methods that can accurately denoise, classify, and rapidly process massive eye-movement datasets are essential for applying eye tracking to the early diagnosis of AD.
Affiliation(s)
- Xueying He
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ming Zhu
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei, China
5
González-Vides L, Hernández-Verdejo JL, Cañadas-Suárez P. Eye Tracking in Optometry: A Systematic Review. J Eye Mov Res 2023; 16:10.16910/jemr.16.3.3. [PMID: 38111688] [PMCID: PMC10725735] [DOI: 10.16910/jemr.16.3.3]
Abstract
This systematic review examines the use of eye-tracking devices in optometry, describing their main characteristics, areas of application and metrics used. Using the PRISMA method, a systematic search was performed of three databases. The search strategy identified 141 reports relevant to this topic, indicating the exponential growth over the past ten years of the use of eye trackers in optometry. Eye-tracking technology was applied in at least 12 areas of the field of optometry and rehabilitation, the main ones being optometric device technology, and the assessment, treatment, and analysis of ocular disorders. The main devices reported on were infrared light-based and had an image capture frequency of 60 Hz to 2000 Hz. The main metrics mentioned were fixations, saccadic movements, smooth pursuit, microsaccades, and pupil variables. Study quality was sometimes limited in that incomplete information was provided regarding the devices used, the study design, the methods used, participants' visual function and statistical treatment of data. While there is still a need for more research in this area, eye-tracking devices should be more actively incorporated as a useful tool with both clinical and research applications. This review highlights the robustness this technology offers to obtain objective information about a person's vision in terms of optometry and visual function, with implications for improving visual health services and our understanding of the vision process.
6
Denagamage S, Morton MP, Hudson NV, Reynolds JH, Jadi MP, Nandy AS. Laminar mechanisms of saccadic suppression in primate visual cortex. Cell Rep 2023; 42:112720. [PMID: 37392385] [PMCID: PMC10528056] [DOI: 10.1016/j.celrep.2023.112720]
Abstract
Saccadic eye movements are known to cause saccadic suppression, a temporary reduction in visual sensitivity and visual cortical firing rates. While saccadic suppression has been well characterized at the level of perception and single neurons, relatively little is known about the visual cortical networks governing this phenomenon. Here we examine the effects of saccadic suppression on distinct neural subpopulations within visual area V4. We find subpopulation-specific differences in the magnitude and timing of peri-saccadic modulation. Input-layer neurons show changes in firing rate and inter-neuronal correlations prior to saccade onset, and putative inhibitory interneurons in the input layer elevate their firing rate during saccades. A computational model of this circuit recapitulates our empirical observations and demonstrates that an input-layer-targeting pathway can initiate saccadic suppression by enhancing local inhibitory activity. Collectively, our results provide a mechanistic understanding of how eye movement signaling interacts with cortical circuitry to enforce visual stability.
Affiliation(s)
- Sachira Denagamage
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Mitchell P Morton
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Nyomi V Hudson
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA
- John H Reynolds
- Systems Neurobiology Laboratories, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA
- Monika P Jadi
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Psychiatry, Yale University, New Haven, CT 06511, USA; Kavli Institute for Neuroscience, Yale University, New Haven, CT 06511, USA; Wu Tsai Institute, Yale University, New Haven, CT 06511, USA
- Anirvan S Nandy
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Kavli Institute for Neuroscience, Yale University, New Haven, CT 06511, USA; Wu Tsai Institute, Yale University, New Haven, CT 06511, USA
7
Brilhault A, Neuenschwander S, Rios RA. A new robust multivariate mode estimator for eye-tracking calibration. Behav Res Methods 2023; 55:516-553. [PMID: 35297014] [DOI: 10.3758/s13428-022-01809-4]
Abstract
We propose in this work a new method for estimating the main mode of multivariate distributions, with application to eye-tracking calibration. When performing eye-tracking experiments with poorly cooperative subjects, such as infants or monkeys, the calibration data generally suffer from high contamination. Outliers are typically organized in clusters, corresponding to fixations in the time intervals when subjects were not looking at the calibration points. In this type of multimodal distributions, most central tendency measures fail at estimating the principal fixation coordinates (the first mode), resulting in errors and inaccuracies when mapping the gaze to the screen coordinates. Here, we developed a new algorithm to identify the first mode of multivariate distributions, named BRIL, which relies on recursive depth-based filtering. This novel approach was tested on artificial mixtures of Gaussian and Uniform distributions, and compared to existing methods (conventional depth medians, robust estimators of location and scatter, and clustering-based approaches). We obtained outstanding performances, even for distributions containing very high proportions of outliers, both grouped in clusters and randomly distributed. Finally, we demonstrate the strength of our method in a real-world scenario using experimental data from eye-tracking calibrations with Capuchin monkeys, especially for highly contaminated distributions where other algorithms typically lack accuracy.
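The published BRIL algorithm relies on formal statistical depth functions; the sketch below is only a deliberately simplified stand-in for the recursive-filtering idea, peeling away shallow points around a running coordinate-wise median. The function name and parameters are ours, not the authors'.

```python
import numpy as np

def first_mode_estimate(points, keep_frac=0.5, n_iter=5):
    """Crude illustration of recursive depth-based filtering: repeatedly
    retain the points closest to the current robust centre so clustered
    outliers are peeled away, then return the median of the surviving
    core as the first-mode estimate."""
    pts = np.asarray(points, dtype=float)
    for _ in range(n_iter):
        center = np.median(pts, axis=0)
        dist = np.linalg.norm(pts - center, axis=1)
        keep = dist <= np.quantile(dist, keep_frac)
        if keep.sum() < 3:          # stop before the sample degenerates
            break
        pts = pts[keep]
    return np.median(pts, axis=0)
```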
Affiliation(s)
- Adrien Brilhault
- Department of Computer Science, Federal University of Bahia, Salvador, Brazil
- Ricardo Araujo Rios
- Department of Computer Science, Federal University of Bahia, Salvador, Brazil
8
Chen S, Jiang M, Yang J, Zhao Q. Attention in Reasoning: Dataset, Analysis, and Modeling. IEEE Trans Pattern Anal Mach Intell 2022; 44:7310-7326. [PMID: 34550881] [DOI: 10.1109/tpami.2021.3114582]
Abstract
While attention has been an increasingly popular component in deep neural networks to both interpret and boost the performance of models, little work has examined how attention progresses to accomplish a task and whether it is reasonable. In this work, we propose an Attention with Reasoning capability (AiR) framework that uses attention to understand and improve the process leading to task outcomes. We first define an evaluation metric based on a sequence of atomic reasoning operations, enabling a quantitative measurement of attention that considers the reasoning process. We then collect human eye-tracking and answer correctness data, and analyze various machine and human attention mechanisms on their reasoning capability and how they impact task performance. To improve the attention and reasoning ability of visual question answering models, we propose to supervise the learning of attention progressively along the reasoning process and to differentiate the correct and incorrect attention patterns. We demonstrate the effectiveness of the proposed framework in analyzing and modeling attention with better reasoning capability and task performance. The code and data are available at https://github.com/szzexpoi/AiR.
9
Zdravkova K, Krasniqi V, Dalipi F, Ferati M. Cutting-edge communication and learning assistive technologies for disabled children: An artificial intelligence perspective. Front Artif Intell 2022; 5:970430. [PMID: 36388402] [PMCID: PMC9650429] [DOI: 10.3389/frai.2022.970430]
Abstract
In this study we provide an in-depth review and analysis of the impact of artificial intelligence (AI) components and solutions that support the development of cutting-edge assistive technologies for children with special needs. Various disabilities are addressed, and the most recent assistive technologies that enhance communication and education of disabled children, as well as the AI technologies that have enabled their development, are presented. The paper concludes with an AI perspective on future assistive technologies and the ethical concerns arising from the use of such cutting-edge communication and learning technologies for children with disabilities.
Affiliation(s)
- Katerina Zdravkova
- Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University in Skopje, Skopje, North Macedonia
- Venera Krasniqi
- Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University in Skopje, Skopje, North Macedonia
- Fisnik Dalipi
- Department of Informatics, Faculty of Technology, Linnaeus University, Växjö, Sweden
- Mexhid Ferati
- Department of Informatics, Faculty of Technology, Linnaeus University, Växjö, Sweden
10
Real-time identification of eye fixations and saccades using Radial Basis Function Networks and Markov Chains. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.08.013]
11
Teng C, Sharma H, Drukker L, Papageorghiou AT, Noble JA. Visualising Spatio-Temporal Gaze Characteristics for Exploratory Data Analysis in Clinical Fetal Ultrasound Scans. Proceedings of the Eye Tracking Research & Applications Symposium 2022; 2022:31. [PMID: 36649381] [PMCID: PMC7614061] [DOI: 10.1145/3517031.3529635]
Abstract
Visualising patterns in clinicians' eye movements while interpreting fetal ultrasound imaging videos is challenging. Across and within videos, there are differences in the size and position of Areas-of-Interest (AOIs) due to fetal position, movement and sonographer skill. Currently, AOIs are manually labelled or identified using eye-tracker manufacturer specifications which are not study specific. We propose using unsupervised clustering to identify meaningful AOIs and bi-contour plots to visualise spatio-temporal gaze characteristics. We use Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) to identify the AOIs, and use their corresponding images to capture granular changes within each AOI. Then we visualise transitions within and between AOIs as read by the sonographer. We compare our method to a standardised eye-tracking manufacturer algorithm. Our method captures granular changes in gaze characteristics which are otherwise not shown, and is suitable for exploratory data analysis of eye-tracking data involving multiple participants and AOIs.
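A minimal sketch of the AOI-identification step follows; it assumes scikit-learn >= 1.3 for its HDBSCAN implementation, and the authors' library choice and parameters may well differ.

```python
import numpy as np
from sklearn.cluster import HDBSCAN  # requires scikit-learn >= 1.3

def label_aois(gaze_xy, min_cluster_size=50):
    """Cluster raw gaze samples (N x 2 screen coordinates) into
    data-driven areas of interest; label -1 marks noise samples
    belonging to no AOI."""
    return HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(
        np.asarray(gaze_xy, dtype=float))
```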
Affiliation(s)
- Clare Teng
- University of Oxford, Institute of Biomedical Engineering, United Kingdom
- Harshita Sharma
- University of Oxford, Institute of Biomedical Engineering, United Kingdom
- Lior Drukker
- University of Oxford, Nuffield Department of Women's and Reproductive Health, United Kingdom and Tel-Aviv University, Women's Ultrasound, Department of Obstetrics and Gynecology, Beilinson Medical Center, Sackler Faculty of Medicine, Israel
- Aris T Papageorghiou
- University of Oxford, Nuffield Department of Women's and Reproductive Health, United Kingdom
- J. Alison Noble
- University of Oxford, Institute of Biomedical Engineering, United Kingdom
12
Rose O, Johnson J, Wang B, Ponce CR. Visual prototypes in the ventral stream are attuned to complexity and gaze behavior. Nat Commun 2021; 12:6723. [PMID: 34795262] [PMCID: PMC8602238] [DOI: 10.1038/s41467-021-27027-8]
Abstract
Early theories of efficient coding suggested the visual system could compress the world by learning to represent features where information was concentrated, such as contours. This view was validated by the discovery that neurons in posterior visual cortex respond to edges and curvature. Still, it remains unclear what other information-rich features are encoded by neurons in more anterior cortical regions (e.g., inferotemporal cortex). Here, we use a generative deep neural network to synthesize images guided by neuronal responses from across the visuocortical hierarchy, using floating microelectrode arrays in areas V1, V4 and inferotemporal cortex of two macaque monkeys. We hypothesize these images ("prototypes") represent such predicted information-rich features. Prototypes vary across areas, show moderate complexity, and resemble salient visual attributes and semantic content of natural images, as indicated by the animals' gaze behavior. This suggests the code for object recognition represents compressed features of behavioral relevance, an underexplored aspect of efficient coding.
Affiliation(s)
- Olivia Rose
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- James Johnson
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA
- Binxu Wang
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Carlos R Ponce
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
13
Daddaoua N, Jedema HP, Bradberry CW. Deliberative Decision-Making in Macaques Removes Reward-Driven Response Vigor. Front Behav Neurosci 2021; 15:674169. [PMID: 34489655] [PMCID: PMC8416506] [DOI: 10.3389/fnbeh.2021.674169]
Abstract
Most of our daily decisions are governed by one of two systems: an impulsive system driving instantaneous decisions and a deliberative system driving thoughtful ones. The impulsive system reacts to immediately available concrete rewards. In contrast, the deliberative system reacts to more delayed rewards and/or punishments, which imposes consideration of longer-term choice consequences. Contingency management for addiction treatment is hypothesized to engage deliberative processes. Ultimately, in both decision-making situations, an action is needed to enact the decision. Whether those actions differ in implementation is an open question whose answer could inform as to whether distinct neural systems are engaged. To explore whether there is evidence of separate mechanisms between deliberated and immediate choices, we trained monkeys to perform a decision-making task where they made a choice on a touch screen between two visual cues predicting different amounts of reward. In immediate choice (IC) trials, the cues appeared at the final response locations where subjects could immediately touch the chosen cue. In deliberated choice (DC) trials, compound cues appeared orthogonally to the response locations. After a delay, allowing for decision formation, an identifying cue component was displaced to the randomly assigned response locations, permitting subjects to reach for the chosen cue. Both trial types showed an effect of cue value on cue selection time. However, only IC trials showed an effect of the competing cue on response vigor (measured by movement duration) and a reach trajectory that deviated in the direction of the competing cue, suggesting a decision reexamination process. Reward modulation of response vigor implicates dopaminergic mechanisms. In DC trials, reach trajectories revealed a commitment to the chosen choice target, and reach vigor was not modulated by the value of the competing cue. Our results suggest that choice–action dynamics are shaped by competing offers only during instantaneous, impulsive choice. After a deliberated decision, choice–action dynamics are unaffected by the alternative offer cue, demonstrating a commitment to the choice. The potential relevance to contingency management is discussed.
Affiliation(s)
- Nabil Daddaoua
- National Institute on Drug Abuse (NIDA) Intramural Research Program, Baltimore, MD, United States
- Hank P Jedema
- National Institute on Drug Abuse (NIDA) Intramural Research Program, Baltimore, MD, United States
- Charles W Bradberry
- National Institute on Drug Abuse (NIDA) Intramural Research Program, Baltimore, MD, United States
14
Relevance of Artifact Removal and Number of Stimuli for Video Head Impulse Test Examination. Ear Hear 2021; 41:1397-1406. [PMID: 32058350] [DOI: 10.1097/aud.0000000000000849]
Abstract
OBJECTIVE To evaluate the effect of artifacts on the impulse and response recordings with the video head impulse test (VHIT) and determine how many stimuli are necessary for obtaining acceptably efficient measurements. METHODS One hundred fifty patients were examined using VHIT, and their recordings were searched for artifacts. We compared several variations of the dataset. The first variation used only samples without artifacts, the second used all samples (with and without artifacts), and the rest used only samples with each type of artifact. We calculated the relative efficiency (RE) of evaluating an increasingly large number of samples (3 to 19 per side) when compared with the complete sample (20 impulses per side). RESULTS Overshoot was associated with significantly higher speed (p = 0.005), longer duration (p < 0.001) and lower amplitude of the impulses (p = 0.002), with consequently higher latency (p = 0.035) and lower amplitude (p = 0.025) of the saccades. Loss of track was associated with lower gain (p = 0.035). Blink was associated with a higher number of saccades (p < 0.001), and wrong way was associated with lower saccade latency (p = 0.012). The coefficient of quartile deviation rose as the number of artifacts of any type increased, indicating greater variability. Overshoot increased the probability of an impulse lying in the outlier range for gain and peak speed; blink did so for the number of saccades, and wrong way for saccade amplitude and speed. RE reached a tolerable level of 1.1 at 7 to 10 impulses for all measurements except the PR score. CONCLUSIONS Our results suggest the necessity of removing artifacts after collecting VHIT samples to improve the accuracy and precision of results. Ten impulses are sufficient for achieving acceptable RE for all measurements except the PR score.
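The abstract does not give the exact RE formula. One conventional reading, sketched below with illustrative names and a made-up resampling scheme, estimates RE as the ratio of the resampled variance of the n-impulse mean gain to that of the full-sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_efficiency(gains, n, n_boot=2000):
    """Ratio of the bootstrap variance of the mean computed from n
    impulses to that computed from all impulses; values near 1 mean
    the smaller sample is nearly as efficient as the full one."""
    gains = np.asarray(gains, dtype=float)
    def boot_var(k):
        means = [rng.choice(gains, size=k, replace=True).mean()
                 for _ in range(n_boot)]
        return np.var(means)
    return boot_var(n) / boot_var(gains.size)
```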
15
Analysis of the Learning Process through Eye Tracking Technology and Feature Selection Techniques. Appl Sci (Basel) 2021. [DOI: 10.3390/app11136157]
Abstract
In recent decades, technological resources such as eye tracking have provided cognitive researchers with important tools to better understand the learning process. However, interpreting the metrics requires supervised and unsupervised learning techniques. The main goal of this study was to analyse eye-tracking results by applying statistical tests and supervised and unsupervised machine learning techniques, and to contrast the effectiveness of each. Parameters of fixations, saccades, blinks and scan paths were extracted, together with performance on a puzzle task. The statistical study concluded that no significant differences were found between participants in solving the crossword puzzle task; significant differences were detected only in the parameters saccade amplitude minimum and saccade velocity minimum. The supervised machine learning techniques, in turn, suggested possible features for analysis, some of them different from those used in the statistical study. Regarding the clustering techniques, a good fit was found between the algorithms used (k-means++, fuzzy k-means and DBSCAN), which grouped participants into three types of learning profile (students over 50 years old; and students and teachers under 50 years of age). Therefore, the two types of data analysis are considered complementary.
16
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. Detection of normal and slow saccades using implicit piecewise polynomial approximation. J Vis 2021; 21:8. [PMID: 34125160] [PMCID: PMC8212426] [DOI: 10.1167/jov.21.6.8]
Abstract
The quantitative analysis of saccades in eye movement data unveils information associated with intention, cognition, and health status. Abnormally slow saccades are indicative of neurological disorders and often imply a specific pathological disturbance. However, conventional saccade detection algorithms are not designed to detect slow saccades, and are correspondingly unreliable when saccades are unusually slow. In this article, we propose an algorithm that is effective for the detection of both normal and slow saccades. The proposed algorithm is partly based on modeling saccadic waveforms as piecewise-quadratic signals. The algorithm first decreases noise in acquired eye-tracking data using optimization to minimize a prescribed objective function, then uses velocity thresholding to detect saccades. Using both simulated saccades and real saccades generated by healthy subjects and patients, we evaluate the performance of the proposed algorithm and 10 other detection algorithms. We show the proposed algorithm is more accurate in detecting both normal and slow saccades than other algorithms.
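The paper's denoising step solves an optimization tailored to piecewise-quadratic waveforms; the fragment below shows only the generic final stage, velocity thresholding on smoothed position. The smoothing method, threshold, and names are illustrative, and a lower threshold would be needed for the slow saccades this paper targets.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_saccades(position_deg, fs, vel_thresh=30.0, sigma_ms=5.0):
    """Return (onset, offset) sample indices of intervals whose smoothed
    angular velocity exceeds a fixed threshold in deg/s. Assumes the
    recording starts and ends below threshold."""
    smoothed = gaussian_filter1d(np.asarray(position_deg, dtype=float),
                                 sigma=sigma_ms * fs / 1000.0)
    velocity = np.gradient(smoothed) * fs               # deg/s
    above = np.abs(velocity) > vel_thresh
    edges = np.flatnonzero(np.diff(above.astype(int)))  # rising/falling edges
    return list(zip(edges[::2] + 1, edges[1::2] + 1))
```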
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
17
Boundary-anchored neural mechanisms of location-encoding for self and others. Nature 2020; 589:420-425. [PMID: 33361808] [DOI: 10.1038/s41586-020-03073-y]
Abstract
Everyday tasks in social settings require humans to encode neural representations of not only their own spatial location, but also the location of other individuals within an environment. At present, the vast majority of what is known about neural representations of space for self and others stems from research in rodents and other non-human animals [1-3]. However, it is largely unknown how the human brain represents the location of others, and how aspects of human cognition may affect these location-encoding mechanisms. To address these questions, we examined individuals with chronically implanted electrodes while they carried out real-world spatial navigation and observation tasks. We report boundary-anchored neural representations in the medial temporal lobe that are modulated by one's own as well as another individual's spatial location. These representations depend on one's momentary cognitive state, and are strengthened when encoding of location is of higher behavioural relevance. Together, these results provide evidence for a common encoding mechanism in the human brain that represents the location of oneself and others in shared environments, and shed new light on the neural mechanisms that underlie spatial navigation and awareness of others in real-world scenarios.
18
Voloh B, Watson MR, König S, Womelsdorf T. MAD saccade: statistically robust saccade threshold estimation via the median absolute deviation. J Eye Mov Res 2020; 12:10.16910/jemr.12.8.3. [PMID: 33828776] [PMCID: PMC7881893] [DOI: 10.16910/jemr.12.8.3]
Abstract
Saccade detection is a critical step in the analysis of gaze data. A common method for saccade detection is to use a simple threshold for velocity or acceleration values, which can be estimated from the data using the mean and standard deviation. However, this method has the downside of being influenced by the very signal it is trying to detect: the outlying velocities or accelerations that occur during saccades. We propose instead to use the median absolute deviation (MAD), a robust estimator of dispersion that is not influenced by outliers. We modify an algorithm proposed by Nyström and colleagues, and quantify saccade detection performance in both simulated and human data. Our modified algorithm shows a significant and marked improvement in saccade detection, with both more true positives and fewer false negatives, especially under higher noise levels. We conclude that robust estimators can be widely adopted in other common automatic gaze classification algorithms due to their ease of implementation.
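The core robust threshold is simple to state; a minimal sketch (our paraphrase of the idea, not the authors' released code, with an illustrative multiplier):

```python
import numpy as np

def mad_velocity_threshold(velocity, n_mads=3.0):
    """Saccade velocity threshold from the median and the median
    absolute deviation (MAD). Unlike the mean and SD, these are barely
    influenced by the high-velocity saccade samples themselves."""
    v = np.abs(np.asarray(velocity, dtype=float))
    med = np.median(v)
    mad = 1.4826 * np.median(np.abs(v - med))  # scaled to match SD under normality
    return med + n_mads * mad
```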
19
Sakon JJ, Suzuki WA. A neural signature of pattern separation in the monkey hippocampus. Proc Natl Acad Sci U S A 2019; 116:9634-9643. [PMID: 31010929] [PMCID: PMC6511004] [DOI: 10.1073/pnas.1900804116]
Abstract
The CA3 and dentate gyrus (DG) regions of the hippocampus are considered key for disambiguating sensory inputs from similar experiences in memory, a process termed pattern separation. The neural mechanisms underlying pattern separation, however, have been difficult to compare across species: rodents offer robust recording methods with less human-centric tasks, while humans provide complex behavior with less recording potential. To overcome these limitations, we trained monkeys to perform a visual pattern separation task similar to those used in humans while recording activity from single CA3/DG neurons. We find that, when animals discriminate recently seen novel images from similar (lure) images, behavior indicative of pattern separation, CA3/DG neurons respond to lure images more like novel than repeat images. Using a population of these neurons, we are able to classify novel, lure, and repeat images from each other using this pattern of firing rates. Notably, one subpopulation of these neurons is more responsible for distinguishing lures and repeats: the key discrimination indicative of pattern separation.
Affiliation(s)
- John J Sakon
- Center for Neural Science, New York University, New York, NY 10003
- Wendy A Suzuki
- Center for Neural Science, New York University, New York, NY 10003
20
Dalrymple KA, Jiang M, Zhao Q, Elison JT. Machine learning accurately classifies age of toddlers based on eye tracking. Sci Rep 2019; 9:6255. [PMID: 31000762] [PMCID: PMC6472500] [DOI: 10.1038/s41598-019-42764-z]
Abstract
How people extract visual information from complex scenes provides important information about cognitive processes. Eye tracking studies that have used naturalistic, rather than highly controlled experimental stimuli, reveal that variability in looking behavior is determined by bottom-up image properties such as intensity, color, and orientation, top-down factors such as task instructions and semantic information, and individual differences in genetics, cognitive function and social functioning. These differences are often revealed using areas of interest that are chosen by the experimenter or other human observers. In contrast, we adopted a data-driven approach by using machine learning (Support Vector Machine (SVM) and Deep Learning (DL)) to elucidate factors that contribute to age-related variability in gaze patterns. These models classified the infants by age with a high degree of accuracy, and identified meaningful features distinguishing the age groups. Our results demonstrate that machine learning is an effective tool for understanding how looking patterns vary according to age, providing insight into how toddlers allocate attention and how that changes with development. This sensitivity for detecting differences in exploratory gaze behavior in toddlers highlights the utility of machine learning for characterizing a variety of developmental capacities.
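A schematic of the SVM branch of such an approach follows; the data and feature names are placeholders invented for illustration, not the study's actual gaze features or labels.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))      # per-child gaze features (placeholder data)
y = np.repeat([0, 1], 60)          # age-group labels, e.g. younger vs older

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on this random data
```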
Affiliation(s)
- Ming Jiang
- Computer Science and Engineering, University of Minnesota, Minneapolis, USA
- Qi Zhao
- Computer Science and Engineering, University of Minnesota, Minneapolis, USA
- Jed T Elison
- Institute of Child Development, University of Minnesota, Minneapolis, USA
- Department of Pediatrics, University of Minnesota, Minneapolis, USA
21
Visual Performance and Perception as a Target of Saccadic Strategies in Patients With Unilateral Vestibular Loss. Ear Hear 2019; 39:1176-1186. [PMID: 29578887] [DOI: 10.1097/aud.0000000000000576]
Abstract
OBJECTIVES To evaluate the ability of saccadic strategies developed during vestibular compensation to reduce the effect of an impaired vestibulo-ocular reflex (VOR) on retinal smear and the sensation of image motion. DESIGN Twenty patients with unilateral vestibular loss were examined with a video head impulse test before and after vestibular rehabilitation (VR) based on gaze stabilization and refixation saccade training. Head and eye velocity functions were processed to infer retinal eccentricity and, through its correlation with visual acuity (VA), several measurements are proposed to evaluate the influence of VR on saccade behavior and visual performance. To isolate the effect of saccades on the findings and avoid bias due to gain differences, only patients whose VOR gain values remained unchanged after VR were included. RESULTS Improved contribution of covert saccades and reduced latency of overt saccades were measured after VR. We found significant differences when assessing both the interval below 70% VA (50.25 ms), which is considered the limit of moderate low vision, and below 50% VA (39.515 ms), the limit for severe low vision. Time to recover a VA of 75% (near normal) was reduced in all patients (median: 56.472 ms). CONCLUSION Despite the absence of VOR gain improvement, patients with unilateral vestibular loss are able to develop saccadic strategies that shorten the interval of retinal smear and image motion. The proposed measurements might be of use to evaluate VR outcomes and visually induced impairment.
22
Stuart S, Hickey A, Vitorio R, Welman K, Foo S, Keen D, Godfrey A. Eye-tracker algorithms to detect saccades during static and dynamic tasks: a structured review. Physiol Meas 2019; 40:02TR01. [DOI: 10.1088/1361-6579/ab02ab]
23
Cornia M, Abati D, Baraldi L, Palazzi A, Calderara S, Cucchiara R. Attentive models in vision: Computing saliency maps in the deep learning era. Intelligenza Artificiale 2019. [DOI: 10.3233/ia-170033]
Affiliation(s)
- Marcella Cornia
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia
- Davide Abati
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia
- Lorenzo Baraldi
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia
- Andrea Palazzi
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia
- Simone Calderara
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia
- Rita Cucchiara
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia
24
Costela FM, Woods RL. When Watching Video, Many Saccades Are Curved and Deviate From a Velocity Profile Model. Front Neurosci 2019; 12:960. [PMID: 30666178] [PMCID: PMC6330331] [DOI: 10.3389/fnins.2018.00960]
Abstract
Commonly, saccades are thought to be ballistic eye movements, not modified during flight, with a straight path and a well-described velocity profile. However, they do not always follow a straight path, and studies of saccade curvature have been reported previously. In a prior study, we developed a real-time saccade-trajectory prediction algorithm to improve the updating of gaze-contingent displays and found that saccades with a curved path, or that deviated from the expected velocity profile (velocity-profile deviation), were not well fit by our saccade-prediction algorithm and thus had larger updating errors than saccades with a straight path and a velocity profile that the model fit well. Further, we noticed that curved saccades and saccades with high velocity-profile deviations were more common than we had expected when participants performed a natural-viewing task. Since those saccades caused larger display-updating errors, we sought a better understanding of them. Here we examine factors that could affect the curvature and velocity profile of saccades using a pool of 218,744 saccades from 71 participants watching "Hollywood" video clips. Those factors included characteristics of the participants (e.g., age), of the videos (importance of faces for following the story, genre), of the saccade (e.g., magnitude, direction), time during the session (e.g., fatigue) and the presence and timing of scene cuts. While viewing the video clips, saccades were more likely to be horizontal or vertical than oblique. Measured curvature and velocity-profile deviation had continuous, skewed frequency distributions. We used mixed-effects regression models that included cubic terms and found a complex relationship between curvature, velocity-profile deviation and saccade duration (or magnitude). Curvature and velocity-profile deviation were related to some video-dependent features such as lighting, face presence, or nature and human-figure content. Time during the session was a predictor of velocity-profile deviations. Further, in univariable models, saccades that were in flight at the time of a scene cut had higher velocity-profile deviations and lower curvature. Saccade characteristics vary with a variety of factors, suggesting complex interactions between oculomotor control and scene content that could be explored further.
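Saccade curvature is quantified in several ways in the literature; one common scalar, which may differ in detail from the measure used here, is the maximum perpendicular deviation of the gaze path from the onset-offset chord, normalized by amplitude:

```python
import numpy as np

def saccade_curvature(xy):
    """Maximum perpendicular deviation of the gaze path from the
    straight line joining saccade onset and offset, as a fraction of
    saccade amplitude. xy is an (N, 2) array for one saccade."""
    xy = np.asarray(xy, dtype=float)
    chord = xy[-1] - xy[0]
    amplitude = np.linalg.norm(chord)
    rel = xy - xy[0]
    # 2-D cross product gives signed perpendicular distance to the chord
    deviation = (rel[:, 0] * chord[1] - rel[:, 1] * chord[0]) / amplitude
    return np.max(np.abs(deviation)) / amplitude
```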
Affiliation(s)
- Francisco M Costela
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA, United States; Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
- Russell L Woods
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA, United States; Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
25
Hessels RS, Niehorster DC, Nyström M, Andersson R, Hooge ITC. Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. R Soc Open Sci 2018; 5:180502. [PMID: 30225041] [PMCID: PMC6124022] [DOI: 10.1098/rsos.180502]
Abstract
Eye movements have been extensively studied in a wide range of research fields. While new methods such as mobile eye tracking and eye tracking in virtual/augmented realities are emerging quickly, the eye-movement terminology has scarcely been revised. We assert that this may cause confusion about two of the main concepts: fixations and saccades. In this study, we assessed the definitions of fixations and saccades held in the eye-movement field, by surveying 124 eye-movement researchers. These eye-movement researchers held a variety of definitions of fixations and saccades, of which the breadth seems even wider than what is reported in the literature. Moreover, these definitions did not seem to be related to researcher background or experience. We urge researchers to make their definitions more explicit by specifying all the relevant components of the eye movement under investigation: (i) the oculomotor component: e.g. whether the eye moves slow or fast; (ii) the functional component: what purposes does the eye movement (or lack thereof) serve; (iii) the coordinate system used: relative to what does the eye move; (iv) the computational definition: how is the event represented in the eye-tracker signal. This should enable eye-movement researchers from different fields to have a discussion without misunderstandings.
Affiliation(s)
- Roy S. Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Diederick C. Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Ignace T. C. Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
26
Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades. J Vis 2017; 17:10. [PMID: 28813566] [PMCID: PMC5852949] [DOI: 10.1167/17.9.10]
Abstract
The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter.
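The conventional filter being generalized is available in SciPy; the baseline stage of this analysis can be sketched as follows (the window length and polynomial order are illustrative, not the paper's settings):

```python
import numpy as np
from scipy.signal import savgol_filter

def sg_velocity(position_deg, fs, window=21, polyorder=3):
    """Smoothed eye velocity (deg/s) via Savitzky-Golay differentiation.
    As the paper shows, this conventional filter underestimates peak
    velocity for small, brief saccades because the local polynomial fit
    oversmooths abrupt waveform changes."""
    return savgol_filter(np.asarray(position_deg, dtype=float),
                         window_length=window, polyorder=polyorder,
                         deriv=1, delta=1.0 / fs)
```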
Affiliation(s)
- Weiwei Dai
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Ivan Selesnick
- Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- John-Ross Rizzo
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Janet Rucker
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
- Todd Hudson
- Department of Neurology, School of Medicine, New York University, New York, NY, USA
27
König SD, Buffalo EA. Modeling Visual Exploration in Rhesus Macaques with Bottom-Up Salience and Oculomotor Statistics. Front Integr Neurosci 2016; 10:23. [PMID: 27445721] [PMCID: PMC4928494] [DOI: 10.3389/fnint.2016.00023]
Abstract
There is a growing interest in studying biological systems in natural settings, in which experimental stimuli are less artificial and behavior is less controlled. In primate vision research, free viewing of complex images has elucidated novel neural responses, and free viewing in humans has helped discover attentional and behavioral impairments in patients with neurological disorders. In order to fully interpret data collected from free viewing of complex scenes, it is critical to better understand what aspects of the stimuli guide viewing behavior. To this end, we have developed a novel viewing behavior model called a Biased Correlated Random Walk (BCRW) to describe free viewing behavior during the exploration of complex scenes in monkeys. The BCRW can predict fixation locations better than bottom-up salience. Additionally, we show that the BCRW can be used to test hypotheses regarding specific attentional mechanisms. For example, we used the BCRW to examine the source of the central bias in fixation locations. Our analyses suggest that the central bias may be caused by a natural tendency to reorient the eyes toward the center of the stimulus, rather than a photographer's bias to center salient items in a scene. Taken together these data suggest that the BCRW can be used to further our understanding of viewing behavior and attention, and could be useful in optimizing stimulus and task design.
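To make the model class concrete, here is a toy biased correlated random walk in the spirit of, though not identical to, the published BCRW: each heading mixes persistence in the previous direction with a bias toward an attractor such as the image center. All names and parameters are illustrative.

```python
import numpy as np

def simulate_bcrw(n_steps, start, attractor, persistence=0.8,
                  step=15.0, noise=0.3, seed=0):
    """Toy biased correlated random walk for fixation sequences: each
    new heading mixes the previous heading (correlation) with the
    direction toward an attractor, e.g. the stimulus center (bias),
    plus angular noise. Mixing unit vectors avoids angle wrap-around."""
    rng = np.random.default_rng(seed)
    pos = np.asarray(start, dtype=float).copy()
    heading = rng.uniform(0.0, 2.0 * np.pi)
    path = [pos.copy()]
    for _ in range(n_steps):
        to_att = np.asarray(attractor, dtype=float) - pos
        to_att /= max(np.linalg.norm(to_att), 1e-9)
        prev = np.array([np.cos(heading), np.sin(heading)])
        mix = persistence * prev + (1.0 - persistence) * to_att
        heading = np.arctan2(mix[1], mix[0]) + noise * rng.standard_normal()
        pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
        path.append(pos.copy())
    return np.array(path)
```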
Affiliation(s)
- Seth D König
- Wallace H. Coulter Department of Biomedical Engineering at the Georgia Institute of Technology and Emory University, Atlanta, GA, USA; Yerkes National Primate Research Center, Atlanta, GA, USA; Graduate Program in Neuroscience, University of Washington, Seattle, WA, USA; Washington National Primate Research Center, Seattle, WA, USA
- Elizabeth A Buffalo
- Yerkes National Primate Research Center, Atlanta, GA, USA; Washington National Primate Research Center, Seattle, WA, USA; Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
28
Singh T, Perry CM, Herter TM. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment. J Neuroeng Rehabil 2016; 13:10. [PMID: 26812907] [PMCID: PMC4728792] [DOI: 10.1186/s12984-015-0107-4]
Abstract
BACKGROUND Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. RESULTS Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. CONCLUSIONS The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
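The central geometric step, intersecting the monocular gaze ray with a stimulus plane whose depth varies across the workspace, can be sketched as a standard ray-plane intersection (coordinate conventions and names are ours, not necessarily the authors'):

```python
import numpy as np

def gaze_on_plane(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect a gaze ray (3-D eye position plus unit gaze direction)
    with the stimulus plane, e.g. a horizontal workspace; returns the
    3-D gaze point, whose depth varies with where the subject looks."""
    eye = np.asarray(eye_pos, dtype=float)
    d = np.asarray(gaze_dir, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    denom = d @ n
    if abs(denom) < 1e-9:
        raise ValueError("gaze ray is parallel to the stimulus plane")
    t = ((np.asarray(plane_point, dtype=float) - eye) @ n) / denom
    return eye + t * d
```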
Affiliation(s)
- Tarkeshwar Singh
- Department of Exercise Science, Arnold School of Public Health, University of South Carolina, 921 Assembly Street, Columbia, SC 29208, USA
- Christopher M Perry
- Department of Exercise Science, Arnold School of Public Health, University of South Carolina, 921 Assembly Street, Columbia, SC 29208, USA
- Troy M Herter
- Department of Exercise Science, Arnold School of Public Health, University of South Carolina, 921 Assembly Street, Columbia, SC 29208, USA