1. Peng X, Zhang Y, Jimenez-Navarro D, Serrano A, Myszkowski K, Sun Q. Measuring and Predicting Multisensory Reaction Latency: A Probabilistic Model for Visual-Auditory Integration. IEEE Trans Vis Comput Graph 2024; 30:7364-7374. PMID: 39250397. DOI: 10.1109/tvcg.2024.3456185.
Abstract
Virtual/augmented reality (VR/AR) devices offer both immersive imagery and sound. With these wide-field cues, we can simultaneously acquire and process visual and auditory signals to quickly identify objects, make decisions, and take action. While vision often takes precedence in perception, our visual sensitivity degrades in the periphery. In contrast, auditory sensitivity can exhibit the opposite trend due to the elevated interaural time difference. What occurs when these senses are integrated simultaneously, as is common in VR applications such as 360° video watching and immersive gaming? We present a computational, probabilistic model to predict VR users' reaction latency to visual-auditory multisensory targets. To this end, we first conducted a psychophysical experiment in VR that measured reaction latency by tracking the onset of eye movements. Experiments with numerical metrics and user studies with naturalistic scenarios showcase the model's accuracy and generalizability. Lastly, we discuss potential applications, such as measuring the sufficiency of target appearance duration in immersive video playback and suggesting optimal spatial layouts for AR interface design.
2. Naffrechoux M, Koun E, Volland F, Farnè A, Roy AC, Pélisson D. Eyes and hand are both reliable at localizing somatosensory targets. Exp Brain Res 2024; 242:2653-2664. PMID: 39340566. DOI: 10.1007/s00221-024-06922-8.
Abstract
Body representations (BR) for action are critical for performing accurate movements. Yet behavioral measures suggest that BR are distorted even in healthy people. However, the upper limb has mostly been used as a probe so far, making it difficult to decide whether BR are truly distorted or whether this depends on the effector used as a readout. Here, we aimed to assess in healthy humans the accuracy of the eye and hand effectors in localizing somatosensory targets, to determine whether they probe BR similarly. Twenty-six participants completed two localization tasks in which they had to localize an unseen target (proprioceptive or tactile) with either their eyes or hand. A linear mixed model revealed, in both tasks, a larger horizontal (but not vertical) localization error for ocular than for manual localization. However, despite better mean accuracy for the hand, manual and ocular localization performance correlated positively with each other in both tasks. Moreover, target position also affected localization performance for both eye and hand responses: accuracy was higher for the more flexed elbow position in the proprioceptive task and for the thumb than for the index finger in the tactile task, confirming previous results of better performance for the thumb. These findings indicate that the hand seems to beat the eyes along the horizontal axis when localizing somatosensory targets, but the localization patterns revealed by the two effectors appear related and are characterized by the same target effect, opening the way to assessing BR with the eyes when upper limb motor control is disturbed.
Affiliation(s)
- Marion Naffrechoux
- Integrative Multisensory Perception Action and Cognition Team of the Lyon Neuroscience Research Center, INSERM U1028 CNRS U5292 University Lyon 1, 16 avenue du Doyen Lépine, Lyon, 69500, France.
- Laboratoire Dynamique Du Langage CNRS, UMR 5596 University Lyon 2, Lyon, France.
- Eric Koun
- Integrative Multisensory Perception Action and Cognition Team of the Lyon Neuroscience Research Center, INSERM U1028 CNRS U5292 University Lyon 1, 16 avenue du Doyen Lépine, Lyon, 69500, France
- Frederic Volland
- Integrative Multisensory Perception Action and Cognition Team of the Lyon Neuroscience Research Center, INSERM U1028 CNRS U5292 University Lyon 1, 16 avenue du Doyen Lépine, Lyon, 69500, France
- Alessandro Farnè
- Integrative Multisensory Perception Action and Cognition Team of the Lyon Neuroscience Research Center, INSERM U1028 CNRS U5292 University Lyon 1, 16 avenue du Doyen Lépine, Lyon, 69500, France
- Alice Catherine Roy
- Laboratoire Dynamique Du Langage CNRS, UMR 5596 University Lyon 2, Lyon, France
- Denis Pélisson
- Integrative Multisensory Perception Action and Cognition Team of the Lyon Neuroscience Research Center, INSERM U1028 CNRS U5292 University Lyon 1, 16 avenue du Doyen Lépine, Lyon, 69500, France
3. Vasudevan V, Murthy A, Padhi R. Modeling kinematic variability reveals displacement and velocity based dual control of saccadic eye movements. Exp Brain Res 2024; 242:2159-2176. PMID: 38980340. DOI: 10.1007/s00221-024-06870-3.
Abstract
Noise is a ubiquitous component of motor systems that leads to behavioral variability of all types of movements. Nonetheless, systems-based models investigating human movements are generally deterministic and explain only the central tendencies like mean trajectories. In this paper, a novel approach to modeling kinematic variability of movements is presented and tested on the oculomotor system. This approach reconciles the two prominent philosophies of saccade control: displacement-based control versus velocity-based control. This was achieved by quantifying the variability in saccadic eye movements and developing a stochastic model of its control. The proposed stochastic dual model generated significantly better fits of inter-trial variances of the saccade trajectories compared to existing models. These results suggest that the saccadic system can flexibly use the information of both desired displacement and velocity for its control. This study presents a potential framework for investigating computational principles of motor control in the presence of noise utilizing stochastic modeling of kinematic variability.
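The abstract's central idea is that trial-to-trial kinematic variability, not just the mean trajectory, carries information about the underlying controller. The toy simulation below is not the authors' stochastic dual (displacement plus velocity) model; it is a minimal stand-in, assuming a simple proportional controller with signal-dependent motor noise and invented parameters, to show how inter-trial mean and variance of simulated saccade trajectories can be computed:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_saccade(target=10.0, dt=0.001, T=0.08, gain=200.0, noise=0.15):
    """One simulated saccade: proportional control toward the target
    displacement, with signal-dependent (multiplicative) motor noise.
    All parameters here are invented for illustration."""
    n = round(T / dt)
    x = np.zeros(n)                           # eye position (deg)
    for t in range(1, n):
        u = gain * (target - x[t - 1])        # command scales with remaining error
        u_noisy = u * (1.0 + noise * rng.standard_normal())
        x[t] = x[t - 1] + u_noisy * dt
    return x

# Repeat the same saccade many times and summarize across trials.
trials = np.array([simulate_saccade() for _ in range(200)])
mean_traj = trials.mean(axis=0)   # central tendency (what deterministic models fit)
var_traj = trials.var(axis=0)     # inter-trial variance (what a stochastic model targets)
```

Because the noise is multiplicative on the command, variance grows during the high-velocity phase and plateaus as the error shrinks, which is the kind of variance profile a stochastic model can be fit against.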
Affiliation(s)
- Varsha Vasudevan
- Department of Bioengineering, Indian Institute of Science, Bangalore, 560012, India.
- Aditya Murthy
- Department of Bioengineering, Indian Institute of Science, Bangalore, 560012, India
- Centre for Neuroscience, Indian Institute of Science, Bangalore, 560012, India
- Radhakant Padhi
- Department of Bioengineering, Indian Institute of Science, Bangalore, 560012, India
- Department of Aerospace Engineering, Indian Institute of Science, Bangalore, 560012, India
4. Goettker A, Locke SM, Gegenfurtner KR, Mamassian P. Sensorimotor confidence for tracking eye movements. J Vis 2024; 24:12. PMID: 39177998. PMCID: PMC11363210. DOI: 10.1167/jov.24.8.12.
Abstract
For successful interactions with the world, we often have to evaluate our own performance. Although eye movements are one of the most frequent actions we perform, we are typically unaware of them. Here, we investigated whether there is any evidence for metacognitive sensitivity for the accuracy of eye movements. Participants tracked a dot cloud as it followed an unpredictable sinusoidal trajectory and then reported if they thought their performance was better or worse than their average tracking performance. Our results show above-chance identification of better tracking behavior across all trials and also for repeated attempts of the same target trajectories. Sensitivity in discriminating performance between better and worse trials was stable across sessions, but judgements within a trial relied more on performance in the final seconds. This behavior matched previous reports when judging the quality of hand movements, although overall metacognitive sensitivity for eye movements was significantly lower.
Affiliation(s)
- Alexander Goettker
- Abteilung Allgemeine Psychologie, Justus-Liebig University Giessen, Giessen, Germany
- Shannon M Locke
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL University, Paris, France
- Karl R Gegenfurtner
- Abteilung Allgemeine Psychologie, Justus-Liebig University Giessen, Giessen, Germany
- Pascal Mamassian
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL University, Paris, France
5. Egger SW, Keemink SW, Goldman MS, Britten KH. Context-dependence of deterministic and nondeterministic contributions to closed-loop steering control. bioRxiv 2024:2024.07.26.605325. PMID: 39131368. PMCID: PMC11312469. DOI: 10.1101/2024.07.26.605325.
Abstract
In natural circumstances, sensory systems operate in a closed loop with motor output, whereby actions shape subsequent sensory experiences. A prime example of this is the sensorimotor processing required to align one's direction of travel, or heading, with one's goal, a behavior we refer to as steering. In steering, motor outputs work to eliminate errors between the direction of heading and the goal, modifying subsequent errors in the process. The closed-loop nature of the behavior makes it challenging to determine how deterministic and nondeterministic processes contribute to behavior. We overcome this by applying a nonparametric, linear kernel-based analysis to behavioral data of monkeys steering through a virtual environment in two experimental contexts. In a given context, the results were consistent with previous work that described the transformation as a second-order linear system. Classically, the parameters of such second-order models are associated with physical properties of the limb such as viscosity and stiffness that are commonly assumed to be approximately constant. By contrast, we found that the fit kernels differed strongly across tasks in these and other parameters, suggesting context-dependent changes in neural and biomechanical processes. We additionally fit residuals to a simple noise model and found that the form of the noise was highly conserved across both contexts and animals. Strikingly, the fitted noise also closely matched that found previously in a human steering task. Altogether, this work presents a kernel-based analysis that characterizes the context-dependence of deterministic and non-deterministic components of a closed-loop sensorimotor task.
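The nonparametric, linear kernel-based analysis described in this abstract can be sketched in miniature: given an input signal (heading error) and an output (steering), a linear response kernel can be recovered by least squares on a lagged design matrix. The example below uses synthetic open-loop data with an invented "true" kernel and a small ridge penalty; it is not the paper's task, data, or exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic steering data: steering output is a lagged linear function of
# heading error plus noise. The "true" kernel here is invented.
n, L = 5000, 40                                    # samples, kernel lags
lags = np.arange(L)
true_k = np.exp(-lags / 8.0) * np.sin(lags / 4.0)  # damped impulse response
err = rng.standard_normal(n)                       # heading-error input
steer = np.convolve(err, true_k)[:n] + 0.1 * rng.standard_normal(n)

# Nonparametric kernel estimate: lagged design matrix + ridge least squares.
X = np.column_stack(
    [np.concatenate([np.zeros(j), err[:n - j]]) for j in range(L)]
)
lam = 1e-3                                         # small ridge penalty
k_hat = np.linalg.solve(X.T @ X + lam * np.eye(L), X.T @ steer)
```

With a white-noise input and enough samples, `k_hat` closely recovers `true_k`; in a real closed-loop setting the input is correlated with past outputs, which is exactly the complication the paper's analysis has to handle.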
Affiliation(s)
- Seth W. Egger
- Center for Neuroscience, University of California, Davis
- Sander W. Keemink
- Department of Neurobiology, Physiology and Behavior, University of California, Davis
- Mark S. Goldman
- Center for Neuroscience, University of California, Davis
- Department of Neurobiology, Physiology and Behavior, University of California, Davis
- Department of Ophthalmology and Vision Science, University of California, Davis
- Kenneth H. Britten
- Center for Neuroscience, University of California, Davis
- Department of Neurobiology, Physiology and Behavior, University of California, Davis
6. Rahnuma T, Jothiraj SN, Kuvar V, Faber M, Knight RT, Kam JWY. Gaze-Based Detection of Thoughts across Naturalistic Tasks Using a PSO-Optimized Random Forest Algorithm. Bioengineering (Basel) 2024; 11:760. PMID: 39199718. PMCID: PMC11351278. DOI: 10.3390/bioengineering11080760.
Abstract
One key aspect of the human experience is our ongoing stream of thoughts. These thoughts can be broadly categorized into various dimensions, which are associated with different impacts on mood, well-being, and productivity. While the past literature has often identified eye movements associated with a specific thought dimension (task-relatedness) during experimental tasks, few studies have determined if these various thought dimensions can be classified by oculomotor activity during naturalistic tasks. Employing thought sampling, eye tracking, and machine learning, we assessed the classification of nine thought dimensions (task-relatedness, freely moving, stickiness, goal-directedness, internal-external orientation, self-orientation, others orientation, visual modality, and auditory modality) across seven multi-day recordings of seven participants during self-selected computer tasks. Our analyses were based on a total of 1715 thought probes across 63 h of recordings. Automated binary-class classification of the thought dimensions was based on statistical features extracted from eye movement measures, including fixation and saccades. These features all served as input into a random forest (RF) classifier, which was then improved with particle swarm optimization (PSO)-based selection of the best subset of features for classifier performance. The mean Matthews correlation coefficient (MCC) values from the PSO-based RF classifier across the thought dimensions ranged from 0.25 to 0.54, indicating above-chance level performance in all nine thought dimensions across participants and improved performance compared to the RF classifier without feature selection. Our findings highlight the potential of machine learning approaches combined with eye movement measures for the real-time prediction of naturalistic ongoing thoughts, particularly in ecologically valid contexts.
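The pipeline the abstract describes, binary classification scored with the Matthews correlation coefficient (MCC) and a particle-swarm search over feature subsets, can be sketched as follows. This is a hedged illustration, not the authors' implementation: the data are synthetic, the classifier is a nearest-centroid stand-in for their random forest (to keep the example dependency-free), fitness is scored on the training data rather than cross-validated, and all PSO constants are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels in {0, 1}."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Toy "gaze feature" data: 3 informative features, 7 pure noise.
n, d = 400, 10
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, d))
X[:, :3] += y[:, None] * 1.5                 # class signal in features 0-2 only

def fitness(mask):
    """MCC of a nearest-centroid classifier restricted to `mask` features."""
    if not mask.any():
        return -1.0
    Z = X[:, mask]
    c0, c1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
    return mcc(y, pred)

# Binary PSO: particles live in a continuous space; a sigmoid of the
# position gives the probability of including each feature.
P, iters = 20, 40
pos = rng.standard_normal((P, d))
vel = np.zeros((P, d))
pbest, pbest_fit = pos.copy(), np.full(P, -np.inf)
gbest, gbest_fit, gbest_mask = pos[0].copy(), -np.inf, np.zeros(d, bool)
for _ in range(iters):
    prob = 1.0 / (1.0 + np.exp(-np.clip(pos, -10, 10)))
    masks = rng.random((P, d)) < prob
    fits = np.array([fitness(m) for m in masks])
    better = fits > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fits[better]
    if fits.max() > gbest_fit:
        i = int(fits.argmax())
        gbest_fit, gbest, gbest_mask = fits[i], pos[i].copy(), masks[i].copy()
    vel = (0.7 * vel
           + 1.5 * rng.random((P, d)) * (pbest - pos)
           + 1.5 * rng.random((P, d)) * (gbest - pos))
    pos = pos + vel
```

After the search, `gbest_mask` is the best feature subset found and `gbest_fit` its MCC, mirroring how PSO-based feature selection improved the paper's classifier over using all features.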
Affiliation(s)
- Tarannum Rahnuma
- Department of Psychology, University of Calgary, Calgary, AB T2N 1N4, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB T2N 4N1, Canada
- Sairamya Nanjappan Jothiraj
- Department of Psychology, University of Calgary, Calgary, AB T2N 1N4, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB T2N 4N1, Canada
- Vishal Kuvar
- Department of Educational Psychology, University of Minnesota Twin Cities, Minneapolis, MN 55455, USA
- Myrthe Faber
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, 5037 AB Tilburg, The Netherlands
- Donders Centre for Cognitive Neuroimaging, Radboud University, 6525 EN Nijmegen, The Netherlands
- Robert T. Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, USA
- Department of Psychology, University of California, Berkeley, CA 94704, USA
- Julia W. Y. Kam
- Department of Psychology, University of Calgary, Calgary, AB T2N 1N4, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB T2N 4N1, Canada
7. Nakazato R, Aoyama C, Komiyama T, Himo R, Shimegi S. Table tennis players use superior saccadic eye movements to track moving visual targets. Front Sports Act Living 2024; 6:1289800. PMID: 38406764. PMCID: PMC10884183. DOI: 10.3389/fspor.2024.1289800.
Abstract
Introduction: Table tennis players perform countless visually guided visuomotor responses. Frequent, long-term exposure of the visual system to motion stimulation is known to improve perceptual motion detection and discrimination as a stimulus-specific learning effect, and so may also improve visuo-oculomotor performance. We hypothesized, and set out to verify, that table tennis players show high spatial accuracy of saccades to moving targets. Methods: University table tennis players (TT group) and control participants with no striking-sports experience (Control group) wore a virtual reality headset and performed two ball-tracking tasks, tracking moving and stationary targets in virtual reality. The ball moved from a predetermined position on the opponent's court toward the participant's court. A total of 54 conditions were examined for the moving targets: combinations of three ball trajectories (familiar parabolic, unfamiliar descent, and unfamiliar horizontal), three courses (left, right, and center), and six speeds. Results and discussion: All participants primarily used catch-up saccades to track the moving ball. The TT group had a lower mean and lower inter-trial variability in saccade endpoint error than the Control group, showing higher spatial accuracy and precision, respectively. This suggests an improved ability to analyze the direction and speed of the ball's movement and to predict its trajectory and future destination. The TT group's superior spatial accuracy was seen in both the right and left courses for all trajectories, but its superior precision was seen for the familiar parabolic trajectory only. This trajectory dependence implies that the motion vision system is trained by the visual stimuli frequently encountered in table tennis. There was no difference between the two groups in the onset time or spatial accuracy of saccades to stationary targets appearing at various positions on the ping-pong table. Conclusion: Table tennis players achieve high saccadic performance (spatial accuracy and precision) when tracking moving targets, as a result of motion vision ability improved through extensive visual and visuo-ocular experience in play.
Affiliation(s)
- Riku Nakazato
- Graduate School of Frontier Biosciences, Osaka University, Toyonaka, Osaka, Japan
- Chisa Aoyama
- Graduate School of Medicine, Osaka University, Toyonaka, Osaka, Japan
- Takaaki Komiyama
- Center for Education in Liberal Arts and Sciences, Osaka University, Toyonaka, Osaka, Japan
- Ryoto Himo
- Faculty of Science, Osaka University, Toyonaka, Osaka, Japan
- Satoshi Shimegi
- Graduate School of Frontier Biosciences, Osaka University, Toyonaka, Osaka, Japan
- Center for Education in Liberal Arts and Sciences, Osaka University, Toyonaka, Osaka, Japan
8. Wyche NJ, Edwards M, Goodhew SC. An updating-based working memory load alters the dynamics of eye movements but not their spatial extent during free viewing of natural scenes. Atten Percept Psychophys 2024; 86:503-524. PMID: 37468789. PMCID: PMC10805812. DOI: 10.3758/s13414-023-02741-1.
Abstract
The relationship between spatial deployments of attention and working memory load is an important topic of study, with clear implications for real-world tasks such as driving. Previous research has generally shown that attentional breadth broadens under higher load, while exploratory eye-movement behaviour also appears to change with increasing load. However, relatively little research has compared the effects of working memory load on different kinds of spatial deployment, especially in conditions that require updating of the contents of working memory rather than simple retrieval. The present study undertook such a comparison by measuring participants' attentional breadth (via an undirected Navon task) and their exploratory eye-movement behaviour (a free-viewing recall task) under low and high updating working memory loads. While spatial aspects of task performance (attentional breadth, and peripheral extent of image exploration in the free-viewing task) were unaffected by the load manipulation, the exploratory dynamics of the free-viewing task (including fixation durations and scan-path lengths) changed under increasing load. These findings suggest that temporal dynamics, rather than the spatial extent of exploration, are the primary mechanism affected by working memory load during the spatial deployment of attention. Further, individual differences in exploratory behaviour were observed on the free-viewing task: all metrics were highly correlated across working memory load blocks. These findings suggest a need for further investigation of individual differences in eye-movement behaviour; potential factors associated with these individual differences, including working memory capacity and persistence versus flexibility orientations, are discussed.
Affiliation(s)
- Nicholas J Wyche
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT, 2601, Australia.
- Mark Edwards
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT, 2601, Australia
- Stephanie C Goodhew
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT, 2601, Australia
9. Goliskina V, Ceple I, Kassaliete E, Serpa E, Truksa R, Svede A, Krauze L, Fomins S, Ikaunieks G, Krumina G. The Effect of Stimulus Contrast and Spatial Position on Saccadic Eye Movement Parameters. Vision (Basel) 2023; 7:68. PMID: 37873896. PMCID: PMC10594497. DOI: 10.3390/vision7040068.
Abstract
(1) Background: Saccadic eye movements are rapid eye movements that position the object image on the central retina, ensuring high-resolution data sampling across the visual field. Although saccadic eye movements have been studied extensively, the different experimental settings applied across studies leave open the question of whether and how stimulus parameters affect saccadic performance. The current study aims to explore the effect of stimulus contrast and spatial position on saccadic eye movement latency, peak velocity, and accuracy measurements. (2) Methods: Saccadic eye movement targets of different contrast levels were presented at four different spatial positions. Eye movements were recorded with a Tobii Pro Fusion video-oculograph (250 Hz). (3) Results: The results demonstrate a significant effect of stimulus spatial position on latency and peak velocity measurements at a medium grey background, 30 cd/m2 (negative and positive stimulus polarity), a light grey background, 90 cd/m2 (negative polarity), and a black background, 3 cd/m2 (positive polarity). A significant effect of stimulus spatial position on accuracy was observed when the stimuli were presented on a medium grey background (negative polarity) and on a black background. No significant effect of stimulus contrast on peak velocity was observed under any condition. A significant effect of stimulus contrast on latency and accuracy was observed only on a light grey background. (4) Conclusions: The best saccadic performance (lowest latency, highest peak velocity, and highest accuracy) is observed when saccades are directed to the right or left of the central fixation point. Furthermore, when presenting stimuli on a light grey background, very low-contrast stimuli should be used with caution.
Affiliation(s)
- Viktorija Goliskina
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, LV-1586 Riga, Latvia
- Ilze Ceple
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, LV-1586 Riga, Latvia
- Evita Kassaliete
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, LV-1586 Riga, Latvia
- Evita Serpa
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, LV-1586 Riga, Latvia
- Renars Truksa
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, LV-1586 Riga, Latvia
- Aiga Svede
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, LV-1586 Riga, Latvia
- Linda Krauze
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, LV-1586 Riga, Latvia
- Sergejs Fomins
- Institute of Solid State Physics, University of Latvia, LV-1063 Riga, Latvia
- Gatis Ikaunieks
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, LV-1586 Riga, Latvia
- Gunta Krumina
- Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, LV-1586 Riga, Latvia
10. Han NX, Eckstein MP. Inferential eye movement control while following dynamic gaze. eLife 2023; 12:e83187. PMID: 37615158. PMCID: PMC10473837. DOI: 10.7554/elife.83187.
Abstract
Attending to other people's gaze is evolutionarily important for making inferences about intentions and actions. Gaze influences covert attention and triggers eye movements. However, we know little about how the brain controls the fine-grain dynamics of eye movements during gaze following. Observers followed people's gaze shifts in videos during search, and we related the observers' eye-movement dynamics to the time course of gazer head movements extracted by a deep neural network. We show that observers' brains use information in the visual periphery to execute predictive saccades that anticipate the information in the gazer's head direction by 190-350 ms. The brain simultaneously monitors moment-to-moment changes in the gazer's head velocity to dynamically alter eye movements and re-fixate the gazer (reverse saccades) when the head accelerates before the initiation of the first forward gaze-following saccade. Using saccade-contingent manipulations of the videos, we show experimentally that the reverse saccades are planned concurrently with the first forward gaze-following saccade and play a functional role in reducing subsequent errors in fixating the gaze goal. Together, our findings characterize the inferential and functional nature of the fine-grain eye-movement dynamics of social attention.
Affiliation(s)
- Nicole Xiao Han
- Department of Psychological and Brain Sciences, Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, United States
- Miguel Patricio Eckstein
- Department of Psychological and Brain Sciences, Department of Electrical and Computer Engineering, Department of Computer Science, Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, United States
11. Barne LC, Giordano J, Collins T, Desantis A. Decoding Trans-Saccadic Prediction Error. J Neurosci 2023; 43:1933-1939. PMID: 36759191. PMCID: PMC10027026. DOI: 10.1523/jneurosci.0563-22.2022.
Abstract
We are constantly sampling our environment by moving our eyes, but our subjective experience of the world is stable and constant. Stimulus displacement during or shortly after a saccade often goes unnoticed, a phenomenon called the saccadic suppression of displacement. Although we fail to notice such displacements, our oculomotor system computes the prediction errors and adequately adjusts the gaze and future saccadic execution, a phenomenon known as saccadic adaptation. In the present study, we aimed to find a brain signature of the trans-saccadic prediction error that informs the motor system but not explicit perception. We asked participants (either sex) to report whether a visual target was displaced during a saccade while recording electroencephalography (EEG). Using multivariate pattern analysis, we were able to differentiate displacements from no displacements, even when participants failed to report the displacement. In other words, we found that trans-saccadic prediction error is represented in the EEG signal 100 ms after the displacement presentation, mainly in occipital and parieto-occipital channels, even in the absence of explicit perception of the displacement.
Significance statement: Stability in vision occurs even while performing saccades. One suggested mechanism for this counterintuitive visual phenomenon is that external displacement is suppressed during the retinal remapping caused by a saccade. Here, we shed light on the mechanisms of trans-saccadic stability by showing that displacement information is not entirely suppressed and is specifically present in the early stages of visual processing. Such a signal is relevant and computed for oculomotor adjustment despite being neglected for perception.
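The multivariate pattern analysis used in this study can be illustrated with a toy time-resolved decoder: classify each trial at each timepoint from the pattern across channels, and look for above-chance accuracy emerging after the event. The sketch below uses synthetic epochs and a cross-validated nearest-centroid classifier; the data dimensions, effect size, and onset are all invented, and the authors' actual decoder may differ:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic EEG-like epochs: trials x channels x timepoints. A weak
# "displacement" signature appears in 8 channels from timepoint 25 onward.
n_trials, n_ch, n_t = 200, 32, 60
y = rng.integers(0, 2, n_trials)              # 1 = displaced, 0 = not
X = rng.standard_normal((n_trials, n_ch, n_t))
X[y == 1, :8, 25:] += 0.5

def decode_timepoint(Xt, y, folds=5):
    """Cross-validated nearest-centroid decoding accuracy at one timepoint."""
    fold_id = np.arange(len(y)) % folds
    accs = []
    for f in range(folds):
        tr, te = fold_id != f, fold_id == f
        c0 = Xt[tr & (y == 0)].mean(axis=0)
        c1 = Xt[tr & (y == 1)].mean(axis=0)
        pred = (np.linalg.norm(Xt[te] - c1, axis=1)
                < np.linalg.norm(Xt[te] - c0, axis=1)).astype(int)
        accs.append(np.mean(pred == y[te]))
    return float(np.mean(accs))

# Time-resolved decoding curve: chance before the effect, above chance after.
accuracy = np.array([decode_timepoint(X[:, :, t], y) for t in range(n_t)])
```

The decoding curve hovers around 0.5 before the simulated effect and rises above chance afterward, the same qualitative signature the study reports around 100 ms after displacement.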
Affiliation(s)
- Louise Catheryne Barne
- Département Traitement de l'Information et Systèmes, Office National d'Études et de Recherches Aérospatiales, Salon-de-Provence 13661, France
- Institut de Neurosciences de la Timone (Unité Mixte de Recherche 7289), Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille 13005, France
- Jonathan Giordano
- Integrative Neuroscience and Cognition Center (Unité Mixte de Recherche 8002), Centre National de la Recherche Scientifique, Université Paris Cité, Paris 75006, France
- Thérèse Collins
- Integrative Neuroscience and Cognition Center (Unité Mixte de Recherche 8002), Centre National de la Recherche Scientifique, Université Paris Cité, Paris 75006, France
- Andrea Desantis
- Département Traitement de l'Information et Systèmes, Office National d'Études et de Recherches Aérospatiales, Salon-de-Provence 13661, France
- Integrative Neuroscience and Cognition Center (Unité Mixte de Recherche 8002), Centre National de la Recherche Scientifique, Université Paris Cité, Paris 75006, France
- Institut de Neurosciences de la Timone (Unité Mixte de Recherche 7289), Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille 13005, France
12
AttentionMNIST: a mouse-click attention tracking dataset for handwritten numeral and alphabet recognition. Sci Rep 2023; 13:3305. [PMID: 36849543 PMCID: PMC9971057 DOI: 10.1038/s41598-023-29880-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Accepted: 02/11/2023] [Indexed: 03/01/2023] Open
Abstract
Multiple attention-based models that recognize objects via a sequence of glimpses have reported results on handwritten numeral recognition. However, no attention-tracking data for handwritten numeral or alphabet recognition is available. Availability of such data would allow attention-based models to be evaluated in comparison to human performance. We collect mouse-click attention tracking data from 382 participants trying to recognize handwritten numerals and alphabets (upper and lowercase) from images via sequential sampling. Images from benchmark datasets are presented as stimuli. The collected dataset, called AttentionMNIST, consists of a sequence of sample (mouse click) locations, predicted class label(s) at each sampling, and the duration of each sampling. On average, our participants observe only 12.8% of an image for recognition. We propose a baseline model to predict the location and the class(es) a participant will select at the next sampling. When exposed to the same stimuli and experimental conditions as our participants, a highly cited attention-based reinforcement model falls short of human efficiency.
13
Barack DL, Bakkour A, Shohamy D, Salzman CD. Visuospatial information foraging describes search behavior in learning latent environmental features. Sci Rep 2023; 13:1126. [PMID: 36670132 PMCID: PMC9860038 DOI: 10.1038/s41598-023-27662-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Accepted: 01/05/2023] [Indexed: 01/22/2023] Open
Abstract
In the real world, making sequences of decisions to achieve goals often depends upon the ability to learn aspects of the environment that are not directly perceptible. Learning these so-called latent features requires seeking information about them. Prior efforts to study latent feature learning often used single decisions, used few features, and failed to distinguish between reward-seeking and information-seeking. To overcome this, we designed a task in which humans and monkeys made a series of choices to search for shapes hidden on a grid. On our task, the effects of reward and information outcomes from uncovering parts of shapes could be disentangled. Members of both species adeptly learned the shapes and preferred to select tiles expected to be informative earlier in trials than previously rewarding ones, searching a part of the grid until their outcomes dropped below the average information outcome, a pattern consistent with foraging behavior. In addition, how quickly humans learned the shapes was predicted by how well their choice sequences matched the foraging pattern, revealing an unexpected connection between foraging and learning. This adaptive search for information may underlie the ability of humans and monkeys to learn latent features that support goal-directed behavior in the long run.
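The leave-the-patch rule described in this abstract (keep sampling a region until its outcome falls below the average information outcome) is the classic foraging-style threshold policy. A minimal sketch of that rule, where the function name and all numbers are illustrative rather than taken from the study:

```python
# Hypothetical sketch of the foraging-style leave rule described in the
# abstract: keep sampling a region while its information outcome stays at or
# above the long-run average outcome, and leave once it drops below.

def search_patch(outcomes, global_avg):
    """Return the number of samples taken before leaving the patch.

    outcomes   -- successive information outcomes observed in this region
    global_avg -- average information outcome across the whole grid so far
    """
    taken = 0
    for o in outcomes:
        taken += 1
        if o < global_avg:  # outcome fell below the average: leave the patch
            break
    return taken

# Diminishing returns within a patch: early uncoveries are informative,
# later ones less so.
patch = [0.9, 0.7, 0.4, 0.2, 0.1]
print(search_patch(patch, global_avg=0.5))  # → 3
```

Under this rule, time in a patch grows with how informative the patch is relative to the rest of the grid, which is the qualitative pattern the authors report.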
Affiliation(s)
- David L Barack
- Department of Neuroscience, Columbia University, New York, USA
- Mortimer B. Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, USA
- Akram Bakkour
- Department of Psychology, University of Chicago, Chicago, USA
- Daphna Shohamy
- Mortimer B. Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, USA
- Department of Psychology, Columbia University, New York, USA
- Kavli Institute for Brain Sciences, Columbia University, New York, USA
- C Daniel Salzman
- Department of Neuroscience, Columbia University, New York, USA
- Mortimer B. Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, USA
- Kavli Institute for Brain Sciences, Columbia University, New York, USA
- Department of Psychiatry, Columbia University, New York, USA
- New York State Psychiatric Institute, New York, USA
14
Gharib A, Thompson BL. Analysis and novel methods for capture of normative eye-tracking data in 2.5-month old infants. PLoS One 2022; 17:e0278423. [PMID: 36490239 PMCID: PMC9733894 DOI: 10.1371/journal.pone.0278423] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 11/15/2022] [Indexed: 12/13/2022] Open
Abstract
Development of attention systems is essential for both cognitive and social behavior maturation. Visual behavior has been used to assess development of these attention systems. Yet, given its importance, there is a notable lack of literature detailing successful methods and procedures for using eye-tracking in early infancy to assess oculomotor and attention dynamics. Here we show that eye-tracking technology can be used to automatically record and assess visual behavior in infants as young as 2.5 months, and present normative data describing fixation and saccade behavior at this age. Features of oculomotor dynamics were analyzed from 2.5-month old infants who viewed videos depicting live action, cartoons, geometric shapes, social and non-social scenes. Of the 54 infants enrolled, 50 infants successfully completed the eye-tracking task and high-quality data was collected for 32 of those infants. We demonstrate that modifications specifically tailored for the infant population allowed for consistent tracking of pupil and corneal reflection and minimal data loss. Additionally, we found consistent fixation and saccade behaviors across the entire six-minute duration of the videos, indicating that this is a feasible task for 2.5-month old infants. Moreover, normative oculomotor metrics for a free-viewing task in 2.5-month old infants are documented for the first time as a result of this high-quality data collection.
Affiliation(s)
- Alma Gharib
- Department of Computer Science, University of Southern California, Los Angeles, California, United States of America
- Program in Developmental Neuroscience and Neurogenetics, The Saban Research Institute and Department of Pediatrics at Children’s Hospital of Los Angeles, Keck School of Medicine, University of Southern California, Los Angeles, California, United States of America
- Barbara L. Thompson
- Department of Pediatrics and Human Development, College of Human Medicine, Michigan State University, Grand Rapids, Michigan, United States of America
15
Malladi SPK, Mukherjee J, Larabi MC, Chaudhury S. EG-SNIK: A Free Viewing Egocentric Gaze Dataset and Its Applications. IEEE ACCESS 2022; 10:129626-129641. [DOI: 10.1109/access.2022.3228484] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/19/2023]
Affiliation(s)
- Sai Phani Kumar Malladi
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, Kharagpur, India
- Jayanta Mukherjee
- Department of Computer Science and Engineering, IIT Kharagpur, Kharagpur, India
- Santanu Chaudhury
- Department of Computer Science and Engineering, IIT Jodhpur, Jodhpur, India
16
Perkovich E, Sun L, Mire S, Laakman A, Sakhuja U, Yoshida H. What children with and without ASD see: Similar visual experiences with different pathways through parental attention strategies. AUTISM & DEVELOPMENTAL LANGUAGE IMPAIRMENTS 2022; 7:23969415221137293. [PMID: 36518657 PMCID: PMC9742584 DOI: 10.1177/23969415221137293] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Background and aims Although young children's gaze behaviors in experimental task contexts have been shown to be potential biobehavioral markers relevant to autism spectrum disorder (ASD), we know little about their everyday gaze behaviors. The present study aims (1) to document early gaze behaviors that occur within a live, social interactive context among children with and without ASD and their parents, and (2) to examine how children's and parents' gaze behaviors are related for ASD and typically developing (TD) groups. A head-mounted eye-tracking system was used to record the frequency and duration of a set of gaze behaviors (such as sustained attention [SA] and joint attention [JA]) that are relevant to early cognitive and language development. Methods Twenty-six parent-child dyads (ASD group = 13, TD group = 13) participated. Children were between the ages of 3 and 8 years old. We placed head-mounted eye trackers on parents and children to record their parent- and child-centered views, and we also recorded their interactive parent-child object play scene from both a wall- and ceiling-mounted camera. We then annotated the frequency and duration of gaze behaviors (saccades, fixation, SA, and JA) for different regions of interest (object, face, and hands), and attention shifting. Independent group t-tests and ANOVAs were used to observe group comparisons, and linear regression was used to test the predictiveness of parent gaze behaviors for JA. Results The present study found no differences in visual experiences between children with and without ASD. Interestingly, however, significant group differences were found for parent gaze behaviors. Compared to parents of ASD children, parents of TD children focused on objects and shifted their attention between objects and their children's faces more. In contrast, parents of ASD children were more likely to shift their attention between their own hands and their children. 
JA experiences were also predicted differently depending on the group: among parents of TD children, attention to objects predicted JA, but among parents of ASD children, attention to their children predicted JA. Conclusion Although no differences were found between the gaze behaviors of autistic and TD children in this study, there were significant group differences in parents' looking behaviors. This suggests potentially differential pathways for the scaffolding effect of parental gaze for ASD children compared with TD children. Implications The present study revealed the impact of everyday, live social interactive contexts on early visual experiences, and points to potentially different pathways by which parental looking behaviors guide the looking behaviors of children with and without ASD. Identifying parental social input relevant to early attention development (e.g., JA) among autistic children has implications for mechanisms that could support socially mediated attention behaviors documented to facilitate early cognitive and language development, and for the development of parent-mediated interventions for young children with or at risk for ASD. Note: This paper uses a combination of person-first and identity-first language, an intentional decision aligning with comments put forth by Vivanti (Vivanti, 2020), recognizing the complexities of known and unknown preferences of those in the larger autism community.
Affiliation(s)
- Lichao Sun
- Department of Psychology, University of Houston, Houston, TX, USA
- Sarah Mire
- Educational Psychology Department, Baylor University, Waco, TX, USA
- Anna Laakman
- Department of Psychological Health and Learning Sciences, University of Houston, Houston, TX, USA
- Urvi Sakhuja
- Department of Psychology, University of Houston, Houston, TX, USA
- Hanako Yoshida
- Department of Psychology, University of Houston, Houston, TX, USA
17
Han NX, Chakravarthula PN, Eckstein MP. Peripheral facial features guiding eye movements and reducing fixational variability. J Vis 2021; 21:7. [PMID: 34347018 PMCID: PMC8340657 DOI: 10.1167/jov.21.8.7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Face processing is a fast and efficient process due to its evolutionary and social importance. A majority of people direct their first eye movement to a featureless point just below the eyes that maximizes accuracy in recognizing a person's identity and gender. Yet, the exact properties or features of the face that guide the first eye movements and reduce fixational variability are unknown. Here, we manipulated the presence of the facial features and the spatial configuration of features to investigate their effect on the location and variability of first and second fixations to peripherally presented faces. Our results showed that observers can utilize the face outline, individual facial features, and feature spatial configuration to guide the first eye movements to their preferred point of fixation. The eyes have a preferential role in guiding the first eye movements and reducing fixation variability. Eliminating the eyes or altering their position had the greatest influence on the location and variability of fixations and resulted in the largest detriment to face identification performance. The other internal features (nose and mouth) also contribute to reducing fixation variability. A subsequent experiment measuring detection of single features showed that the eyes have the highest detectability (relative to other features) in the visual periphery providing a strong sensory signal to guide the oculomotor system. Together, the results suggest a flexible multiple-cue approach that might be a robust solution to cope with how the varying eccentricities in the real world influence the ability to resolve individual feature properties and the preferential role of the eyes.
Affiliation(s)
- Nicole X Han
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, USA
- Puneeth N Chakravarthula
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, USA
- Miguel P Eckstein
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, USA
18
A Stochastic Optimal Control Model with Internal Feedback and Velocity Tracking for Saccadic Eye Movements. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102679] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
19
Bansal S, Joiner WM. Transsaccadic visual perception of foveal compared to peripheral environmental changes. J Vis 2021; 21:12. [PMID: 34160578 PMCID: PMC8237106 DOI: 10.1167/jov.21.6.12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The maintenance of stable visual perception across eye movements is hypothesized to be aided by extraretinal information (e.g., corollary discharge [CD]). Previous studies have focused on the benefits of this information for perception at the fovea. However, there is little information on the extent to which CD benefits peripheral visual perception. Here we systematically examined the extent to which CD supports the ability to perceive transsaccadic changes at the fovea compared to the periphery. Human subjects made saccades to targets positioned at different amplitudes (4° or 8°) and directions (rightward or upward). On each trial there was a reference point located either at (fovea) or 4° away from (periphery) the target. During the saccade the target and reference disappeared and, after a blank period, the reference reappeared at a shifted location. Subjects reported the perceived shift direction, and we determined the perceptual threshold for detecting the shift and the estimate of the reference location. We also simulated detection and localization under the assumption that subjects relied solely on the visual error of the shifted reference experienced after the saccade. The comparison of the reference location under these two conditions showed that overall the perceptual estimate was approximately 53% more accurate and 30% less variable than estimates based solely on visual information at the fovea. These values for peripheral shifts were consistently lower than those at the fovea: 34% more accurate and 9% less variable. Overall, the results suggest that CD information does support stable visual perception in the periphery, but is consistently less beneficial compared to the fovea.
Affiliation(s)
- Sonia Bansal
- Department of Neuroscience, George Mason University, Fairfax, VA, USA
- Maryland Psychiatric Research Center, Department of Psychiatry, University of Maryland School of Medicine, Baltimore, MD, USA
- Wilsaan M Joiner
- Department of Bioengineering, George Mason University, Fairfax, VA, USA
- Department of Neurobiology, Physiology and Behavior, University of California Davis, Davis, CA, USA
- Department of Neurology, University of California Davis, Davis, CA, USA
20
TDCS effects on pointing task learning in young and old adults. Sci Rep 2021; 11:3421. [PMID: 33564052 PMCID: PMC7873227 DOI: 10.1038/s41598-021-82275-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Accepted: 01/14/2021] [Indexed: 01/19/2023] Open
Abstract
Skill increase in motor performance can be measured explicitly via task success, but also via more implicit measures of movement kinematics. Even though these measures are often related, there is evidence that they represent distinct concepts of learning. In the present study, the effect of multiple tDCS sessions on both explicit and implicit measures of learning was investigated in a pointing task in 30 young adults (YA; 27.07 ± 3.8 years) and 30 older adults (OA; 67.97 ± 5.3 years). We hypothesized that OA would show slower explicit skill learning, indicated by higher movement times/lower accuracy, and slower implicit learning, indicated by higher spatial variability, but would profit more from anodal tDCS compared with YA. We found age-related differences in movement time but not in accuracy or spatial variability. tDCS did not facilitate learning in either explicit or implicit parameters. However, contrary to our hypotheses, we found tDCS-associated gains in accuracy only in YA, and no tDCS effect on spatial variability. Taken together, our data show limited overlap of tDCS effects across explicit and implicit skill parameters. Furthermore, they support the assumption that tDCS is capable of producing a performance-enhancing brain state, at least for explicit skill acquisition.
21
Rigby D, Vass C, Payne K. Opening the 'Black Box': An Overview of Methods to Investigate the Decision-Making Process in Choice-Based Surveys. PATIENT-PATIENT CENTERED OUTCOMES RESEARCH 2021; 13:31-41. [PMID: 31486021 DOI: 10.1007/s40271-019-00385-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
The desire to understand the preferences of patients, healthcare professionals and the public continues to grow. Health valuation studies, often in the form of discrete choice experiments (a choice-based survey approach), proliferate as a result. A variety of pre-choice process-analysis methods have been developed to investigate how and why people make their decisions in such experiments and surveys, examining how people acquire and process information and make choices. These techniques offer the potential to test and improve theories of choice and/or associated empirical models. This paper provides an overview of such methods, with a focus on their use in stated choice-based healthcare studies. The methods reviewed are eye tracking, mouse tracing, brain imaging, deliberation time analysis and think aloud. For each method, we summarise the rationale, implementation, type of results generated and associated challenges, along with a discussion of possible future developments.
Affiliation(s)
- Dan Rigby
- Economics, School of Social Sciences, The University of Manchester, Oxford Road, Manchester, M13 9PL, UK
- Caroline Vass
- Division of Population Health, Health Services Research and Primary Care, Manchester Centre for Health Economics, The University of Manchester, Oxford Road, Manchester, M13 9PL, UK
- Katherine Payne
- Division of Population Health, Health Services Research and Primary Care, Manchester Centre for Health Economics, The University of Manchester, Oxford Road, Manchester, M13 9PL, UK
22
Coutinho JD, Lefèvre P, Blohm G. Confidence in predicted position error explains saccadic decisions during pursuit. J Neurophysiol 2020; 125:748-767. [PMID: 33356899 DOI: 10.1152/jn.00492.2019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
A fundamental problem in motor control is the coordination of complementary movement types to achieve a common goal. As a common example, humans view moving objects through coordinated pursuit and saccadic eye movements. Pursuit is initiated and continuously controlled by retinal image velocity. During pursuit, eye position may lag behind the target. This can be compensated by the discrete execution of a catch-up saccade. The decision to trigger a saccade is influenced by both position and velocity errors, and the timing of saccades can be highly variable. The observed distributions of saccade frequency and trigger time remain poorly understood, and this decision process remains imprecisely quantified. Here, we propose a predictive, probabilistic model explaining the decision to trigger saccades during pursuit to foveate moving targets. In this model, expected position error and its associated uncertainty are predicted through Bayesian inference across noisy, delayed sensory observations (Kalman filtering). This probabilistic prediction is used to estimate the confidence that a saccade is needed (quantified through log-probability ratio), triggering a saccade upon accumulating to a fixed threshold. The model qualitatively explains behavioral observations on the frequency and trigger time distributions of saccades during pursuit over a range of target motion trajectories. Furthermore, this model makes novel predictions that saccade decisions are highly sensitive to uncertainty for small predicted position errors, but this influence diminishes as the magnitude of predicted position error increases. We suggest that this predictive, confidence-based decision-making strategy represents a fundamental principle for the probabilistic neural control of coordinated movements. NEW & NOTEWORTHY: This is the first stochastic dynamical systems model of pursuit-saccade coordination accounting for noise and delays in the sensorimotor system. The model uses Bayesian inference to predictively estimate visual motion, triggering saccades when confidence in predicted position error accumulates to a threshold. This model explains saccade frequency and trigger time distributions across target trajectories and makes novel predictions about the influence of sensory uncertainty in saccade decisions during pursuit.
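The decision rule summarized in this abstract (a Kalman filter predicts position error, and a saccade is triggered once the confidence that the error matters accumulates to a bound) can be sketched in a few lines. This is a toy scalar version with made-up parameter values, not the authors' implementation:

```python
# Hypothetical sketch: a scalar Kalman filter estimates position error from
# noisy observations; the log-odds that |error| exceeds a tolerance
# ("confidence in predicted position error") accumulates, and a saccade is
# triggered when it reaches a fixed bound. All numbers are illustrative.
import math

def norm_cdf(x, mu, var):
    """Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / math.sqrt(2.0 * var)))

def trigger_time(observations, tol=1.0, bound=3.0, obs_var=0.5, proc_var=0.01):
    mu, var = 0.0, 1.0      # prior over position error (deg)
    evidence = 0.0          # accumulated log-odds that |error| > tol
    for t, y in enumerate(observations):
        # Kalman predict (random-walk dynamics), then update with observation y
        var += proc_var
        k = var / (var + obs_var)
        mu += k * (y - mu)
        var *= (1.0 - k)
        # P(|error| > tol) under the Gaussian posterior
        p_out = 1.0 - (norm_cdf(tol, mu, var) - norm_cdf(-tol, mu, var))
        p_out = min(max(p_out, 1e-12), 1 - 1e-12)
        evidence += math.log(p_out / (1.0 - p_out))
        if evidence >= bound:
            return t        # saccade triggered at this sample
    return None             # error never warranted a catch-up saccade

# A persistent 2-deg error (well past tolerance) triggers quickly;
# a near-zero error never does.
print(trigger_time([2.0] * 20))
print(trigger_time([0.05] * 20))
```

The qualitative behavior matches the abstract's description: large predicted errors trigger saccades rapidly, while small errors leave the decision dominated by uncertainty and may never trigger one.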
Affiliation(s)
- Jonathan D Coutinho
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Philippe Lefèvre
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
23
Stewart EEM, Hübner C, Schütz AC. Stronger saccadic suppression of displacement and blanking effect in children. J Vis 2020; 20:13. [PMID: 33052408 PMCID: PMC7571331 DOI: 10.1167/jov.20.10.13] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2020] [Accepted: 09/07/2020] [Indexed: 11/24/2022] Open
Abstract
Humans do not notice small displacements of objects that occur during saccades, a phenomenon termed saccadic suppression of displacement (SSD), and this effect is reduced when a blank is introduced between the pre- and postsaccadic stimulus (Bridgeman, Hendry, & Stark, 1975; Deubel, Schneider, & Bridgeman, 1996). While these effects have been studied extensively in adults, it is unclear how they are characterized in children. A potentially related mechanism, saccadic suppression of contrast sensitivity (a prerequisite for achieving a stable percept), is stronger in children (Bruno, Brambati, Perani, & Morrone, 2006). However, the evidence for how children suppress or integrate transsaccadic stimulus displacements is mixed. While children can integrate basic visual feature information from an early age, they cannot integrate multisensory information (Gori, Viva, Sandini, & Burr, 2008; Nardini, Jones, Bedford, & Braddick, 2008), suggesting a failure in the ability to integrate more complex sensory information. We tested children 7 to 12 years old and adults 19 to 23 years old on their ability to perceive intrasaccadic stimulus displacements, with and without a postsaccadic blank. Results showed that children had stronger SSD than adults and a larger blanking effect. Children also had larger undershoots and more variability in their initial saccade endpoints, indicating greater intrinsic uncertainty, and they were faster in executing corrective saccades to account for these errors. Together, these results suggest that children may have a greater internal expectation or prediction of saccade error than adults; thus, the stronger SSD in children may be due to higher intrinsic uncertainty in target localization or saccade execution.
Affiliation(s)
- Emma E M Stewart
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- Carolin Hübner
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain and Behaviour, Philipps-Universität Marburg, Marburg, Germany
- https://www.uni-marburg.de/en/fb04/team-schuetz/team/alexander-schutz
24
Abstract
Despite recent advances on the mechanisms and purposes of fine oculomotor behavior, a rigorous assessment of the precision and accuracy of the smallest saccades is still lacking. Yet knowledge of how effectively these movements shift gaze is necessary for understanding their functions and is helpful in further elucidating their motor underpinnings. Using a combination of high-resolution eye-tracking and gaze-contingent control, here we examined the accuracy and precision of saccades aimed toward targets ranging from [Formula: see text] to [Formula: see text] eccentricity. We show that even small saccades of just 14-[Formula: see text] are very effective in centering the stimulus on the retina. Furthermore, we show that for a target at any given eccentricity, the probability of eliciting a saccade depends on its efficacy in reducing the foveal offset. The pattern of results reported here is consistent with current knowledge on the motor mechanisms of microsaccade production.
Affiliation(s)
- Martina Poletti
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, 14627, USA
- Department of Neuroscience, University of Rochester, Rochester, NY, 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY, 14627, USA
- Janis Intoy
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY, 14627, USA
- Graduate Program for Neuroscience, Boston University, Boston, MA, 02215, USA
- Michele Rucci
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY, 14627, USA
25
Phani Kumar Malladi S, Mukhopadhyay J, Larabi MC, Chaudhury S. Eye Movement State Trajectory Estimator based on Ancestor Sampling. 2020 IEEE 22ND INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP) 2020. [DOI: 10.1109/mmsp48831.2020.9287155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/19/2023]
26
Mohl JT, Pearson JM, Groh JM. Monkeys and humans implement causal inference to simultaneously localize auditory and visual stimuli. J Neurophysiol 2020; 124:715-727. [PMID: 32727263 DOI: 10.1152/jn.00046.2020] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
Abstract
The environment is sampled by multiple senses, which are woven together to produce a unified perceptual state. However, optimally unifying such signals requires assigning particular signals to the same or different underlying objects or events. Many prior studies (especially in animals) have assumed fusion of cross-modal information, whereas recent work in humans has begun to probe the appropriateness of this assumption. Here we present results from a novel behavioral task in which both monkeys (Macaca mulatta) and humans localized visual and auditory stimuli and reported their perceived sources through saccadic eye movements. When the locations of visual and auditory stimuli were widely separated, subjects made two saccades, while when the two stimuli were presented at the same location they made only a single saccade. Intermediate levels of separation produced mixed response patterns: a single saccade to an intermediate position on some trials or separate saccades to both locations on others. The distribution of responses was well described by a hierarchical causal inference model that accurately predicted both the explicit "same vs. different" source judgments as well as biases in localization of the source(s) under each of these conditions. The results from this task are broadly consistent with prior work in humans across a wide variety of analogous tasks, extending the study of multisensory causal inference to nonhuman primates and to a natural behavioral task with both a categorical assay of the number of perceived sources and a continuous report of the perceived position of the stimuli.NEW & NOTEWORTHY We developed a novel behavioral paradigm for the study of multisensory causal inference in both humans and monkeys and found that both species make causal judgments in the same Bayes-optimal fashion. 
To our knowledge, this is the first demonstration of behavioral causal inference in animals, and this cross-species comparison lays the groundwork for future experiments using neuronal recording techniques that are impractical or impossible in human subjects.
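The hierarchical causal inference computation described above can be sketched numerically. The Gaussian forms below follow the standard causal-inference model of this literature; the parameter names, the zero-mean spatial prior, and the values in the usage example are illustrative assumptions, not the paper's fitted quantities.

```python
import math

def posterior_common_cause(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common=0.5):
    """Posterior probability that visual (x_v) and auditory (x_a) measurements
    arose from a single source, under Gaussian sensory noise (sigma_v, sigma_a)
    and a zero-mean Gaussian spatial prior (sigma_p)."""
    # Marginal likelihood of both measurements given one shared source
    var_c = (sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2
             + sigma_a**2 * sigma_p**2)
    num_c = ((x_v - x_a)**2 * sigma_p**2 + x_v**2 * sigma_a**2
             + x_a**2 * sigma_v**2)
    like_c = math.exp(-0.5 * num_c / var_c) / (2 * math.pi * math.sqrt(var_c))
    # Marginal likelihood given two independent sources
    var_v, var_a = sigma_v**2 + sigma_p**2, sigma_a**2 + sigma_p**2
    like_i = (math.exp(-0.5 * x_v**2 / var_v) / math.sqrt(2 * math.pi * var_v)
              * math.exp(-0.5 * x_a**2 / var_a) / math.sqrt(2 * math.pi * var_a))
    # Bayes' rule over the two causal structures
    return p_common * like_c / (p_common * like_c + (1 - p_common) * like_i)
```

Coincident cues push the posterior above the prior (favoring a single saccade), while widely separated cues push it toward zero (favoring two), mirroring the mixed response patterns the task revealed at intermediate separations.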
Collapse
Affiliation(s)
- Jeff T Mohl
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina
| | - John M Pearson
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Biostatistics and Bioinformatics, Duke University Medical School, Durham, North Carolina
| | - Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
| |
Collapse
|
27
|
Sajad A, Sadeh M, Crawford JD. Spatiotemporal transformations for gaze control. Physiol Rep 2020; 8:e14533. [PMID: 32812395 PMCID: PMC7435051 DOI: 10.14814/phy2.14533] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 06/30/2020] [Accepted: 07/01/2020] [Indexed: 12/13/2022] Open
Abstract
Sensorimotor transformations require spatiotemporal coordination of signals, that is, through both time and space. For example, the gaze control system employs signals that are time-locked to various sensorimotor events, but the spatial content of these signals is difficult to assess during ordinary gaze shifts. In this review, we describe the various models and methods that have been devised to test this question, and their limitations. We then describe a new method that can (a) simultaneously test between all of these models during natural, head-unrestrained conditions, and (b) track the evolving spatial continuum from target (T) to future gaze coding (G, including errors) through time. We then summarize some applications of this technique, comparing spatiotemporal coding in the primate frontal eye field (FEF) and superior colliculus (SC). The results confirm that these areas preferentially encode eye-centered, effector-independent parameters, and show, for the first time in ordinary gaze shifts, a spatial transformation between visual and motor responses from T to G coding. We introduce a new set of spatial models (T-G continuum) that revealed task-dependent timing of this transformation: progressive during a memory delay between vision and action, and almost immediate without such a delay. We synthesize the results from our studies and supplement them with previous knowledge of anatomy and physiology to propose a conceptual model where cumulative transformation noise is realized as inaccuracies in gaze behavior. We conclude that the spatiotemporal transformation for gaze is both local (observed within and across neurons in a given area) and distributed (with common signals shared across remote but interconnected structures).
Collapse
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research, York University, Toronto, ON, Canada
- Psychology Department, Vanderbilt University, Nashville, TN, USA
| | - Morteza Sadeh
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Neurosurgery, University of Illinois at Chicago, Chicago, IL, USA
| | - John Douglas Crawford
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications Program (VISTA), Neuroscience Graduate Diploma Program, Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, ON, Canada
| |
Collapse
|
28
|
Towards assessing extra-retinal uncertainty: A reply to M. Lisi (2020). Cortex 2020; 130:444-448. [PMID: 32641212 DOI: 10.1016/j.cortex.2020.05.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Revised: 05/27/2020] [Accepted: 05/27/2020] [Indexed: 11/23/2022]
|
29
|
The role of the posterior parietal cortex in saccadic error processing. Brain Struct Funct 2020; 225:763-784. [PMID: 32065255 DOI: 10.1007/s00429-020-02034-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2019] [Accepted: 01/27/2020] [Indexed: 10/25/2022]
Abstract
Ocular saccades rapidly displace the fovea from one point of interest to another, thus minimizing the loss of visual information and ensuring the seamless continuity of visual perception. However, because of intrinsic variability in sensory-motor processing, saccades often miss their intended target, necessitating a secondary corrective saccade. Behavioral evidence suggests that the oculomotor system estimates saccadic error by relying on two sources of information: the retinal feedback obtained post-saccadically and an internal extra-retinal signal obtained from efference copy or proprioception. However, the neurophysiological mechanisms underlying this process remain elusive. We trained two rhesus monkeys to perform visually guided saccades towards a target that was imperceptibly displaced at saccade onset on some trials. We recorded activity from neurons in the lateral intraparietal area (LIP), an area implicated in visual, attentional and saccadic processing. We found that a subpopulation of neurons detect saccadic motor error by firing more strongly after an inaccurate saccade. This signal did not depend on retinal feedback or on the execution of a secondary corrective saccade. Moreover, inactivating LIP led to a large and selective increase in the latency of small (i.e., natural) corrective saccade initiation. Our results indicate a key role for LIP in saccadic error processing.
Collapse
|
30
|
Bian T, Wolpert DM, Jiang ZP. Model-Free Robust Optimal Feedback Mechanisms of Biological Motor Control. Neural Comput 2020; 32:562-595. [PMID: 31951794 DOI: 10.1162/neco_a_01260] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Sensorimotor tasks that humans perform are often affected by different sources of uncertainty. Nevertheless, the central nervous system (CNS) can gracefully coordinate our movements. Most learning frameworks rely on the internal model principle, which requires a precise internal representation in the CNS to predict the outcomes of our motor commands. However, learning a perfect internal model in a complex environment over a short period of time is a nontrivial problem. Indeed, achieving proficient motor skills may require years of training for some difficult tasks. Internal models alone may not be adequate to explain the motor adaptation behavior during the early phase of learning. Recent studies investigating the active regulation of motor variability, the presence of suboptimal inference, and model-free learning have challenged some of the traditional viewpoints on the sensorimotor learning mechanism. As a result, it may be necessary to develop a computational framework that can account for these new phenomena. Here, we develop a novel theory of motor learning, based on model-free adaptive optimal control, which can bypass some of the difficulties in existing theories. This new theory is based on our recently developed adaptive dynamic programming (ADP) and robust ADP (RADP) methods and is especially useful for accounting for motor learning behavior when an internal model is inaccurate or unavailable. Our preliminary computational results are in line with experimental observations reported in the literature and can account for some phenomena that are inexplicable using existing models.
Collapse
Affiliation(s)
- Tao Bian
- Control and Networks Lab, Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY 11201, U.S.A.
| | - Daniel M Wolpert
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, U.S.A., and Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, U.K.
| | - Zhong-Ping Jiang
- Control and Networks Lab, Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY 11201, U.S.A.
| |
Collapse
|
31
|
|
32
|
Abstract
Saccades are rapid eye movements that orient the visual axis toward objects of interest to allow their processing by the central, high-acuity retina. Our ability to collect visual information efficiently relies on saccadic accuracy, which is limited by a combination of uncertainty in the location of the target and motor noise. It has been observed that saccades have a systematic tendency to fall short of their intended targets, and it has been suggested that this bias originates from a cost function that overly penalizes hypermetric errors. Here, we tested this hypothesis by systematically manipulating the positional uncertainty of saccadic targets. We found that increasing uncertainty produced not only a larger spread of the saccadic endpoints but also more hypometric errors and a systematic bias toward the average of target locations in a given block, revealing that prior knowledge was integrated into saccadic planning. Moreover, by examining how variability and bias covaried across conditions, we estimated the asymmetry of the cost function and found that it was related to individual differences in the additional time needed to program secondary saccades for correcting hypermetric errors, relative to hypometric ones. Taken together, these findings reveal that the saccadic system uses a probabilistic-Bayesian control strategy to compensate for uncertainty in a statistically principled way and to minimize the expected cost of saccadic errors.
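The asymmetric-cost account in this abstract can be illustrated with a toy computation. The quadratic loss, the 2:1 overshoot penalty, and the Gaussian target posterior below are invented for illustration, not values estimated in the study.

```python
import numpy as np

def planned_amplitude(mu, sigma, overshoot_weight=2.0, seed=0):
    """Amplitude minimizing expected cost when the target position is
    uncertain, T ~ N(mu, sigma), and hypermetric (overshoot) errors cost
    more than hypometric ones (overshoot_weight > 1)."""
    rng = np.random.default_rng(seed)
    targets = rng.normal(mu, sigma, 50_000)      # samples from the posterior
    grid = np.linspace(mu - 2 * sigma, mu + 2 * sigma, 801)
    costs = []
    for a in grid:
        err = a - targets                        # + = overshoot, - = undershoot
        weight = np.where(err > 0, overshoot_weight, 1.0)
        costs.append(np.mean(weight * err**2))
    return grid[int(np.argmin(costs))]
```

With a symmetric cost the minimizer sits at the posterior mean; penalizing overshoot twice as heavily shifts it short of the target, reproducing the hypometric bias, and the shift grows with positional uncertainty, matching the larger hypometria the abstract reports under increased uncertainty.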
Collapse
|
33
|
Eggert T, Straube A. Saccade variability in healthy subjects and cerebellar patients. PROGRESS IN BRAIN RESEARCH 2019; 249:141-152. [PMID: 31325974 DOI: 10.1016/bs.pbr.2019.03.021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/01/2023]
Abstract
In a previous study we developed a model for the inter-trial variance of saccade trajectories in the rhesus macaque. The analysis of that model showed that signal-dependent noise results in different effector variabilities depending on whether the noise is propagated feedforward through the system (accumulating noise) or whether the noise originates from inside of a premotor feedback loop (feedback noise). This allowed the gain of the premotor feedback loop to be estimated directly from behavioral data. In the present study, we applied the model in healthy human subjects and in patients with chronic isolated cerebellar lesions due to ischemic stroke. Humans showed smaller noise coefficients of variation for both accumulating noise and feedback noise and smaller feedback gain than the monkeys. Despite these differences in the model parameters, the qualitative differences between the two noise types were similar in both species. Cerebellar patients showed larger inter-trial variance of saccade amplitude compared to controls, but saccade metrics and dynamics were well compensated. The parameters of the noise model did not differ significantly between groups. The variance of the saccade amplitude correlated highly (r=0.95) with the coefficient of variation of accumulating noise but not with the other model parameters. The results suggest that the cerebellum plays a role not only in premotor feedback but also in feedforward saccade control and that the latter is responsible for increased endpoint variance in cerebellar patients.
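The distinction the model draws between accumulating (feedforward) noise and feedback noise can be illustrated with a toy discrete-time integrator; the ramp reference, step count, and gain value here are arbitrary illustrations, not the fitted model of the study.

```python
import numpy as np

def endpoint_sd(gain, noise_sd=0.05, target=10.0, n_steps=40,
                n_trials=3000, seed=0):
    """Endpoint spread of a noisy integrator driven along a ramp toward the
    target. gain=0 lets noise accumulate feedforward; gain>0 closes a
    premotor negative-feedback loop that partially cancels each disturbance."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)
    for t in range(n_steps):
        reference = target * (t + 1) / n_steps   # desired position this step
        drive = target / n_steps + gain * (reference - x)
        x = x + drive + rng.normal(0.0, noise_sd, n_trials)
    return x.std()
```

Feedforward noise accumulates across all steps, whereas inside the loop each disturbance is attenuated by a factor (1 - gain) per step, so closing the loop sharply reduces inter-trial endpoint variance; this qualitative difference between the two noise types is what allows the loop gain to be estimated from behavioral variability.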
Collapse
Affiliation(s)
- Thomas Eggert
- Department of Neurology, University Hospital, LMU Munich, Munich, Germany.
| | - Andreas Straube
- Department of Neurology, University Hospital, LMU Munich, Munich, Germany
| |
Collapse
|
34
|
Kangasrääsiö A, Jokinen JPP, Oulasvirta A, Howes A, Kaski S. Parameter Inference for Computational Cognitive Models with Approximate Bayesian Computation. Cogn Sci 2019; 43:e12738. [PMID: 31204797 PMCID: PMC6593436 DOI: 10.1111/cogs.12738] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2017] [Revised: 04/09/2019] [Accepted: 04/11/2019] [Indexed: 11/28/2022]
Abstract
This paper addresses a common challenge with computational cognitive models: identifying parameter values that are both theoretically plausible and generate predictions that match well with empirical data. While computational models can offer deep explanations of cognition, they are computationally complex and often out of reach of traditional parameter fitting methods. Weak methodology may lead to premature rejection of valid models or to acceptance of models that might otherwise be falsified. Mathematically robust fitting methods are, therefore, essential to the progress of computational modeling in cognitive science. In this article, we investigate the capability and role of modern fitting methods—including Bayesian optimization and approximate Bayesian computation—and contrast them to some more commonly used methods: grid search and Nelder–Mead optimization. Our investigation consists of a reanalysis of the fitting of two previous computational models: an Adaptive Control of Thought—Rational model of skill acquisition and a computational rationality model of visual search. The results contrast the efficiency and informativeness of the methods. A key advantage of the Bayesian methods is the ability to estimate the uncertainty of fitted parameter values. We conclude that approximate Bayesian computation is (a) efficient, (b) informative, and (c) offers a path to reproducible results.
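A minimal rejection-ABC sketch in the spirit of the methods the paper compares; the toy simulator, tolerance, prior, and summary statistic are all invented for illustration.

```python
import numpy as np

def abc_rejection(observed_stat, simulate, prior_sample, n_draws=5000, eps=0.1):
    """Rejection ABC: draw parameters from the prior, simulate, and keep
    draws whose summary statistic lands within eps of the observed one.
    The accepted draws approximate the posterior."""
    rng = np.random.default_rng(1)
    accepted = [theta for theta in (prior_sample(rng) for _ in range(n_draws))
                if abs(simulate(theta, rng) - observed_stat) <= eps]
    return np.array(accepted)

# Toy "cognitive model": the mean of 100 noisy simulated trials, with an
# unknown parameter theta to recover from an observed summary statistic.
posterior = abc_rejection(
    0.5,
    simulate=lambda theta, rng: rng.normal(theta, 0.05, 100).mean(),
    prior_sample=lambda rng: rng.uniform(0.0, 1.0),
)
```

Unlike a point estimate from grid search or Nelder-Mead optimization, the accepted sample directly quantifies parameter uncertainty, which is the advantage the abstract highlights.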
Collapse
Affiliation(s)
| | | | | | - Andrew Howes
- School of Computer Science, University of Birmingham
| | - Samuel Kaski
- Department of Computer Science, Aalto University
| |
Collapse
|
35
|
Reuter EM, Marinovic W, Welsh TN, Carroll TJ. Increased preparation time reduces, but does not abolish, action history bias of saccadic eye movements. J Neurophysiol 2019; 121:1478-1490. [PMID: 30785812 DOI: 10.1152/jn.00512.2018] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The characteristics of movements are strongly history-dependent. Marinovic et al. (Marinovic W, Poh E, de Rugy A, Carroll TJ. eLife 6: e26713, 2017) showed that past experience influences the execution of limb movements through a combination of temporally stable processes that are strictly use dependent and dynamically evolving and context-dependent processes that reflect prediction of future actions. Here we tested the basis of history-dependent biases for multiple spatiotemporal features of saccadic eye movements under two preparation time conditions (long and short). Twenty people performed saccades to visual targets. To prompt context-specific expectations of most likely target locations, 1 of 12 potential target locations was specified on ~85% of the trials and each remaining target was presented on ~1% of trials. In long preparation trials participants were shown the location of the next target 1 s before its presentation onset, whereas in short preparation trials each target was first specified as the cue to move. Saccade reaction times and direction were biased by recent saccade history but according to distinct spatial tuning profiles. Biases were purely expectation related for saccadic reaction times, which increased linearly as the distance from the repeated target location increased when preparation time was short but were similar to all targets when preparation time was long. By contrast, the directions of saccades were biased toward the repeated target in both preparation time conditions, although to a lesser extent when the target location was precued (long preparation). The results suggest that saccade history affects saccade dynamics via both use- and expectation-dependent mechanisms and that movement history has dissociable effects on reaction time and saccadic direction. NEW & NOTEWORTHY The characteristics of our movements are influenced not only by concurrent sensory inputs but also by how we have moved in the past.
For limb movements, history effects involve both use-dependent processes due strictly to movement repetition and processes that reflect prediction of future actions. Here we show that saccade history also affects saccade dynamics via use- and expectation-dependent mechanisms but that movement history has dissociable effects on saccade reaction time and direction.
Collapse
Affiliation(s)
- Eva-Maria Reuter
- Centre for Sensorimotor Performance, School of Human Movement and Nutrition Sciences, The University of Queensland, Brisbane, Queensland, Australia
| | - Welber Marinovic
- School of Psychology, Curtin University, Perth, Western Australia, Australia
| | - Timothy N Welsh
- Faculty of Kinesiology and Physical Education, University of Toronto, Toronto, Ontario, Canada
| | - Timothy J Carroll
- Centre for Sensorimotor Performance, School of Human Movement and Nutrition Sciences, The University of Queensland, Brisbane, Queensland, Australia
| |
Collapse
|
36
|
Abstract
The capability of directing gaze to relevant parts of the environment is crucial for our survival. Computational models have proposed quantitative accounts of human gaze selection in a range of visual search tasks. Initially, models suggested that gaze is directed to the locations in a visual scene at which some criterion, such as the probability of target location, the reduction of uncertainty, or the maximization of reward, appears to be maximal. But subsequent studies established that, in some tasks, humans instead direct their gaze to locations such that after the single next look the criterion is expected to become maximal. However, in tasks going beyond a single action, the entire action sequence may determine future rewards, thereby necessitating planning beyond a single next gaze shift. While previous empirical studies have suggested that human gaze sequences are planned, quantitative evidence for whether the human visual system is capable of finding optimal eye movement sequences according to probabilistic planning is missing. Here we employ a series of computational models to investigate whether humans are capable of looking ahead more than the next single eye movement. We found clear evidence that subjects' behavior was better explained by the model of a planning observer than by a myopic, greedy observer, which selects only a single saccade at a time. In particular, the location of our subjects' first fixation differed depending on the stimulus and the time available for the search, which was well predicted quantitatively by a probabilistic planning model. Overall, our results are the first evidence that the human visual system's gaze selection agrees with optimal planning under uncertainty.
Collapse
|
37
|
Costela FM, Woods RL. When Watching Video, Many Saccades Are Curved and Deviate From a Velocity Profile Model. Front Neurosci 2019; 12:960. [PMID: 30666178 PMCID: PMC6330331 DOI: 10.3389/fnins.2018.00960] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2017] [Accepted: 12/03/2018] [Indexed: 12/20/2022] Open
Abstract
Commonly, saccades are thought to be ballistic eye movements, not modified during flight, with a straight path and a well-described velocity profile. However, they do not always follow a straight path, and studies of saccade curvature have been reported previously. In a prior study, we developed a real-time, saccade-trajectory prediction algorithm to improve the updating of gaze-contingent displays and found that saccades with a curved path or that deviated from the expected velocity profile were not well fit by our saccade-prediction algorithm (velocity-profile deviation), and thus had larger updating errors than saccades that had a straight path and a velocity profile that was fit well by the model. Further, we noticed that curved saccades and saccades with high velocity-profile deviations were more common than we had expected when participants performed a natural-viewing task. Since those saccades caused larger display updating errors, we sought a better understanding of them. Here we examine factors that could affect the curvature and velocity profile of saccades using a pool of 218,744 saccades from 71 participants watching “Hollywood” video clips. Those factors included characteristics of the participants (e.g., age), of the videos (importance of faces for following the story, genre), of the saccade (e.g., magnitude, direction), time during the session (e.g., fatigue), and the presence and timing of scene cuts. While viewing the video clips, saccades were more likely to be horizontal or vertical than oblique. Measured curvature and velocity-profile deviation had continuous, skewed frequency distributions. We used mixed-effects regression models that included cubic terms and found a complex relationship between curvature, velocity-profile deviation, and saccade duration (or magnitude). Curvature and velocity-profile deviation were related to some video-dependent features such as lighting, face presence, or nature and human-figure content. Time during the session was a predictor of velocity-profile deviations. Further, in univariable models, saccades that were in flight at the time of a scene cut had higher velocity-profile deviations and lower curvature. Saccade characteristics vary with a variety of factors, which suggests complex interactions between oculomotor control and scene content that could be explored further.
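Saccade curvature as examined here is commonly quantified as the maximum perpendicular deviation from the straight start-to-end path, normalized by amplitude. This sketch uses that common metric, which may differ in detail from the paper's definition.

```python
import numpy as np

def saccade_curvature(xs, ys):
    """Maximum perpendicular deviation of sampled gaze positions from the
    straight line joining saccade start and end, as a fraction of amplitude."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    start, end = np.array([xs[0], ys[0]]), np.array([xs[-1], ys[-1]])
    direction = end - start
    amplitude = np.hypot(*direction)
    rel = np.stack([xs, ys], axis=1) - start
    # signed perpendicular distance via the 2-D cross product
    perp = (rel[:, 0] * direction[1] - rel[:, 1] * direction[0]) / amplitude
    return np.max(np.abs(perp)) / amplitude
```

A perfectly straight saccade scores 0; a trajectory bowing halfway out to one side over a unit-length path scores 0.5.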
Collapse
Affiliation(s)
- Francisco M Costela
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA, United States; Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
| | - Russell L Woods
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA, United States; Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
| |
Collapse
|
38
|
Smalianchuk I, Jagadisan UK, Gandhi NJ. Instantaneous Midbrain Control of Saccade Velocity. J Neurosci 2018; 38:10156-10167. [PMID: 30291204 PMCID: PMC6246878 DOI: 10.1523/jneurosci.0962-18.2018] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2018] [Revised: 09/18/2018] [Accepted: 09/22/2018] [Indexed: 01/19/2023] Open
Abstract
The ability to interact with our environment requires the brain to transform spatially represented sensory signals into temporally encoded motor commands for appropriate control of the relevant effectors. For visually guided eye movements, or saccades, the superior colliculus (SC) is assumed to be the final stage of spatial representation, and instantaneous control of the movement is achieved through a rate code representation in the lower brain stem. We investigated whether SC activity in nonhuman primates (Macaca mulatta, 2 male and 1 female) also uses a dynamic rate code, in addition to the spatial representation. Noting that the kinematics of amplitude-matched movements exhibit trial-to-trial variability, we regressed instantaneous SC activity with instantaneous eye velocity and found a robust correlation throughout saccade duration. Peak correlation was tightly linked to time of peak velocity, the optimal efferent delay between SC activity and eye velocity was constant at ∼12 ms both at onset and during the saccade, and SC neurons with higher firing rates exhibited stronger correlations. Moreover, the strong correlative relationship and constant efferent delay observation were preserved when eye movement profiles were substantially altered by a blink-induced perturbation. These results indicate that the rate code of individual SC neurons can control instantaneous eye velocity and argue against a serial process of spatial-to-temporal transformation. They also motivated us to consider a new framework of saccade control that does not incorporate traditionally accepted elements, such as the comparator and resettable integrator, whose neural correlates have remained elusive. SIGNIFICANCE STATEMENT All movements exhibit time-varying features that are under instantaneous control of the innervating neural command. At what stage in the brain is dynamical control present?
It is well known that, in the skeletomotor system, neurons in the motor cortex use dynamical control. In the oculomotor system, in contrast, instantaneous velocity control of saccadic eye movements is not thought to be enforced until the lower brainstem. Using correlations between residual signals across trials, we show that instantaneous control of saccade velocity is present earlier in the visuo-oculomotor neuraxis, at the level of superior colliculus. The results require us to consider alternate frameworks of the neural control of saccades.
Collapse
Affiliation(s)
- Ivan Smalianchuk
- Department of Bioengineering
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
| | - Uday K Jagadisan
- Department of Bioengineering
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
| | - Neeraj J Gandhi
- Department of Bioengineering
- Department of Neuroscience
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
| |
Collapse
|
39
|
Motion Extrapolation for Eye Movements Predicts Perceived Motion-Induced Position Shifts. J Neurosci 2018; 38:8243-8250. [PMID: 30104339 DOI: 10.1523/jneurosci.0736-18.2018] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2018] [Revised: 07/23/2018] [Accepted: 07/24/2018] [Indexed: 11/21/2022] Open
Abstract
Transmission delays in the nervous system pose challenges for the accurate localization of moving objects as the brain must rely on outdated information to determine their position in space. Acting effectively in the present requires that the brain compensates not only for the time lost in the transmission and processing of sensory information, but also for the expected time that will be spent preparing and executing motor programs. Failure to account for these delays will result in the mislocalization and mistargeting of moving objects. In the visuomotor system, where sensory and motor processes are tightly coupled, this predicts that the perceived position of an object should be related to the latency of saccadic eye movements aimed at it. Here we use the flash-grab effect, a mislocalization of briefly flashed stimuli in the direction of a reversing moving background, to induce shifts of perceived visual position in human observers (male and female). We find a linear relationship between saccade latency and perceived position shift, challenging the classic dissociation between "vision for action" and "vision for perception" for tasks of this kind and showing that oculomotor position representations are either shared with or tightly coupled to perceptual position representations. Altogether, we show that the visual system uses both the spatial and temporal characteristics of an upcoming saccade to localize visual objects for both action and perception. SIGNIFICANCE STATEMENT Accurately localizing moving objects is a computational challenge for the brain due to the inevitable delays that result from neural transmission. To solve this, the brain might implement motion extrapolation, predicting where an object ought to be at the present moment. Here, we use the flash-grab effect to induce perceptual position shifts and show that the latency of imminent saccades predicts the perceived position of the objects they target.
This counterintuitive finding is important because it not only shows that motion extrapolation mechanisms indeed work to reduce the behavioral impact of neural transmission delays in the human brain, but also that these mechanisms are closely matched in the perceptual and oculomotor systems.
Collapse
|
40
|
Manohar SG, Muhammed K, Fallon SJ, Husain M. Motivation dynamically increases noise resistance by internal feedback during movement. Neuropsychologia 2018; 123:19-29. [PMID: 30005926 PMCID: PMC6363982 DOI: 10.1016/j.neuropsychologia.2018.07.011] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2017] [Revised: 06/19/2018] [Accepted: 07/09/2018] [Indexed: 12/12/2022]
Abstract
Motivation improves performance, pushing us beyond our normal limits. One general explanation for this is that the effects of neural noise can be reduced, at a cost. If this were possible, reward would promote investment in resisting noise. But how could the effects of noise be attenuated, and why should this be costly? Negative feedback may be employed to compensate for disturbances in a neural representation. Such feedback would increase the robustness of neural representations to internal signal fluctuations, producing a stable attractor. We propose that encoding this negative feedback in neural signals would incur additional costs proportional to the strength of the feedback signal. We use eye movements to test the hypothesis that motivation by reward improves precision by increasing the strength of internal negative feedback. We find that reward simultaneously increases the amplitude, velocity and endpoint precision of saccades, indicating true improvement in oculomotor performance. Analysis of trajectories demonstrates that variation in the eye position during the course of saccades is predictive of the variation of endpoints, but this relation is reduced by reward. This indicates that motivation permits more aggressive correction of errors during the saccade, so that they no longer affect the endpoint. We suggest that such increases in internal negative feedback allow attractor stability, albeit at a cost, and therefore may explain how motivation improves cognitive as well as motor precision. Motivation can increase speed and reduce behavioural variability. This requires stabilising neural representations so they are robust to noise. Stable representations or attractors in neural systems may come at the cost of stronger negative feedback. Examination of trajectory correlations demonstrates that reward increases negative feedback. We propose that the cost of stabilising signals explains why effort is expensive.
Collapse
Affiliation(s)
- Sanjay G Manohar
- Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Level 6 West Wing, OX3 9DU, United Kingdom; Department of Experimental Psychology, 15 Parks Road, Oxford, United Kingdom.
| | - Kinan Muhammed
- Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Level 6 West Wing, OX3 9DU, United Kingdom
| | - Sean J Fallon
- Department of Experimental Psychology, 15 Parks Road, Oxford, United Kingdom
| | - Masud Husain
- Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Level 6 West Wing, OX3 9DU, United Kingdom; Department of Experimental Psychology, 15 Parks Road, Oxford, United Kingdom
| |
Collapse
|
41
|
Billino J, Hennig J, Gegenfurtner KR. Association between COMT genotype and the control of memory guided saccades: Individual differences in healthy adults reveal a detrimental role of dopamine. Vision Res 2017; 141:170-180. [DOI: 10.1016/j.visres.2016.10.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2016] [Revised: 10/10/2016] [Accepted: 10/11/2016] [Indexed: 10/20/2022]
|
42
|
Itaguchi Y, Fukuzawa K. Influence of Speed and Accuracy Constraints on Motor Learning for a Trajectory-Based Movement. J Mot Behav 2017; 50:653-663. [PMID: 29190186 DOI: 10.1080/00222895.2017.1400946] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
This study investigated the influence of task constraints on motor learning for a trajectory-based movement, with a focus on the speed-accuracy relationship. In the experiment, participants practiced trajectory-based movements for five consecutive days, training under either a time-minimization or a time-matching constraint. The results demonstrated that the speed-accuracy tradeoff was weak or absent in the training situation. When participants practiced the movement under the time-minimization constraint, movement errors did not vary while movement time decreased. Under the time-matching constraint, errors decreased as the sessions proceeded. These results are discussed in terms of the combination of signal-dependent noise and exploratory search noise. The findings suggest that updating of spatial and temporal factors does not occur simultaneously in motor learning.
Collapse
Affiliation(s)
- Yoshihiro Itaguchi
- Department of System Design Engineering, Keio University, Yokohama, Kanagawa, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
| | | |
Collapse
|
43
|
Kristensen E, Rivet B, Guérin-Dugué A. Estimation of overlapped Eye Fixation Related Potentials: The General Linear Model, a more flexible framework than the ADJAR algorithm. J Eye Mov Res 2017; 10:JEMR-10-1. [PMID: 33828644 PMCID: PMC7141057 DOI: 10.16910/jemr.10.1.7] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
The Eye Fixation Related Potential (EFRP) is estimated by averaging EEG signals across epochs time-locked to ocular fixation onset. The main limitation of this estimate is overlap. Inter-Fixation Intervals (IFIs), typically around 300 ms in the case of unrestricted eye movements, depend on participants' oculomotor patterns and can be shorter than the latency of the components of the evoked potential. If the duration of an epoch is longer than the IFI, more than one fixation can occur within it, and adjacent neural responses overlap. The classical average takes into account neither the presence of several fixations during an epoch nor this overlap. To address the overlap issue, the Adjacent Response algorithm (ADJAR), which is popular for event-related potential estimation, was compared to the General Linear Model (GLM) on a real dataset from a joint EEG and eye-tracking experiment. The results showed that the ADJAR algorithm rests on assumptions that are too restrictive for EFRP estimation; the GLM proved more robust and efficient. Different configurations of the GLM were compared to estimate the potential elicited at image onset as well as the EFRP at the beginning of exploration. These configurations took into account the overlap between the event-related potential at stimulus presentation and the following EFRP, and the distinction between the potential elicited by the first fixation onset and subsequent ones. The choice of GLM configuration is a tradeoff between assumptions about expected behavior and the quality of the EFRP estimate: the number of different potentials estimated by a given model must be controlled to avoid erroneous estimates with large variances.
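The overlap-correction idea behind the GLM approach can be sketched in a few lines: stack time-shifted event indicators into a design matrix and solve for the response kernel by least squares, so that overlapping responses are jointly deconvolved rather than averaged. A minimal sketch with simulated data (the kernel shape, interval distribution, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def build_design_matrix(onsets, n_samples, kernel_len):
    """Column j of X is 1 at (onset + j) for every event onset, so
    X @ kernel reproduces the summed, possibly overlapping, responses."""
    X = np.zeros((n_samples, kernel_len))
    for s in onsets:
        for j in range(kernel_len):
            if s + j < n_samples:
                X[s + j, j] += 1.0
    return X

# Ground-truth "EFRP" kernel (100 samples) and closely spaced fixations
# (inter-fixation intervals shorter than the kernel, so responses overlap).
kernel_len = 100
lags = np.linspace(0, 1, kernel_len)
true_kernel = np.sin(np.pi * lags) * np.exp(-3 * lags)
onsets = np.cumsum(rng.integers(40, 80, size=200))
n = onsets[-1] + kernel_len
X = build_design_matrix(onsets, n, kernel_len)
eeg = X @ true_kernel + rng.normal(0, 0.2, n)

# Naive epoch average vs. GLM (least-squares deconvolution).
naive = np.mean([eeg[s:s + kernel_len] for s in onsets[:-1]], axis=0)
glm, *_ = np.linalg.lstsq(X, eeg, rcond=None)

# The naive average is biased by the tails of neighboring responses;
# the GLM estimate recovers the kernel much more closely.
print(np.abs(naive - true_kernel).max(), np.abs(glm - true_kernel).max())
```

The jitter in the inter-fixation intervals is what makes the least-squares problem well posed; with strictly periodic fixations the overlapping responses could not be separated.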
Collapse
Affiliation(s)
- Emmanuelle Kristensen
- Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble, France; CNRS, GIPSA-Lab, F-38000 Grenoble, France; Univ. Grenoble Alpes, GIPSA-Lab, 11 rue des Mathématiques, Grenoble Campus, BP 46, 38000 Grenoble, France
| | - Bertrand Rivet
- Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble, France; CNRS, GIPSA-Lab, F-38000 Grenoble, France; Univ. Grenoble Alpes, GIPSA-Lab, 11 rue des Mathématiques, Grenoble Campus, BP 46, 38000 Grenoble, France
| | - Anne Guérin-Dugué
- Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble, France; CNRS, GIPSA-Lab, F-38000 Grenoble, France; Univ. Grenoble Alpes, GIPSA-Lab, 11 rue des Mathématiques, Grenoble Campus, BP 46, 38000 Grenoble, France
| |
Collapse
|
44
|
Abstract
Trial-to-trial variability in the execution of movements and motor skills is ubiquitous and widely considered to be the unwanted consequence of a noisy nervous system. However, recent studies have suggested that motor variability may also be a feature of how sensorimotor systems operate and learn. This view, rooted in reinforcement learning theory, equates motor variability with purposeful exploration of motor space that, when coupled with reinforcement, can drive motor learning. Here we review studies that explore the relationship between motor variability and motor learning in both humans and animal models. We discuss neural circuit mechanisms that underlie the generation and regulation of motor variability and consider the implications that this work has for our understanding of motor learning.
Collapse
Affiliation(s)
- Ashesh K Dhawale
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, Massachusetts 02138
- Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138
| | - Maurice A Smith
- Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138
| | - Bence P Ölveczky
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, Massachusetts 02138
- Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138
| |
Collapse
|
45
|
Keenan KG, Huddleston WE, Ernest BE. Altered visual strategies and attention are related to increased force fluctuations during a pinch grip task in older adults. J Neurophysiol 2017; 118:2537-2548. [PMID: 28701549 DOI: 10.1152/jn.00928.2016] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2016] [Revised: 06/19/2017] [Accepted: 07/08/2017] [Indexed: 11/22/2022] Open
Abstract
The purpose of the study was to determine the visual strategies used by older adults during a pinch grip task and to assess the relations between visual strategy, deficits in attention, and increased force fluctuations in older adults. Eye movements of 23 older adults (>65 yr) were monitored during a low-force pinch grip task while subjects viewed three common visual feedback displays. Performance on the Grooved Pegboard test and an attention task (which required no concurrent hand movements) was also measured. Visual strategies varied across subjects and depended on the type of visual feedback provided. First, while viewing a high-gain compensatory feedback display (a horizontal bar moving up and down with force), 9 of 23 older subjects adopted a strategy of performing saccades during the task, which resulted in 2.5 times greater force fluctuations in those who exhibited saccades compared with those who maintained fixation near the target line. Second, during pursuit feedback displays (a force trace moving left to right across the screen and up and down with force), all subjects exhibited multiple saccades, and increased force fluctuations were associated (rs = 0.6; P = 0.002) with fewer saccades during the pursuit task. Also, decreased low-frequency (<4 Hz) force fluctuations and shorter Grooved Pegboard times were significantly related (P = 0.033 and P = 0.005, respectively) to higher (i.e., better) attention z scores. Comparison of these results with our previously published results in young subjects indicates that saccadic eye movements and attention are related to force control in older adults. NEW & NOTEWORTHY The significant contributions of the study are the addition of eye movement data and an attention task to explain differences in hand motor control across different visual displays in older adults. Older participants used different visual strategies across varying feedback displays, and saccadic eye movements were related to motor performance. In addition, older individuals with deficits in attention had impaired motor performance on two different hand motor control tasks, including the Grooved Pegboard test.
Collapse
Affiliation(s)
- Kevin G Keenan
- Department of Kinesiology, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin; Center for Aging and Translational Research, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin
| | - Wendy E Huddleston
- Department of Kinesiology, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin; Center for Aging and Translational Research, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin
| | - Bradley E Ernest
- Department of Kinesiology, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin
| |
Collapse
|
46
|
Variations in crowding, saccadic precision, and spatial localization reveal the shared topology of spatial vision. Proc Natl Acad Sci U S A 2017; 114:E3573-E3582. [PMID: 28396415 DOI: 10.1073/pnas.1615504114] [Citation(s) in RCA: 62] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Visual sensitivity varies across the visual field in several characteristic ways. For example, sensitivity declines sharply in peripheral (vs. foveal) vision and is typically worse in the upper (vs. lower) visual field. These variations can affect processes ranging from acuity and crowding (the deleterious effect of clutter on object recognition) to the precision of saccadic eye movements. Here we examine whether these variations can be attributed to a common source within the visual system. We first compared the size of crowding zones with the precision of saccades using an oriented clock target and two adjacent flanker elements. We report that both saccade precision and crowded-target reports vary idiosyncratically across the visual field with a strong correlation across tasks for all participants. Nevertheless, both group-level and trial-by-trial analyses reveal dissociations that exclude a common representation for the two processes. We therefore compared crowding with two measures of spatial localization: Landolt-C gap resolution and three-dot bisection. Here we observe similar idiosyncratic variations with strong interparticipant correlations across tasks despite considerably finer precision. Hierarchical regression analyses further show that variations in spatial precision account for much of the variation in crowding, including the correlation between crowding and saccades. Altogether, we demonstrate that crowding, spatial localization, and saccadic precision show clear dissociations, indicative of independent spatial representations, whilst nonetheless sharing idiosyncratic variations in spatial topology. We propose that these topological idiosyncrasies are established early in the visual system and inherited throughout later stages to affect a range of higher-level representations.
Collapse
|
47
|
Federighi P, Wong AL, Shelhamer M. Inter-Trial Correlations in Predictive-Saccade Endpoints: Fractal Scaling Reflects Differential Control along Task-Relevant and Orthogonal Directions. Front Hum Neurosci 2017; 11:100. [PMID: 28326028 PMCID: PMC5339309 DOI: 10.3389/fnhum.2017.00100] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2015] [Accepted: 02/20/2017] [Indexed: 11/23/2022] Open
Abstract
Saccades exhibit variation in performance from one trial to the next, even when paced at a constant rate by targets at two fixed locations. We previously showed that amplitude fluctuations in consecutive predictive saccades have fractal structure: the spectrum of the sequence of consecutive amplitudes has a power-law (f^-α) form, indicative of inter-trial correlations that reflect the storage of prior performance information to guide the planning of subsequent movements. More gradual decay of these inter-trial correlations coincides with a larger magnitude of the spectral slope α, and indicates stronger information storage over longer times. We have previously demonstrated that larger decay exponents (α) are associated with faster adaptation in a saccadic double-step task. Here, we extend this line of investigation to predictive-saccade endpoints (i.e., movement errors). Subjects made predictive, paced saccades between two fixed targets along a horizontal or vertical axis. Endpoint fluctuations both along (on-axis) and orthogonal to (off-axis) the direction of target motion were examined for correlations and fractal structure. Endpoints in the direction of target motion had little or no correlation or power-law scaling, suggesting that successive movements were uncorrelated (white noise). In the orthogonal direction, however, the sequence of endpoints did exhibit inter-trial correlations and scaling. In contrast, in our previous work the scaling of saccade amplitudes is strong along the target direction. This may reflect the fact that while saccade amplitudes are neurally programmed, endpoints are not directly controlled but instead serve as a source of error feedback. Hence, the lack of correlations in on-axis endpoint errors suggests that maximum information has been extracted from previous movement errors to plan subsequent movement amplitudes. In contrast, correlations in the off-axis component indicate that useful information still remains in this error (residual) sequence, suggesting that saccades are less tightly controlled along the orthogonal direction.
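The power-law analysis described above reduces, in its simplest form, to a log-log regression on the periodogram of the trial-by-trial endpoint sequence. A minimal sketch (the noise generators and sequence length are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

def spectral_slope(x):
    """Estimate the exponent alpha of a 1/f^alpha spectrum by linear
    regression of log power on log frequency (DC bin excluded)."""
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    freqs = np.fft.rfftfreq(len(x))[1:]
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope  # alpha

def one_over_f_noise(n, alpha):
    """Shape white noise in the frequency domain to a 1/f^alpha spectrum."""
    freqs = np.fft.rfftfreq(n)
    spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    spectrum[1:] /= freqs[1:] ** (alpha / 2)
    spectrum[0] = 0.0
    return np.fft.irfft(spectrum, n)

on_axis  = rng.normal(size=4096)               # uncorrelated endpoints
off_axis = one_over_f_noise(4096, alpha=0.8)   # inter-trial correlations

print(spectral_slope(on_axis))   # near 0 (white noise)
print(spectral_slope(off_axis))  # near the generating alpha of 0.8
```

A slope near zero corresponds to the uncorrelated on-axis endpoints, while a clearly positive alpha corresponds to the correlated off-axis sequence.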
Collapse
Affiliation(s)
- Pamela Federighi
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA; University of Firenze, Firenze, Italy
| | - Aaron L Wong
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Mark Shelhamer
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| |
Collapse
|
48
|
Boutsen FA, Dvorak JD, Pulusu VK, Ross ED. Altered saccadic targets when processing facial expressions under different attentional and stimulus conditions. Vision Res 2017; 133:150-160. [PMID: 28279711 DOI: 10.1016/j.visres.2016.07.012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2015] [Revised: 05/16/2016] [Accepted: 07/09/2016] [Indexed: 10/20/2022]
Abstract
Depending on a subject's attentional bias, robust changes in emotional perception occur when facial blends (different emotions expressed on upper/lower face) are presented tachistoscopically. If no instructions are given, subjects overwhelmingly identify the lower facial expression when blends are presented to either visual field. If asked to attend to the upper face, subjects overwhelmingly identify the upper facial expression in the left visual field but remain slightly biased to the lower facial expression in the right visual field. The current investigation sought to determine whether differences in initial saccadic targets could help explain the perceptual biases described above. Ten subjects were presented with full and blend facial expressions under different attentional conditions. No saccadic differences were found for left versus right visual field presentations or for full facial versus blend stimuli. When asked to identify the presented emotion, saccades were directed to the lower face. When asked to attend to the upper face, saccades were directed to the upper face. When asked to attend to the upper face and try to identify the emotion, saccades were directed to the upper face but to a lesser degree. Thus, saccadic behavior supports the concept that there are cognitive-attentional pre-attunements when subjects visually process facial expressions. However, these pre-attunements do not fully explain the perceptual superiority of the left visual field for identifying the upper facial expression when facial blends are presented tachistoscopically. Hence other perceptual factors must be in play, such as the phenomenon of virtual scanning.
Collapse
Affiliation(s)
- Frank A Boutsen
- Department of Communication Sciences and Disorders, University of Oklahoma Health Sciences, 1200 North Stonewall Ave., Oklahoma City, OK 73117, USA
| | - Justin D Dvorak
- Department of Communication Sciences and Disorders, University of Oklahoma Health Sciences, 1200 North Stonewall Ave., Oklahoma City, OK 73117, USA
| | - Vinay K Pulusu
- Department of Neurology, University of Oklahoma Health Sciences Center, and the VA Medical Center (127), 921 NE 13th Street, Oklahoma City, OK 73104, USA
| | - Elliott D Ross
- Department of Neurology, University of Oklahoma Health Sciences Center, and the VA Medical Center (127), 921 NE 13th Street, Oklahoma City, OK 73104, USA; Department of Communication Sciences and Disorders, University of Oklahoma Health Sciences, 1200 North Stonewall Ave., Oklahoma City, OK 73117, USA.
| |
Collapse
|
49
|
Hooge I, Holmqvist K, Nyström M. The pupil is faster than the corneal reflection (CR): Are video based pupil-CR eye trackers suitable for studying detailed dynamics of eye movements? Vision Res 2016; 128:6-18. [PMID: 27656785 DOI: 10.1016/j.visres.2016.09.002] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2016] [Revised: 09/01/2016] [Accepted: 09/03/2016] [Indexed: 10/21/2022]
Abstract
Most modern video eye trackers use the p-CR (pupil minus corneal reflection, CR) technique to compensate for small relative movements between the eye tracker camera and the eye. We question whether the p-CR technique is appropriate for investigating saccade dynamics. In two experiments we examined the dynamics of the pupil, CR, and gaze signals obtained from a standard SMI Hi-Speed eye tracker. We found many differences between the pupil and CR signals, concerning the timing of saccade onset, saccade peak velocity, and the post-saccadic oscillation (PSO). We also found that pupil peak velocities were higher than CR peak velocities. Saccades in the eye tracker's gaze signal (which is constructed from p-CR) appear to be exaggerated versions of saccades in the pupil signal. We conclude that the pupil-CR technique is not suitable for studying the detailed dynamics of eye movements.
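Why a faster pupil signal distorts the p-CR gaze signal can be illustrated with two smooth position steps, one lagging the other. This toy model (the sigmoid profiles, latencies, and amplitudes are assumptions for illustration, not the paper's measurements) shows the PSO-like overshoot and inflated velocity that the subtraction produces:

```python
import numpy as np

t = np.arange(0, 0.1, 0.001)  # 100 ms sampled at 1 kHz

def sigmoid_step(t, onset, duration, amplitude):
    """Smooth position step approximating a saccade profile."""
    return amplitude / (1.0 + np.exp(-(t - onset - duration / 2) * 10 / duration))

# Illustrative assumption reflecting the paper's observation: the pupil
# center starts moving earlier than the corneal reflection.
pupil = sigmoid_step(t, onset=0.020, duration=0.030, amplitude=10.0)
cr    = sigmoid_step(t, onset=0.026, duration=0.030, amplitude=2.0)
gaze  = pupil - cr  # the p-CR signal (arbitrary units, no gain applied)

peak_vel = lambda x: np.max(np.abs(np.gradient(x, t)))

# The p-CR signal transiently overshoots its final value (PSO-like),
# and its peak velocity is inflated relative to its amplitude.
print(gaze.max() - (pupil[-1] - cr[-1]))
print(peak_vel(pupil) / pupil[-1], peak_vel(gaze) / gaze[-1])
```

Because the CR lags, the subtraction briefly reports more displacement than either raw signal settles to, which is one way the gaze signal can look like an "exaggerated" pupil saccade.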
Collapse
Affiliation(s)
- Ignace Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands.
| | - Kenneth Holmqvist
- Lund University Humanities Lab, Lund University, Helgonabacken 12, 22362 Lund, Sweden; School of Languages and Academic Literacy, Vaal Triangle Campus, North-West University, Vanderbijlpark 1900, South Africa
| | - Marcus Nyström
- Lund University Humanities Lab, Lund University, Helgonabacken 12, 22362 Lund, Sweden
| |
Collapse
|
50
|
Bremmer F, Kaminiarz A, Klingenhoefer S, Churan J. Decoding Target Distance and Saccade Amplitude from Population Activity in the Macaque Lateral Intraparietal Area (LIP). Front Integr Neurosci 2016; 10:30. [PMID: 27630547 PMCID: PMC5005376 DOI: 10.3389/fnint.2016.00030] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2016] [Accepted: 08/19/2016] [Indexed: 11/13/2022] Open
Abstract
Primates perform saccadic eye movements to bring the image of an interesting target onto the fovea. Compared to stationary targets, saccades toward moving targets are computationally more demanding, since the oculomotor system must use speed and direction information about the target, as well as knowledge about its own processing latency, to program an adequate, predictive saccade vector. In monkeys, different brain regions have been implicated in the control of voluntary saccades, among them the lateral intraparietal area (LIP). Here we asked whether activity in area LIP reflects the distance between the fovea and a saccade target, the amplitude of an upcoming saccade, or both. We recorded single-unit activity in area LIP of two macaque monkeys. First, we determined each neuron's preferred saccade direction. Then, monkeys performed visually guided saccades along the preferred direction toward either stationary or moving targets in pseudo-randomized order. LIP population activity allowed us to decode both the distance between the fovea and the saccade target and the size of the upcoming saccade. Previous work has shown comparable results for saccade direction (Graf and Andersen, 2014a,b). Hence, LIP population activity allows prediction of any two-dimensional saccade vector. Functional equivalents of macaque area LIP have been identified in humans. Accordingly, our results provide further support for the use of activity from area LIP as a neural basis for the control of an oculomotor brain-machine interface.
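As a generic illustration of decoding a continuous saccade parameter from population activity, one can fit a linear (ridge) readout to simulated amplitude-tuned units. The Gaussian tuning curves and the ridge decoder here are assumptions for the sketch, not the study's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated LIP-like population: each unit's firing rate is tuned to
# saccade amplitude with a Gaussian tuning curve plus additive noise.
n_units, n_trials = 60, 400
preferred = rng.uniform(2, 20, n_units)      # preferred amplitudes (deg)
amplitudes = rng.uniform(2, 20, n_trials)    # executed saccade amplitudes

def population_rates(amps):
    tuning = np.exp(-0.5 * ((amps[:, None] - preferred[None, :]) / 4.0) ** 2)
    return 30.0 * tuning + rng.normal(0, 1.0, (amps.size, n_units))

X = population_rates(amplitudes)

# Ridge-regression decoder: w = (Xc'Xc + lam I)^-1 Xc'y on centered data.
lam = 1.0
Xc = X - X.mean(axis=0)
yc = amplitudes - amplitudes.mean()
w = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_units), Xc.T @ yc)

# Decode held-out trials and report the RMS decoding error in degrees.
test_amps = rng.uniform(2, 20, 100)
pred = (population_rates(test_amps) - X.mean(axis=0)) @ w + amplitudes.mean()
rmse = np.sqrt(np.mean((pred - test_amps) ** 2))
print(rmse)
```

The point of the sketch is only that a tuned population carries enough information for a simple linear readout of amplitude, which is the logic behind using LIP activity for an oculomotor brain-machine interface.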
Collapse
Affiliation(s)
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
| | - Andre Kaminiarz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
| | | | - Jan Churan
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
| |
Collapse
|