1
Abstract
For over 100 years, eye movements have been studied and used as indicators of human sensory and cognitive functions. This review evaluates how eye movements contribute to our understanding of the processes that underlie decision-making. Eye movement metrics signify the visual and task contexts in which information is accumulated and weighed. They indicate the efficiency with which we evaluate the instructions for decision tasks, the timing and duration of decision formation, the expected reward associated with a decision, the accuracy of the decision outcome, and our ability to predict and feel confident about a decision. Because of their continuous nature, eye movements provide an exciting opportunity to probe decision processes noninvasively in real time. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Miriam Spering
- Department of Ophthalmology & Visual Sciences and the Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, Canada
2
Jana S, Gopal A, Murthy A. Computational Mechanisms Mediating Inhibitory Control of Coordinated Eye-Hand Movements. Brain Sci 2021; 11:607. PMID: 34068477; PMCID: PMC8150398; DOI: 10.3390/brainsci11050607.
Abstract
Significant progress has been made in understanding the computational and neural mechanisms that mediate eye and hand movements made in isolation. However, less is known about the mechanisms that control these movements when they are coordinated. Here, we outline our computational approaches using accumulation-to-threshold and race-to-threshold models to elucidate the mechanisms that initiate and inhibit these movements. We suggest that, depending on the behavioral context, the initiation and inhibition of coordinated eye-hand movements can operate in two modes: coupled and decoupled. The coupled mode operates when the task context requires a tight coupling between the effectors; a common command initiates both effectors, and a unitary inhibitory process is responsible for stopping them. Conversely, the decoupled mode operates when the task context demands weaker coupling between the effectors; separate commands initiate the eye and hand, and separate inhibitory processes are responsible for stopping them. We hypothesize that the higher-order control processes assess the behavioral context and choose the most appropriate mode. This computational mechanism can explain the heterogeneous results observed across many studies that have investigated the control of coordinated eye-hand movements and may also serve as a general framework to understand the control of complex multi-effector movements.
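The coupled and decoupled modes described above can be illustrated with a toy race-to-threshold simulation. This is a minimal sketch under invented assumptions (the rates, stop-signal delay, noise level, and threshold are arbitrary), not the authors' implementation:

```python
import random

def race_to_threshold(go_rate, stop_rate, stop_delay, threshold=1.0,
                      noise=0.05, dt=0.001, seed=0):
    """Race a GO accumulator against a delayed STOP accumulator.

    Returns ('go', t) if GO reaches threshold first, ('stopped', t) otherwise.
    """
    rng = random.Random(seed)
    go = stop = t = 0.0
    while True:
        t += dt
        go += go_rate * dt + rng.gauss(0, noise) * dt ** 0.5
        if t >= stop_delay:  # the stop signal arrives after a delay
            stop += stop_rate * dt + rng.gauss(0, noise) * dt ** 0.5
        if stop >= threshold:
            return 'stopped', t
        if go >= threshold:
            return 'go', t

# Coupled mode: a common GO command and a unitary STOP process govern
# both effectors, so eye and hand share a single outcome.
outcome, t = race_to_threshold(go_rate=3.0, stop_rate=8.0, stop_delay=0.05)
eye_coupled = hand_coupled = outcome

# Decoupled mode: separate GO commands and separate STOP processes,
# so the effectors can end up with different outcomes.
eye_dec, _ = race_to_threshold(go_rate=6.0, stop_rate=8.0, stop_delay=0.05, seed=1)
hand_dec, _ = race_to_threshold(go_rate=3.0, stop_rate=8.0, stop_delay=0.05, seed=2)
```

In the coupled mode one race settles both effectors; in the decoupled mode each effector runs its own race, so the eye may escape inhibition while the hand is stopped, or vice versa.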
Affiliation(s)
- Sumitash Jana, Department of Psychology, University of California San Diego, La Jolla, CA 92093, USA
- Atul Gopal, Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD 20814, USA
- Aditya Murthy, Centre for Neuroscience, Indian Institute of Science, Bangalore, Karnataka 560012, India
3
Coutinho JD, Lefèvre P, Blohm G. Confidence in predicted position error explains saccadic decisions during pursuit. J Neurophysiol 2020; 125:748-767. PMID: 33356899; DOI: 10.1152/jn.00492.2019.
Abstract
A fundamental problem in motor control is the coordination of complementary movement types to achieve a common goal. As a common example, humans view moving objects through coordinated pursuit and saccadic eye movements. Pursuit is initiated and continuously controlled by retinal image velocity. During pursuit, eye position may lag behind the target. This can be compensated by the discrete execution of a catch-up saccade. The decision to trigger a saccade is influenced by both position and velocity errors, and the timing of saccades can be highly variable. The observed distributions of saccade frequency and trigger time remain poorly understood, and this decision process remains imprecisely quantified. Here, we propose a predictive, probabilistic model explaining the decision to trigger saccades during pursuit to foveate moving targets. In this model, expected position error and its associated uncertainty are predicted through Bayesian inference across noisy, delayed sensory observations (Kalman filtering). This probabilistic prediction is used to estimate the confidence that a saccade is needed (quantified through log-probability ratio), triggering a saccade upon accumulating to a fixed threshold. The model qualitatively explains behavioral observations on the frequency and trigger time distributions of saccades during pursuit over a range of target motion trajectories. Furthermore, this model makes novel predictions that saccade decisions are highly sensitive to uncertainty for small predicted position errors, but this influence diminishes as the magnitude of predicted position error increases. We suggest that this predictive, confidence-based decision-making strategy represents a fundamental principle for the probabilistic neural control of coordinated movements.

NEW & NOTEWORTHY: This is the first stochastic dynamical systems model of pursuit-saccade coordination accounting for noise and delays in the sensorimotor system. The model uses Bayesian inference to predictively estimate visual motion, triggering saccades when confidence in predicted position error accumulates to a threshold. This model explains saccade frequency and trigger time distributions across target trajectories and makes novel predictions about the influence of sensory uncertainty in saccade decisions during pursuit.
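The triggering scheme the abstract describes can be sketched in one dimension: a Kalman filter tracks the eye-target position error from noisy observations, and the log-probability ratio that the error exceeds a foveal tolerance is accumulated until it crosses a fixed threshold. The parameter values below (noise variances, a 0.5° tolerance, the threshold) are invented for illustration, not the paper's fitted values:

```python
import math
import random

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate_saccade_trigger(target_speed=10.0, eye_speed=9.0, obs_noise=0.5,
                             tol=0.5, threshold=3.0, dt=0.01, t_max=10.0,
                             seed=0):
    """Kalman-filter estimate of position error; accumulate the
    log-probability ratio that |error| > tol until it hits a threshold."""
    rng = random.Random(seed)
    x_true = 0.0             # true position error (pursuit lags the target)
    x_hat, P = 0.0, 1.0      # Kalman estimate and its variance
    q, r = 0.01, obs_noise ** 2
    evidence, t = 0.0, 0.0
    while evidence < threshold:
        t += dt
        if t > t_max:
            return None      # confidence never accumulated: no saccade
        x_true += (target_speed - eye_speed) * dt    # error grows over time
        z = x_true + rng.gauss(0.0, obs_noise)       # noisy observation
        # Kalman predict + update
        x_hat += (target_speed - eye_speed) * dt
        P += q
        K = P / (P + r)
        x_hat += K * (z - x_hat)
        P *= (1.0 - K)
        # Confidence that a saccade is needed: P(|error| > tol)
        sd = math.sqrt(P)
        p_in = norm_cdf((tol - x_hat) / sd) - norm_cdf((-tol - x_hat) / sd)
        p_need = min(max(1.0 - p_in, 1e-9), 1.0 - 1e-9)
        evidence += math.log(p_need / (1.0 - p_need)) * dt
    return t  # saccade trigger time

t_sacc = simulate_saccade_trigger()
```

Because the position error grows steadily while pursuit undershoots the target, confidence eventually accumulates to threshold and a catch-up saccade is triggered.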
Affiliation(s)
- Jonathan D Coutinho, Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Philippe Lefèvre, Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Université catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Gunnar Blohm, Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
4
Fooken J, Spering M. Eye movements as a readout of sensorimotor decision processes. J Neurophysiol 2020; 123:1439-1447. PMID: 32159423; DOI: 10.1152/jn.00622.2019.
Abstract
Real-world tasks, such as avoiding obstacles, require a sequence of interdependent choices to reach accurate motor actions. Yet, most studies on primate decision making involve simple one-step choices. Here we analyze motor actions to investigate how sensorimotor decisions develop over time. In a go/no-go interception task human observers (n = 42) judged whether a briefly presented moving target would pass (interceptive hand movement required) or miss (no hand movement required) a strike box while their eye and hand movements were recorded. Go/no-go decision formation had to occur within the first few hundred milliseconds to allow time-critical interception. We found that the earliest time point at which eye movements started to differentiate actions (go versus no-go) preceded hand movement onset. Moreover, eye movements were related to different stages of decision making. Whereas higher eye velocity during smooth pursuit initiation was related to more accurate interception decisions (whether or not to act), faster pursuit maintenance was associated with more accurate timing decisions (when to act). These results indicate that pursuit initiation and maintenance are continuously linked to ongoing sensorimotor decision formation.

NEW & NOTEWORTHY: Here we show that eye movements are a continuous indicator of decision processes underlying go/no-go actions. We link different stages of decision formation to distinct oculomotor events during open- and closed-loop smooth pursuit. Critically, the earliest time point at which eye movements differentiate actions preceded hand movement onset, suggesting shared sensorimotor processing for eye and hand movements. These results emphasize the potential of studying eye movements as a readout of cognitive processes.
Affiliation(s)
- Jolande Fooken, Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
- Miriam Spering, Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Center for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, Canada
5
A neural circuit model of decision uncertainty and change-of-mind. Nat Commun 2019; 10:2287. PMID: 31123260; PMCID: PMC6533317; DOI: 10.1038/s41467-019-10316-8.
Abstract
Decision-making is often accompanied by a degree of confidence in whether a choice is correct. Decision uncertainty, or lack of confidence, may lead to change-of-mind. Studies have identified the behavioural characteristics associated with decision confidence or change-of-mind, and their neural correlates. Although several theoretical accounts have been proposed, there is no neural model that can compute decision uncertainty and explain its effects on change-of-mind. We propose a neuronal circuit model that computes decision uncertainty while accounting for a variety of behavioural and neural data on decision confidence and change-of-mind, including testable model predictions. Our theoretical analysis suggests that change-of-mind occurs due to the presence of a transient uncertainty-induced choice-neutral stable steady state and noisy fluctuations within the neuronal network. Our distributed network model indicates that the neural basis of change-of-mind is more distinctively identified in motor-based neurons. Overall, our model provides a framework that unifies decision confidence and change-of-mind.

We make decisions with varying degrees of confidence and, if our confidence in a decision falls, we may change our mind. Here, the authors present a neuronal circuit model to account for how change-of-mind occurs under particular low-confidence conditions.
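The change-of-mind phenomenon can be caricatured, far more crudely than the paper's biophysical circuit, with two mutually inhibiting accumulators: after one crosses a commitment threshold, continued noisy competition occasionally lets the rival overtake it, which counts here as a change-of-mind. All parameters below are illustrative assumptions, not values from the paper:

```python
import random

def competing_accumulators(drift_a=1.2, drift_b=1.0, inhibition=0.8,
                           noise=0.6, threshold=2.0, t_max=3.0,
                           dt=0.005, seed=0):
    """Two mutually inhibiting accumulators. Returns (choice, changed_mind)."""
    rng = random.Random(seed)
    a = b = t = 0.0
    choice, changed = None, False
    while t < t_max:
        t += dt
        da = (drift_a - inhibition * b) * dt + noise * rng.gauss(0, 1) * dt ** 0.5
        db = (drift_b - inhibition * a) * dt + noise * rng.gauss(0, 1) * dt ** 0.5
        a = max(0.0, a + da)   # firing rates cannot go negative
        b = max(0.0, b + db)
        if choice is None:     # initial commitment at threshold crossing
            if a >= threshold:
                choice = 'A'
            elif b >= threshold:
                choice = 'B'
        elif not changed:      # rival overtakes after commitment
            if (choice == 'A' and b > a) or (choice == 'B' and a > b):
                changed = True
    return choice, changed

# Across many noisy trials, only a fraction show a change-of-mind.
results = [competing_accumulators(seed=s) for s in range(200)]
decided = [r for r in results if r[0] is not None]
change_rate = sum(1 for _, c in decided if c) / max(1, len(decided))
```

This toy omits the uncertainty-monitoring population that is central to the paper's account; it only illustrates how noise in a competitive network can reverse a committed choice.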
6
Affiliation(s)
- Jolande Fooken, Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
- Miriam Spering, Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Center for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, Canada
7
Working Memory and Decision-Making in a Frontoparietal Circuit Model. J Neurosci 2017; 37:12167-12186. PMID: 29114071; DOI: 10.1523/jneurosci.0343-17.2017.
Abstract
Working memory (WM) and decision-making (DM) are fundamental cognitive functions involving a distributed interacting network of brain areas, with the posterior parietal cortex (PPC) and prefrontal cortex (PFC) at the core. However, the shared and distinct roles of these areas and the nature of their coordination in cognitive function remain poorly understood. Biophysically based computational models of cortical circuits have provided insights into the mechanisms supporting these functions, yet they have primarily focused on the local microcircuit level, raising questions about the principles for distributed cognitive computation in multiregional networks. To examine these issues, we developed a distributed circuit model of two reciprocally interacting modules representing PPC and PFC circuits. The circuit architecture includes hierarchical differences in local recurrent structure and implements reciprocal long-range projections. This parsimonious model captures a range of behavioral and neuronal features of frontoparietal circuits across multiple WM and DM paradigms. In the context of WM, both areas exhibit persistent activity, but, in response to intervening distractors, PPC transiently encodes distractors while PFC filters distractors and supports WM robustness. With regard to DM, the PPC module generates graded representations of accumulated evidence supporting target selection, while the PFC module generates more categorical responses related to action or choice. These findings suggest computational principles for distributed, hierarchical processing in cortex during cognitive function and provide a framework for extension to multiregional models.

SIGNIFICANCE STATEMENT: Working memory and decision-making are fundamental "building blocks" of cognition, and deficits in these functions are associated with neuropsychiatric disorders such as schizophrenia. These cognitive functions engage distributed networks with prefrontal cortex (PFC) and posterior parietal cortex (PPC) at the core. It is not clear, however, what the contributions of PPC and PFC are in light of the computations that subserve working memory and decision-making. We constructed a biophysical model of a reciprocally connected frontoparietal circuit that revealed shared and distinct functions for the PFC and PPC across working memory and decision-making tasks. Our parsimonious model connects circuit-level properties to cognitive functions and suggests novel design principles beyond those of local circuits for cognitive processing in multiregional brain networks.
8
Purcell BA, Palmeri TJ. Relating accumulator model parameters and neural dynamics. J Math Psychol 2017; 76:156-171. PMID: 28392584; PMCID: PMC5381950; DOI: 10.1016/j.jmp.2016.07.001.
Abstract
Accumulator models explain decision-making as an accumulation of evidence to a response threshold. Specific model parameters are associated with specific model mechanisms, such as the time when accumulation begins, the average rate of evidence accumulation, and the threshold. These mechanisms determine both the within-trial dynamics of evidence accumulation and the predicted behavior. Cognitive modelers usually infer what mechanisms vary during decision-making by seeing what parameters vary when a model is fitted to observed behavior. The recent identification of neural activity with evidence accumulation suggests that it may be possible to directly infer what mechanisms vary from an analysis of how neural dynamics vary. However, evidence accumulation is often noisy, and noise complicates the relationship between accumulator dynamics and the underlying mechanisms leading to those dynamics. To understand what kinds of inferences can be made about decision-making mechanisms based on measures of neural dynamics, we measured simulated accumulator model dynamics while systematically varying model parameters. In some cases, decision-making mechanisms can be directly inferred from dynamics, allowing us to distinguish between models that make identical behavioral predictions. In other cases, however, different parameterized mechanisms produce surprisingly similar dynamics, limiting the inferences that can be made based on measuring dynamics alone. Analyzing neural dynamics can provide a powerful tool to resolve model mimicry at the behavioral level, but we caution against drawing inferences based solely on neural analyses. Instead, simultaneous modeling of behavior and neural dynamics provides the most powerful approach to understand decision-making and likely other aspects of cognition and perception.
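The behavioral-mimicry point can be made with a simple simulation: two parameter sets, one with drift and threshold both doubled, predict nearly identical response times (RT ≈ onset + threshold/drift) yet produce different accumulation trajectories, so trajectory measurements could distinguish them. This is a generic accumulator sketch with invented parameters, not the authors' simulation code:

```python
import random

def accumulator_trial(onset, drift, threshold, noise=0.1, dt=0.001, seed=0):
    """One accumulate-to-threshold trial: returns (rt, trajectory)."""
    rng = random.Random(seed)
    x, t, traj = 0.0, 0.0, []
    while x < threshold:
        t += dt
        if t >= onset:  # accumulation begins only after the onset time
            x += drift * dt + noise * rng.gauss(0, 1) * dt ** 0.5
            x = max(x, 0.0)
        traj.append(x)
    return t, traj

# Behavioral mimicry: mean RT ~ onset + threshold/drift is the same for
# (drift=1.0, threshold=0.5) and (drift=2.0, threshold=1.0) ...
rt1, traj1 = accumulator_trial(onset=0.2, drift=1.0, threshold=0.5)
rt2, traj2 = accumulator_trial(onset=0.2, drift=2.0, threshold=1.0)
# ... but the within-trial dynamics (slope, terminal level) differ, so a
# measured trajectory could resolve which parameterization is at work.
```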
9
Target Selection Signals Influence Perceptual Decisions by Modulating the Onset and Rate of Evidence Accumulation. Curr Biol 2016; 26:496-502. DOI: 10.1016/j.cub.2015.12.049.
10
Chen X, Stuphorn V. Sequential selection of economic good and action in medial frontal cortex of macaques during value-based decisions. eLife 2015; 4. PMID: 26613409; PMCID: PMC4760954; DOI: 10.7554/elife.09418.
Abstract
Value-based decisions could rely either on the selection of desired economic goods or on the selection of the actions that will obtain the goods. We investigated this question by recording from the supplementary eye field (SEF) of monkeys during a gambling task that allowed us to distinguish chosen good from chosen action signals. Analysis of the individual neuron activity, as well as of the population state-space dynamic, showed that SEF encodes first the chosen gamble option (the desired economic good) and only ~100 ms later the saccade that will obtain it (the chosen action). The action selection is likely driven by inhibitory interactions between different SEF neurons. Our results suggest that during value-based decisions, the selection of economic goods precedes and guides the selection of actions. The two selection steps serve different functions and can therefore not compensate for each other, even when information guiding both processes is given simultaneously. DOI: http://dx.doi.org/10.7554/eLife.09418.001

Much of our decision making seems to involve selecting the best option from among those currently available, and then working out how to attain that particular outcome. However, while this might sound straightforward in principle, exactly how this process is organized within the brain is not entirely clear. One possibility is that the brain compares all the possible outcomes of a decision with each other before constructing a plan of action to achieve the most desirable of these. This is known as the 'goods-based' model of decision making. However, an alternative possibility is that the brain instead considers all the possible actions that could be performed at any given time. One specific action is then chosen based on a range of factors, including the potential outcomes that might result from each. This is an 'action-based' model of decision making. Chen and Stuphorn have now distinguished between these possibilities by training two monkeys to perform a gambling task. The animals learned to make eye movements to one of two targets on a screen to earn a reward. The identity of the targets varied between trials, with some associated with larger rewards or a higher likelihood of receiving a reward than others. The location of the targets also changed in different trials, which meant that the choice of 'action' (moving the eyes to the left or right) could be distinguished from the choice of 'goods' (the reward). By using electrodes to record from a region of the brain called the supplementary eye field, which helps to control eye movements, Chen and Stuphorn showed that the activity of neurons in this region predicted the monkeys' decision-making behavior. Crucially, it did so in two stages: neurons first encoded the reward chosen by the monkey, before subsequently encoding the action that the monkey selected to obtain that outcome. These data argue against an action-based model of decision making because outcomes are encoded before actions. However, they also argue against a purely goods-based model. This is because all possible actions are encoded by the brain (including those that are subsequently rejected), with the highest levels of activity seen for the action that is ultimately selected. The data instead support a new model of decision making, in which outcomes and actions are selected sequentially via two independent brain circuits. DOI: http://dx.doi.org/10.7554/eLife.09418.002
Affiliation(s)
- Xiaomo Chen, Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States
- Veit Stuphorn, Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States; Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, United States; Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University School of Medicine, Baltimore, United States
11
Forstmann BU, Ratcliff R, Wagenmakers EJ. Sequential Sampling Models in Cognitive Neuroscience: Advantages, Applications, and Extensions. Annu Rev Psychol 2015; 67:641-66. PMID: 26393872; DOI: 10.1146/annurev-psych-122414-033645.
Abstract
Sequential sampling models assume that people make speeded decisions by gradually accumulating noisy information until a threshold of evidence is reached. In cognitive science, one such model, the diffusion decision model, is now regularly used to decompose task performance into underlying processes such as the quality of information processing, response caution, and a priori bias. In the cognitive neurosciences, the diffusion decision model has recently been adopted as a quantitative tool to study the neural basis of decision making under time pressure. We present a selective overview of several recent applications and extensions of the diffusion decision model in the cognitive neurosciences.
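A minimal simulation of the diffusion decision model the abstract describes, with the standard parameters it names: drift rate (quality of information processing), boundary separation (response caution), starting-point bias (a priori bias), and a non-decision time. The numerical values below are illustrative assumptions:

```python
import random

def ddm_trial(drift, boundary, bias=0.5, ndt=0.3, noise=1.0, dt=0.001, rng=None):
    """One diffusion-decision-model trial.

    Evidence starts at bias * boundary and diffuses until it hits 0 ('lower')
    or boundary ('upper'); ndt is the non-decision time added to the RT.
    """
    rng = rng or random.Random()
    x, t = bias * boundary, 0.0
    while 0.0 < x < boundary:
        t += dt
        x += drift * dt + noise * rng.gauss(0, 1) * dt ** 0.5
    return ('upper' if x >= boundary else 'lower', ndt + t)

# With positive drift toward the upper boundary, most responses are 'upper',
# and raising the boundary would trade speed for accuracy.
rng = random.Random(42)
trials = [ddm_trial(drift=1.0, boundary=1.5, rng=rng) for _ in range(500)]
p_upper = sum(1 for c, _ in trials if c == 'upper') / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```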
Affiliation(s)
- B U Forstmann, Amsterdam Brain and Cognition Center, University of Amsterdam, 1018 WS Amsterdam, The Netherlands
- R Ratcliff, Department of Psychology, Ohio State University, Columbus, Ohio 43210
- E-J Wagenmakers, Department of Methodology, University of Amsterdam, 1018 WV Amsterdam, The Netherlands
12
Sajad A, Sadeh M, Keith GP, Yan X, Wang H, Crawford JD. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey. Cereb Cortex 2014; 25:3932-52. PMID: 25491118; PMCID: PMC4585524; DOI: 10.1093/cercor/bhu279.
Abstract
A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas.
Affiliation(s)
- Amirsaman Sajad, Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; Department of Biology
- Morteza Sadeh, Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; School of Kinesiology and Health Sciences
- Gerald P Keith, Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
- Xiaogang Yan, Centre for Vision Research; Canadian Action and Perception Network (CAPnet)
- Hongying Wang, Centre for Vision Research; Canadian Action and Perception Network (CAPnet)
- John Douglas Crawford, Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; Department of Biology; School of Kinesiology and Health Sciences; Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
13
14
Affiliation(s)
- Christian C. Ruff, Laboratory for Social and Neural Systems Research (SNS Lab), Department of Economics, University of Zurich, Zurich, Switzerland