1. Egger SW, Keemink SW, Goldman MS, Britten KH. Context-dependence of deterministic and nondeterministic contributions to closed-loop steering control. bioRxiv 2024:2024.07.26.605325. PMID: 39131368; PMCID: PMC11312469; DOI: 10.1101/2024.07.26.605325.
Abstract
In natural circumstances, sensory systems operate in a closed loop with motor output, whereby actions shape subsequent sensory experiences. A prime example of this is the sensorimotor processing required to align one's direction of travel, or heading, with one's goal, a behavior we refer to as steering. In steering, motor outputs work to eliminate errors between the direction of heading and the goal, modifying subsequent errors in the process. The closed-loop nature of the behavior makes it challenging to determine how deterministic and nondeterministic processes contribute to behavior. We overcome this by applying a nonparametric, linear kernel-based analysis to behavioral data of monkeys steering through a virtual environment in two experimental contexts. In a given context, the results were consistent with previous work that described the transformation as a second-order linear system. Classically, the parameters of such second-order models are associated with physical properties of the limb such as viscosity and stiffness that are commonly assumed to be approximately constant. By contrast, we found that the fitted kernels differed strongly across tasks in these and other parameters, suggesting context-dependent changes in neural and biomechanical processes. We additionally fit residuals to a simple noise model and found that the form of the noise was highly conserved across both contexts and animals. Strikingly, the fitted noise also closely matched that found previously in a human steering task. Altogether, this work presents a kernel-based analysis that characterizes the context-dependence of deterministic and nondeterministic components of a closed-loop sensorimotor task.
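The nonparametric linear kernel estimation described in this abstract can be illustrated with a minimal least-squares sketch. This is a generic illustration of the technique, not the authors' code; the function name, the ridge regularizer, and the simulated exponentially decaying kernel are all assumptions for the demo:

```python
import numpy as np

def fit_linear_kernel(stimulus, response, K=50, ridge=1e-6):
    """Estimate a length-K causal kernel k such that
        response[t] ~= sum_{j < K} k[j] * stimulus[t - j],
    via ridge-regularized least squares on a lagged design matrix."""
    T = len(stimulus)
    X = np.zeros((T - K + 1, K))
    for j in range(K):
        X[:, j] = stimulus[K - 1 - j : T - j]  # column j holds lag-j samples
    y = response[K - 1:]
    return np.linalg.solve(X.T @ X + ridge * np.eye(K), X.T @ y)

# Hypothetical demo: recover a known exponentially decaying kernel
rng = np.random.default_rng(0)
true_k = np.exp(-np.arange(20) / 5.0)          # simulated "ground truth" kernel
stim = rng.standard_normal(2000)               # white-noise input signal
resp = np.convolve(stim, true_k)[:2000]        # noiseless convolved response
k_hat = fit_linear_kernel(stim, resp, K=20)
```

With noiseless simulated data, the lagged least-squares solve recovers the kernel essentially exactly; on real behavioral data, the ridge term trades variance for bias.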
Affiliation(s)
- Seth W Egger: Center for Neuroscience, University of California, Davis
- Sander W Keemink: Department of Neurobiology, Physiology and Behavior, University of California, Davis
- Mark S Goldman: Center for Neuroscience; Department of Neurobiology, Physiology and Behavior; Department of Ophthalmology and Vision Science, University of California, Davis
- Kenneth H Britten: Center for Neuroscience; Department of Neurobiology, Physiology and Behavior, University of California, Davis
2. van Helvert MJL, Selen LPJ, van Beers RJ, Medendorp WP. Predictive steering: integration of artificial motor signals in self-motion estimation. J Neurophysiol 2022;128:1395-1408. PMID: 36350058; DOI: 10.1152/jn.00248.2022.
Abstract
The brain's computations for active and passive self-motion estimation can be unified with a single model that optimally combines vestibular and visual signals with sensory predictions based on efference copies. It is unknown whether this theoretical framework also applies to the integration of artificial motor signals, such as those that occur when driving a car, or whether self-motion estimation in this situation relies solely on feedback control. Here, we examined whether training humans to control a self-motion platform leads to the construction of an accurate internal model of the mapping between the steering movement and the vestibular reafference. Participants (n = 15) sat on a linear motion platform and actively controlled the platform's velocity using a steering wheel to translate their body to a memorized visual target (motion condition). We compared their steering behavior to that of participants (n = 15) who remained stationary and instead aligned a nonvisible line with the target (stationary condition). To probe learning, the gain between the steering wheel angle and the platform or line velocity changed abruptly twice during the experiment. These gain changes were virtually undetectable in the displacement error in the motion condition, whereas clear deviations were observed in the stationary condition, showing that participants in the motion condition made within-trial changes to their steering behavior. We conclude that vestibular feedback allows not only the online control of steering but also a rapid adaptation to the gain changes to update the brain's internal model of the mapping between the steering movement and the vestibular reafference.

NEW & NOTEWORTHY: Perception of self-motion is known to depend on the integration of sensory signals and, when the motion is self-generated, the predicted sensory reafference based on motor efference copies. Here we show, using a closed-loop steering experiment with a direct coupling between the steering movement and the vestibular self-motion feedback, that humans are also able to integrate artificial motor signals, like the motor signals that occur when driving a car.
Affiliation(s)
- Milou J L van Helvert: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Luc P J Selen: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Robert J van Beers: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- W Pieter Medendorp: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
3. Pasternak T, Tadin D. Linking Neuronal Direction Selectivity to Perceptual Decisions About Visual Motion. Annu Rev Vis Sci 2021;6:335-362. PMID: 32936737; DOI: 10.1146/annurev-vision-121219-081816.
Abstract
Psychophysical and neurophysiological studies of responses to visual motion have converged on a consistent set of general principles that characterize visual processing of motion information. Both types of approaches have shown that the direction and speed of target motion are among the most important encoded stimulus properties, revealing many parallels between psychophysical and physiological responses to motion. Motivated by these parallels, this review focuses largely on more direct links between the key feature of the neuronal response to motion, direction selectivity, and its utilization in memory-guided perceptual decisions. These links were established not only during neuronal recordings in monkeys performing direction discriminations, but also by examining perceptual effects of widespread elimination of cortical direction selectivity produced by motion deprivation during development. Other approaches, such as microstimulation and lesions, have documented the importance of direction-selective activity in the areas that are active during memory-guided direction comparisons, area MT and the prefrontal cortex, revealing their likely interactions during behavioral tasks.
Affiliation(s)
- Tatiana Pasternak: Department of Neuroscience; Department of Brain and Cognitive Sciences; Center for Visual Science; Del Monte Institute for Neuroscience, University of Rochester, Rochester, New York, USA
- Duje Tadin: Department of Neuroscience; Department of Brain and Cognitive Sciences; Center for Visual Science; Del Monte Institute for Neuroscience; Department of Ophthalmology, University of Rochester, Rochester, New York, USA
4. Virtual reality method to analyze visual recognition in mice. PLoS One 2018;13:e0196563. PMID: 29768429; PMCID: PMC5955493; DOI: 10.1371/journal.pone.0196563.
Abstract
Behavioral tests have been extensively used to measure the visual function of mice. To determine how precisely mice perceive certain visual cues, it is necessary to have a quantifiable measurement of their behavioral responses. Recently, virtual reality tests have been utilized for a variety of purposes, from analyzing hippocampal cell functionality to identifying visual acuity. Despite the widespread use of these tests, the training required for the recognition of a variety of different visual targets, and the resulting performance on the behavioral tests, have not been thoroughly characterized. We have developed a virtual reality behavior testing approach that can assay a variety of different aspects of visual perception, including color/luminance and motion detection. When tested for the ability to detect a color/luminance target or a moving target, mice were able to discern the designated target after 9 days of continuous training. However, the quality of their performance was significantly affected by the complexity of the visual target and their ability to navigate on a spherical treadmill. Importantly, mice retained memory of their visual recognition for at least three weeks after the end of their behavioral training.
5. Makin JG, Dichter BK, Sabes PN. Learning to Estimate Dynamical State with Probabilistic Population Codes. PLoS Comput Biol 2015;11:e1004554. PMID: 26540152; PMCID: PMC4634970; DOI: 10.1371/journal.pcbi.1004554.
Abstract
Tracking moving objects, including one's own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well-known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, "probabilistic population codes." We show that a recurrent neural network, a modified form of an exponential family harmonium (EFH), that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.

A basic task for animals is to track objects (predators, prey, even their own limbs) as they move through the world. Because the position estimates provided by the senses are not error-free, higher levels of performance can be, and are, achieved when the velocity and acceleration, as well as the position, of the object are taken into account. Likewise, tracking of limbs under voluntary control can be improved by considering the motor command that is (partially) responsible for their trajectory. Engineers have built tools to solve precisely these problems, and even to learn dynamical features of the object to be tracked. How does the brain do it? We show how artificial networks of neurons can learn to solve this task simply by trying to become good predictive models of their incoming data, as long as some of those data are the activities of the neurons themselves at a fixed time delay, while the remainder (imperfectly) report the current position. The tracking scheme the network learns (keeping track of past positions), the corresponding receptive fields, and the manner in which they are learned provide predictions for brain areas involved in tracking, such as the posterior parietal cortex.
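Since this abstract benchmarks the network against the Kalman filter, a minimal sketch of that optimal baseline may help. This is a generic textbook implementation under the stated linear-Gaussian assumptions, not the paper's code; the function name and the constant-velocity demo parameters are assumptions:

```python
import numpy as np

def kalman_filter(A, C, Q, R, mu0, P0, ys):
    """Filter a linear-Gaussian system:
        x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)   (latent dynamics)
        y_t = C x_t + v_t,      v_t ~ N(0, R)   (noisy observation)
    Returns the sequence of filtered state means."""
    mu, P = np.asarray(mu0, float), np.asarray(P0, float)
    I = np.eye(len(mu))
    means = []
    for y in ys:
        mu_pred = A @ mu                       # predict the next state...
        P_pred = A @ P @ A.T + Q               # ...and its uncertainty
        S = C @ P_pred @ C.T + R               # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)    # Kalman gain
        mu = mu_pred + K @ (y - C @ mu_pred)   # correct with the observation
        P = (I - K @ C) @ P_pred
        means.append(mu)
    return np.array(means)

# Hypothetical demo: track position + velocity from noisy position readings
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity dynamics
C = np.array([[1.0, 0.0]])               # observe position only
Q, R = 0.01 * np.eye(2), np.array([[1.0]])
rng = np.random.default_rng(1)
x, xs, ys = np.array([0.0, 0.5]), [], []
for _ in range(300):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    xs.append(x.copy())
    ys.append(C @ x + rng.normal(0.0, 1.0, 1))
means = kalman_filter(A, C, Q, R, np.zeros(2), np.eye(2), ys)
```

Like the network described in the abstract, the filter never observes velocity directly, yet its second state component comes to track it through the prediction step.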
Affiliation(s)
- Joseph G. Makin: Center for Integrative Neuroscience; Department of Physiology, University of California, San Francisco, California, USA
- Benjamin K. Dichter: Center for Integrative Neuroscience, University of California, San Francisco; UC Berkeley-UCSF Graduate Program in Bioengineering, USA
- Philip N. Sabes: Center for Integrative Neuroscience; Department of Physiology, University of California, San Francisco; UC Berkeley-UCSF Graduate Program in Bioengineering, USA