1. Langlois T, Charlton JA, Goris RLT. Bayesian inference by visuomotor neurons in prefrontal cortex. bioRxiv [Preprint] 2024:2024.09.23.614567. PMID: 39386660; PMCID: PMC11463605; DOI: 10.1101/2024.09.23.614567.
Abstract
Perceptual judgments of the environment emerge from the concerted activity of neural populations in decision-making areas downstream of sensory cortex [1, 2, 3]. When the sensory input is ambiguous, perceptual judgments can be biased by prior expectations shaped by environmental regularities [4, 5, 6, 7, 8, 9, 10, 11]. These effects are examples of Bayesian inference, a reasoning method in which prior knowledge is leveraged to optimize uncertain decisions [12, 13]. However, it is not known how decision-making circuits combine sensory signals and prior expectations to form a perceptual decision. Here, we study neural population activity in the prefrontal cortex of macaque monkeys trained to report perceptual judgments of ambiguous visual stimuli under two different stimulus distributions. We analyze the component of the neural population response that represents the formation of the perceptual decision (the decision variable, DV) and find that its dynamical evolution reflects the integration of sensory signals and prior expectations. Prior expectations impact the DV's trajectory both before and during stimulus presentation, such that DV trajectories with a smaller dynamic range result in more biased and less sensitive perceptual decisions. These results reveal a mechanism by which prefrontal circuits can execute Bayesian inference.
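The Bayesian combination described here reduces, in the simplest case, to adding a log prior odds to a log-likelihood ratio. As a minimal sketch (a Gaussian evidence model with illustrative means of ±1 and noise sigma, not the authors' analysis of prefrontal activity):

```python
import math

def posterior_log_odds(evidence, prior_a=0.5, sigma=1.0):
    """Posterior log-odds that stimulus A (evidence mean +1) rather than
    B (mean -1) produced a noisy Gaussian evidence sample."""
    log_likelihood_ratio = 2.0 * evidence / sigma ** 2
    log_prior_odds = math.log(prior_a / (1.0 - prior_a))
    return log_likelihood_ratio + log_prior_odds

# A fully ambiguous stimulus (evidence = 0) is decided by the prior alone.
assert posterior_log_odds(0.0, prior_a=0.5) == 0.0
assert posterior_log_odds(0.0, prior_a=0.75) > 0.0
```

With a flat prior the ambiguous stimulus yields zero log-odds; a biased prior alone tips the decision, mirroring how prior expectations bias judgments when sensory input is ambiguous.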

2. Kim J, Gim S, Yoo SBM, Woo CW. A computational mechanism of cue-stimulus integration for pain in the brain. Sci Adv 2024; 10:eado8230. PMID: 39259795; PMCID: PMC11389792; DOI: 10.1126/sciadv.ado8230.
Abstract
The brain integrates information from pain-predictive cues and noxious inputs to construct the pain experience. Although previous studies have identified neural encodings of individual pain components, how these components are integrated remains elusive. Here, using a cue-induced pain task, we examined the temporal evolution of functional magnetic resonance imaging activity within a state space whose axes represent individual voxel activities. By analyzing the features of these activities at the large-scale network level, we demonstrated that overall brain networks preserve both cue and stimulus information in their respective subspaces within the state space. However, only higher-order brain networks, including the limbic and default mode networks, could reconstruct the pattern of participants' reported pain by linear summation of subspace activities, providing evidence for the integration of cue and stimulus information. These results suggest a hierarchical organization of the brain for processing pain components and elucidate the mechanism of their integration underlying our pain perception.
Affiliation(s)
- Jungwoo Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon, South Korea
- Suhwan Gim
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon, South Korea
- Seng Bum Michael Yoo
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon, South Korea
- Department of Neurosurgery and McNair Scholar Program, Baylor College of Medicine, Houston, TX 77030, USA
- Choong-Wan Woo
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon, South Korea
- Life-inspired Neural Network for Prediction and Optimization Research Group, Suwon, South Korea

3. Eisen AJ, Kozachkov L, Bastos AM, Donoghue JA, Mahnke MK, Brincat SL, Chandra S, Tauber J, Brown EN, Fiete IR, Miller EK. Propofol anesthesia destabilizes neural dynamics across cortex. Neuron 2024; 112:2799-2813.e9. PMID: 39013467; DOI: 10.1016/j.neuron.2024.06.011.
Abstract
Every day, hundreds of thousands of people undergo general anesthesia. One hypothesis is that anesthesia disrupts dynamic stability: the ability of the brain to balance excitability against the need to remain stable and controllable. To test this hypothesis, we developed a method for quantifying changes in population-level dynamic stability in complex systems: delayed linear analysis for stability estimation (DeLASE). We used propofol to transition animals between the awake state and anesthetized unconsciousness and applied DeLASE to local field potentials (LFPs) recorded across macaque cortex. We found that neural dynamics were more unstable during unconsciousness than in the awake state, and cortical trajectories mirrored predictions from destabilized linear systems. We mimicked the effect of propofol in simulated neural networks by increasing inhibitory tone, which likewise destabilized the networks, as observed in the neural data. Our results suggest that anesthesia disrupts the dynamic stability required for consciousness.
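The general recipe of fitting linear dynamics to recorded activity and reading stability off the fitted coefficients can be illustrated in one dimension. This is a deliberately simplified sketch with invented parameters, a scalar least-squares AR(1) fit rather than the authors' DeLASE method (which uses delay embeddings of high-dimensional LFPs):

```python
import random

def estimate_stability(x):
    """Least-squares fit of x[t+1] ~= a * x[t]; |a| < 1 means the fitted
    linear dynamics decay (stable), |a| > 1 means they grow (unstable)."""
    num = sum(x[t] * x[t + 1] for t in range(len(x) - 1))
    den = sum(v * v for v in x[:-1])
    return num / den

def simulate_ar1(a, n=5000, noise=0.1, seed=0):
    """Noisy linear dynamics with ground-truth coefficient a."""
    rng = random.Random(seed)
    x, out = 1.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, noise)
        out.append(x)
    return out

a_hat = estimate_stability(simulate_ar1(0.9))
assert abs(a_hat - 0.9) < 0.05   # recovers a stable (sub-unit) coefficient
```

The same idea in higher dimensions replaces `a` with a fitted matrix and checks how close its eigenvalues sit to the stability boundary.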
Affiliation(s)
- Adam J Eisen
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; The K. Lisa Yang Integrative Computational Neuroscience Center, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Leo Kozachkov
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; The K. Lisa Yang Integrative Computational Neuroscience Center, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- André M Bastos
- Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37235, USA
- Jacob A Donoghue
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Beacon Biosignals, Boston, MA 02114, USA
- Meredith K Mahnke
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Scott L Brincat
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Sarthak Chandra
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; The K. Lisa Yang Integrative Computational Neuroscience Center, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- John Tauber
- Department of Mathematics and Statistics, Boston University, Boston, MA 02215, USA
- Emery N Brown
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA 02114, USA; Division of Sleep Medicine, Harvard Medical School, Boston, MA 02115, USA
- Ila R Fiete
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; The K. Lisa Yang Integrative Computational Neuroscience Center, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
- Earl K Miller
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.

4. Sabatini DA, Kaufman MT. Reach-dependent reorientation of rotational dynamics in motor cortex. Nat Commun 2024; 15:7007. PMID: 39143078; PMCID: PMC11325044; DOI: 10.1038/s41467-024-51308-7.
Abstract
During reaching, neurons in motor cortex exhibit complex, time-varying activity patterns. Though single-neuron activity correlates with movement parameters, movement correlations explain neural activity only partially. Neural responses also reflect population-level dynamics thought to generate outputs. These dynamics have previously been described as "rotational," such that activity orbits in neural state space. Here, we reanalyze reaching datasets from male rhesus macaques and find two essential features that cannot be accounted for with standard dynamics models. First, the planes in which rotations occur differ for different reaches. Second, this variation in planes reflects the overall location of activity in neural state space. Our "location-dependent rotations" model fits nearly all motor cortex activity during reaching, and high-quality decoding of reach kinematics reveals a quasilinear relationship with spiking. Varying rotational planes allows motor cortex to produce richer outputs than possible under previous models. Finally, our model links representational and dynamical ideas: representation is present in the state space location, which dynamics then convert into time-varying command signals.
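The "rotational" dynamics that standard models assume are easy to state concretely. The sketch below uses a fixed two-dimensional rotation with an arbitrary angle (whereas the paper's point is that the rotation plane itself varies with the reach):

```python
import math

def rotate(state, theta):
    """One step of planar rotational dynamics in neural state space."""
    x, y = state
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

# A pure rotation makes activity orbit: distance from the origin is preserved.
state = (1.0, 0.0)
for _ in range(100):
    state = rotate(state, 0.1)
final_radius = math.hypot(*state)
assert abs(final_radius - 1.0) < 1e-9
```

Under the location-dependent model, the plane spanned by such orbits would itself depend on where in state space the activity sits.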
Affiliation(s)
- David A Sabatini
- Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, IL, 60637, USA
- Neuroscience Institute, The University of Chicago, Chicago, IL, 60637, USA
- Matthew T Kaufman
- Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, IL, 60637, USA.
- Neuroscience Institute, The University of Chicago, Chicago, IL, 60637, USA.

5. Jurewicz K, Sleezer BJ, Mehta PS, Hayden BY, Ebitz RB. Irrational choices via a curvilinear representational geometry for value. Nat Commun 2024; 15:6424. PMID: 39080250; PMCID: PMC11289086; DOI: 10.1038/s41467-024-49568-4.
Abstract
We make decisions by comparing values, but it is not yet clear how value is represented in the brain. Many models assume, if only implicitly, that the representational geometry of value is linear. However, in part due to a historical focus on noisy single neurons rather than neuronal populations, this hypothesis has not been rigorously tested. Here, we examine the representational geometry of value in the ventromedial prefrontal cortex (vmPFC), a part of the brain linked to economic decision-making, in two male rhesus macaques. We find that values are encoded along a curved manifold in vmPFC. This curvilinear geometry predicts a specific pattern of irrational decision-making: decision-makers will make worse choices when an irrelevant decoy option is worse in value than when it is better. We observe exactly this type of irrational choice in behavior. Together, these results suggest not only that the representational geometry of value is nonlinear, but also that this nonlinearity could impose bounds on rational decision-making.
Affiliation(s)
- Katarzyna Jurewicz
- Department of Neurosciences, Faculté de médecine, and Centre interdisciplinaire de recherche sur le cerveau et l'apprentissage, Université de Montréal, Montréal, QC, Canada
- Department of Physiology, Faculty of Medicine and Health Sciences, McGill University, Montréal, QC, Canada
- Brianna J Sleezer
- Department of Neuroscience, Center for Magnetic Resonance Research, and Center for Neuroengineering, University of Minnesota, Minneapolis, MN, USA
- Priyanka S Mehta
- Department of Neuroscience, Center for Magnetic Resonance Research, and Center for Neuroengineering, University of Minnesota, Minneapolis, MN, USA
- Psychology Program, Department of Human Behavior, Justice, and Diversity, University of Wisconsin, Superior, Superior, WI, USA
- Benjamin Y Hayden
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- R Becket Ebitz
- Department of Neurosciences, Faculté de médecine, and Centre interdisciplinaire de recherche sur le cerveau et l'apprentissage, Université de Montréal, Montréal, QC, Canada.

6. Horrocks EAB, Rodrigues FR, Saleem AB. Flexible neural population dynamics govern the speed and stability of sensory encoding in mouse visual cortex. Nat Commun 2024; 15:6415. PMID: 39080254; PMCID: PMC11289260; DOI: 10.1038/s41467-024-50563-y.
Abstract
Time courses of neural responses underlie real-time sensory processing and perception. How these temporal dynamics change may be fundamental to how sensory systems adapt to different perceptual demands. By simultaneously recording from hundreds of neurons in mouse primary visual cortex, we examined neural population responses to visual stimuli at sub-second timescales during different behavioural states. We discovered that during active behavioural states characterised by locomotion, single neurons shift from transient to sustained response modes, facilitating the rapid emergence of visual stimulus tuning. Differences in single-neuron response dynamics were associated with changes in the temporal dynamics of neural correlations, including faster stabilisation of stimulus-evoked changes in the structure of correlations during locomotion. Using factor analysis, we examined the temporal dynamics of latent population responses and discovered that trajectories of population activity make more direct transitions between baseline and stimulus-encoding neural states during locomotion. This could be partly explained by dampening of oscillatory dynamics present during stationary behavioural states. Functionally, changes in temporal response dynamics collectively enabled faster, more stable, and more efficient encoding of new visual information during locomotion. These findings reveal a principle of how sensory systems adapt to perceptual demands, where flexible neural population dynamics govern the speed and stability of sensory encoding.
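The transient-versus-sustained contrast can be caricatured with two toy response time courses; the time constants below are invented for illustration and are not fitted to the paper's data:

```python
import math

def firing_rate(t, mode):
    """Toy single-neuron response to a stimulus at t = 0 s: a fast rise
    followed either by decay (transient mode) or by a held level
    (sustained mode). Time constants are invented for illustration."""
    onset = 1.0 - math.exp(-t / 0.05)
    if mode == "sustained":
        return onset
    return onset * math.exp(-t / 0.15)

# Late in the stimulus, only the sustained mode still carries the signal.
assert firing_rate(1.0, "sustained") > 0.9
assert firing_rate(1.0, "transient") < 0.01
```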
Affiliation(s)
- Edward A B Horrocks
- Institute of Behavioural Neuroscience, University College London, London, WC1V 0AP, UK.
- Fabio R Rodrigues
- Institute of Behavioural Neuroscience, University College London, London, WC1V 0AP, UK
- Aman B Saleem
- Institute of Behavioural Neuroscience, University College London, London, WC1V 0AP, UK.

7. Serrano-Fernández L, Beirán M, Parga N. Emergent perceptual biases from state-space geometry in trained spiking recurrent neural networks. Cell Rep 2024; 43:114412. PMID: 38968075; DOI: 10.1016/j.celrep.2024.114412.
Abstract
A stimulus held in working memory is perceived as contracted toward the average stimulus. This contraction bias has been extensively studied in psychophysics, but little is known about its origin from neural activity. By training recurrent networks of spiking neurons to discriminate temporal intervals, we explored the causes of this bias and how behavior relates to population firing activity. We found that the trained networks exhibited animal-like behavior. Various geometric features of neural trajectories in state space encoded warped representations of the durations of the first interval modulated by sensory history. Formulating a normative model, we showed that these representations conveyed a Bayesian estimate of the interval durations, thus relating activity and behavior. Importantly, our findings demonstrate that Bayesian computations already occur during the sensory phase of the first stimulus and persist throughout its maintenance in working memory, until the time of stimulus comparison.
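The contraction bias that motivates the study falls directly out of Gaussian Bayesian estimation: the posterior mean is a precision-weighted average of the observation and the prior mean. A minimal sketch (interval units and noise levels are arbitrary choices, not values from the paper):

```python
def bayes_estimate(observed, prior_mean, sigma_obs, sigma_prior):
    """Posterior mean for a Gaussian prior over intervals and a Gaussian
    likelihood: a precision-weighted average that contracts noisy
    observations toward the prior mean."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_obs ** 2)
    return w * observed + (1.0 - w) * prior_mean

# Short intervals are overestimated and long ones underestimated,
# reproducing the contraction bias.
assert bayes_estimate(100, prior_mean=300, sigma_obs=50, sigma_prior=100) > 100
assert bayes_estimate(500, prior_mean=300, sigma_obs=50, sigma_prior=100) < 500
```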
Affiliation(s)
- Luis Serrano-Fernández
- Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Manuel Beirán
- Center for Theoretical Neuroscience, Zuckerman Institute, Columbia University, New York, NY, USA
- Néstor Parga
- Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain.

8. Englitz B, Akram S, Elhilali M, Shamma S. Decoding contextual influences on auditory perception from primary auditory cortex. bioRxiv [Preprint] 2024:2023.12.24.573229. PMID: 38187523; PMCID: PMC10769425; DOI: 10.1101/2023.12.24.573229.
Abstract
Perception can be highly dependent on stimulus context, but whether and how sensory areas encode that context remains uncertain. We used an ambiguous auditory stimulus, a tritone pair, to investigate the neural activity associated with a preceding contextual stimulus that strongly influenced the tritone pair's perception: either as an ascending or a descending step in pitch. We recorded single-unit responses from a population of auditory cortical cells in awake ferrets listening to the tritone pairs preceded by the contextual stimulus. We find that the responses adapt locally to the contextual stimulus, consistent with human MEG recordings from the auditory cortex under the same conditions. Decoding the population responses demonstrates that cells responding to pitch-class changes predict the context-sensitive percept of the tritone pairs well. Conversely, decoding the individual pitch-class representations and taking their distance in the circular Shepard tone space predicts the opposite of the percept. The various percepts can be readily captured and explained by a neural model of cortical activity based on populations of adapting pitch-class and pitch-class-direction cells, aligned with the neurophysiological responses. Together, these decoding and model results suggest that contextual influences on perception may already be encoded at the level of the primary sensory cortices, reflecting basic neural response properties commonly found in these areas.

9. Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. PMID: 38553340; DOI: 10.1016/j.tics.2024.03.003.
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France.
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA

10. Bredenberg C, Savin C. Desiderata for normative models of synaptic plasticity. Neural Comput 2024; 36:1245-1285. PMID: 38776950; DOI: 10.1162/neco_a_01671.
Abstract
Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, are designed to ensure that a given model demonstrates a clear link between plasticity and adaptive behavior, is consistent with known biological evidence about neural plasticity, and yields specific, testable predictions. As a prototype, we include a detailed analysis of the REINFORCE algorithm. We also discuss how new models have begun to improve on the identified criteria and suggest avenues for further development. Overall, we provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
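Since the review uses REINFORCE as its prototype, a minimal instance may be useful: a one-parameter Bernoulli policy updated with the score-function rule, theta += lr * reward * d log pi / d theta. The task and learning rate here are invented for illustration:

```python
import math
import random

def reinforce_step(theta, lr=0.1):
    """One REINFORCE update for a one-parameter Bernoulli policy.
    theta is the logit of choosing action 1; reward is 1 only for
    action 1, so the policy should learn to prefer that action."""
    p = 1.0 / (1.0 + math.exp(-theta))     # policy: P(action = 1)
    action = 1 if random.random() < p else 0
    reward = 1.0 if action == 1 else 0.0
    grad_log_pi = action - p               # d log pi(action) / d theta
    return theta + lr * reward * grad_log_pi

random.seed(1)
theta = 0.0
for _ in range(2000):
    theta = reinforce_step(theta)
assert theta > 1.0   # the policy has shifted toward the rewarded action
```

The update's link between a scalar reward and a local eligibility term (`action - p`) is precisely the kind of plasticity-behavior link the desiderata are meant to test.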
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, U.S.A
- Mila-Quebec AI Institute, Montréal, QC H2S 3H1, Canada
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, U.S.A
- Center for Data Science, New York University, New York, NY 10011, U.S.A.

11. Fischer BJ, Shadron K, Ferger R, Peña JL. Single trial Bayesian inference by population vector readout in the barn owl's sound localization system. PLoS One 2024; 19:e0303843. PMID: 38771860; PMCID: PMC11108143; DOI: 10.1371/journal.pone.0303843.
Abstract
Bayesian models have proven effective in characterizing perception, behavior, and neural encoding across diverse species and systems. The neural implementation of Bayesian inference in the barn owl's sound localization system and behavior has been previously explained by a non-uniform population code model. This model specifies the neural population activity pattern required for a population vector readout to match the optimal Bayesian estimate. While prior analyses focused on trial-averaged comparisons of model predictions with behavior and single-neuron responses, it remains unknown whether this model can accurately approximate Bayesian inference on single trials under varying sensory reliability, a fundamental condition for natural perception and behavior. In this study, we utilized mathematical analysis and simulations to demonstrate that decoding a non-uniform population code via a population vector readout approximates the Bayesian estimate on single trials for varying sensory reliabilities. Our findings provide additional support for the non-uniform population code model as a viable explanation for the barn owl's sound localization pathway and behavior.
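A population vector readout itself is compact to write down. The sketch below uses a made-up set of preferred directions and a symmetric activity bump (a uniform toy code, not the owl's non-uniform population code analyzed in the paper):

```python
import math

def population_vector_readout(rates, preferred_dirs_deg):
    """Decode a direction as the angle of the rate-weighted vector sum
    of the cells' preferred directions."""
    x = sum(r * math.cos(math.radians(d)) for r, d in zip(rates, preferred_dirs_deg))
    y = sum(r * math.sin(math.radians(d)) for r, d in zip(rates, preferred_dirs_deg))
    return math.degrees(math.atan2(y, x))

preferred = [-60.0, -30.0, 0.0, 30.0, 60.0]
rates = [0.1, 0.6, 1.0, 0.6, 0.1]        # activity bump centred on 0 degrees
decoded = population_vector_readout(rates, preferred)
assert abs(decoded) < 1e-6
```

The paper's point is that when the population code is appropriately non-uniform, this same readout approximates the Bayesian estimate even on single trials.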
Affiliation(s)
- Brian J. Fischer
- Department of Mathematics, Seattle University, Seattle, Washington, United States of America
- Keanu Shadron
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Roland Ferger
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- José L. Peña
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America

12. Peviani VC, Miller LE, Medendorp WP. Biases in hand perception are driven by somatosensory computations, not a distorted hand model. Curr Biol 2024; 34:2238-2246.e5. PMID: 38718799; DOI: 10.1016/j.cub.2024.04.010.
Abstract
To sense and interact with objects in the environment, we effortlessly configure our fingertips at desired locations. It is therefore reasonable to assume that the underlying control mechanisms rely on accurate knowledge about the structure and spatial dimensions of our hand and fingers. This intuition, however, is challenged by years of research showing drastic biases in the perception of finger geometry [1, 2, 3, 4, 5]. This perceptual bias has been taken as evidence that the brain's internal representation of the body's geometry is distorted [6], leading to an apparent paradox regarding the skillfulness of our actions [7]. Here, we propose an alternative explanation of the biases in hand perception: they are the result of the Bayesian integration of noisy, but unbiased, somatosensory signals about finger geometry and posture. To address this hypothesis, we combined Bayesian reverse engineering with behavioral experimentation on joint and fingertip localization of the index finger. We modeled the Bayesian integration either in sensory or in space-based coordinates, showing that the latter model variant led to biases in finger perception despite accurate representation of finger length. Behavioral measures of joint and fingertip localization responses showed similar biases, which were well fitted by the space-based, but not the sensory-based, model variant. The space-based model variant also outperformed a distorted hand model with built-in geometric biases. In total, our results suggest that perceptual distortions of finger geometry do not reflect a distorted hand model but originate from near-optimal Bayesian inference on somatosensory signals.
Affiliation(s)
- Valeria C Peviani
- Donders Institute for Cognition and Behavior, Radboud University, Nijmegen 6525 GD, the Netherlands.
- Luke E Miller
- Donders Institute for Cognition and Behavior, Radboud University, Nijmegen 6525 GD, the Netherlands
- W Pieter Medendorp
- Donders Institute for Cognition and Behavior, Radboud University, Nijmegen 6525 GD, the Netherlands

13. Matsumura Y, Roach NW, Heron J, Miyazaki M. Body-part specificity for learning of multiple prior distributions in human coincidence timing. NPJ Sci Learn 2024; 9:34. PMID: 38698023; PMCID: PMC11066023; DOI: 10.1038/s41539-024-00241-x.
Abstract
During timing tasks, the brain learns the statistical distribution of target intervals and integrates this prior knowledge with sensory inputs to optimise task performance. Daily events can have different temporal statistics (e.g., fastball/slowball in baseball batting), making it important to learn and retain multiple priors. However, the rules governing this process are not yet understood. Here, we demonstrate that the learning of multiple prior distributions in a coincidence timing task is characterised by body-part specificity. In our experiments, two prior distributions (short and long intervals) were imposed on participants. When using only one body part for timing responses, regardless of the priors, participants learned a single prior by generalising over the two distributions. However, when the two priors were assigned to different body parts, participants concurrently learned the two independent priors. Moreover, body-part-specific prior acquisition was faster when the priors were assigned to anatomically distant body parts (e.g., hand/foot) than when they were assigned to close body parts (e.g., index/middle fingers). This suggests that the body-part-specific learning of priors is organised according to somatotopy.
Affiliation(s)
- Yoshiki Matsumura
- Graduate School of Integrated Science and Technology, Shizuoka University, Hamamatsu, Japan
- Neil W Roach
- School of Psychology, University of Nottingham, Nottingham, UK
- James Heron
- School of Optometry and Vision Science, University of Bradford, Bradford, UK
- Makoto Miyazaki
- Graduate School of Integrated Science and Technology, Shizuoka University, Hamamatsu, Japan.
- Faculty of Informatics, Shizuoka University, Hamamatsu, Japan.

14. Terada Y, Toyoizumi T. Chaotic neural dynamics facilitate probabilistic computations through sampling. Proc Natl Acad Sci U S A 2024; 121:e2312992121. PMID: 38648479; PMCID: PMC11067032; DOI: 10.1073/pnas.2312992121.
Abstract
Cortical neurons exhibit highly variable responses over trials and time. Theoretical work posits that this variability may arise from chaotic network dynamics of recurrently connected neurons. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, where generic recurrent networks acquire these abilities with a biologically plausible learning rule through trial and error. Furthermore, the networks generalize their experience of stimulus-evoked samples to inference when part or all of the sensory information is missing, which suggests a computational role for spontaneous activity as a representation of priors, as well as a tractable biological computation of marginal distributions. These findings suggest that chaotic neural dynamics may underpin the brain's function as a Bayesian generative model.
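The claim that samples can stand in for probabilistic computation is easy to illustrate: for two Gaussian cues, averaging posterior samples recovers the precision-weighted (optimal) combination. In the sketch below the i.i.d. Gaussian sampler is an invented stand-in for the paper's chaotic network fluctuations, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two noisy Gaussian cues about the same latent variable (values invented)
mu1, s1 = 1.0, 0.5
mu2, s2 = 2.0, 1.0

# Optimal (precision-weighted) integration for Gaussian cues
w1, w2 = 1.0 / s1**2, 1.0 / s2**2
mu_post = (w1 * mu1 + w2 * mu2) / (w1 + w2)
sd_post = (1.0 / (w1 + w2)) ** 0.5

# A sampling-based readout: i.i.d. posterior samples stand in for the
# fluctuations of a chaotic network; their mean recovers the optimum
samples = rng.normal(mu_post, sd_post, size=100_000)
est_mean = samples.mean()
```

The point of a sampling code is that downstream circuits never need an explicit density: moments and marginals fall out of simple averages over samples.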
Affiliation(s)
- Yu Terada
  - Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
  - Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093
  - The Institute for Physics of Intelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Taro Toyoizumi
  - Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
  - Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan

15
Rodriguez-Larios J, Rassi E, Mendoza G, Merchant H, Haegens S. Common neural mechanisms supporting time judgements in humans and monkeys. bioRxiv 2024:2024.04.25.591075. [PMID: 38712259 PMCID: PMC11071527 DOI: 10.1101/2024.04.25.591075] [Indexed: 05/08/2024]
Abstract
There has been increasing interest in identifying the biological underpinnings of human time perception, for which purpose research in non-human primates (NHP) is common. Although previous work, based on behaviour, suggests that similar mechanisms support time perception across species, the neural correlates of time estimation in humans and NHP have not been directly compared. In this study, we assess whether brain evoked responses during a time categorization task are similar across species. Specifically, we assess putative differences in post-interval evoked potentials as a function of perceived duration in human EEG (N = 24) and in local field potential (LFP) and spike recordings in the pre-supplementary motor area (pre-SMA) of one monkey. Event-related potentials (ERPs) differed significantly after the presentation of the temporal interval between "short" and "long" perceived durations in both species, even when the objective duration of the stimuli was the same. Interestingly, the polarity of the reported ERPs was reversed for incorrect trials (i.e., the ERP of a "long" stimulus looked like the ERP of a "short" stimulus when a time categorization error was made). Hence, our results show that post-interval potentials reflect the perceived (rather than the objective) duration of the presented time interval in both NHP and humans. In addition, firing rates in the monkey's pre-SMA also differed significantly between short and long perceived durations and were reversed in incorrect trials. Together, our results show that common neural mechanisms support time categorization in NHP and humans, thereby suggesting that NHP are a good model for investigating human time perception.
Affiliation(s)
- Elie Rassi
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
  - Department of Psychology, Centre for Cognitive Neuroscience, Paris-Lodron-University of Salzburg, Salzburg, Austria
- Germán Mendoza
  - Instituto de Neurobiología, UNAM, Campus Juriquilla, Queretaro, Mexico
- Hugo Merchant
  - Instituto de Neurobiología, UNAM, Campus Juriquilla, Queretaro, Mexico
- Saskia Haegens
  - Department of Psychiatry, Columbia University, New York, USA
  - Division of Systems Neuroscience, New York State Psychiatric Institute, New York, USA

16
Hasnain MA, Birnbaum JE, Nunez JLU, Hartman EK, Chandrasekaran C, Economo MN. Separating cognitive and motor processes in the behaving mouse. bioRxiv 2024:2023.08.23.554474. [PMID: 37662199 PMCID: PMC10473744 DOI: 10.1101/2023.08.23.554474] [Indexed: 09/05/2023]
Abstract
The cognitive processes supporting complex animal behavior are closely associated with ubiquitous movements responsible for our posture, facial expressions, ability to actively sample our sensory environments, and other critical processes. These movements are strongly related to neural activity across much of the brain and are often highly correlated with ongoing cognitive processes, making it challenging to dissociate the neural dynamics that support cognitive processes from those supporting related movements. In such cases, a critical issue is whether cognitive processes are separable from related movements, or if they are driven by common neural mechanisms. Here, we demonstrate how the separability of cognitive and motor processes can be assessed, and, when separable, how the neural dynamics associated with each component can be isolated. We establish a novel two-context behavioral task in mice that involves multiple cognitive processes and show that commonly observed dynamics taken to support cognitive processes are strongly contaminated by movements. When cognitive and motor components are isolated using a novel approach for subspace decomposition, we find that they exhibit distinct dynamical trajectories. Further, properly accounting for movement revealed that largely separate populations of cells encode cognitive and motor variables, in contrast to the 'mixed selectivity' often reported. Accurately isolating the dynamics associated with particular cognitive and motor processes will be essential for developing conceptual and computational models of neural circuit function and evaluating the function of the cell types of which neural circuits are composed.
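The abstract's subspace decomposition can be caricatured with a generic linear version; this is a sketch of the idea, not the authors' specific method, and the toy data and variable names are invented. Estimate the neural dimensions that linearly predict movement, then project population activity onto their orthogonal complement:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population: N neurons driven by 2 movement variables + 1 cognitive one
T, N = 500, 20
mov = rng.normal(size=(T, 2))          # movement time courses
cog = rng.normal(size=(T, 1))          # cognitive time course
A = rng.normal(size=(2, N))            # movement -> neurons mapping
C = rng.normal(size=(1, N))            # cognition -> neurons mapping
X = mov @ A + cog @ C                  # observed population activity

# Estimate the movement-predictive subspace by linear regression,
# then orthonormalize its basis
B, *_ = np.linalg.lstsq(mov, X, rcond=None)   # (2, N) movement mapping
Q, _ = np.linalg.qr(B.T)                      # (N, 2) orthonormal basis

# Project activity onto the orthogonal complement of the movement subspace
P = np.eye(N) - Q @ Q.T
X_cog = X @ P                                  # "movement-free" activity

# Since the toy has ground truth, check what the projection did
leak = np.linalg.norm(A @ P) / np.linalg.norm(A)  # movement leakage (small)
kept = np.linalg.norm(C @ P) / np.linalg.norm(C)  # cognitive signal retained
```

In this linear caricature, the movement mapping is removed almost entirely while most of the cognitive signal survives; the paper's point is precisely that such a separation must be tested rather than assumed.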
Affiliation(s)
- Munib A. Hasnain
  - Department of Biomedical Engineering, Boston University, Boston, MA
  - Center for Neurophotonics, Boston University, Boston, MA
- Jaclyn E. Birnbaum
  - Graduate Program for Neuroscience, Boston University, Boston, MA
  - Center for Neurophotonics, Boston University, Boston, MA
- Emma K. Hartman
  - Department of Biomedical Engineering, Boston University, Boston, MA
- Chandramouli Chandrasekaran
  - Department of Psychological and Brain Sciences, Boston University, Boston, MA
  - Department of Neurobiology & Anatomy, Boston University, Boston, MA
  - Center for Systems Neuroscience, Boston University, Boston, MA
- Michael N. Economo
  - Department of Biomedical Engineering, Boston University, Boston, MA
  - Center for Neurophotonics, Boston University, Boston, MA
  - Center for Systems Neuroscience, Boston University, Boston, MA

17
Maggi S, Hock RM, O'Neill M, Buckley M, Moran PM, Bast T, Sami M, Humphries MD. Tracking subjects' strategies in behavioural choice experiments at trial resolution. eLife 2024; 13:e86491. [PMID: 38426402 PMCID: PMC10959529 DOI: 10.7554/elife.86491] [Received: 01/29/2023] [Accepted: 02/23/2024] [Indexed: 03/02/2024]
Abstract
Investigating how, when, and what subjects learn during decision-making tasks requires tracking their choice strategies on a trial-by-trial basis. Here, we present a simple but effective probabilistic approach to tracking choice strategies at trial resolution using Bayesian evidence accumulation. We show this approach identifies both successful learning and the exploratory strategies used in decision tasks performed by humans, non-human primates, rats, and synthetic agents. Both when subjects learn and when rules change, the exploratory strategies of win-stay and lose-shift, often considered complementary, are consistently used independently. Indeed, we find the use of lose-shift is strong evidence that subjects have latently learnt the salient features of a new rewarded rule. Our approach can be extended to any discrete choice strategy, and its low computational cost is ideally suited for real-time analysis and closed-loop control.
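A minimal version of such trial-resolution Bayesian evidence accumulation can be sketched as a Beta-Bernoulli update with exponential forgetting, assuming per-trial binary evidence of whether the choice matched a candidate strategy. The paper's full model differs in its details, and the parameter values below are invented:

```python
import numpy as np

def track_strategy(matches, gamma=0.9, a0=1.0, b0=1.0):
    """Trial-by-trial posterior that a given choice strategy is in use:
    Beta-Bernoulli updates with exponential forgetting of old evidence."""
    a, b = a0, b0
    trace = []
    for x in matches:               # x = 1 if the choice matched the strategy
        a = gamma * a + x           # decayed count of matches
        b = gamma * b + (1 - x)     # decayed count of mismatches
        trace.append(a / (a + b))   # posterior mean of the match probability
    return np.array(trace)

# A synthetic subject: random behaviour at first, then consistent win-stay
matches = np.array([1, 0, 1, 0, 0, 1] + [1] * 10)
p = track_strategy(matches)
```

The forgetting factor is what makes the tracker responsive to rule changes: without it, early random trials would permanently dilute evidence for a strategy adopted later.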
Affiliation(s)
- Silvia Maggi
  - School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Rebecca M Hock
  - School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Martin O'Neill
  - School of Psychology, University of Nottingham, Nottingham, United Kingdom
  - Department of Health & Nutritional Sciences, Atlantic Technological University, Sligo, Ireland
- Mark Buckley
  - Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Paula M Moran
  - School of Psychology, University of Nottingham, Nottingham, United Kingdom
  - Department of Neuroscience, University of Nottingham, Nottingham, United Kingdom
- Tobias Bast
  - School of Psychology, University of Nottingham, Nottingham, United Kingdom
  - Department of Neuroscience, University of Nottingham, Nottingham, United Kingdom
- Musa Sami
  - Institute of Mental Health, University of Nottingham, Nottingham, United Kingdom
- Mark D Humphries
  - School of Psychology, University of Nottingham, Nottingham, United Kingdom

18
Bader F, Wiener M. Neuroimaging Signatures of Metacognitive Improvement in Sensorimotor Timing. J Neurosci 2024; 44:e1789222023. [PMID: 38129131 PMCID: PMC10904090 DOI: 10.1523/jneurosci.1789-22.2023] [Received: 05/22/2023] [Revised: 11/03/2023] [Accepted: 12/13/2023] [Indexed: 12/23/2023]
Abstract
Error monitoring is an essential human ability underlying learning and metacognition. In the time domain, humans possess a remarkable ability to learn and adapt to temporal intervals, yet the neural mechanisms underlying this are not clear. Recently, we demonstrated that humans improve sensorimotor time estimates when given the chance to incorporate previous trial feedback (Bader and Wiener, 2021), suggesting that humans are metacognitively aware of their own timing errors. To test the neural basis of this metacognitive ability, human participants of both sexes underwent fMRI while they performed a visual temporal reproduction task with randomized supra-second intervals (1.5-6 s). Crucially, each trial was repeated following feedback, allowing a "re-do" to learn from the successes or errors in the initial trial. Behaviorally, we replicated our previous finding of improved re-do trial performance despite temporally uninformative (i.e., early or late) feedback. For neuroimaging, we observed a dissociation between estimating and reproducing time intervals. Estimation engaged the default mode network (DMN), including the superior frontal gyri, precuneus, and posterior cingulate, whereas reproduction activated regions traditionally associated with the "timing network" (TN), including the supplementary motor area (SMA), precentral gyrus, and right supramarginal gyrus. Notably, greater and more extensive DMN involvement was observed in re-do trials, whereas TN involvement was more constrained. Task-based connectivity between these networks demonstrated higher inter-network correlation primarily when estimating initial trials, while re-do trial communication was higher during reproduction. Overall, these results suggest that the DMN and TN jointly mediate subjective self-awareness to improve timing performance.
Affiliation(s)
- Farah Bader
  - Department of Psychology, George Mason University, Fairfax, Virginia, 22030
- Martin Wiener
  - Department of Psychology, George Mason University, Fairfax, Virginia, 22030

19
Sánchez-Moncada I, Concha L, Merchant H. Pre-supplementary Motor Cortex Mediates Learning Transfer from Perceptual to Motor Timing. J Neurosci 2024; 44:e3191202023. [PMID: 38123361 PMCID: PMC10883661 DOI: 10.1523/jneurosci.3191-20.2023] [Received: 12/21/2020] [Revised: 09/30/2023] [Accepted: 11/21/2023] [Indexed: 12/23/2023]
Abstract
When we intensively train a timing skill, such as learning to play the piano, we not only produce brain changes associated with task-specific learning but also improve our performance in other temporal behaviors that depend on these tuned neural resources. Since the neural basis of time learning and generalization is still unknown, we measured the changes in neural activity associated with the transfer of learning from perceptual to motor timing in a large sample of subjects (n = 65; 39 women). We found that intense training in an interval discrimination task increased the acuity of time perception in a group of subjects that also exhibited learning transfer, expressed as a reduction in inter-tap interval variability during an internally driven periodic motor task. In addition, we found subjects with no learning and/or generalization effects. Notably, functional imaging showed an increase in pre-supplementary motor area and caudate-putamen activity between the post- and pre-training sessions of the tapping task. This increase was specific to the subjects that generalized their timing acuity from the perceptual to the motor context. These results emphasize the central role of the cortico-basal ganglia circuit in the generalization of timing abilities between tasks.
Affiliation(s)
- Luis Concha
  - Instituto de Neurobiología, Querétaro 76230, México
  - International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec H2V 2S9, Canada

20
Kuzmina E, Kriukov D, Lebedev M. Neuronal travelling waves explain rotational dynamics in experimental datasets and modelling. Sci Rep 2024; 14:3566. [PMID: 38347042 PMCID: PMC10861525 DOI: 10.1038/s41598-024-53907-2] [Received: 11/25/2023] [Accepted: 02/06/2024] [Indexed: 02/15/2024]
Abstract
Spatiotemporal properties of neuronal population activity in cortical motor areas have been the subject of experimental and theoretical investigations, generating numerous interpretations regarding mechanisms for preparing and executing limb movements. Two competing models, representational and dynamical, strive to explain the relationship between movement parameters and neuronal activity. The dynamical model uses the jPCA method, which holistically characterizes oscillatory activity in neuron populations by maximizing the rotational dynamics in the data. Different interpretations of the rotational dynamics revealed by the jPCA approach have been proposed, yet the nature of such dynamics remains poorly understood. We comprehensively analyzed several neuronal-population datasets and found that rotational dynamics were consistently accounted for by a traveling wave pattern. For quantifying rotation strength, we developed a complex-valued measure, the gyration number. Additionally, we identified the parameters influencing the extent of rotation in the data. Our findings suggest that rotational dynamics and traveling waves are typically the same phenomenon, so the previous interpretations that treated them as separate entities need re-evaluation.
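The idea of quantifying rotation strength from population activity can be illustrated with a generic eigenvalue diagnostic (this is not the authors' gyration number, and all values are invented): fit linear dynamics to a rotating latent trajectory, then read rotation off the imaginary parts of the fitted matrix's eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent state rotating at 1 Hz: the 2D signature of a traveling wave
# projected onto two phase-shifted dimensions
dt, T = 0.01, 1000
omega = 2 * np.pi * 1.0
t = np.arange(T) * dt
Z = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
Z = Z + 0.01 * rng.normal(size=Z.shape)   # small observation noise

# Fit linear dynamics dZ/dt = Z @ A by least squares
dZ = np.gradient(Z, dt, axis=0)
A, *_ = np.linalg.lstsq(Z, dZ, rcond=None)

# Purely rotational flow yields purely imaginary eigenvalues at +/- i*omega;
# the largest imaginary part gives the rotation frequency
eig = np.linalg.eigvals(A)
rot_freq = np.abs(eig.imag).max() / (2 * np.pi)
```

The same diagnostic applied to a traveling wave (phase-shifted copies of one oscillation across channels) produces an equivalent rotating pair after dimensionality reduction, which is the equivalence the paper argues for.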
Affiliation(s)
- Ekaterina Kuzmina
  - Skolkovo Institute of Science and Technology, Vladimir Zelman Center for Neurobiology and Brain Rehabilitation, Moscow, Russia, 121205
  - Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Dmitrii Kriukov
  - Artificial Intelligence Research Institute (AIRI), Moscow, Russia
  - Skolkovo Institute of Science and Technology, Center for Molecular and Cellular Biology, Moscow, Russia, 121205
- Mikhail Lebedev
  - Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia, 119992
  - Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint Petersburg, Russia, 194223

21
Zimnik AJ, Cora Ames K, An X, Driscoll L, Lara AH, Russo AA, Susoy V, Cunningham JP, Paninski L, Churchland MM, Glaser JI. Identifying Interpretable Latent Factors with Sparse Component Analysis. bioRxiv 2024:2024.02.05.578988. [PMID: 38370650 PMCID: PMC10871230 DOI: 10.1101/2024.02.05.578988] [Indexed: 02/20/2024]
Abstract
In many neural populations, the computationally relevant signals are posited to be a set of 'latent factors' - signals shared across many individual neurons. Understanding the relationship between neural activity and behavior requires the identification of factors that reflect distinct computational roles. Methods for identifying such factors typically require supervision, which can be suboptimal if one is unsure how (or whether) factors can be grouped into distinct, meaningful sets. Here, we introduce Sparse Component Analysis (SCA), an unsupervised method that identifies interpretable latent factors. SCA seeks factors that are sparse in time and occupy orthogonal dimensions. With these simple constraints, SCA facilitates surprisingly clear parcellations of neural activity across a range of behaviors. We applied SCA to motor cortex activity from reaching and cycling monkeys, single-trial imaging data from C. elegans, and activity from a multitask artificial network. SCA consistently identified sets of factors that were useful in describing network computations.
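The two constraints named here, temporal sparsity and orthogonal dimensions, can be mimicked in a toy alternating procedure. The sketch below is an invented stand-in, not the published SCA optimization, and its data and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def soft(v, lam):
    """Soft-thresholding operator: promotes sparsity in time."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_components(X, k, lam=0.1, iters=200):
    """Toy SCA-style decomposition: alternate between sparsifying the
    temporal factors and re-fitting an orthonormal set of dimensions.
    Illustrative only: not the published SCA algorithm."""
    W, _ = np.linalg.qr(rng.normal(size=(X.shape[1], k)))  # orthonormal init
    for _ in range(iters):
        Z = soft(X @ W, lam)                               # sparse factors
        U, _, Vt = np.linalg.svd(X.T @ Z, full_matrices=False)
        W = U @ Vt                                         # nearest orthonormal set
    return W, soft(X @ W, lam)

# Toy data: two latent factors active in disjoint epochs, mixed into 30 units
T, N = 200, 30
f1 = np.zeros(T); f1[20:60] = 1.0
f2 = np.zeros(T); f2[120:160] = 1.0
X = np.outer(f1, rng.normal(size=N)) + np.outer(f2, rng.normal(size=N))

W, Z = sparse_components(X, k=2)   # orthonormal dimensions, sparse factors
```

Because the toy factors are active in disjoint epochs, the recovered factors are zero outside those epochs, giving the kind of clean temporal parcellation the abstract describes.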
Affiliation(s)
- Andrew J Zimnik
  - Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
  - Zuckerman Institute, Columbia University, New York, NY, USA
- K Cora Ames
  - Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
  - Zuckerman Institute, Columbia University, New York, NY, USA
  - Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
  - Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Xinyue An
  - Department of Neurology, Northwestern University, Chicago, IL, USA
  - Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA
- Laura Driscoll
  - Department of Electrical Engineering, Stanford University, Stanford, CA, USA
  - Allen Institute for Neural Dynamics, Allen Institute, Seattle, WA, USA
- Antonio H Lara
  - Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
  - Zuckerman Institute, Columbia University, New York, NY, USA
- Abigail A Russo
  - Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
  - Zuckerman Institute, Columbia University, New York, NY, USA
- Vladislav Susoy
  - Department of Physics, Harvard University, Cambridge, MA, USA
  - Center for Brain Science, Harvard University, Cambridge, MA, USA
- John P Cunningham
  - Zuckerman Institute, Columbia University, New York, NY, USA
  - Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
  - Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
  - Department of Statistics, Columbia University, New York, NY, USA
- Liam Paninski
  - Zuckerman Institute, Columbia University, New York, NY, USA
  - Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
  - Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
  - Department of Statistics, Columbia University, New York, NY, USA
- Mark M Churchland
  - Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
  - Zuckerman Institute, Columbia University, New York, NY, USA
  - Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
  - Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Joshua I Glaser
  - Department of Neurology, Northwestern University, Chicago, IL, USA
  - Department of Computer Science, Northwestern University, Evanston, IL, USA

22
Rolando F, Kononowicz TW, Duhamel JR, Doyère V, Wirth S. Distinct neural adaptations to time demand in the striatum and the hippocampus. Curr Biol 2024; 34:156-170.e7. [PMID: 38141617 DOI: 10.1016/j.cub.2023.11.066] [Received: 04/12/2023] [Revised: 10/18/2023] [Accepted: 11/30/2023] [Indexed: 12/25/2023]
Abstract
How do neural codes adjust to track time across a range of resolutions, from milliseconds to multi-seconds, as a function of the temporal frequency at which events occur? To address this question, we studied time-modulated cells in the striatum and the hippocampus while macaques categorized three nested intervals within the sub-second or the supra-second range (up to 1, 2, 4, or 8 s), thereby modifying the temporal resolution needed to solve the task. Time-modulated cells carried more information for intervals with explicit timing demand than for any other interval. The striatum, particularly the caudate, supported the most accurate temporal prediction throughout all time ranges. Strikingly, its temporal readout adjusted non-linearly to the time range, suggesting that the striatal resolution shifted from a precise millisecond range to a coarse multi-second range as a function of demand. This is in line with the monkeys' behavioral latencies, which indicated that they tracked time up to 2 s but employed a coarse categorization strategy for durations beyond that. By contrast, the hippocampus discriminated only the beginning from the end of intervals, regardless of the range. We propose that the hippocampus may provide only a coarse signal marking an event's beginning, whereas the striatum optimizes neural resources to process time throughout an interval, adapting to the ongoing timing demand.
Affiliation(s)
- Felipe Rolando
  - Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France
- Tadeusz W Kononowicz
  - Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France
  - Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay (NeuroPSI), 91400 Saclay, France
  - Institute of Psychology, The Polish Academy of Sciences, ul. Jaracza 1, 00-378 Warsaw, Poland
- Jean-René Duhamel
  - Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France
- Valérie Doyère
  - Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay (NeuroPSI), 91400 Saclay, France
- Sylvia Wirth
  - Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France

23
Merchant H, de Lafuente V. A Second Introduction to the Neurobiology of Interval Timing. Adv Exp Med Biol 2024; 1455:3-23. [PMID: 38918343 DOI: 10.1007/978-3-031-60183-5_1] [Indexed: 06/27/2024]
Abstract
Time is a critical variable that organisms must be able to measure in order to survive in a constantly changing environment. Initially, this paper describes the myriad of contexts where time is estimated or predicted and suggests that timing is not a single process and probably depends on a set of different neural mechanisms. Consistent with this hypothesis, the explosion of neurophysiological and imaging studies in the last 10 years suggests that different brain circuits and neural mechanisms are involved in the ability to tell and use time to control behavior across contexts. Then, we develop a conceptual framework that defines time as a family of different phenomena and propose a taxonomy with sensory, perceptual, motor, and sensorimotor timing as the pillars of temporal processing in the range of hundreds of milliseconds.
Affiliation(s)
- Hugo Merchant
  - Instituto de Neurobiología, UNAM, Campus Juriquilla, Querétaro, Mexico
- Victor de Lafuente
  - Institute of Neurobiology, National Autonomous University of Mexico, Querétaro, Mexico

24
Merchant H, Mendoza G, Pérez O, Betancourt A, García-Saldivar P, Prado L. Diverse Time Encoding Strategies Within the Medial Premotor Areas of the Primate. Adv Exp Med Biol 2024; 1455:117-140. [PMID: 38918349 DOI: 10.1007/978-3-031-60183-5_7] [Indexed: 06/27/2024]
Abstract
The measurement of time at the subsecond scale is critical for many sophisticated behaviors, yet its neural underpinnings are largely unknown. Recent neurophysiological experiments from our laboratory have shown that neural activity in the medial premotor areas (MPC) of macaques can represent different aspects of temporal processing. During single-interval categorization, we found that preSMA encodes a subjective category limit by reaching a peak of activity at a time that divides the set of test intervals into short and long. We also observed neural signals associated with the category selected by the subjects and the reward outcomes of the perceptual decision. On the other hand, we have studied the behavioral and neurophysiological basis of rhythmic timing. First, we have shown in different tapping tasks that macaques are able to produce intervals predictively and accurately, whether the intervals are cued by auditory or visual metronomes or produced internally without sensory guidance. In addition, we found that the rhythmic timing mechanism in MPC is governed by different layers of neural clocks. Next, the instantaneous activity of single cells shows ramping activity that encodes the elapsed or remaining time for a tapping movement. In addition, we found MPC neurons that build neural sequences, forming dynamic patterns of activation that flexibly cover the entire produced interval depending on the tapping tempo. This rhythmic neural clock resets on every interval, providing an internal representation of pulse. Furthermore, MPC cells show mixed selectivity, encoding not only elapsed time but also the tempo of the tapping and the serial-order element in the rhythmic sequence. Hence, MPC can map different task parameters, including the passage of time, using different cell populations.
Finally, the projection of the time varying activity of MPC hundreds of cells into a low dimensional state space showed circular neural trajectories whose geometry represented the internal pulse and the tapping tempo. Overall, these findings support the notion that MPC is part of the core timing mechanism for both single interval and rhythmic timing, using neural clocks with different encoding principles, probably to flexibly encode and mix the timing representation with other task parameters.
Affiliation(s)
- Hugo Merchant
  - Instituto de Neurobiología, UNAM, Campus Juriquilla, Querétaro, Mexico
- Germán Mendoza
  - Instituto de Neurobiología, UNAM, Campus Juriquilla, Querétaro, Mexico
- Oswaldo Pérez
  - Instituto de Neurobiología, UNAM, Campus Juriquilla, Querétaro, Mexico
- Luis Prado
  - Instituto de Neurobiología, UNAM, Campus Juriquilla, Querétaro, Mexico

25
De Zeeuw CI, Koppen J, Bregman GG, Runge M, Narain D. Heterogeneous encoding of temporal stimuli in the cerebellar cortex. Nat Commun 2023; 14:7581. [PMID: 37989740 PMCID: PMC10663630 DOI: 10.1038/s41467-023-43139-9] [Received: 04/20/2023] [Accepted: 11/01/2023] [Indexed: 11/23/2023]
Abstract
Local feedforward and recurrent connectivity are rife in the frontal areas of the cerebral cortex, giving rise to the rich heterogeneous dynamics observed in such areas. Recently, similar local connectivity motifs have been discovered among Purkinje and molecular layer interneurons of the cerebellar cortex; however, task-related activity in these neurons has often been associated with relatively simple facilitation and suppression dynamics. Here, we show that the rodent cerebellar cortex supports heterogeneity in task-related neuronal activity at a scale similar to the cerebral cortex. We provide a computational model that incorporates recent anatomical insights into local microcircuit motifs to show a putative basis for such heterogeneity. We also use cell-type-specific chronic viral lesions to establish the involvement of cerebellar lobules in associative learning behaviors. Functional heterogeneity in neuronal profiles may not merely be the remit of the associative cerebral cortex; similar principles may be at play in subcortical areas, even those with seemingly crystalline and homogeneous cytoarchitectures like the cerebellum.
Affiliation(s)
- Chris I De Zeeuw
  - Department of Neuroscience, Erasmus University Medical Center, Rotterdam, The Netherlands
  - Netherlands Institute of Neuroscience, Amsterdam, The Netherlands
- Julius Koppen
  - Department of Neuroscience, Erasmus University Medical Center, Rotterdam, The Netherlands
- George G Bregman
  - Department of Neuroscience, Erasmus University Medical Center, Rotterdam, The Netherlands
- Marit Runge
  - Department of Neuroscience, Erasmus University Medical Center, Rotterdam, The Netherlands
- Devika Narain
  - Department of Neuroscience, Erasmus University Medical Center, Rotterdam, The Netherlands

26
Wang BA, Drammis S, Hummos A, Halassa MM, Pleger B. Modulation of prefrontal couplings by prior belief-related responses in ventromedial prefrontal cortex. Front Neurosci 2023; 17:1278096. [PMID: 38033544 PMCID: PMC10684683 DOI: 10.3389/fnins.2023.1278096] [Received: 08/15/2023] [Accepted: 10/30/2023] [Indexed: 12/02/2023]
Abstract
Humans and other animals can maintain constant payoffs in an uncertain environment by steadily re-evaluating and flexibly adjusting their current strategy, which largely depends on interactions between the prefrontal cortex (PFC) and the mediodorsal thalamus (MD). While the ventromedial PFC (vmPFC) represents the level of uncertainty (i.e., the prior belief about external states), it remains unclear how the brain recruits the PFC-MD network to re-evaluate decision strategy based on this uncertainty. Here, we leverage non-linear dynamic causal modeling of fMRI data to test how prior belief-dependent activity in vmPFC gates the information flow in the PFC-MD network when individuals switch their decision strategy. We show that prior belief-related responses in vmPFC had a modulatory influence on the connections from dorsolateral PFC (dlPFC) to both the lateral orbitofrontal cortex (lOFC) and MD. Bayesian parameter averaging revealed that only the connection from dlPFC to lOFC surpassed the significance threshold, indicating that the weaker the prior belief, the weaker the inhibitory influence of the vmPFC on the strength of the effective connection from dlPFC to lOFC. These findings suggest that the vmPFC acts as a gatekeeper for the recruitment of processing resources to re-evaluate decision strategy in situations of high uncertainty.
Affiliation(s)
- Bin A. Wang
- Department of Neurology, BG University Hospital Bergmannsheil, Ruhr-University Bochum, Bochum, Germany
- Collaborative Research Centre 874 "Integration and Representation of Sensory Processes", Ruhr-University Bochum, Bochum, Germany
- Guangdong Key Laboratory of Mental Health and Cognitive Science, Ministry of Education Key Laboratory of Brain Cognition and Educational Science, School of Psychology, Center for Studies of Psychological Application, South China Normal University, Guangzhou, China
- Sabrina Drammis
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, United States
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, United States
- Ali Hummos
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, United States
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, United States
- Michael M. Halassa
- Department of Neuroscience, Tufts University School of Medicine, Boston, MA, United States
- Burkhard Pleger
- Department of Neurology, BG University Hospital Bergmannsheil, Ruhr-University Bochum, Bochum, Germany
- Collaborative Research Centre 874 "Integration and Representation of Sensory Processes", Ruhr-University Bochum, Bochum, Germany
27
Jarne C, Laje R. Exploring weight initialization, diversity of solutions, and degradation in recurrent neural networks trained for temporal and decision-making tasks. J Comput Neurosci 2023; 51:407-431. [PMID: 37561278 DOI: 10.1007/s10827-023-00857-9]
Abstract
Recurrent Neural Networks (RNNs) are frequently used to model aspects of brain function and structure. In this work, we trained small fully connected RNNs to perform temporal and flow-control tasks with time-varying stimuli. Our results show that different RNNs can solve the same task by converging to different underlying dynamics, and that performance degrades gracefully as network size is decreased, interval duration is increased, or connectivity damage is induced. For the considered tasks, we explored how robust the trained networks are to changes in task parameterization. In the process, we developed a framework that can be used to parameterize other tasks of interest in computational neuroscience. Our results help quantify different aspects of these models, which are normally used as black boxes and need to be understood in order to model the biological responses of cerebral cortex areas.
Affiliation(s)
- Cecilia Jarne
- Universidad Nacional de Quilmes, Departamento de Ciencia y Tecnología, Bernal, Buenos Aires, Argentina.
- CONICET, Buenos Aires, Argentina.
- Center for Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark.
- Rodrigo Laje
- Universidad Nacional de Quilmes, Departamento de Ciencia y Tecnología, Bernal, Buenos Aires, Argentina
- CONICET, Buenos Aires, Argentina
28
Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. [PMID: 37794121 DOI: 10.1038/s41583-023-00740-7]
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
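The reconstruction idea summarized in this abstract can be illustrated with a minimal linear sketch: fit a dynamics matrix to "measured" trajectories and use the fitted system as a simulable surrogate. This toy uses a linear system and least squares rather than an RNN, and all values (rotation angle, noise level, trajectory length) are illustrative assumptions, not anything from the review.

```python
import numpy as np

# Minimal linear analogue of dynamical system reconstruction: fit
# x_{t+1} = A x_t to observed trajectories, yielding a surrogate system
# that can be simulated and analysed in place of the measured one.
rng = np.random.default_rng(3)
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])  # slow planar rotation

# "measured" trajectory: true dynamics plus a little process noise
T = 500
x = np.zeros((T, 2))
x[0] = [1.0, 0.0]
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + 0.01 * rng.standard_normal(2)

# least-squares reconstruction of the dynamics matrix from the data
A_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T
```

Because `A_hat` closely recovers `A_true`, simulating the fitted system reproduces the measured oscillatory dynamics; RNN-based reconstruction generalizes this same fit to non-linear dynamics.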
Affiliation(s)
- Daniel Durstewitz
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany.
- Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany.
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany.
- Georgia Koppe
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Max Ingo Thurm
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
29
Doelling KB, Arnal LH, Assaneo MF. Adaptive oscillators support Bayesian prediction in temporal processing. PLoS Comput Biol 2023; 19:e1011669. [PMID: 38011225 PMCID: PMC10703266 DOI: 10.1371/journal.pcbi.1011669]
Abstract
Humans excel at predictively synchronizing their behavior with external rhythms, as in dance or music performance. The neural processes underlying rhythmic inferences are debated: whether predictive perception relies on high-level generative models or whether it can readily be implemented locally by hard-coded intrinsic oscillators synchronizing to rhythmic input remains unclear and different underlying computational mechanisms have been proposed. Here we explore human perception for tone sequences with some temporal regularity at varying rates, but with considerable variability. Next, using a dynamical systems perspective, we successfully model the participants behavior using an adaptive frequency oscillator which adjusts its spontaneous frequency based on the rate of stimuli. This model better reflects human behavior than a canonical nonlinear oscillator and a predictive ramping model-both widely used for temporal estimation and prediction-and demonstrate that the classical distinction between absolute and relative computational mechanisms can be unified under this framework. In addition, we show that neural oscillators may constitute hard-coded physiological priors-in a Bayesian sense-that reduce temporal uncertainty and facilitate the predictive processing of noisy rhythms. Together, the results show that adaptive oscillators provide an elegant and biologically plausible means to subserve rhythmic inference, reconciling previously incompatible frameworks for temporal inferential processes.
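A minimal sketch of the adaptive-frequency-oscillator idea in this abstract: a phase oscillator whose spontaneous frequency drifts toward the stimulus rate. The gains, duration, and sinusoidal coupling are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def adaptive_oscillator(stim_freq_hz, omega0_hz, k_phase=5.0, k_freq=2.0,
                        duration_s=20.0, dt=1e-3):
    """Return the adapted frequency (Hz) of a phase oscillator driven by a
    rhythmic stimulus; both the phase and the spontaneous frequency entrain."""
    phi = 0.0                      # oscillator phase (rad)
    omega = 2 * np.pi * omega0_hz  # spontaneous angular frequency (rad/s)
    for i in range(int(duration_s / dt)):
        theta = 2 * np.pi * stim_freq_hz * i * dt  # stimulus phase
        err = np.sin(theta - phi)                  # phase error signal
        phi += dt * (omega + k_phase * err)        # phase entrains to the drive
        omega += dt * k_freq * err                 # frequency adapts (key step)
    return omega / (2 * np.pi)

# the spontaneous frequency converges to the stimulus rate from either side
f_up = adaptive_oscillator(stim_freq_hz=2.0, omega0_hz=1.0)
f_down = adaptive_oscillator(stim_freq_hz=1.0, omega0_hz=2.0)
```

The frequency-adaptation term is what distinguishes this model from a canonical oscillator with a fixed intrinsic frequency: after the drive stops, the adapted frequency persists as a prior on the expected tempo.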
Affiliation(s)
- Keith B. Doelling
- Institut Pasteur, Université Paris Cité, Inserm UA06, Institut de l’Audition, Paris, France
- Center for Language Music and Emotion, New York University, New York, New York, United States of America
- Luc H. Arnal
- Institut Pasteur, Université Paris Cité, Inserm UA06, Institut de l’Audition, Paris, France
- M. Florencia Assaneo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Santiago de Querétaro, México
30
Betancourt A, Pérez O, Gámez J, Mendoza G, Merchant H. Amodal population clock in the primate medial premotor system for rhythmic tapping. Cell Rep 2023; 42:113234. [PMID: 37838944 DOI: 10.1016/j.celrep.2023.113234]
Abstract
The neural substrate for beat extraction and response entrainment to rhythms is not fully understood. Here we analyze the activity of medial premotor neurons in monkeys performing isochronous tapping guided by brief flashing stimuli or auditory tones. The population dynamics shared the following properties across modalities: the circular dynamics of the neural trajectories form a regenerating loop for every produced interval; the trajectories converge to a similar region of state space at tapping times, resetting the clock; and the tempo of the synchronized tapping is encoded in the trajectories by a combination of amplitude modulation and temporal scaling. Notably, the modality induces a displacement of the neural trajectories into auditory and visual subspaces without greatly altering the time-keeping mechanism. These results suggest that the interaction between the medial premotor cortex's amodal internal representation of pulse and a modality-specific external input generates a neural rhythmic clock whose dynamics govern rhythmic tapping execution across senses.
Affiliation(s)
- Abraham Betancourt
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Oswaldo Pérez
- Escuela Nacional de Estudios Superiores, Unidad Juriquilla, UNAM, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Jorge Gámez
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Germán Mendoza
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Hugo Merchant
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México.
31
Robbe D. Lost in time: Relocating the perception of duration outside the brain. Neurosci Biobehav Rev 2023; 153:105312. [PMID: 37467906 DOI: 10.1016/j.neubiorev.2023.105312]
Abstract
It is well-accepted in neuroscience that animals process time internally to estimate the duration of intervals lasting between one and several seconds. More than 100 years ago, Henri Bergson nevertheless remarked that, because animals have memory, their inner experience of time is ever-changing, making duration impossible to measure internally and time a source of change. Bergson proposed that quantifying the inner experience of time requires its externalization in movements (observed or self-generated), as their unfolding leaves measurable traces in space. Here, studies across species are reviewed and collectively suggest that, in line with Bergson's ideas, animals spontaneously solve time estimation tasks through a movement-based spatialization of time. Moreover, the well-known scalable anticipatory responses of animals to regularly spaced rewards can be explained by the variable pressure of time on reward-oriented actions. Finally, the brain regions linked with time perception overlap with those implicated in motor control, spatial navigation and motivation. Thus, instead of considering time as static information processed by the brain, it might be fruitful to conceptualize it as a kind of force to which animals are more or less sensitive depending on their internal state and environment.
Affiliation(s)
- David Robbe
- Institut de Neurobiologie de la Méditerranée (INMED), INSERM, Marseille, France; Aix-Marseille Université, Marseille, France.
32
De A, Chaudhuri R. Common population codes produce extremely nonlinear neural manifolds. Proc Natl Acad Sci U S A 2023; 120:e2305853120. [PMID: 37733742 PMCID: PMC10523500 DOI: 10.1073/pnas.2305853120]
Abstract
Populations of neurons represent sensory, motor, and cognitive variables via patterns of activity distributed across the population. The size of the population used to encode a variable is typically much greater than the dimension of the variable itself, and thus, the corresponding neural population activity occupies lower-dimensional subsets of the full set of possible activity states. Given population activity data with such lower-dimensional structure, a fundamental question asks how close the low-dimensional data lie to a linear subspace. The linearity or nonlinearity of the low-dimensional structure reflects important computational features of the encoding, such as robustness and generalizability. Moreover, identifying such linear structure underlies common data analysis methods such as Principal Component Analysis (PCA). Here, we show that for data drawn from many common population codes the resulting point clouds and manifolds are exceedingly nonlinear, with the dimension of the best-fitting linear subspace growing at least exponentially with the true dimension of the data. Consequently, linear methods like PCA fail dramatically at identifying the true underlying structure, even in the limit of arbitrarily many data points and no noise.
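A toy illustration of the claim in this abstract (not the paper's analysis): a population with von Mises tuning to a single circular variable has intrinsic dimension 1, yet the number of principal components needed to capture its variance grows rapidly as the tuning narrows. The kappa values and the 95%-variance criterion are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_dim_95(kappa, n_neurons=200, n_samples=2000):
    """PCA components needed to explain 95% of the variance of a population
    with von Mises tuning (width set by kappa) to one circular variable."""
    prefs = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)
    theta = rng.uniform(0, 2 * np.pi, n_samples)
    # tuning-curve responses: each row is the population response to one theta
    rates = np.exp(kappa * (np.cos(theta[:, None] - prefs[None, :]) - 1.0))
    x = rates - rates.mean(axis=0)            # center for PCA
    sv = np.linalg.svd(x, compute_uv=False)   # singular values
    frac = np.cumsum(sv**2) / np.sum(sv**2)   # cumulative variance explained
    return int(np.searchsorted(frac, 0.95)) + 1

# same 1-D encoded variable, very different linear dimensionality
broad = linear_dim_95(kappa=0.5)    # wide tuning: nearly a planar ring
narrow = linear_dim_95(kappa=20.0)  # narrow tuning: highly curved manifold
```

The underlying manifold is a ring in both cases; narrowing the tuning curves bends it out of any low-dimensional linear subspace, which is why PCA overestimates the dimensionality.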
Affiliation(s)
- Anandita De
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Physics, University of California, Davis, CA 95616
- Rishidev Chaudhuri
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, CA 95616
- Department of Mathematics, University of California, Davis, CA 95616
33
Pérez O, Delle Monache S, Lacquaniti F, Bosco G, Merchant H. Rhythmic tapping to a moving beat: motion kinematics overrules natural gravity. iScience 2023; 26:107543. [PMID: 37744410 PMCID: PMC10517406 DOI: 10.1016/j.isci.2023.107543]
Abstract
Beat induction is the cognitive ability that allows humans to listen to a regular pulse in music and move in synchrony with it. Although auditory rhythmic cues induce more consistent synchronization than flashing visual metronomes, this auditory-visual asymmetry can be canceled by visual moving stimuli. Here, we investigated whether the naturalness of visual motion or its kinematics could provide a synchronization advantage over flashing metronomes. Subjects were asked to tap in sync with visual metronomes defined by vertically accelerating/decelerating motion, either congruent or not with natural gravity; horizontally accelerating/decelerating motion; or flashing stimuli. We found that motion kinematics was the predominant factor determining rhythm synchronization, as accelerating moving metronomes in any cardinal direction produced more precise and predictive tapping than decelerating or flashing conditions. Our results support the notion that accelerating visual metronomes convey a strong sense of beat, as seen in the cueing movements of an orchestra director.
Affiliation(s)
- Oswaldo Pérez
- Escuela Nacional de Estudios Superiores Unidad Juriquilla, Universidad Nacional Autónoma de México, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Sergio Delle Monache
- Laboratory of Visuomotor Control and Gravitational Physiology, IRCCS Fondazione Santa Lucia, 00179 Rome, Italy
- Department of Civil Engineering and Computer Science Engineering, University of Rome Tor Vergata, 00133 Rome, Italy
- Francesco Lacquaniti
- Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Centre of Space Bio-medicine, University of Rome “Tor Vergata”, Rome, Italy
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Gianfranco Bosco
- Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Centre of Space Bio-medicine, University of Rome “Tor Vergata”, Rome, Italy
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Hugo Merchant
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
34
Johnston WJ, Fine JM, Yoo SBM, Ebitz RB, Hayden BY. Semi-orthogonal subspaces for value mediate a tradeoff between binding and generalization. arXiv 2023:arXiv:2309.07766v1. [PMID: 37744462 PMCID: PMC10516109]
Abstract
When choosing between options, we must associate their values with the action needed to select them. We hypothesize that the brain solves this binding problem through neural population subspaces. To test this hypothesis, we examined neuronal responses in five reward-sensitive regions in macaques performing a risky choice task with sequential offers. Surprisingly, in all areas, the neural population encoded the values of offers presented on the left and right in distinct subspaces. We show that the encoding we observe is sufficient to bind the values of the offers to their respective positions in space while preserving abstract value information, which may be important for rapid learning and generalization to novel contexts. Moreover, after both offers have been presented, all areas encode the value of the first and second offers in orthogonal subspaces. In this case as well, the orthogonalization provides binding. Our binding-by-subspace hypothesis makes two novel predictions borne out by the data. First, behavioral errors should correlate with putative spatial (but not temporal) misbinding in the neural representation. Second, the specific representational geometry that we observe across animals also indicates that behavioral errors should increase when offers have low or high values, compared to when they have medium values, even when controlling for value difference. Together, these results support the idea that the brain makes use of semi-orthogonal subspaces to bind features together.
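The binding-by-subspace idea in this abstract can be sketched in a few lines. This toy uses exactly orthogonal coding directions rather than the semi-orthogonal geometry estimated from the data: each offer's value is written along its own direction, so both values can be read out without interference while each stays bound to its position.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # neurons

# one coding direction (a 1-D subspace) per offer position, orthogonalized
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
v = rng.standard_normal(n)
v -= (v @ u) * u          # remove the component of v along u
v /= np.linalg.norm(v)

def encode(value_left, value_right):
    """Population response binding each value to its position's subspace."""
    return value_left * u + value_right * v

r = encode(3.0, 7.0)
# projection readout: orthogonality means no cross-talk between offers
left_hat, right_hat = r @ u, r @ v
```

A fully orthogonal code maximizes binding but sacrifices the shared, abstract value axis; the paper's point is that semi-orthogonal subspaces trade a little of this interference-free readout for generalization.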
Affiliation(s)
- W. Jeffrey Johnston
- Center for Theoretical Neuroscience and Mortimer B. Zuckerman Mind, Brain, and Behavior Institute, Columbia University, New York, New York, United States of America
- Justin M. Fine
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas, United States of America
- Seng Bum Michael Yoo
- Department of Biomedical Engineering, Sungkyunkwan University, and Center for Neuroscience Imaging Research, Institute of Basic Sciences, Suwon 16419, Republic of Korea
- R. Becket Ebitz
- Department of Neuroscience, Université de Montréal, Montréal, Quebec, Canada
- Benjamin Y. Hayden
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas, United States of America
35
Bredenberg C, Savin C. Desiderata for normative models of synaptic plasticity. arXiv 2023:arXiv:2308.04988v1. [PMID: 37608931 PMCID: PMC10441445]
Abstract
Normative models of synaptic plasticity use a combination of mathematics and computational simulations to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work on these models, but experimental confirmation is relatively limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata which, when satisfied, are designed to guarantee that a model has a clear link between plasticity and adaptive behavior, consistency with known biological evidence about neural plasticity, and specific testable predictions. We then discuss how new models have begun to improve on these criteria and suggest avenues for further development. As prototypes, we provide detailed analyses of two specific models - REINFORCE and the Wake-Sleep algorithm. We provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
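As a concrete instance of one of the prototypes analysed in the review, here is a minimal REINFORCE-style three-factor rule for a single stochastic sigmoid unit. The task (match a binary input) and all constants are illustrative assumptions, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(2)

def train(n_trials=3000, lr=0.5):
    """REINFORCE: weight change = lr * (reward - baseline) * eligibility,
    where the eligibility is the gradient of log p(spike) w.r.t. the weights."""
    w = np.zeros(2)     # [bias weight, input weight]
    baseline = 0.0      # running estimate of mean reward
    for _ in range(n_trials):
        x = np.array([1.0, rng.choice([0.0, 1.0])])  # bias + binary input
        p = 1.0 / (1.0 + np.exp(-(w @ x)))           # spike probability
        s = float(rng.random() < p)                  # stochastic "spike"
        reward = 1.0 if s == x[1] else 0.0           # reward for matching input
        elig = (s - p) * x                           # d log p(s) / d w
        w += lr * (reward - baseline) * elig         # three-factor update
        baseline += 0.1 * (reward - baseline)
    return w

w = train()
p_on = 1.0 / (1.0 + np.exp(-(w[0] + w[1])))  # p(spike | input = 1)
p_off = 1.0 / (1.0 + np.exp(-w[0]))          # p(spike | input = 0)
```

The three factors, a global reward signal, a local eligibility trace, and a learning rate, give the rule a clear normative target (expected reward) plus a biologically plausible locality structure, which is exactly the kind of link the desiderata formalize.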
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, USA
- Mila-Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, QC H2S 3H1
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, USA
- Center for Data Science, New York University, New York, NY 10011, USA
36
Schlichting N, Fritz C, Zimmermann E. Motor variability modulates calibration of precisely timed movements. iScience 2023; 26:107204. [PMID: 37519900 PMCID: PMC10384242 DOI: 10.1016/j.isci.2023.107204]
Abstract
Interacting with the environment often requires precisely timed movements, challenging the brain to minimize the detrimental impact of neural noise. Recent research demonstrates that the brain exploits the variability of its temporal estimates and recalibrates perception accordingly. Time-critical movements, however, contain both a sensory measurement stage and a motor stage, and the brain must have knowledge of both in order to avoid maladaptive behavior. By manipulating sensory and motor variability, we show that the sensorimotor system recalibrates sensory and motor uncertainty separately. Serial dependencies between interval durations observed in the previous trial and motor reproductions in the current trial were weighted by the variability of movements. These serial dependencies generalized across different effectors, but not to a visual discrimination task. Our results suggest that the brain has accurate knowledge about the contributions of motor uncertainty to errors in timed movements. This knowledge about motor uncertainty seems to be processed separately from knowledge about sensory uncertainty.
Affiliation(s)
- Nadine Schlichting
- Institute for Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf 40225, Germany
- Clara Fritz
- Institute for Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf 40225, Germany
- Eckart Zimmermann
- Institute for Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf 40225, Germany
37
Heald JB, Wolpert DM, Lengyel M. The Computational and Neural Bases of Context-Dependent Learning. Annu Rev Neurosci 2023; 46:233-258. [PMID: 36972611 PMCID: PMC10348919 DOI: 10.1146/annurev-neuro-092322-100402]
Abstract
Flexible behavior requires the creation, updating, and expression of memories to depend on context. While the neural underpinnings of each of these processes have been intensively studied, recent advances in computational modeling revealed a key challenge in context-dependent learning that had been largely ignored previously: Under naturalistic conditions, context is typically uncertain, necessitating contextual inference. We review a theoretical approach to formalizing context-dependent learning in the face of contextual uncertainty and the core computations it requires. We show how this approach begins to organize a large body of disparate experimental observations, from multiple levels of brain organization (including circuits, systems, and behavior) and multiple brain regions (most prominently the prefrontal cortex, the hippocampus, and motor cortices), into a coherent framework. We argue that contextual inference may also be key to understanding continual learning in the brain. This theory-driven perspective places contextual inference as a core component of learning.
Affiliation(s)
- James B Heald
- Department of Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Daniel M Wolpert
- Department of Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
38
Park J, Kim S, Kim HR, Lee J. Prior expectation enhances sensorimotor behavior by modulating population tuning and subspace activity in sensory cortex. Sci Adv 2023; 9:eadg4156. [PMID: 37418521 PMCID: PMC10328413 DOI: 10.1126/sciadv.adg4156]
Abstract
Prior knowledge facilitates our perception and goal-directed behaviors, particularly when sensory input is lacking or noisy. However, the neural mechanisms underlying the improvement in sensorimotor behavior by prior expectations remain unknown. In this study, we examine the neural activity in the middle temporal (MT) area of visual cortex while monkeys perform a smooth pursuit eye movement task with prior expectation of the visual target's motion direction. When the sensory evidence is weak, prior expectations selectively reduce MT neural responses depending on the neurons' preferred directions. This response reduction effectively sharpens neural population direction tuning. Simulations with a realistic MT population demonstrate that sharpening the tuning can explain the biases and variabilities in smooth pursuit, suggesting that neural computations in the sensory area alone can underpin the integration of prior knowledge and sensory evidence. State-space analysis further supports this by revealing neural signals of prior expectations in the MT population activity that correlate with behavioral changes.
Affiliation(s)
- JeongJun Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- HyungGoo R. Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea
39
Libedinsky C. Comparing representations and computations in single neurons versus neural networks. Trends Cogn Sci 2023; 27:517-527. [PMID: 37005114 DOI: 10.1016/j.tics.2023.03.002]
Abstract
Single-neuron-level explanations have been the gold standard in neuroscience for decades. Recently, however, neural-network-level explanations have become increasingly popular. This increase in popularity is driven by the fact that the analysis of neural networks can solve problems that cannot be addressed by analyzing neurons independently. In this opinion article, I argue that while both frameworks employ the same general logic to link physical and mental phenomena, in many cases the neural network framework provides better explanatory objects to understand representations and computations related to mental phenomena. I discuss what constitutes a mechanistic explanation in neural systems, provide examples, and conclude by highlighting a number of the challenges and considerations associated with the use of analyses of neural networks to study brain function.
40
Soda T, Ahmadi A, Tani J, Honda M, Hanakawa T, Yamashita Y. Simulating developmental diversity: Impact of neural stochasticity on atypical flexibility and hierarchy. Front Psychiatry 2023; 14:1080668. [PMID: 37009124 PMCID: PMC10050443 DOI: 10.3389/fpsyt.2023.1080668]
Abstract
Introduction: Investigating the pathological mechanisms of developmental disorders is a challenge because the symptoms result from complex and dynamic factors such as neural networks, cognitive behavior, environment, and developmental learning. Recently, computational methods have begun to provide a unified framework for understanding developmental disorders, enabling us to describe the interactions among the multiple factors underlying symptoms. However, this approach is still limited because most studies to date have focused on cross-sectional task performance and lacked the perspective of developmental learning. Here, we propose a new research method for understanding the mechanisms of the acquisition of hierarchical Bayesian representations, and its failures, using a state-of-the-art computational model: an in silico neurodevelopmental framework for atypical representation learning.
Methods: Simple simulation experiments were conducted using the proposed framework to examine whether manipulating neural stochasticity and the noise levels in external environments during the learning process can lead to altered acquisition of hierarchical Bayesian representations and reduced flexibility.
Results: Networks with normal neural stochasticity acquired hierarchical representations that reflected the underlying probabilistic structures in the environment, including higher-order representations, and exhibited good behavioral and cognitive flexibility. When neural stochasticity was high during learning, top-down generation using higher-order representations became atypical, although flexibility did not differ from that of the normal stochasticity settings. When neural stochasticity was low during learning, however, the networks demonstrated reduced flexibility and altered hierarchical representations. Notably, this altered acquisition of higher-order representations and flexibility was ameliorated by increasing the level of noise in external stimuli.
Discussion: These results demonstrate that the proposed method assists in modeling developmental disorders by bridging multiple factors, such as the inherent characteristics of neural dynamics, the acquisition of hierarchical representations, flexible behavior, and the external environment.
Collapse
Affiliation(s)
- Takafumi Soda
- Department of Information Medicine, National Institute of Neuroscience, National Center of Neurology and Psychiatry, Kodaira, Japan
- Department of NCNP Brain Physiology and Pathology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan
| | | | - Jun Tani
- Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
| | - Manabu Honda
- Department of Information Medicine, National Institute of Neuroscience, National Center of Neurology and Psychiatry, Kodaira, Japan
| | - Takashi Hanakawa
- Integrated Neuroanatomy and Neuroimaging, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Yuichi Yamashita
- Department of Information Medicine, National Institute of Neuroscience, National Center of Neurology and Psychiatry, Kodaira, Japan
| |
Collapse
|
41
|
DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023; 111:631-649.e10. [PMID: 36630961 PMCID: PMC10118067 DOI: 10.1016/j.neuron.2022.12.007] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Revised: 06/17/2022] [Accepted: 12/05/2022] [Indexed: 01/12/2023]
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
Collapse
Affiliation(s)
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
| | - David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
| | - L F Abbott
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
| | - Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
| |
Collapse
|
42
|
Fu Z, Sajad A, Errington SP, Schall JD, Rutishauser U. Neurophysiological mechanisms of error monitoring in human and non-human primates. Nat Rev Neurosci 2023; 24:153-172. [PMID: 36707544 PMCID: PMC10231843 DOI: 10.1038/s41583-022-00670-w] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/16/2022] [Indexed: 01/29/2023]
Abstract
Performance monitoring is an important executive function that allows us to gain insight into our own behaviour. This remarkable ability relies on the frontal cortex, and its impairment is an aspect of many psychiatric diseases. In recent years, recordings from the macaque and human medial frontal cortex have offered a detailed understanding of the neurophysiological substrate that underlies performance monitoring. Here we review the discovery of single-neuron correlates of error monitoring, a key aspect of performance monitoring, in both species. These neurons are the generators of the error-related negativity, which is a non-invasive biomarker that indexes error detection. We evaluate a set of tasks that allows the synergistic elucidation of the mechanisms of cognitive control across the two species, consider differences in brain anatomy and testing conditions across species, and describe the clinical relevance of these findings for understanding psychopathology. Last, we integrate the body of experimental facts into a theoretical framework that offers a new perspective on how error signals are computed in both species and makes novel, testable predictions.
Collapse
Affiliation(s)
- Zhongzheng Fu
- Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
- Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA.
| | - Amirsaman Sajad
- Center for Integrative & Cognitive Neuroscience, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
| | - Steven P Errington
- Center for Integrative & Cognitive Neuroscience, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
| | - Jeffrey D Schall
- Center for Integrative & Cognitive Neuroscience, Vanderbilt University, Nashville, TN, USA.
- Department of Psychology, Vanderbilt University, Nashville, TN, USA.
- Centre for Vision Research, York University, Toronto, Ontario, Canada.
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada.
- Department of Biology, Faculty of Science, York University, Toronto, Ontario, Canada.
| | - Ueli Rutishauser
- Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
- Center for Neural Science and Medicine, Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
| |
Collapse
|
43
|
Beiran M, Meirhaeghe N, Sohn H, Jazayeri M, Ostojic S. Parametric control of flexible timing through low-dimensional neural manifolds. Neuron 2023; 111:739-753.e8. [PMID: 36640766 PMCID: PMC9992137 DOI: 10.1016/j.neuron.2022.12.016] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Revised: 09/23/2022] [Accepted: 12/08/2022] [Indexed: 01/15/2023]
Abstract
Biological brains possess an unparalleled ability to adapt behavioral responses to changing stimuli and environments. How neural processes enable this capacity is a fundamental open question. Previous works have identified two candidate mechanisms: a low-dimensional organization of neural activity and a modulation by contextual inputs. We hypothesized that combining the two might facilitate generalization and adaptation in complex tasks. We tested this hypothesis in flexible timing tasks where dynamics play a key role. Examining trained recurrent neural networks, we found that confining the dynamics to a low-dimensional subspace allowed tonic inputs to parametrically control the overall input-output transform, enabling generalization to novel inputs and adaptation to changing conditions. Reverse-engineering and theoretical analyses demonstrated that this parametric control relies on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds while preserving their geometry. Comparisons with data from behaving monkeys confirmed the behavioral and neural signatures of this mechanism.
Collapse
Affiliation(s)
- Manuel Beiran
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
| | - Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Institut de Neurosciences de la Timone (INT), UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
| | - Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France.
| |
Collapse
|
44
|
Meirhaeghe N, Riehle A, Brochier T. Parallel movement planning is achieved via an optimal preparatory state in motor cortex. Cell Rep 2023; 42:112136. [PMID: 36807145 DOI: 10.1016/j.celrep.2023.112136] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 12/16/2022] [Accepted: 02/01/2023] [Indexed: 02/22/2023] Open
Abstract
How do patterns of neural activity in the motor cortex contribute to the planning of a movement? A recent theory developed for single movements proposes that the motor cortex acts as a dynamical system whose initial state is optimized during the preparatory phase of the movement. This theory makes important yet untested predictions about preparatory dynamics in more complex behavioral settings. Here, we analyze preparatory activity in non-human primates planning not one but two movements simultaneously. As predicted by the theory, we find that parallel planning is achieved by adjusting preparatory activity within an optimal subspace to an intermediate state reflecting a trade-off between the two movements. The theory quantitatively accounts for the relationship between this intermediate state and fluctuations in the animals' behavior at the single-trial level. These results uncover a simple mechanism for planning multiple movements in parallel and further point to motor planning as a controlled dynamical process.
Collapse
Affiliation(s)
- Nicolas Meirhaeghe
- Institut de Neurosciences de la Timone (INT), UMR 7289, CNRS, Aix-Marseille Université, 13005 Marseille, France.
| | - Alexa Riehle
- Institut de Neurosciences de la Timone (INT), UMR 7289, CNRS, Aix-Marseille Université, 13005 Marseille, France; Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52428 Jülich, Germany
| | - Thomas Brochier
- Institut de Neurosciences de la Timone (INT), UMR 7289, CNRS, Aix-Marseille Université, 13005 Marseille, France
| |
Collapse
|
45
|
Recurrent networks endowed with structural priors explain suboptimal animal behavior. Curr Biol 2023; 33:622-638.e7. [PMID: 36657448 DOI: 10.1016/j.cub.2022.12.044] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 10/03/2022] [Accepted: 12/16/2022] [Indexed: 01/19/2023]
Abstract
The strategies found by animals facing a new task are determined both by individual experience and by structural priors evolved to leverage the statistics of natural environments. Rats quickly learn to capitalize on the trial sequence correlations of two-alternative forced choice (2AFC) tasks after correct trials but consistently deviate from optimal behavior after error trials. To understand this outcome-dependent gating, we first show that recurrent neural networks (RNNs) trained in the same 2AFC task outperform rats as they can readily learn to use across-trial information both after correct and error trials. We hypothesize that, although RNNs can optimize their behavior in the 2AFC task without any a priori restrictions, rats' strategy is constrained by a structural prior adapted to a natural environment in which rewarded and non-rewarded actions provide largely asymmetric information. When pre-training RNNs in a more ecological task with more than two possible choices, networks develop a strategy by which they gate off the across-trial evidence after errors, mimicking rats' behavior. Population analyses show that the pre-trained networks form an accurate representation of the sequence statistics independently of the outcome in the previous trial. After error trials, gating is implemented by a change in the network dynamics that temporarily decouple the categorization of the stimulus from the across-trial accumulated evidence. Our results suggest that the rats' suboptimal behavior reflects the influence of a structural prior that reacts to errors by isolating the network decision dynamics from the context, ultimately constraining the performance in a 2AFC laboratory task.
Collapse
|
46
|
Johnston WJ, Fine JM, Yoo SBM, Ebitz RB, Hayden BY. Subspace orthogonalization as a mechanism for binding values to space. ARXIV 2023:arXiv:2205.06769v2. [PMID: 36776821 PMCID: PMC9915762] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/14/2023]
Abstract
When choosing between options, we must solve an important binding problem. The values of the options must be associated with information about the action needed to select them. We hypothesize that the brain solves this binding problem through use of distinct population subspaces. To test this hypothesis, we examined the responses of single neurons in five reward-sensitive regions in rhesus macaques performing a risky choice task. In all areas, neurons encoded the value of the offers presented on both the left and the right side of the display in semi-orthogonal subspaces, which served to bind the values of the two offers to their positions in space. Supporting the idea that this orthogonalization is functionally meaningful, we observed a session-to-session covariation between choice behavior and the orthogonalization of the two value subspaces: trials with less orthogonalized subspaces were associated with greater likelihood of choosing the less valued option. Further inspection revealed that these semi-orthogonal subspaces arose from a combination of linear and nonlinear mixed selectivity in the neural population. We show this combination of selectivity balances reliable binding with an ability to generalize value across different spatial locations. These results support the hypothesis that semi-orthogonal subspaces support reliable binding, which is essential to flexible behavior in the face of multiple options.
Collapse
Affiliation(s)
- W. Jeffrey Johnston
- Center for Theoretical Neuroscience and Mortimer B. Zuckerman Mind, Brain, and Behavior Institute, Columbia University, New York, New York
| | - Justin M. Fine
- Department of Neuroscience, Center for Magnetic Resonance Research, and Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455
| | - Seng Bum Michael Yoo
- Department of Biomedical Engineering, Sungkyunkwan University, and Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon 16419, Republic of Korea
- Current address: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - R. Becket Ebitz
- Department of Neuroscience, Université de Montréal, Montréal, Quebec, Canada
| | - Benjamin Y. Hayden
- Department of Neuroscience, Center for Magnetic Resonance Research, and Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455
| |
Collapse
|
47
|
Galgali AR, Sahani M, Mante V. Residual dynamics resolves recurrent contributions to neural computation. Nat Neurosci 2023; 26:326-338. [PMID: 36635498 DOI: 10.1038/s41593-022-01230-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2021] [Accepted: 11/08/2022] [Indexed: 01/14/2023]
Abstract
Relating neural activity to behavior requires an understanding of how neural computations arise from the coordinated dynamics of distributed, recurrently connected neural populations. However, inferring the nature of recurrent dynamics from partial recordings of a neural circuit presents considerable challenges. Here we show that some of these challenges can be overcome by a fine-grained analysis of the dynamics of neural residuals-that is, trial-by-trial variability around the mean neural population trajectory for a given task condition. Residual dynamics in macaque prefrontal cortex (PFC) in a saccade-based perceptual decision-making task reveals recurrent dynamics that is time dependent, but consistently stable, and suggests that pronounced rotational structure in PFC trajectories during saccades is driven by inputs from upstream areas. The properties of residual dynamics restrict the possible contributions of PFC to decision-making and saccade generation and suggest a path toward fully characterizing distributed neural computations with large-scale neural recordings and targeted causal perturbations.
Collapse
Affiliation(s)
- Aniruddh R Galgali
- Institute of Neuroinformatics, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Department of Experimental Psychology, University of Oxford, Oxford, UK.
| | - Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK
| | - Valerio Mante
- Institute of Neuroinformatics, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich & ETH Zurich, Zurich, Switzerland.
| |
Collapse
|
48
|
Codol O, Kashefi M, Forgaard CJ, Galea JM, Pruszynski JA, Gribble PL. Sensorimotor feedback loops are selectively sensitive to reward. eLife 2023; 12:81325. [PMID: 36637162 PMCID: PMC9910828 DOI: 10.7554/elife.81325] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2022] [Accepted: 12/29/2022] [Indexed: 01/14/2023] Open
Abstract
Although it is well established that motivational factors such as earning more money for performing well improve motor performance, how the motor system implements this improvement remains unclear. For instance, feedback-based control, which uses sensory feedback from the body to correct for errors in movement, improves with greater reward. But feedback control encompasses many feedback loops with diverse characteristics such as the brain regions involved and their response time. Which specific loops drive these performance improvements with reward is unknown, even though their diversity makes it unlikely that they are contributing uniformly. We systematically tested the effect of reward on the latency (how long for a corrective response to arise?) and gain (how large is the corrective response?) of seven distinct sensorimotor feedback loops in humans. Only the fastest feedback loops were insensitive to reward, and the earliest reward-driven changes were consistently an increase in feedback gains, not a reduction in latency. Rather, a reduction of response latencies only tended to occur in slower feedback loops. These observations were similar across sensory modalities (vision and proprioception). Our results may have implications regarding feedback control performance in athletic coaching. For instance, coaching methodologies that rely on reinforcement or 'reward shaping' may need to specifically target aspects of movement that rely on reward-sensitive feedback responses.
Collapse
Affiliation(s)
- Olivier Codol
- Brain and Mind Institute, University of Western Ontario, London, Canada
- Department of Psychology, University of Western Ontario, London, Canada
- School of Psychology, University of Birmingham, Birmingham, United Kingdom
| | - Mehrdad Kashefi
- Brain and Mind Institute, University of Western Ontario, London, Canada
- Department of Psychology, University of Western Ontario, London, Canada
- Department of Physiology & Pharmacology, Schulich School of Medicine & Dentistry, University of Western Ontario, Ontario, Canada
- Robarts Research Institute, University of Western Ontario, London, Canada
| | - Christopher J Forgaard
- Brain and Mind Institute, University of Western Ontario, London, Canada
- Department of Psychology, University of Western Ontario, London, Canada
| | - Joseph M Galea
- School of Psychology, University of Birmingham, Birmingham, United Kingdom
| | - J Andrew Pruszynski
- Brain and Mind Institute, University of Western Ontario, London, Canada
- Department of Psychology, University of Western Ontario, London, Canada
- Department of Physiology & Pharmacology, Schulich School of Medicine & Dentistry, University of Western Ontario, Ontario, Canada
- Robarts Research Institute, University of Western Ontario, London, Canada
| | - Paul L Gribble
- Brain and Mind Institute, University of Western Ontario, London, Canada
- Department of Psychology, University of Western Ontario, London, Canada
- Department of Physiology & Pharmacology, Schulich School of Medicine & Dentistry, University of Western Ontario, Ontario, Canada
- Haskins Laboratories, New Haven, United States
| |
Collapse
|
49
|
Thura D, Cabana JF, Feghaly A, Cisek P. Integrated neural dynamics of sensorimotor decisions and actions. PLoS Biol 2022; 20:e3001861. [PMID: 36520685 PMCID: PMC9754259 DOI: 10.1371/journal.pbio.3001861] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Accepted: 09/29/2022] [Indexed: 12/23/2022] Open
Abstract
Recent theoretical models suggest that deciding about actions and executing them are not implemented by completely distinct neural mechanisms but are instead two modes of an integrated dynamical system. Here, we investigate this proposal by examining how neural activity unfolds during a dynamic decision-making task within the high-dimensional space defined by the activity of cells in monkey dorsal premotor (PMd), primary motor (M1), and dorsolateral prefrontal cortex (dlPFC) as well as the external and internal segments of the globus pallidus (GPe, GPi). Dimensionality reduction shows that the four strongest components of neural activity are functionally interpretable, reflecting a state transition between deliberation and commitment, the transformation of sensory evidence into a choice, and the baseline and slope of the rising urgency to decide. Analysis of the contribution of each population to these components shows meaningful differences between regions but no distinct clusters within each region, consistent with an integrated dynamical system. During deliberation, cortical activity unfolds on a two-dimensional "decision manifold" defined by sensory evidence and urgency and falls off this manifold at the moment of commitment into a choice-dependent trajectory leading to movement initiation. The structure of the manifold varies between regions: In PMd, it is curved; in M1, it is nearly perfectly flat; and in dlPFC, it is almost entirely confined to the sensory evidence dimension. In contrast, pallidal activity during deliberation is primarily defined by urgency. We suggest that these findings reveal the distinct functional contributions of different brain regions to an integrated dynamical system governing action selection and execution.
Collapse
Affiliation(s)
- David Thura
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| | - Jean-François Cabana
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| | - Albert Feghaly
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| | - Paul Cisek
- Groupe de recherche sur la signalisation neurale et la circuiterie, Department of Neuroscience, Université de Montréal, Montréal, Québec, Canada
| |
Collapse
|
50
|
Christensen AJ, Ott T, Kepecs A. Cognition and the single neuron: How cell types construct the dynamic computations of frontal cortex. Curr Opin Neurobiol 2022; 77:102630. [PMID: 36209695 PMCID: PMC10375540 DOI: 10.1016/j.conb.2022.102630] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 08/18/2022] [Accepted: 08/23/2022] [Indexed: 01/10/2023]
Abstract
Frontal cortex is thought to underlie many advanced cognitive capacities, from self-control to long term planning. Reflecting these diverse demands, frontal neural activity is notoriously idiosyncratic, with tuning properties that are correlated with endless numbers of behavioral and task features. This menagerie of tuning has made it difficult to extract organizing principles that govern frontal neural activity. Here, we contrast two successful yet seemingly incompatible approaches that have begun to address this challenge. Inspired by the indecipherability of single-neuron tuning, the first approach casts frontal computations as dynamical trajectories traversed by arbitrary mixtures of neurons. The second approach, by contrast, attempts to explain the functional diversity of frontal activity with the biological diversity of cortical cell-types. Motivated by the recent discovery of functional clusters in frontal neurons, we propose a consilience between these population and cell-type-specific approaches to neural computations, advancing the conjecture that evolutionarily inherited cell-type constraints create the scaffold within which frontal population dynamics must operate.
Collapse
Affiliation(s)
- Amelia J Christensen
- Department of Neuroscience and Department of Psychiatry, Washington University in St. Louis, St. Louis, MO 63110, USA.
| | - Torben Ott
- Department of Neuroscience and Department of Psychiatry, Washington University in St. Louis, St. Louis, MO 63110, USA; Bernstein Center for Computational Neuroscience Berlin, Humboldt University of Berlin, Berlin, Germany.
| | - Adam Kepecs
- Department of Neuroscience and Department of Psychiatry, Washington University in St. Louis, St. Louis, MO 63110, USA.
| |
Collapse
|