1.
Bolaños F, Orlandi JG, Aoki R, Jagadeesh AV, Gardner JL, Benucci A. Efficient coding of natural images in the mouse visual cortex. Nat Commun 2024; 15:2466. PMID: 38503746; PMCID: PMC10951403; DOI: 10.1038/s41467-024-45919-3.
Abstract
How the activity of neurons gives rise to natural vision remains a matter of intense investigation. The mid-level visual areas along the ventral stream are selective to a common class of natural images, textures, but a circuit-level understanding of this selectivity and its link to perception remains unclear. We addressed these questions in mice, first showing that they can perceptually discriminate between textures and statistically simpler, spectrally matched stimuli, and between texture types. Then, at the neural level, we found that the secondary visual area (LM) exhibited a higher degree of selectivity for textures compared to the primary visual area (V1). Furthermore, textures were represented in distinct neural activity subspaces whose relative distances were found to correlate with the statistical similarity of the images and the mice's ability to discriminate between them. Notably, these dependencies were more pronounced in LM, where the texture-related subspaces were smaller than in V1, resulting in superior stimulus decoding capabilities. Together, our results demonstrate texture vision in mice, finding a linking framework between stimulus statistics, neural representations, and perceptual sensitivity, a distinct hallmark of efficient coding computations.
Affiliation(s)
- Federico Bolaños
- University of British Columbia, Neuroimaging and NeuroComputation Centre, Vancouver, BC, V6T, Canada
- Javier G Orlandi
- University of Calgary, Department of Physics and Astronomy, Calgary, AB, T2N 1N4, Canada.
- Ryo Aoki
- RIKEN Center for Brain Science, Laboratory for Neural Circuits and Behavior, Wakoshi, Japan
- Justin L Gardner
- Stanford University, Wu Tsai Neurosciences Institute, Stanford, CA, USA
- Andrea Benucci
- RIKEN Center for Brain Science, Laboratory for Neural Circuits and Behavior, Wakoshi, Japan.
- Queen Mary, University of London, School of Biological and Behavioral Science, London, E1 4NS, UK.
2.
Zimnik AJ, Cora Ames K, An X, Driscoll L, Lara AH, Russo AA, Susoy V, Cunningham JP, Paninski L, Churchland MM, Glaser JI. Identifying Interpretable Latent Factors with Sparse Component Analysis. bioRxiv 2024:2024.02.05.578988. PMID: 38370650; PMCID: PMC10871230; DOI: 10.1101/2024.02.05.578988.
Abstract
In many neural populations, the computationally relevant signals are posited to be a set of 'latent factors' - signals shared across many individual neurons. Understanding the relationship between neural activity and behavior requires the identification of factors that reflect distinct computational roles. Methods for identifying such factors typically require supervision, which can be suboptimal if one is unsure how (or whether) factors can be grouped into distinct, meaningful sets. Here, we introduce Sparse Component Analysis (SCA), an unsupervised method that identifies interpretable latent factors. SCA seeks factors that are sparse in time and occupy orthogonal dimensions. With these simple constraints, SCA facilitates surprisingly clear parcellations of neural activity across a range of behaviors. We applied SCA to motor cortex activity from reaching and cycling monkeys, single-trial imaging data from C. elegans, and activity from a multitask artificial network. SCA consistently identified sets of factors that were useful in describing network computations.
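The objective sketched in this abstract (temporally sparse factors on orthogonal dimensions) lends itself to a compact alternating-minimization demo. The following is a toy sketch, not the authors' released implementation: `toy_sca`, the penalty `lam`, and the update schedule are all illustrative assumptions. It soft-thresholds the temporal factors and solves an orthogonal Procrustes problem for the dimensions.

```python
import numpy as np

def soft_threshold(a, lam):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def toy_sca(X, k, lam=0.1, n_iter=200, seed=0):
    """Toy sparse component analysis (illustrative, not the paper's code):
    minimize ||X - Z W^T||_F^2 + lam * ||Z||_1  subject to  W^T W = I.
    X: (T, N) data. Returns Z (T, k) time-sparse factors, W (N, k) orthonormal dims.
    """
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((X.shape[1], k)))
    for _ in range(n_iter):
        # With orthonormal W, the Z-update is a soft-thresholded projection.
        Z = soft_threshold(X @ W, lam / 2.0)
        # The W-update is an orthogonal Procrustes problem, solved by SVD.
        U, _, Vt = np.linalg.svd(X.T @ Z, full_matrices=False)
        W = U @ Vt
    Z = soft_threshold(X @ W, lam / 2.0)  # final factors for the returned W
    return Z, W
```

On synthetic data with transient factors, the recovered `Z` is sparse in time while `W` stays orthonormal; sign and ordering of components are arbitrary, as in most factorizations.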
Affiliation(s)
- Andrew J Zimnik
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- K Cora Ames
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Xinyue An
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA
- Laura Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Allen Institute for Neural Dynamics, Allen Institute, Seattle, WA, USA
- Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Abigail A Russo
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Vladislav Susoy
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Liam Paninski
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Joshua I Glaser
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Department of Computer Science, Northwestern University, Evanston, IL, USA
3.
Nakai S, Kitanishi T, Mizuseki K. Distinct manifold encoding of navigational information in the subiculum and hippocampus. Sci Adv 2024; 10:eadi4471. PMID: 38295173; PMCID: PMC10830115; DOI: 10.1126/sciadv.adi4471.
Abstract
The subiculum (SUB) plays a crucial role in spatial navigation and encodes navigational information differently from the hippocampal CA1 area. However, the representation of subicular population activity remains unknown. Here, we investigated the neuronal population activity recorded extracellularly from the CA1 and SUB of rats performing T-maze and open-field tasks. The trajectory of population activity in both areas was confined to low-dimensional neural manifolds homeomorphic to external space. The manifolds conveyed position, speed, and future path information with higher decoding accuracy in the SUB than in the CA1. The manifolds exhibited common geometry across rats and regions for the CA1 and SUB and between tasks in the SUB. During post-task ripples in slow-wave sleep, population activity represented reward locations/events more frequently in the SUB than in the CA1. Thus, the CA1 and SUB encode information distinctly into the neural manifolds that underlie navigational information processing during wakefulness and sleep.
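To make the manifold idea concrete, here is a hypothetical numpy sketch (simulated tuning curves, not the paper's data or pipeline): population activity of idealized place cells on a circular track traces a ring that a few principal components capture, and position can be read off by nearest-neighbour matching of population vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_t = 50, 400
pos = np.linspace(0, 2 * np.pi, n_t, endpoint=False)      # positions on the track
centers = rng.uniform(0, 2 * np.pi, n_cells)              # preferred locations
# circular distance between each position and each cell's tuning centre
d = np.angle(np.exp(1j * (pos[:, None] - centers[None, :])))
rates = np.exp(-d ** 2 / (2 * 0.8 ** 2))                  # (n_t, n_cells) tuning curves

# The ring embeds in few dimensions: the top PCs capture most of the variance.
Xc = rates - rates.mean(axis=0)
evals = np.linalg.svd(Xc, compute_uv=False) ** 2
var_top3 = evals[:3].sum() / evals.sum()

# Decode held-out positions by nearest population vector among training times.
train, test = np.arange(0, n_t, 2), np.arange(1, n_t, 2)
dists = ((rates[test][:, None, :] - rates[train][None, :, :]) ** 2).sum(-1)
nearest = dists.argmin(axis=1)
decode_err = np.abs(np.angle(np.exp(1j * (pos[test] - pos[train][nearest]))))
```

The same logic (low-dimensional embedding plus a simple decoder on the manifold) underlies the quantitative comparisons in the abstract, though the paper's methods are more sophisticated.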
Affiliation(s)
- Shinya Nakai
- Department of Physiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka 545-8585, Japan
- Department of Physiology, Graduate School of Medicine, Osaka City University, Osaka 545-8585, Japan
- Takuma Kitanishi
- Department of Physiology, Graduate School of Medicine, Osaka City University, Osaka 545-8585, Japan
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Meguro, Tokyo 153-8902, Japan
- Komaba Institute for Science, The University of Tokyo, Meguro, Tokyo 153-8902, Japan
- PRESTO, Japan Science and Technology Agency (JST), Kawaguchi, Saitama 332-0012, Japan
- Kenji Mizuseki
- Department of Physiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka 545-8585, Japan
- Department of Physiology, Graduate School of Medicine, Osaka City University, Osaka 545-8585, Japan
4.
Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. bioRxiv 2024:2024.01.03.573543. PMID: 38260549; PMCID: PMC10802336; DOI: 10.1101/2024.01.03.573543.
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally-occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
5.
Gurnani H, Cayco Gajic NA. Signatures of task learning in neural representations. Curr Opin Neurobiol 2023; 83:102759. PMID: 37708653; DOI: 10.1016/j.conb.2023.102759.
Abstract
While neural plasticity has long been studied as the basis of learning, the growth of large-scale neural recording techniques provides a unique opportunity to study how learning-induced activity changes are coordinated across neurons within the same circuit. These distributed changes can be understood through an evolution of the geometry of neural manifolds and latent dynamics underlying new computations. In parallel, studies of multi-task and continual learning in artificial neural networks hint at a tradeoff between non-interference and compositionality as guiding principles to understand how neural circuits flexibly support multiple behaviors. In this review, we highlight recent findings from both biological and artificial circuits that together form a new framework for understanding task learning at the population level.
Affiliation(s)
- Harsha Gurnani
- Department of Biology, University of Washington, Seattle, WA, USA.
- N Alex Cayco Gajic
- Laboratoire de Neuroscience Cognitives, Ecole Normale Supérieure, Université PSL, Paris, France.
6.
Kim TD, Luo TZ, Can T, Krishnamurthy K, Pillow JW, Brody CD. Flow-field inference from neural data using deep recurrent networks. bioRxiv 2023:2023.11.14.567136. PMID: 38014290; PMCID: PMC10680687; DOI: 10.1101/2023.11.14.567136.
Abstract
Computations involved in processes such as decision-making, working memory, and motor control are thought to emerge from the dynamics governing the collective activity of neurons in large populations. But the estimation of these dynamics remains a significant challenge. Here we introduce Flow-field Inference from Neural Data using deep Recurrent networks (FINDR), an unsupervised deep learning method that can infer low-dimensional nonlinear stochastic dynamics underlying neural population activity. Using population spike train data from frontal brain regions of rats performing an auditory decision-making task, we demonstrate that FINDR outperforms existing methods in capturing the heterogeneous responses of individual neurons. We further show that FINDR can discover interpretable low-dimensional dynamics when it is trained to disentangle task-relevant and irrelevant components of the neural population activity. Importantly, the low-dimensional nature of the learned dynamics allows for explicit visualization of flow fields and attractor structures. We suggest FINDR as a powerful method for revealing the low-dimensional task-relevant dynamics of neural populations and their associated computations.
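The model class FINDR targets (low-dimensional stochastic latent dynamics driving Poisson spiking) can be written down in a few lines. This is a hedged sketch of the generative side only, with invented flow field, loading matrix, and parameters; the paper's contribution is the inference network, which is not reproduced here.

```python
import numpy as np

def simulate_latent_sde(f, z0, dt=0.01, n_steps=500, noise=0.1, seed=0):
    """Euler-Maruyama simulation of dz = f(z) dt + noise dW."""
    rng = np.random.default_rng(seed)
    z = np.empty((n_steps, len(z0)))
    z[0] = z0
    for t in range(1, n_steps):
        z[t] = z[t - 1] + f(z[t - 1]) * dt \
               + noise * np.sqrt(dt) * rng.standard_normal(len(z0))
    return z

# Invented 2-D flow field with attractors at (+1, 0) and (-1, 0), a caricature
# of decision dynamics; FINDR would infer such a field from spikes, here we posit it.
flow = lambda z: np.array([z[0] - z[0] ** 3, -z[1]])
z = simulate_latent_sde(flow, z0=np.array([0.3, 0.5]))

# Poisson spike counts read out from the latent state (loading matrix invented).
rng = np.random.default_rng(1)
C = 0.5 * rng.standard_normal((20, 2))
rates = np.exp(z @ C.T - 1.0)        # (n_steps, n_cells) firing rates
spikes = rng.poisson(rates * 0.01)   # spike counts in dt-sized bins
```

Because the dynamics are two-dimensional, the inferred flow field and its attractors can be plotted directly, which is the interpretability argument the abstract makes.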
Affiliation(s)
| | - Thomas Zhihao Luo
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
| | - Tankut Can
- School of Natural Sciences, Institute for Advanced Study, Princeton, NJ
| | - Kamesh Krishnamurthy
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Joseph Henry Laboratories of Physics, Princeton University, Princeton, NJ
| | - Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
| | - Carlos D Brody
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Howard Hughes Medical Institute, Princeton University, Princeton, NJ
7.
Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. PMID: 37794121; DOI: 10.1038/s41583-023-00740-7.
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
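Dynamical system reconstruction is easiest to see in its linear special case: fit x_{t+1} = A x_t to an observed trajectory by least squares, then treat the fitted map as a surrogate system that can be simulated and analysed. The RNN approaches in this Perspective generalize the same idea to nonlinear dynamics; the damped rotation below is an invented example, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.1
A_true = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])   # damped rotation

# Observe a noisy trajectory of the "true" system.
x = np.empty((300, 2))
x[0] = [1.0, 0.0]
for t in range(1, 300):
    x[t] = A_true @ x[t - 1] + 0.01 * rng.standard_normal(2)

# Reconstruction: least-squares fit of x_{t+1} = A x_t.
A_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T

# The fitted map is now a surrogate system: its eigenvalues reveal the
# rotation (complex pair) and the decay (|eigenvalue| < 1).
eigvals = np.linalg.eigvals(A_hat)
```

An RNN-based reconstruction replaces the matrix `A_hat` with a learned nonlinear map, but the validation logic (compare simulated surrogate dynamics against held-out data) is the same.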
Affiliation(s)
- Daniel Durstewitz
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany.
- Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany.
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany.
- Georgia Koppe
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Max Ingo Thurm
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
8.
Ayar EC, Heusser MR, Bourrelly C, Gandhi NJ. Distinct context- and content-dependent population codes in superior colliculus during sensation and action. Proc Natl Acad Sci U S A 2023; 120:e2303523120. PMID: 37748075; PMCID: PMC10556644; DOI: 10.1073/pnas.2303523120.
Abstract
Sensorimotor transformation is the process of first sensing an object in the environment and then producing a movement in response to that stimulus. For visually guided saccades, neurons in the superior colliculus (SC) emit a burst of spikes to register the appearance of stimulus, and many of the same neurons discharge another burst to initiate the eye movement. We investigated whether the neural signatures of sensation and action in SC depend on context. Spiking activity along the dorsoventral axis was recorded with a laminar probe as Rhesus monkeys generated saccades to the same stimulus location in tasks that require either executive control to delay saccade onset until permission is granted or the production of an immediate response to a target whose onset is predictable. Using dimensionality reduction and discriminability methods, we show that the subspaces occupied during the visual and motor epochs were both distinct within each task and differentiable across tasks. Single-unit analyses, in contrast, show that the movement-related activity of SC neurons was not different between tasks. These results demonstrate that statistical features in neural activity of simultaneously recorded ensembles provide more insight than single neurons. They also indicate that cognitive processes associated with task requirements are multiplexed in SC population activity during both sensation and action and that downstream structures could use this activity to extract context. Additionally, the entire manifolds associated with sensory and motor responses, respectively, may be larger than the subspaces explored within a certain set of experiments.
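A standard way to quantify how distinct two neural subspaces are (one common choice, not necessarily the exact metric used in this study) is via principal angles between the subspaces, computed from the SVD of the product of their orthonormal bases:

```python
import numpy as np

def principal_angles(X, Y):
    """Principal angles (radians) between the column spaces of X and Y.
    Angles near 0 mean shared directions; angles near pi/2 mean the
    subspaces are close to orthogonal."""
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))
```

Applied to, say, the top PCA dimensions of visual-epoch versus motor-epoch activity, large principal angles would indicate the kind of distinct subspaces the abstract describes.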
Affiliation(s)
- Eve C. Ayar
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA 15213
- Program in Neural Computation, Carnegie Mellon University, Pittsburgh, PA 15213
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15213
- Michelle R. Heusser
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15213
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Clara Bourrelly
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15213
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Neeraj J. Gandhi
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA 15213
- Program in Neural Computation, Carnegie Mellon University, Pittsburgh, PA 15213
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15213
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA 15213
9.
Kass RE, Bong H, Olarinre M, Xin Q, Urban KN. Identification of interacting neural populations: methods and statistical considerations. J Neurophysiol 2023; 130:475-496. PMID: 37465897; PMCID: PMC10642974; DOI: 10.1152/jn.00131.2023.
Abstract
As improved recording technologies have created new opportunities for neurophysiological investigation, emphasis has shifted from individual neurons to multiple populations that form circuits, and it has become important to provide evidence of cross-population coordinated activity. We review various methods for doing so, placing them in six major categories while avoiding technical descriptions and instead focusing on high-level motivations and concerns. Our aim is to indicate what the methods can achieve and the circumstances under which they are likely to succeed. Toward this end, we include a discussion of four cross-cutting issues: the definition of neural populations, trial-to-trial variability and Poisson-like noise, time-varying dynamics, and causality.
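One of the method families such reviews cover is canonical correlation analysis (CCA), which finds maximally correlated dimensions across two simultaneously recorded populations. A minimal numpy sketch follows; the ridge term `reg` is an assumption added for numerical stability, not something prescribed by the review.

```python
import numpy as np

def cca_correlations(X, Y, reg=1e-6):
    """Canonical correlations between two population recordings (rows = time).
    Whiten each population, then take singular values of the cross-covariance."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))   # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
    return np.clip(s, 0.0, 1.0)
```

As the review's cross-cutting issues warn, high canonical correlations are evidence of coordinated activity but not of causality, and sample-size effects inflate them when dimensionality is large relative to the number of trials.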
Affiliation(s)
- Robert E Kass
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Department of Statistics & Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Heejong Bong
- Department of Statistics &amp; Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Motolani Olarinre
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Department of Statistics &amp; Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Qi Xin
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Department of Statistics &amp; Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Konrad N Urban
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Department of Statistics &amp; Data Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
10.
Kirchherr S, Mildiner Moraga S, Coudé G, Bimbi M, Ferrari PF, Aarts E, Bonaiuto JJ. Bayesian multilevel hidden Markov models identify stable state dynamics in longitudinal recordings from macaque primary motor cortex. Eur J Neurosci 2023; 58:2787-2806. PMID: 37382060; DOI: 10.1111/ejn.16065.
Abstract
Neural populations, rather than single neurons, may be the fundamental unit of cortical computation. Analysing chronically recorded neural population activity is challenging not only because of the high dimensionality of activity but also because of changes in the signal that may or may not be due to neural plasticity. Hidden Markov models (HMMs) are a promising technique for analysing such data in terms of discrete latent states, but previous approaches have not considered the statistical properties of neural spiking data, have not been adaptable to longitudinal data, or have not modelled condition-specific differences. We present a multilevel Bayesian HMM that addresses these shortcomings by incorporating multivariate Poisson log-normal emission probability distributions, multilevel parameter estimation and trial-specific condition covariates. We applied this framework to multi-unit neural spiking data recorded using chronically implanted multi-electrode arrays from macaque primary motor cortex during a cued reaching, grasping and placing task. We show that, in line with previous work, the model identifies latent neural population states which are tightly linked to behavioural events, despite the model being trained without any information about event timing. The association between these states and corresponding behaviour is consistent across multiple days of recording. Notably, this consistency is not observed in the case of a single-level HMM, which fails to generalise across distinct recording sessions. The utility and stability of this approach are demonstrated using a previously learned task, but this multilevel Bayesian HMM framework would be especially suited for future studies of long-term plasticity in neural populations.
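For orientation, the single-level, non-Bayesian core of such models is a hidden Markov model with Poisson spike-count emissions, whose most likely state path can be decoded with the Viterbi algorithm. The sketch below is that simpler baseline only, not the multilevel Poisson log-normal model the paper develops.

```python
import numpy as np

def viterbi_poisson(counts, log_T, rates, log_pi):
    """MAP state sequence for an HMM with independent Poisson emissions.
    counts: (n_t, n_cells) spike counts; log_T: (n_states, n_states) log
    transition matrix (from, to); rates: (n_states, n_cells) Poisson means.
    The count-factorial term of the Poisson log-pmf is identical across
    states, so it is dropped from the per-state log-likelihoods."""
    n_t, n_states = len(counts), len(rates)
    ll = counts @ np.log(rates).T - rates.sum(axis=1)   # (n_t, n_states)
    delta = np.empty((n_t, n_states))
    psi = np.zeros((n_t, n_states), dtype=int)
    delta[0] = log_pi + ll[0]
    for t in range(1, n_t):
        scores = delta[t - 1][:, None] + log_T          # (from, to)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + ll[t]
    path = np.empty(n_t, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n_t - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]]
    return path
```

The paper's multilevel extension shares parameters across sessions, which is what lets the state-behaviour mapping stay consistent across recording days where this single-level baseline does not generalise.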
Affiliation(s)
- Sebastien Kirchherr
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Gino Coudé
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Inovarion, Paris, France
- Marco Bimbi
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Pier F Ferrari
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Emmeke Aarts
- Department of Methodology and Statistics, Universiteit Utrecht, Utrecht, Netherlands
- James J Bonaiuto
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
11.
Kikumoto A, Bhandari A, Shibata K, Badre D. A Transient High-dimensional Geometry Affords Stable Conjunctive Subspaces for Efficient Action Selection. bioRxiv 2023:2023.06.09.544428. PMID: 37333209; PMCID: PMC10274903; DOI: 10.1101/2023.06.09.544428.
Abstract
Flexible action selection requires cognitive control mechanisms capable of mapping the same inputs to diverse output actions depending on goals and contexts. How the brain encodes information to enable this capacity remains one of the longstanding and fundamental problems in cognitive neuroscience. From a neural state-space perspective, solving this problem requires a control representation that can disambiguate similar input neural states, making task-critical dimensions separable depending on the context. Moreover, for action selection to be robust and time-invariant, control representations must be stable in time, thereby enabling efficient readout by downstream processing units. Thus, an ideal control representation should leverage geometry and dynamics that maximize the separability and stability of neural trajectories for task computations. Here, using novel EEG decoding methods, we investigated how the geometry and dynamics of control representations constrain flexible action selection in the human brain. Specifically, we tested the hypothesis that encoding a temporally stable conjunctive subspace that integrates stimulus, response, and context (i.e., rule) information in a high-dimensional geometry achieves the separability and stability needed for context-dependent action selection. Human participants performed a task that requires context-dependent action selection based on pre-instructed rules. Participants were cued to respond immediately at varying intervals following stimulus presentation, which forced responses at different states in neural trajectories. We discovered that in the moments before successful responses, there was a transient expansion of representational dimensionality that separated conjunctive subspaces. Further, we found that the dynamics stabilized in the same time window, and that the timing of entry into this stable and high-dimensional state predicted the quality of response selection on individual trials. These results establish the neural geometry and dynamics the human brain needs for flexible control over behavior.
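"Representational dimensionality" in population data is often summarized with the participation ratio of the covariance eigenvalue spectrum. The authors' EEG decoding analyses use their own measures, so the function below is only a generic illustration of the quantity, not the paper's method.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of activity X (samples x features):
    (sum lambda_i)^2 / sum lambda_i^2 over covariance eigenvalues.
    Close to 1 when one dimension dominates; close to n_features
    when variance is spread isotropically."""
    lam = np.linalg.eigvalsh(np.cov(X.T))
    lam = np.clip(lam, 0.0, None)   # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()
```

Computed in a sliding window, a measure like this would register the kind of transient dimensionality expansion the abstract reports.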
Affiliation(s)
- Atsushi Kikumoto
- Department of Cognitive, Linguistic, and Psychological Sciences
- RIKEN Center for Brain Science, Wako, Saitama, Japan
- David Badre
- Department of Cognitive, Linguistic, and Psychological Sciences
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island, USA
12.
Gharesi N, Luneau L, Kalaska JF, Baillet S. Evaluation of abstract rule-based associations in the human premotor cortex during passive observation. bioRxiv 2023:2023.06.06.543581. PMID: 37333191; PMCID: PMC10274620; DOI: 10.1101/2023.06.06.543581.
Abstract
Decision-making often manifests in behavior, typically yielding overt motor actions. This complex process requires the registration of sensory information with one's internal representation of the current context, before a categorical judgment of the most appropriate motor behavior can be issued. The concept of embodied decision-making encapsulates this sequence of complex processes, whereby behaviorally salient information from the environment is represented in an abstracted space of potential motor actions rather than only in an abstract cognitive "decision" space. Theoretical foundations and some empirical evidence support the involvement of premotor cortical circuits in embodied cognitive functions. Animal models show that premotor circuits participate in the registration and evaluation of actions performed by peers in social situations, that is, prior to controlling one's voluntary movements guided by arbitrary stimulus-response rules. However, such evidence from human data is currently limited. Here we used time-resolved magnetoencephalography imaging to characterize activations of the premotor cortex as human participants observed arbitrary, non-biological visual stimuli that either respected or violated a simple stimulus-response association rule. The participants had learned this rule previously, either actively, by performing a motor task (active learning), or passively, by observing a computer perform the same task (passive learning). We discovered that the human premotor cortex is activated during the passive observation of the correct execution of a sequence of events according to a previously learned rule. Premotor activation also differs when the subjects observe incorrect stimulus sequences. These premotor effects are present even when the observed events are of a non-motor, abstract nature, and even when the stimulus-response association rule was learned via passive observation of a computer agent performing the task, without requiring overt motor actions from the human participant. We found evidence of these phenomena by tracking cortical beta-band signaling in temporal alignment with the observation of task events and behavior. We conclude that premotor cortical circuits that are typically engaged during voluntary motor behavior are also involved in the interpretation of events of a non-ecological, unfamiliar nature but related to a learned abstract rule. As such, the present study provides the first evidence of neurophysiological processes of embodied decision-making in human premotor circuits when the observed events do not involve motor actions of a third party.
Affiliation(s)
- Niloofar Gharesi
- McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, Canada
| | - Lucie Luneau
- Groupe de recherche sur la signalisation neuronale et la circuiterie, Département de Neurosciences, Université de Montréal, Montréal, QC, Canada
| | - John F Kalaska
- Groupe de recherche sur la signalisation neuronale et la circuiterie, Département de Neurosciences, Université de Montréal, Montréal, QC, Canada
| | - Sylvain Baillet
- McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, Canada
| |
|
13
|
Langdon C, Genkin M, Engel TA. A unifying perspective on neural manifolds and circuits for cognition. Nat Rev Neurosci 2023; 24:363-377. [PMID: 37055616 PMCID: PMC11058347 DOI: 10.1038/s41583-023-00693-x] [Citation(s) in RCA: 19] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/06/2023] [Indexed: 04/15/2023]
Abstract
Two different perspectives have informed efforts to explain the link between the brain and behaviour. One approach seeks to identify neural circuit elements that carry out specific functions, emphasizing connectivity between neurons as a substrate for neural computations. Another approach centres on neural manifolds - low-dimensional representations of behavioural signals in neural population activity - and suggests that neural computations are realized by emergent dynamics. Although manifolds reveal an interpretable structure in heterogeneous neuronal activity, finding the corresponding structure in connectivity remains a challenge. We highlight examples in which establishing the correspondence between low-dimensional activity and connectivity has been possible, unifying the neural manifold and circuit perspectives. This relationship is conspicuous in systems in which the geometry of neural responses mirrors their spatial layout in the brain, such as the fly navigational system. Furthermore, we describe evidence that, in systems in which neural responses are heterogeneous, the circuit comprises interactions between activity patterns on the manifold via low-rank connectivity. We suggest that unifying the manifold and circuit approaches is important if we are to be able to causally test theories about the neural computations that underlie behaviour.
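The correspondence between low-rank connectivity and low-dimensional manifolds described above can be illustrated with a minimal sketch (this is not the review's own model; the network size, gain, and rank-one structure below are illustrative assumptions): a recurrent network whose connectivity is a rank-one outer product confines its population activity to an essentially one-dimensional manifold, regardless of the number of neurons.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, dt = 200, 2000, 0.05
m = rng.standard_normal(N)
J = 3.0 * np.outer(m, m) / N          # rank-1 ("low-rank") connectivity

def simulate(x0):
    """Integrate dx/dt = -x + J tanh(x) with forward Euler."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (-x + J @ np.tanh(x))
    return x

# trials started on either side, so both attractors on the manifold are visited
finals = np.array([simulate(s * 0.1 * m + 0.1 * rng.standard_normal(N))
                   for s in [1, -1] * 10])

# PCA: nearly all across-trial variance lies along a single dimension (m)
X = finals - finals.mean(0)
evals = np.linalg.eigvalsh(X.T @ X)
pc1_frac = evals[-1] / evals.sum()    # fraction of variance on PC1
```

Hundreds of heterogeneous single-neuron responses, yet one latent dimension: the manifold structure is recovered directly from the circuit's connectivity vector, which is the kind of activity-connectivity correspondence the review highlights.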
Affiliation(s)
- Christopher Langdon
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
| | - Mikhail Genkin
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
| | - Tatiana A Engel
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA.
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA.
| |
|
14
|
DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023; 111:631-649.e10. [PMID: 36630961 PMCID: PMC10118067 DOI: 10.1016/j.neuron.2022.12.007] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Revised: 06/17/2022] [Accepted: 12/05/2022] [Indexed: 01/12/2023]
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
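The paper's central object, continuously valued factors underlying discrete and variable spikes, can be illustrated with a hedged sketch (this is not the authors' training procedure, which builds recurrent spiking networks; the factor dimensionality, loading matrix, and Poisson spiking below are illustrative assumptions): smooth latent factors drive stochastic spike counts, and the factors can be read back out of the spikes alone.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, K = 500, 100, 2                                    # time bins, neurons, factors
t = np.linspace(0, 4 * np.pi, T)
factors = np.stack([np.sin(t), np.cos(t / 2)], axis=1)   # smooth latent factors
W = 0.5 * rng.standard_normal((K, N))                    # factor-to-neuron loadings
rates = np.exp(factors @ W)                              # continuously valued rates
spikes = rng.poisson(rates)                              # discrete, variable spikes

# recover the factors from spikes: smooth, take log-rates, invert the loadings
kernel = np.ones(11) / 11.0
smoothed = np.array([np.convolve(spikes[:, i], kernel, mode="same")
                     for i in range(N)]).T
log_rates = np.log(smoothed + 0.5)                       # +0.5 guards against log(0)
recovered, *_ = np.linalg.lstsq(W.T, log_rates.T, rcond=None)
corr = np.corrcoef(recovered[0], factors[:, 0])[0, 1]    # close to 1
```

Even though each neuron spikes stochastically, pooling across the population recovers the reliable low-dimensional factors, the property the paper embeds in spiking network models.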
Affiliation(s)
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
| | - David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
| | - L F Abbott
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
| | - Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
| |
|
15
|
Prilutski Y, Livneh Y. Physiological Needs: Sensations and Predictions in the Insular Cortex. Physiology (Bethesda) 2023; 38:0. [PMID: 36040864 DOI: 10.1152/physiol.00019.2022] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023] Open
Abstract
Physiological needs create powerful motivations (e.g., thirst and hunger). Studies in humans and animal models have implicated the insular cortex in the neural regulation of physiological needs and need-driven behavior. We review prominent mechanistic models of how the insular cortex might achieve this regulation and present a conceptual and analytical framework for testing these models in healthy and pathological conditions.
Affiliation(s)
- Yael Prilutski
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
| | - Yoav Livneh
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
| |
|
16
|
Association between different sensory modalities based on concurrent time series data obtained by a collaborative reservoir computing model. Sci Rep 2023; 13:173. [PMID: 36600034 DOI: 10.1038/s41598-023-27385-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Accepted: 01/02/2023] [Indexed: 01/06/2023] Open
Abstract
Humans perceive the external world by integrating information from different modalities obtained through the sensory organs. However, the underlying mechanism remains unclear and has been a subject of widespread interest in psychology and brain science. A model using two reservoir computing systems (a type of recurrent neural network), each trained to mimic the other's output, can detect stimulus patterns that repeatedly appear in a time-series signal. We applied this model to identify specific patterns that co-occur between information from different modalities. The model self-organized around specific fluctuation patterns that co-occurred between different modalities and could detect each fluctuation pattern. Additionally, similarly to the case where perception is influenced by the synchronous/asynchronous presentation of multimodal stimuli, the model failed to work correctly for signals that did not co-occur with the corresponding fluctuation patterns. Recent experimental studies have suggested that direct interaction between different sensory systems is important for multisensory integration, in addition to top-down control from higher brain regions such as the association cortex. Because several patterns of interaction between sensory modules can be incorporated into the employed model, we were able to compare their performance; the original version of the model incorporated such an interaction as the teaching signals for learning. Evaluating the original and alternative models, we found that the original, in which the outputs of appropriately learned sensory modules are fed back to each other, performed best among the examined patterns of interaction. The proposed model incorporates information encoded by the dynamic state of the neural population and the interactions between different sensory modules, both based on recent experimental observations; this allowed us to study how the temporal relationship and frequency of occurrence of multisensory signals influence sensory integration, as well as the nature of the interaction between different sensory signals.
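A simplified, one-directional sketch of the reservoir-computing idea (the paper uses two mutually trained reservoirs; the single ridge-trained readout, network size, and signals below are illustrative assumptions): a reservoir driven by one modality learns to reproduce a co-occurring signal from another modality, and the learned association breaks down when the signals no longer co-occur.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 300, 3000
latent = np.sin(np.linspace(0, 60, T))                 # shared fluctuation pattern
mod_a = latent + 0.1 * rng.standard_normal(T)          # modality A (e.g., visual)
mod_b = latent + 0.1 * rng.standard_normal(T)          # modality B (e.g., auditory)

W = 0.9 * rng.standard_normal((N, N)) / np.sqrt(N)     # reservoir recurrence
w_in = rng.standard_normal(N)                          # input weights

def run_reservoir(u):
    """Drive a tanh echo-state reservoir with input u, return the state history."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for i, ui in enumerate(u):
        x = np.tanh(W @ x + w_in * ui)
        states[i] = x
    return states

X = run_reservoir(mod_a)
# ridge-regression readout: reservoir A is trained to mimic modality B's signal
w_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(N), X.T @ mod_b)
pred = X @ w_out
r_cooccur = np.corrcoef(pred, mod_b)[0, 1]             # high: signals co-occur
r_shuffled = np.corrcoef(pred, rng.permutation(mod_b))[0, 1]  # near zero
```

The readout captures modality B only through the fluctuation pattern shared with modality A; destroying the temporal co-occurrence (the shuffle) destroys the association, echoing the model's failure on non-co-occurring signals.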
|
17
|
Christensen AJ, Ott T, Kepecs A. Cognition and the single neuron: How cell types construct the dynamic computations of frontal cortex. Curr Opin Neurobiol 2022; 77:102630. [PMID: 36209695 PMCID: PMC10375540 DOI: 10.1016/j.conb.2022.102630] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 08/18/2022] [Accepted: 08/23/2022] [Indexed: 01/10/2023]
Abstract
Frontal cortex is thought to underlie many advanced cognitive capacities, from self-control to long-term planning. Reflecting these diverse demands, frontal neural activity is notoriously idiosyncratic, with tuning properties that correlate with a seemingly endless number of behavioral and task features. This menagerie of tuning has made it difficult to extract organizing principles that govern frontal neural activity. Here, we contrast two successful yet seemingly incompatible approaches that have begun to address this challenge. Inspired by the indecipherability of single-neuron tuning, the first approach casts frontal computations as dynamical trajectories traversed by arbitrary mixtures of neurons. The second approach, by contrast, attempts to explain the functional diversity of frontal activity with the biological diversity of cortical cell types. Motivated by the recent discovery of functional clusters in frontal neurons, we propose a consilience between these population-level and cell-type-specific approaches to neural computation, advancing the conjecture that evolutionarily inherited cell-type constraints create the scaffold within which frontal population dynamics must operate.
Affiliation(s)
- Amelia J Christensen
- Department of Neuroscience and Department of Psychiatry, Washington University in St. Louis, St. Louis, MO 63110, USA.
| | - Torben Ott
- Department of Neuroscience and Department of Psychiatry, Washington University in St. Louis, St. Louis, MO 63110, USA; Bernstein Center for Computational Neuroscience Berlin, Humboldt University of Berlin, Berlin, Germany.
| | - Adam Kepecs
- Department of Neuroscience and Department of Psychiatry, Washington University in St. Louis, St. Louis, MO 63110, USA.
| |
|
18
|
Driscoll LN, Duncker L, Harvey CD. Representational drift: Emerging theories for continual learning and experimental future directions. Curr Opin Neurobiol 2022; 76:102609. [PMID: 35939861 DOI: 10.1016/j.conb.2022.102609] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 06/08/2022] [Accepted: 06/23/2022] [Indexed: 11/03/2022]
Abstract
Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large-scale changes over days and weeks, a phenomenon called representational drift. Here, we highlight recent observations of drift, how drift is unlikely to be explained by experimental confounds, and how the brain can likely compensate for drift to allow stable computation. We propose that drift might play important roles in neural computation by enabling continual learning, both for separating and for relating memories that occur at distinct times. Finally, we present an outlook on future experimental directions that are needed to further characterize drift and to test emerging theories of drift's role in computation.
Affiliation(s)
- Laura N Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
| | - Lea Duncker
- Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA.
| | | |
|