1. Rajalingham R, Sohn H, Jazayeri M. Dynamic tracking of objects in the macaque dorsomedial frontal cortex. Nat Commun 2025;16:346. PMID: 39746908; PMCID: PMC11696028; DOI: 10.1038/s41467-024-54688-y.
Abstract
A central tenet of cognitive neuroscience is that humans build an internal model of the external world and use mental simulation of the model to perform physical inferences. Decades of human experiments have shown that behaviors in many physical reasoning tasks are consistent with predictions from the mental simulation theory. However, evidence for the defining feature of mental simulation - that neural population dynamics reflect simulations of physical states in the environment - is limited. We test the mental simulation hypothesis by combining a naturalistic ball-interception task, large-scale electrophysiology in non-human primates, and recurrent neural network modeling. We find that neurons in the monkeys' dorsomedial frontal cortex (DMFC) represent task-relevant information about the ball position in a multiplexed fashion. At a population level, the activity pattern in DMFC comprises a low-dimensional neural embedding that tracks the ball both when it is visible and invisible, serving as a neural substrate for mental simulation. A systematic comparison of different classes of task-optimized RNN models with the DMFC data provides further evidence supporting the mental simulation hypothesis. Our findings provide evidence that neural dynamics in the frontal cortex are consistent with internal simulation of external states in the environment.
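The decoding analysis this abstract describes can be pictured with a minimal sketch: a linear readout of ball position fit on visible-epoch activity and tested during occlusion. The data and names below are synthetic stand-ins, not the paper's DMFC recordings or pipeline.

```python
# Sketch: linear readout of ball position from population activity, fit on the
# visible epoch and tested during occlusion. Synthetic stand-in data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_neurons, n_time = 200, 500
ball_pos = np.cumsum(rng.normal(0, 0.1, n_time))           # simulated trajectory
tuning = rng.normal(0, 1, n_neurons)                        # per-neuron position tuning
rates = np.outer(ball_pos, tuning) + rng.normal(0, 1, (n_time, n_neurons))

visible = np.arange(n_time) < 300                           # ball visible early, occluded late
decoder = Ridge(alpha=1.0).fit(rates[visible], ball_pos[visible])
pred = decoder.predict(rates[~visible])                     # readout while "occluded"

r = np.corrcoef(pred, ball_pos[~visible])[0, 1]
print(f"decoded-vs-true correlation during occlusion: {r:.2f}")
```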
Affiliation(s)
- Rishi Rajalingham: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Reality Labs, Meta, 390 9th Ave, New York, NY, USA
- Hansem Sohn: Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Republic of Korea; Department of Biomedical Engineering, Sungkyunkwan University (SKKU), Suwon, Republic of Korea
- Mehrdad Jazayeri: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; Howard Hughes Medical Institute, Massachusetts Institute of Technology, Cambridge, MA, USA
2. Bardella G, Franchini S, Pani P, Ferraina S. Lattice physics approaches for neural networks. iScience 2024;27:111390. PMID: 39679297; PMCID: PMC11638618; DOI: 10.1016/j.isci.2024.111390.
Abstract
Modern neuroscience has evolved into a frontier field that draws on numerous disciplines, resulting in the flourishing of novel conceptual frames primarily inspired by physics and complex systems science. Contributing in this direction, we recently introduced a mathematical framework to describe the spatiotemporal interactions of systems of neurons using lattice field theory, the reference paradigm for theoretical particle physics. In this note, we provide a concise summary of the basics of the theory, aiming to be intuitive to the interdisciplinary neuroscience community. We contextualize our methods, illustrating how to readily connect the parameters of our formulation to experimental variables using well-known renormalization procedures. This synopsis yields the key concepts needed to describe neural networks using lattice physics. Such classes of methods are attention-worthy in an era of blistering improvements in numerical computations, as they can facilitate relating the observation of neural activity to generative models underpinned by physical principles.
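For orientation, the central object in such a framework is a lattice action. Below is the generic Euclidean action for a scalar field on a lattice (the textbook φ⁴ form); it is included only as a reference point, and the paper's neural formulation may parameterize the interactions differently.

```latex
S[\phi] \;=\; \sum_{x}\Big[\tfrac{1}{2}\sum_{\mu=1}^{d}\big(\phi(x+\hat{\mu})-\phi(x)\big)^{2}
\;+\;\tfrac{m^{2}}{2}\,\phi(x)^{2}\;+\;\lambda\,\phi(x)^{4}\Big]
```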
Affiliation(s)
- Giampiero Bardella: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Simone Franchini: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Pierpaolo Pani: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Stefano Ferraina: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
3. Serrano-Fernández L, Beirán M, Romo R, Parga N. Representation of a perceptual bias in the prefrontal cortex. Proc Natl Acad Sci U S A 2024;121:e2312831121. PMID: 39636858; DOI: 10.1073/pnas.2312831121.
Abstract
Perception is influenced by sensory stimulation, prior knowledge, and contextual cues, which collectively contribute to the emergence of perceptual biases. However, the precise neural mechanisms underlying these biases remain poorly understood. This study aims to address this gap by analyzing neural recordings from the prefrontal cortex (PFC) of monkeys performing a vibrotactile frequency discrimination task. Our findings provide empirical evidence supporting the hypothesis that perceptual biases can be reflected in the neural activity of the PFC. We found that the state-space trajectories of PFC neuronal activity encoded a warped representation of the first frequency presented during the task. Remarkably, this distorted representation of the frequency aligned with the predictions of its Bayesian estimator. The identification of these neural correlates expands our understanding of the neural basis of perceptual biases and highlights the involvement of the PFC in shaping perceptual experiences. Similar analyses could be employed in other delayed comparison tasks and in various brain regions to explore where and how neural activity reflects perceptual biases during different stages of the trial.
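The Bayesian-estimator prediction mentioned above can be made concrete with a small sketch: under a Gaussian prior over stimulus frequency and Gaussian sensory noise (hypothetical numbers below, not the study's fitted parameters), the posterior-mean estimate is contracted toward the prior mean.

```python
# Sketch: posterior-mean estimation contracts percepts toward the prior mean.
import numpy as np

prior_mean, prior_sd = 22.0, 6.0     # hypothetical frequency statistics (Hz)
sensory_sd = 4.0                     # hypothetical sensory noise (Hz)

def bayes_estimate(f_obs):
    # Gaussian prior x Gaussian likelihood -> posterior mean is a
    # reliability-weighted average of the observation and the prior mean.
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)
    return w * f_obs + (1 - w) * prior_mean

for f in (10.0, 22.0, 34.0):
    print(f"true {f:5.1f} Hz -> estimate {bayes_estimate(f):5.1f} Hz")
# Low frequencies are overestimated and high ones underestimated: the bias
# pattern that a warped neural representation of the first frequency could encode.
```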
Affiliation(s)
- Luis Serrano-Fernández: Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Manuel Beirán: Center for Theoretical Neuroscience, Department of Neuroscience, Zuckerman Institute, Columbia University, New York, NY 10027
- Néstor Parga: Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
4. Posani L, Wang S, Muscinelli S, Paninski L, Fusi S. Rarely categorical, always high-dimensional: how the neural code changes along the cortical hierarchy. bioRxiv [preprint] 2024:2024.11.15.623878. PMID: 39605683; PMCID: PMC11601379; DOI: 10.1101/2024.11.15.623878.
Abstract
The brain is highly structured both at anatomical and functional levels. However, within individual brain areas, neurons often exhibit very diverse and seemingly disorganized responses. A more careful analysis shows that these neurons can sometimes be grouped together into specialized subpopulations (categorical representations). Organization can also be found at the level of the representational geometry in the activity space, typically in the form of low-dimensional structures. It is still unclear how the geometry in the activity space and the structure of the response profiles of individual neurons are related. Here, we systematically analyzed the geometric and selectivity structure of neural populations from more than 40 cortical regions in mice performing a decision-making task (IBL public Brainwide Map data set). We used a reduced-rank regression approach to quantify the selectivity profiles of single neurons and multiple measures of dimensionality to characterize the representational geometry of task variables. We then related these measures within single areas to the position of each area in the sensory-cognitive cortical hierarchy. Our findings reveal that only a few regions (in primary sensory areas) are categorical. When multiple brain areas are considered, we observe clustering that reflects the brain's large-scale organization. The representational geometry of task variables also changed along the cortical hierarchy, with higher dimensionality in cognitive regions. These trends were explained by analytical computations linking the maximum dimensionality of representational geometry to the clustering of selectivity at the single neuron level. Finally, we computed the shattering dimensionality (SD), a measure of the linear separability of neural activity vectors; remarkably, the SD remained near maximal across all regions, suggesting that the observed variability in the selectivity profiles allows neural populations to maintain high computational flexibility. These results provide a new mathematical and empirical perspective on selectivity and representation geometry in the cortical neural code.
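A minimal sketch of the shattering dimensionality (SD) measure mentioned above, assuming synthetic condition-structured data rather than the IBL recordings: SD is proxied here by the mean cross-validated accuracy of linear classifiers over all balanced dichotomies of conditions.

```python
# Sketch: shattering dimensionality as mean linear separability over dichotomies.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_cond, n_neurons, n_trials = 8, 50, 40
centers = rng.normal(0, 1, (n_cond, n_neurons))            # one mean per condition
X = np.repeat(centers, n_trials, axis=0) \
    + rng.normal(0, 0.5, (n_cond * n_trials, n_neurons))   # trial-level noise
cond = np.repeat(np.arange(n_cond), n_trials)

accs = []
for pos in combinations(range(n_cond), n_cond // 2):       # 70 balanced dichotomies
    y = np.isin(cond, pos).astype(int)
    accs.append(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
print(f"shattering dimensionality proxy (mean accuracy): {np.mean(accs):.2f}")
```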
Affiliation(s)
- Lorenzo Posani: Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Shuqi Wang: School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland
- Samuel Muscinelli: Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Liam Paninski: Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Stefano Fusi: Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
5. Zheng J, Meister M. The unbearable slowness of being: Why do we live at 10 bits/s? Neuron 2024:S0896-6273(24)00808-0. PMID: 39694032; DOI: 10.1016/j.neuron.2024.11.008.
Abstract
This article is about the neural conundrum behind the slowness of human behavior. The information throughput of a human being is about 10 bits/s. In comparison, our sensory systems gather data at ∼10⁹ bits/s. The stark contrast between these numbers remains unexplained and touches on fundamental aspects of brain function: what neural substrate sets this speed limit on the pace of our existence? Why does the brain need billions of neurons to process 10 bits/s? Why can we only think about one thing at a time? The brain seems to operate in two distinct modes: the "outer" brain handles fast high-dimensional sensory and motor signals, whereas the "inner" brain processes the reduced few bits needed to control behavior. Plausible explanations exist for the large neuron numbers in the outer brain, but not for the inner brain, and we propose new research directions to remedy this.
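The contrast of throughputs in the abstract is easy to reproduce as back-of-envelope arithmetic; the sketch below uses Shannon's classic ~1 bit/character estimate for English and hypothetical typing figures.

```python
# Back-of-envelope check of the abstract's numbers.
chars_per_word = 5
words_per_min = 120                 # fast typist (hypothetical)
bits_per_char = 1.0                 # Shannon's entropy estimate for English text
behavioral_rate = words_per_min * chars_per_word * bits_per_char / 60
sensory_rate = 1e9                  # order of magnitude quoted in the abstract

print(f"behavioral throughput ~ {behavioral_rate:.0f} bits/s")
print(f"sensory-to-behavioral ratio ~ {sensory_rate / behavioral_rate:.0e}")
```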
Affiliation(s)
- Jieyu Zheng: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Markus Meister: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
6. St-Yves G, Kay K, Naselaris T. Variation in the geometry of concept manifolds across human visual cortex. bioRxiv [preprint] 2024:2024.11.26.625280. PMID: 39651255; PMCID: PMC11623644; DOI: 10.1101/2024.11.26.625280.
Abstract
Brain activity patterns in high-level visual cortex support accurate linear classification of visual concepts (e.g., objects or scenes). It has long been appreciated that the accuracy of linear classification in any brain area depends on the geometry of its concept manifolds: sets of brain activity patterns that encode images of a concept. However, it is unclear how the geometry of concept manifolds differs between regions of visual cortex that support accurate classification and those that don't, or how it differs between visual cortex and deep neural networks (DNNs). We estimated geometric properties of concept manifolds that, per a recent theory, directly determine the accuracy of simple "few-shot" linear classifiers. Using a large fMRI dataset, we show that variation in classification accuracy across human visual cortex is driven by variation in a single geometric property: the distance between manifold centers ("geometric Signal"). In contrast, variation in classification accuracy across most DNN layers is driven by an increase in the effective number of manifold dimensions ("Dimensionality"). Despite this difference in the geometric properties that affect few-shot classification performance in the brain and DNNs, we find that Signal and Dimensionality are strongly and negatively correlated: when Signal increases across brain regions or DNN layers, Dimensionality decreases, and vice versa. We conclude that visual cortex and DNNs deploy different geometric strategies for accurate linear classification of concepts, even though both are subject to the same constraint.
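A minimal sketch of the two geometric quantities contrasted above, computed on synthetic activity patterns: "Signal" as the distance between manifold centers and "Dimensionality" as the participation ratio of a manifold's covariance spectrum (one common estimator; the paper's few-shot-theory estimators may differ).

```python
# Sketch: "Signal" (distance between manifold centers) vs "Dimensionality"
# (participation ratio of a manifold's covariance spectrum). Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
manifold_a = rng.normal(0, 1, (500, 100)) + 2.0    # activity patterns, concept A
manifold_b = rng.normal(0, 1, (500, 100)) - 2.0    # activity patterns, concept B

signal = np.linalg.norm(manifold_a.mean(0) - manifold_b.mean(0))

def participation_ratio(X):
    lam = np.linalg.eigvalsh(np.cov(X.T))           # covariance eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

print(f"Signal (center distance): {signal:.1f}")
print(f"Dimensionality (participation ratio): {participation_ratio(manifold_a):.1f}")
```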
7. Johnston WJ, Fine JM, Yoo SBM, Ebitz RB, Hayden BY. Semi-orthogonal subspaces for value mediate a binding and generalization trade-off. Nat Neurosci 2024;27:2218-2230. PMID: 39289564; DOI: 10.1038/s41593-024-01758-5.
Abstract
When choosing between options, we must associate their values with the actions needed to select them. We hypothesize that the brain solves this binding problem through neural population subspaces. Here, in macaques performing a choice task, we show that neural populations in five reward-sensitive regions encode the values of offers presented on the left and right in distinct subspaces. This encoding is sufficient to bind offer values to their locations while preserving abstract value information. After offer presentation, all areas encode the value of the first and second offers in orthogonal subspaces; this orthogonalization also affords binding. Our binding-by-subspace hypothesis makes two new predictions confirmed by the data. First, behavioral errors should correlate with spatial, but not temporal, neural misbinding. Second, behavioral errors should increase when offers have low or high values, compared to medium values, even when controlling for value difference. Together, these results support the idea that the brain uses semi-orthogonal subspaces to bind features.
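The orthogonality of value subspaces can be probed as sketched below: regression coding axes for left- and right-offer value, compared via principal angles. Synthetic data and hypothetical names; not the paper's analysis code.

```python
# Sketch: principal angle between left- and right-offer value coding axes.
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_trials, n_neurons = 400, 120
val_left, val_right = rng.uniform(0, 1, (2, n_trials))     # offer values
true_axes = rng.normal(0, 1, (2, n_neurons))               # planted coding axes
rates = np.outer(val_left, true_axes[0]) + np.outer(val_right, true_axes[1]) \
        + rng.normal(0, 0.5, (n_trials, n_neurons))

w_left = LinearRegression().fit(rates, val_left).coef_
w_right = LinearRegression().fit(rates, val_right).coef_
angle = np.degrees(subspace_angles(w_left[:, None], w_right[:, None]))[0]
print(f"angle between value coding axes: {angle:.1f} deg (near 90 = orthogonal)")
```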
Affiliation(s)
- W Jeffrey Johnston: Center for Theoretical Neuroscience and Mortimer B. Zuckerman Mind, Brain, and Behavior Institute, Columbia University, New York, NY, USA
- Justin M Fine: Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Seng Bum Michael Yoo: Department of Biomedical Engineering, Sungkyunkwan University, and Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- R Becket Ebitz: Department of Neuroscience, Université de Montréal, Montreal, Quebec, Canada
- Benjamin Y Hayden: Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
8. Mathis MW, Perez Rotondo A, Chang EF, Tolias AS, Mathis A. Decoding the brain: From neural representations to mechanistic models. Cell 2024;187:5814-5832. PMID: 39423801; PMCID: PMC11637322; DOI: 10.1016/j.cell.2024.08.051.
Abstract
A central principle in neuroscience is that neurons within the brain act in concert to produce perception, cognition, and adaptive behavior. Neurons are organized into specialized brain areas, dedicated to different functions to varying extents, and their function relies on distributed circuits to continuously encode relevant environmental and body-state features, enabling other areas to decode (interpret) these representations for computing meaningful decisions and executing precise movements. Thus, the distributed brain can be thought of as a series of computations that act to encode and decode information. In this perspective, we detail important concepts of neural encoding and decoding and highlight the mathematical tools used to measure them, including deep learning methods. We provide case studies where decoding concepts enable foundational and translational science in motor, visual, and language processing.
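A toy encode/decode pair of the kind this Perspective discusses, assuming bell-shaped tuning curves and a population-vector-style readout; it is illustrative only, not one of the paper's case studies.

```python
# Toy encoding model (tuning curves) and decoder (population-vector readout).
import numpy as np

rng = np.random.default_rng(4)
pref = np.linspace(0, 180, 32)                      # preferred orientations (deg)

def encode(theta):
    """Bell-shaped tuning plus Poisson spiking noise."""
    rates = 10 * np.exp(-0.5 * ((theta - pref) / 20) ** 2)
    return rng.poisson(rates)

def decode(counts):
    """Response-weighted average of preferred orientations."""
    return np.sum(counts * pref) / np.sum(counts)

theta_true = 75.0
print(f"decoded: {decode(encode(theta_true)):.1f} deg (true: {theta_true} deg)")
```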
Affiliation(s)
- Mackenzie Weygandt Mathis: Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Adriana Perez Rotondo: Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Edward F Chang: Department of Neurological Surgery, UCSF, San Francisco, CA, USA
- Andreas S Tolias: Department of Ophthalmology, Byers Eye Institute, Stanford University, Stanford, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Stanford BioX, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Alexander Mathis: Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
9. Kim JZ, Larsen B, Parkes L. Shaping dynamical neural computations using spatiotemporal constraints. Biochem Biophys Res Commun 2024;728:150302. PMID: 38968771; DOI: 10.1016/j.bbrc.2024.150302.
Abstract
Dynamics play a critical role in computation. The principled evolution of states over time enables both biological and artificial networks to represent and integrate information to make decisions. In the past few decades, significant multidisciplinary progress has been made in bridging the gap between how we understand biological versus artificial computation, including how insights gained from one can translate to the other. Research has revealed that neurobiology is a key determinant of brain network architecture, which gives rise to spatiotemporally constrained patterns of activity that underlie computation. Here, we discuss how neural systems use dynamics for computation, and claim that the biological constraints that shape brain networks may be leveraged to improve the implementation of artificial neural networks. To formalize this discussion, we consider a natural artificial analog of the brain that has been used extensively to model neural computation: the recurrent neural network (RNN). In both the brain and the RNN, we emphasize the common computational substrate atop which dynamics occur, the connectivity between neurons, and we explore the unique computational advantages offered by biophysical constraints such as resource efficiency, spatial embedding, and neurodevelopment.
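A minimal sketch of the shared substrate discussed above: a recurrent network whose dynamics are shaped by its connectivity, here with a toy spatial-embedding constraint (connection probability decaying with distance). All numbers are hypothetical.

```python
# Sketch: recurrent dynamics atop a spatially constrained connectivity matrix.
import numpy as np

rng = np.random.default_rng(5)
n = 100
pos = rng.uniform(0, 1, n)                                  # 1-D "cortical" positions
dist = np.abs(pos[:, None] - pos[None, :])
mask = rng.uniform(size=(n, n)) < np.exp(-dist / 0.1)       # wiring favors short range
W = np.where(mask, rng.normal(0, 1.5 / np.sqrt(n), (n, n)), 0.0)

inputs = 0.5 * rng.normal(0, 1, n)                          # constant external drive
x = np.zeros(n)
for _ in range(200):                                        # discrete-time rate dynamics
    x = np.tanh(W @ x + inputs)
print(f"steady-state activity norm: {np.linalg.norm(x):.2f}")
```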
Affiliation(s)
- Jason Z Kim: Department of Physics, Cornell University, Ithaca, NY 14853, USA
- Bart Larsen: Department of Pediatrics, Masonic Institute for the Developing Brain, University of Minnesota, USA
- Linden Parkes: Department of Psychiatry, Rutgers University, Piscataway, NJ 08854, USA
10. Kikumoto A, Bhandari A, Shibata K, Badre D. A transient high-dimensional geometry affords stable conjunctive subspaces for efficient action selection. Nat Commun 2024;15:8513. PMID: 39353961; PMCID: PMC11445473; DOI: 10.1038/s41467-024-52777-6.
Abstract
Flexible action selection requires cognitive control mechanisms capable of mapping the same inputs to different output actions depending on the context. From a neural state-space perspective, this requires a control representation that separates similar input neural states by context. Additionally, for action selection to be robust and time-invariant, information must be stable in time, enabling efficient readout. Here, using EEG decoding methods, we investigate how the geometry and dynamics of control representations constrain flexible action selection in the human brain. Participants performed a context-dependent action selection task. A forced response procedure probed action selection at different states along neural trajectories. The results show that before successful responses, there is a transient expansion of representational dimensionality that separates conjunctive subspaces. Further, the dynamics stabilize in the same time window, with entry into this stable, high-dimensional state predictive of individual trial performance. These results establish the neural geometry and dynamics the human brain needs for flexible control over behavior.
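The "transient expansion of representational dimensionality" can be sketched as a time-resolved participation ratio over condition-mean patterns; the EEG-like data below are synthetic and constructed so that dimensionality peaks mid-trial.

```python
# Sketch: time-resolved dimensionality (participation ratio) of condition means.
import numpy as np

rng = np.random.default_rng(6)
n_cond, n_chan, n_time = 16, 64, 100
low_d = rng.normal(0, 1, (n_cond, 2)) @ rng.normal(0, 1, (2, n_chan))   # 2-D backbone
high_d = rng.normal(0, 1, (n_cond, n_chan))                             # full-D pattern
gain = np.exp(-0.5 * ((np.arange(n_time) - 50) / 10) ** 2)              # mid-trial bump
data = low_d[:, :, None] + gain * high_d[:, :, None]                    # cond x chan x time

def participation_ratio(X):
    lam = np.linalg.eigvalsh(np.cov(X.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

dims = [participation_ratio(data[:, :, t]) for t in range(n_time)]
print(f"dimensionality at trial start vs mid-trial: {dims[0]:.1f} vs {dims[50]:.1f}")
```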
Affiliation(s)
- Atsushi Kikumoto: Department of Cognitive and Psychological Sciences, Brown University, Rhode Island, USA; RIKEN Center for Brain Science, Wako, Saitama, Japan
- Apoorva Bhandari: Department of Cognitive and Psychological Sciences, Brown University, Rhode Island, USA
- David Badre: Department of Cognitive and Psychological Sciences, Brown University, Rhode Island, USA; Carney Institute for Brain Science, Brown University, Providence, Rhode Island, USA
11. Tian Z, Chen J, Zhang C, Min B, Xu B, Wang L. Mental programming of spatial sequences in working memory in the macaque frontal cortex. Science 2024;385:eadp6091. PMID: 39325894; DOI: 10.1126/science.adp6091.
Abstract
How the brain mentally sorts a series of items in a specific order within working memory (WM) remains largely unknown. We investigated mental sorting using high-throughput electrophysiological recordings in the frontal cortex of macaque monkeys, who memorized and sorted spatial sequences in forward or backward orders according to visual cues. We discovered that items at each ordinal rank in WM were encoded in separate rank-WM subspaces and then, depending on cues, were maintained or reordered between the subspaces, accompanied by two extra temporary subspaces in two operation steps. Furthermore, the cue activity served as an indexical signal to trigger sorting processes. Thus, we propose a complete conceptual framework in which transitions across the landscape of frontal neural states underlie the symbolic system for the mental programming of sequence WM.
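One way to picture the rank-specific subspaces described above is sketched below: estimate a low-dimensional subspace per ordinal rank from rank-conditioned mean activity, then compare projections across subspaces. Synthetic stand-in data and hypothetical names, not the study's method.

```python
# Sketch: one subspace per ordinal rank, estimated from rank-conditioned means.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_trials, n_neurons, n_items = 300, 150, 6
items = rng.integers(0, n_items, (n_trials, 2))        # item identity at ranks 1 and 2
axes = rng.normal(0, 1, (2, n_items, n_neurons))       # rank-specific item coding
rates = axes[0][items[:, 0]] + axes[1][items[:, 1]] \
        + rng.normal(0, 0.3, (n_trials, n_neurons))

subspaces = []
for rank in range(2):
    means = np.array([rates[items[:, rank] == k].mean(0) for k in range(n_items)])
    subspaces.append(PCA(n_components=3).fit(means))

means1 = np.array([rates[items[:, 0] == k].mean(0) for k in range(n_items)])
own = subspaces[0].transform(means1).var(0).sum()      # rank-1 info in rank-1 subspace
cross = subspaces[1].transform(means1).var(0).sum()    # ...leaks little into rank-2
print(f"rank-1 item variance, own vs other subspace: {own:.2f} vs {cross:.2f}")
```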
Affiliation(s)
- Zhenghe Tian: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Jingwen Chen: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Cong Zhang: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Bin Min: Lingang Laboratory, Shanghai 200031, China
- Bo Xu: Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Liping Wang: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
12. Kikumoto A, Shibata K, Nishio T, Badre D. Practice Reshapes the Geometry and Dynamics of Task-tailored Representations. bioRxiv [preprint] 2024:2024.09.12.612718. PMID: 39314386; PMCID: PMC11419051; DOI: 10.1101/2024.09.12.612718.
Abstract
Extensive practice makes task performance more efficient and precise, leading to automaticity. However, theories of automaticity differ on which levels of task representations (e.g., low-level features, stimulus-response mappings, or high-level conjunctive memories of individual events) change with practice, despite predicting the same pattern of improvement (e.g., the power law of practice). To resolve this controversy, we built on recent theoretical advances in understanding computations through neural population dynamics. Specifically, we hypothesized that practice optimizes the neural representational geometry of task representations to minimally separate the highest-level task contingencies needed for successful performance. This involves efficiently reaching conjunctive neural states that integrate task-critical features nonlinearly while abstracting over non-critical dimensions. To test this hypothesis, human participants (n = 40) engaged in extensive practice of a simple, context-dependent action selection task over 3 days while EEG was recorded. During initial rapid improvement in task performance, representations of the highest-level, context-specific conjunctions of task features were enhanced as a function of the number of successful episodes. Crucially, only enhancement of these conjunctive representations, and not lower-order representations, predicted the power-law improvement in performance. Simultaneously, over sessions, these conjunctive neural states became more stable earlier in time and more aligned, abstracting over redundant task features, which correlated with offline performance gains in reducing switch costs. Thus, practice optimizes the dynamic representational geometry as task-tailored neural states that minimally tessellate the task space, taming their high dimensionality.
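The power law of practice referenced above, RT = a·N^(−b), can be fit with a sketch like the following (synthetic reaction times; log-log least squares).

```python
# Sketch: fit RT = a * N**(-b) by least squares in log-log coordinates.
import numpy as np

rng = np.random.default_rng(8)
trials = np.arange(1, 1001)
rt = 800 * trials ** -0.3 * np.exp(rng.normal(0, 0.05, trials.size))   # synthetic RTs (ms)

slope, intercept = np.polyfit(np.log(trials), np.log(rt), 1)
print(f"estimated exponent b = {-slope:.2f}, scale a = {np.exp(intercept):.0f} ms")
```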
Affiliation(s)
- Atsushi Kikumoto: Department of Cognitive and Psychological Sciences, Brown University, Providence, RI, USA; RIKEN Center for Brain Science, Wako, Saitama, Japan
- David Badre: Department of Cognitive and Psychological Sciences, Brown University, Providence, RI, USA; Carney Institute for Brain Science, Brown University, Providence, RI, USA
13. Kikumoto A, Bhandari A, Shibata K, Badre D. A Transient High-dimensional Geometry Affords Stable Conjunctive Subspaces for Efficient Action Selection. bioRxiv [preprint] 2024:2023.06.09.544428. PMID: 37333209; PMCID: PMC10274903; DOI: 10.1101/2023.06.09.544428.
Abstract
Flexible action selection requires cognitive control mechanisms capable of mapping the same inputs to different output actions depending on the context. From a neural state-space perspective, this requires a control representation that separates similar input neural states by context. Additionally, for action selection to be robust and time-invariant, information must be stable in time, enabling efficient readout. Here, using EEG decoding methods, we investigate how the geometry and dynamics of control representations constrain flexible action selection in the human brain. Participants performed a context-dependent action selection task. A forced response procedure probed action selection at different states along neural trajectories. The results show that before successful responses, there is a transient expansion of representational dimensionality that separates conjunctive subspaces. Further, the dynamics stabilize in the same time window, with entry into this stable, high-dimensional state predictive of individual trial performance. These results establish the neural geometry and dynamics the human brain needs for flexible control over behavior.
Affiliation(s)
- Atsushi Kikumoto: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Rhode Island, USA; RIKEN Center for Brain Science, Wako, Saitama, Japan
- Apoorva Bhandari: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Rhode Island, USA
- David Badre: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Rhode Island, USA; Carney Institute for Brain Science, Brown University, Providence, Rhode Island, USA
14. Kristensen SS, Kesgin K, Jörntell H. High-dimensional cortical signals reveal rich bimodal and working memory-like representations among S1 neuron populations. Commun Biol 2024;7:1043. PMID: 39179675; PMCID: PMC11344095; DOI: 10.1038/s42003-024-06743-z.
Abstract
Complexity is important for flexibility of natural behavior and for the remarkably efficient learning of the brain. Here we assessed the signal complexity among neuron populations in somatosensory cortex (S1). To maximize our chances of capturing population-level signal complexity, we used highly repeatable, resolvable visual, tactile, and visuo-tactile inputs and recorded neuronal unit activity at high temporal resolution. We found the state space of the spontaneous activity to be extremely high-dimensional in S1 populations. Their processing of tactile inputs was profoundly modulated by visual inputs, and even fine nuances of visual input patterns were separated. Moreover, the dynamic activity states of the S1 neuron population signaled the specific preceding input long after the stimulation had terminated, i.e., resident information that could be a substrate for working memory. Hence, the recorded high-dimensional representations carried rich multimodal and internal working memory-like signals supporting high complexity in cortical circuitry operation.
Affiliation(s)
- Sofie S Kristensen: Department of Experimental Medical Science, Neural Basis of Sensorimotor Control, Lund University, Lund, Sweden
- Kaan Kesgin: Department of Experimental Medical Science, Neural Basis of Sensorimotor Control, Lund University, Lund, Sweden
- Henrik Jörntell: Department of Experimental Medical Science, Neural Basis of Sensorimotor Control, Lund University, Lund, Sweden
15. Lin Z, Huang H. Spiking mode-based neural networks. Phys Rev E 2024;110:024306. PMID: 39295018; DOI: 10.1103/PhysRevE.110.024306.
Abstract
Spiking neural networks play an important role in brainlike neuromorphic computations and in studying the working mechanisms of neural circuits. One drawback of training a large-scale spiking neural network is that updating all weights is quite expensive. Furthermore, after training, all information related to the computational task is hidden in the weight matrix, precluding a transparent understanding of circuit mechanisms. Therefore, in this work, we address these challenges by proposing a spiking mode-based training protocol, where the recurrent weight matrix is explained as a Hopfield-like multiplication of three matrices: input modes, output modes, and a score matrix. The first advantage is that the weight is interpreted by input and output modes and their associated scores characterizing the importance of each decomposition term. The number of modes is thus adjustable, allowing more degrees of freedom for modeling the experimental data. This significantly reduces the training cost, owing to the much smaller space complexity of learning. Training spiking networks is thus carried out in the mode-score space. The second advantage is that one can project the high-dimensional neural activity (filtered spike train) in the state space onto the mode space, which is typically of low dimension, e.g., a few modes are sufficient to capture the shape of the underlying neural manifolds. We successfully apply our framework to two computational tasks: digit classification and selective sensory integration. Our method thus accelerates the training of spiking neural networks by a Hopfield-like decomposition, and, moreover, this training leads to low-dimensional attractor structures of high-dimensional neural dynamics.
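A sketch of the mode-based parameterization described above, with a diagonal score for simplicity (the paper uses a full score matrix): the recurrent weights are composed from P input/output mode pairs, so learning operates on O(N·P) parameters instead of N².

```python
# Sketch: recurrent weights composed from P modes, W = sum_mu s_mu xi_in^mu (xi_out^mu)^T.
import numpy as np

rng = np.random.default_rng(9)
N, P = 500, 8                                    # neurons, modes (P << N)
xi_in = rng.normal(0, 1, (N, P))                 # input modes
xi_out = rng.normal(0, 1, (N, P))                # output modes
s = rng.normal(0, 1, P)                          # diagonal score (paper: score matrix)

W = (xi_in * s) @ xi_out.T / N                   # rank-P recurrent weight matrix
print(f"rank of W: {np.linalg.matrix_rank(W)}")
print(f"parameters: {2 * N * P + P} (modes + scores) vs {N * N} (full weights)")
```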
16. Fascianelli V, Battista A, Stefanini F, Tsujimoto S, Genovesio A, Fusi S. Neural representational geometries reflect behavioral differences in monkeys and recurrent neural networks. Nat Commun 2024;15:6479. PMID: 39090091; PMCID: PMC11294567; DOI: 10.1038/s41467-024-50503-w.
Abstract
Animals likely use a variety of strategies to solve laboratory tasks. A combined analysis of behavioral and neural recording data across subjects that employ different strategies may obscure important signals and give confusing results. Hence, it is essential to develop techniques that can infer strategy at the single-subject level. We analyzed an experiment in which two male monkeys performed a visually cued rule-based task. The analysis of their performance shows no indication that they used different strategies. However, when we examined the geometry of stimulus representations in the state space of the neural activities recorded in dorsolateral prefrontal cortex, we found striking differences between the two monkeys. Our purely neural results prompted us to reanalyze the behavior. The new analysis showed that the differences in representational geometry are associated with differences in reaction times, revealing behavioral differences we were unaware of. All these analyses suggest that the monkeys were using different strategies. Finally, using recurrent neural network models trained to perform the same task, we show that these strategies correlate with the amount of training, suggesting a possible explanation for the observed neural and behavioral differences.
Affiliation(s)
- Valeria Fascianelli: Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Aldo Battista: Center for Neural Science, New York University, New York, NY, USA
- Fabio Stefanini: Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Aldo Genovesio: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Stefano Fusi: Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Vagelos College of Physicians and Surgeons, Columbia University Irving Medical Center, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
17. Scott DN, Mukherjee A, Nassar MR, Halassa MM. Thalamocortical architectures for flexible cognition and efficient learning. Trends Cogn Sci 2024;28:739-756. PMID: 38886139; PMCID: PMC11305962; DOI: 10.1016/j.tics.2024.05.006.
Abstract
The brain exhibits a remarkable ability to learn and execute context-appropriate behaviors. How it achieves such flexibility, without sacrificing learning efficiency, is an important open question. Neuroscience, psychology, and engineering suggest that reusing and repurposing computations are part of the answer. Here, we review evidence that thalamocortical architectures may have evolved to facilitate these objectives of flexibility and efficiency by coordinating distributed computations. Recent work suggests that distributed prefrontal cortical networks compute with flexible codes, and that the mediodorsal thalamus provides regularization to promote efficient reuse. Thalamocortical interactions resemble hierarchical Bayesian computations, and their network implementation can be related to existing gating, synchronization, and hub theories of thalamic function. By reviewing recent findings and providing a novel synthesis, we highlight key research horizons integrating computation, cognition, and systems neuroscience.
Affiliation(s)
- Daniel N Scott: Department of Neuroscience, Brown University, Providence, RI, USA; Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Arghya Mukherjee: Department of Neuroscience, Tufts University School of Medicine, Boston, MA, USA
- Matthew R Nassar: Department of Neuroscience, Brown University, Providence, RI, USA; Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Michael M Halassa: Department of Neuroscience, Tufts University School of Medicine, Boston, MA, USA; Department of Psychiatry, Tufts University School of Medicine, Boston, MA, USA
18. Huang H. Eight challenges in developing theory of intelligence. Front Comput Neurosci 2024;18:1388166. PMID: 39114083; PMCID: PMC11303322; DOI: 10.3389/fncom.2024.1388166.
Abstract
A good theory of mathematical beauty is more practical than any current observation, as new predictions about physical reality can be self-consistently verified. This belief applies to the current status of understanding deep neural networks, including large language models, and even biological intelligence. Toy models provide a metaphor of physical reality, allowing the reality to be formulated mathematically (i.e., the so-called theory), which can be updated as more conjectures are justified or refuted. One does not need to present all details in a model; rather, more abstract models are constructed, as complex systems such as brains or deep networks have many sloppy dimensions but far fewer stiff dimensions that strongly impact macroscopic observables. This type of bottom-up mechanistic modeling remains promising in the modern era of understanding natural or artificial intelligence. Here, we shed light on eight challenges in developing a theory of intelligence following this theoretical paradigm. These challenges are representation learning, generalization, adversarial robustness, continual learning, causal learning, the internal model of the brain, next-token prediction, and the mechanics of subjective experience.
Affiliation(s)
- Haiping Huang: PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou, China
19. Serrano-Fernández L, Beirán M, Parga N. Emergent perceptual biases from state-space geometry in trained spiking recurrent neural networks. Cell Rep 2024;43:114412. PMID: 38968075; DOI: 10.1016/j.celrep.2024.114412.
Abstract
A stimulus held in working memory is perceived as contracted toward the average stimulus. This contraction bias has been extensively studied in psychophysics, but little is known about its origin from neural activity. By training recurrent networks of spiking neurons to discriminate temporal intervals, we explored the causes of this bias and how behavior relates to population firing activity. We found that the trained networks exhibited animal-like behavior. Various geometric features of neural trajectories in state space encoded warped representations of the durations of the first interval modulated by sensory history. Formulating a normative model, we showed that these representations conveyed a Bayesian estimate of the interval durations, thus relating activity and behavior. Importantly, our findings demonstrate that Bayesian computations already occur during the sensory phase of the first stimulus and persist throughout its maintenance in working memory, until the time of stimulus comparison.
Affiliation(s)
- Luis Serrano-Fernández: Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Manuel Beirán: Center for Theoretical Neuroscience, Zuckerman Institute, Columbia University, New York, NY, USA
- Néstor Parga: Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
20. Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024;28:677-690. PMID: 38553340; DOI: 10.1016/j.tics.2024.03.003.
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic: Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France
- Stefano Fusi: Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
21. Srinath R, Ni AM, Marucci C, Cohen MR, Brainard DH. Orthogonal neural representations support perceptual judgements of natural stimuli. bioRxiv [preprint] 2024:2024.02.14.580134. PMID: 38464018; PMCID: PMC10925131; DOI: 10.1101/2024.02.14.580134.
Abstract
In natural behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on simple backgrounds. Natural viewing, however, carries a set of challenges that are inaccessible using artificial stimuli, including neural responses to background objects that are task-irrelevant. An emerging body of evidence suggests that the visual abilities of humans and animals can be modeled through the linear decoding of task-relevant information from visual cortex. This idea suggests the hypothesis that irrelevant features of a natural scene should impair performance on a visual task only if their neural representations intrude on the linear readout of the task-relevant feature, as would occur if the representations of task-relevant and irrelevant features are not orthogonal in the underlying neural population. We tested this hypothesis using human psychophysics and monkey neurophysiology, in response to parametrically variable naturalistic stimuli. We demonstrate that 1) the neural representation of one feature (the position of a central object) in visual area V4 is orthogonal to those of several background features, 2) the ability of human observers to precisely judge object position was largely unaffected by task-irrelevant variation in those background features, and 3) many features of the object and the background are orthogonally represented by V4 neural responses. Our observations are consistent with the hypothesis that orthogonal neural representations can support stable perception of objects and features despite the tremendous richness of natural visual scenes.
Affiliation(s)
- Ramanujan Srinath (equal contribution): Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
- Amy M. Ni (equal contribution): Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA; Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Claire Marucci: Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Marlene R. Cohen (equal contribution): Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
- David H. Brainard (equal contribution): Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
22. Pellegrino A, Stein H, Cayco-Gajic NA. Dimensionality reduction beyond neural subspaces with slice tensor component analysis. Nat Neurosci 2024;27:1199-1210. PMID: 38710876; PMCID: PMC11537991; DOI: 10.1038/s41593-024-01626-2.
Abstract
Recent work has argued that large-scale neural recordings are often well described by patterns of coactivation across neurons. Yet the view that neural variability is constrained to a fixed, low-dimensional subspace may overlook higher-dimensional structure, including stereotyped neural sequences or slowly evolving latent spaces. Here we argue that task-relevant variability in neural data can also cofluctuate over trials or time, defining distinct 'covariability classes' that may co-occur within the same dataset. To demix these covariability classes, we develop sliceTCA (slice tensor component analysis), a new unsupervised dimensionality reduction method for neural data tensors. In three example datasets, including motor cortical activity during a classic reaching task in primates and recent multiregion recordings in mice, we show that sliceTCA can capture more task-relevant structure in neural data using fewer components than traditional methods. Overall, our theoretical framework extends the classic view of low-dimensional population activity by incorporating additional classes of latent variables capturing higher-dimensional structure.
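A toy illustration of the "covariability class" idea behind sliceTCA, showing only the generative structure of slice-type components (a loading vector over one mode times a matrix over the other two), not the fitting algorithm.

```python
# Toy: a trials x neurons x time tensor mixing two slice-type components.
import numpy as np

rng = np.random.default_rng(10)
K, N, T = 30, 80, 120                               # trials, neurons, time

trial_load = rng.uniform(0.5, 1.5, K)               # trial-slicing loading vector
slice_nt = rng.normal(0, 1, (N, T))                 #   shared neuron x time pattern
neuron_load = rng.uniform(0, 1, N)                  # neuron-slicing loading vector
slice_kt = rng.normal(0, 1, (K, T))                 #   trial x time pattern

tensor = np.einsum('k,nt->knt', trial_load, slice_nt) \
       + np.einsum('n,kt->knt', neuron_load, slice_kt)
print(tensor.shape)   # (30, 80, 120): two covariability classes in one tensor
```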
Affiliation(s)
- Arthur Pellegrino: Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département D'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France; Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK
- Heike Stein: Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département D'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France
- N Alex Cayco-Gajic: Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département D'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France
23. Manley J, Lu S, Barber K, Demas J, Kim H, Meyer D, Traub FM, Vaziri A. Simultaneous, cortex-wide dynamics of up to 1 million neurons reveal unbounded scaling of dimensionality with neuron number. Neuron 2024;112:1694-1709.e5. PMID: 38452763; PMCID: PMC11098699; DOI: 10.1016/j.neuron.2024.02.011.
Abstract
The brain's remarkable properties arise from the collective activity of millions of neurons. Widespread application of dimensionality reduction to multi-neuron recordings implies that neural dynamics can be approximated by low-dimensional "latent" signals reflecting neural computations. However, can such low-dimensional representations truly explain the vast range of brain activity, and if not, what is the appropriate resolution and scale of recording to capture them? Imaging neural activity at cellular resolution and near-simultaneously across the mouse cortex, we demonstrate an unbounded scaling of dimensionality with neuron number in populations up to 1 million neurons. Although half of the neural variance is contained within sixteen dimensions correlated with behavior, our discovered scaling of dimensionality corresponds to an ever-increasing number of neuronal ensembles without immediate behavioral or sensory correlates. The activity patterns underlying these higher dimensions are fine grained and cortex wide, highlighting that large-scale, cellular-resolution recording is required to uncover the full substrates of neuronal computations.
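The scaling analysis can be sketched as below: estimate dimensionality on random subsamples of increasing size. The participation ratio stands in for the paper's reliability-based estimator, and in this toy the dimensionality saturates at the number of planted latents, whereas the paper reports no saturation up to a million neurons.

```python
# Sketch: dimensionality (participation ratio) vs number of sampled neurons.
import numpy as np

rng = np.random.default_rng(11)
n_total, n_time, latent_d = 4000, 2000, 400
activity = rng.normal(0, 1, (n_time, latent_d)) @ rng.normal(0, 1, (latent_d, n_total))
activity += rng.normal(0, 1, (n_time, n_total))              # private noise

def participation_ratio(X):
    lam = np.linalg.eigvalsh(np.cov(X.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

for n in (100, 400, 1600):
    sub = activity[:, rng.choice(n_total, n, replace=False)]
    print(f"{n:5d} neurons -> dimensionality ~ {participation_ratio(sub):.0f}")
```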
Affiliation(s)
- Jason Manley: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Sihao Lu: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Kevin Barber: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Jeffrey Demas: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Hyewon Kim: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- David Meyer: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Francisca Martínez Traub: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Alipasha Vaziri: Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
24. Fortunato C, Bennasar-Vázquez J, Park J, Chang JC, Miller LE, Dudman JT, Perich MG, Gallego JA. Nonlinear manifolds underlie neural population activity during behaviour. bioRxiv [preprint] 2024:2023.07.18.549575. PMID: 37503015; PMCID: PMC10370078; DOI: 10.1101/2023.07.18.549575.
Abstract
There is rich variety in the activity of single neurons recorded during behaviour. Yet, these diverse single neuron responses can be well described by relatively few patterns of neural co-modulation. The study of such low-dimensional structure of neural population activity has provided important insights into how the brain generates behaviour. Virtually all of these studies have used linear dimensionality reduction techniques to estimate these population-wide co-modulation patterns, constraining them to a flat "neural manifold". Here, we hypothesised that since neurons have nonlinear responses and make thousands of distributed and recurrent connections that likely amplify such nonlinearities, neural manifolds should be intrinsically nonlinear. Combining neural population recordings from monkey, mouse, and human motor cortex, and mouse striatum, we show that: 1) neural manifolds are intrinsically nonlinear; 2) their nonlinearity becomes more evident during complex tasks that require more varied activity patterns; and 3) manifold nonlinearity varies across architecturally distinct brain regions. Simulations using recurrent neural network models confirmed the proposed relationship between circuit connectivity and manifold nonlinearity, including the differences across architecturally distinct regions. Thus, neural manifolds underlying the generation of behaviour are inherently nonlinear, and properly accounting for such nonlinearities will be critical as neuroscientists move towards studying numerous brain regions involved in increasingly complex and naturalistic behaviours.
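A minimal sketch of why linear methods overstate the dimensionality of a nonlinear manifold: activity generated from a single latent variable along a curved embedding still requires several principal components.

```python
# Sketch: a 1-D latent variable traced along a curved high-D embedding still
# spreads variance over several principal components under PCA.
import numpy as np

rng = np.random.default_rng(12)
theta = rng.uniform(0, 2 * np.pi, 1000)                    # single latent variable
freqs = rng.integers(1, 4, 50)                             # random sinusoidal features
phases = rng.uniform(0, 2 * np.pi, 50)
X = np.cos(np.outer(theta, freqs) + phases) + rng.normal(0, 0.05, (1000, 50))

lam = np.linalg.eigvalsh(np.cov(X.T))[::-1]                # PCA spectrum, descending
cum = np.cumsum(lam) / lam.sum()
print(f"PCA components for 90% variance: {np.searchsorted(cum, 0.9) + 1} "
      f"(true latent dimensionality: 1)")
```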
Collapse
Affiliation(s)
- Cátia Fortunato
- Department of Bioengineering, Imperial College London, London UK
| | | | - Junchol Park
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn VA, USA
| | - Joanna C. Chang
- Department of Bioengineering, Imperial College London, London UK
| | - Lee E. Miller
- Department of Neurosciences, Northwestern University, Chicago IL, USA
- Department of Biomedical Engineering, Northwestern University, Chicago IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago IL, USA, and Shirley Ryan Ability Lab, Chicago, IL, USA
| | - Joshua T. Dudman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn VA, USA
| | - Matthew G. Perich
- Department of Neurosciences, Faculté de médecine, Université de Montréal, Montréal, Québec, Canada
- Québec Artificial Intelligence Institute (MILA), Montréal, Québec, Canada
| | - Juan A. Gallego
- Department of Bioengineering, Imperial College London, London UK
| |
25
Podlaski WF, Machens CK. Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks. Neural Comput 2024; 36:803-857. [PMID: 38658028 DOI: 10.1162/neco_a_01658] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2023] [Accepted: 01/02/2024] [Indexed: 04/26/2024]
Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions, making them capable of approximating arbitrary nonlinear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
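The difference-of-convex-functions idea can be illustrated with a toy construction: a nonconvex piecewise-linear "hat" is exactly the difference of two convex max-affine functions. A hypothetical numpy sketch, not the paper's spiking implementation:

```python
import numpy as np

def max_affine(x, slopes, offsets):
    """Convex piecewise-linear function: max over affine pieces."""
    return np.max(np.outer(x, slopes) + np.asarray(offsets), axis=1)

x = np.linspace(-1, 2, 200)
# Nonconvex "hat": relu(x) - relu(2x - 1) rises on [0, 0.5] and falls on [0.5, 1]
convex_a = max_affine(x, [0.0, 1.0], [0.0, 0.0])    # relu(x)
convex_b = max_affine(x, [0.0, 2.0], [0.0, -1.0])   # relu(2x - 1)
hat = convex_a - convex_b                           # difference of two convex parts
print(f"peak value: {hat.max():.3f} at x = {x[hat.argmax()]:.3f}")  # ~0.5 at ~0.5
```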
Affiliation(s)
- William F Podlaski
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
| | - Christian K Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
| |
26
Beshkov K, Fyhn M, Hafting T, Einevoll GT. Topological structure of population activity in mouse visual cortex encodes densely sampled stimulus rotations. iScience 2024; 27:109370. [PMID: 38523791 PMCID: PMC10959658 DOI: 10.1016/j.isci.2024.109370] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2023] [Revised: 10/06/2023] [Accepted: 02/26/2024] [Indexed: 03/26/2024] Open
Abstract
The primary visual cortex is one of the best understood regions of the brain involved in sensory computation. Following the popularization of high-density neural recordings, it has been observed that the activity of large neural populations is often constrained to low dimensional manifolds. In this work, we quantify the structure of such neural manifolds in the visual cortex. We do this by analyzing publicly available two-photon optical recordings of mouse primary visual cortex in response to visual stimuli with a densely sampled rotation angle. Using a geodesic metric along with persistent homology, we discover that population activity in response to such stimuli generates a circular manifold, encoding the angle of rotation. Furthermore, we observe that this circular manifold is expressed differently in subpopulations of neurons with differing orientation and direction selectivity. Finally, we discuss some of the obstacles to reliably recovering the true topology generated by a neural population.
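A hedged sketch of the topological analysis described here, applied to a synthetic ring of population activity; it assumes the ripser package and stands in for, rather than reproduces, the study's pipeline:

```python
import numpy as np
from ripser import ripser

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 400)             # densely sampled rotation angle
proj = rng.standard_normal((2, 30))                # random embedding into 30 "neurons"
activity = np.column_stack([np.cos(theta), np.sin(theta)]) @ proj
activity += 0.1 * rng.standard_normal(activity.shape)

dgms = ripser(activity)['dgms']                    # persistence diagrams (H0, H1)
lifetimes = dgms[1][:, 1] - dgms[1][:, 0]          # persistence of 1-D loops
print("most persistent loop:", lifetimes.max())    # one dominant loop => circular manifold
```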
Affiliation(s)
- Kosio Beshkov
- Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
| | - Marianne Fyhn
- Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
| | - Torkel Hafting
- Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
- Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
| | - Gaute T. Einevoll
- Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
- Department of Physics, Norwegian University of Life Sciences, Ås, Norway
- Department of Physics, University of Oslo, Oslo, Norway
| |
27
Noda T, Aschauer DF, Chambers AR, Seiler JPH, Rumpel S. Representational maps in the brain: concepts, approaches, and applications. Front Cell Neurosci 2024; 18:1366200. [PMID: 38584779 PMCID: PMC10995314 DOI: 10.3389/fncel.2024.1366200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2024] [Accepted: 03/08/2024] [Indexed: 04/09/2024] Open
Abstract
Neural systems have evolved to process sensory stimuli in a way that allows for efficient and adaptive behavior in a complex environment. Recent technological advances enable us to investigate sensory processing in animal models by simultaneously recording the activity of large populations of neurons with single-cell resolution, yielding high-dimensional datasets. In this review, we discuss concepts and approaches for assessing the population-level representation of sensory stimuli in the form of a representational map. In such a map, not only are the identities of stimuli distinctly represented, but their relational similarity is also mapped onto the space of neuronal activity. We highlight example studies in which the structure of representational maps in the brain are estimated from recordings in humans as well as animals and compare their methodological approaches. Finally, we integrate these aspects and provide an outlook for how the concept of representational maps could be applied to various fields in basic and clinical neuroscience.
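For readers new to representational maps, a small illustrative example of the basic object this review discusses, a representational dissimilarity matrix (RDM), computed on synthetic responses; names and sizes are placeholders:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
n_stimuli, n_neurons = 8, 100
responses = rng.standard_normal((n_stimuli, n_neurons))  # mean response per stimulus

# RDM: pairwise correlation distance between stimulus representations;
# small entries mark stimuli the population represents as similar
rdm = squareform(pdist(responses, metric='correlation'))
print(rdm.shape)  # (8, 8)
```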
Affiliation(s)
- Takahiro Noda
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
| | - Dominik F. Aschauer
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
| | - Anna R. Chambers
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, United States
- Eaton Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, United States
| | - Johannes P.-H. Seiler
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
| | - Simon Rumpel
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
| |
28
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. [PMID: 38335258 PMCID: PMC10873612 DOI: 10.1073/pnas.2212887121] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Accepted: 12/03/2023] [Indexed: 02/12/2024] Open
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
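The misattribution problem motivating this method can be reproduced in a few lines: fitting a one-dimensional linear system while omitting a temporally structured input inflates the estimated intrinsic dynamics. A toy sketch, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
A_true, B_true, T = 0.5, 1.0, 5000
u = np.sin(0.1 * np.arange(T))                 # temporally structured measured input
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = A_true * x[t] + B_true * u[t] + 0.1 * rng.standard_normal()

# Least-squares fit of x_{t+1} = a x_t, wrongly omitting the input
a_no_input = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
# Joint fit with the input recovers the true dynamics
a_joint, _ = np.linalg.lstsq(np.column_stack([x[:-1], u[:-1]]), x[1:], rcond=None)[0]
print(f"true A = {A_true}, ignoring input: {a_no_input:.2f}, with input: {a_joint:.2f}")
```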
Affiliation(s)
- Parsa Vahidi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
| | - Omid G. Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
| | - Maryam M. Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089
- Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
| |
29
Stern M, Liu AJ, Balasubramanian V. Physical effects of learning. Phys Rev E 2024; 109:024311. [PMID: 38491658 DOI: 10.1103/physreve.109.024311] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Accepted: 01/31/2024] [Indexed: 03/18/2024]
Abstract
Interacting many-body physical systems ranging from neural networks in the brain to folding proteins to self-modifying electrical circuits can learn to perform diverse tasks. This learning, both in nature and in engineered systems, can occur through evolutionary selection or through dynamical rules that drive active learning from experience. Here, we show that learning in linear physical networks with weak input signals leaves architectural imprints on the Hessian of a physical system. Compared to a generic organization of the system components, (a) the effective physical dimension of the response to inputs decreases, (b) the response of physical degrees of freedom to random perturbations (or system "susceptibility") increases, and (c) the low-eigenvalue eigenvectors of the Hessian align with the task. Overall, these effects embody the typical scenario for learning processes in physical systems in the weak input regime, suggesting ways of discovering whether a physical network may have been trained.
Affiliation(s)
- Menachem Stern
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
| | - Andrea J Liu
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Center for Computational Biology, Flatiron Institute, Simons Foundation, New York, New York 10010, USA
| | - Vijay Balasubramanian
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501, USA
- Theoretische Natuurkunde, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels, Belgium
| |
30
Fakhar K, Dixit S, Hadaeghi F, Kording KP, Hilgetag CC. Downstream network transformations dissociate neural activity from causal functional contributions. Sci Rep 2024; 14:2103. [PMID: 38267481 PMCID: PMC10808222 DOI: 10.1038/s41598-024-52423-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Accepted: 01/18/2024] [Indexed: 01/26/2024] Open
Abstract
Neuroscientists rely on distributed spatio-temporal patterns of neural activity to understand how neural units contribute to cognitive functions and behavior. However, the extent to which neural activity reliably indicates a unit's causal contribution to the behavior is not well understood. To address this issue, we provide a systematic multi-site perturbation framework that captures time-varying causal contributions of elements to a collectively produced outcome. Applying our framework to intuitive toy examples and artificial neural networks revealed that recorded activity patterns of neural elements may not be generally informative of their causal contribution due to activity transformations within a network. Overall, our findings emphasize the limitations of inferring causal mechanisms from neural activities and offer a rigorous lesioning framework for elucidating causal neural contributions.
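A brute-force illustration of perturbation-based contribution analysis in this spirit: Shapley values computed by exhaustive lesioning of a four-unit toy network, where two units are active yet causally irrelevant. Illustrative only, not the paper's framework:

```python
import numpy as np
from itertools import combinations
from math import comb

rng = np.random.default_rng(4)
n = 4
w_in = rng.standard_normal(n)            # all units receive input (are "active")
w_out = np.array([1.0, 1.0, 0.0, 0.0])   # but only units 0 and 1 reach the output

def performance(active):
    """Task performance with only the units in `active` left unlesioned."""
    mask = np.zeros(n)
    mask[list(active)] = 1.0
    return abs(w_out @ (mask * w_in))

# Shapley value: marginal contribution averaged over all lesion coalitions
shapley = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for k in range(n):
        for S in combinations(others, k):
            weight = 1.0 / (n * comb(n - 1, k))
            shapley[i] += weight * (performance(S + (i,)) - performance(S))

print(np.round(shapley, 3))  # units 2 and 3 are active yet contribute nothing
```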
Affiliation(s)
- Kayson Fakhar
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany.
| | - Shrey Dixit
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
| | - Fatemeh Hadaeghi
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
| | - Konrad P Kording
- Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Learning in Machines & Brains, CIFAR, Toronto, ON, Canada
| | - Claus C Hilgetag
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Department of Health Sciences, Boston University, Boston, MA, USA
| |
31
Orepic P, Truccolo W, Halgren E, Cash SS, Giraud AL, Proix T. Neural manifolds carry reactivation of phonetic representations during semantic processing. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.10.30.564638. [PMID: 37961305 PMCID: PMC10634964 DOI: 10.1101/2023.10.30.564638] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2023]
Abstract
Traditional models of speech perception posit that neural activity encodes speech through a hierarchy of cognitive processes, from low-level representations of acoustic and phonetic features to high-level semantic encoding. Yet it remains unknown how neural representations are transformed across levels of the speech hierarchy. Here, we analyzed unique microelectrode array recordings of neuronal spiking activity from the human left anterior superior temporal gyrus, a brain region at the interface between phonetic and semantic speech processing, during a semantic categorization task and natural speech perception. We identified distinct neural manifolds for semantic and phonetic features, with a functional separation of the corresponding low-dimensional trajectories. Moreover, phonetic and semantic representations were encoded concurrently and reflected in power increases in the beta and low-gamma local field potentials, suggesting top-down predictive and bottom-up cumulative processes. Our results are the first to demonstrate mechanisms for hierarchical speech transformations that are specific to neuronal population dynamics.
Affiliation(s)
- Pavo Orepic
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
| | - Wilson Truccolo
- Department of Neuroscience, Brown University, Providence, Rhode Island, United States of America
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island, United States of America
| | - Eric Halgren
- Department of Neuroscience & Radiology, University of California San Diego, La Jolla, California, United States of America
| | - Sydney S. Cash
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
| | - Anne-Lise Giraud
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Institut Pasteur, Université Paris Cité, Hearing Institute, Paris, France
| | - Timothée Proix
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
| |
32
Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024; 36:227-270. [PMID: 38101328 DOI: 10.1162/neco_a_01631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Accepted: 09/05/2023] [Indexed: 12/17/2023]
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to address these open questions. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. Regarding the dynamics arising in these manifolds, it is proved that they are topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can support the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
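The attracting-manifold prediction for low-rank connectivity is easy to see numerically. A sketch with a symmetric rank-1 rate network; the gain and sizes are arbitrary choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
N, steps, dt = 200, 2000, 0.05
m = rng.standard_normal(N)
J = 2.0 * np.outer(m, m) / N            # symmetric rank-1 connectivity, gain 2

x = rng.standard_normal(N)              # arbitrary initial state
for _ in range(steps):
    x += dt * (-x + J @ np.tanh(x))     # dx/dt = -x + J phi(x)

cos = (x @ m) / (np.linalg.norm(x) * np.linalg.norm(m))
print(f"|cos(x, m)| = {abs(cos):.3f}")  # ~1: state drawn onto the 1-D manifold span(m)
```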
Affiliation(s)
- Joan Gort
- Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193, Bellaterra, Barcelona, Spain
| |
33
Manley J, Demas J, Kim H, Traub FM, Vaziri A. Simultaneous, cortex-wide and cellular-resolution neuronal population dynamics reveal an unbounded scaling of dimensionality with neuron number. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.01.15.575721. [PMID: 38293036 PMCID: PMC10827059 DOI: 10.1101/2024.01.15.575721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2024]
Abstract
The brain's remarkable properties arise from collective activity of millions of neurons. Widespread application of dimensionality reduction to multi-neuron recordings implies that neural dynamics can be approximated by low-dimensional "latent" signals reflecting neural computations. However, what would be the biological utility of such a redundant and metabolically costly encoding scheme and what is the appropriate resolution and scale of neural recording to understand brain function? Imaging the activity of one million neurons at cellular resolution and near-simultaneously across mouse cortex, we demonstrate an unbounded scaling of dimensionality with neuron number. While half of the neural variance lies within sixteen behavior-related dimensions, we find this unbounded scaling of dimensionality to correspond to an ever-increasing number of internal variables without immediate behavioral correlates. The activity patterns underlying these higher dimensions are fine-grained and cortex-wide, highlighting that large-scale recording is required to uncover the full neural substrates of internal and potentially cognitive processes.
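A schematic version of the subsampling analysis implied here: estimate effective dimensionality for increasing numbers of sampled units. The participation ratio is used as one common dimensionality measure; the paper's own estimator differs. Synthetic data stand in for the recordings:

```python
import numpy as np

rng = np.random.default_rng(6)
n_neurons, n_time, latent_dim = 2000, 3000, 500
latents = rng.standard_normal((latent_dim, n_time))
data = rng.standard_normal((n_neurons, latent_dim)) @ latents
data += 0.5 * rng.standard_normal((n_neurons, n_time))

def participation_ratio(X):
    """Effective dimensionality from the covariance eigenspectrum."""
    lam = np.linalg.eigvalsh(np.cov(X))
    return lam.sum() ** 2 / (lam ** 2).sum()

for n_sub in (50, 200, 800, 2000):
    idx = rng.choice(n_neurons, n_sub, replace=False)
    print(n_sub, round(participation_ratio(data[idx]), 1))
# Estimated dimensionality keeps rising with the number of sampled neurons
```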
Affiliation(s)
- Jason Manley
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
| | - Jeffrey Demas
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
| | - Hyewon Kim
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
| | - Francisca Martínez Traub
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
| | - Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Lead Contact
| |
34
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. [PMID: 38082181 DOI: 10.1038/s41551-023-01106-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 09/12/2023] [Indexed: 12/26/2023]
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings') achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
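A schematic re-implementation of the division of labour DFINE describes (a nonlinear manifold map with linear dynamics on the manifold factors), in PyTorch; this illustrates the idea only and is not the published model or its training objective:

```python
import torch
import torch.nn as nn

class ManifoldLinearDynamics(nn.Module):
    """Nonlinear manifold map with linear dynamics on the manifold factors."""
    def __init__(self, n_neurons=100, manifold_dim=8):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_neurons, 64), nn.ReLU(),
                                    nn.Linear(64, manifold_dim))
        self.decode = nn.Sequential(nn.Linear(manifold_dim, 64), nn.ReLU(),
                                    nn.Linear(64, n_neurons))
        self.A = nn.Linear(manifold_dim, manifold_dim, bias=False)  # linear dynamics

    def forward(self, y_t):
        z_t = self.encode(y_t)             # nonlinearity lives in the manifold map
        y_next = self.decode(self.A(z_t))  # dynamics stay linear and tractable
        return y_next, z_t

model = ManifoldLinearDynamics()
y = torch.randn(32, 100)                   # a batch of neural observations
y_next, z = model(y)
print(y_next.shape, z.shape)               # torch.Size([32, 100]) torch.Size([32, 8])
```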
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
| | - Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA.
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA.
| |
35
Elmoznino E, Bonner MF. High-performing neural network models of visual cortex benefit from high latent dimensionality. PLoS Comput Biol 2024; 20:e1011792. [PMID: 38198504 PMCID: PMC10805290 DOI: 10.1371/journal.pcbi.1011792] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2023] [Revised: 01/23/2024] [Accepted: 12/30/2023] [Indexed: 01/12/2024] Open
Abstract
Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core representational principles of computational models in neuroscience. Here we examined the geometry of DNN models of visual cortex by quantifying the latent dimensionality of their natural image representations. A popular view holds that optimal DNNs compress their representations onto low-dimensional subspaces to achieve invariance and robustness, which suggests that better models of visual cortex should have lower dimensional geometries. Surprisingly, we found a strong trend in the opposite direction-neural networks with high-dimensional image subspaces tended to have better generalization performance when predicting cortical responses to held-out stimuli in both monkey electrophysiology and human fMRI data. Moreover, we found that high dimensionality was associated with better performance when learning new categories of stimuli, suggesting that higher dimensional representations are better suited to generalize beyond their training domains. These findings suggest a general principle whereby high-dimensional geometry confers computational benefits to DNN models of visual cortex.
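The latent-dimensionality metric at issue can be illustrated with the participation ratio of a representation's covariance eigenspectrum, one standard effective-dimensionality measure (the paper evaluates several). Synthetic features only:

```python
import numpy as np

rng = np.random.default_rng(7)
n_images, n_units = 1000, 512

def effective_dim(features):
    lam = np.clip(np.linalg.eigvalsh(np.cov(features.T)), 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

# Model A: image representations compressed onto ten directions
feats_low = rng.standard_normal((n_images, 10)) @ rng.standard_normal((10, n_units))
# Model B: variance spread across many directions
feats_high = rng.standard_normal((n_images, n_units))

print(round(effective_dim(feats_low), 1), round(effective_dim(feats_high), 1))
# low vs high latent dimensionality; the paper finds high-dimensional models
# predict cortical responses better
```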
Affiliation(s)
- Eric Elmoznino
- Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Michael F. Bonner
- Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America
| |
36
Esparza J, Sebastián ER, de la Prida LM. From cell types to population dynamics: Making hippocampal manifolds physiologically interpretable. Curr Opin Neurobiol 2023; 83:102800. [PMID: 37898015 DOI: 10.1016/j.conb.2023.102800] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 09/27/2023] [Accepted: 09/28/2023] [Indexed: 10/30/2023]
Abstract
The study of the hippocampal code is gaining momentum. While the physiological approach targets the contribution of individual cells as determined by genetic, biophysical and circuit factors, the field pushes for a population dynamic approach that considers the representation of behavioural variables by a large number of neurons. In this alternative framework, neuronal activity is projected into low-dimensional manifolds. These manifolds can reveal the structure of population representations, but their physiological interpretation is challenging. Here, we review the recent literature and propose that integrating information regarding behavioral traits, local field potential oscillations and cell-type-specificity into neural manifolds offers strategies to make them interpretable at the physiological level.
37
Kim JZ, Larsen B, Parkes L. Shaping dynamical neural computations using spatiotemporal constraints. ARXIV 2023:arXiv:2311.15572v1. [PMID: 38076517 PMCID: PMC10705584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/20/2023]
Abstract
Dynamics play a critical role in computation. The principled evolution of states over time enables both biological and artificial networks to represent and integrate information to make decisions. In the past few decades, significant multidisciplinary progress has been made in bridging the gap between how we understand biological versus artificial computation, including how insights gained from one can translate to the other. Research has revealed that neurobiology is a key determinant of brain network architecture, which gives rise to spatiotemporally constrained patterns of activity that underlie computation. Here, we discuss how neural systems use dynamics for computation, and claim that the biological constraints that shape brain networks may be leveraged to improve the implementation of artificial neural networks. To formalize this discussion, we consider a natural artificial analog of the brain that has been used extensively to model neural computation: the recurrent neural network (RNN). In both the brain and the RNN, we emphasize the common computational substrate atop which dynamics occur-the connectivity between neurons-and we explore the unique computational advantages offered by biophysical constraints such as resource efficiency, spatial embedding, and neurodevelopment.
Affiliation(s)
- Jason Z. Kim
- Department of Physics, Cornell University, Ithaca, NY 14853, USA
| | - Bart Larsen
- Department of Pediatrics, Masonic Institute for the Developing Brain, University of Minnesota
| | - Linden Parkes
- Department of Psychiatry, Rutgers University, Piscataway, NJ 08854, USA
| |
38
Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. [PMID: 37794121 DOI: 10.1038/s41583-023-00740-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/18/2023] [Indexed: 10/06/2023]
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
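A minimal sketch of the dynamical system reconstruction recipe: fit an RNN to a measured time series by one-step-ahead prediction, then run it autonomously as a surrogate system. Toy PyTorch example; the architectures and training schemes surveyed in this Perspective are far more elaborate:

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
t = torch.linspace(0, 20 * math.pi, 2000)
data = torch.stack([torch.sin(t), torch.cos(t)], dim=1)   # measured 2-D time series

rnn = nn.RNN(input_size=2, hidden_size=32, batch_first=True)
readout = nn.Linear(32, 2)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)      # one-step-ahead targets
for _ in range(300):
    opt.zero_grad()
    out, _ = rnn(x)
    loss = ((readout(out) - y) ** 2).mean()
    loss.backward()
    opt.step()

# Freely generate: feed the model's own predictions back in as inputs
s, h, gen = data[:1].unsqueeze(0), None, []
with torch.no_grad():
    for _ in range(500):
        out, h = rnn(s, h)
        s = readout(out)
        gen.append(s.squeeze())
print(torch.stack(gen).shape)   # (500, 2): the surrogate system's own trajectory
```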
Affiliation(s)
- Daniel Durstewitz
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany.
- Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany.
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany.
| | - Georgia Koppe
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Max Ingo Thurm
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| |
39
De A, Chaudhuri R. Common population codes produce extremely nonlinear neural manifolds. Proc Natl Acad Sci U S A 2023; 120:e2305853120. [PMID: 37733742 PMCID: PMC10523500 DOI: 10.1073/pnas.2305853120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Accepted: 08/03/2023] [Indexed: 09/23/2023] Open
Abstract
Populations of neurons represent sensory, motor, and cognitive variables via patterns of activity distributed across the population. The size of the population used to encode a variable is typically much greater than the dimension of the variable itself, and thus, the corresponding neural population activity occupies lower-dimensional subsets of the full set of possible activity states. Given population activity data with such lower-dimensional structure, a fundamental question asks how close the low-dimensional data lie to a linear subspace. The linearity or nonlinearity of the low-dimensional structure reflects important computational features of the encoding, such as robustness and generalizability. Moreover, identifying such linear structure underlies common data analysis methods such as Principal Component Analysis (PCA). Here, we show that for data drawn from many common population codes the resulting point clouds and manifolds are exceedingly nonlinear, with the dimension of the best-fitting linear subspace growing at least exponentially with the true dimension of the data. Consequently, linear methods like PCA fail dramatically at identifying the true underlying structure, even in the limit of arbitrarily many data points and no noise.
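The core effect is easy to reproduce: a one-dimensional circular variable encoded by localized tuning curves demands ever more linear dimensions as tuning narrows. A synthetic sketch, with all tuning parameters chosen for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

theta = np.linspace(0, 2 * np.pi, 500)                    # 1-D circular variable
centers = np.linspace(0, 2 * np.pi, 200, endpoint=False)  # preferred angles

def linear_dim(width, var=0.95):
    rates = np.exp(np.cos(theta[:, None] - centers[None, :]) / width)  # von Mises
    evr = PCA().fit(rates).explained_variance_ratio_
    return int(np.searchsorted(np.cumsum(evr), var)) + 1

for width in (1.0, 0.3, 0.1):
    print(width, linear_dim(width))   # linear dimension grows as tuning narrows
```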
Affiliation(s)
- Anandita De
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Physics, University of California, Davis, CA 95616
| | - Rishidev Chaudhuri
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, CA 95616
- Department of Mathematics, University of California, Davis, CA 95616
| |
40
Clark DG, Abbott LF, Litwin-Kumar A. Dimension of Activity in Random Neural Networks. PHYSICAL REVIEW LETTERS 2023; 131:118401. [PMID: 37774280 DOI: 10.1103/physrevlett.131.118401] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2022] [Revised: 05/25/2023] [Accepted: 08/08/2023] [Indexed: 10/01/2023]
Abstract
Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units. Understanding how biological and machine-learning networks function and learn requires knowledge of the structure of this coordinated activity, information contained, for example, in cross covariances between units. Self-consistent dynamical mean field theory (DMFT) has elucidated several features of random neural networks-in particular, that they can generate chaotic activity-however, a calculation of cross covariances using this approach has not been provided. Here, we calculate cross covariances self-consistently via a two-site cavity DMFT. We use this theory to probe spatiotemporal features of activity coordination in a classic random-network model with independent and identically distributed (i.i.d.) couplings, showing an extensive but fractionally low effective dimension of activity and a long population-level timescale. Our formulas apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings. As an example of the latter, we analyze the case of partially symmetric couplings.
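Numerics corresponding to the model class analyzed here: simulate the classic random rate network and measure the effective dimension of its chaotic activity from the covariance eigenspectrum. An illustrative simulation, not the paper's DMFT calculation:

```python
import numpy as np

rng = np.random.default_rng(8)
N, g, dt, T = 500, 2.0, 0.05, 6000
J = g * rng.standard_normal((N, N)) / np.sqrt(N)    # i.i.d. random couplings

x, traj = rng.standard_normal(N), []
for step in range(T):
    x += dt * (-x + J @ np.tanh(x))
    if step > T // 2:                               # discard the transient
        traj.append(x.copy())

lam = np.linalg.eigvalsh(np.cov(np.array(traj).T))  # cross-covariance spectrum
pr = lam.sum() ** 2 / (lam ** 2).sum()
print(f"effective dimension: {pr:.1f} of N = {N}")  # extensive, but a low fraction
```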
Affiliation(s)
- David G Clark
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
| | - L F Abbott
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
| | - Ashok Litwin-Kumar
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
| |
41
Johnston WJ, Fine JM, Yoo SBM, Ebitz RB, Hayden BY. Semi-orthogonal subspaces for value mediate a tradeoff between binding and generalization. ARXIV 2023:arXiv:2309.07766v1. [PMID: 37744462 PMCID: PMC10516109] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 09/26/2023]
Abstract
When choosing between options, we must associate their values with the action needed to select them. We hypothesize that the brain solves this binding problem through neural population subspaces. To test this hypothesis, we examined neuronal responses in five reward-sensitive regions in macaques performing a risky choice task with sequential offers. Surprisingly, in all areas, the neural population encoded the values of offers presented on the left and right in distinct subspaces. We show that the encoding we observe is sufficient to bind the values of the offers to their respective positions in space while preserving abstract value information, which may be important for rapid learning and generalization to novel contexts. Moreover, after both offers have been presented, all areas encode the value of the first and second offers in orthogonal subspaces. In this case as well, the orthogonalization provides binding. Our binding-by-subspace hypothesis makes two novel predictions borne out by the data. First, behavioral errors should correlate with putative spatial (but not temporal) misbinding in the neural representation. Second, the specific representational geometry that we observe across animals also indicates that behavioral errors should increase when offers have low or high values, compared to when they have medium values, even when controlling for value difference. Together, these results support the idea that the brain makes use of semi-orthogonal subspaces to bind features together.
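A toy version of the subspace logic: when two value-coding directions are semi-orthogonal, a value decoder fit on one side transfers only partially to the other, preserving some abstract value information while still binding value to side. A synthetic sketch, not the paper's analysis:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
n_neurons, n_trials = 100, 500
shared = rng.standard_normal(n_neurons)
v_left = shared + rng.standard_normal(n_neurons)    # semi-orthogonal value-coding
v_right = shared + rng.standard_normal(n_neurons)   # directions (~60 degrees apart)

value = rng.uniform(0, 1, n_trials)
X_left = np.outer(value, v_left) + 0.5 * rng.standard_normal((n_trials, n_neurons))
X_right = np.outer(value, v_right) + 0.5 * rng.standard_normal((n_trials, n_neurons))

dec = LinearRegression().fit(X_left, value)
print("within-side R^2:", round(dec.score(X_left, value), 2))    # binding preserved
print("across-side R^2:", round(dec.score(X_right, value), 2))   # partial transfer
```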
Affiliation(s)
- W. Jeffrey Johnston
- Center for Theoretical Neuroscience and Mortimer B. Zuckerman Mind, Brain, and Behavior Institute, Columbia University, New York, New York, United States of America
| | - Justin M. Fine
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas, United States of America
| | - Seng Bum Michael Yoo
- Department of Biomedical Engineering, Sungkyunkwan University, and Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
| | - R. Becket Ebitz
- Department of Neuroscience, Université de Montréal, Montréal, Quebec, Canada
| | - Benjamin Y. Hayden
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas, United States of America
| |
42
Cimeša L, Ciric L, Ostojic S. Geometry of population activity in spiking networks with low-rank structure. PLoS Comput Biol 2023; 19:e1011315. [PMID: 37549194 PMCID: PMC10461857 DOI: 10.1371/journal.pcbi.1011315] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 08/28/2023] [Accepted: 06/27/2023] [Indexed: 08/09/2023] Open
Abstract
Recurrent network models are instrumental in investigating how behaviorally relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models, however, lack some fundamental biological constraints, and in particular represent individual neurons in terms of abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel non-linear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally exploit these results to directly build spiking networks that perform nonlinear computations.
Affiliation(s)
- Ljubica Cimeša
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| | - Lazar Ciric
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| |
43
Genkin M, Shenoy KV, Chandrasekaran C, Engel TA. The dynamics and geometry of choice in premotor cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.07.22.550183. [PMID: 37546748 PMCID: PMC10401920 DOI: 10.1101/2023.07.22.550183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/08/2023]
Abstract
The brain represents sensory variables in the coordinated activity of neural populations, in which tuning curves of single neurons define the geometry of the population code. Whether the same coding principle holds for dynamic cognitive variables remains unknown because internal cognitive processes unfold with a unique time course on single trials observed only in the irregular spiking of heterogeneous neural populations. Here we show the existence of such a population code for the dynamics of choice formation in the primate premotor cortex. We developed an approach to simultaneously infer population dynamics and tuning functions of single neurons to the population state. Applied to spike data recorded during decision-making, our model revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Our results reveal a common geometric principle for neural encoding of sensory and dynamic cognitive variables.
Affiliation(s)
| | - Krishna V Shenoy
- Howard Hughes Medical Institute, Stanford University, Stanford, CA
- Department of Electrical Engineering, Stanford University, Stanford, CA
| | - Chandramouli Chandrasekaran
- Department of Anatomy & Neurobiology, Boston University, Boston, MA
- Department of Psychological and Brain Sciences, Boston University, Boston, MA
- Center for Systems Neuroscience, Boston University, Boston, MA
| | - Tatiana A Engel
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
| |
44
Fakhar K, Dixit S, Hadaeghi F, Kording KP, Hilgetag CC. When Neural Activity Fails to Reveal Causal Contributions. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.06.06.543895. [PMID: 37333375 PMCID: PMC10274733 DOI: 10.1101/2023.06.06.543895] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/20/2023]
Abstract
Neuroscientists rely on distributed spatio-temporal patterns of neural activity to understand how neural units contribute to cognitive functions and behavior. However, the extent to which neural activity reliably indicates a unit's causal contribution to the behavior is not well understood. To address this issue, we provide a systematic multi-site perturbation framework that captures time-varying causal contributions of elements to a collectively produced outcome. Applying our framework to intuitive toy examples and artificial neuronal networks revealed that recorded activity patterns of neural elements may not be generally informative of their causal contribution due to activity transformations within a network. Overall, our findings emphasize the limitations of inferring causal mechanisms from neural activities and offer a rigorous lesioning framework for elucidating causal neural contributions.
Affiliation(s)
- Kayson Fakhar
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Germany
| | - Shrey Dixit
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Germany
| | - Fatemeh Hadaeghi
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Germany
| | - Konrad P. Kording
- Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Learning in Machines & Brains, CIFAR, Toronto, ON, Canada
| | - Claus C. Hilgetag
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Germany
- Department of Health Sciences, Boston University, Boston, MA, USA
| |
45
Langdon C, Genkin M, Engel TA. A unifying perspective on neural manifolds and circuits for cognition. Nat Rev Neurosci 2023; 24:363-377. [PMID: 37055616 PMCID: PMC11058347 DOI: 10.1038/s41583-023-00693-x] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/06/2023] [Indexed: 04/15/2023]
Abstract
Two different perspectives have informed efforts to explain the link between the brain and behaviour. One approach seeks to identify neural circuit elements that carry out specific functions, emphasizing connectivity between neurons as a substrate for neural computations. Another approach centres on neural manifolds - low-dimensional representations of behavioural signals in neural population activity - and suggests that neural computations are realized by emergent dynamics. Although manifolds reveal an interpretable structure in heterogeneous neuronal activity, finding the corresponding structure in connectivity remains a challenge. We highlight examples in which establishing the correspondence between low-dimensional activity and connectivity has been possible, unifying the neural manifold and circuit perspectives. This relationship is conspicuous in systems in which the geometry of neural responses mirrors their spatial layout in the brain, such as the fly navigational system. Furthermore, we describe evidence that, in systems in which neural responses are heterogeneous, the circuit comprises interactions between activity patterns on the manifold via low-rank connectivity. We suggest that unifying the manifold and circuit approaches is important if we are to be able to causally test theories about the neural computations that underlie behaviour.
Affiliation(s)
- Christopher Langdon
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
| | - Mikhail Genkin
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
| | - Tatiana A Engel
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA.
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA.
| |
46
Sabesan S, Fragner A, Bench C, Drakopoulos F, Lesica NA. Large-scale electrophysiology and deep learning reveal distorted neural signal dynamics after hearing loss. eLife 2023; 12:e85108. [PMID: 37162188 PMCID: PMC10202456 DOI: 10.7554/elife.85108] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 04/27/2023] [Indexed: 05/11/2023] Open
Abstract
Listeners with hearing loss often struggle to understand speech in noise, even with a hearing aid. To better understand the auditory processing deficits that underlie this problem, we made large-scale brain recordings from gerbils, a common animal model for human hearing, while presenting a large database of speech and noise sounds. We first used manifold learning to identify the neural subspace in which speech is encoded and found that it is low-dimensional and that the dynamics within it are profoundly distorted by hearing loss. We then trained a deep neural network (DNN) to replicate the neural coding of speech with and without hearing loss and analyzed the underlying network dynamics. We found that hearing loss primarily impacts spectral processing, creating nonlinear distortions in cross-frequency interactions that result in a hypersensitivity to background noise that persists even after amplification with a hearing aid. Our results identify a new focus for efforts to design improved hearing aids and demonstrate the power of DNNs as a tool for the study of central brain structures.
Affiliation(s)
| | | | - Ciaran Bench
- Ear Institute, University College London, London, United Kingdom
| | | | | |
47
Schneider S, Lee JH, Mathis MW. Learnable latent embeddings for joint behavioural and neural analysis. Nature 2023; 617:360-368. [PMID: 37138088 PMCID: PMC10172131 DOI: 10.1038/s41586-023-06031-6] [Citation(s) in RCA: 66] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 03/28/2023] [Indexed: 05/05/2023]
Abstract
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our ability to record large-scale neural and behavioural data increases, there is growing interest in modelling neural dynamics during adaptive behaviours to probe neural representations. In particular, although neural latent embeddings can reveal underlying correlates of behaviour, we lack nonlinear techniques that can explicitly and flexibly leverage joint behaviour and neural data to uncover neural dynamics. Here, we fill this gap with a new encoding method, CEBRA, that jointly uses behavioural and neural data in a (supervised) hypothesis- or (self-supervised) discovery-driven manner to produce both consistent and high-performance latent spaces. We show that consistency can be used as a metric for uncovering meaningful differences, and the inferred latents can be used for decoding. We validate its accuracy and demonstrate our tool's utility for both calcium and electrophysiology datasets, across sensory and motor tasks and in simple or complex behaviours across species. It can leverage single- and multi-session datasets for hypothesis testing, or be used label-free. Lastly, we show that CEBRA can be used to map space, uncover complex kinematic features, produce consistent latent spaces across two-photon and Neuropixels data, and provide rapid, high-accuracy decoding of natural videos from visual cortex.
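A hedged usage sketch of the CEBRA package on synthetic data; parameter values are illustrative and the exact API should be checked against the library's documentation:

```python
import numpy as np
import cebra

rng = np.random.default_rng(10)
neural = rng.standard_normal((5000, 120)).astype('float32')  # time x neurons
behavior = rng.uniform(0, 1, (5000, 1)).astype('float32')    # e.g. position

model = cebra.CEBRA(
    model_architecture='offset10-model',
    output_dimension=3,       # low-dimensional latent space
    max_iterations=2000,
    batch_size=512,
)
model.fit(neural, behavior)           # hypothesis-driven (supervised) mode
embedding = model.transform(neural)   # consistent low-D latents
print(embedding.shape)                # (5000, 3)
```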
Affiliation(s)
- Steffen Schneider
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Jin Hwa Lee
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Mackenzie Weygandt Mathis
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland.
| |
48
Libedinsky C. Comparing representations and computations in single neurons versus neural networks. Trends Cogn Sci 2023; 27:517-527. [PMID: 37005114 DOI: 10.1016/j.tics.2023.03.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 03/09/2023] [Accepted: 03/10/2023] [Indexed: 04/03/2023]
Abstract
Single-neuron-level explanations have been the gold standard in neuroscience for decades. Recently, however, neural-network-level explanations have become increasingly popular. This increase in popularity is driven by the fact that the analysis of neural networks can solve problems that cannot be addressed by analyzing neurons independently. In this opinion article, I argue that while both frameworks employ the same general logic to link physical and mental phenomena, in many cases the neural network framework provides better explanatory objects to understand representations and computations related to mental phenomena. I discuss what constitutes a mechanistic explanation in neural systems, provide examples, and conclude by highlighting a number of the challenges and considerations associated with the use of analyses of neural networks to study brain function.
49
Tang W, Shin JD, Jadhav SP. Geometric transformation of cognitive maps for generalization across hippocampal-prefrontal circuits. Cell Rep 2023; 42:112246. [PMID: 36924498 PMCID: PMC10124109 DOI: 10.1016/j.celrep.2023.112246] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 01/09/2023] [Accepted: 02/26/2023] [Indexed: 03/17/2023] Open
Abstract
The ability to abstract information to guide decisions during navigation across changing environments is essential for adaptation and requires the integrity of the hippocampal-prefrontal circuitry. The hippocampus encodes navigational information in a cognitive map, but it remains unclear how cognitive maps are transformed across hippocampal-prefrontal circuits to support abstraction and generalization. Here, we simultaneously record hippocampal-prefrontal ensembles as rats generalize navigational rules across distinct environments. We find that, whereas hippocampal representational maps maintain specificity of separate environments, prefrontal maps generalize across environments. Furthermore, while both maps are structured within a neural manifold of population activity, they have distinct representational geometries. Prefrontal geometry enables abstraction of rule-informative variables, a representational format that generalizes to novel conditions of existing variable classes. Hippocampal geometry lacks such abstraction. Together, these findings elucidate how cognitive maps are structured into distinct geometric representations to support abstraction and generalization while maintaining memory specificity.
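A toy decoding test of the geometry described here: a rule decoder trained in one environment transfers when the environment shift is orthogonal to the rule-coding axis (an abstract, prefrontal-like geometry) and fails when the shift overlaps that axis. A synthetic sketch, not the study's analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n_trials, n_neurons = 400, 80
rule = rng.integers(0, 2, n_trials)                  # rule-informative variable
axis = rng.standard_normal(n_neurons)                # rule-coding direction

def responses(shift):
    X = np.outer(rule - 0.5, axis) + shift
    return X + 0.8 * rng.standard_normal((n_trials, n_neurons))

raw = rng.standard_normal(n_neurons)
ortho_shift = 2.0 * (raw - (raw @ axis) / (axis @ axis) * axis)  # PFC-like remap
aligned_shift = 2.0 * axis                                       # HPC-like remap

clf = LogisticRegression(max_iter=1000).fit(responses(0.0), rule)
print("within environment:", clf.score(responses(0.0), rule))
print("orthogonal remap:", clf.score(responses(ortho_shift), rule))  # generalizes
print("aligned remap:", clf.score(responses(aligned_shift), rule))   # fails
```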
Affiliation(s)
- Wenbo Tang
- Neuroscience Program, Department of Psychology, and Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02453, USA.
| | - Justin D Shin
- Neuroscience Program, Department of Psychology, and Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02453, USA
| | - Shantanu P Jadhav
- Neuroscience Program, Department of Psychology, and Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02453, USA.
| |
50
Sedler AR, Versteeg C, Pandarinath C. Expressive architectures enhance interpretability of dynamics-based neural population models. NEURONS, BEHAVIOR, DATA ANALYSIS, AND THEORY 2023; 2023:10.51628/001c.73987. [PMID: 38699512 PMCID: PMC11065448 DOI: 10.51628/001c.73987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2024]
Abstract
Artificial neural networks that can recover latent dynamics from recorded neural activity may provide a powerful avenue for identifying and interpreting the dynamical motifs underlying biological computation. Given that neural variance alone does not uniquely determine a latent dynamical system, interpretable architectures should prioritize accurate and low-dimensional latent dynamics. In this work, we evaluated the performance of sequential autoencoders (SAEs) in recovering latent chaotic attractors from simulated neural datasets. We found that SAEs with widely-used recurrent neural network (RNN)-based dynamics were unable to infer accurate firing rates at the true latent state dimensionality, and that larger RNNs relied upon dynamical features not present in the data. On the other hand, SAEs with neural ordinary differential equation (NODE)-based dynamics inferred accurate rates at the true latent state dimensionality, while also recovering latent trajectories and fixed point structure. Ablations reveal that this is mainly because NODEs (1) allow use of higher-capacity multi-layer perceptrons (MLPs) to model the vector field and (2) predict the derivative rather than the next state. Decoupling the capacity of the dynamics model from its latent dimensionality enables NODEs to learn the requisite low-D dynamics where RNN cells fail. Additionally, the fact that the NODE predicts derivatives imposes a useful autoregressive prior on the latent states. The suboptimal interpretability of widely-used RNN-based dynamics may motivate substitution for alternative architectures, such as NODE, that enable learning of accurate dynamics in low-dimensional latent spaces.
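A skeleton of the NODE-style latent dynamics module this study favours, assuming the torchdiffeq package: a small MLP models the vector field (predicting the derivative rather than the next state) and an ODE solver advances the latent state. Illustrative only, not the authors' code:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint

class VectorField(nn.Module):
    """MLP vector field: the model predicts dz/dt, not the next state."""
    def __init__(self, latent_dim=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, latent_dim))

    def forward(self, t, z):
        return self.net(z)

func = VectorField()
z0 = torch.randn(16, 3)              # batch of initial latent states
times = torch.linspace(0, 1, 50)
traj = odeint(func, z0, times)       # (50, 16, 3) latent trajectories
print(traj.shape)
```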
Affiliation(s)
- Andrew R. Sedler
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
| | - Christopher Versteeg
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
| | - Chethan Pandarinath
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
| |