1. Lin B, Kriegeskorte N. The topology and geometry of neural representations. Proc Natl Acad Sci U S A 2024; 121:e2317881121. PMID: 39374397. DOI: 10.1073/pnas.2317881121.
Abstract
A central question for neuroscience is how to characterize brain representations of perceptual and cognitive content. An ideal characterization should distinguish different functional regions with robustness to noise and to idiosyncrasies of individual brains that do not correspond to computational differences. Previous studies have characterized brain representations by their representational geometry, which is defined by the representational dissimilarity matrix (RDM), a summary statistic that abstracts from the roles of individual neurons (or response channels) and characterizes the discriminability of stimuli. Here, we explore a further step of abstraction: from the geometry to the topology of brain representations. We propose topological representational similarity analysis, an extension of representational similarity analysis that uses a family of geo-topological summary statistics that generalizes the RDM to characterize the topology while de-emphasizing the geometry. We evaluate this family of statistics in terms of sensitivity and specificity for model selection, using both simulations and functional MRI (fMRI) data. In the simulations, the ground truth is a data-generating layer representation in a neural network model, and the candidate models are the same layer and other layers taken from different model instances (trained from different random seeds). In fMRI, the ground truth is a visual area, and the candidate models are the same area and other areas measured in different subjects. Results show that topology-sensitive characterizations of population codes are robust to noise and interindividual variability while maintaining excellent sensitivity to the unique representational signatures of different neural network layers and brain regions.
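For orientation, the RDM summary statistic that this abstract builds on can be sketched in a few lines (a generic NumPy illustration using correlation distance, a common choice in RSA; this is not the authors' geo-topological statistics, and the array shapes and names are assumptions):

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix for a (stimuli x channels) array,
    using correlation distance (1 - Pearson r), a common choice in RSA."""
    z = responses - responses.mean(axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return 1.0 - z @ z.T          # entry (i, j): dissimilarity of stimuli i, j

rng = np.random.default_rng(0)
patterns = rng.normal(size=(6, 100))   # 6 stimuli x 100 response channels
d = rdm(patterns)
print(d.shape)                         # (6, 6): symmetric, ~zero diagonal
```

The paper's geo-topological statistics can be thought of as nonlinear transforms of such a matrix that preserve neighborhood (topological) structure while discarding fine metric detail.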
Affiliation(s)
- Baihan Lin: Department of Artificial Intelligence and Human Health, Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029; Department of Psychiatry, Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY 10029; Department of Neuroscience, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Nikolaus Kriegeskorte: Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027; Department of Psychology, Columbia University, New York, NY 10027; Department of Neuroscience, Columbia University, New York, NY 10027
2. Noorman M, Hulse BK, Jayaraman V, Romani S, Hermundstad AM. Maintaining and updating accurate internal representations of continuous variables with a handful of neurons. Nat Neurosci 2024. PMID: 39363052. DOI: 10.1038/s41593-024-01766-5.
Abstract
Many animals rely on persistent internal representations of continuous variables for working memory, navigation, and motor control. Existing theories typically assume that large networks of neurons are required to maintain such representations accurately; networks with few neurons are thought to generate discrete representations. However, analysis of two-photon calcium imaging data from tethered flies walking in darkness suggests that their small head-direction system can maintain a surprisingly continuous and accurate representation. We thus ask whether it is possible for a small network to generate a continuous, rather than discrete, representation of such a variable. We show analytically that even very small networks can be tuned to maintain continuous internal representations, but this comes at the cost of sensitivity to noise and variations in tuning. This work expands the computational repertoire of small networks, and raises the possibility that larger networks could represent more and higher-dimensional variables than previously thought.
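The tuning argument can be illustrated with a toy linear ring network (a hedged sketch, not the authors' model; all parameters are illustrative). With cosine weights scaled so the bump mode has eigenvalue exactly 1, even six neurons hold a continuous angle, and any mistuned gain makes the memory decay or blow up:

```python
import numpy as np

def ring_memory(n=6, gain=1.0, t=5.0, dt=0.001, cue=1.0):
    """Linear rate network on a ring of n neurons. With weights (2/n)cos(dtheta),
    the cosine activity mode has eigenvalue exactly 1, so dr/dt = -r + W r holds
    a bump at ANY angle (a continuous attractor) even for a handful of neurons.
    Mistuning the gain makes the memory decay (<1) or blow up (>1)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    w = gain * (2.0 / n) * np.cos(theta[:, None] - theta[None, :])
    r = np.cos(theta - cue)                    # bump encoding the cue angle
    for _ in range(int(t / dt)):
        r = r + dt * (-r + w @ r)              # leaky recurrent dynamics
    pv = np.sum(r * np.exp(1j * theta))        # population-vector decode
    return float(2.0 / n * np.abs(pv)), float(np.angle(pv))

amp, ang = ring_memory(gain=1.0)
print(round(amp, 3), round(ang, 3))            # 1.0 1.0: amplitude and angle held
```

The sensitivity the abstract describes shows up directly: rerunning with `gain=0.9` leaves the decoded angle intact but the bump amplitude decays toward zero.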
Affiliation(s)
- Marcella Noorman, Brad K Hulse, Vivek Jayaraman, Sandro Romani, Ann M Hermundstad: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
3. Street JS, Jeffery KJ. The dorsal thalamic lateral geniculate nucleus is required for visual control of head direction cell firing direction in rats. J Physiol 2024. PMID: 39235958. DOI: 10.1113/jp286868.
Abstract
Head direction (HD) neurons, signalling facing direction, generate a signal that is primarily anchored to the outside world by visual inputs. We investigated the route for visual landmark information into the HD system in rats. There are two candidates: an evolutionarily older, larger subcortical retino-tectal pathway and a more recently evolved, smaller cortical retino-geniculo-striate pathway. We disrupted the cortical pathway by lesioning the dorsal lateral geniculate thalamic nuclei bilaterally, and recorded HD cells in the postsubicular cortex as rats foraged in a visual-cue-controlled enclosure. In lesioned rats we found the expected number of postsubicular HD cells. Although directional tuning curves were broader across a trial, this was attributable to the increased instability of otherwise normal-width tuning curves. Tuning curves were also poorly responsive to polarizing visual landmarks and did not distinguish cues based on their visual pattern. Thus, the retino-geniculo-striate pathway is not crucial for the generation of an underlying, tightly tuned directional signal but does provide the main route for vision-based anchoring of the signal to the outside world, even when visual cues are high in contrast and low in detail.
KEY POINTS: Head direction (HD) cells indicate the facing direction of the head, using visual landmarks to distinguish directions. In rats, we investigated whether this visual information is routed through the thalamus to the visual cortex or arrives via the superior colliculus, which is a phylogenetically older and (in rodents) larger pathway. We lesioned the thalamic dorsal lateral geniculate nucleus (dLGN) in rats and recorded the responsiveness of cortical HD cells to visual cues. We found that cortical HD cells had normal tuning curves, but these were slightly more unstable during a trial. Most notably, HD cells in dLGN-lesioned animals showed little ability to distinguish highly distinct cues and none to distinguish more similar cues. These results suggest that directional processing of visual landmarks in mammals requires the geniculo-cortical pathway, which raises questions about when and how visual directional landmark processing appeared during evolution.
Affiliation(s)
- James S Street: Institute of Neurology, University College London, London, UK
- Kate J Jeffery: School of Psychology & Neuroscience, University of Glasgow, Glasgow, UK
4. Mitchell EC, Story B, Boothe D, Franaszczuk PJ, Maroulas V. A topological deep learning framework for neural spike decoding. Biophys J 2024; 123:2781-2789. PMID: 38402607. PMCID: PMC11393671. DOI: 10.1016/j.bpj.2024.01.025.
Abstract
The brain's spatial orientation system uses different neuron ensembles to aid in environment-based navigation. Two of the ways brains encode spatial information are through head direction cells and grid cells. Brains use head direction cells to determine orientation, whereas grid cells consist of layers of decked neurons that overlay to provide environment-based navigation. These neurons fire in ensembles where several neurons fire at once to activate a single head direction or grid. We want to capture this firing structure and use it to decode head direction and animal location from head direction and grid cell activity. Understanding, representing, and decoding these neural structures require models that encompass higher-order connectivity, more than the one-dimensional connectivity that traditional graph-based models provide. To that end, in this work, we develop a topological deep learning framework for neural spike train decoding. Our framework combines unsupervised simplicial complex discovery with the power of deep learning via a new architecture we develop herein called a simplicial convolutional recurrent neural network. Simplicial complexes, topological spaces that use not only vertices and edges but also higher-dimensional objects, naturally generalize graphs and capture more than just pairwise relationships. Additionally, this approach does not require prior knowledge of the neural activity beyond spike counts, which removes the need for similarity measurements. The effectiveness and versatility of the simplicial convolutional recurrent neural network are demonstrated on head direction and trajectory prediction via head direction and grid cell datasets.
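The step from pairwise graphs to simplicial complexes can be conveyed with a toy construction (pure Python; the binning of spikes into co-active sets is an assumption, and the paper's simplicial convolutional recurrent architecture is not reproduced here):

```python
from itertools import combinations

def cofiring_complex(spike_bins, max_dim=2):
    """Build a simplicial complex from binned spikes: each time bin's set of
    co-active neurons contributes a simplex and all of its faces. A filled
    triangle (a, b, c) records genuinely triple-wise co-firing, which a plain
    graph of pairwise edges cannot distinguish from three separate pairs."""
    simplices = set()
    for active in spike_bins:
        vertices = tuple(sorted(active))
        for k in range(1, min(len(vertices), max_dim + 1) + 1):
            simplices.update(combinations(vertices, k))
    return simplices

# one triple ensemble and one separate pair of co-firing neurons
cx = cofiring_complex([{0, 1, 2}, {0, 1, 2}, {3, 4}])
print((0, 1, 2) in cx, (3, 4) in cx, (2, 3) in cx)   # True True False
```

In the paper, structures of this kind are discovered without supervision and then fed to the learned decoder; here the complex is built directly from the toy bins.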
Affiliation(s)
- Edward C Mitchell: University of Tennessee Knoxville, Knoxville, Tennessee; Joe Gibbs Human Performance Institute, Huntersville, North Carolina
- Brittany Story: University of Tennessee Knoxville, Knoxville, Tennessee; Army Research Lab, Aberdeen, Maryland
- Piotr J Franaszczuk: Army Research Lab, Aberdeen, Maryland; Johns Hopkins University, Baltimore, Maryland
5. Long X, Wang X, Deng B, Shen R, Lv SQ, Zhang SJ. Intrinsic bipolar head-direction cells in the medial entorhinal cortex. Adv Sci (Weinh) 2024:e2401216. PMID: 39206928. DOI: 10.1002/advs.202401216.
Abstract
Head-direction (HD) cells are a fundamental component of the hippocampal-entorhinal circuit for spatial navigation and help maintain an internal sense of direction that anchors orientation in space. A classical HD cell robustly increases its firing rate when the head is oriented toward a specific direction, with each cell tuned to only one direction. Although unidirectional HD cells have been reported broadly across multiple brain regions, computational modelling has predicted the existence of multiple equilibrium states of the HD network, a prediction that had yet to be confirmed. In this study, a novel HD cell variant, the bipolar HD cell, is identified in the medial entorhinal cortex (MEC); these cells exhibit stable double-peaked directional tuning. The bipolar patterns remain stable in darkness and across environments of distinct geometric shapes. Moreover, bipolar HD cells co-rotate coherently with unipolar HD cells when anchoring to an external visual cue. This discovery reveals a new spatial cell type whose unique activity patterns may comprise a building block for a sophisticated local neural circuit configuration supporting the internal representation of direction, shedding light on how the brain processes spatial information.
Affiliation(s)
- Xiaoyang Long, Bin Deng, Rui Shen, Sheng-Qing Lv, Sheng-Jia Zhang: Department of Neurosurgery, Xinqiao Hospital, Army Medical University, Chongqing, 400037, China
- Xiaoxia Wang: Department of Basic Psychology, School of Psychology, Army Medical University, Chongqing, 400038, China
6. Luo TZ, Kim TD, Gupta D, Bondy AG, Kopec CD, Elliot VA, DePasquale B, Brody CD. Transitions in dynamical regime and neural mode underlie perceptual decision-making. bioRxiv [Preprint] 2024:2023.10.15.562427. PMID: 37904994. PMCID: PMC10614809. DOI: 10.1101/2023.10.15.562427.
Abstract
Perceptual decision-making is the process by which an animal uses sensory stimuli to choose an action or mental proposition. This process is thought to be mediated by neurons organized as attractor networks [1,2]. However, whether attractor dynamics underlie decision behavior and the complex neuronal responses remains unclear. Here we use an unsupervised, deep learning-based method to discover decision-related dynamics from the simultaneous activity of neurons in frontal cortex and striatum of rats while they accumulate pulsatile auditory evidence. We found that trajectories evolved along two sequential regimes, the first dominated by sensory inputs and the second dominated by the autonomous dynamics, with flow in a direction (i.e., "neural mode") largely orthogonal to that in the first regime. We propose that the second regime corresponds to decision commitment. We developed a simplified model that approximates the coupled transition in dynamics and neural mode and allows precise inference, from each trial's neural activity, of a putative internal decision commitment time in that trial. The simplified model captures diverse and complex single-neuron temporal profiles, such as ramping and stepping [3-5]. It also captures trial-averaged curved trajectories [6-8] and reveals distinctions between brain regions. The putative neurally inferred commitment times ("nTc") were broadly distributed across trials and not time-locked to stimulus onset, offset, or response onset. Nevertheless, when trials were aligned to nTc, behavioral analysis showed that, as predicted for a decision commitment time, sensory evidence before nTc affected the subjects' decision, but evidence after nTc did not. Our results show that the formation of a perceptual choice involves a rapid, coordinated transition in both the dynamical regime and the neural mode of the decision process, and suggest the moment of commitment to be a useful entry point for dissecting mechanisms underlying rapid changes in internal state.
7. Kristensen SS, Kesgin K, Jörntell H. High-dimensional cortical signals reveal rich bimodal and working memory-like representations among S1 neuron populations. Commun Biol 2024; 7:1043. PMID: 39179675. PMCID: PMC11344095. DOI: 10.1038/s42003-024-06743-z.
Abstract
Complexity is important for flexibility of natural behavior and for the remarkably efficient learning of the brain. Here we assessed the signal complexity among neuron populations in somatosensory cortex (S1). To maximize our chances of capturing population-level signal complexity, we used highly repeatable, resolvable visual, tactile, and visuo-tactile inputs and neuronal unit activity recorded at high temporal resolution. We found the state space of the spontaneous activity to be extremely high-dimensional in S1 populations. Their processing of tactile inputs was profoundly modulated by visual inputs, and even fine nuances of visual input patterns were separated. Moreover, the dynamic activity states of the S1 neuron population signaled the preceding specific input long after the stimulation had terminated, i.e., resident information that could be a substrate for a working memory. Hence, the recorded high-dimensional representations carried rich multimodal and internal working memory-like signals supporting high complexity in cortical circuitry operation.
Affiliation(s)
- Sofie S Kristensen, Kaan Kesgin, Henrik Jörntell: Department of Experimental Medical Science, Neural Basis of Sensorimotor Control, Lund University, Lund, Sweden
8. Eisen AJ, Kozachkov L, Bastos AM, Donoghue JA, Mahnke MK, Brincat SL, Chandra S, Tauber J, Brown EN, Fiete IR, Miller EK. Propofol anesthesia destabilizes neural dynamics across cortex. Neuron 2024; 112:2799-2813.e9. PMID: 39013467. DOI: 10.1016/j.neuron.2024.06.011.
Abstract
Every day, hundreds of thousands of people undergo general anesthesia. One hypothesis is that anesthesia disrupts dynamic stability-the ability of the brain to balance excitability with the need to be stable and controllable. To test this hypothesis, we developed a method for quantifying changes in population-level dynamic stability in complex systems: delayed linear analysis for stability estimation (DeLASE). Propofol was used to transition animals between the awake state and anesthetized unconsciousness. DeLASE was applied to macaque cortex local field potentials (LFPs). We found that neural dynamics were more unstable in unconsciousness compared with the awake state. Cortical trajectories mirrored predictions from destabilized linear systems. We mimicked the effect of propofol in simulated neural networks by increasing inhibitory tone. This in turn destabilized the networks, as observed in the neural data. Our results suggest that anesthesia disrupts dynamical stability that is required for consciousness.
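The core idea of estimating stability from fitted linear dynamics can be sketched as follows (a much-simplified toy stand-in for DeLASE, which the paper develops far more carefully; the signal, delay count, and parameters here are illustrative): fit a linear map between successive delay-embedded states and inspect the magnitude of its eigenvalues.

```python
import numpy as np

def spectral_radius(x, delays=2):
    """Fit a linear map on delay-embedded states (x_t, x_{t+1}) -> next state
    and return the largest eigenvalue magnitude: values near or above 1 mean
    slowly decaying or unstable estimated dynamics."""
    emb = np.column_stack([x[i:len(x) - delays + i] for i in range(delays)])
    past, future = emb[:-1], emb[1:]
    a, *_ = np.linalg.lstsq(past, future, rcond=None)
    return float(np.abs(np.linalg.eigvals(a.T)).max())

rng = np.random.default_rng(1)
sig = [0.0, 1.0]
for _ in range(500):                 # AR(2) with a decaying oscillation
    sig.append(0.9 * sig[-1] - 0.5 * sig[-2] + 0.01 * rng.normal())
print(spectral_radius(np.array(sig)) < 1.0)   # True: fitted dynamics are stable
```

In the study's terms, the finding is that this kind of stability measure moves toward instability under propofol, relative to the awake state.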
Affiliation(s)
- Adam J Eisen: The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; The K. Lisa Yang Integrative Computational Neuroscience Center, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Leo Kozachkov: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; The K. Lisa Yang Integrative Computational Neuroscience Center, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- André M Bastos: Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37235, USA
- Jacob A Donoghue: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Beacon Biosignals, Boston, MA 02114, USA
- Meredith K Mahnke: The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Scott L Brincat: The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Sarthak Chandra: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; The K. Lisa Yang Integrative Computational Neuroscience Center, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- John Tauber: Department of Mathematics and Statistics, Boston University, Boston, MA 02215, USA
- Emery N Brown: The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA 02114, USA; Division of Sleep Medicine, Harvard Medical School, Boston, MA 02115, USA
- Ila R Fiete: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; The K. Lisa Yang Integrative Computational Neuroscience Center, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Earl K Miller: The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
9. Senzai Y, Scanziani M. The brain simulates actions and their consequences during REM sleep. bioRxiv [Preprint] 2024:2024.08.13.607810. PMID: 39211157. PMCID: PMC11361194. DOI: 10.1101/2024.08.13.607810.
Abstract
Vivid dreams mostly occur during a phase of sleep called REM [1-5]. During REM sleep, the brain's internal representation of direction keeps shifting like that of an awake animal moving through its environment [6-8]. What causes these shifts, given the immobility of the sleeping animal? Here we show that the superior colliculus of the mouse, a motor command center involved in orienting movements [9-15], issues motor commands during REM sleep (e.g., turn left) that are similar to those issued in the awake behaving animal. Strikingly, these motor commands, despite not being executed, shift the internal representation of direction as if the animal had turned. Thus, during REM sleep, the brain simulates actions by issuing motor commands that, while not executed, have consequences as if they had been. This study suggests that the sleeping brain, while disengaged from the external world, uses its internal model of the world to simulate interactions with it.
10. Dong LL, Fiete IR. Grid cells in cognition: mechanisms and function. Annu Rev Neurosci 2024; 47:345-368. PMID: 38684081. DOI: 10.1146/annurev-neuro-101323-112047.
Abstract
The activity patterns of grid cells form distinctively regular triangular lattices over the explored spatial environment and are largely invariant to visual stimuli, animal movement, and environment geometry. These neurons present numerous fascinating challenges to the curious (neuro)scientist: What are the circuit mechanisms responsible for creating spatially periodic activity patterns from the monotonic input-output responses of single neurons? How and why does the brain encode a local, nonperiodic variable (the allocentric position of the animal) with a periodic, nonlocal code? And are grid cells truly specialized for spatial computations, or do they serve cognition more broadly? We review efforts to uncover the mechanisms and functional properties of grid cells, highlighting recent progress in the experimental validation of mechanistic grid cell models, and discuss the coding properties and functional advantages of the grid code as suggested by continuous attractor network models of grid cells.
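One coding property alluded to above, that a periodic, modular code has capacity far exceeding any single period, can be illustrated with an integer toy (real grid modules have continuous phases and non-integer period ratios, so this is only an analogy):

```python
from math import lcm

def encode(x, periods):
    """A grid-like code: position is represented only by its phase (remainder)
    within each module's period."""
    return tuple(x % p for p in periods)

def decode(phases, periods):
    """Brute-force decode over the unambiguous range: with coprime periods the
    code is unique up to lcm(periods), far beyond any single module's period."""
    for x in range(lcm(*periods)):
        if encode(x, periods) == phases:
            return x

periods = (4, 9)                                   # two small coprime modules
print(decode(encode(35, periods), periods), lcm(*periods))   # 35 36
```

Neither module alone can distinguish positions a full period apart, yet together they disambiguate positions over a much longer range, the combinatorial capacity argument often made for the grid code.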
Affiliation(s)
- Ling L Dong: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Ila R Fiete: McGovern Institute and K. Lisa Yang Integrative Computational Neuroscience Center, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
11. Morales-Gregorio A, Kurth AC, Ito J, Kleinjohann A, Barthélemy FV, Brochier T, Grün S, van Albada SJ. Neural manifolds in V1 change with top-down signals from V4 targeting the foveal region. Cell Rep 2024; 43:114371. PMID: 38923458. DOI: 10.1016/j.celrep.2024.114371.
Abstract
High-dimensional brain activity is often organized into lower-dimensional neural manifolds. However, the neural manifolds of the visual cortex remain understudied. Here, we study large-scale multi-electrode electrophysiological recordings of macaque (Macaca mulatta) areas V1, V4, and DP with a high spatiotemporal resolution. We find that the population activity of V1 contains two separate neural manifolds, which correlate strongly with eye closure (eyes open/closed) and have distinct dimensionalities. Moreover, we find strong top-down signals from V4 to V1, particularly to the foveal region of V1, which are significantly stronger during the eyes-open periods. Finally, in silico simulations of a balanced spiking neuron network qualitatively reproduce the experimental findings. Taken together, our analyses and simulations suggest that top-down signals modulate the population activity of V1. We postulate that the top-down modulation during the eyes-open periods prepares V1 for fast and efficient visual responses, resulting in a type of visual stand-by state.
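The notion of distinct manifold dimensionalities is often quantified with the participation ratio of covariance eigenvalues; a minimal sketch (a generic estimator, not necessarily the exact one used in the paper; the simulated data are illustrative):

```python
import numpy as np

def participation_ratio(x):
    """Effective dimensionality of population activity x (samples x neurons):
    PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues).
    PR is near 1 when a single mode dominates and near n for isotropic activity."""
    ev = np.linalg.eigvalsh(np.cov(x.T))
    return float(ev.sum() ** 2 / (ev ** 2).sum())

rng = np.random.default_rng(0)
iso = rng.normal(size=(2000, 10))                         # isotropic 10-d activity
low = rng.normal(size=(2000, 1)) @ rng.normal(size=(1, 10)) + 0.1 * iso
print(participation_ratio(iso) > 8, participation_ratio(low) < 2)  # True True
```

A measure of this kind makes the paper's claim concrete: the eyes-open and eyes-closed manifolds in V1 occupy subspaces with measurably different effective dimensionalities.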
Affiliation(s)
- Aitor Morales-Gregorio: Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institute of Zoology, University of Cologne, Cologne, Germany
- Anno C Kurth: Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Junji Ito: Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
- Alexander Kleinjohann: Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Frédéric V Barthélemy: Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institut de Neurosciences de la Timone (INT), CNRS and Aix-Marseille Université, Marseille, France
- Thomas Brochier: Institut de Neurosciences de la Timone (INT), CNRS and Aix-Marseille Université, Marseille, France
- Sonja Grün: Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany; JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Sacha J van Albada: Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institute of Zoology, University of Cologne, Cologne, Germany
12. Mehrotra D, Levenstein D, Duszkiewicz AJ, Carrasco SS, Booker SA, Kwiatkowska A, Peyrache A. Hyperpolarization-activated currents drive neuronal activation sequences in sleep. Curr Biol 2024; 34:3043-3054.e8. PMID: 38901427. DOI: 10.1016/j.cub.2024.05.048.
Abstract
Sequential neuronal patterns are believed to support information processing in the cortex, yet their origin is still a matter of debate. We report that neuronal activity in the mouse postsubiculum (PoSub), where a majority of neurons are modulated by the animal's head direction, was sequentially activated along the dorsoventral axis during sleep at the transition from hyperpolarized "DOWN" to activated "UP" states, while representing a stable direction. Computational modeling suggested that these dynamics could be attributed to a spatial gradient of hyperpolarization-activated currents (Ih), which we confirmed in ex vivo slice experiments and corroborated in other cortical structures. These findings open up the possibility that varying amounts of Ih across cortical neurons could result in sequential neuronal patterns and that traveling activity upstream of the entorhinal-hippocampal circuit organizes large-scale neuronal activity supporting learning and memory during sleep.
Affiliation(s)
- Dhruv Mehrotra: Montréal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, 3801 Rue University, Montréal, QC H3A 2B4, Canada; Integrated Program in Neuroscience, McGill University, 3801 Rue University, Montréal, QC H3A 2B4, Canada
- Daniel Levenstein: Montréal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, 3801 Rue University, Montréal, QC H3A 2B4, Canada; MILA, 6666 Rue Saint-Urbain, Montréal, QC H2S 3H1, Canada
- Adrian J Duszkiewicz: Montréal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, 3801 Rue University, Montréal, QC H3A 2B4, Canada; Division of Psychology, Faculty of Natural Sciences, University of Stirling, Stirling FK9 4LA, UK; Centre for Discovery Brain Sciences, University of Edinburgh, Hugh Robson Building, George Square, Edinburgh EH8 9XD, UK
- Sofia Skromne Carrasco: Montréal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, 3801 Rue University, Montréal, QC H3A 2B4, Canada; Integrated Program in Neuroscience, McGill University, 3801 Rue University, Montréal, QC H3A 2B4, Canada
- Sam A Booker: Centre for Discovery Brain Sciences, University of Edinburgh, Hugh Robson Building, George Square, Edinburgh EH8 9XD, UK; Simons Initiative for the Developing Brain, University of Edinburgh, Hugh Robson Building, George Square, Edinburgh EH8 9XD, UK; Patrick Wild Centre for Research into Autism, Fragile X Syndrome & Intellectual Disabilities, University of Edinburgh, Hugh Robson Building, George Square, Edinburgh EH8 9XD, UK
- Angelika Kwiatkowska: Centre for Discovery Brain Sciences, University of Edinburgh, Hugh Robson Building, George Square, Edinburgh EH8 9XD, UK; Simons Initiative for the Developing Brain, University of Edinburgh, Hugh Robson Building, George Square, Edinburgh EH8 9XD, UK
- Adrien Peyrache: Montréal Neurological Institute and Hospital, Department of Neurology and Neurosurgery, 3801 Rue University, Montréal, QC H3A 2B4, Canada
13
|
Li Q, Sorscher B, Sompolinsky H. Representations and generalization in artificial and brain neural networks. Proc Natl Acad Sci U S A 2024; 121:e2311805121. PMID: 38913896; PMCID: PMC11228472; DOI: 10.1073/pnas.2311805121.
Abstract
Humans and animals excel at generalizing from limited data, a capability yet to be fully replicated in artificial intelligence. This perspective investigates generalization in biological and artificial deep neural networks (DNNs), in both in-distribution and out-of-distribution contexts. We introduce two hypotheses: First, the geometric properties of the neural manifolds associated with discrete cognitive entities, such as objects, words, and concepts, are powerful order parameters. They link the neural substrate to the generalization capabilities and provide a unified methodology bridging gaps between neuroscience, machine learning, and cognitive science. We review recent progress in studying the geometry of neural manifolds, particularly in visual object recognition, and discuss theories connecting manifold dimension and radius to generalization capacity. Second, we suggest that the theory of learning in wide DNNs, especially in the thermodynamic limit, provides mechanistic insights into the learning processes generating desired neural representational geometries and generalization. This includes the role of weight norm regularization, network architecture, and hyperparameters. We explore recent advances in this theory and ongoing challenges. We also discuss the dynamics of learning and its relevance to the issue of representational drift in the brain.
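The manifold "order parameters" discussed in this abstract (dimension, radius) can be made concrete with a standard estimator of effective dimensionality, the participation ratio of the covariance eigenvalues. The sketch below is illustrative only and is not taken from the paper; the data and names are invented.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of X (samples x features):
    PR = (sum lambda_i)^2 / sum(lambda_i^2), where lambda_i are the
    eigenvalues of the feature covariance matrix."""
    X = X - X.mean(axis=0)
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
# Isotropic 10-D cloud: participation ratio close to 10.
iso = rng.normal(size=(2000, 10))
# Anisotropic cloud: variance concentrated along one direction.
aniso = iso * np.array([10, 1, 1, 1, 1, 1, 1, 1, 1, 1])

print(round(participation_ratio(iso), 1))    # near 10
print(round(participation_ratio(aniso), 1))  # much smaller
```

A low participation ratio relative to the number of recorded units is one simple signature of a compact neural manifold.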
Affiliation(s)
- Qianyi Li
- The Harvard Biophysics Graduate Program, Harvard University, Cambridge, MA 02138
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Ben Sorscher
- The Applied Physics Department, Stanford University, Stanford, CA 94305
- Haim Sompolinsky
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem 9190401, Israel

14
Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. PMID: 38553340; DOI: 10.1016/j.tics.2024.03.003. Received 08/24/2023; revised 02/29/2024; accepted 03/07/2024.
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France.
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA

15
Griffiths BJ, Schreiner T, Schaefer JK, Vollmar C, Kaufmann E, Quach S, Remi J, Noachtar S, Staudigl T. Electrophysiological signatures of veridical head direction in humans. Nat Hum Behav 2024; 8:1334-1350. PMID: 38710766; DOI: 10.1038/s41562-024-01872-1. Received 07/12/2023; accepted 03/22/2024.
Abstract
Information about heading direction is critical for navigation as it provides the means to orient ourselves in space. However, given that veridical head-direction signals require physical rotation of the head and most human neuroimaging experiments depend upon fixing the head in position, little is known about how the human brain is tuned to such heading signals. Here we address this by asking 52 healthy participants undergoing simultaneous electroencephalography and motion tracking recordings (split into two experiments) and 10 patients undergoing simultaneous intracranial electroencephalography and motion tracking recordings to complete a series of orientation tasks in which they made physical head rotations to target positions. We then used a series of forward encoding models and linear mixed-effects models to isolate electrophysiological activity that was specifically tuned to heading direction. We identified a robust posterior central signature that predicts changes in veridical head orientation after regressing out confounds including sensory input and muscular activity. Both source localization and intracranial analysis implicated the medial temporal lobe as the origin of this effect. Subsequent analyses disentangled head-direction signatures from signals relating to head rotation and those reflecting location-specific effects. Lastly, when directly comparing head direction and eye-gaze-related tuning, we found that the brain maintains both codes while actively navigating, with stronger tuning to head direction in the medial temporal lobe. Together, these results reveal a taxonomy of population-level head-direction signals within the human brain that is reminiscent of those reported in the single units of rodents.
Affiliation(s)
- Benjamin J Griffiths
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Centre for Human Brain Health, University of Birmingham, Birmingham, UK
- Thomas Schreiner
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Julia K Schaefer
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Christian Vollmar
- Epilepsy Center, Department of Neurology, Ludwig-Maximilians-Universität University Hospital, Ludwig-Maximilians-Universität München, Munich, Germany
- Elisabeth Kaufmann
- Epilepsy Center, Department of Neurology, Ludwig-Maximilians-Universität University Hospital, Ludwig-Maximilians-Universität München, Munich, Germany
- Stefanie Quach
- Department of Neurosurgery, University Hospital Munich, Ludwig-Maximilians-Universität München, Munich, Germany
- Jan Remi
- Epilepsy Center, Department of Neurology, Ludwig-Maximilians-Universität University Hospital, Ludwig-Maximilians-Universität München, Munich, Germany
- Soheyl Noachtar
- Epilepsy Center, Department of Neurology, Ludwig-Maximilians-Universität University Hospital, Ludwig-Maximilians-Universität München, Munich, Germany
- Tobias Staudigl
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany.

16
Hermansen E, Klindt DA, Dunn BA. Uncovering 2-D toroidal representations in grid cell ensemble activity during 1-D behavior. Nat Commun 2024; 15:5429. PMID: 38926360; PMCID: PMC11208534; DOI: 10.1038/s41467-024-49703-1. Received 01/23/2023; accepted 06/13/2024.
Abstract
Minimal experiments, such as head-fixed wheel-running and sleep, offer experimental advantages but restrict the amount of observable behavior, making it difficult to classify functional cell types. Arguably, the grid cell, and its striking periodicity, would not have been discovered without the perspective provided by free behavior in an open environment. Here, we show that by shifting the focus from single neurons to populations, we change the minimal experimental complexity required. We identify grid cell modules and show that the activity covers a similar, stable toroidal state space during wheel running as in open field foraging. Trajectories on grid cell tori correspond to single trial runs in virtual reality and path integration in the dark, and the alignment of the representation rapidly shifts with changes in experimental conditions. Thus, we provide a methodology to discover and study complex internal representations in even the simplest of experiments.
Affiliation(s)
- Erik Hermansen
- Department of Mathematical Sciences, NTNU, Trondheim, Norway.
- David A Klindt
- Department of Mathematical Sciences, NTNU, Trondheim, Norway
- Cold Spring Harbor Laboratory, Cold Spring Harbor, Laurel Hollow, New York, USA
- Benjamin A Dunn
- Department of Mathematical Sciences, NTNU, Trondheim, Norway.

17
Schreiner T, Griffiths BJ, Kutlu M, Vollmar C, Kaufmann E, Quach S, Remi J, Noachtar S, Staudigl T. Spindle-locked ripples mediate memory reactivation during human NREM sleep. Nat Commun 2024; 15:5249. PMID: 38898100; PMCID: PMC11187142; DOI: 10.1038/s41467-024-49572-8. Received 10/30/2023; accepted 06/11/2024.
Abstract
Memory consolidation relies in part on the reactivation of previous experiences during sleep. The precise interplay of sleep-related oscillations (slow oscillations, spindles and ripples) is thought to coordinate the information flow between relevant brain areas, with ripples mediating memory reactivation. However, in humans, empirical evidence for a role of ripples in memory reactivation is lacking. Here, we investigated the relevance of sleep oscillations, and specifically ripples, for memory reactivation during human sleep using targeted memory reactivation. Intracranial electrophysiology in epilepsy patients and scalp EEG in healthy participants revealed that elevated levels of slow oscillation-spindle activity coincided with the read-out of experimentally induced memory reactivation. Importantly, spindle-locked ripples recorded intracranially from the medial temporal lobe were found to be correlated with the identification of memory reactivation during non-rapid eye movement sleep. Our findings establish ripples as a key oscillation for sleep-related memory reactivation in humans and emphasize the importance of the coordinated interplay of the cardinal sleep oscillations.
Affiliation(s)
- Thomas Schreiner
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Benjamin J Griffiths
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Centre for Human Brain Health, University of Birmingham, Birmingham, UK
- Merve Kutlu
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Christian Vollmar
- Epilepsy Center, Department of Neurology, Ludwig-Maximilians-Universität München, Munich, Germany
- Elisabeth Kaufmann
- Epilepsy Center, Department of Neurology, Ludwig-Maximilians-Universität München, Munich, Germany
- Stefanie Quach
- Department of Neurosurgery, University Hospital Munich, Ludwig-Maximilians-Universität München, Munich, Germany
- Jan Remi
- Epilepsy Center, Department of Neurology, Ludwig-Maximilians-Universität München, Munich, Germany
- Soheyl Noachtar
- Epilepsy Center, Department of Neurology, Ludwig-Maximilians-Universität München, Munich, Germany
- Tobias Staudigl
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany.

18
Li T, Wang J, Li S, Li K. Probing latent brain dynamics in Alzheimer's disease via recurrent neural network. Cogn Neurodyn 2024; 18:1183-1195. PMID: 38826675; PMCID: PMC11143160; DOI: 10.1007/s11571-023-09981-9. Received 03/03/2023; revised 05/14/2023; accepted 05/31/2023.
Abstract
The impairment of cognitive function in Alzheimer's disease (AD) is clearly correlated with abnormal changes in cortical rhythm. However, the mechanisms underlying this correlation are still poorly understood. Here, we investigate how network structure and dynamical characteristics underlie these abnormal changes in cortical rhythm. To that end, EEG data from AD patients and normal participants were collected. By extracting the energy characteristics of different sub-bands of the EEG signals, we find that the rhythm of AD patients is distinctive particularly in the theta and alpha bands: the cortical rhythm of the normal state is concentrated in the alpha band, whereas that of the AD state shifts to the theta band. Furthermore, a recurrent neural network (RNN) is trained to explore the rhythm formation and the transformation between the two neural states from the perspective of neurocomputation. We find that the neural coupling strength decreases significantly in the AD state compared with the normal state, which weakens the ability to transmit information in the AD state. In addition, the low-dimensional properties of the RNN are obtained. By analyzing the relationship between the cortical rhythm transition and the low-dimensional trajectory, we conclude that the low-dimensional trajectory updates more slowly and the communication cost is higher in the AD state, which explains the abnormal synchronization of the AD brain network. Our work reveals causes of the abnormal synchronous state of brain functional networks, which may expand our understanding of the mechanism of cognitive impairment in AD and provide an EEG biomarker for early AD.
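The sub-band energy comparison described in this abstract (theta versus alpha dominance) can be sketched with a standard Welch power estimate. This is a minimal illustration on a synthetic trace, not the authors' pipeline; the sampling rate, band edges, and signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Average spectral power of x within [lo, hi] Hz (Welch PSD)."""
    f, psd = welch(x, fs=fs, nperseg=fs * 2)
    band = (f >= lo) & (f <= hi)
    return psd[band].mean()

fs = 250  # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic "AD-like" trace: strong 6 Hz (theta) component plus noise.
eeg = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.normal(size=t.size)

theta = band_power(eeg, fs, 4, 8)   # 4-8 Hz
alpha = band_power(eeg, fs, 8, 13)  # 8-13 Hz
print(theta > alpha)  # theta dominates in this synthetic trace
```

Comparing such band powers between conditions is the kind of feature extraction the abstract refers to before the RNN modeling step.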
Affiliation(s)
- Tong Li
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Jiang Wang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Shanshan Li
- School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin, China
- Kai Li
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China

19
Chang JC, Perich MG, Miller LE, Gallego JA, Clopath C. De novo motor learning creates structure in neural activity that shapes adaptation. Nat Commun 2024; 15:4084. PMID: 38744847; PMCID: PMC11094149; DOI: 10.1038/s41467-024-48008-7. Received 06/26/2023; accepted 04/18/2024.
Abstract
Animals can quickly adapt learned movements to external perturbations, and their existing motor repertoire likely influences their ease of adaptation. Long-term learning causes lasting changes in neural connectivity, which shapes the activity patterns that can be produced during adaptation. Here, we examined how a neural population's existing activity patterns, acquired through de novo learning, affect subsequent adaptation by modeling motor cortical neural population dynamics with recurrent neural networks. We trained networks on different motor repertoires comprising varying numbers of movements, which they acquired following various learning experiences. Networks with multiple movements had more constrained and robust dynamics, which were associated with more defined neural 'structure'-organization in the available population activity patterns. This structure facilitated adaptation, but only when the changes imposed by the perturbation were congruent with the organization of the inputs and the structure in neural activity acquired during de novo learning. These results highlight trade-offs in skill acquisition and demonstrate how different learning experiences can shape the geometrical properties of neural population activity and subsequent adaptation.
Affiliation(s)
- Joanna C Chang
- Department of Bioengineering, Imperial College London, London, UK
- Matthew G Perich
- Département de Neurosciences, Faculté de Médecine, Université de Montréal, Montréal, QC, Canada
- Mila, Québec Artificial Intelligence Institute, Montréal, QC, Canada
- Lee E Miller
- Departments of Physiology, Biomedical Engineering and Physical Medicine and Rehabilitation, Northwestern University and Shirley Ryan Ability Lab, Chicago, IL, USA
- Juan A Gallego
- Department of Bioengineering, Imperial College London, London, UK.
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK.

20
Fortunato C, Bennasar-Vázquez J, Park J, Chang JC, Miller LE, Dudman JT, Perich MG, Gallego JA. Nonlinear manifolds underlie neural population activity during behaviour. bioRxiv [Preprint] 2024:2023.07.18.549575. PMID: 37503015; PMCID: PMC10370078; DOI: 10.1101/2023.07.18.549575.
Abstract
There is rich variety in the activity of single neurons recorded during behaviour. Yet, these diverse single neuron responses can be well described by relatively few patterns of neural co-modulation. The study of such low-dimensional structure of neural population activity has provided important insights into how the brain generates behaviour. Virtually all of these studies have used linear dimensionality reduction techniques to estimate these population-wide co-modulation patterns, constraining them to a flat "neural manifold". Here, we hypothesised that since neurons have nonlinear responses and make thousands of distributed and recurrent connections that likely amplify such nonlinearities, neural manifolds should be intrinsically nonlinear. Combining neural population recordings from monkey, mouse, and human motor cortex, and mouse striatum, we show that: 1) neural manifolds are intrinsically nonlinear; 2) their nonlinearity becomes more evident during complex tasks that require more varied activity patterns; and 3) manifold nonlinearity varies across architecturally distinct brain regions. Simulations using recurrent neural network models confirmed the proposed relationship between circuit connectivity and manifold nonlinearity, including the differences across architecturally distinct regions. Thus, neural manifolds underlying the generation of behaviour are inherently nonlinear, and properly accounting for such nonlinearities will be critical as neuroscientists move towards studying numerous brain regions involved in increasingly complex and naturalistic behaviours.
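The abstract's central point, that a flat (linear) description can misstate the dimensionality of a nonlinear manifold, can be illustrated with a toy example: a 1-D ring embedded in 10-D needs two linear components even though its intrinsic dimension is one. The sketch below uses a PCA-style spectrum via SVD and is an invented illustration, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# A 1-D nonlinear manifold: a ring, embedded in 10-D by an orthonormal map.
theta = rng.uniform(0, 2 * np.pi, size=1000)
ring = np.column_stack([np.cos(theta), np.sin(theta)])
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))
X = ring @ Q[:2]  # (1000 samples, 10 "neurons")

# Linear (PCA-style) spectrum via SVD of the centered data.
Xc = X - X.mean(axis=0)
var = np.linalg.svd(Xc, compute_uv=False) ** 2
evr = var / var.sum()  # explained variance ratio per linear component

# The intrinsic dimension is 1, but a flat description needs 2 components:
print(np.round(evr[:3], 2))  # roughly [0.5, 0.5, 0.0]
```

A nonlinear method that respects distances along the ring would recover the single intrinsic dimension, which is the gap the paper exploits to quantify manifold nonlinearity.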
Affiliation(s)
- Cátia Fortunato
- Department of Bioengineering, Imperial College London, London, UK
- Junchol Park
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Joanna C. Chang
- Department of Bioengineering, Imperial College London, London, UK
- Lee E. Miller
- Department of Neurosciences, Northwestern University, Chicago, IL, USA
- Department of Biomedical Engineering, Northwestern University, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA, and Shirley Ryan Ability Lab, Chicago, IL, USA
- Joshua T. Dudman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Matthew G. Perich
- Department of Neurosciences, Faculté de médecine, Université de Montréal, Montréal, Québec, Canada
- Québec Artificial Intelligence Institute (MILA), Montréal, Québec, Canada
- Juan A. Gallego
- Department of Bioengineering, Imperial College London, London, UK

21
Talpir I, Livneh Y. Stereotyped goal-directed manifold dynamics in the insular cortex. Cell Rep 2024; 43:114027. PMID: 38568813; PMCID: PMC11063631; DOI: 10.1016/j.celrep.2024.114027. Received 11/10/2023; revised 02/12/2024; accepted 03/15/2024.
Abstract
The insular cortex is involved in diverse processes, including bodily homeostasis, emotions, and cognition. However, we lack a comprehensive understanding of how it processes information at the level of neuronal populations. We leveraged recent advances in unsupervised machine learning to study insular cortex population activity patterns (i.e., neuronal manifold) in mice performing goal-directed behaviors. We find that the insular cortex activity manifold is remarkably consistent across different animals and under different motivational states. Activity dynamics within the neuronal manifold are highly stereotyped during rewarded trials, enabling robust prediction of single-trial outcomes across different mice and across various natural and artificial motivational states. Comparing goal-directed behavior with self-paced free consumption, we find that the stereotyped activity patterns reflect task-dependent goal-directed reward anticipation, and not licking, taste, or positive valence. These findings reveal a core computation in insular cortex that could explain its involvement in pathologies involving aberrant motivations.
Affiliation(s)
- Itay Talpir
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
- Yoav Livneh
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel.

22
Beshkov K, Fyhn M, Hafting T, Einevoll GT. Topological structure of population activity in mouse visual cortex encodes densely sampled stimulus rotations. iScience 2024; 27:109370. PMID: 38523791; PMCID: PMC10959658; DOI: 10.1016/j.isci.2024.109370. Received 06/28/2023; revised 10/06/2023; accepted 02/26/2024.
Abstract
The primary visual cortex is one of the most well understood regions supporting the processing involved in sensory computation. Following the popularization of high-density neural recordings, it has been observed that the activity of large neural populations is often constrained to low dimensional manifolds. In this work, we quantify the structure of such neural manifolds in the visual cortex. We do this by analyzing publicly available two-photon optical recordings of mouse primary visual cortex in response to visual stimuli with a densely sampled rotation angle. Using a geodesic metric along with persistent homology, we discover that population activity in response to such stimuli generates a circular manifold, encoding the angle of rotation. Furthermore, we observe that this circular manifold is expressed differently in subpopulations of neurons with differing orientation and direction selectivity. Finally, we discuss some of the obstacles to reliably retrieving the truthful topology generated by a neural population.
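The "geodesic metric" step mentioned in this abstract can be sketched as follows: connect only nearby points and then take graph shortest paths, so that distances follow the manifold rather than cut across it. For points on a unit circle, the geodesic between antipodal points approaches half the circumference (about pi) while the Euclidean distance is the diameter. The neighborhood radius is an assumed parameter; this is an illustration, not the paper's persistent-homology pipeline.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

# 200 points on a unit circle: a stand-in for ring-like population activity.
ang = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.column_stack([np.cos(ang), np.sin(ang)])

# Geodesic metric: keep only short edges (assumed neighborhood radius eps),
# then run graph shortest paths so distances follow the manifold.
D = squareform(pdist(pts))
eps = 0.1
graph = np.where(D <= eps, D, np.inf)
geo = shortest_path(graph, method="D", directed=False)

i, j = 0, 100  # antipodal points on the circle
print(round(D[i, j], 2))    # Euclidean distance: the diameter, 2.0
print(round(geo[i, j], 2))  # geodesic distance: ~half the circumference (pi)
```

Feeding such a geodesic distance matrix (instead of Euclidean distances) into persistent homology is what allows the circular structure to be detected robustly.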
Affiliation(s)
- Kosio Beshkov
- Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
- Marianne Fyhn
- Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
- Torkel Hafting
- Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
- Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Gaute T. Einevoll
- Center for Integrative Neuroplasticity, Department of Bioscience, University of Oslo, Oslo, Norway
- Department of Physics, Norwegian University of Life Sciences, Ås, Norway
- Department of Physics, University of Oslo, Oslo, Norway

23
Duszkiewicz AJ, Orhan P, Skromne Carrasco S, Brown EH, Owczarek E, Vite GR, Wood ER, Peyrache A. Local origin of excitatory-inhibitory tuning equivalence in a cortical network. Nat Neurosci 2024; 27:782-792. PMID: 38491324; PMCID: PMC11001581; DOI: 10.1038/s41593-024-01588-5. Received 12/12/2022; accepted 01/24/2024.
Abstract
The interplay between excitation and inhibition determines the fidelity of cortical representations. The receptive fields of excitatory neurons are often finely tuned to encoded features, but the principles governing the tuning of inhibitory neurons remain elusive. In this study, we recorded populations of neurons in the mouse postsubiculum (PoSub), where the majority of excitatory neurons are head-direction (HD) cells. We show that the tuning of fast-spiking (FS) cells, the largest class of cortical inhibitory neurons, was broad and frequently radially symmetrical. By decomposing tuning curves using the Fourier transform, we identified an equivalence in tuning between PoSub-FS and PoSub-HD cell populations. Furthermore, recordings, optogenetic manipulations of upstream thalamic populations and computational modeling provide evidence that the tuning of PoSub-FS cells has a local origin. These findings support the notion that the equivalence of neuronal tuning between excitatory and inhibitory cell populations is an intrinsic property of local cortical networks.
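The Fourier-decomposition step for comparing tuning curves can be sketched on idealized von Mises tuning curves; the curves and parameters below are invented for illustration and are not the authors' data.

```python
import numpy as np

n = 360
ang = np.linspace(0, 2 * np.pi, n, endpoint=False)

def von_mises(pref, kappa):
    """Idealized head-direction tuning curve (von Mises shaped)."""
    return np.exp(kappa * np.cos(ang - pref))

# Sharply tuned "HD cell" vs. broadly tuned "FS cell".
hd = von_mises(pref=1.0, kappa=4.0)
fs_cell = von_mises(pref=2.5, kappa=0.5)

def fourier_profile(curve):
    """Normalized magnitude of the first few Fourier components,
    ignoring phase (i.e., independent of preferred direction)."""
    c = np.abs(np.fft.rfft(curve - curve.mean()))[:6]
    return c / c.sum()

# Sharp tuning spreads energy into higher harmonics; broad tuning
# concentrates it in the first component.
print(fourier_profile(fs_cell)[1] > fourier_profile(hd)[1])
```

Because the magnitude spectrum discards the preferred direction, it gives a phase-free signature of tuning shape, which is what makes population-level comparisons between HD and FS cells possible.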
Affiliation(s)
- Adrian J Duszkiewicz
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada.
- Centre for Discovery Brain Sciences, University of Edinburgh, Edinburgh, UK.
- Department of Psychology, University of Stirling, Stirling, UK.
- Pierre Orhan
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Ecole normale supérieure, PSL University, CNRS, Paris, France
- Sofia Skromne Carrasco
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Eleanor H Brown
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Eliott Owczarek
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Gilberto R Vite
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Emma R Wood
- Centre for Discovery Brain Sciences, University of Edinburgh, Edinburgh, UK
- Simons Initiative for the Developing Brain, University of Edinburgh, Edinburgh, UK
- Adrien Peyrache
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada.

24
Medrano J, Friston K, Zeidman P. Linking fast and slow: The case for generative models. Netw Neurosci 2024; 8:24-43. PMID: 38562283; PMCID: PMC10861163; DOI: 10.1162/netn_a_00343. Received 08/08/2023; accepted 10/11/2023.
Abstract
A pervasive challenge in neuroscience is testing whether neuronal connectivity changes over time due to specific causes, such as stimuli, events, or clinical interventions. Recent hardware innovations and falling data storage costs enable longer, more naturalistic neuronal recordings. The implicit opportunity for understanding the self-organised brain calls for new analysis methods that link temporal scales: from the order of milliseconds over which neuronal dynamics evolve, to the order of minutes, days, or even years over which experimental observations unfold. This review article demonstrates how hierarchical generative models and Bayesian inference help to characterise neuronal activity across different time scales. Crucially, these methods go beyond describing statistical associations among observations and enable inference about underlying mechanisms. We offer an overview of fundamental concepts in state-space modeling and suggest a taxonomy for these methods. Additionally, we introduce key mathematical principles that underscore a separation of temporal scales, such as the slaving principle, and review Bayesian methods that are being used to test hypotheses about the brain with multiscale data. We hope that this review will serve as a useful primer for experimental and computational neuroscientists on the state of the art and current directions of travel in the complex systems modelling literature.
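The state-space modeling concepts surveyed in this review can be grounded with the simplest case: a linear-Gaussian state-space model with a Kalman filter, where fast latent dynamics are inferred from noisy observations. The parameters below are invented for illustration; this is a textbook sketch, not one of the reviewed methods.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal linear-Gaussian state-space model (all parameters assumed):
A, C = 0.95, 1.0  # state transition and observation maps
Q, R = 0.1, 0.5   # process and observation noise variances

# Simulate a latent trajectory x and noisy observations y.
T = 500
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = A * x[t - 1] + np.sqrt(Q) * rng.normal()
    y[t] = C * x[t] + np.sqrt(R) * rng.normal()

# Kalman filter: posterior mean m and variance P of the hidden state.
m, P = 0.0, 1.0
est = np.zeros(T)
for t in range(T):
    m_pred, P_pred = A * m, A * A * P + Q      # predict
    K = P_pred * C / (C * C * P_pred + R)      # Kalman gain
    m = m_pred + K * (y[t] - C * m_pred)       # update with observation
    P = (1 - K * C) * P_pred
    est[t] = m

# Filtering should track the latent state better than raw observations.
err_filter = np.mean((est - x) ** 2)
err_raw = np.mean((y / C - x) ** 2)
print(err_filter < err_raw)
```

Hierarchical generative models of the kind the review discusses extend this picture by letting parameters such as A themselves drift on slower timescales.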
Affiliation(s)
- Johan Medrano
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Karl Friston
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Peter Zeidman
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK

25
Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. PMID: 38443626; DOI: 10.1038/s41583-024-00796-z. Accepted 01/26/2024.
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
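The output-null idea reviewed here has a compact linear-algebra form: given a linear readout W, the output-null space is the orthogonal complement of W's row space, and activity moving within it leaves the output unchanged. The readout matrix and dimensions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_out = 8, 2
W = rng.normal(size=(n_out, n_neurons))  # hypothetical linear readout

# Output-potent directions span W's row space; output-null directions
# span its orthogonal complement (remaining right singular vectors).
_, _, Vt = np.linalg.svd(W)
potent = Vt[:n_out]  # directions that drive the output
null = Vt[n_out:]    # directions invisible to the readout

x = rng.normal(size=n_neurons)  # some population state
x_null = null.T @ (null @ x)    # its output-null component

# Moving along output-null directions leaves the motor output unchanged:
print(np.allclose(W @ (x + x_null), W @ x))  # True
```

This is why preparatory activity can be large yet produce no movement: it can live in the (n_neurons - n_out)-dimensional null space, which is "expansive" whenever the population is much larger than the output.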
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
26
Wang W, Yan Z, Wang L, Xu S. Topological Characteristics of the Pore Network in the Tight Sandstone Using Persistent Homology. ACS Omega 2024; 9:11589-11596. PMID: 38496948; PMCID: PMC10938304; DOI: 10.1021/acsomega.3c08847.
Abstract
Tight sandstone reservoirs have become important areas for unconventional reservoir development, and their pore network is a key feature for identifying tight sandstone, affecting fluid migration paths and reservoir development efficiency. However, the connectivity characteristics of the pore network at different scales have remained unclear owing to the large number of pores and their irregular shapes. Here, using pore size distributions from many hundreds of tight sandstone samples and subsequent topological data analysis, we construct the topological structure of the pore network in the Yanchang Formation tight sandstone of the Ordos Basin in China and visualize the topological characteristics of the pore network as a function of distance. We show that there are three connected groups within the pore structure of the tight sandstone. The topology of the pore network resides on a trident ring manifold, suggesting that the pore network in the tight sandstone encompasses three clear dominant connection paths. One prominent bar in the H0 dimension of the barcode indicates a two-point connection from the nanoscale to the microscale in the pore network. Three prominent bars with varying durations in the H1 dimension indicate the presence of three separate multipoint connections within a limited extent in the pore network. Connectivity of combined pores is good and is controlled by the topological structure of the pore network. This demonstration of pore connections on a trident ring manifold provides a population-level visualization of the pore network in the tight sandstone.
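As a minimal, self-contained illustration of the H0 (connected-component) barcodes mentioned above: in a Vietoris-Rips filtration of a point cloud, each component is born at scale 0 and dies at a single-linkage merge distance, i.e., at an edge length of the minimum spanning tree. This is not the authors' pipeline (which also needs H1; libraries such as Ripser or GUDHI compute both), and the toy point cloud is invented:

```python
import numpy as np

def h0_barcode(points):
    """H0 persistence of a Vietoris-Rips filtration: components are born
    at scale 0 and die when single-linkage merges them, so the finite
    death times are exactly the minimum-spanning-tree edge lengths."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    deaths = []
    # Kruskal-style sweep over edges in order of increasing length.
    for i, j in sorted(((i, j) for i in range(n) for j in range(i + 1, n)),
                       key=lambda e: d[e]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d[i, j])   # one component dies at this scale
    return sorted(deaths)            # n-1 finite bars; one bar lives forever

# Two well-separated clusters: one long bar marks the merge scale.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
bars = h0_barcode(pts)
print(bars[-1])   # the largest death time is the inter-cluster gap (about 7.0)
```

A prominent (long) bar, as in the abstract's H0 description, corresponds to a connection that persists across a wide range of scales before merging.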
Affiliation(s)
- Wei Wang
- College of Chemistry and Chemical Engineering, Yulin University, Yulin 719000, Shaanxi, P. R. China
- Zhiyong Yan
- No. 1 Oil Production Plant, PetroChina Changqing Oilfield Company, Yan’an 716000, Shaanxi, P. R. China
- Lina Wang
- No. 1 Oil Production Plant, PetroChina Changqing Oilfield Company, Yan’an 716000, Shaanxi, P. R. China
- Shuang Xu
- No. 1 Oil Production Plant, PetroChina Changqing Oilfield Company, Yan’an 716000, Shaanxi, P. R. China
27
Schafer M, Kamilar-Britt P, Sahani V, Bachi K, Schiller D. Neural Trajectories of Conceptually Related Events. bioRxiv 2024:2023.12.04.569670. PMID: 38187737; PMCID: PMC10769183; DOI: 10.1101/2023.12.04.569670.
Abstract
In a series of conceptually related episodes, meaning arises from the link between these events rather than from each event individually. How does the brain keep track of conceptually related sequences of events (i.e., conceptual trajectories)? In a particular kind of conceptual trajectory, a social relationship, meaning arises from a specific sequence of interactions. To test whether such abstract sequences are neurally tracked, we had participants complete a naturalistic, narrative-based social interaction game during functional magnetic resonance imaging. We modeled the simulated relationships as trajectories through an abstract affiliation and power space. In two independent samples, we found evidence of individual social relationships being tracked with unique sequences of hippocampal states. The neural states corresponded to the accumulated trial-to-trial affiliation and power relations between the participant and each character, such that each relationship's history was captured by its own neural trajectory. Each relationship had its own sequence of states, and all relationships were embedded within the same manifold. As such, we show that the hippocampus represents social relationships with ordered sequences of low-dimensional neural patterns. The number of distinct clusters of states on this manifold is also related to social function, as measured by the size of real-world social networks. These results suggest that our evolving relationships with others are represented in trajectory-like neural patterns.
Affiliation(s)
- Matthew Schafer
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai; New York City, NY
- Philip Kamilar-Britt
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai; New York City, NY
- Vyoma Sahani
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai; New York City, NY
- Keren Bachi
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai; New York City, NY
- Department of Environmental Medicine and Public Health, Icahn School of Medicine at Mount Sinai; New York City, NY
- Daniela Schiller
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai; New York City, NY
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai; New York City, NY
- Friedman Brain Institute, Icahn School of Medicine at Mount Sinai; New York City, NY
28
Pattadkal JJ, Zemelman BV, Fiete I, Priebe NJ. Primate neocortex performs balanced sensory amplification. Neuron 2024; 112:661-675.e7. PMID: 38091984; PMCID: PMC10922204; DOI: 10.1016/j.neuron.2023.11.005.
Abstract
The sensory cortex amplifies relevant features of external stimuli. This sensitivity and selectivity arise through the transformation of inputs by cortical circuitry. We characterize the circuit mechanisms and dynamics of cortical amplification by making large-scale simultaneous measurements of single cells in awake primates and testing computational models. By comparing network activity in both driven and spontaneous states with models, we identify the circuit as operating in a regime of non-normal balanced amplification. Incoming inputs are strongly but transiently amplified by strong recurrent feedback from the disruption of excitatory-inhibitory balance in the network. Strong inhibition rapidly quenches responses, thereby permitting the tracking of time-varying stimuli.
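The essence of non-normal (balanced) amplification, transient growth in a stable linear system, can be shown with a hypothetical two-mode circuit in which a "difference" mode feeds a "sum" mode. The weight value below is an arbitrary choice for illustration, not an estimate from the data:

```python
import numpy as np

# Minimal sketch of non-normal amplification (assumed toy parameters).
w = 8.0                      # effective feedforward weight between modes
A = np.array([[-1.0, w],
              [0.0, -1.0]])  # both eigenvalues are -1: the system is stable

dt, T = 0.001, 8.0
x = np.array([0.0, 1.0])     # input deposited in the "feeding" mode
norms = []
for _ in range(int(T / dt)):
    x = x + dt * (A @ x)     # Euler integration of dx/dt = A x
    norms.append(np.linalg.norm(x))

# Despite strictly negative eigenvalues, the response is transiently
# amplified well above the input norm before decaying back to baseline.
print(max(norms))   # > 1 for large enough w
```

Both eigenvalues of A are negative, yet the response overshoots the input several-fold before it is quenched, mirroring the strong-but-transient amplification described above.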
Affiliation(s)
- Jagruti J Pattadkal
- Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA.
- Boris V Zemelman
- Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA
- Ila Fiete
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA
- Nicholas J Priebe
- Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA.
29
Kawahara D, Fujisawa S. Advantages of Persistent Cohomology in Estimating Animal Location From Grid Cell Population Activity. Neural Comput 2024; 36:385-411. PMID: 38363660; DOI: 10.1162/neco_a_01645.
Abstract
Many cognitive functions are represented as cell assemblies. In the case of spatial navigation, the population activity of place cells in the hippocampus and grid cells in the entorhinal cortex represents self-location in the environment. The brain cannot directly observe self-location information in the environment. Instead, it relies on sensory information and memory to estimate self-location. Therefore, estimating low-dimensional dynamics, such as the movement trajectory of an animal exploring its environment, from high-dimensional neural activity alone is important in deciphering the information represented in the brain. Most previous studies have estimated the low-dimensional dynamics (i.e., latent variables) behind neural activity by unsupervised learning with Bayesian population decoding using artificial neural networks or Gaussian processes. Recently, persistent cohomology has been used to estimate latent variables from the phase information (i.e., circular coordinates) of manifolds created by neural activity. However, the advantages of persistent cohomology over Bayesian population decoding are not well understood. We compared persistent cohomology and Bayesian population decoding in estimating the animal location from simulated and actual grid cell population activity. We found that persistent cohomology can estimate the animal location with fewer neurons than Bayesian population decoding and can robustly estimate the animal location from actual noisy data.
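For contrast with the persistent-cohomology approach, the Bayesian population decoding baseline can be sketched with Poisson spiking and an invented bank of Gaussian spatial tuning curves (all parameters below are illustrative, not fitted to real grid cells):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D track and 15 cells with Gaussian spatial tuning (stand-ins for
# grid/place fields; rates and widths are arbitrary choices).
positions = np.linspace(0, 1, 200)
centers = np.linspace(0, 1, 15)
tuning = 0.5 + 20 * np.exp(-(positions[None, :] - centers[:, None]) ** 2
                           / (2 * 0.05 ** 2))         # (cells, positions)

true_idx = 120
counts = rng.poisson(tuning[:, true_idx], size=(20, 15))  # 20 time windows

# Poisson log-likelihood of each candidate position; flat prior -> MAP.
loglik = (counts[..., None] * np.log(tuning)[None]
          - tuning[None]).sum(axis=(0, 1))
decoded = positions[np.argmax(loglik)]
print(abs(decoded - positions[true_idx]))             # small decoding error
```

Persistent cohomology instead recovers circular coordinates directly from the topology of the population manifold, without assuming a parametric tuning model like the one hard-coded here.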
Affiliation(s)
- Daisuke Kawahara
- Department of Complexity Science and Engineering, University of Tokyo, Kashiwa, Chiba 277-8563, Japan
- Laboratory for Systems Neurophysiology, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
- Shigeyoshi Fujisawa
- Department of Complexity Science and Engineering, University of Tokyo, Kashiwa, Chiba 277-8563, Japan
- Laboratory for Systems Neurophysiology, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
30
Nakai S, Kitanishi T, Mizuseki K. Distinct manifold encoding of navigational information in the subiculum and hippocampus. Sci Adv 2024; 10:eadi4471. PMID: 38295173; PMCID: PMC10830115; DOI: 10.1126/sciadv.adi4471.
Abstract
The subiculum (SUB) plays a crucial role in spatial navigation and encodes navigational information differently from the hippocampal CA1 area. However, the representation of subicular population activity remains unknown. Here, we investigated the neuronal population activity recorded extracellularly from the CA1 and SUB of rats performing T-maze and open-field tasks. The trajectory of population activity in both areas was confined to low-dimensional neural manifolds homeomorphic to external space. The manifolds conveyed position, speed, and future path information with higher decoding accuracy in the SUB than in the CA1. The manifolds exhibited a common geometry across rats and between regions (CA1 and SUB), and between tasks in the SUB. During post-task ripples in slow-wave sleep, population activity represented reward locations/events more frequently in the SUB than in the CA1. Thus, the CA1 and SUB encode information distinctly into the neural manifolds that underlie navigational information processing during wakefulness and sleep.
Affiliation(s)
- Shinya Nakai
- Department of Physiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka 545-8585, Japan
- Department of Physiology, Graduate School of Medicine, Osaka City University, Osaka 545-8585, Japan
- Takuma Kitanishi
- Department of Physiology, Graduate School of Medicine, Osaka City University, Osaka 545-8585, Japan
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Meguro, Tokyo 153-8902, Japan
- Komaba Institute for Science, The University of Tokyo, Meguro, Tokyo 153-8902, Japan
- PRESTO, Japan Science and Technology Agency (JST), Kawaguchi, Saitama 332-0012, Japan
- Kenji Mizuseki
- Department of Physiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka 545-8585, Japan
- Department of Physiology, Graduate School of Medicine, Osaka City University, Osaka 545-8585, Japan
31
Muessig L, Ribeiro Rodrigues F, Bjerknes TL, Towse BW, Barry C, Burgess N, Moser EI, Moser MB, Cacucci F, Wills TJ. Environment geometry alters subiculum boundary vector cell receptive fields in adulthood and early development. Nat Commun 2024; 15:982. PMID: 38302455; PMCID: PMC10834499; DOI: 10.1038/s41467-024-45098-1.
Abstract
Boundaries to movement form a specific class of landmark information used for navigation: boundary vector cells (BVCs) are neurons which encode an animal's location as a vector displacement from boundaries. Here we characterise the prevalence and spatial tuning of subiculum BVCs in adult and developing male rats, and investigate the relationship between BVC spatial firing and boundary geometry. BVC directional tunings align with environment walls in squares, but are uniformly distributed in circles, demonstrating that environmental geometry alters BVC receptive fields. Inserted barriers uncover both excitatory and inhibitory components to BVC receptive fields, demonstrating that inhibitory inputs contribute to BVC field formation. During post-natal development, subiculum BVCs mature slowly, contrasting with the earlier maturation of boundary-responsive cells in the upstream entorhinal cortex. However, subiculum and entorhinal BVC receptive fields are altered by boundary geometry from the earliest ages tested, suggesting this is an inherent feature of the hippocampal representation of space.
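A BVC's receptive field is commonly modelled as a product of Gaussian tunings for the distance and allocentric direction to a boundary; the sketch below uses that standard form with invented parameters (it is not the paper's fitted model):

```python
import numpy as np

def bvc_rate(dist, bearing, d_pref=0.2, phi_pref=0.0,
             sigma_d=0.08, sigma_phi=0.3):
    """Illustrative BVC: firing peaks when a boundary lies at preferred
    distance d_pref and allocentric bearing phi_pref (simplified form of
    the standard BVC tuning model; parameters are arbitrary)."""
    ang = np.angle(np.exp(1j * (bearing - phi_pref)))   # wrap to [-pi, pi]
    return np.exp(-(dist - d_pref) ** 2 / (2 * sigma_d ** 2)) * \
           np.exp(-ang ** 2 / (2 * sigma_phi ** 2))

# Rate map in a unit square for a cell tuned to a wall 0.2 to the east.
xs = np.linspace(0.01, 0.99, 50)
rate = np.zeros((50, 50))
for i, y in enumerate(xs):
    for j, x in enumerate(xs):
        # distance and bearing to each of the four walls; sum their drives
        walls = [(1 - x, 0.0), (x, np.pi), (1 - y, np.pi / 2), (y, -np.pi / 2)]
        rate[i, j] = sum(bvc_rate(d, b) for d, b in walls)

# Firing is maximal along a strip 0.2 from the east wall (x near 0.8).
print(xs[np.argmax(rate.mean(axis=0))])
```

In this toy model, moving or inserting a wall (as with the barriers above) shifts the firing strip with it, which is the defining signature of boundary-vector coding.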
Affiliation(s)
- Laurenz Muessig
- Department of Cell and Developmental Biology, University College London, London, WC1E 6BT, UK
- Tale L Bjerknes
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, 7491, Norway
- Benjamin W Towse
- Institute of Cognitive Neuroscience, University College London, London, WC1N 3AZ, UK
- Caswell Barry
- Department of Cell and Developmental Biology, University College London, London, WC1E 6BT, UK
- Neil Burgess
- Institute of Cognitive Neuroscience, University College London, London, WC1N 3AZ, UK
- UCL Queen Square Institute of Neurology, University College London, London, WC1N 3BG, UK
- Edvard I Moser
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, 7491, Norway
- May-Britt Moser
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, 7491, Norway
- Francesca Cacucci
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, WC1E 6BT, UK
- Thomas J Wills
- Department of Cell and Developmental Biology, University College London, London, WC1E 6BT, UK
32
Pals M, Macke JH, Barak O. Trained recurrent neural networks develop phase-locked limit cycles in a working memory task. PLoS Comput Biol 2024; 20:e1011852. PMID: 38315736; PMCID: PMC10868787; DOI: 10.1371/journal.pcbi.1011852.
Abstract
Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or 'frame of reference'. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: What kind of neural dynamics support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked to produce an oscillation, such that the phase difference between the reference and output oscillation maintains the identity of transient stimuli. We found that networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of the attractor dynamics depends on both reference oscillation amplitude and frequency, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: One that generates an oscillation and one that implements a coupling function between the internal oscillation and external reference. In summary, by reverse engineering the dynamics and connectivity of trained RNNs, we propose a mechanism by which neural networks can harness reference oscillations for working memory. Specifically, we propose that a phase-coding network generates autonomous oscillations which it couples to an external reference oscillation in a multi-stable fashion.
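The reduced two-oscillator description admits a compact sketch: if the coupling function has k-fold symmetry, the phase difference psi between the internal and reference oscillations has k stable fixed points, one per memory. The specific coupling -K sin(k psi) below is an illustrative choice, not the function extracted from the trained networks:

```python
import numpy as np

# Phase difference between internal and reference oscillation:
# dpsi/dt = -K * sin(k * psi) has stable zeros at psi = 0, 2pi/k, 4pi/k, ...
k, K, dt = 3, 1.5, 0.01

def memory_state(psi0, steps=5000):
    psi = psi0
    for _ in range(steps):
        psi += dt * (-K * np.sin(k * psi))       # Euler step
    return int(round(psi / (2 * np.pi / k))) % k  # index of the locked offset

# Different initial phase differences settle into one of k = 3 coexisting
# phase-locked states: multistability supporting phase-coded memory.
states = {memory_state(p) for p in np.linspace(0.1, 6.2, 12)}
print(sorted(states))   # [0, 1, 2]
```

Each locked offset corresponds, in the full network, to a separate limit cycle attractor holding one stimulus identity.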
Affiliation(s)
- Matthijs Pals
- Machine Learning in Science, Excellence Cluster Machine Learning, University of Tübingen, Tübingen, Germany
- Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Jakob H. Macke
- Machine Learning in Science, Excellence Cluster Machine Learning, University of Tübingen, Tübingen, Germany
- Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Department Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Omri Barak
- Rappaport Faculty of Medicine Technion, Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Israel Institute of Technology, Haifa, Israel
33
Guidera JA, Gramling DP, Comrie AE, Joshi A, Denovellis EL, Lee KH, Zhou J, Thompson P, Hernandez J, Yorita A, Haque R, Kirst C, Frank LM. Regional specialization manifests in the reliability of neural population codes. bioRxiv 2024:2024.01.25.576941. PMID: 38328245; PMCID: PMC10849741; DOI: 10.1101/2024.01.25.576941.
Abstract
The brain has the remarkable ability to learn and guide the performance of complex tasks. Decades of lesion studies suggest that different brain regions perform specialized functions in support of complex behaviors [1-3]. Yet recent large-scale studies of neural activity reveal similar patterns of activity and encoding distributed widely throughout the brain [4-6]. How these distributed patterns of activity and encoding are compatible with regional specialization of brain function remains unclear. Two frontal brain regions, the dorsal medial prefrontal cortex (dmPFC) and orbitofrontal cortex (OFC), are a paradigm of this conundrum. In the setting of complex behaviors, the dmPFC is necessary for choosing optimal actions [2,7,8], whereas the OFC is necessary for waiting for [3,9] and learning from [2,7,9-12] the outcomes of those actions. Yet both dmPFC and OFC encode both choice- and outcome-related quantities [13-20]. Here we show that while ensembles of neurons in the dmPFC and OFC of rats encode similar elements of a cognitive task with similar patterns of activity, the two regions differ in when that coding is consistent across trials ("reliable"). In line with the known critical functions of each region, dmPFC activity is more reliable when animals are making choices and less reliable preceding outcomes, whereas OFC activity shows the opposite pattern. Our findings identify the dynamic reliability of neural population codes as a mechanism whereby different brain regions may support distinct cognitive functions despite exhibiting similar patterns of activity and encoding similar quantities.
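"Reliability" in this sense can be quantified in several ways; one simple stand-in (not the paper's estimator) is the mean pairwise correlation of population activity vectors across repeated trials, which cleanly separates a consistent code from an inconsistent one:

```python
import numpy as np

rng = np.random.default_rng(2)

def reliability(trials):
    """Mean pairwise correlation of population activity vectors across
    trials (rows = trials, columns = neurons)."""
    c = np.corrcoef(trials)                     # trial-by-trial correlations
    return c[np.triu_indices_from(c, k=1)].mean()

signal = rng.standard_normal(50)                # shared tuning pattern
low_noise = signal + 0.3 * rng.standard_normal((20, 50))
high_noise = signal + 3.0 * rng.standard_normal((20, 50))

# Same underlying pattern, very different trial-to-trial reliability:
# this is the kind of dissociation reported between dmPFC and OFC epochs.
print(reliability(low_noise) > reliability(high_noise))   # True
```

Note that both simulated populations "encode" the identical pattern on average; only the consistency of that encoding across trials differs, which is the paper's central distinction.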
Affiliation(s)
- Jennifer A. Guidera
- UCSF-UC Berkeley Graduate Program in Bioengineering, University of California, San Francisco; San Francisco, 94158, USA and University of California, Berkeley; Berkeley, 94720, USA
- Medical Scientist Training Program, University of California, San Francisco; San Francisco, 94158, USA
- Daniel P. Gramling
- Departments of Physiology and Psychiatry, University of California, San Francisco; San Francisco, 94158, USA
- Kavli Institute for Fundamental Neuroscience, University of California, San Francisco; San Francisco, 94158, USA
- Alison E. Comrie
- Departments of Physiology and Psychiatry, University of California, San Francisco; San Francisco, 94158, USA
- Kavli Institute for Fundamental Neuroscience, University of California, San Francisco; San Francisco, 94158, USA
- Abhilasha Joshi
- Departments of Physiology and Psychiatry, University of California, San Francisco; San Francisco, 94158, USA
- Kavli Institute for Fundamental Neuroscience, University of California, San Francisco; San Francisco, 94158, USA
- Howard Hughes Medical Institute, University of California, San Francisco; San Francisco, 94158, USA
- Eric L. Denovellis
- Departments of Physiology and Psychiatry, University of California, San Francisco; San Francisco, 94158, USA
- Kavli Institute for Fundamental Neuroscience, University of California, San Francisco; San Francisco, 94158, USA
- Howard Hughes Medical Institute, University of California, San Francisco; San Francisco, 94158, USA
- Kyu Hyun Lee
- Departments of Physiology and Psychiatry, University of California, San Francisco; San Francisco, 94158, USA
- Kavli Institute for Fundamental Neuroscience, University of California, San Francisco; San Francisco, 94158, USA
- Howard Hughes Medical Institute, University of California, San Francisco; San Francisco, 94158, USA
- Jenny Zhou
- Center for Micro- and Nano-Technology, Lawrence Livermore National Laboratory; Livermore, 94158, USA
- Paige Thompson
- Center for Micro- and Nano-Technology, Lawrence Livermore National Laboratory; Livermore, 94158, USA
- Jose Hernandez
- Center for Micro- and Nano-Technology, Lawrence Livermore National Laboratory; Livermore, 94158, USA
- Allison Yorita
- Center for Micro- and Nano-Technology, Lawrence Livermore National Laboratory; Livermore, 94158, USA
- Razi Haque
- Center for Micro- and Nano-Technology, Lawrence Livermore National Laboratory; Livermore, 94158, USA
- Christoph Kirst
- Kavli Institute for Fundamental Neuroscience, University of California, San Francisco; San Francisco, 94158, USA
- Department of Anatomy, University of California, San Francisco; San Francisco, 94158, USA
- Loren M. Frank
- UCSF-UC Berkeley Graduate Program in Bioengineering, University of California, San Francisco; San Francisco, 94158, USA and University of California, Berkeley; Berkeley, 94720, USA
- Departments of Physiology and Psychiatry, University of California, San Francisco; San Francisco, 94158, USA
- Kavli Institute for Fundamental Neuroscience, University of California, San Francisco; San Francisco, 94158, USA
- Howard Hughes Medical Institute, University of California, San Francisco; San Francisco, 94158, USA
34
Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024; 36:227-270. PMID: 38101328; DOI: 10.1162/neco_a_01631.
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to address these questions. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. Regarding the dynamics arising in these manifolds, it is proved they are topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can support the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
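The core claim, that low-rank connectivity creates a globally attracting low-dimensional manifold, can be seen in a rank-1 rate model. The vectors below are arbitrary, with n chosen to overlap m strongly enough (n.m = 2 > 1) that nonzero fixed points exist on the manifold:

```python
import numpy as np

rng = np.random.default_rng(3)
n_units, dt, steps = 100, 0.01, 2000

# Rank-1 connectivity J = m n^T; n is deliberately aligned with m so the
# scalar overlap n.m equals 2, putting stable fixed points at nonzero rates.
m = rng.standard_normal(n_units)
n = 2 * m / (m @ m)

x = 5 * rng.standard_normal(n_units)        # arbitrary initial state
for _ in range(steps):
    x = x + dt * (-x + m * np.tanh(n @ x))  # dx/dt = -x + m * phi(n . x)

# Everything orthogonal to m decays as e^{-t}: the state collapses onto
# the invariant 1-D manifold span{m} and settles at a nonzero point on it.
m_hat = m / np.linalg.norm(m)
off_manifold = np.linalg.norm(x - (m_hat @ x) * m_hat)
print(off_manifold)   # approximately 0
```

With rank-R connectivity the same argument confines the dynamics to an R-dimensional subspace, which is the mechanism behind the invariant manifolds discussed above.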
Affiliation(s)
- Joan Gort
- Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193, Bellaterra, Barcelona, Spain
35
Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. bioRxiv 2024:2024.01.03.573543. PMID: 38260549; PMCID: PMC10802336; DOI: 10.1101/2024.01.03.573543.
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
36
Laurent G. Mysterious ultraslow and ordered activity observed in the cortex. Nature 2024; 625:244-245. PMID: 38123849; DOI: 10.1038/d41586-023-03795-9.
37
Sebastian ER, Esparza J, M de la Prida L. Quantifying the distribution of feature values over data represented in arbitrary dimensional spaces. PLoS Comput Biol 2024; 20:e1011768. PMID: 38175854; DOI: 10.1371/journal.pcbi.1011768.
Abstract
Identifying the structured distribution (or lack thereof) of a given feature over a point cloud is a general research question. In the neuroscience field, this problem arises while investigating representations over neural manifolds (e.g., spatial coding), in the analysis of neurophysiological signals (e.g., sensory coding) or in anatomical image segmentation. We introduce the Structure Index (SI) as a directed graph-based metric to quantify the distribution of feature values projected over data in arbitrary D-dimensional spaces (defined from neurons, time stamps, pixels, genes, etc.). The SI is defined from the overlapping distribution of data points sharing similar feature values in a given neighborhood of the cloud. Using arbitrary data clouds, we show how the SI provides quantification of the degree and directionality of the local versus global organization of feature distribution. SI can be applied to both scalar and vectorial features, permitting quantification of the relative contribution of related variables. When applied to experimental studies of head-direction cells, it is able to retrieve consistent feature structure from both the high- and low-dimensional representations, and to disclose the local and global structure of the angle and speed represented in different brain regions. Finally, we provide two general-purpose examples (sound and image categorization), to illustrate the potential application to arbitrary dimensional spaces. Our method provides versatile applications in the neuroscience and data science fields.
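A stripped-down cousin of the SI conveys the idea (this is a simple neighborhood-homogeneity score, not the authors' directed-graph definition): compare feature differences within k-nearest-neighbor neighborhoods against differences across the whole cloud:

```python
import numpy as np

rng = np.random.default_rng(4)

def neighborhood_structure(points, feature, k=10):
    """Score how much more similar a point's feature value is to its k
    nearest neighbors than to the population at large. Near 0 means the
    feature is unstructured over the cloud; values toward 1 mean it is
    locally organized. (Illustrative, not the published SI.)"""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    local = np.abs(feature[nbrs] - feature[:, None]).mean()
    global_ = np.abs(feature[:, None] - feature[None, :]).mean()
    return 1 - local / global_

pts = rng.standard_normal((300, 3))
structured = pts[:, 0]                     # feature = x-coordinate
shuffled = rng.permutation(structured)     # same values, no structure

print(neighborhood_structure(pts, structured))  # clearly > 0
print(neighborhood_structure(pts, shuffled))    # near 0
```

As with the SI, the shuffle provides a natural null: identical feature values, destroyed spatial organization.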
38
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. PMID: 38082181; DOI: 10.1038/s41551-023-01106-1.
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings') achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
39
Gonzalo Cogno S, Obenhaus HA, Lautrup A, Jacobsen RI, Clopath C, Andersson SO, Donato F, Moser MB, Moser EI. Minute-scale oscillatory sequences in medial entorhinal cortex. Nature 2024; 625:338-344. [PMID: 38123682] [PMCID: PMC10781645] [DOI: 10.1038/s41586-023-06864-1] [Citation(s) in RCA: 1] [Received: 05/01/2022] [Accepted: 11/10/2023]
Abstract
The medial entorhinal cortex (MEC) hosts many of the brain's circuit elements for spatial navigation and episodic memory, operations that require neural activity to be organized across long durations of experience [1]. Whereas location is known to be encoded by spatially tuned cell types in this brain region [2,3], little is known about how the activity of entorhinal cells is tied together over time at behaviourally relevant time scales, in the second-to-minute regime. Here we show that MEC neuronal activity has the capacity to be organized into ultraslow oscillations, with periods ranging from tens of seconds to minutes. During these oscillations, the activity is further organized into periodic sequences. Oscillatory sequences manifested while mice ran at free pace on a rotating wheel in darkness, with no change in location or running direction and no scheduled rewards. The sequences involved nearly the entire cell population, and transcended epochs of immobility. Similar sequences were not observed in neighbouring parasubiculum or in visual cortex. Ultraslow oscillatory sequences in MEC may have the potential to couple neurons and circuits across extended time scales and serve as a template for new sequence formation during navigation and episodic memory formation.
Affiliation(s)
- Soledad Gonzalo Cogno
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Horst A Obenhaus
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Ane Lautrup
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- R Irene Jacobsen
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
- Sebastian O Andersson
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Flavio Donato
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Biozentrum, Universität Basel, Basel, Switzerland
- May-Britt Moser
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Edvard I Moser
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
40
Capouskova K, Zamora‐López G, Kringelbach ML, Deco G. Integration and segregation manifolds in the brain ensure cognitive flexibility during tasks and rest. Hum Brain Mapp 2023; 44:6349-6363. [PMID: 37846551] [PMCID: PMC10681658] [DOI: 10.1002/hbm.26511] [Citation(s) in RCA: 0] [Received: 05/23/2023] [Revised: 09/14/2023] [Accepted: 09/25/2023]
Abstract
Adapting to a constantly changing environment requires the human brain to flexibly switch among many demanding cognitive tasks, processing both specialized and integrated information associated with the activity in functional networks over time. In this study, we investigated the nature of the temporal alternation between segregated and integrated states in the brain during rest and six cognitive tasks using functional MRI. We employed a deep autoencoder to explore the 2D latent space associated with the segregated and integrated states. Our results show that the integrated state occupies less space in the latent space manifold compared to the segregated states. Moreover, the integrated state is characterized by lower entropy of occupancy than the segregated state, suggesting that integration plays a consolidating role, while segregation may support cognitive specialization. Comparing rest and the tasks, we found that rest exhibits higher entropy of occupancy, indicating a more random wandering of the mind compared to the expected focus during task performance. Our study demonstrates that both transient, short-lived integrated and segregated states are present during rest and task performance, with the brain flexibly switching between them, integration serving information compression and segregation serving information specialization.
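The entropy-of-occupancy idea can be sketched with a plain histogram-based Shannon entropy over a discretised 2D latent space; the binning and the synthetic "rest-like" versus "task-like" point clouds below are illustrative assumptions of ours, not the paper's estimator:

```python
import numpy as np

def occupancy_entropy(latent, n_bins=10, extent=1.0):
    """Shannon entropy (bits) of the occupancy histogram of a 2D point cloud.
    Higher entropy = the trajectory wanders over more of the latent space."""
    hist, _, _ = np.histogram2d(latent[:, 0], latent[:, 1], bins=n_bins,
                                range=[[-extent, extent], [-extent, extent]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                      # 0 * log 0 = 0 by convention
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
wandering = rng.uniform(-1, 1, size=(5000, 2))     # spread out, rest-like
focused = 0.05 * rng.standard_normal((5000, 2))    # concentrated, task-like
e_rest, e_task = occupancy_entropy(wandering), occupancy_entropy(focused)
```

A uniformly wandering cloud approaches the maximum of log2(n_bins²) bits, while a concentrated cloud stays far below it, matching the rest-versus-task contrast described above.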
Affiliation(s)
- Katerina Capouskova
- Center for Brain and Cognition, Computational Neuroscience Group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Gorka Zamora‐López
- Center for Brain and Cognition, Computational Neuroscience Group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Morten L. Kringelbach
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
41
Ohki T, Kunii N, Chao ZC. Efficient, continual, and generalized learning in the brain - neural mechanism of Mental Schema 2.0. Rev Neurosci 2023; 34:839-868. [PMID: 36960579] [DOI: 10.1515/revneuro-2022-0137] [Citation(s) in RCA: 0] [Received: 11/15/2022] [Accepted: 02/26/2023]
Abstract
There has been tremendous progress in artificial neural networks (ANNs) over the past decade; however, the gap between ANNs and the biological brain as a learning device remains large. With the goal of closing this gap, this paper reviews learning mechanisms in the brain by focusing on three important issues in ANN research: efficiency, continuity, and generalization. We first discuss how the brain utilizes a variety of self-organizing mechanisms to maximize learning efficiency, with a focus on the role of spontaneous activity of the brain in shaping synaptic connections to facilitate spatiotemporal learning and numerical processing. Then, we examine the neuronal mechanisms that enable lifelong continual learning, with a focus on memory replay during sleep and its implementation in brain-inspired ANNs. Finally, we explore how the brain generalizes learned knowledge in new situations, particularly from the mathematical generalization perspective of topology. Besides a systematic comparison of learning mechanisms between the brain and ANNs, we propose "Mental Schema 2.0," a new computational property underlying the brain's unique learning ability that can be implemented in ANNs.
Affiliation(s)
- Takefumi Ohki
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Tokyo 113-0033, Japan
- Naoto Kunii
- Department of Neurosurgery, The University of Tokyo, Tokyo 113-0033, Japan
- Zenas C Chao
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Tokyo 113-0033, Japan
42
Sebastian ER, Quintanilla JP, Sánchez-Aguilera A, Esparza J, Cid E, de la Prida LM. Topological analysis of sharp-wave ripple waveforms reveals input mechanisms behind feature variations. Nat Neurosci 2023; 26:2171-2181. [PMID: 37946048] [PMCID: PMC10689241] [DOI: 10.1038/s41593-023-01471-9] [Citation(s) in RCA: 0] [Received: 02/28/2023] [Accepted: 09/22/2023]
Abstract
The reactivation of experience-based neural activity patterns in the hippocampus is crucial for learning and memory. These reactivation patterns and their associated sharp-wave ripples (SWRs) are highly variable. However, this variability is missed by commonly used spectral methods. Here, we use topological and dimensionality reduction techniques to analyze the waveform of ripples recorded at the pyramidal layer of CA1. We show that SWR waveforms distribute along a continuum in a low-dimensional space, which conveys information about the underlying layer-specific synaptic inputs. A decoder trained in this space successfully links individual ripples with their expected sinks and sources, demonstrating how physiological mechanisms shape SWR variability. Furthermore, we found that SWR waveforms segregated differently during wakefulness and sleep before and after a series of cognitive tasks, with striking effects of novelty and learning. Our results thus highlight how the topological analysis of ripple waveforms enables a deeper physiological understanding of SWRs.
Affiliation(s)
- Alberto Sánchez-Aguilera
- Instituto Cajal, CSIC, Madrid, Spain
- Department of Physiology, Faculty of Medicine, Universidad Complutense de Madrid, Madrid, Spain
- Elena Cid
- Instituto Cajal, CSIC, Madrid, Spain
43
Esparza J, Sebastián ER, de la Prida LM. From cell types to population dynamics: Making hippocampal manifolds physiologically interpretable. Curr Opin Neurobiol 2023; 83:102800. [PMID: 37898015] [DOI: 10.1016/j.conb.2023.102800] [Citation(s) in RCA: 1] [Received: 05/25/2023] [Revised: 09/27/2023] [Accepted: 09/28/2023]
Abstract
The study of the hippocampal code is gaining momentum. While the physiological approach targets the contribution of individual cells as determined by genetic, biophysical and circuit factors, the field pushes for a population-dynamics approach that considers the representation of behavioural variables by a large number of neurons. In this alternative framework, neuronal activity is projected into low-dimensional manifolds. These manifolds can reveal the structure of population representations, but their physiological interpretation is challenging. Here, we review the recent literature and propose that integrating information regarding behavioural traits, local field potential oscillations and cell-type specificity into neural manifolds offers strategies to make them interpretable at the physiological level.
44
Schøyen V, Pettersen MB, Holzhausen K, Fyhn M, Malthe-Sørenssen A, Lepperød ME. Coherently remapping toroidal cells but not grid cells are responsible for path integration in virtual agents. iScience 2023; 26:108102. [PMID: 37867941] [PMCID: PMC10589895] [DOI: 10.1016/j.isci.2023.108102] [Citation(s) in RCA: 0] [Received: 12/15/2022] [Revised: 08/25/2023] [Accepted: 09/27/2023]
Abstract
It is widely believed that grid cells provide cues for path integration, with place cells encoding an animal's location and environmental identity. When entering a new environment, these cells remap concurrently, sparking debates about their causal relationship. Using a continuous attractor recurrent neural network, we study spatial cell dynamics in multiple environments. We investigate grid cell remapping as a function of global remapping in place-like units through random resampling of place cell centers. Dimensionality reduction techniques reveal that a subset of cells manifest a persistent torus across environments. Unexpectedly, these toroidal cells resemble band-like cells rather than high grid score units. Subsequent pruning studies reveal that toroidal cells are crucial for path integration while grid cells are not. As we extend the model to operate across many environments, we delineate its generalization boundaries, revealing challenges with modeling many environments in current models.
Affiliation(s)
- Vemund Schøyen
- Department of Biosciences, University of Oslo, Oslo 0313, Norway
- Marianne Fyhn
- Department of Biosciences, University of Oslo, Oslo 0313, Norway
- Simula Research Laboratory, Norway
- Anders Malthe-Sørenssen
- Department of Physics, University of Oslo, Oslo 0313, Norway
- Simula Research Laboratory, Norway
- Mikkel Elle Lepperød
- Department of Physics, University of Oslo, Oslo 0313, Norway
- Department of Biosciences, University of Oslo, Oslo 0313, Norway
- Simula Research Laboratory, Norway
45
Wang S, Falcone R, Richmond B, Averbeck BB. Attractor dynamics reflect decision confidence in macaque prefrontal cortex. Nat Neurosci 2023; 26:1970-1980. [PMID: 37798412] [DOI: 10.1038/s41593-023-01445-x] [Citation(s) in RCA: 4] [Received: 12/11/2022] [Accepted: 08/31/2023]
Abstract
Decisions are made with different degrees of consistency, and this consistency can be linked to the confidence that the best choice has been made. Theoretical work suggests that attractor dynamics in networks can account for choice consistency, but how this is implemented in the brain remains unclear. Here we provide evidence that the energy landscape around attractor basins in population neural activity in the prefrontal cortex reflects choice consistency. We trained two rhesus monkeys to make accept/reject decisions based on pretrained visual cues that signaled reward offers with different magnitudes and delays to reward. Monkeys made consistent decisions for very good and very bad offers, but decisions were less consistent for intermediate offers. Analysis of neural data showed that the attractor basins around patterns of activity reflecting decisions had steeper landscapes for offers that led to consistent decisions. Therefore, we provide neural evidence that energy landscapes predict decision consistency, which reflects decision confidence.
Affiliation(s)
- Siyu Wang
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Rossella Falcone
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Leo M. Davidoff Department of Neurological Surgery, Albert Einstein College of Medicine Montefiore Medical Center, Bronx, NY, USA
- Barry Richmond
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Bruno B Averbeck
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
46
Jajcay N, Hlinka J. Towards a dynamical understanding of microstate analysis of M/EEG data. Neuroimage 2023; 281:120371. [PMID: 37716592] [DOI: 10.1016/j.neuroimage.2023.120371] [Citation(s) in RCA: 0] [Received: 04/13/2023] [Revised: 09/04/2023] [Accepted: 09/08/2023]
Abstract
One of the interesting aspects of EEG data is the presence of temporally stable and spatially coherent patterns of activity, known as microstates, which have been linked to various cognitive and clinical phenomena. However, there is still no general agreement on the interpretation of microstate analysis. Various clustering algorithms have been used for microstate computation, and multiple studies suggest that the microstate time series may provide insight into the neural activity of the brain in the resting state. This study addresses two gaps in the literature. Firstly, by applying several state-of-the-art microstate algorithms to a large dataset of EEG recordings, we aim to characterise and describe various microstate algorithms. We demonstrate and discuss why the three "classically" used algorithms ((T)AAHC and modified K-Means) yield virtually the same results, while the HMM algorithm generates the most dissimilar results. Secondly, we aim to test the hypothesis that dynamical microstate properties might be, to a large extent, determined by the linear characteristics of the underlying EEG signal, in particular, by the cross-covariance and autocorrelation structure of the EEG data. To this end, we generated a Fourier transform surrogate of the EEG signal to compare microstate properties. Here, we found that these are largely similar, thus hinting that microstate properties depend to a very high degree on the linear covariance and autocorrelation structure of the underlying EEG data. Finally, we treated the EEG data as a vector autoregression process, estimated its parameters, and generated stationary, linear surrogate data from the fitted VAR model. We observed that such a linear model generates microstates highly comparable to those estimated from real EEG data, supporting the conclusion that a linear EEG model can help with the methodological and clinical interpretation of both static and dynamic human brain microstate properties.
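A minimal single-channel version of the Fourier-transform surrogate used here: randomise the phases while keeping the amplitude spectrum, which preserves the autocorrelation (by the Wiener–Khinchin theorem) but destroys any nonlinear structure. The multichannel EEG case additionally requires preserving cross-spectra; this sketch shows only the core construction on a synthetic signal:

```python
import numpy as np

def ft_surrogate(x, rng):
    """Phase-randomised (Fourier transform) surrogate of a real 1D signal:
    identical amplitude spectrum, hence identical autocorrelation,
    but randomised phase structure."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=X.shape)
    phases[0] = 0.0                 # keep the DC component real
    if x.size % 2 == 0:
        phases[-1] = 0.0            # keep the Nyquist component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

rng = np.random.default_rng(3)
t = np.arange(1024)
eeg_like = np.sin(0.05 * t) + 0.3 * rng.standard_normal(t.size)  # synthetic channel
surrogate = ft_surrogate(eeg_like, rng)
```

Comparing microstate statistics between the original and such surrogates tests whether those statistics depend on anything beyond the linear (second-order) structure.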
Affiliation(s)
- Nikola Jajcay
- Center for Advanced Studies of Brain and Consciousness, National Institute of Mental Health, Klecany, 250 67, Czech Republic; Department of Complex Systems, Institute of Computer Science, Czech Academy of Sciences, Prague, 182 07, Czech Republic.
- Jaroslav Hlinka
- Center for Advanced Studies of Brain and Consciousness, National Institute of Mental Health, Klecany, 250 67, Czech Republic; Department of Complex Systems, Institute of Computer Science, Czech Academy of Sciences, Prague, 182 07, Czech Republic.
47
Levy ERJ, Carrillo-Segura S, Park EH, Redman WT, Hurtado JR, Chung S, Fenton AA. A manifold neural population code for space in hippocampal coactivity dynamics independent of place fields. Cell Rep 2023; 42:113142. [PMID: 37742193] [PMCID: PMC10842170] [DOI: 10.1016/j.celrep.2023.113142] [Citation(s) in RCA: 4] [Received: 10/27/2021] [Revised: 06/14/2023] [Accepted: 08/30/2023]
Abstract
Hippocampus place cell discharge is temporally unreliable across seconds and days, and place fields are multimodal, suggesting an "ensemble cofiring" spatial coding hypothesis with manifold dynamics that does not require reliable spatial tuning, in contrast to hypotheses based on place field (spatial tuning) stability. We imaged mouse CA1 (cornu ammonis 1) ensembles in two environments across three weeks to evaluate these coding hypotheses. While place fields "remap," being more distinct between than within environments, coactivity relationships generally change less. Decoding location and environment from 1-s ensemble location-specific activity is effective and improves with experience. Decoding environment from cell-pair coactivity relationships is also effective and improves with experience, even after removing place tuning. Discriminating environments from 1-s ensemble coactivity relies crucially on the cells with the most anti-coactive cell-pair relationships because activity is internally organized on a low-dimensional manifold of non-linear coactivity relationships that intermittently reregisters to environments according to the anti-cofiring subpopulation activity.
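Decoding environment identity from cell-pair coactivity rather than from spatial tuning can be caricatured with a nearest-template classifier on vectors of pairwise correlations. Everything below — the mixing-matrix "environments", the session generator, and the template matching — is a synthetic simplification of ours, not the paper's analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_timebins = 20, 400

def coactivity_vector(activity):
    """Upper triangle of the cell-pair correlation matrix: one value per pair."""
    c = np.corrcoef(activity)
    return c[np.triu_indices(len(c), k=1)]

# Two 'environments' = two fixed random mixing matrices, giving distinct
# cell-pair correlation structure with no spatial tuning at all.
mix_a = rng.standard_normal((n_cells, n_cells))
mix_b = rng.standard_normal((n_cells, n_cells))

def session(mix):
    latent = rng.standard_normal((n_cells, n_timebins))
    return mix @ latent + 0.1 * rng.standard_normal((n_cells, n_timebins))

# Build one coactivity template per environment, then classify fresh sessions
template_a = coactivity_vector(session(mix_a))
template_b = coactivity_vector(session(mix_b))

def classify(activity):
    v = coactivity_vector(activity)
    sim_a = np.corrcoef(v, template_a)[0, 1]
    sim_b = np.corrcoef(v, template_b)[0, 1]
    return 'A' if sim_a > sim_b else 'B'

hits = sum(classify(session(mix_a)) == 'A' for _ in range(10)) + \
       sum(classify(session(mix_b)) == 'B' for _ in range(10))
accuracy = hits / 20.0
```

Because each environment imposes a stable correlation structure, fresh sessions match their own template far better than the other one, illustrating how environment can be read out from coactivity alone.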
Affiliation(s)
- Simón Carrillo-Segura
- Center for Neural Science, New York University, New York, NY 10003, USA; Graduate Program in Mechanical and Aerospace Engineering, Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- Eun Hye Park
- Center for Neural Science, New York University, New York, NY 10003, USA
- William Thomas Redman
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
- SueYeon Chung
- Center for Neural Science, New York University, New York, NY 10003, USA; Flatiron Institute Center for Computational Neuroscience, New York, NY 10010, USA
- André Antonio Fenton
- Center for Neural Science, New York University, New York, NY 10003, USA; Neuroscience Institute at the NYU Langone Medical Center, New York, NY 10016, USA
48
Viejo G, Levenstein D, Skromne Carrasco S, Mehrotra D, Mahallati S, Vite GR, Denny H, Sjulson L, Battaglia FP, Peyrache A. Pynapple, a toolbox for data analysis in neuroscience. eLife 2023; 12:RP85786. [PMID: 37843985] [PMCID: PMC10578930] [DOI: 10.7554/elife.85786] [Citation(s) in RCA: 0]
Abstract
Datasets collected in neuroscientific studies are of ever-growing complexity, often combining high-dimensional time series data from multiple data acquisition modalities. Handling and manipulating these various data streams in an adequate programming environment is crucial to ensure reliable analysis, and to facilitate sharing of reproducible analysis pipelines. Here, we present Pynapple, the PYthon Neural Analysis Package, a lightweight Python package designed to process a broad range of time-resolved data in systems neuroscience. The core feature of this package is a small number of versatile objects that support the manipulation of any data streams and task parameters. The package includes a set of methods to read common data formats and allows users to easily write their own. The resulting code is easy to read and write, avoids low-level data processing and other error-prone steps, and is open source. Libraries for higher-level analyses are developed within the Pynapple framework but are contained within a collaborative repository of specialized and continuously updated analysis routines. This provides flexibility while ensuring long-term stability of the core package. In conclusion, Pynapple provides a common framework for data analysis in neuroscience.
Affiliation(s)
- Guillaume Viejo
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada
- Flatiron Institute, Center for Computational Neuroscience, New York, United States
- Daniel Levenstein
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada
- MILA – Quebec AI Institute, Montreal, Canada
- Dhruv Mehrotra
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada
- Sara Mahallati
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada
- Gilberto R Vite
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada
- Henry Denny
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada
- Lucas Sjulson
- Departments of Psychiatry and Neuroscience, Albert Einstein College of Medicine, Bronx, United States
- Francesco P Battaglia
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Adrien Peyrache
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada
49
Fenton AA, Hurtado JR, Broek JAC, Park E, Mishra B. Do Place Cells Dream of Deceptive Moves in a Signaling Game? Neuroscience 2023; 529:129-147. [PMID: 37591330] [PMCID: PMC10592151] [DOI: 10.1016/j.neuroscience.2023.08.012] [Citation(s) in RCA: 0] [Received: 01/17/2023] [Revised: 07/27/2023] [Accepted: 08/06/2023]
Abstract
We consider the possibility of applying game theory to analysis and modeling of neurobiological systems. Specifically, the basic properties and features of information asymmetric signaling games are considered and discussed as having potential to explain diverse neurobiological phenomena; we focus on neuronal action potential discharge that can represent cognitive variables in memory and purposeful behavior. We begin by arguing that there is a pressing need for conceptual frameworks that can permit analysis and integration of information and explanations across many scales of biological function including gene regulation, molecular and biochemical signaling, cellular and metabolic function, neuronal population, and systems level organization to generate plausible hypotheses across these scales. Developing such integrative frameworks is crucial if we are to understand cognitive functions like learning, memory, and perception. The present work focuses on systems neuroscience organized around the connected brain regions of the entorhinal cortex and hippocampus. These areas are intensely studied in rodent subjects as model neuronal systems that undergo activity-dependent synaptic plasticity to form neuronal circuits and represent memories and spatial knowledge used for purposeful navigation. Examples of cognition-related spatial information in the observed neuronal discharge of hippocampal place cell populations and medial entorhinal head-direction cell populations are used to illustrate possible challenges to information maximization concepts. It may be natural to explain these observations using the ideas and features of information asymmetric signaling games.
Affiliation(s)
- André A Fenton
- Neurobiology of Cognition Laboratory, Center for Neural Science, New York University, New York, NY, USA; Neuroscience Institute at the NYU Langone Medical Center, New York, NY, USA
- José R Hurtado
- Neurobiology of Cognition Laboratory, Center for Neural Science, New York University, New York, NY, USA
- Jantine A C Broek
- Departments of Computer Science and Mathematics, Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- EunHye Park
- Neurobiology of Cognition Laboratory, Center for Neural Science, New York University, New York, NY, USA
- Bud Mishra
- Departments of Computer Science and Mathematics, Courant Institute of Mathematical Sciences, New York University, New York, NY, USA; Department of Cell Biology, NYU Langone Medical Center, New York, NY, USA; Simon Center for Quantitative Biology, Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
50
De A, Chaudhuri R. Common population codes produce extremely nonlinear neural manifolds. Proc Natl Acad Sci U S A 2023; 120:e2305853120. [PMID: 37733742] [PMCID: PMC10523500] [DOI: 10.1073/pnas.2305853120] [Citation(s) in RCA: 0] [Received: 04/11/2023] [Accepted: 08/03/2023]
Abstract
Populations of neurons represent sensory, motor, and cognitive variables via patterns of activity distributed across the population. The size of the population used to encode a variable is typically much greater than the dimension of the variable itself, and thus, the corresponding neural population activity occupies lower-dimensional subsets of the full set of possible activity states. Given population activity data with such lower-dimensional structure, a fundamental question asks how close the low-dimensional data lie to a linear subspace. The linearity or nonlinearity of the low-dimensional structure reflects important computational features of the encoding, such as robustness and generalizability. Moreover, identifying such linear structure underlies common data analysis methods such as Principal Component Analysis (PCA). Here, we show that for data drawn from many common population codes the resulting point clouds and manifolds are exceedingly nonlinear, with the dimension of the best-fitting linear subspace growing at least exponentially with the true dimension of the data. Consequently, linear methods like PCA fail dramatically at identifying the true underlying structure, even in the limit of arbitrarily many data points and no noise.
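The core claim is easy to reproduce in a few lines: encode a one-dimensional circular variable with narrow tuning curves and count how many principal components PCA needs to explain the data, even though the intrinsic dimension is 1. This is a minimal demonstration in the spirit of the paper; the parameter values are our own:

```python
import numpy as np

rng = np.random.default_rng(5)
n_neurons, n_stimuli = 100, 500

# A 1D circular variable encoded by narrow von Mises tuning curves
theta = rng.uniform(0.0, 2.0 * np.pi, n_stimuli)
prefs = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)
kappa = 20.0                                   # narrow tuning width
responses = np.exp(kappa * (np.cos(theta[:, None] - prefs[None, :]) - 1.0))

# PCA via SVD of the centred stimulus-by-neuron matrix
centred = responses - responses.mean(axis=0)
singular_values = np.linalg.svd(centred, compute_uv=False)
explained = singular_values**2 / (singular_values**2).sum()
n_pcs_95 = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
# The data live on a 1D ring, yet many PCs are needed for 95% variance;
# sharpening the tuning (larger kappa) inflates the count further.
```

The count grows with the tuning sharpness, illustrating why the best-fitting linear subspace can be far higher-dimensional than the variable being encoded.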
Affiliation(s)
- Anandita De
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Physics, University of California, Davis, CA 95616
- Rishidev Chaudhuri
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, CA 95616
- Department of Mathematics, University of California, Davis, CA 95616