1. Senk J, Hagen E, van Albada SJ, Diesmann M. Reconciliation of weak pairwise spike-train correlations and highly coherent local field potentials across space. Cereb Cortex 2024; 34:bhae405. PMID: 39462814; PMCID: PMC11513197; DOI: 10.1093/cercor/bhae405.
Abstract
Multi-electrode arrays covering several square millimeters of neural tissue provide simultaneous access to population signals such as extracellular potentials and spiking activity of one hundred or more individual neurons. The interpretation of the recorded data calls for multiscale computational models with corresponding spatial dimensions and signal predictions. Multi-layer spiking neuron network models of local cortical circuits covering about $1\,{\text{mm}^{2}}$ have been developed, integrating experimentally obtained neuron-type-specific connectivity data and reproducing features of observed in-vivo spiking statistics. Local field potentials can be computed from the simulated spiking activity. We here extend a local network and local field potential model to an area of $4\times 4\,{\text{mm}^{2}}$, preserving the neuron density and introducing distance-dependent connection probabilities and conduction delays. We find that the upscaling procedure preserves the overall spiking statistics of the original model and reproduces asynchronous irregular spiking across populations and weak pairwise spike-train correlations in agreement with experimental recordings from sensory cortex. Also compatible with experimental observations, the correlation of local field potential signals is strong and decays over a distance of several hundred micrometers. Enhanced spatial coherence in the low-gamma band around $50\,\text{Hz}$ may explain the recent report of an apparent band-pass filter effect in the spatial reach of the local field potential.
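The upscaling procedure described above hinges on two ingredients: connection probabilities that decay with cortical distance and conduction delays that grow with it. The sketch below illustrates this kind of rule on a 4 x 4 mm² sheet; the Gaussian profile and all parameter values are assumptions chosen for illustration, not those of the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative upscaling rule: neurons on a 4 x 4 mm^2 sheet, connected with a
# distance-dependent probability and linearly distance-dependent delays.
# All parameters below are assumed values, not those of the Senk et al. model.
L = 4.0                      # side length of the sheet (mm)
n = 2000                     # number of neurons in this toy sketch
sigma = 0.3                  # spatial decay constant of connectivity (mm)
p0 = 0.1                     # zero-distance connection probability
d0, v = 0.5, 0.3             # base delay (ms) and conduction speed (mm/ms)

pos = rng.uniform(0.0, L, size=(n, 2))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

p_conn = p0 * np.exp(-dist**2 / (2.0 * sigma**2))      # Gaussian decay with distance
conn = rng.random((n, n)) < p_conn
np.fill_diagonal(conn, False)                          # no self-connections

delays = d0 + dist / v                                 # linear distance dependence
print("connections:", conn.sum(), "mean delay (ms):", round(delays[conn].mean(), 2))
```

In a full simulation the same rule would be applied per pre/postsynaptic population pair with population-specific parameters; here a single profile suffices to show the construction.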
Affiliation(s)
- Johanna Senk
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Wilhelm-Johnen-Str., 52428 Jülich, Germany
- Sussex AI, School of Engineering and Informatics, University of Sussex, Chichester, Falmer, Brighton BN1 9QJ, United Kingdom
- Espen Hagen
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Wilhelm-Johnen-Str., 52428 Jülich, Germany
- Centre for Precision Psychiatry, Institute of Clinical Medicine, University of Oslo, and Division of Mental Health and Addiction, Oslo University Hospital, Ullevål Hospital, 0424 Oslo, Norway
- Sacha J van Albada
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Wilhelm-Johnen-Str., 52428 Jülich, Germany
- Institute of Zoology, University of Cologne, Zülpicher Str., 50674 Cologne, Germany
- Markus Diesmann
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Wilhelm-Johnen-Str., 52428 Jülich, Germany
- JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Wilhelm-Johnen-Str., 52428 Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Pauwelsstr., 52074 Aachen, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Otto-Blumenthal-Str., 52074 Aachen, Germany
2. Powell NJ, Hein B, Kong D, Elpelt J, Mulholland HN, Kaschube M, Smith GB. Developmental maturation of millimeter-scale functional networks across brain areas. bioRxiv 2024:2024.05.28.595371. PMID: 38853883; PMCID: PMC11160666; DOI: 10.1101/2024.05.28.595371.
Abstract
Interacting with the environment to process sensory information, generate perceptions, and shape behavior engages neural networks in brain areas with highly varied representations, ranging from unimodal sensory cortices to higher-order association areas. Recent work suggests a much greater degree of commonality across areas, with distributed and modular networks present in both sensory and non-sensory areas during early development. However, it is currently unknown whether this initially common modular structure undergoes an equally common developmental trajectory, or whether such a modular functional organization persists in some areas, such as primary visual cortex, but not others. Here we examine the development of network organization across diverse cortical regions in ferrets of both sexes using in vivo widefield calcium imaging of spontaneous activity. We find that all regions examined, including both primary sensory cortices (visual, auditory, and somatosensory: V1, A1, and S1, respectively) and higher order association areas (prefrontal and posterior parietal cortices), exhibit a largely similar pattern of changes over an approximately 3 week developmental period spanning eye opening and the transition to predominantly externally-driven sensory activity. We find that both a modular functional organization and millimeter-scale correlated networks remain present across all cortical areas examined. These networks weakened over development in most cortical areas, but strengthened in V1. Overall, the conserved maintenance of modular organization across different cortical areas suggests a common pathway of network refinement, and suggests that a modular organization, known to encode functional representations in visual areas, may be similarly engaged in highly diverse brain areas.
Affiliation(s)
- Nathaniel J Powell
- Optical Imaging and Brain Sciences Medical Discovery Team, Department of Neuroscience, University of Minnesota, Minneapolis, MN, USA
- Bettina Hein
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Deyue Kong
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Department of Computer Science and Mathematics, Goethe University, Frankfurt, Germany
- International Max Planck Research School for Neural Circuits, Frankfurt, Germany
- Jonas Elpelt
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Department of Computer Science and Mathematics, Goethe University, Frankfurt, Germany
- Haleigh N Mulholland
- Optical Imaging and Brain Sciences Medical Discovery Team, Department of Neuroscience, University of Minnesota, Minneapolis, MN, USA
- Matthias Kaschube
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Department of Computer Science and Mathematics, Goethe University, Frankfurt, Germany
- Gordon B Smith
- Optical Imaging and Brain Sciences Medical Discovery Team, Department of Neuroscience, University of Minnesota, Minneapolis, MN, USA
3. Fortunato C, Bennasar-Vázquez J, Park J, Chang JC, Miller LE, Dudman JT, Perich MG, Gallego JA. Nonlinear manifolds underlie neural population activity during behaviour. bioRxiv 2024:2023.07.18.549575. PMID: 37503015; PMCID: PMC10370078; DOI: 10.1101/2023.07.18.549575.
Abstract
There is rich variety in the activity of single neurons recorded during behaviour. Yet, these diverse single neuron responses can be well described by relatively few patterns of neural co-modulation. The study of such low-dimensional structure of neural population activity has provided important insights into how the brain generates behaviour. Virtually all of these studies have used linear dimensionality reduction techniques to estimate these population-wide co-modulation patterns, constraining them to a flat "neural manifold". Here, we hypothesised that since neurons have nonlinear responses and make thousands of distributed and recurrent connections that likely amplify such nonlinearities, neural manifolds should be intrinsically nonlinear. Combining neural population recordings from monkey, mouse, and human motor cortex, and mouse striatum, we show that: 1) neural manifolds are intrinsically nonlinear; 2) their nonlinearity becomes more evident during complex tasks that require more varied activity patterns; and 3) manifold nonlinearity varies across architecturally distinct brain regions. Simulations using recurrent neural network models confirmed the proposed relationship between circuit connectivity and manifold nonlinearity, including the differences across architecturally distinct regions. Thus, neural manifolds underlying the generation of behaviour are inherently nonlinear, and properly accounting for such nonlinearities will be critical as neuroscientists move towards studying numerous brain regions involved in increasingly complex and naturalistic behaviours.
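A minimal way to see the flat-versus-curved distinction drawn here is to compare how much of the variance a single linear dimension captures with how well a single nonlinear embedding coordinate reconstructs data generated from a curved latent manifold. The toy sketch below uses PCA and scikit-learn's Isomap on synthetic data; it is not the authors' analysis pipeline, and the generative model and polynomial readout are assumptions made only for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# Synthetic "population activity": one latent variable mapped nonlinearly into
# a 20-dimensional rate space plus noise, so the true manifold is a 1-D curve.
t = rng.uniform(-1.0, 1.0, size=1000)
curve = np.column_stack([t, t**2, np.sin(3.0 * t)])   # curved 1-D manifold in 3-D
mixing = rng.normal(size=(3, 20))
X = curve @ mixing + 0.05 * rng.normal(size=(1000, 20))

# Flat (linear) manifold: variance captured by a single principal component.
pca = PCA().fit(X)
print("variance explained by 1 linear dimension:", round(pca.explained_variance_ratio_[0], 3))

# Nonlinear manifold: a 1-D Isomap embedding, scored by how well a smooth
# (polynomial) function of its single coordinate reconstructs the data.
z = Isomap(n_components=1, n_neighbors=10).fit_transform(X)[:, 0]
basis = np.column_stack([z**k for k in range(6)])     # quintic readout of the coordinate
coef, *_ = np.linalg.lstsq(basis, X, rcond=None)
r2 = 1.0 - ((X - basis @ coef) ** 2).sum() / ((X - X.mean(0)) ** 2).sum()
print("variance explained by 1 nonlinear dimension:", round(r2, 3))
```

On data like this the single nonlinear coordinate accounts for far more variance than the single linear one, which is the qualitative signature of an intrinsically nonlinear manifold.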
Affiliation(s)
- Cátia Fortunato
- Department of Bioengineering, Imperial College London, London UK
- Junchol Park
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn VA, USA
- Joanna C. Chang
- Department of Bioengineering, Imperial College London, London UK
- Lee E. Miller
- Department of Neurosciences, Northwestern University, Chicago IL, USA
- Department of Biomedical Engineering, Northwestern University, Chicago IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago IL, USA, and Shirley Ryan Ability Lab, Chicago, IL, USA
- Joshua T. Dudman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn VA, USA
- Matthew G. Perich
- Department of Neurosciences, Faculté de médecine, Université de Montréal, Montréal, Québec, Canada
- Québec Artificial Intelligence Institute (MILA), Montréal, Québec, Canada
- Juan A. Gallego
- Department of Bioengineering, Imperial College London, London UK
4. Stroh A, Schweiger S, Ramirez JM, Tüscher O. The selfish network: how the brain preserves behavioral function through shifts in neuronal network state. Trends Neurosci 2024; 47:246-258. PMID: 38485625; DOI: 10.1016/j.tins.2024.02.005.
Abstract
Neuronal networks possess the ability to regulate their activity states in response to disruptions. How and when neuronal networks turn from physiological into pathological states, leading to the manifestation of neuropsychiatric disorders, remains largely unknown. Here, we propose that neuronal networks intrinsically maintain network stability even at the cost of neuronal loss. Despite the new stable state being potentially maladaptive, neural networks may not reverse back to states associated with better long-term outcomes. These maladaptive states are often associated with hyperactive neurons, marking the starting point for activity-dependent neurodegeneration. Transitions between network states may occur rapidly, and in discrete steps rather than continuously, particularly in neurodegenerative disorders. The self-stabilizing, metastable, and noncontinuous characteristics of these network states can be mathematically described as attractors. Maladaptive attractors may represent a distinct pathophysiological entity that could serve as a target for new therapies and for fostering resilience.
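The claim that network states behave as attractors, with abrupt rather than gradual transitions, can be illustrated with a deliberately minimal bistable rate model: two stable activity levels and a transient perturbation that switches the system between them. This caricature is not the authors' model; every parameter below is an assumption chosen only to make the bistability visible.

```python
import numpy as np

def f(x):
    """Sigmoidal population gain with threshold at x = 3."""
    return 1.0 / (1.0 + np.exp(-(x - 3.0)))

# Minimal bistable rate unit: dr/dt = -r + f(w*r + I(t)).
# With strong recurrence w, low- and high-activity states are both attractors;
# a brief input pulse switches the network discretely from one to the other.
dt, T, w = 0.001, 10.0, 6.0
time = np.arange(0.0, T, dt)
I = np.zeros_like(time)
I[(time > 4.0) & (time < 5.0)] = 4.0        # transient perturbation

r = np.zeros_like(time)
for k in range(1, len(time)):
    r[k] = r[k - 1] + dt * (-r[k - 1] + f(w * r[k - 1] + I[k - 1]))

print("state before pulse:", round(r[int(3.9 / dt)], 3))   # low attractor
print("state after pulse: ", round(r[int(9.9 / dt)], 3))   # high attractor persists
```

The point of the toy model is that once the perturbation has pushed activity past the unstable middle state, the network does not return to its previous state even though the input is gone, mirroring the self-stabilizing, potentially maladaptive states discussed above.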
Affiliation(s)
- Albrecht Stroh
- Leibniz Institute for Resilience Research, Mainz, Germany; Institute of Pathophysiology, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany.
- Susann Schweiger
- Leibniz Institute for Resilience Research, Mainz, Germany; Institute of Human Genetics, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany; Institute of Molecular Biology (IMB), Mainz, Germany
- Jan-Marino Ramirez
- Center for Integrative Brain Research at the Seattle Children's Research Institute, University of Washington, Seattle, USA
- Oliver Tüscher
- Leibniz Institute for Resilience Research, Mainz, Germany; Institute of Molecular Biology (IMB), Mainz, Germany; Department of Psychiatry and Psychotherapy, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany.
5. Powell NJ, Hein B, Kong D, Elpelt J, Mulholland HN, Kaschube M, Smith GB. Common modular architecture across diverse cortical areas in early development. Proc Natl Acad Sci U S A 2024; 121:e2313743121. PMID: 38446851; PMCID: PMC10945769; DOI: 10.1073/pnas.2313743121.
Abstract
In order to deal with a complex environment, animals form a diverse range of neural representations that vary across cortical areas, ranging from largely unimodal sensory input to higher-order representations of goals, outcomes, and motivation. The developmental origin of this diversity is currently unclear, as representations could arise through processes that are already area-specific from the earliest developmental stages or alternatively, they could emerge from an initially common functional organization shared across areas. Here, we use spontaneous activity recorded with two-photon and widefield calcium imaging to reveal the functional organization across the early developing cortex in ferrets, a species with a well-characterized columnar organization and modular structure of spontaneous activity in the visual cortex. We find that in animals 7 to 14 d prior to eye-opening and ear canal opening, spontaneous activity in both sensory areas (auditory and somatosensory cortex, A1 and S1, respectively), and association areas (posterior parietal and prefrontal cortex, PPC and PFC, respectively) showed an organized and modular structure that is highly similar to the organization in V1. In all cortical areas, this modular activity was distributed across the cortical surface, forming functional networks that exhibit millimeter-scale correlations. Moreover, this modular structure was evident in highly coherent spontaneous activity at the cellular level, with strong correlations among local populations of neurons apparent in all cortical areas examined. Together, our results demonstrate a common distributed and modular organization across the cortex during early development, suggesting that diverse cortical representations develop initially according to similar design principles.
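The millimeter-scale correlation structure reported here is typically summarized as a correlation-versus-distance profile computed from widefield frames. The sketch below shows that computation on surrogate data (smooth random spatial modes standing in for calcium frames); the pixel size and all other parameters are assumptions, and this is not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 300, 40, 40
um_per_px = 50.0                                    # assumed pixel size

# Surrogate spontaneous activity: a few smooth spatial modes with random time
# courses, standing in for real widefield calcium frames of shape (T, H, W).
yy, xx = np.mgrid[0:H, 0:W]
centers = rng.uniform(0, H, size=(8, 2))
modes = np.stack([np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 8.0 ** 2))
                  for cy, cx in centers])
frames = np.tensordot(rng.normal(size=(T, 8)), modes, axes=1)
frames += 0.1 * rng.normal(size=frames.shape)

# Pixel-by-pixel correlation matrix of the standardized time series.
X = frames.reshape(T, -1)
X = (X - X.mean(0)) / X.std(0)
C = (X.T @ X) / T

# Average correlation as a function of cortical distance between pixel pairs.
coords = np.column_stack([yy.ravel(), xx.ravel()]) * um_per_px
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
bins = np.arange(0.0, 2000.0, 250.0)
profile = [C[(d >= lo) & (d < hi)].mean() for lo, hi in zip(bins[:-1], bins[1:])]
print([round(p, 2) for p in profile])
```

The resulting profile decays over several hundred micrometers for these surrogate modes; on real data the same summary is what quantifies "millimeter-scale" correlated networks.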
Affiliation(s)
- Nathaniel J. Powell
- Optical Imaging and Brain Sciences Medical Discovery Team, Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455
- Bettina Hein
- Center for Theoretical Neuroscience, Zuckerman Institute, Columbia University, New York, NY 10027
- Deyue Kong
- Frankfurt Institute for Advanced Studies, Frankfurt am Main 60438, Germany
- Department of Computer Science and Mathematics, Goethe University, Frankfurt am Main 60629, Germany
- International Max Planck Research School for Neural Circuits, Frankfurt am Main 60438, Germany
- Jonas Elpelt
- Frankfurt Institute for Advanced Studies, Frankfurt am Main 60438, Germany
- Department of Computer Science and Mathematics, Goethe University, Frankfurt am Main 60629, Germany
- Haleigh N. Mulholland
- Optical Imaging and Brain Sciences Medical Discovery Team, Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455
- Matthias Kaschube
- Frankfurt Institute for Advanced Studies, Frankfurt am Main 60438, Germany
- Department of Computer Science and Mathematics, Goethe University, Frankfurt am Main 60629, Germany
- Gordon B. Smith
- Optical Imaging and Brain Sciences Medical Discovery Team, Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455
6. Shi YL, Zeraati R, Levina A, Engel TA. Spatial and temporal correlations in neural networks with structured connectivity. Phys Rev Res 2023; 5:013005. PMID: 38938692; PMCID: PMC11210526; DOI: 10.1103/physrevresearch.5.013005.
Abstract
Correlated fluctuations in the activity of neural populations reflect the network's dynamics and connectivity. The temporal and spatial dimensions of neural correlations are interdependent. However, prior theoretical work mainly analyzed correlations in either spatial or temporal domains, oblivious to their interplay. We show that the network dynamics and connectivity jointly define the spatiotemporal profile of neural correlations. We derive analytical expressions for pairwise correlations in networks of binary units with spatially arranged connectivity in one and two dimensions. We find that spatial interactions among units generate multiple timescales in auto- and cross-correlations. Each timescale is associated with fluctuations at a particular spatial frequency, making a hierarchical contribution to the correlations. External inputs can modulate the correlation timescales when spatial interactions are nonlinear, and the modulation effect depends on the operating regime of network dynamics. These theoretical results open new ways to relate connectivity and dynamics in cortical networks via measurements of spatiotemporal neural correlations.
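A toy numerical counterpart of the class of models analyzed here is a ring of binary units with local coupling and asynchronous Glauber-style updates, from which autocorrelations at several lags can be estimated; spatial interactions are what give the decay its slow, multi-timescale character. The sketch below is a schematic stand-in, not the paper's exact model or parameterization.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ring of N binary (+/-1) units with nearest-neighbour coupling J, updated one
# unit at a time with Glauber dynamics (toy stand-in for the spatially
# structured binary networks treated analytically in the paper).
N, J, steps = 200, 1.2, 200_000
s = rng.choice([-1, 1], size=N)
trace = np.empty(steps)

for t in range(steps):
    i = rng.integers(N)
    h = J * (s[(i - 1) % N] + s[(i + 1) % N])          # field from the two neighbours
    s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * h)) else -1
    trace[t] = s[0]                                    # record one unit

# Autocorrelation of the recorded unit at a few lags (1 sweep = N updates).
x = trace - trace.mean()
lags = np.arange(0, 20) * N
ac = np.array([np.mean(x[: len(x) - L] * x[L:]) if L else np.mean(x * x) for L in lags])
print(np.round(ac / ac[0], 3))
```

Sweeping the coupling strength or adding an external drive in this toy model changes how quickly the autocorrelation decays, the kind of modulation the paper derives analytically.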
Affiliation(s)
- Yan-Liang Shi
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, USA
- Roxana Zeraati
- International Max Planck Research School for the Mechanisms of Mental Function and Dysfunction, University of Tübingen, Tübingen, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Anna Levina
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Computer Science, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience Tübingen, Tübingen, Germany
- Tatiana A Engel
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, USA
7. Evaluating the statistical similarity of neural network activity and connectivity via eigenvector angles. Biosystems 2023; 223:104813. PMID: 36460172; DOI: 10.1016/j.biosystems.2022.104813.
Abstract
Neural systems are networks, and strategic comparisons between multiple networks are a prevalent task in many research scenarios. In this study, we construct a statistical test for the comparison of matrices representing pairwise aspects of neural networks, in particular, the correlation between spiking activity and connectivity. The "eigenangle test" quantifies the similarity of two matrices by the angles between their ranked eigenvectors. We calibrate the behavior of the test for use with correlation matrices using stochastic models of correlated spiking activity and demonstrate how it compares to classical two-sample tests, such as the Kolmogorov-Smirnov distance, in the sense that it is also able to evaluate structural aspects of pairwise measures. Furthermore, the principle of the eigenangle test can be applied to compare the similarity of adjacency matrices of certain types of networks. Thus, the approach can be used to quantitatively explore the relationship between connectivity and activity with the same metric. By applying the eigenangle test to the comparison of connectivity matrices and correlation matrices of a random balanced network model before and after a specific synaptic rewiring intervention, we gauge the influence of connectivity features on the correlated activity. Potential applications of the eigenangle test include simulation experiments, model validation, and data analysis.
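The core quantity of the eigenangle test, the angles between rank-matched eigenvectors of two matrices, is straightforward to compute. The sketch below performs only that step on two correlation matrices estimated from independent draws of the same synthetic generative model; the eigenvalue weighting and calibrated null distribution that turn this into a statistical test are omitted, so this illustrates the principle rather than reproducing the published test.

```python
import numpy as np

def eigenvector_angles(A, B):
    """Angles (degrees) between rank-matched eigenvectors of two symmetric matrices."""
    _, va = np.linalg.eigh(A)
    _, vb = np.linalg.eigh(B)
    va, vb = va[:, ::-1], vb[:, ::-1]                 # sort by decreasing eigenvalue
    cos = np.abs(np.sum(va * vb, axis=0))             # |dot| removes sign ambiguity
    return np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))

# Two correlation matrices from independent samples of the same low-rank
# process: leading eigenvectors should align (small angles), while the
# noise-dominated trailing ones will not.
rng = np.random.default_rng(3)
n, n_samples = 30, 5000
modes = rng.normal(size=(n, 3))

def sample_corr():
    latents = rng.normal(size=(n_samples, 3)) * np.array([3.0, 2.0, 1.0])
    X = latents @ modes.T + 0.5 * rng.normal(size=(n_samples, n))
    return np.corrcoef(X, rowvar=False)

angles = eigenvector_angles(sample_corr(), sample_corr())
print("leading:", np.round(angles[:3], 1), " trailing:", np.round(angles[-3:], 1))
```

The same angle computation can be applied to adjacency matrices, which is how the paper links connectivity and activity through a single metric.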
8. Hopkins M, Fil J, Jones EG, Furber S. BitBrain and Sparse Binary Coincidence (SBC) memories: Fast, robust learning and inference for neuromorphic architectures. Front Neuroinform 2023; 17:1125844. PMID: 37025552; PMCID: PMC10071999; DOI: 10.3389/fninf.2023.1125844.
Abstract
We present an innovative working mechanism (the SBC memory) and surrounding infrastructure (BitBrain) based upon a novel synthesis of ideas from sparse coding, computational neuroscience and information theory that enables fast and adaptive learning and accurate, robust inference. The mechanism is designed to be implemented efficiently on current and future neuromorphic devices as well as on more conventional CPU and memory architectures. An example implementation on the SpiNNaker neuromorphic platform has been developed and initial results are presented. The SBC memory stores coincidences between features detected in class examples in a training set, and infers the class of a previously unseen test example by identifying the class with which it shares the highest number of feature coincidences. A number of SBC memories may be combined in a BitBrain to increase the diversity of the contributing feature coincidences. The resulting inference mechanism is shown to have excellent classification performance on benchmarks such as MNIST and EMNIST, achieving classification accuracy with single-pass learning approaching that of state-of-the-art deep networks with much larger tuneable parameter spaces and much higher training costs. It can also be made very robust to noise. BitBrain is designed to be very efficient in training and inference on both conventional and neuromorphic architectures. It provides a unique combination of single-pass, single-shot and continuous supervised learning; following a very simple unsupervised phase. Accurate classification inference that is very robust against imperfect inputs has been demonstrated. These contributions make it uniquely well-suited for edge and IoT applications.
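The inference rule sketched in the abstract, storing feature coincidences per class during training and classifying by the largest coincidence overlap, can be expressed compactly. The toy implementation below uses dense boolean matrices and random sparse patterns; it is not the BitBrain/SpiNNaker implementation, and all names and parameters are hypothetical stand-ins.

```python
import numpy as np

class SBCMemory:
    """Toy sparse binary coincidence memory (simplified sketch, not BitBrain).

    For each class, remember which pairs of binary features have co-occurred
    in training examples; classify a test example by the class whose stored
    coincidences overlap most with the test example's active feature pairs.
    """

    def __init__(self, n_features, n_classes):
        self.coinc = np.zeros((n_classes, n_features, n_features), dtype=bool)

    def train(self, x, label):
        active = np.flatnonzero(x)
        self.coinc[label][np.ix_(active, active)] = True    # store coincidences

    def infer(self, x):
        active = np.flatnonzero(x)
        scores = self.coinc[:, active][:, :, active].sum(axis=(1, 2))
        return int(np.argmax(scores))

# Tiny usage example with random sparse binary patterns per class.
rng = np.random.default_rng(4)
n_features, n_classes = 256, 4
prototypes = rng.random((n_classes, n_features)) < 0.1       # class templates

mem = SBCMemory(n_features, n_classes)
for label in range(n_classes):
    for _ in range(20):                                      # noisy, single-pass training
        x = prototypes[label] & (rng.random(n_features) < 0.9)
        mem.train(x, label)

test = prototypes[2] & (rng.random(n_features) < 0.9)
print("predicted class:", mem.infer(test))                   # expected: 2
```

The published system replaces the dense boolean cube with sparse, address-decoder-based structures suited to neuromorphic hardware, but the single-pass train/infer logic follows the same coincidence-counting idea.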