1
Savtchenko LP, Rusakov DA. Equal levels of pre- and postsynaptic potentiation produce unequal outcomes. Philos Trans R Soc Lond B Biol Sci 2024; 379:20230235. PMID: 38853561. DOI: 10.1098/rstb.2023.0235.
Abstract
What proportion of the long-term potentiation (LTP) expressed in the bulk of excitatory synapses is postsynaptic, and what proportion presynaptic, remains debatable. To understand better the possible impact of either LTP form, we explored a realistic model of a CA1 pyramidal cell equipped with known membrane mechanisms and multiple, stochastic excitatory axo-spinous synapses. Our simulations were designed to establish an input-output transfer function: the dependence of the average frequency of postsynaptic spiking on the frequency of presynaptic action potentials triggering probabilistic synaptic discharges. We found that, within the typical physiological range, potentiation of the postsynaptic current results in a greater overall output than an equivalent increase in presynaptic release probability. This difference grows stronger at lower input frequencies and lower release probabilities. Simulations with a non-hierarchical circular network of principal neurons indicated that equal increases in either synaptic fidelity or synaptic strength of individual connections also produce distinct changes in network activity, although the network phenomenology is likely to be complex. These observations should help to interpret the machinery of LTP phenomena documented in situ. This article is part of a discussion meeting issue 'Long-term potentiation: 50 years on'.
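The asymmetry between pre- and postsynaptic potentiation can be illustrated with a minimal threshold-crossing toy model (not the authors' detailed CA1 cell model; synapse count, threshold, and the 1.5x potentiation factor below are illustrative assumptions). With a spiking threshold above the mean input, scaling quantal amplitude inflates input fluctuations more than an equal scaling of release probability, so it drives more threshold crossings:

```python
import numpy as np

rng = np.random.default_rng(0)

def output_rate(p, q, n_syn=50, n_trials=20000, threshold=18.0):
    """Fraction of trials in which summed quantal input crosses threshold.
    Each of n_syn synapses releases independently with probability p and
    contributes an EPSP of amplitude q (arbitrary units)."""
    releases = rng.random((n_trials, n_syn)) < p
    total = releases.sum(axis=1) * q
    return float((total > threshold).mean())

base = output_rate(p=0.2, q=1.0)   # low release probability, unit amplitude
pre  = output_rate(p=0.3, q=1.0)   # 1.5x release probability ("presynaptic LTP")
post = output_rate(p=0.2, q=1.5)   # 1.5x quantal amplitude ("postsynaptic LTP")
```

Both manipulations give the same mean input, yet `post` exceeds `pre` in this sparse-input regime, echoing the qualitative direction of the abstract's finding.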
Affiliation(s)
- Leonid P Savtchenko
- UCL Queen Square Institute of Neurology, University College London, London WC1N 3BG, UK
- Dmitri A Rusakov
- UCL Queen Square Institute of Neurology, University College London, London WC1N 3BG, UK
2
Tang D, Zylberberg J, Jia X, Choi H. Stimulus type shapes the topology of cellular functional networks in mouse visual cortex. Nat Commun 2024; 15:5753. PMID: 38982078. PMCID: PMC11233648. DOI: 10.1038/s41467-024-49704-0.
Abstract
On the timescale of sensory processing, neuronal networks have relatively fixed anatomical connectivity, while functional interactions between neurons can vary depending on the ongoing activity of the neurons within the network. We thus hypothesized that different types of stimuli could lead those networks to display stimulus-dependent functional connectivity patterns. To test this hypothesis, we analyzed single-cell resolution electrophysiological data from the Allen Institute, with simultaneous recordings of stimulus-evoked activity from neurons across 6 different regions of mouse visual cortex. Comparing the functional connectivity patterns during different stimulus types, we made several nontrivial observations: (1) while the frequencies of different functional motifs were preserved across stimuli, the identities of the neurons within those motifs changed; (2) the degree to which functional modules are contained within a single brain region increases with stimulus complexity. Altogether, our work reveals unexpected stimulus-dependence to the way groups of neurons interact to process incoming sensory information.
Affiliation(s)
- Disheng Tang
- School of Life Sciences, Tsinghua University, Beijing 100084, PR China
- Quantitative Biosciences Program, Georgia Institute of Technology, Atlanta, GA 30332, USA
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, PR China
- Joel Zylberberg
- Department of Physics and Astronomy, and Centre for Vision Research, York University, Toronto, ON M3J 1P3, Canada
- Learning in Machines and Brains Program, CIFAR, Toronto, ON M5G 1M1, Canada
- Xiaoxuan Jia
- School of Life Sciences, Tsinghua University, Beijing 100084, PR China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, PR China
- Tsinghua-Peking Center for Life Sciences, Tsinghua University, Beijing 100084, PR China
- Hannah Choi
- Quantitative Biosciences Program, Georgia Institute of Technology, Atlanta, GA 30332, USA
- School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, USA
3
Moore JJ, Genkin A, Tournoy M, Pughe-Sanford JL, de Ruyter van Steveninck RR, Chklovskii DB. The neuron as a direct data-driven controller. Proc Natl Acad Sci U S A 2024; 121:e2311893121. PMID: 38913890. PMCID: PMC11228465. DOI: 10.1073/pnas.2311893121.
Abstract
In the quest to model neuronal function amid gaps in physiological data, a promising strategy is to develop a normative theory that interprets neuronal physiology as optimizing a computational objective. This study extends current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers. We posit that neurons, especially those beyond early sensory areas, steer their environment toward a specific desired state through their output. This environment comprises both synaptically interlinked neurons and external motor-sensory feedback loops, enabling neurons to evaluate the effectiveness of their control via synaptic feedback. To model neurons as biologically feasible controllers that implicitly identify loop dynamics, infer latent states, and optimize control, we utilize the contemporary direct data-driven control (DD-DC) framework. Our DD-DC neuron model explains various neurophysiological phenomena: the shift from potentiation to depression in spike-timing-dependent plasticity with its asymmetry, the duration and adaptive nature of feedforward and feedback neuronal filters, the imprecision in spike generation under constant stimulation, and the characteristic operational variability and noise in the brain. Our model presents a significant departure from the traditional, feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a modern, biologically informed fundamental unit for constructing neural networks.
Affiliation(s)
- Jason J Moore
- Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Alexander Genkin
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Magnus Tournoy
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Dmitri B Chklovskii
- Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
4
Abatis M, Perin R, Niu R, van den Burg E, Hegoburu C, Kim R, Okamura M, Bito H, Markram H, Stoop R. Fear learning induces synaptic potentiation between engram neurons in the rat lateral amygdala. Nat Neurosci 2024; 27:1309-1317. PMID: 38871992. PMCID: PMC11239494. DOI: 10.1038/s41593-024-01676-6.
Abstract
The lateral amygdala (LA) encodes fear memories by potentiating sensory inputs associated with threats and, in the process, recruits 10-30% of its neurons per fear memory engram. However, how the local network within the LA processes this information and whether it also plays a role in storing it are still largely unknown. Here, using ex vivo 12-patch-clamp and in vivo 32-electrode electrophysiological recordings in the LA of fear-conditioned rats, in combination with activity-dependent fluorescent and optogenetic tagging and recall, we identified a sparsely connected network between principal LA neurons that is organized in clusters. Fear conditioning specifically causes potentiation of synaptic connections between learning-recruited neurons. These findings of synaptic plasticity in an autoassociative excitatory network of the LA may suggest a basic principle through which a small number of pyramidal neurons could encode a large number of memories.
Affiliation(s)
- Marios Abatis
- Department of Psychiatry, Center for Psychiatric Neuroscience, University Hospital of Lausanne, Prilly-Lausanne, Switzerland
- Rodrigo Perin
- Brain-Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Ruifang Niu
- Department of Psychiatry, Center for Psychiatric Neuroscience, University Hospital of Lausanne, Prilly-Lausanne, Switzerland
- Erwin van den Burg
- Department of Psychiatry, Center for Psychiatric Neuroscience, University Hospital of Lausanne, Prilly-Lausanne, Switzerland
- Chloe Hegoburu
- Department of Psychiatry, Center for Psychiatric Neuroscience, University Hospital of Lausanne, Prilly-Lausanne, Switzerland
- Ryang Kim
- Department of Neurochemistry, The University of Tokyo Graduate School of Medicine, Tokyo, Japan
- Michiko Okamura
- Department of Neurochemistry, The University of Tokyo Graduate School of Medicine, Tokyo, Japan
- Haruhiko Bito
- Department of Neurochemistry, The University of Tokyo Graduate School of Medicine, Tokyo, Japan
- Henry Markram
- Brain-Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Ron Stoop
- Department of Psychiatry, Center for Psychiatric Neuroscience, University Hospital of Lausanne, Prilly-Lausanne, Switzerland
5
Tsubo Y, Shinomoto S. Nondifferentiable activity in the brain. PNAS Nexus 2024; 3:pgae261. PMID: 38994500. PMCID: PMC11238849. DOI: 10.1093/pnasnexus/pgae261.
Abstract
Spike raster plots of numerous neurons show vertical stripes, indicating that neurons exhibit synchronous activity in the brain. We seek to determine whether these coherent dynamics are caused by smooth brainwave activity or by something else. By analyzing biological data, we find that their cross-correlograms exhibit not only slow undulation but also a cusp at the origin, in addition to possible signs of monosynaptic connectivity. Here we show that undulation emerges if neurons are subject to smooth brainwave oscillations while a cusp results from nondifferentiable fluctuations. While modern analysis methods have achieved good connectivity estimation by adapting the models to slow undulation, they still make false inferences due to the cusp. We devise a new analysis method that may solve both problems. We also demonstrate that oscillations and nondifferentiable fluctuations may emerge in simulations of large-scale neural networks.
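The cross-correlograms discussed above are histograms of spike-time differences between two neurons. The following is a minimal illustrative sketch (not the authors' analysis method; the spike counts, 3.5 ms delay, and jitter below are made-up parameters) showing how a monosynaptic-like connection appears as a short-latency peak in the correlogram:

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_correlogram(t1, t2, bin_ms=1.0, window_ms=50.0):
    """Histogram of spike-time differences t2 - t1 within +/- window_ms."""
    edges = np.arange(-window_ms, window_ms + bin_ms, bin_ms)
    diffs = (t2[None, :] - t1[:, None]).ravel()
    diffs = diffs[np.abs(diffs) <= window_ms]
    counts, _ = np.histogram(diffs, bins=edges)
    return edges[:-1] + bin_ms / 2, counts

# Two toy spike trains (times in ms): train 2 "echoes" ~30% of train 1's
# spikes with a jittered 3.5 ms delay, mimicking a monosynaptic connection
t1 = np.sort(rng.uniform(0.0, 10000.0, 500))
mask = rng.random(t1.size) < 0.3
echo = t1[mask] + 3.5 + rng.normal(0.0, 0.3, mask.sum())
t2 = np.sort(np.concatenate([rng.uniform(0.0, 10000.0, 400), echo]))

lags, counts = cross_correlogram(t1, t2)
peak_lag = float(lags[np.argmax(counts)])  # near the +3.5 ms synaptic delay
```

In real data, the paper argues, such peaks ride on top of slow undulations (from brainwave oscillations) and a central cusp (from nondifferentiable fluctuations), both of which a connectivity estimator must account for.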
Affiliation(s)
- Yasuhiro Tsubo
- College of Information Science and Engineering, Ritsumeikan University, Osaka 567-8570, Japan
- Shigeru Shinomoto
- Research Organization of Open Innovation and Collaboration, Ritsumeikan University, Osaka 567-8570, Japan
- Graduate School of Biostudies, Kyoto University, Kyoto 606-8501, Japan
6
Chini M, Hnida M, Kostka JK, Chen YN, Hanganu-Opatz IL. Preconfigured architecture of the developing mouse brain. Cell Rep 2024; 43:114267. PMID: 38795344. DOI: 10.1016/j.celrep.2024.114267.
Abstract
In the adult brain, structural and functional parameters, such as synaptic sizes and neuronal firing rates, follow right-skewed and heavy-tailed distributions. While this organization is thought to have significant implications, its development is still largely unknown. Here, we address this knowledge gap by investigating a large-scale dataset recorded from the prefrontal cortex and the olfactory bulb of mice aged 4-60 postnatal days. We show that firing rates and spike train interactions have a largely stable distribution shape throughout the first 60 postnatal days and that the prefrontal cortex displays a functional small-world architecture. Moreover, early brain activity exhibits an oligarchical organization, where high-firing neurons have hub-like properties. In a neural network model, we show that analogously right-skewed and heavy-tailed synaptic parameters are instrumental to consistently recapitulate the experimental data. Thus, functional and structural parameters in the developing brain are already extremely distributed, suggesting that this organization is preconfigured and not experience dependent.
Affiliation(s)
- Mattia Chini
- Institute of Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Marilena Hnida
- Institute of Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Johanna K Kostka
- Institute of Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Yu-Nan Chen
- Institute of Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Ileana L Hanganu-Opatz
- Institute of Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
7
Breffle J, Germaine H, Shin JD, Jadhav SP, Miller P. Intrinsic dynamics of randomly clustered networks generate place fields and preplay of novel environments. bioRxiv [preprint] 2024; 2023.10.26.564173. PMID: 37961479. PMCID: PMC10634993. DOI: 10.1101/2023.10.26.564173.
Abstract
During both sleep and awake immobility, hippocampal place cells reactivate time-compressed versions of sequences representing recently experienced trajectories in a phenomenon known as replay. Intriguingly, spontaneous sequences can also correspond to forthcoming trajectories in novel environments experienced later, in a phenomenon known as preplay. Here, we present a model showing that sequences of spikes correlated with the place fields underlying spatial trajectories in both previously experienced and future novel environments can arise spontaneously in neural circuits with random, clustered connectivity rather than pre-configured spatial maps. Moreover, the realistic place fields themselves arise in the circuit from minimal, landmark-based inputs. We find that preplay quality depends on the network's balance of cluster isolation and overlap, with optimal preplay occurring in small-world regimes of high clustering yet short path lengths. We validate the results of our model by applying the same place field and preplay analyses to previously published rat hippocampal place cell data. Our results show that clustered recurrent connectivity can generate spontaneous preplay and immediate replay of novel environments. These findings support a framework whereby novel sensory experiences become associated with preexisting "pluripotent" internal neural activity patterns.
Affiliation(s)
- Jordan Breffle
- Neuroscience Program, Brandeis University, 415 South St., Waltham, MA 02454
- Hannah Germaine
- Neuroscience Program, Brandeis University, 415 South St., Waltham, MA 02454
- Justin D Shin
- Neuroscience Program, Brandeis University, 415 South St., Waltham, MA 02454
- Volen National Center for Complex Systems, Brandeis University, 415 South St., Waltham, MA 02454
- Department of Psychology, Brandeis University, 415 South St., Waltham, MA 02454
- Shantanu P Jadhav
- Neuroscience Program, Brandeis University, 415 South St., Waltham, MA 02454
- Volen National Center for Complex Systems, Brandeis University, 415 South St., Waltham, MA 02454
- Department of Psychology, Brandeis University, 415 South St., Waltham, MA 02454
- Paul Miller
- Neuroscience Program, Brandeis University, 415 South St., Waltham, MA 02454
- Volen National Center for Complex Systems, Brandeis University, 415 South St., Waltham, MA 02454
- Department of Biology, Brandeis University, 415 South St., Waltham, MA 02454
8
Iwase M, Diba K, Pastalkova E, Mizuseki K. Dynamics of spike transmission and suppression between principal cells and interneurons in the hippocampus and entorhinal cortex. Hippocampus 2024. PMID: 38874439. DOI: 10.1002/hipo.23612.
Abstract
Synaptic excitation and inhibition are essential for neuronal communication. However, the variables that regulate synaptic excitation and inhibition in the intact brain remain largely unknown. Here, we examined how spike transmission and suppression between principal cells (PCs) and interneurons (INTs) are modulated by activity history, brain state, cell type, and somatic distance between presynaptic and postsynaptic neurons by applying cross-correlogram analyses to datasets recorded from the dorsal hippocampus and medial entorhinal cortex (MEC) of 11 male behaving and sleeping Long Evans rats. The strength, temporal delay, and brain-state dependency of the spike transmission and suppression depended on the subregions/layers. The spike transmission probability of PC-INT excitatory pairs that showed short-term depression versus short-term facilitation was higher in CA1 and lower in CA3. Likewise, the intersomatic distance affected the proportion of PC-INT excitatory pairs that showed short-term depression and facilitation in the opposite manner in CA1 compared with CA3. The time constant of depression was longer, while that of facilitation was shorter in MEC than in CA1 and CA3. During sharp-wave ripples, spike transmission showed a larger gain in the MEC than in CA1 and CA3. The intersomatic distance affected the spike transmission gain during sharp-wave ripples differently in CA1 versus CA3. A subgroup of MEC layer 3 (EC3) INTs preferentially received excitatory inputs from and inhibited MEC layer 2 (EC2) PCs. The EC2 PC-EC3 INT excitatory pairs, most of which showed short-term depression, exhibited higher spike transmission probabilities than the EC2 PC-EC2 INT and EC3 PC-EC3 INT excitatory pairs. EC2 putative stellate cells exhibited stronger spike transmission to and received weaker spike suppression from EC3 INTs than EC2 putative pyramidal cells. 
This study provides detailed comparisons of monosynaptic interaction dynamics in the hippocampal-entorhinal loop, which may help to elucidate circuit operations.
Affiliation(s)
- Motosada Iwase
- Department of Physiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Department of Physiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Kamran Diba
- Department of Anesthesiology, Neuroscience Graduate Program, University of Michigan Medical School, Ann Arbor, Michigan, USA
- Eva Pastalkova
- The William Alanson White Institute of Psychiatry, Psychoanalysis & Psychology, New York, New York, USA
- Kenji Mizuseki
- Department of Physiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Department of Physiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
9
Yang J, Feng P, Wu Y. Neuronal avalanche dynamics regulated by spike-timing-dependent plasticity under different topologies and heterogeneities. Cogn Neurodyn 2024; 18:1307-1321. PMID: 38826660. PMCID: PMC11143121. DOI: 10.1007/s11571-023-09966-8.
Abstract
Neuronal avalanches, a critical state of network self-organization, have been widely observed in electrophysiological recordings at different signal levels and spatial scales of the brain, and they have a significant influence on information transmission and processing. In this paper, the collective firing behavior of neurons is studied with the leaky integrate-and-fire model, and spike-timing-dependent plasticity (STDP) is introduced to update connection weights through competition between adjacent neurons in different network topologies. The results show that STDP facilitates network synchronization and markedly increases the probability of large-scale neuronal avalanches. Moreover, both the structure of the STDP rule and the network connection density affect the emergence of avalanche critical states: the learning rate correlates positively with the slope of the power-law distribution, while the time constant correlates negatively with it. When heterogeneity is introduced into the network, however, STDP promotes synchrony only at suitable levels of heterogeneity. We also find that, in heterogeneous networks, long-term potentiation is sensitive to adjustments of both the time constant and the learning rate, whereas long-term depression is sensitive only to the learning rate. These results should facilitate our understanding of synchronization in various neural networks under STDP learning rules.
Affiliation(s)
- Jiayi Yang
- State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace Engineering, Xi’an Jiaotong University, Xi’an 710049, Shaanxi, China
- Peihua Feng
- State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace Engineering, Xi’an Jiaotong University, Xi’an 710049, Shaanxi, China
- Ying Wu
- State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace Engineering, Xi’an Jiaotong University, Xi’an 710049, Shaanxi, China
10
Ji-An L, Benna MK. Deep Learning without Weight Symmetry. arXiv [preprint] 2024; arXiv:2405.20594v1. PMID: 38855537. PMCID: PMC11160852.
Abstract
Backpropagation (BP), a foundational algorithm for training artificial neural networks, predominates in contemporary deep learning. Although highly successful, it is often considered biologically implausible. A significant limitation arises from the need for precise symmetry between connections in the backward and forward pathways to backpropagate gradient signals accurately, which is not observed in biological brains. Researchers have proposed several algorithms to alleviate this symmetry constraint, such as feedback alignment and direct feedback alignment. However, their divergence from backpropagation dynamics presents challenges, particularly in deeper networks and convolutional layers. Here we introduce the Product Feedback Alignment (PFA) algorithm. Our findings demonstrate that PFA closely approximates BP and achieves comparable performance in deep convolutional networks while avoiding explicit weight symmetry. Our results offer a novel solution to the longstanding weight symmetry problem, leading to more biologically plausible learning in deep convolutional networks compared to earlier methods.
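Feedback alignment, one of the symmetry-relaxing predecessors the abstract mentions, can be sketched in a few lines. This is a minimal two-layer linear toy (plain feedback alignment, not the paper's Product Feedback Alignment; the task, sizes, and learning rate are illustrative assumptions): the backward pass routes the error through a fixed random matrix `B` instead of the transpose of the forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y is a linear function of x
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=(10, 1))

# Two-layer linear network. No weight symmetry is required: the backward
# pass uses a fixed random matrix B in place of W2.T
W1 = rng.normal(size=(10, 20)) * 0.5
W2 = rng.normal(size=(20, 1)) * 0.1
B = rng.normal(size=(1, 20)) * 0.1   # fixed random feedback weights

def mse():
    return float(np.mean((X @ W1 @ W2 - y) ** 2))

loss_before = mse()
lr = 0.05
for _ in range(200):
    h = X @ W1                       # hidden activity
    err = h @ W2 - y                 # output error
    W2 -= lr * h.T @ err / len(X)    # true gradient for the output layer
    delta = err @ B                  # error routed back through B, not W2.T
    W1 -= lr * X.T @ delta / len(X)
loss_after = mse()
```

In deeper and convolutional networks this simple scheme degrades, which is the gap the paper's PFA algorithm targets.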
Affiliation(s)
- Li Ji-An
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093
- Marcus K Benna
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093
11
Hudetz AG. Microstimulation reveals anesthetic state-dependent effective connectivity of neurons in cerebral cortex. bioRxiv [preprint] 2024; 2024.04.29.591664. PMID: 38746366. PMCID: PMC11092428. DOI: 10.1101/2024.04.29.591664.
Abstract
Complex neuronal interactions underlie cortical information processing that can be compromised in altered states of consciousness. Here, intracortical microstimulation was applied to investigate the state-dependent effective connectivity of neurons in rat visual cortex in vivo. Extracellular activity was recorded at 32 sites in layers 5/6 while stimulating with charge-balanced discrete pulses at each electrode in random order. The same stimulation pattern was applied at three levels of anesthesia with desflurane and in wakefulness. Spikes were sorted and classified by their waveform features as putative excitatory and inhibitory neurons. Microstimulation caused an early (<10 ms) increase followed by a prolonged (11-100 ms) decrease in spiking of all neurons throughout the electrode array. The early response of excitatory but not inhibitory neurons decayed rapidly with distance from the stimulation site over 1 mm. Effective connectivity of neurons with a significant stimulus response was dense in wakefulness and sparse under anesthesia. Network motifs were identified in graphs of effective connectivity constructed from monosynaptic cross-correlograms. The number of motifs, especially those of higher order, increased rapidly as the anesthesia was withdrawn, indicating a substantial increase in network connectivity as the animals woke up. The results illuminate the impact of anesthesia on the functional integrity of local circuits affecting the state of consciousness.
12
Albesa-González A, Clopath C. Learning with filopodia and spines: Complementary strong and weak competition lead to specialized, graded, and protected receptive fields. PLoS Comput Biol 2024; 20:e1012110. PMID: 38743789. PMCID: PMC11125506. DOI: 10.1371/journal.pcbi.1012110.
Abstract
Filopodia are thin synaptic protrusions that have long been known to play an important role in early development. Recently, they have been found to be more abundant in the adult cortex than previously thought, and more plastic than spines (button-shaped mature synapses). Inspired by these findings, we introduce a new model of synaptic plasticity that jointly describes learning of filopodia and spines. The model assumes that filopodia exhibit strongly competitive learning dynamics, similarly to additive spike-timing-dependent plasticity (STDP). At the same time it proposes that, if filopodia undergo sufficient potentiation, they consolidate into spines. Spines follow weakly competitive learning, classically associated with multiplicative, soft-bounded models of STDP. This makes spines more stable and sensitive to the fine structure of input correlations. We show that our learning rule has a selectivity comparable to additive STDP and captures input correlations as well as multiplicative models of STDP. We also show how it can protect previously formed memories and perform synaptic consolidation. Overall, our results can be seen as a phenomenological description of how filopodia and spines could cooperate to overcome the individual difficulties faced by strong and weak competition mechanisms.
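The contrast between the strong competition of additive STDP and the weak, soft-bounded competition of multiplicative STDP can be seen in a minimal pair-based sketch (not the paper's joint filopodia-spine rule; learning rate, time constant, and pairing protocol below are illustrative assumptions). Under balanced random pairings, additive updates push weights toward the bounds, while soft-bounded updates keep them graded:

```python
import numpy as np

rng = np.random.default_rng(2)
W_MAX = 1.0

def additive_stdp(w, dt, lr=0.01, tau=20.0):
    """Additive (strongly competitive) STDP: update size independent of w.
    dt > 0 means pre-before-post (potentiate), dt < 0 the reverse (depress)."""
    dw = np.where(dt > 0, lr * np.exp(-dt / tau), -lr * np.exp(dt / tau))
    return np.clip(w + dw, 0.0, W_MAX)

def soft_bounded_stdp(w, dt, lr=0.01, tau=20.0):
    """Multiplicative, soft-bounded (weakly competitive) STDP:
    potentiation shrinks near W_MAX, depression shrinks near zero."""
    dw = np.where(dt > 0, lr * (W_MAX - w) * np.exp(-dt / tau),
                  -lr * w * np.exp(dt / tau))
    return np.clip(w + dw, 0.0, W_MAX)

# Drive 500 synapses with random +/-5 ms pre/post pairings
n = 500
w_add = np.full(n, 0.5)
w_soft = np.full(n, 0.5)
for _ in range(2000):
    dt = rng.choice([-5.0, 5.0], size=n)
    w_add = additive_stdp(w_add, dt)
    w_soft = soft_bounded_stdp(w_soft, dt)

spread_add = float(np.std(w_add))    # weights scattered toward the bounds
spread_soft = float(np.std(w_soft))  # weights stay graded near the middle
```

The much larger spread of the additive weights is the "strong competition" the model assigns to filopodia; the tight, graded soft-bounded weights correspond to the stability it assigns to spines.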
Affiliation(s)
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
13
Chen K, Pan K, He S, Liu R, Zhou Z, Zhu D, Liu Z, He Z, Sun H, Wang M, Wang K, Tang M, Liu J. Mimicking Bidirectional Inhibitory Synapse Using a Porous-Confined Ionic Memristor with Electrolyte/Tris(4-aminophenyl)amine Neurotransmitter. Adv Sci (Weinh) 2024; 11:e2400966. PMID: 38483027. PMCID: PMC11109647. DOI: 10.1002/advs.202400966.
Abstract
Ionic memristors can emulate brain-like functions of biological synapses for neuromorphic technologies. Apart from the widely studied excitatory-excitatory and excitatory-inhibitory synapses, memristors exhibiting inhibitory-inhibitory synaptic behavior remain a challenge. Here, the first biaxially inhibited artificial synapse is demonstrated, consisting of a solid electrolyte and a conjugated microporous polymer bilayer as the neurotransmitter, with the former serving as an ion reservoir and the latter acting as a confined transport channel. Owing to the migration, trapping, and de-trapping of ions within the nanoslits, the device exhibits inhibitory synaptic plasticity under both positive and negative stimuli. Remarkably, the artificial synapse maintains a low level of stable nonvolatile memory over a long period (≈60 min) after multiple stimuli, with feature-inferencing and feature-training capabilities of a neural node in neuromorphic computing. This work provides a reliable strategy for constructing nanochannel ionic memristive materials toward fully inhibitory synaptic devices.
Collapse
Affiliation(s)
- Kang Chen
- School of Materials Science and Engineering, Xiangtan University, North Second Ring Road, Yuhu, Xiangtan, Hunan 411105, China
- Keyuan Pan
- Key Laboratory of Flexible Electronics (KLOFE) & Institute of Advanced Materials (IAM), Nanjing Tech University (NanjingTech), 30 South Puzhu Road, Nanjing 211816, China
- Shang He
- School of Materials Science and Engineering, Xiangtan University, North Second Ring Road, Yuhu, Xiangtan, Hunan 411105, China
- Rui Liu
- School of Materials Science and Engineering, Xiangtan University, North Second Ring Road, Yuhu, Xiangtan, Hunan 411105, China
- Zhe Zhou
- Key Laboratory of Flexible Electronics (KLOFE) & Institute of Advanced Materials (IAM), Nanjing Tech University (NanjingTech), 30 South Puzhu Road, Nanjing 211816, China
- Duoyi Zhu
- Key Laboratory of Flexible Electronics (KLOFE) & Institute of Advanced Materials (IAM), Nanjing Tech University (NanjingTech), 30 South Puzhu Road, Nanjing 211816, China
- Zhengdong Liu
- Key Laboratory of Flexible Electronics (KLOFE) & Institute of Advanced Materials (IAM), Nanjing Tech University (NanjingTech), 30 South Puzhu Road, Nanjing 211816, China
- Zixi He
- Key Laboratory of Flexible Electronics (KLOFE) & Institute of Advanced Materials (IAM), Nanjing Tech University (NanjingTech), 30 South Puzhu Road, Nanjing 211816, China
- Hongchao Sun
- Key Laboratory of Flexible Electronics (KLOFE) & Institute of Advanced Materials (IAM), Nanjing Tech University (NanjingTech), 30 South Puzhu Road, Nanjing 211816, China
- Min Wang
- Key Laboratory of Flexible Electronics (KLOFE) & Institute of Advanced Materials (IAM), Nanjing Tech University (NanjingTech), 30 South Puzhu Road, Nanjing 211816, China
- Kaili Wang
- Key Laboratory of Flexible Electronics (KLOFE) & Institute of Advanced Materials (IAM), Nanjing Tech University (NanjingTech), 30 South Puzhu Road, Nanjing 211816, China
- Minghua Tang
- School of Materials Science and Engineering, Xiangtan University, North Second Ring Road, Yuhu, Xiangtan, Hunan 411105, China
- Juqing Liu
- Key Laboratory of Flexible Electronics (KLOFE) & Institute of Advanced Materials (IAM), Nanjing Tech University (NanjingTech), 30 South Puzhu Road, Nanjing 211816, China
14
Carbonero D, Noueihed J, Kramer MA, White JA. Non-Negative Matrix Factorization for Analyzing State Dependent Neuronal Network Dynamics in Calcium Recordings. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.10.11.561797. [PMID: 37905071 PMCID: PMC10614735 DOI: 10.1101/2023.10.11.561797] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2023]
Abstract
Calcium imaging allows recording from hundreds of neurons in vivo with the ability to resolve single-cell activity. Evaluating and analyzing neuronal responses, while also considering all dimensions of the data set to draw specific conclusions, is extremely difficult. Often, descriptive statistics are used to analyze these forms of data. These analyses, however, remove variance by averaging the responses of single neurons across recording sessions, or across combinations of neurons, to create single quantitative metrics, losing the temporal dynamics of neuronal activity and the responses of neurons relative to each other. Dimensionality Reduction (DR) methods serve as a good foundation for these analyses because they reduce the dimensions of the data into components while still maintaining the variance. Non-negative Matrix Factorization (NMF) is an especially promising DR method for analyzing activity recorded in calcium imaging because of its mathematical constraints, which include positivity and linearity. We adapt NMF for our analyses and compare its performance to alternative dimensionality reduction methods on both artificial and in vivo data. We find that NMF is well suited for analyzing calcium imaging recordings, accurately capturing the underlying dynamics of the data and outperforming alternative methods in common use.
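Why NMF's positivity constraint suits nonnegative calcium signals can be illustrated with a minimal sketch. This is a generic Lee-Seung multiplicative-update implementation, not the authors' pipeline; the synthetic data, rank, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "calcium-like" data: 3 nonnegative temporal components
# mixed across 30 neurons, plus a small nonnegative noise floor.
n_neurons, n_frames, k = 30, 200, 3
X = rng.random((n_neurons, k)) @ rng.random((k, n_frames)) \
    + 0.01 * rng.random((n_neurons, n_frames))

def nmf(X, k, n_iter=500, eps=1e-9):
    """Factor X ~= W @ H with W, H >= 0 via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(1)
    W = rng.random((X.shape[0], k)) + eps
    H = rng.random((k, X.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update temporal components
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update per-neuron loadings
    return W, H

W, H = nmf(X, k)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")  # small for near-rank-3 data
```

Because the updates are purely multiplicative, W and H stay nonnegative throughout, so each component remains interpretable as an additive firing pattern rather than a signed mixture.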
Affiliation(s)
- Daniel Carbonero
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, United States of America
- Neurophotonics Center, Boston University, Boston, Massachusetts, United States of America
- Jad Noueihed
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, United States of America
- Neurophotonics Center, Boston University, Boston, Massachusetts, United States of America
- Mark A. Kramer
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts, United States of America
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, United States of America
- John A. White
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, United States of America
- Neurophotonics Center, Boston University, Boston, Massachusetts, United States of America
15
Celii B, Papadopoulos S, Ding Z, Fahey PG, Wang E, Papadopoulos C, Kunin AB, Patel S, Bae JA, Bodor AL, Brittain D, Buchanan J, Bumbarger DJ, Castro MA, Cobos E, Dorkenwald S, Elabbady L, Halageri A, Jia Z, Jordan C, Kapner D, Kemnitz N, Kinn S, Lee K, Li K, Lu R, Macrina T, Mahalingam G, Mitchell E, Mondal SS, Mu S, Nehoran B, Popovych S, Schneider-Mizell CM, Silversmith W, Takeno M, Torres R, Turner NL, Wong W, Wu J, Yu SC, Yin W, Xenes D, Kitchell LM, Rivlin PK, Rose VA, Bishop CA, Wester B, Froudarakis E, Walker EY, Sinz F, Seung HS, Collman F, da Costa NM, Reid RC, Pitkow X, Tolias AS, Reimer J. NEURD: automated proofreading and feature extraction for connectomics. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.03.14.532674. [PMID: 36993282 PMCID: PMC10055177 DOI: 10.1101/2023.03.14.532674] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
We are now in the era of millimeter-scale electron microscopy (EM) volumes collected at nanometer resolution (Shapson-Coe et al., 2021; Consortium et al., 2021). Dense reconstruction of cellular compartments in these EM volumes has been enabled by recent advances in Machine Learning (ML) (Lee et al., 2017; Wu et al., 2021; Lu et al., 2021; Macrina et al., 2021). Automated segmentation methods can now yield exceptionally accurate reconstructions of cells, but despite this accuracy, laborious post-hoc proofreading is still required to generate large connectomes free of merge and split errors. The elaborate 3-D meshes of neurons produced by these segmentations contain detailed morphological information, from the diameter, shape, and branching patterns of axons and dendrites, down to the fine-scale structure of dendritic spines. However, extracting information about these features can require substantial effort to piece together existing tools into custom workflows. Building on existing open-source software for mesh manipulation, here we present "NEURD", a software package that decomposes each meshed neuron into a compact and extensively-annotated graph representation. With these feature-rich graphs, we implement workflows to automate a variety of tasks that would otherwise require extensive manual effort, such as state of the art automated post-hoc proofreading of merge errors, cell classification, spine detection, axon-dendritic proximities, and computation of other features. These features enable many downstream analyses of neural morphology and connectivity, making these new massive and complex datasets more accessible to neuroscience researchers focused on a variety of scientific questions.
16
Peng Y, Bjelde A, Aceituno PV, Mittermaier FX, Planert H, Grosser S, Onken J, Faust K, Kalbhenn T, Simon M, Radbruch H, Fidzinski P, Schmitz D, Alle H, Holtkamp M, Vida I, Grewe BF, Geiger JRP. Directed and acyclic synaptic connectivity in the human layer 2-3 cortical microcircuit. Science 2024; 384:338-343. [PMID: 38635709 DOI: 10.1126/science.adg8828] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Accepted: 03/12/2024] [Indexed: 04/20/2024]
Abstract
The computational capabilities of neuronal networks are fundamentally constrained by their specific connectivity. Previous studies of cortical connectivity have mostly been carried out in rodents; whether the principles established therein also apply to the evolutionarily expanded human cortex is unclear. We studied network properties within the human temporal cortex using samples obtained from brain surgery. We analyzed multineuron patch-clamp recordings in layer 2-3 pyramidal neurons and identified substantial differences compared with rodents. Reciprocity showed random distribution, synaptic strength was independent from connection probability, and connectivity of the supragranular temporal cortex followed a directed and mostly acyclic graph topology. Application of these principles in neuronal models increased dimensionality of network dynamics, suggesting a critical role for cortical computation.
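The "directed and mostly acyclic graph topology" reported above can be tested on any connectivity matrix with a standard topological-sort check. The sketch below uses Kahn's algorithm on toy adjacency matrices; it is a generic illustration, not the authors' analysis code.

```python
import numpy as np
from collections import deque

def is_acyclic(adj):
    """Kahn's topological sort: True iff the directed graph has no cycles.
    adj[i][j] != 0 means a connection from neuron i to neuron j."""
    adj = np.asarray(adj, dtype=bool)
    indeg = adj.sum(axis=0).astype(int)            # in-degree of each node
    queue = deque(np.flatnonzero(indeg == 0).tolist())
    visited = 0
    while queue:
        u = queue.popleft()
        visited += 1
        for v in np.flatnonzero(adj[u]):           # remove u's outgoing edges
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return visited == adj.shape[0]                 # all nodes sorted => acyclic

feedforward = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]    # strictly feed-forward motif
reciprocal = [[0, 1, 0], [1, 0, 1], [0, 0, 0]]     # contains a 2-cycle
print(is_acyclic(feedforward), is_acyclic(reciprocal))  # True False
```

A reciprocally connected pair, as in the second motif, is the smallest violation of acyclicity, which is why reciprocity statistics and graph topology are closely linked in such analyses.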
Affiliation(s)
- Yangfan Peng
- Institute of Neurophysiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Antje Bjelde
- Institute of Neurophysiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Pau Vilimelis Aceituno
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, 8057 Zürich, Switzerland
- Franz X Mittermaier
- Institute of Neurophysiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Henrike Planert
- Institute of Neurophysiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Sabine Grosser
- Institute for Integrative Neuroanatomy, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Julia Onken
- Department of Neurosurgery, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Katharina Faust
- Department of Neurosurgery, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Thilo Kalbhenn
- Department of Neurosurgery (Evangelisches Klinikum Bethel), Medical School, Bielefeld University, 33617 Bielefeld, Germany
- Matthias Simon
- Department of Neurosurgery (Evangelisches Klinikum Bethel), Medical School, Bielefeld University, 33617 Bielefeld, Germany
- Helena Radbruch
- Department of Neuropathology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Pawel Fidzinski
- Clinical Study Center, Berlin Institute of Health at Charité-Universitätsmedizin Berlin, 10117 Berlin, Germany
- German Center for Neurodegenerative Diseases (DZNE) Berlin, 10117 Berlin, Germany
- Dietmar Schmitz
- German Center for Neurodegenerative Diseases (DZNE) Berlin, 10117 Berlin, Germany
- Neuroscience Research Center, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Henrik Alle
- Institute of Neurophysiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Martin Holtkamp
- Epilepsy-Center Berlin-Brandenburg, Department of Neurology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Imre Vida
- Institute for Integrative Neuroanatomy, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Benjamin F Grewe
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, 8057 Zürich, Switzerland
- Jörg R P Geiger
- Institute of Neurophysiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, 10117 Berlin, Germany
17
Papadopoulos L, Jo S, Zumwalt K, Wehr M, McCormick DA, Mazzucato L. Modulation of metastable ensemble dynamics explains optimal coding at moderate arousal in auditory cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.04.04.588209. [PMID: 38617286 PMCID: PMC11014582 DOI: 10.1101/2024.04.04.588209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/16/2024]
Abstract
Performance during perceptual decision-making exhibits an inverted-U relationship with arousal, but the underlying network mechanisms remain unclear. Here, we recorded from auditory cortex (A1) of behaving mice during passive tone presentation, while tracking arousal via pupillometry. We found that tone discriminability in A1 ensembles was optimal at intermediate arousal, revealing a population-level neural correlate of the inverted-U relationship. We explained this arousal-dependent coding using a spiking network model with a clustered architecture. Specifically, we show that optimal stimulus discriminability is achieved near a transition between a multi-attractor phase with metastable cluster dynamics (low arousal) and a single-attractor phase (high arousal). Additional signatures of this transition include arousal-induced reductions of overall neural variability and the extent of stimulus-induced variability quenching, which we observed in the empirical data. Altogether, this study elucidates computational principles underlying interactions between pupil-linked arousal, sensory processing, and neural variability, and suggests a role for phase transitions in explaining nonlinear modulations of cortical computations.
Affiliation(s)
- Suhyun Jo
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Kevin Zumwalt
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Michael Wehr
- Institute of Neuroscience, University of Oregon, Eugene, Oregon and Department of Psychology, University of Oregon, Eugene, Oregon
- David A McCormick
- Institute of Neuroscience, University of Oregon, Eugene, Oregon and Department of Biology, University of Oregon, Eugene, Oregon
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Department of Biology, University of Oregon, Eugene, Oregon
- Department of Mathematics, University of Oregon, Eugene, Oregon and Department of Physics, University of Oregon, Eugene, Oregon
18
Tian ZQK, Chen K, Li S, McLaughlin DW, Zhou D. Causal connectivity measures for pulse-output network reconstruction: Analysis and applications. Proc Natl Acad Sci U S A 2024; 121:e2305297121. [PMID: 38551842 PMCID: PMC10998614 DOI: 10.1073/pnas.2305297121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Accepted: 03/03/2024] [Indexed: 04/08/2024] Open
Abstract
The causal connectivity of a network is often inferred to understand network function. It is widely acknowledged that the inferred causal connectivity relies on the causality measure one applies, and it may differ from the network's underlying structural connectivity. However, the interpretation of causal connectivity remains to be fully clarified, in particular, how causal connectivity depends on causality measures and how causal connectivity relates to structural connectivity. Here, we focus on nonlinear networks with pulse signals as measured output, e.g., neural networks with spike output, and address the above issues based on four commonly utilized causality measures, i.e., time-delayed correlation coefficient, time-delayed mutual information, Granger causality, and transfer entropy. We theoretically show how these causality measures are related to one another when applied to pulse signals. Taking a simulated Hodgkin-Huxley network and a real mouse brain network as two illustrative examples, we further verify the quantitative relations among the four causality measures and demonstrate that the causal connectivity inferred by any of the four coincides well with the underlying network structural connectivity, therefore illustrating a direct link between the causal and structural connectivity. We stress that the structural connectivity of pulse-output networks can be reconstructed pairwise without conditioning on the global information of all other nodes in a network, thus circumventing the curse of dimensionality. Our framework provides a practical and effective approach for pulse-output network reconstruction.
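Of the four measures, the time-delayed correlation coefficient is the simplest to sketch for binned pulse-output signals. The example below is a generic illustration, not the paper's implementation; the synthetic spike trains, the 2-bin conduction delay, and the 50% transmission probability are assumptions made for the demo.

```python
import numpy as np

def time_delayed_corr(x, y, max_delay):
    """Largest-magnitude Pearson correlation between x(t) and y(t + d), d = 1..max_delay."""
    best = 0.0
    for d in range(1, max_delay + 1):
        c = np.corrcoef(x[:-d], y[d:])[0, 1]
        if abs(c) > abs(best):
            best = c
    return best

rng = np.random.default_rng(0)
T = 5000
pre = (rng.random(T) < 0.05).astype(float)        # presynaptic spike train (binned)
drive = np.roll(pre, 2)                           # its effect arrives 2 bins later
post = ((rng.random(T) < 0.02)                    # background firing
        | ((drive > 0) & (rng.random(T) < 0.5))   # 50% chance to follow the input
        ).astype(float)
unrelated = (rng.random(T) < 0.05).astype(float)  # an unconnected unit

print(time_delayed_corr(pre, post, 5))        # clearly positive (peaks at delay 2)
print(time_delayed_corr(unrelated, post, 5))  # near zero
```

The pairwise nature of this statistic, computed without conditioning on any other unit, is what the paper's dimensionality argument is about.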
Affiliation(s)
- Zhong-qi K. Tian
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Ministry of Education Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240, China
- Kai Chen
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Ministry of Education Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240, China
- Songting Li
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Ministry of Education Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240, China
- David W. McLaughlin
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012
- Center for Neural Science, New York University, New York, NY 10012
- Institute of Mathematical Sciences, New York University Shanghai, Shanghai 200122, China
- Neuroscience Institute of New York University Langone Health, New York University, New York, NY 10016
- Douglas Zhou
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Ministry of Education Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240, China
- Shanghai Frontier Science Center of Modern Analysis, Shanghai Jiao Tong University, Shanghai 200240, China
19
Donner C, Bartram J, Hornauer P, Kim T, Roqueiro D, Hierlemann A, Obozinski G, Schröter M. Ensemble learning and ground-truth validation of synaptic connectivity inferred from spike trains. PLoS Comput Biol 2024; 20:e1011964. [PMID: 38683881 PMCID: PMC11081509 DOI: 10.1371/journal.pcbi.1011964] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Revised: 05/09/2024] [Accepted: 03/02/2024] [Indexed: 05/02/2024] Open
Abstract
Probing the architecture of neuronal circuits and the principles that underlie their functional organization remains an important challenge of modern neurosciences. This holds true, in particular, for the inference of neuronal connectivity from large-scale extracellular recordings. Despite the popularity of this approach and a number of elaborate methods to reconstruct networks, the degree to which synaptic connections can be reconstructed from spike-train recordings alone remains controversial. Here, we provide a framework to probe and compare connectivity inference algorithms, using a combination of synthetic ground-truth and in vitro data sets, where the connectivity labels were obtained from simultaneous high-density microelectrode array (HD-MEA) and patch-clamp recordings. We find that reconstruction performance critically depends on the regularity of the recorded spontaneous activity, i.e., their dynamical regime, the type of connectivity, and the amount of available spike-train data. We therefore introduce an ensemble artificial neural network (eANN) to improve connectivity inference. We train the eANN on the validated outputs of six established inference algorithms and show how it improves network reconstruction accuracy and robustness. Overall, the eANN demonstrated strong performance across different dynamical regimes, worked well on smaller datasets, and improved the detection of synaptic connectivity, especially inhibitory connections. Results indicated that the eANN also improved the topological characterization of neuronal networks. The presented methodology contributes to advancing the performance of inference algorithms and facilitates our understanding of how neuronal activity relates to synaptic connectivity.
Affiliation(s)
- Christian Donner
- Swiss Data Science Center, ETH Zürich & EPFL, Zürich & Lausanne, Switzerland
- Julian Bartram
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Philipp Hornauer
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Taehoon Kim
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Damian Roqueiro
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Andreas Hierlemann
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Guillaume Obozinski
- Swiss Data Science Center, ETH Zürich & EPFL, Zürich & Lausanne, Switzerland
- Manuel Schröter
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
20
Vareberg AD, Bok I, Eizadi J, Ren X, Hai A. Inference of network connectivity from temporally binned spike trains. J Neurosci Methods 2024; 404:110073. [PMID: 38309313 PMCID: PMC10949361 DOI: 10.1016/j.jneumeth.2024.110073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2023] [Revised: 01/19/2024] [Accepted: 01/30/2024] [Indexed: 02/05/2024]
Abstract
BACKGROUND: Processing neural activity to reconstruct network connectivity is a central focus of neuroscience, yet the spatiotemporal requisites of biological nervous systems are challenging for current neuronal sensing modalities. Consequently, methods that leverage limited data to successfully infer synaptic connections, predict activity at single-unit resolution, and decipher their effect on whole systems can uncover critical information about neural processing. Despite the emergence of powerful methods for inferring connectivity, network reconstruction based on temporally subsampled data remains insufficiently explored.
NEW METHOD: We infer synaptic weights by processing firing rates within variable time bins for a heterogeneous feed-forward network of excitatory, inhibitory, and unconnected units. We assess classification and optimize model parameters for postsynaptic spike train reconstruction. We test our method on a physiological network of leaky integrate-and-fire neurons displaying bursting patterns and assess prediction of postsynaptic activity from microelectrode array data.
RESULTS: Results reveal parameters for improved prediction and performance and suggest that lower-resolution data and limited access to neurons can be preferred.
COMPARISON WITH EXISTING METHOD(S): Recent computational methods demonstrate highly improved reconstruction of connectivity from networks of parallel spike trains by considering spike lag, time-varying firing rates, and other underlying dynamics. However, these methods insufficiently explore temporal subsampling representative of novel data types.
CONCLUSIONS: We provide a framework for reverse engineering neural networks from data with limited temporal quality, describing optimal parameters for each bin size, which can be further improved using non-linear methods and applied to more complicated readouts and connectivity distributions in multiple brain circuits.
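The temporal binning at the core of this approach can be sketched as a one-line histogram over spike times. This is a minimal generic illustration of converting spike times to per-bin counts at a chosen bin size; the example spike times are hypothetical.

```python
import numpy as np

def bin_spike_train(spike_times, t_end, bin_size):
    """Convert spike times (seconds) into a vector of per-bin spike counts."""
    n_bins = int(np.ceil(t_end / bin_size))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, n_bins * bin_size))
    return counts

# Hypothetical spike times (seconds) for one unit over a 1 s recording.
spikes = np.array([0.01, 0.02, 0.35, 0.36, 0.37, 0.92])
for bin_size in (0.5, 0.1):
    print(bin_size, bin_spike_train(spikes, t_end=1.0, bin_size=bin_size))
```

Coarser bins collapse the burst at ~0.35 s into a single large count, which is exactly the kind of temporal subsampling whose effect on weight inference the paper quantifies.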
Affiliation(s)
- Adam D Vareberg
- Department of Biomedical Engineering, University of Wisconsin-Madison, United States; Wisconsin Institute for Translational Neuroengineering (WITNe), University of Wisconsin-Madison, United States
- Ilhan Bok
- Department of Electrical and Computer Engineering, University of Wisconsin-Madison, United States; Wisconsin Institute for Translational Neuroengineering (WITNe), University of Wisconsin-Madison, United States
- Jenna Eizadi
- Department of Biomedical Engineering, University of Wisconsin-Madison, United States; Wisconsin Institute for Translational Neuroengineering (WITNe), University of Wisconsin-Madison, United States
- Xiaoxuan Ren
- Department of Electrical and Computer Engineering, University of Wisconsin-Madison, United States
- Aviad Hai
- Department of Biomedical Engineering, University of Wisconsin-Madison, United States; Department of Electrical and Computer Engineering, University of Wisconsin-Madison, United States; Wisconsin Institute for Translational Neuroengineering (WITNe), University of Wisconsin-Madison, United States
21
Wright J, Bourke P. Markov Blankets and Mirror Symmetries-Free Energy Minimization and Mesocortical Anatomy. ENTROPY (BASEL, SWITZERLAND) 2024; 26:287. [PMID: 38667842 PMCID: PMC11049374 DOI: 10.3390/e26040287] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2024] [Revised: 03/21/2024] [Accepted: 03/25/2024] [Indexed: 04/28/2024]
Abstract
A theoretical account of development in mesocortical anatomy is derived from the free energy principle, operating in a neural field with both Hebbian and anti-Hebbian neural plasticity. An elementary structural unit is proposed, in which synaptic connections at mesoscale are arranged in paired patterns with mirror symmetry. Exchanges of synaptic flux in each pattern form coupled spatial eigenmodes, and the line of mirror reflection between the paired patterns operates as a Markov blanket, so that prediction errors in exchanges between the pairs are minimized. The theoretical analysis is then compared to the outcomes from a biological model of neocortical development, in which neuron precursors are selected by apoptosis for cell body and synaptic connections maximizing synchrony and also minimizing axonal length. It is shown that this model results in patterns of connection with the anticipated mirror symmetries, at micro-, meso- and inter-areal scales, among lateral connections, and in cortical depth. This explains the spatial organization and functional significance of neuron response preferences, and is compatible with the structural form of both columnar and noncolumnar cortex. Multi-way interactions of mirrored representations can provide a preliminary anatomically realistic model of cortical information processing.
Affiliation(s)
- James Wright
- Centre for Brain Research, and Department of Psychological Medicine, School of Medicine, University of Auckland, Auckland 1010, New Zealand
- Paul Bourke
- School of Social Sciences, Faculty of Arts, Business, Law and Education, University of Western Australia, Perth, WA 6009, Australia
22
Qian P, Manubens-Gil L, Jiang S, Peng H. Non-homogenous axonal bouton distribution in whole-brain single-cell neuronal networks. Cell Rep 2024; 43:113871. [PMID: 38451816 DOI: 10.1016/j.celrep.2024.113871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2023] [Revised: 01/08/2024] [Accepted: 02/09/2024] [Indexed: 03/09/2024] Open
Abstract
We examined the distribution of pre-synaptic contacts in axons of mouse neurons and constructed whole-brain single-cell neuronal networks using an extensive dataset of 1,891 fully reconstructed neurons. We found that bouton locations were not homogeneous throughout the axon and among brain regions. As our algorithm was able to generate whole-brain single-cell connectivity matrices from full morphology reconstruction datasets, we further found that non-homogeneous bouton locations have a significant impact on network wiring, including degree distribution, triad census, and community structure. By perturbing neuronal morphology, we further explored the link between anatomical details and network topology. In our in silico exploration, we found that dendritic and axonal tree span would have the greatest impact on network wiring, followed by synaptic contact deletion. Our results suggest that neuroanatomical details must be carefully addressed in studies of whole-brain networks at the single-cell level.
Affiliation(s)
- Penghao Qian
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, State Key Laboratory of Digital Medical Engineering, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China; School of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu 210096, China
- Linus Manubens-Gil
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, State Key Laboratory of Digital Medical Engineering, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Shengdian Jiang
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, State Key Laboratory of Digital Medical Engineering, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China; School of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu 210096, China
- Hanchuan Peng
- New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, State Key Laboratory of Digital Medical Engineering, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
23
Lagzi F, Fairhall AL. Emergence of co-tuning in inhibitory neurons as a network phenomenon mediated by randomness, correlations, and homeostatic plasticity. SCIENCE ADVANCES 2024; 10:eadi4350. [PMID: 38507489 DOI: 10.1126/sciadv.adi4350] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Accepted: 02/15/2024] [Indexed: 03/22/2024]
Abstract
Cortical excitatory neurons show clear tuning to stimulus features, but the tuning properties of inhibitory interneurons are ambiguous. While inhibitory neurons have been considered to be largely untuned, some studies show that some parvalbumin-expressing (PV) neurons do show feature selectivity and participate in co-tuned subnetworks with pyramidal neurons. In this study, we first use mean-field theory to demonstrate that a combination of homeostatic plasticity governing the synaptic dynamics of the connections from PV to excitatory neurons, heterogeneity in the excitatory postsynaptic potentials that impinge on PV neurons, and shared correlated input from layer 4 results in the functional and structural self-organization of PV subnetworks. Second, we show that structural and functional feature tuning of PV neurons emerges more clearly at the network level, i.e., that population-level measures identify functional and structural co-tuning of PV neurons that are not evident in pairwise individual-level measures. Finally, we show that such co-tuning can enhance network stability at the cost of reduced feature selectivity.
Affiliation(s)
- Fereshteh Lagzi
- Department of Physiology and Biophysics, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195-7290, USA
- Computational Neuroscience Center, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195-7290, USA
- Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195-7290, USA
- Computational Neuroscience Center, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195-7290, USA
24
Znamenskiy P, Kim MH, Muir DR, Iacaruso MF, Hofer SB, Mrsic-Flogel TD. Functional specificity of recurrent inhibition in visual cortex. Neuron 2024; 112:991-1000.e8. [PMID: 38244539 DOI: 10.1016/j.neuron.2023.12.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Revised: 10/31/2023] [Accepted: 12/19/2023] [Indexed: 01/22/2024]
Abstract
In the neocortex, neural activity is shaped by the interaction of excitatory and inhibitory neurons, defined by the organization of their synaptic connections. Although connections among excitatory pyramidal neurons are sparse and functionally tuned, inhibitory connectivity is thought to be dense and largely unstructured. By measuring in vivo visual responses and synaptic connectivity of parvalbumin-expressing (PV+) inhibitory cells in mouse primary visual cortex, we show that the synaptic weights of their connections to nearby pyramidal neurons are specifically tuned according to the similarity of the cells' responses. Individual PV+ cells strongly inhibit those pyramidal cells that provide them with strong excitation and share their visual selectivity. This structured organization of inhibitory synaptic weights provides a circuit mechanism for tuned inhibition onto pyramidal cells despite dense connectivity, stabilizing activity within feature-specific excitatory ensembles while supporting competition between them.
Affiliation(s)
- Petr Znamenskiy
- Specification and Function of Neural Circuits Laboratory, The Francis Crick Institute, 1 Midland Road, London NW1 1AT, UK; Sainsbury Wellcome Centre, 25 Howland Street, London W1T 4JG, UK; Biozentrum, University of Basel, Klingelbergstrasse 70, 4056 Basel, Switzerland
- Mean-Hwan Kim
- Biozentrum, University of Basel, Klingelbergstrasse 70, 4056 Basel, Switzerland
- Dylan R Muir
- Biozentrum, University of Basel, Klingelbergstrasse 70, 4056 Basel, Switzerland
- Sonja B Hofer
- Sainsbury Wellcome Centre, 25 Howland Street, London W1T 4JG, UK; Biozentrum, University of Basel, Klingelbergstrasse 70, 4056 Basel, Switzerland
- Thomas D Mrsic-Flogel
- Sainsbury Wellcome Centre, 25 Howland Street, London W1T 4JG, UK; Biozentrum, University of Basel, Klingelbergstrasse 70, 4056 Basel, Switzerland
25
Ecker A, Egas Santander D, Bolaños-Puchet S, Isbister JB, Reimann MW. Cortical cell assemblies and their underlying connectivity: An in silico study. PLoS Comput Biol 2024; 20:e1011891. [PMID: 38466752 PMCID: PMC10927091 DOI: 10.1371/journal.pcbi.1011891] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2023] [Accepted: 02/05/2024] [Indexed: 03/13/2024] Open
Abstract
Recent developments in experimental techniques have enabled simultaneous recordings from thousands of neurons, making it possible to study functional cell assemblies. However, determining the patterns of synaptic connectivity giving rise to these assemblies remains challenging. To address this, we developed a complementary, simulation-based approach using a detailed, large-scale cortical network model. Using a combination of established methods, we detected functional cell assemblies from the stimulus-evoked spiking activity of 186,665 neurons. We studied how the structure of synaptic connectivity underlies assembly composition, quantifying the effects of thalamic innervation, recurrent connectivity, and the spatial arrangement of synapses on dendrites. We determined that these features reduce the uncertainty of a neuron's assembly membership by up to 30%, 22%, and 10%, respectively. The detected assemblies were activated in a stimulus-specific sequence and were grouped based on their position in the sequence. We found that the different groups were affected to different degrees by the structural features we considered. Additionally, connectivity was more predictive of assembly membership if its direction aligned with the temporal order of assembly activation, if it originated from strongly interconnected populations, and if synapses clustered on dendritic branches. In summary, reversing Hebb's postulate, we showed how cells that are wired together fire together, quantifying how connectivity patterns interact to shape the emergence of assemblies. This includes a qualitative aspect of connectivity: not just the amount but also the local structure matters, from dendritic clustering at the subcellular level to the presence of specific network motifs.
Affiliation(s)
- András Ecker
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Daniela Egas Santander
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Sirio Bolaños-Puchet
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- James B. Isbister
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Michael W. Reimann
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
26
Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. [PMID: 38145591 DOI: 10.1016/j.plrev.2023.12.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2023] [Accepted: 12/10/2023] [Indexed: 12/27/2023]
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex networks, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy.
- J M Buldú
- Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain
27
Lin A, Yang R, Dorkenwald S, Matsliah A, Sterling AR, Schlegel P, Yu SC, McKellar CE, Costa M, Eichler K, Bates AS, Eckstein N, Funke J, Jefferis GSXE, Murthy M. Network Statistics of the Whole-Brain Connectome of Drosophila. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.07.29.551086. [PMID: 37547019 PMCID: PMC10402125 DOI: 10.1101/2023.07.29.551086] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/08/2023]
Abstract
Brains comprise complex networks of neurons and connections. Network analysis applied to the wiring diagrams of brains can offer insights into how brains support computations and regulate information flow. The completion of the first whole-brain connectome of an adult Drosophila, the largest connectome to date, containing 130,000 neurons and millions of connections, offers an unprecedented opportunity to analyze its network properties and topological features. To gain insights into local connectivity, we computed the prevalence of two- and three-node network motifs, examined their strengths and neurotransmitter compositions, and compared these topological metrics with wiring diagrams of other animals. We discovered that the network of the fly brain displays rich-club organization, with a large population (30% of the connectome) of highly connected neurons. We identified subsets of rich-club neurons that may serve as integrators or broadcasters of signals. Finally, we examined subnetworks based on 78 anatomically defined brain regions or neuropils. These data products are shared within the FlyWire Codex and will serve as a foundation for models and experiments exploring the relationship between neural activity and anatomical structure.
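As a sketch of one of the topological metrics mentioned above, the following computes an unnormalized rich-club coefficient for a small directed graph: the density of connections among nodes whose total degree exceeds k. The function and toy graph are illustrative, not FlyWire data:

```python
import numpy as np

# Unnormalized rich-club coefficient phi(k) for a directed binary graph:
# the fraction of possible edges present among nodes with degree > k.
def rich_club(A, k):
    deg = A.sum(0) + A.sum(1)          # total degree = in-degree + out-degree
    rich = np.where(deg > k)[0]        # candidate "club" members
    n = len(rich)
    if n < 2:
        return 0.0
    sub = A[np.ix_(rich, rich)]        # subgraph among club members
    return sub.sum() / (n * (n - 1))   # density of directed edges

# Toy example: a fully connected 3-node core plus two peripheral nodes.
A = np.zeros((5, 5), dtype=int)
for i in range(3):
    for j in range(3):
        if i != j:
            A[i, j] = 1                # dense core
A[3, 0] = A[4, 1] = 1                  # sparse periphery
print(rich_club(A, k=3))               # core nodes form a perfect clique
```

In practice this raw coefficient is compared against degree-preserving random surrogates to decide whether the club is stronger than expected by chance.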
Affiliation(s)
- Albert Lin
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Center for the Physics of Biological Function, Princeton University, Princeton, NJ, USA
- Runzhe Yang
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Computer Science Department, Princeton University, Princeton, NJ, USA
- Sven Dorkenwald
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Computer Science Department, Princeton University, Princeton, NJ, USA
- Arie Matsliah
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Amy R Sterling
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Philipp Schlegel
- Neurobiology Division, MRC Laboratory of Molecular Biology, Cambridge, UK
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, UK
- Szi-Chieh Yu
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Claire E McKellar
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Marta Costa
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, UK
- Katharina Eichler
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, UK
- Alexander Shakeel Bates
- Neurobiology Division, MRC Laboratory of Molecular Biology, Cambridge, UK
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, UK
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Nils Eckstein
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, USA
- Jan Funke
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, USA
- Gregory S X E Jefferis
- Neurobiology Division, MRC Laboratory of Molecular Biology, Cambridge, UK
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, UK
- Mala Murthy
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
28
Liu Y, Seguin C, Betzel RF, Akarca D, Di Biase MA, Zalesky A. A generative model of the connectome with dynamic axon growth. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.02.23.581824. [PMID: 38464116 PMCID: PMC10925171 DOI: 10.1101/2024.02.23.581824] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/12/2024]
Abstract
Connectome generative models, otherwise known as generative network models, provide insight into the wiring principles underpinning brain network organization. While these models can approximate numerous statistical properties of empirical networks, they typically fail to explicitly characterize an important contributor to brain organization: axonal growth. Emulating chemoaffinity-guided axonal growth, we provide a novel generative model in which axons dynamically steer their direction of propagation based on distance-dependent chemoattractive forces acting on their growth cones. This simple dynamic growth mechanism, despite being solely geometry-dependent, is shown to generate axonal fiber bundles with brain-like geometry and features of complex network architecture consistent with the human brain, including lognormally distributed connectivity weights, scale-free nodal degrees, small-worldness, and modularity. We demonstrate that our model parameters can be fitted to individual connectomes, enabling connectome dimensionality reduction and comparison of parameters between groups. Our work offers an opportunity to bridge studies of axon guidance and connectome development, providing new avenues for understanding neural development from a computational perspective.
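The chemoattractive steering idea can be sketched in a few lines: a growth cone takes fixed-length steps while its heading is pulled toward distance-weighted attraction vectors from target points. The force law, the momentum term, and all constants below are assumptions for illustration, not the paper's model:

```python
import numpy as np

# Sketch of distance-dependent chemoattractive steering of a growth cone.
# Force law (~1/d falloff) and all constants are illustrative assumptions.
def grow(start, targets, step=0.05, n_steps=200, momentum=0.7):
    pos = np.asarray(start, dtype=float)
    heading = np.array([1.0, 0.0])                 # initial growth direction
    for _ in range(n_steps):
        vecs = targets - pos
        d = np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-6)
        force = (vecs / d**2).sum(axis=0)          # attraction decays with distance
        heading = momentum * heading + (1 - momentum) * force
        heading /= np.linalg.norm(heading)         # constant growth speed
        pos = pos + step * heading
    return pos

targets = np.array([[1.0, 1.0]])
end = grow([0.0, 0.0], targets)
print(np.linalg.norm(end - targets[0]))  # small: the trajectory homes in on the target
```

With several targets, the summed forces bend trajectories into bundles, which is the qualitative mechanism the model exploits.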
Affiliation(s)
- Yuanzhe Liu
- Department of Biomedical Engineering, Faculty of Engineering & Information Technology, The University of Melbourne, Melbourne, VIC, Australia
- Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne, Melbourne, VIC, Australia
- Caio Seguin
- Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne, Melbourne, VIC, Australia
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Richard F. Betzel
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Danyal Akarca
- Department of Psychiatry, University of Cambridge, Cambridge, UK
- MRC Cognition and Brain Sciences Unit, University of Cambridge, UK
- Maria A. Di Biase
- Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne, Melbourne, VIC, Australia
- Department of Psychiatry, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
- Andrew Zalesky
- Department of Biomedical Engineering, Faculty of Engineering & Information Technology, The University of Melbourne, Melbourne, VIC, Australia
- Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne, Melbourne, VIC, Australia
29
Sridhar S, Clayton RH. Fibroblast mediated dynamics in diffusively uncoupled myocytes: a simulation study using 2-cell motifs. Sci Rep 2024; 14:4493. [PMID: 38396245 PMCID: PMC10891142 DOI: 10.1038/s41598-024-54564-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2023] [Accepted: 02/14/2024] [Indexed: 02/25/2024] Open
Abstract
In healthy hearts, myocytes are typically coupled to their nearest neighbours through gap junctions. Under pathological conditions such as fibrosis, in scar tissue, or across ablation lines, myocytes can become uncoupled from their neighbours. Electrical conduction may still occur via fibroblasts, which not only couple proximal myocytes but can also connect otherwise unconnected regions. We hypothesise that such coupling can alter conduction between myocytes by introducing delays or by initiating premature stimuli that can potentially result in reentry or conduction block. To test this hypothesis, we developed several 2-cell motifs and investigated the effect of fibroblast-mediated electrical coupling between uncoupled myocytes. We identified various regimes of myocyte behaviour that depend on the strength of gap-junctional conductance, the connection topology, and the parameters of the myocyte and fibroblast models. These motifs are useful for developing a mechanistic understanding of the effect of long-distance coupling on myocyte dynamics, and they enable characterisation of the interactions between features such as myocyte and fibroblast properties, coupling strengths, and pacing period. They are computationally inexpensive and allow for the incorporation of spatial effects such as conduction velocity. They provide a framework for constructing scar tissue boundaries and enable linking of cellular-level interactions with scar-induced arrhythmia.
Affiliation(s)
- S Sridhar
- Department of Computer Science, University of Sheffield, Sheffield, UK.
- Richard H Clayton
- Department of Computer Science, University of Sheffield, Sheffield, UK
30
Jagdev G, Yu N. Noise-induced synchrony of two-neuron motifs with asymmetric noise and uneven coupling. Front Comput Neurosci 2024; 18:1347748. [PMID: 38463242 PMCID: PMC10920254 DOI: 10.3389/fncom.2024.1347748] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2023] [Accepted: 02/06/2024] [Indexed: 03/12/2024] Open
Abstract
Synchronous dynamics play a pivotal role in various cognitive processes. Previous studies have extensively investigated noise-induced synchrony in coupled neural oscillators, with a focus on scenarios featuring uniform noise and equal coupling strengths between neurons. However, real-world and experimental settings frequently exhibit heterogeneity, including deviations from uniformity in coupling and noise patterns. This study investigates noise-induced synchrony in a pair of coupled excitable neurons operating in a heterogeneous environment, where noise intensity and coupling strength can vary independently. Each neuron is an excitable oscillator, represented by the normal form of the Hopf bifurcation (HB). In the absence of a stimulus, these neurons remain quiescent but can be triggered by perturbations such as noise. Typically, noise and coupling exert opposing influences on neural dynamics, with noise diminishing coherence and coupling promoting synchrony. Our results illustrate the ability of asymmetric noise to induce synchronization in such coupled neural oscillators, with synchronization becoming increasingly pronounced as the system approaches the excitation threshold (i.e., the HB). Additionally, we find that uneven coupling strengths and noise asymmetries are factors that can promote in-phase synchrony. Notably, we identify an optimal synchronization state when the absolute difference in coupling strengths is maximized, regardless of the specific coupling strengths chosen. Furthermore, we establish a robust relationship between coupling asymmetry and the noise intensity required to maximize synchronization. Specifically, when one oscillator (the receiver neuron) receives a strong input from the other (the source neuron), and the source receives significantly weaker or no input from the receiver, synchrony is maximized when the noise applied to the receiver is much weaker than that applied to the source.
These findings reveal a significant connection between uneven coupling and asymmetric noise in coupled neuronal oscillators, shedding light on the enhanced propensity for in-phase synchronization in two-neuron motifs with one-way connections compared to those with two-way connections. This research contributes to a deeper understanding of the functional roles that network motifs may serve within neuronal dynamics.
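A minimal numerical sketch of the setup described: two Hopf normal-form units below threshold (mu < 0), coupled unevenly and driven by independent noise sources, with synchrony summarized by a phase-locking index. All parameter values are illustrative, not those of the study:

```python
import numpy as np

# Two excitable Hopf normal-form oscillators with uneven coupling (g12, g21)
# and asymmetric noise intensities (D1, D2); Euler-Maruyama integration.
rng = np.random.default_rng(0)

def phase_locking(D1, D2, g12, g21, mu=-0.05, w=1.0, T=200.0, dt=0.01):
    n = int(T / dt)
    z1 = z2 = 0.1 + 0.0j                      # complex state of each unit
    dphi = np.empty(n)
    for i in range(n):
        f1 = (mu + 1j * w - abs(z1) ** 2) * z1 + g12 * (z2 - z1)
        f2 = (mu + 1j * w - abs(z2) ** 2) * z2 + g21 * (z1 - z2)
        noise1 = rng.standard_normal() + 1j * rng.standard_normal()
        noise2 = rng.standard_normal() + 1j * rng.standard_normal()
        z1 = z1 + f1 * dt + np.sqrt(2 * D1 * dt) * noise1
        z2 = z2 + f2 * dt + np.sqrt(2 * D2 * dt) * noise2
        dphi[i] = np.angle(z1) - np.angle(z2)
    return abs(np.exp(1j * dphi).mean())      # 1 = perfect phase locking, 0 = none

# One-way motif: the receiver (unit 1) gets strong input and weak noise,
# while the source (unit 2) gets no feedback and strong noise.
r = phase_locking(D1=0.001, D2=0.05, g12=0.5, g21=0.0)
print(r)
```

Sweeping D1/D2 and g12/g21 in such a sketch is one way to reproduce, qualitatively, the asymmetry effects the abstract describes.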
Affiliation(s)
- Gurpreet Jagdev
- Department of Mathematics, Toronto Metropolitan University, Toronto, ON, Canada
- Na Yu
- Department of Mathematics, Toronto Metropolitan University, Toronto, ON, Canada
- Institute of Biomedical Engineering, Science and Technology (iBEST), Unity Health Toronto, and Toronto Metropolitan University, Toronto, ON, Canada
31
Liu YH, Baratin A, Cornford J, Mihalas S, Shea-Brown E, Lajoie G. How connectivity structure shapes rich and lazy learning in neural circuits. ARXIV 2024:arXiv:2310.08513v2. [PMID: 37873007 PMCID: PMC10593070] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 10/25/2023]
Abstract
In theoretical neuroscience, recent work leverages deep learning tools to explore how a network's attributes critically influence its learning dynamics. Notably, initial weight distributions with small (resp. large) variance may yield a rich (resp. lazy) regime, where significant (resp. minor) changes to network states and representations are observed over the course of learning. However, in biology, neural circuit connectivity can exhibit a low-rank structure and therefore differs markedly from the random initializations generally used in these studies. As such, here we investigate how the structure of the initial weights, in particular their effective rank, influences the network's learning regime. Through both empirical and theoretical analyses, we discover that high-rank initializations typically yield smaller network changes indicative of lazier learning, a finding we also confirm with experimentally driven initial connectivity in recurrent neural networks. Conversely, low-rank initializations bias networks towards richer learning. Importantly, however, as an exception to this rule, we find that lazier learning can still occur with a low-rank initialization that aligns with task and data statistics. Our research highlights the pivotal role of initial weight structure in shaping learning regimes, with implications for the metabolic costs of plasticity and the risk of catastrophic forgetting.
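The effective rank at the center of this study can be computed directly; one common definition (the exponentiated entropy of the normalized singular-value spectrum, due to Roy and Vetterli) is sketched below and may differ in detail from the paper's exact choice:

```python
import numpy as np

# Effective rank of a weight matrix: exp of the entropy of its normalized
# singular-value distribution. Equal singular values give erank = full rank;
# a single dominant direction gives erank close to 1.
def effective_rank(W):
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()                       # spectrum as a probability distribution
    p = p[p > 0]
    return np.exp(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
n = 100
W_rand = rng.standard_normal((n, n)) / np.sqrt(n)   # classic random init
u = rng.standard_normal((n, 1))
W_low = u @ u.T                                     # rank-1 structured init
print(effective_rank(W_rand))   # large: many comparable directions ("lazy"-leaning)
print(effective_rank(W_low))    # near 1: one dominant direction ("rich"-leaning)
```

This makes the abstract's dichotomy concrete: the two matrices have the same size, but their effective ranks sit at opposite ends of the spectrum.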
32
Metzner C, Yamakou ME, Voelkl D, Schilling A, Krauss P. Quantifying and Maximizing the Information Flux in Recurrent Neural Networks. Neural Comput 2024; 36:351-384. [PMID: 38363658 DOI: 10.1162/neco_a_01651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Accepted: 12/04/2023] [Indexed: 02/18/2024]
Abstract
Free-running recurrent neural networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified with the mutual information I[x(t), x(t+1)] between subsequent system states x(t) and x(t+1). Although previous studies have shown that I depends on the statistics of the network's connection weights, it is unclear how to maximize I systematically and how to quantify the flux in large systems, where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems. We find that in networks with moderately strong connections, the mutual information I is approximately a monotonic transformation of the root-mean-square averaged Pearson correlations between neuron pairs, a quantity that can be efficiently computed even in large systems. Furthermore, evolutionary maximization of I[x(t), x(t+1)] reveals a general design principle for the weight matrices, enabling the systematic construction of systems with a high spontaneous information flux. Finally, we simultaneously maximize information flux and the mean period length of cyclic attractors in the state space of these dynamical networks. Our results are potentially useful for the construction of RNNs that serve as short-time memories or pattern generators.
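The proxy quantity highlighted above (the root-mean-square of pairwise Pearson correlations) is cheap to compute even for large populations. The sketch below evaluates it on surrogate binary states, with a shared latent input standing in for recurrent coupling; the generative toy model and constants are illustrative, not the paper's Boltzmann machines:

```python
import numpy as np

# Root-mean-square of pairwise Pearson correlations between units,
# the efficiently computable proxy for the information flux I.
def rms_correlation(states):
    C = np.corrcoef(states.T)              # unit-by-unit correlation matrix
    iu = np.triu_indices_from(C, k=1)      # off-diagonal pairs only
    return np.sqrt(np.mean(C[iu] ** 2))

# Surrogate correlated binary states: threshold a shared latent signal
# mixed with private noise (purely illustrative generative model).
rng = np.random.default_rng(1)
T, N = 2000, 10
latent = rng.standard_normal((T, 1))
private = rng.standard_normal((T, N))
states = (0.8 * latent + 0.6 * private > 0).astype(float)
r_rms = rms_correlation(states)
print(r_rms)  # in (0, 1); larger values indicate stronger pairwise coupling
```

For a network of n units this costs O(T n^2), versus the exponential cost of the exact mutual information over 2^n states.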
Affiliation(s)
- Claus Metzner
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Biophysics Lab, Friedrich-Alexander University of Erlangen-Nuremberg, 91054 Erlangen, Germany
- Marius E Yamakou
- Department of Data Science, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Dennis Voelkl
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Achim Schilling
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Patrick Krauss
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
33
Spaeth A, Haussler D, Teodorescu M. Model-Agnostic Neural Mean Field With The Refractory SoftPlus Transfer Function. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.02.05.579047. [PMID: 38370695 PMCID: PMC10871173 DOI: 10.1101/2024.02.05.579047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/20/2024]
Abstract
Due to the complexity of neuronal networks and the nonlinear dynamics of individual neurons, it is challenging to develop a systems-level model that is accurate enough to be useful yet tractable enough to apply. Mean-field models, which extrapolate from single-neuron descriptions to large-scale models, can be derived from the neuron's transfer function, which gives its firing rate as a function of its synaptic input. However, analytically derived transfer functions are applicable only to the neurons and noise models from which they were originally derived. In recent work, approximate transfer functions have been empirically derived by fitting a sigmoidal curve, which imposes a maximum firing rate and applies only in the diffusion limit, restricting applications. In this paper, we propose an approximate transfer function called Refractory SoftPlus, which is simple yet applicable to a broad variety of neuron types. Refractory SoftPlus activation functions allow the derivation of simple, empirically approximated mean-field models from simulation results, enabling prediction of the response of a network of randomly connected neurons to a time-varying external stimulus with a high degree of accuracy. These models also support an accurate approximate bifurcation analysis as a function of the level of recurrent input. Finally, the model works without assuming large presynaptic rates or small postsynaptic potential size, allowing mean-field models to be developed even for populations with large interaction terms.
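One plausible form of such a transfer function, a SoftPlus rate saturated by an absolute refractory period so that the output can never exceed 1/t_ref, is sketched below; the paper's exact parameterization may well differ:

```python
import numpy as np

# Sketch of a refractory-softplus-style transfer function: a SoftPlus rate
# r(u) passed through the standard refractory saturation r / (1 + t_ref * r),
# which bounds the output below 1/t_ref. Parameter names are assumptions.
def refractory_softplus(u, gain=1.0, scale=10.0, t_ref=0.002):
    r = scale * np.log1p(np.exp(gain * u))   # SoftPlus: smooth rectifier (Hz)
    return r / (1.0 + t_ref * r)             # refractory saturation

u = np.linspace(-5, 5, 11)                   # synaptic input (arbitrary units)
rates = refractory_softplus(u)
print(rates)  # monotonically increasing, bounded below 1/t_ref = 500 Hz
```

Unlike a fitted sigmoid, this form has no built-in inflection forced by the fit: the saturation comes only from the refractory term, and for small rates it reduces to plain SoftPlus.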
Affiliation(s)
- Alex Spaeth
- Electrical and Computer Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States
- David Haussler
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States
- Biomolecular Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States
- Mircea Teodorescu
- Electrical and Computer Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, United States
- Biomolecular Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, United States
34
Sammons RP, Vezir M, Moreno-Velasquez L, Cano G, Orlando M, Sievers M, Grasso E, Metodieva VD, Kempter R, Schmidt H, Schmitz D. Structure and function of the hippocampal CA3 module. Proc Natl Acad Sci U S A 2024; 121:e2312281120. [PMID: 38289953 PMCID: PMC10861929 DOI: 10.1073/pnas.2312281120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2023] [Accepted: 11/01/2023] [Indexed: 02/01/2024] Open
Abstract
The hippocampal formation is crucial for learning and memory, with submodule CA3 thought to be the substrate of pattern completion. However, the underlying synaptic and computational mechanisms of this network are not well understood. Here, we perform circuit reconstruction of a CA3 module using three-dimensional (3D) electron microscopy data and combine this with functional connectivity recordings and computational simulations to determine possible CA3 network mechanisms. Direct assessment of connectivity, with both physiological recordings and structural 3D EM, revealed a high connectivity rate, severalfold higher than previously assumed. Mathematical modelling indicated that such CA3 networks can robustly generate pattern completion and replay memory sequences. In conclusion, our data demonstrate that the connectivity scheme of this hippocampal submodule is well suited for efficient memory storage and retrieval.
Affiliation(s)
- Rosanna P. Sammons
- Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Mourat Vezir
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main 60528, Germany
- Laura Moreno-Velasquez
- Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Gaspar Cano
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin 10115, Germany
- Marta Orlando
- Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Meike Sievers
- Department of Connectomics, Max Planck Institute for Brain Research, Frankfurt am Main 60438, Germany
- Eleonora Grasso
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main 60528, Germany
- Verjinia D. Metodieva
- Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Richard Kempter
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin 10115, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin 10115, Germany
- Einstein Center for Neurosciences Berlin, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Helene Schmidt
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main 60528, Germany
- Dietmar Schmitz
- Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin 10115, Germany
- Einstein Center for Neurosciences Berlin, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Berlin 10117, Germany
- German Center for Neurodegenerative Diseases Berlin, Berlin 10117, Germany
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin 13125, Germany
35
Potter C, Bassi C, Runyan CA. Simultaneous interneuron labeling reveals population-level interactions among parvalbumin, somatostatin, and pyramidal neurons in cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.01.09.523298. [PMID: 36711788 PMCID: PMC9882008 DOI: 10.1101/2023.01.09.523298] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
Cortical interneurons shape network activity in cell type-specific ways, and are also influenced by interactions with other cell types. These specific cell-type interactions are understudied, as transgenic labeling methods typically restrict labeling to one neuron type at a time. Although recent methods have enabled post-hoc identification of cell types, these are not available to many labs. Here, we present a method to distinguish between two red fluorophores in vivo, which allowed imaging of activity in somatostatin (SOM), parvalbumin (PV), and putative pyramidal neurons (PYR) in mouse association cortex. We compared population events of elevated activity and observed that the PYR network state corresponded to the ratio between mean SOM and PV neuron activity, demonstrating the importance of simultaneous labeling to explain dynamics. These results extend previous findings in sensory cortex, as activity became sparser and less correlated when the ratio between SOM and PV activity was high.
Affiliation(s)
- Christian Potter
- Department of Neuroscience
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA
- Constanza Bassi
- Department of Neuroscience
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA
- Caroline A. Runyan
- Department of Neuroscience
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA
36
Richardson MJE. Linear and nonlinear integrate-and-fire neurons driven by synaptic shot noise with reversal potentials. Phys Rev E 2024; 109:024407. [PMID: 38491664] [DOI: 10.1103/physreve.109.024407]
Abstract
The steady-state firing rate and firing-rate response of the leaky and exponential integrate-and-fire models receiving synaptic shot noise with excitatory and inhibitory reversal potentials are examined. For the particular case where the underlying synaptic conductances are exponentially distributed, it is shown that the master equation for a population of such model neurons can be reduced from an integrodifferential form to a more tractable set of three differential equations. The system is nevertheless more challenging analytically than for current-based synapses: analytical results are provided where possible, and an efficient numerical scheme and code are supplied for the remaining quantities. The increased tractability of the framework developed supports an ongoing critical comparison between models in which synapses are treated with and without reversal potentials, such as recently in the context of networks with balanced excitatory and inhibitory conductances.
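As an illustrative aside (not the paper's reduction to three differential equations), the model class itself can be sketched with a direct Monte Carlo simulation: a leaky integrate-and-fire voltage receives Poisson trains of excitatory and inhibitory "shots", each carrying an exponentially distributed conductance jump that moves the voltage partway toward the corresponding reversal potential. All parameter values below are invented for illustration.

```python
import math
import random

def lif_shot_noise_rate(rate_e=800.0, rate_i=200.0, a_e=0.02, a_i=0.02,
                        tau=0.020, E_L=-0.070, E_e=0.0, E_i=-0.080,
                        V_th=-0.050, V_re=-0.060, T=20.0, dt=1e-4, seed=1):
    """Monte Carlo estimate (spikes/s) of the steady-state firing rate of a
    leaky integrate-and-fire neuron driven by synaptic shot noise with
    reversal potentials. Each presynaptic event carries an exponentially
    distributed dimensionless conductance jump h; the voltage moves a
    fraction (1 - exp(-h)) of the way to the synapse's reversal potential.
    At most one event of each type per time step, so dt must be small."""
    rng = random.Random(seed)
    V, spikes = E_L, 0
    for _ in range(int(T / dt)):
        V += dt * (E_L - V) / tau                  # leak relaxation
        if rng.random() < rate_e * dt:             # excitatory shot
            V += (1.0 - math.exp(-rng.expovariate(1.0 / a_e))) * (E_e - V)
        if rng.random() < rate_i * dt:             # inhibitory shot
            V += (1.0 - math.exp(-rng.expovariate(1.0 / a_i))) * (E_i - V)
        if V >= V_th:                              # threshold: spike and reset
            spikes += 1
            V = V_re
    return spikes / T

rate = lif_shot_noise_rate()
```

With excitation switched off, the voltage can never cross threshold from below (inhibitory jumps and the leak both keep it at or below rest), so the estimated rate is exactly zero; with the default drive the neuron fires at a positive, fluctuation-driven rate.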
Affiliation(s)
- Magnus J E Richardson
- Warwick Mathematics Institute, University of Warwick, Coventry CV4 7AL, United Kingdom
37
Chwiłka M, Karbowski J. Explicit mutual information for simple networks and neurons with lognormal activities. Phys Rev E 2024; 109:014117. [PMID: 38366499] [DOI: 10.1103/physreve.109.014117]
Abstract
Networks with stochastic variables described by heavy-tailed lognormal distribution are ubiquitous in nature, and hence they deserve an exact information-theoretic characterization. We derive analytical formulas for mutual information between elements of different networks with correlated lognormally distributed activities. In a special case, we find an explicit expression for mutual information between neurons when neural activities and synaptic weights are lognormally distributed, as suggested by experimental data. Comparison of this expression with the case when these two variables have short tails reveals that mutual information with heavy tails for neurons and synapses is generally larger and can diverge for some finite variances in presynaptic firing rates and synaptic weights. This result suggests that evolution might prefer brains with heterogeneous dynamics to optimize information processing.
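A small illustration of one ingredient behind such explicit formulas (a standard identity, not the authors' full network result): mutual information is invariant under invertible transforms applied to each variable separately, so for jointly lognormal activities X = exp(U), Y = exp(V) the mutual information reduces to the bivariate-Gaussian expression in the correlation ρ of (U, V), and it diverges as |ρ| → 1, echoing the divergences noted in the abstract.

```python
import math

def mi_lognormal(rho):
    """Exact mutual information (nats) between jointly lognormal variables
    X = exp(U) and Y = exp(V), where (U, V) is bivariate Gaussian with
    correlation rho. Because MI is invariant under the invertible marginal
    transform exp(), the Gaussian result carries over unchanged:
    I(X; Y) = -0.5 * ln(1 - rho^2), diverging as |rho| -> 1."""
    return -0.5 * math.log(1.0 - rho * rho)

print(mi_lognormal(0.8))  # MI for strongly correlated lognormal activities
```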
Affiliation(s)
- Maurycy Chwiłka
- Department of Mathematics, Informatics, and Mechanics, Institute of Applied Mathematics and Mechanics, University of Warsaw, Ulica Banacha 2, 02-097 Warsaw, Poland
- Jan Karbowski
- Department of Mathematics, Informatics, and Mechanics, Institute of Applied Mathematics and Mechanics, University of Warsaw, Ulica Banacha 2, 02-097 Warsaw, Poland
38
van der Molen T, Spaeth A, Chini M, Bartram J, Dendukuri A, Zhang Z, Bhaskaran-Nair K, Blauvelt LJ, Petzold LR, Hansma PK, Teodorescu M, Hierlemann A, Hengen KB, Hanganu-Opatz IL, Kosik KS, Sharf T. Protosequences in human cortical organoids model intrinsic states in the developing cortex. bioRxiv 2023:2023.12.29.573646. [PMID: 38234832] [PMCID: PMC10793448] [DOI: 10.1101/2023.12.29.573646]
Abstract
Neuronal firing sequences are thought to be the basic building blocks of neural coding and information broadcasting within the brain. However, when sequences emerge during neurodevelopment remains unknown. We demonstrate that structured firing sequences are present in spontaneous activity of human brain organoids and ex vivo neonatal brain slices from the murine somatosensory cortex. We observed a balance between temporally rigid and flexible firing patterns that are emergent phenomena in human brain organoids and early postnatal murine somatosensory cortex, but not in primary dissociated cortical cultures. Our findings suggest that temporal sequences do not arise in an experience-dependent manner, but are rather constrained by an innate preconfigured architecture established during neurogenesis. These findings highlight the potential for brain organoids to further explore how exogenous inputs can be used to refine neuronal circuits and enable new studies into the genetic mechanisms that govern assembly of functional circuitry during early human brain development.
Affiliation(s)
- Tjitse van der Molen
- Neuroscience Research Institute, University of California Santa Barbara, Santa Barbara, CA 93106, USA
- Department of Molecular, Cellular and Developmental Biology, University of California Santa Barbara, Santa Barbara, CA 93106, USA
- Alex Spaeth
- UC Santa Cruz Genomics Institute, University of California Santa Cruz, Santa Cruz, CA 95060, USA
- Department of Electrical and Computer Engineering, University of California Santa Cruz, Santa Cruz, CA 95064, USA
- Mattia Chini
- Institute of Developmental Neurophysiology, Center for Molecular Neurobiology, Hamburg Center of Neuroscience, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
- Julian Bartram
- Department of Biosystems Science and Engineering, ETH Zürich, Klingelbergstrasse 48, 4056 Basel, Switzerland
- Aditya Dendukuri
- Department of Computer Science, University of California Santa Barbara, Santa Barbara, CA 93106, USA
- Zongren Zhang
- Department of Physics, University of California Santa Barbara, Santa Barbara, CA 93106, USA
- Kiran Bhaskaran-Nair
- Department of Biology, Washington University in St. Louis, St. Louis, MO 63130, USA
- Lon J. Blauvelt
- UC Santa Cruz Genomics Institute, University of California Santa Cruz, Santa Cruz, CA 95060, USA
- Linda R. Petzold
- Department of Computer Science, University of California Santa Barbara, Santa Barbara, CA 93106, USA
- Paul K. Hansma
- Neuroscience Research Institute, University of California Santa Barbara, Santa Barbara, CA 93106, USA
- Department of Physics, University of California Santa Barbara, Santa Barbara, CA 93106, USA
- Mircea Teodorescu
- UC Santa Cruz Genomics Institute, University of California Santa Cruz, Santa Cruz, CA 95060, USA
- Department of Electrical and Computer Engineering, University of California Santa Cruz, Santa Cruz, CA 95064, USA
- Andreas Hierlemann
- Department of Biosystems Science and Engineering, ETH Zürich, Klingelbergstrasse 48, 4056 Basel, Switzerland
- Keith B. Hengen
- Department of Biology, Washington University in St. Louis, St. Louis, MO 63130, USA
- Ileana L. Hanganu-Opatz
- Institute of Developmental Neurophysiology, Center for Molecular Neurobiology, Hamburg Center of Neuroscience, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
- Kenneth S. Kosik
- Neuroscience Research Institute, University of California Santa Barbara, Santa Barbara, CA 93106, USA
- Department of Molecular, Cellular and Developmental Biology, University of California Santa Barbara, Santa Barbara, CA 93106, USA
- Tal Sharf
- UC Santa Cruz Genomics Institute, University of California Santa Cruz, Santa Cruz, CA 95060, USA
- Department of Biomolecular Engineering, University of California Santa Cruz, Santa Cruz, CA 95064, USA
- Institute for the Biology of Stem Cells, University of California Santa Cruz, Santa Cruz, CA 95064, USA
39
Ohki T, Kunii N, Chao ZC. Efficient, continual, and generalized learning in the brain - neural mechanism of Mental Schema 2.0. Rev Neurosci 2023; 34:839-868. [PMID: 36960579] [DOI: 10.1515/revneuro-2022-0137]
Abstract
There has been tremendous progress in artificial neural networks (ANNs) over the past decade; however, the gap between ANNs and the biological brain as a learning device remains large. With the goal of closing this gap, this paper reviews learning mechanisms in the brain by focusing on three important issues in ANN research: efficiency, continuity, and generalization. We first discuss how the brain uses a variety of self-organizing mechanisms to maximize learning efficiency, with a focus on the role of the brain's spontaneous activity in shaping synaptic connections to facilitate spatiotemporal learning and numerical processing. We then examine the neuronal mechanisms that enable lifelong continual learning, with a focus on memory replay during sleep and its implementation in brain-inspired ANNs. Finally, we explore how the brain generalizes learned knowledge to new situations, particularly from the mathematical perspective of topological generalization. Beyond a systematic comparison of learning mechanisms between the brain and ANNs, we propose "Mental Schema 2.0," a new computational property underlying the brain's unique learning ability that can be implemented in ANNs.
Affiliation(s)
- Takefumi Ohki
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Tokyo 113-0033, Japan
- Naoto Kunii
- Department of Neurosurgery, The University of Tokyo, Tokyo 113-0033, Japan
- Zenas C Chao
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Tokyo 113-0033, Japan
40
Karbowski J, Urban P. Information encoded in volumes and areas of dendritic spines is nearly maximal across mammalian brains. Sci Rep 2023; 13:22207. [PMID: 38097675] [PMCID: PMC10721930] [DOI: 10.1038/s41598-023-49321-9]
Abstract
Many experiments suggest that long-term information associated with neuronal memory resides collectively in dendritic spines. However, spines can have a limited size due to metabolic and neuroanatomical constraints, which should effectively limit the amount of information encoded in excitatory synapses. This study investigates how much information can be stored in the population of sizes of dendritic spines, and whether that storage is optimal in any sense. It is shown here, using empirical data for several mammalian brains across different regions and physiological conditions, that dendritic spines nearly maximize the entropy contained in their volumes and surface areas for a given mean size in cortical and hippocampal regions. Although both short- and heavy-tailed fitting distributions approach [Formula: see text] of maximal entropy in the majority of cases, the best maximization is obtained primarily for the short-tailed gamma distribution. We find that most empirical ratios of standard deviation to mean for spine volumes and areas are in the range [Formula: see text], which is close to the theoretical optimal ratios coming from entropy maximization for gamma and lognormal distributions. On average, the highest entropy is contained in spine length ([Formula: see text] bits per spine), and the lowest in spine volume and area ([Formula: see text] bits), although the latter two are closer to optimality. In contrast, we find that entropy density (entropy per spine size) is always suboptimal. Our results suggest that spine sizes are almost as random as possible given the constraint on their size, and moreover that the general principle of entropy maximization is applicable and potentially useful for information and memory storage in the population of cortical and hippocampal excitatory synapses, and for predicting their morphological properties.
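The entropy-maximization comparison can be sketched numerically (an illustrative aside, not the paper's analysis; the mean and coefficient of variation below are arbitrary choices): at a fixed mean, the exponential distribution maximizes differential entropy on (0, ∞), and at a matched mean and CV the gamma distribution comes closer to that ceiling than the lognormal, consistent with the abstract's finding that short-tailed gamma fits achieve the best maximization.

```python
import math

def digamma(x, eps=1e-6):
    """The stdlib has no digamma; a centered difference of lgamma is
    accurate enough for this comparison."""
    return (math.lgamma(x + eps) - math.lgamma(x - eps)) / (2 * eps)

def h_gamma(mean, cv):
    """Differential entropy (nats) of a gamma distribution with the
    given mean and coefficient of variation (sd/mean)."""
    k = 1.0 / cv**2                       # shape
    theta = mean * cv**2                  # scale
    return k + math.log(theta) + math.lgamma(k) + (1 - k) * digamma(k)

def h_lognormal(mean, cv):
    """Differential entropy (nats) of a lognormal with the same mean/CV."""
    s2 = math.log(1 + cv**2)              # variance of the underlying Gaussian
    mu = math.log(mean) - s2 / 2
    return mu + 0.5 * math.log(2 * math.pi * math.e * s2)

def h_exponential(mean):
    """Maximum-entropy distribution on (0, inf) for a fixed mean."""
    return 1 + math.log(mean)

m, cv = 1.0, 0.75                         # illustrative mean spine size and CV
print(h_exponential(m), h_gamma(m, cv), h_lognormal(m, cv))
```

For these values the gamma entropy sits within roughly ten percent of the exponential ceiling, while the lognormal falls noticeably further below it.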
Affiliation(s)
- Jan Karbowski
- Institute of Applied Mathematics and Mechanics, University of Warsaw, Warsaw, Poland.
- Paulina Urban
- Laboratory of Functional and Structural Genomics, Centre of New Technologies, University of Warsaw, Warsaw, Poland
- College of Inter-Faculty Individual Studies in Mathematics and Natural Sciences, University of Warsaw, Warsaw, Poland
- Laboratory of Databases and Business Analytics, National Information Processing Institute, National Research Institute, Warsaw, Poland
41
Timonidis N, Bakker R, Rubio-Teves M, Alonso-Martínez C, Garcia-Amado M, Clascá F, Tiesinga PHE. Translating single-neuron axonal reconstructions into meso-scale connectivity statistics in the mouse somatosensory thalamus. Front Neuroinform 2023; 17:1272243. [PMID: 38107469] [PMCID: PMC10722239] [DOI: 10.3389/fninf.2023.1272243]
Abstract
Characterizing the connectomic and morphological diversity of thalamic neurons is key for better understanding how the thalamus relays sensory inputs to the cortex. The recent public release of complete single-neuron morphological reconstructions enables the analysis of previously inaccessible connectivity patterns from individual neurons. Here we focus on the Ventral Posteromedial (VPM) nucleus and characterize the full diversity of 257 VPM neurons, obtained by combining data from the MouseLight and Braintell projects. Neurons were clustered according to their most dominantly targeted cortical area and further subdivided by their jointly targeted areas. We obtained a 2D embedding of morphological diversity using the dissimilarity between all pairs of axonal trees. The curved shape of the embedding allowed us to characterize neurons by a 1-dimensional coordinate. The coordinate values were aligned both with the progression of soma position along the dorsal-ventral and lateral-medial axes and with that of axonal terminals along the posterior-anterior and medial-lateral axes, as well as with an increase in the number of branching points, distance from soma and branching width. Taken together, we have developed a novel workflow for linking three challenging aspects of connectomics, namely the topography, higher-order connectivity patterns and morphological diversity, with VPM as a test case. The workflow is linked to a unified access portal that contains the morphologies and is integrated with 2D cortical flatmap and subcortical visualization tools. The workflow and resulting processed data have been made available in Python, and can thus be used for modeling and experimentally validating new hypotheses on thalamocortical connectivity.
Affiliation(s)
- Nestor Timonidis
- Neuroinformatics Department, Donders Centre for Neuroscience, Radboud University Nijmegen, Nijmegen, Netherlands
- Rembrandt Bakker
- Neuroinformatics Department, Donders Centre for Neuroscience, Radboud University Nijmegen, Nijmegen, Netherlands
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Mario Rubio-Teves
- Department of Anatomy and Neuroscience, School of Medicine, Autónoma de Madrid University, Madrid, Spain
- Carmen Alonso-Martínez
- Department of Anatomy and Neuroscience, School of Medicine, Autónoma de Madrid University, Madrid, Spain
- Maria Garcia-Amado
- Department of Anatomy and Neuroscience, School of Medicine, Autónoma de Madrid University, Madrid, Spain
- Francisco Clascá
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Paul H. E. Tiesinga
- Neuroinformatics Department, Donders Centre for Neuroscience, Radboud University Nijmegen, Nijmegen, Netherlands
42
Breffle J, Mokashe S, Qiu S, Miller P. Multistability in neural systems with random cross-connections. Biol Cybern 2023; 117:485-506. [PMID: 38133664] [DOI: 10.1007/s00422-023-00981-w]
Abstract
Neural circuits with multiple discrete attractor states could support a variety of cognitive tasks according to both empirical data and model simulations. We assess the conditions for such multistability in neural systems using a firing rate model framework, in which clusters of similarly responsive neurons are represented as single units, which interact with each other through independent random connections. We explore the range of conditions in which multistability arises via recurrent input from other units while individual units, typically with some degree of self-excitation, lack sufficient self-excitation to become bistable on their own. We find many cases of multistability-defined as the system possessing more than one stable fixed point-in which stable states arise via a network effect, allowing subsets of units to maintain each other's activity because their net input to each other when active is sufficiently positive. In terms of the strength of within-unit self-excitation and standard deviation of random cross-connections, the region of multistability depends on the response function of units. Indeed, multistability can arise with zero self-excitation, purely through zero-mean random cross-connections, if the response function rises supralinearly at low inputs from a value near zero at zero input. We simulate and analyze finite systems, showing that the probability of multistability can peak at intermediate system size, and connect with other literature analyzing similar systems in the infinite-size limit. We find regions of multistability with a bimodal distribution for the number of active units in a stable state. Finally, we find evidence for a log-normal distribution of sizes of attractor basins, which produces Zipf's Law when enumerating the proportion of trials within which random initial conditions lead to a particular stable state of the system.
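The simplest deterministic instance of the regime described, multistability with zero self-excitation sustained purely by cross-connections, can be sketched in a few lines (illustrative parameters; two units with symmetric positive coupling rather than the paper's random zero-mean ensemble): a sigmoidal response function that is near zero at zero input supports both a near-silent state and a mutually sustained active state.

```python
import math

def phi(x, theta=0.5, k=0.1):
    """Sigmoidal response function: near zero at zero input and rising
    steeply (supralinearly) at low inputs, as required for network-driven
    multistability without self-excitation."""
    return 1.0 / (1.0 + math.exp(-(x - theta) / k))

def settle(r0, w=1.0, steps=2000, dt=0.01):
    """Euler-integrate dr_i/dt = -r_i + phi(sum_j W_ij r_j) for two units
    coupled only through cross-connections (zero self-excitation)."""
    r = list(r0)
    for _ in range(steps):
        drive = [w * r[1], w * r[0]]     # each unit driven only by the other
        r = [ri + dt * (-ri + phi(di)) for ri, di in zip(r, drive)]
    return r

low = settle([0.0, 0.0])     # converges to a near-silent stable state
high = settle([1.0, 1.0])    # converges to a mutually sustained active state
```

Both end states are fixed points of r = phi(w·r), and their coexistence is exactly the cross-connection-driven multistability the abstract describes in its simplest form.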
Affiliation(s)
- Jordan Breffle
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA, 02454, USA
- Subhadra Mokashe
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA, 02454, USA
- Siwei Qiu
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA, 02454, USA
- Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Paul Miller
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA, 02454, USA.
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA, 02454, USA.
- Department of Biology, Brandeis University, 415 South St, Waltham, MA, 02454, USA.
43
Lv S, He E, Luo J, Liu Y, Liang W, Xu S, Zhang K, Yang Y, Wang M, Song Y, Wu Y, Cai X. Using Human-Induced Pluripotent Stem Cell Derived Neurons on Microelectrode Arrays to Model Neurological Disease: A Review. Adv Sci (Weinh) 2023; 10:e2301828. [PMID: 37863819] [PMCID: PMC10667858] [DOI: 10.1002/advs.202301828]
Abstract
In situ physiological signals from in vitro neural disease models are essential for studying pathogenesis and drug screening. Currently, an increasing number of in vitro neural disease models are established using human-induced pluripotent stem cell (hiPSC)-derived neurons (hiPSC-DNs) to overcome interspecific differences in gene expression. Microelectrode arrays (MEAs) can be readily interfaced with two-dimensional (2D) and, more recently, three-dimensional (3D) neural stem cell-derived in vitro models of the human brain to monitor their physiological activity in real time. MEAs are therefore emerging as useful tools for modeling neurological disorders and disease in vitro using human iPSCs, providing a real-time window into neuronal signaling at the network scale in patient-derived cultures. This paper provides a comprehensive review of the role of MEAs in analyzing neural disease models established with hiPSC-DNs. It covers MEA fabrication, surface structure, and modification schemes for hiPSC-DN culturing and signal detection. Additionally, this review discusses advances in the development and use of MEA technology to study in vitro neural disease models established with hiPSC-DNs, including epilepsy, autism spectrum disorder (ASD), and others. The paper also highlights the application of MEAs combined with hiPSC-DNs for detecting neurotoxic substances in vitro. Finally, the future development and outlook of multifunctional and integrated devices for in vitro medical diagnostics and treatment are discussed.
Affiliation(s)
- Shiya Lv
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Enhui He
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou 321100, China
- Jinping Luo
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Yaoyao Liu
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Wei Liang
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Shihong Xu
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Kui Zhang
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Yan Yang
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Mixia Wang
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Yilin Song
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Yirong Wu
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Xinxia Cai
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
44
Navarro P, Oweiss K. Compressive sensing of functional connectivity maps from patterned optogenetic stimulation of neuronal ensembles. Patterns (N Y) 2023; 4:100845. [PMID: 37876895] [PMCID: PMC10591201] [DOI: 10.1016/j.patter.2023.100845]
Abstract
Mapping functional connectivity between neurons is an essential step toward probing the neural computations mediating behavior. Accurately determining synaptic connectivity maps in populations of neurons is challenging in terms of yield, accuracy, and experimental time. Here, we developed a compressive sensing approach to reconstruct synaptic connectivity maps based on random two-photon cell-targeted optogenetic stimulation and membrane voltage readout of many putative postsynaptic neurons. Using a biophysical network model of interconnected populations of excitatory and inhibitory neurons, we characterized mapping recall and precision as a function of network observability, sparsity, number of neurons stimulated, off-target stimulation, synaptic reliability, propagation latency, and network topology. We found that mapping can be achieved with far fewer measurements than the standard pairwise sequential approach, with network sparsity and synaptic reliability serving as primary determinants of the performance. Our results suggest a rapid and efficient method to reconstruct functional connectivity of sparsely connected neuronal networks.
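A toy sketch of the sparse-recovery step (not the authors' pipeline; the problem sizes, the random ±1 "stimulation" design, and the ISTA solver are all illustrative assumptions): connection weights onto one postsynaptic neuron are recovered from fewer random-ensemble stimulation measurements than unknowns, by ℓ1-regularized least squares.

```python
import math
import random

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matT_vec(A, x):
    return [sum(A[i][j] * x[i] for i in range(len(A))) for j in range(len(A[0]))]

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 penalty."""
    return [math.copysign(max(abs(u) - t, 0.0), u) for u in v]

def ista(A, y, lam=0.05, iters=4000):
    """Sparse recovery min_w 0.5*||Aw - y||^2 + lam*||w||_1 by iterative
    soft thresholding; step size from a power-iteration bound on ||A^T A||."""
    n = len(A[0])
    v, L = [1.0] * n, 1.0
    for _ in range(100):                       # power iteration on A^T A
        Bv = matT_vec(A, mat_vec(A, v))
        L = math.sqrt(sum(u * u for u in Bv))
        v = [u / L for u in Bv]
    eta = 0.5 / L                              # conservative step below 1/L
    w = [0.0] * n
    for _ in range(iters):
        r = [yi - pi for yi, pi in zip(y, mat_vec(A, w))]   # residual
        g = matT_vec(A, r)                                  # -gradient
        w = soft([wi + eta * gi for wi, gi in zip(w, g)], eta * lam)
    return w

rng = random.Random(0)
n_syn, n_meas = 20, 16                         # hypothetical sizes: fewer measurements than weights
w_true = [0.0] * n_syn
w_true[3], w_true[11] = 1.0, -1.5              # two nonzero "synaptic weights"
A = [[rng.choice((-1.0, 1.0)) / math.sqrt(n_meas) for _ in range(n_syn)]
     for _ in range(n_meas)]                   # random ensemble stimulation patterns
y = mat_vec(A, w_true)                         # noiseless "postsynaptic readout"
w_hat = ista(A, y)
```

Because only two weights are nonzero, 16 random measurements suffice to pin down 20 unknowns, which is the advantage over stimulating cells one pair at a time.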
Affiliation(s)
- Phillip Navarro
- Electrical and Computer Engineering Department, University of Florida, Gainesville, FL 32611, USA
- Karim Oweiss
- Electrical and Computer Engineering Department, University of Florida, Gainesville, FL 32611, USA
- Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Department of Neurology, University of Florida, Gainesville, FL 32611, USA
- Department of Neuroscience, McKnight Brain Institute, University of Florida, Gainesville, FL 32611, USA
45
Matsuda K, Shirakami A, Nakajima R, Akutsu T, Shimono M. Whole-Brain Evaluation of Cortical Microconnectomes. eNeuro 2023; 10:ENEURO.0094-23.2023. [PMID: 37903612] [PMCID: PMC10616907] [DOI: 10.1523/eneuro.0094-23.2023]
Abstract
The brain is an organ that functions as a network of many elements connected in a nonuniform manner. In the brain, the neocortex is evolutionarily the newest structure and is thought to be primarily responsible for the high intelligence of mammals. In the mature mammalian brain, all cortical regions are expected to share some degree of homology while showing variations in their local circuits that support the specific functions of individual regions. However, few cellular-level studies have examined how the networks within different cortical regions differ. This study aimed to find rules for systematic changes of connectivity (microconnectomes) across 16 different cortical region groups. We also observed previously unreported trends in basic in vitro parameters, such as firing rate and layer thickness, across brain regions. The results revealed that the frontal group shows unique characteristics, including dense active neurons, a thick cortex, and strong connections with deeper layers. This suggests that the frontal cortex is inherently capable of driving activity, even in isolation, and that frontal nodes provide the driving force generating global patterns of spontaneous synchronous activity, such as the default mode network. This finding provides a new hypothesis explaining why disruption of the frontal region has a large impact on mental health.
Affiliation(s)
- Kouki Matsuda
- Graduate Schools of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Arata Shirakami
- Graduate Schools of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Ryota Nakajima
- Graduate Schools of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Tatsuya Akutsu
- Bioinformatics Center, Institute for Chemical Research, Kyoto University, Gokasho, Uji, Kyoto 611-0011, Japan
- Masanori Shimono
- Graduate Schools of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita-shi, Osaka 565-0871, Japan
46
de Kock CPJ, Feldmeyer D. Shared and divergent principles of synaptic transmission between cortical excitatory neurons in rodent and human brain. Front Synaptic Neurosci 2023; 15:1274383. [PMID: 37731775] [PMCID: PMC10508294] [DOI: 10.3389/fnsyn.2023.1274383]
Abstract
Information transfer between principal neurons in neocortex occurs through (glutamatergic) synaptic transmission. In this focussed review, we provide a detailed overview of the strength of synaptic neurotransmission between pairs of excitatory neurons in humans and laboratory animals, with a specific focus on data obtained using patch clamp electrophysiology. We reach two major conclusions: (1) synaptic strength, measured as the unitary excitatory postsynaptic potential (or uEPSP), is remarkably consistent across species, cortical regions, layers and/or cell types (median 0.5 mV, interquartile range 0.4-1.0 mV), with most variability associated with the cell-type specific connection studied (min 0.1-max 1.4 mV); (2) synaptic function cannot be generalized across human and rodent, which we exemplify by discussing the differences in anatomical and functional properties of pyramidal-to-pyramidal connections within human and rodent cortical layers 2 and 3. With only a handful of studies available on synaptic transmission in humans, it is obvious that much remains unknown to date. Uncovering the shared and divergent principles of synaptic transmission across species, however, will almost certainly be a pivotal step toward understanding human cognitive ability and brain function in health and disease.
Affiliation(s)
- Christiaan P. J. de Kock
- Center for Neurogenomics and Cognitive Research, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Dirk Feldmeyer
- Research Center Juelich, Institute of Neuroscience and Medicine, Jülich, Germany
- Department of Psychiatry, Psychotherapy, and Psychosomatics, RWTH Aachen University Hospital, Aachen, Germany
- Jülich-Aachen Research Alliance, Translational Brain Medicine (JARA Brain), Aachen, Germany
| |
Collapse
|
47
|
Zhao H, Shao C, Shi Z, He S, Gong Z. The Intrinsic Similarity of Topological Structure in Biological Neural Networks. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:3292-3305. [PMID: 37224366 DOI: 10.1109/tcbb.2023.3279443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Most previous studies have focused on analyzing the structural properties of individual neuronal networks from C. elegans. In recent years, an increasing number of synapse-level neural maps, also known as biological neural networks, have been reconstructed. However, it is not clear whether the structural properties of biological neural networks from different brain compartments or species share intrinsic similarities. To explore this issue, we collected nine connectomes at synaptic resolution, including C. elegans, and analyzed their structural properties. We found that these biological neural networks possess small-world properties and modules. Excluding the Drosophila larval visual system, these networks exhibit rich-club organization. The distributions of synaptic connection strength for these networks can be fitted by truncated power-law distributions. Additionally, compared with the power-law model, a log-normal distribution is a better model for the complementary cumulative distribution function (CCDF) of degree in these neuronal networks. Moreover, we observed that these neural networks belong to the same superfamily based on the significance profile (SP) of small subgraphs in the network. Taken together, these findings suggest that biological neural networks share intrinsic similarities in their topological structure, revealing principles underlying the formation of biological neural networks within and across species.
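The model comparison described here (log-normal versus power-law fits to a degree distribution) can be sketched as a likelihood comparison between the two candidate families. The degree sequence below is a synthetic continuous stand-in, not one of the nine connectomes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic degree sequence drawn from a log-normal (illustrative only).
degrees = rng.lognormal(mean=2.0, sigma=0.8, size=500)

# Fit both candidate models with location fixed at 0 for comparability.
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(degrees, floc=0)
pl_b, pl_loc, pl_scale = stats.pareto.fit(degrees, floc=0)

# Compare via total log-likelihood (higher is better; an information
# criterion such as AIC would additionally penalize parameter count).
ll_lognorm = stats.lognorm.logpdf(degrees, ln_shape, ln_loc, ln_scale).sum()
ll_pareto = stats.pareto.logpdf(degrees, pl_b, pl_loc, pl_scale).sum()
print(ll_lognorm > ll_pareto)
```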
Collapse
|
48
|
Yuan Y, Zhu Y, Wang J, Li R, Xu X, Fang T, Huo H, Wan L, Li Q, Liu N, Yang S. Incorporating structural plasticity into self-organization recurrent networks for sequence learning. Front Neurosci 2023; 17:1224752. [PMID: 37592946 PMCID: PMC10427342 DOI: 10.3389/fnins.2023.1224752] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Accepted: 07/13/2023] [Indexed: 08/19/2023] Open
Abstract
Introduction Spiking neural networks (SNNs), inspired by biological neural networks, have received a surge of interest due to their temporal encoding. Biological neural networks are driven by multiple plasticities, including spike timing-dependent plasticity (STDP), structural plasticity, and homeostatic plasticity, causing network connection patterns and weights to change continuously throughout the lifecycle. However, it is unclear how these plasticities interact to shape neural networks and affect neural signal processing. Method Here, we propose a reward-modulated self-organization recurrent network with structural plasticity (RSRN-SP) to investigate this issue. Specifically, the RSRN-SP uses spikes to encode information and incorporates multiple plasticities, including reward-modulated spike timing-dependent plasticity (R-STDP), homeostatic plasticity, and structural plasticity. On the one hand, R-STDP, combined with homeostatic plasticity, guides the updating of synaptic weights. On the other hand, structural plasticity simulates the growth and pruning of synaptic connections. Results and discussion Extensive experiments on sequential learning tasks, including a counting task, motion prediction, and motion generation, demonstrate the representational ability of the RSRN-SP. Furthermore, the simulations indicate that the characteristics arising from the RSRN-SP are consistent with biological observations.
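The reward-modulated STDP described here can be sketched as a pair-based STDP kernel accumulated into an eligibility trace that is then gated by a reward signal. All parameter values and function names below are hypothetical, not the paper's:

```python
import numpy as np

# Hypothetical parameters for a minimal R-STDP sketch (not the paper's values).
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # STDP time constant (ms)
TAU_ELIG = 500.0                # eligibility-trace decay constant (ms)

def stdp_kernel(dt):
    """Pair-based STDP: potentiate if pre precedes post (dt > 0), else depress."""
    return A_PLUS * np.exp(-dt / TAU) if dt > 0 else -A_MINUS * np.exp(dt / TAU)

def r_stdp_update(w, dt_pairs, reward, dt_step=1.0):
    """Accumulate spike-pair effects in a decaying eligibility trace,
    then apply them to the weight only when gated by reward."""
    elig = 0.0
    for dt in dt_pairs:
        elig = elig * np.exp(-dt_step / TAU_ELIG) + stdp_kernel(dt)
    # Clipping stands in for a homeostatic bound on synaptic weight.
    return np.clip(w + reward * elig, 0.0, 1.0)

w = 0.5
w_rewarded = r_stdp_update(w, dt_pairs=[5.0, 8.0], reward=1.0)
w_punished = r_stdp_update(w, dt_pairs=[5.0, 8.0], reward=-1.0)
print(w_rewarded > w > w_punished)
```

The same causally ordered spike pairs strengthen the synapse under positive reward and weaken it under negative reward, which is the essential asymmetry R-STDP adds over plain STDP.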
Collapse
Affiliation(s)
- Ye Yuan
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
| | - Yongtong Zhu
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
| | - Jiaqi Wang
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
| | - Ruoshi Li
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
| | - Xin Xu
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
| | - Tao Fang
- Department of Automation, Shanghai Jiao Tong University, Shanghai, China
| | - Hong Huo
- Department of Automation, Shanghai Jiao Tong University, Shanghai, China
| | - Lihong Wan
- Origin Dynamics Intelligent Robot Co., Ltd., Zhengzhou, China
| | - Qingdu Li
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
| | - Na Liu
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
| | - Shiyan Yang
- Eco-Environmental Protection Institution, Shanghai Academy of Agricultural Sciences, Shanghai, China
| |
Collapse
|
49
|
Levenstein D, Okun M. Logarithmically scaled, gamma distributed neuronal spiking. J Physiol 2023; 601:3055-3069. [PMID: 36086892 PMCID: PMC10952267 DOI: 10.1113/jp282758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Accepted: 07/28/2022] [Indexed: 11/08/2022] Open
Abstract
Naturally log-scaled quantities abound in the nervous system. Distributions of these quantities have non-intuitive properties, which have implications for data analysis and the understanding of neural circuits. Here, we review the log-scaled statistics of neuronal spiking and the relevant analytical probability distributions. Recent work using log-scaling revealed that interspike intervals of forebrain neurons segregate into discrete modes reflecting spiking at different timescales and are each well-approximated by a gamma distribution. Each neuron spends most of the time in an irregular spiking 'ground state' with the longest intervals, which determines the mean firing rate of the neuron. Across the entire neuronal population, firing rates are log-scaled and well approximated by the gamma distribution, with a small number of highly active neurons and an overabundance of low rate neurons (the 'dark matter'). These results are intricately linked to a heterogeneous balanced operating regime, which confers upon neuronal circuits multiple computational advantages and has evolutionarily ancient origins.
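Fitting a gamma distribution to interspike intervals, as reviewed above, can be sketched with a maximum-likelihood fit; the ISI sample below is synthetic and the parameters are illustrative (an irregular "ground state" corresponds to a shape parameter near 1, i.e. near-Poisson spiking):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic ISI sample (seconds) drawn from a gamma distribution.
isis = rng.gamma(shape=1.2, scale=0.5, size=2000)

# Maximum-likelihood gamma fit with location fixed at 0.
shape, loc, scale = stats.gamma.fit(isis, floc=0)
mean_rate = 1.0 / isis.mean()      # mean firing rate from the mean ISI
cv = isis.std() / isis.mean()      # CV of a gamma is 1/sqrt(shape)
print(round(shape, 1), round(cv, 2))
```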
Collapse
Affiliation(s)
- Daniel Levenstein
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
- Mila, Montréal, QC, Canada
| | - Michael Okun
- Department of Psychology and Neuroscience Institute, University of Sheffield, Sheffield, UK
| |
Collapse
|
50
|
Hadjiabadi D, Soltesz I. From single-neuron dynamics to higher-order circuit motifs in control and pathological brain networks. J Physiol 2023; 601:3011-3024. [PMID: 35815823 PMCID: PMC10655857 DOI: 10.1113/jp282749] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 06/27/2022] [Indexed: 11/08/2022] Open
Abstract
The convergence of advanced single-cell in vivo functional imaging techniques, computational modelling tools and graph-based network analytics has heralded new opportunities to study single-cell dynamics across large-scale networks, providing novel insights into principles of brain communication and pointing towards potential new strategies for treating neurological disorders. A major recent finding has been the identification of unusually richly connected hub cells that have the capacity to synchronize networks and may also be critical in network dysfunction. While hub neurons are traditionally defined by measures that consider solely the number and strength of connections, novel higher-order graph analytics now enables the mining of massive networks for repeating subgraph patterns called motifs. As an illustration of the power offered by higher-order analysis of neuronal networks, we highlight how recent methodological advances uncovered a new functional cell type, the superhub, that is predicted to play a major role in regulating network dynamics. Finally, we discuss open questions that will be critical for assessing the importance of higher-order cellular-scale network analytics in understanding brain function in health and disease.
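The contrast drawn here between first-order hub detection (degree alone) and higher-order motif mining can be sketched on a toy graph; the network and the choice of the feed-forward motif below are illustrative, not the paper's analysis:

```python
from itertools import permutations

# Toy directed network with one richly connected candidate 'hub' node.
edges = ({("hub", x) for x in "abcde"}
         | {(x, "hub") for x in "abc"}
         | {("a", "b"), ("b", "c"), ("d", "e")})
nodes = {u for e in edges for u in e}

# First-order view: rank nodes by total (in + out) degree.
degree = {n: sum(n in e for e in edges) for n in nodes}
hub = max(degree, key=degree.get)

# Higher-order view: count feed-forward motifs, i.e. ordered triples
# (a, b, c) with edges a->b, b->c and the shortcut a->c.
ffl = sum((a, b) in edges and (b, c) in edges and (a, c) in edges
          for a, b, c in permutations(nodes, 3))
print(hub, ffl)
```

Degree ranking flags the hub, while the motif census exposes structure (here, feed-forward triads routed through the hub) that degree alone cannot distinguish.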
Collapse
Affiliation(s)
- Darian Hadjiabadi
- Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| | - Ivan Soltesz
- Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
| |
Collapse
|