1
Voges N, Lima V, Hausmann J, Brovelli A, Battaglia D. Decomposing Neural Circuit Function into Information Processing Primitives. J Neurosci 2024; 44:e0157232023. [PMID: 38050070; PMCID: PMC10866194; DOI: 10.1523/jneurosci.0157-23.2023]
Abstract
It is challenging to measure how specific aspects of coordinated neural dynamics translate into operations of information processing and, ultimately, cognitive functions. An obstacle is that simple circuit mechanisms, such as self-sustained or propagating activity and nonlinear summation of inputs, do not directly give rise to high-level functions. Nevertheless, they already implement simple manipulations of the information carried by neural activity. Here, we propose that distinct functions, such as stimulus representation, working memory, or selective attention, stem from different combinations and types of low-level manipulations of information, or information processing primitives. To test this hypothesis, we combine approaches from information theory with simulations of multi-scale neural circuits involving interacting brain regions that emulate well-defined cognitive functions. Specifically, we track the information dynamics emergent from patterns of neural dynamics, using quantitative metrics to detect where and when information is actively buffered, transferred, or nonlinearly merged, as possible modes of low-level processing (storage, transfer, and modification). We find that neuronal subsets maintaining representations in working memory or performing attentional gain modulation are signaled by their boosted involvement in operations of information storage or modification, respectively. Thus, information dynamic metrics, beyond detecting which network units participate in cognitive processing, also promise to specify how and when they do it, that is, through which type of primitive computation, a capability that may be exploited for the analysis of experimental recordings.
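The storage and transfer primitives invoked in this abstract are typically quantified with information-dynamics estimators such as active information storage and transfer entropy. As an illustration only (this is not the authors' code; the plug-in estimator, history length of 1, and synthetic binary series are assumptions), both quantities can be sketched as:

```python
import numpy as np

def entropy(*seqs):
    """Plug-in joint entropy (bits) of one or more discrete sequences."""
    joint = np.stack(seqs, axis=1)
    _, counts = np.unique(joint, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def active_information_storage(x):
    """AIS with history 1: I(x_t ; x_{t-1}), how much a unit's past predicts its present."""
    return entropy(x[:-1]) + entropy(x[1:]) - entropy(x[:-1], x[1:])

def transfer_entropy(source, target):
    """TE with history 1: I(target_t ; source_{t-1} | target_{t-1})."""
    s, tp, t = source[:-1], target[:-1], target[1:]
    return entropy(t, tp) + entropy(tp, s) - entropy(t, tp, s) - entropy(tp)

rng = np.random.default_rng(0)
src = rng.integers(0, 2, 10_000)       # a random binary "spike train"
tgt = np.roll(src, 1)                  # target copies the source with a one-step lag
noise = rng.integers(0, 2, 10_000)     # an unrelated unit
print(transfer_entropy(src, tgt) > transfer_entropy(noise, tgt))  # True
```

The lagged copy receives close to one bit of transfer entropy from its source while the unrelated unit receives essentially none; this kind of contrast is what flags units engaged in a given primitive.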
Affiliation(s)
- Nicole Voges
- Institut de Neurosciences de La Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Institute for Language, Communication and the Brain (ILCB), Aix-Marseille Université, Marseille 13005, France
- Vinicius Lima
- Institut de Neurosciences des Systèmes (INS), UMR 1106, Aix-Marseille Université, Marseille 13005, France
- Johannes Hausmann
- R&D Department, Hyland Switzerland Sarl, Corcelles NE 2035, Switzerland
- Andrea Brovelli
- Institut de Neurosciences de La Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Institute for Language, Communication and the Brain (ILCB), Aix-Marseille Université, Marseille 13005, France
- Demian Battaglia
- Institute for Language, Communication and the Brain (ILCB), Aix-Marseille Université, Marseille 13005, France
- Institut de Neurosciences des Systèmes (INS), UMR 1106, Aix-Marseille Université, Marseille 13005, France
- University of Strasbourg Institute for Advanced Studies (USIAS), Strasbourg 67000, France
2
O'Neill KM, Anderson ED, Mukherjee S, Gandu S, McEwan SA, Omelchenko A, Rodriguez AR, Losert W, Meaney DF, Babadi B, Firestein BL. Time-dependent homeostatic mechanisms underlie brain-derived neurotrophic factor action on neural circuitry. Commun Biol 2023; 6:1278. [PMID: 38110605; PMCID: PMC10728104; DOI: 10.1038/s42003-023-05638-9]
Abstract
Plasticity and homeostatic mechanisms allow neural networks to maintain proper function while responding to physiological challenges. Despite previous work investigating morphological and synaptic effects of brain-derived neurotrophic factor (BDNF), the most prevalent growth factor in the central nervous system, how exposure to BDNF manifests at the network level remains unknown. Here we report that BDNF treatment affects rodent hippocampal network dynamics during development and recovery from glutamate-induced excitotoxicity in culture. Importantly, these effects are not obvious when traditional activity metrics are used, so we delve more deeply into network organization, functional analyses, and in silico simulations. We demonstrate that BDNF partially restores homeostasis by promoting recovery of weak and medium connections after injury. Imaging and computational analyses suggest these effects are caused by changes to inhibitory neurons and connections. From our in silico simulations, we find that BDNF remodels the network by indirectly strengthening weak excitatory synapses after injury. Ultimately, our findings may explain the difficulties encountered in preclinical and clinical trials with BDNF and also offer information for future trials to consider.
Affiliation(s)
- Kate M O'Neill
- Department of Cell Biology and Neuroscience, Rutgers University, Piscataway, NJ, USA
- Biomedical Engineering Graduate Program, Rutgers University, Piscataway, NJ, USA
- Institute for Physical Science & Technology, University of Maryland, College Park, MD, USA
- Erin D Anderson
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Shoutik Mukherjee
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA
- Srinivasa Gandu
- Department of Cell Biology and Neuroscience, Rutgers University, Piscataway, NJ, USA
- Cell and Developmental Biology Graduate Program, Rutgers University, Piscataway, NJ, USA
- Sara A McEwan
- Department of Cell Biology and Neuroscience, Rutgers University, Piscataway, NJ, USA
- Neuroscience Graduate Program, Rutgers University, Piscataway, NJ, USA
- Anton Omelchenko
- Department of Cell Biology and Neuroscience, Rutgers University, Piscataway, NJ, USA
- Neuroscience Graduate Program, Rutgers University, Piscataway, NJ, USA
- Ana R Rodriguez
- Department of Cell Biology and Neuroscience, Rutgers University, Piscataway, NJ, USA
- Biomedical Engineering Graduate Program, Rutgers University, Piscataway, NJ, USA
- Wolfgang Losert
- Department of Physics, University of Maryland, College Park, MD, USA
- Institute for Physical Science & Technology, University of Maryland, College Park, MD, USA
- David F Meaney
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA, USA
- Behtash Babadi
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA
- Bonnie L Firestein
- Department of Cell Biology and Neuroscience, Rutgers University, Piscataway, NJ, USA
3
Yan M, Zhang WH, Wang H, Wong KYM. Bimodular continuous attractor neural networks with static and moving stimuli. Phys Rev E 2023; 107:064302. [PMID: 37464697; DOI: 10.1103/physreve.107.064302]
Abstract
We investigated the dynamical behaviors of bimodular continuous attractor neural networks, each processing a modality of sensory input and interacting with each other. We found that when bumps coexist in both modules, the position of each bump is shifted towards the other input when the intermodular couplings are excitatory and is shifted away when inhibitory. When one intermodular coupling is excitatory while another is moderately inhibitory, temporally modulated population spikes can be generated. On further increase of the inhibitory coupling, momentary spikes will emerge. In the regime of bump coexistence, bump heights are primarily strengthened by excitatory intermodular couplings, but there is a lesser weakening effect due to a bump being displaced from the direct input. When bimodular networks serve as decoders of multisensory integration, we extend the Bayesian framework to show that excitatory and inhibitory couplings encode attractive and repulsive priors, respectively. At low disparity, the bump positions decode the posterior means in the Bayesian framework, whereas at high disparity, multiple steady states exist. In the regime of multiple steady states, the less stable state can be accessed if the input causing the more stable state arrives after a sufficiently long delay. When one input is moving, the bump in the corresponding module is pinned when the moving stimulus is weak, unpinned at intermediate stimulus strength, and tracks the input at strong stimulus strength, and the stimulus strengths for these transitions increase with the velocity of the moving stimulus. These results are important to understanding multisensory integration of static and dynamic stimuli.
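A single continuous attractor module of the kind coupled in this study can be sketched in a few lines; the bimodular networks above couple two such rings through excitatory or inhibitory intermodular weights. The toy rate model below, with divisive inhibition and entirely invented parameters, only illustrates the basic phenomenon of a bump settling on a static stimulus:

```python
import numpy as np

# One continuous attractor ring (rate model with divisive inhibition).
# All parameters are illustrative, not taken from the paper.
N = 100
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2 * np.pi - d)                    # periodic distance on the ring
W = 5.0 * np.exp(-d**2 / (2 * 0.5**2)) / N          # translation-invariant excitation

def simulate(stim_center, steps=500, dt=0.1, k=0.05):
    u = np.zeros(N)
    ds = np.abs(theta - stim_center)
    ds = np.minimum(ds, 2 * np.pi - ds)
    stim = 0.5 * np.exp(-ds**2 / (2 * 0.3**2))      # static, localized input
    for _ in range(steps):
        r = np.maximum(u, 0)**2 / (1 + k * np.sum(np.maximum(u, 0)**2))  # divisive inhibition
        u += dt * (-u + W @ r + stim)
    return u

u = simulate(stim_center=1.0)
bump_pos = theta[np.argmax(u)]   # the bump settles on the stimulus position
print(bump_pos)
```

The shifted, pinned, and tracking regimes described in the abstract arise when a second, coupled module pushes or pulls this bump, or when the stimulus moves; those cases add intermodular weight matrices to the same update loop.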
Affiliation(s)
- Min Yan
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China
- Wen-Hao Zhang
- Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, Texas 75390, USA
- O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, Texas 75390, USA
- He Wang
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China
- Hong Kong University of Science and Technology, Shenzhen Research Institute, Shenzhen 518057, China
- K Y Michael Wong
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China
4
Revisiting horizontal connectivity rules in V1: from like-to-like towards like-to-all. Brain Struct Funct 2022; 227:1279-1295. [PMID: 35122520; DOI: 10.1007/s00429-022-02455-4]
Abstract
Horizontal connections in the primary visual cortex of carnivores, ungulates, and primates organize on a near-regular lattice. Given the similar length scale for the regularity found in cortical orientation maps, the currently accepted theoretical standpoint is that these maps are underpinned by a like-to-like connectivity rule: horizontal axons connect preferentially to neurons with similar preferred orientation. However, there is reason to doubt the rule's explanatory power, since a growing number of quantitative studies show that the like-to-like connectivity preference and bias, mostly observed at short range, are highly variable at the neuron-to-neuron level and depend on the origin of the presynaptic neuron. Despite the wide availability of published data, the accepted model of visual processing has never been revised. Here, we review three lines of independent evidence supporting a much-needed revision of the like-to-like connectivity rule, ranging from anatomy to population functional measures, computational models, and theoretical approaches. We advocate an alternative, distance-dependent connectivity rule that is consistent with new structural and functional evidence: from a like-to-like bias at short horizontal distance to like-to-all at long horizontal distance. This generic rule accounts for the observed high heterogeneity in interactions between the orientation and retinotopic domains, which we argue is necessary to process non-trivial stimuli in a task-dependent manner.
5
Davis ZW, Benigno GB, Fletterman C, Desbordes T, Steward C, Sejnowski TJ, Reynolds JH, Muller L. Spontaneous traveling waves naturally emerge from horizontal fiber time delays and travel through locally asynchronous-irregular states. Nat Commun 2021; 12:6057. [PMID: 34663796; PMCID: PMC8523565; DOI: 10.1038/s41467-021-26175-1]
Abstract
Studies of sensory-evoked neuronal responses often focus on mean spike rates, with fluctuations treated as internally-generated noise. However, fluctuations of spontaneous activity, often organized as traveling waves, shape stimulus-evoked responses and perceptual sensitivity. The mechanisms underlying these waves are unknown. Further, it is unclear whether waves are consistent with the low rate and weakly correlated “asynchronous-irregular” dynamics observed in cortical recordings. Here, we describe a large-scale computational model with topographically-organized connectivity and conduction delays relevant to biological scales. We find that spontaneous traveling waves are a general property of these networks. The traveling waves that occur in the model are sparse, with only a small fraction of neurons participating in any individual wave. Consequently, they do not induce measurable spike correlations and remain consistent with locally asynchronous irregular states. Further, by modulating local network state, they can shape responses to incoming inputs as observed in vivo.

Spontaneous traveling cortical waves shape neural responses. Using a large-scale computational model, the authors show that transmission delays shape locally asynchronous spiking dynamics into traveling waves without inducing correlations and boost responses to external input, as observed in vivo.
Affiliation(s)
- Zachary W Davis
- The Salk Institute for Biological Studies, La Jolla, CA, USA
- Gabriel B Benigno
- Department of Applied Mathematics, Western University, London, ON, Canada
- Brain and Mind Institute, Western University, London, ON, Canada
- Theo Desbordes
- The Salk Institute for Biological Studies, La Jolla, CA, USA
- John H Reynolds
- The Salk Institute for Biological Studies, La Jolla, CA, USA
- Lyle Muller
- Department of Applied Mathematics, Western University, London, ON, Canada
- Brain and Mind Institute, Western University, London, ON, Canada
6
Dąbrowska PA, Voges N, von Papen M, Ito J, Dahmen D, Riehle A, Brochier T, Grün S. On the Complexity of Resting State Spiking Activity in Monkey Motor Cortex. Cereb Cortex Commun 2021; 2:tgab033. [PMID: 34296183; PMCID: PMC8271144; DOI: 10.1093/texcom/tgab033]
Abstract
Resting state has been established as a classical paradigm of brain activity studies, mostly based on large-scale measurements such as functional magnetic resonance imaging or magneto- and electroencephalography. This term typically refers to a behavioral state characterized by the absence of any task or stimuli. The corresponding neuronal activity is often called idle or ongoing. Numerous modeling studies on spiking neural networks claim to mimic such idle states, but compare their results with task- or stimulus-driven experiments, or to results from experiments with anesthetized subjects. Both approaches might lead to misleading conclusions. To provide a proper basis for comparing physiological and simulated network dynamics, we characterize simultaneously recorded single neurons' spiking activity in monkey motor cortex at rest and show the differences from spontaneous and task- or stimulus-induced movement conditions. We also distinguish between rest with open eyes and sleepy rest with eyes closed. The resting state with open eyes shows a significantly higher dimensionality, reduced firing rates, and less balance between population level excitation and inhibition than behavior-related states.
Affiliation(s)
- Paulina Anna Dąbrowska
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
- Nicole Voges
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
- RWTH Aachen University, Aachen 52062, Germany
- Michael von Papen
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
- Junji Ito
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
- Alexa Riehle
- Institut de Neurosciences de la Timone, CNRS-AMU, Marseille 13005, France
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
- Thomas Brochier
- Institut de Neurosciences de la Timone, CNRS-AMU, Marseille 13005, France
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen 52056, Germany
7
Naze S, Proix T, Atasoy S, Kozloski JR. Robustness of connectome harmonics to local gray matter and long-range white matter connectivity changes. Neuroimage 2021; 224:117364. [PMID: 32947015; PMCID: PMC7779370; DOI: 10.1016/j.neuroimage.2020.117364]
Abstract
Recently, it has been proposed that the harmonic patterns emerging from the brain's structural connectivity underlie the resting state networks of the human brain. These harmonic patterns, termed connectome harmonics, are estimated as the Laplace eigenfunctions of the combined gray and white matter connectivity matrices and yield a connectome-specific extension of the well-known Fourier basis. However, it remains unclear how topological properties of the combined connectomes constrain the precise shape of the connectome harmonics and their relationships to the resting state networks. Here, we systematically study how alterations of the local and long-range connectivity matrices affect the spatial patterns of connectome harmonics. Specifically, the proportion of local gray matter homogeneous connectivity versus long-range white matter heterogeneous connectivity is varied by means of weight-based matrix thresholding, distance-based matrix trimming, and several types of matrix randomizations. We demonstrate that the proportion of local gray matter connections plays a crucial role in the emergence of widespread, functionally meaningful, and originally published connectome harmonic patterns. This finding is robust across different cortical surface templates, mesh resolutions, and widths of the local diffusion kernel. Finally, using the connectome harmonic framework, we also provide a proof-of-concept for how targeted structural changes such as the atrophy of inter-hemispheric callosal fibers and gray matter alterations may predict functional deficits associated with neurodegenerative conditions.
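The construction named in this abstract admits a compact sketch: connectome harmonics are eigenvectors of a graph Laplacian built from the connectivity matrix. In the toy below, a random geometric graph stands in for the combined local and long-range connectome (an assumption for illustration, not real data):

```python
import numpy as np

# Connectome harmonics in miniature: eigenvectors of the graph Laplacian
# L = D - A of a symmetric connectivity matrix, ordered by eigenvalue.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (200, 2))                   # surrogate cortical nodes
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
A = ((dist > 0) & (dist < 0.15)).astype(float)      # local, homogeneous edges
A = np.maximum(A, A.T)                              # enforce symmetry
L = np.diag(A.sum(axis=1)) - A                      # graph Laplacian
eigvals, harmonics = np.linalg.eigh(L)              # harmonics, coarse to fine

# The ~0 eigenvalue comes first; if the graph is connected its eigenvector is
# constant, and higher harmonics oscillate on progressively finer scales,
# the graph analogue of low- to high-frequency Fourier modes.
print(eigvals[:3])
```

Thresholding or trimming `A`, as the study does for the real connectome, changes which harmonics emerge; the decomposition step itself stays this simple.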
Affiliation(s)
- Sébastien Naze
- IBM T.J. Watson Research Center, Yorktown Heights, New York, USA
- IBM Research Australia, Melbourne, Victoria, Australia
- Timothée Proix
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Selen Atasoy
- Department of Psychiatry, University of Oxford, UK
- James R Kozloski
- IBM T.J. Watson Research Center, Yorktown Heights, New York, USA
8
Ludl AA, Soriano J. Impact of Physical Obstacles on the Structural and Effective Connectivity of in silico Neuronal Circuits. Front Comput Neurosci 2020; 14:77. [PMID: 32982710; PMCID: PMC7488194; DOI: 10.3389/fncom.2020.00077]
Abstract
Scaffolds and patterned substrates are among the most successful strategies to dictate the connectivity between neurons in culture. Here, we used numerical simulations to investigate the capacity of physical obstacles placed on a flat substrate to shape structural connectivity, and in turn collective dynamics and effective connectivity, in biologically-realistic neuronal networks. We considered μm-sized obstacles placed in mm-sized networks. Three main obstacle shapes were explored, namely crosses, circles, and triangles of isosceles profile. They occupied either a small area fraction of the substrate or populated it entirely in a periodic manner. From the point of view of structure, all obstacles promoted short length-scale connections, shifted the in- and out-degree distributions toward lower values, and increased the modularity of the networks. The capacity of obstacles to shape distinct structural traits depended on their density and the ratio between axonal length and substrate diameter. For high densities, different features were triggered depending on obstacle shape, with crosses trapping axons in their vicinity and triangles funneling axons along the reverse direction of their tip. From the point of view of dynamics, obstacles reduced the capacity of networks to spontaneously activate, with triangles in turn strongly dictating the direction of activity propagation. Effective connectivity networks, inferred using transfer entropy, exhibited distinct modular traits, indicating that the presence of obstacles facilitated the formation of local effective microcircuits. Our study illustrates the potential of physical constraints to shape structural blueprints and remodel collective activity, and may guide investigations aimed at mimicking organizational traits of biological neuronal circuits.
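The structural effect described here, obstacles promoting short-range connections and lower degrees, can be mimicked with a toy wiring rule (geometry, decay constant, and obstacle size are all invented for illustration): connect with distance-dependent probability, but reject any axon whose straight path crosses a circular obstacle.

```python
import numpy as np

# Toy obstacle effect: identical random wiring draws, with and without
# rejecting connections whose straight path crosses a central disk.
pos_rng = np.random.default_rng(2)
pos = pos_rng.uniform(0, 1, (320, 2))
center, radius = np.array([0.5, 0.5]), 0.2
pos = pos[np.linalg.norm(pos - center, axis=1) > radius]   # keep nodes off the obstacle
n = len(pos)
D = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
P = np.exp(-D / 0.15)                                      # short-range connection bias

def crosses_obstacle(p, q):
    """True if the segment p->q passes within `radius` of the obstacle center."""
    d = q - p
    t = np.clip(np.dot(center - p, d) / (np.dot(d, d) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p + t * d - center) < radius

def wire(block_obstacle):
    rng = np.random.default_rng(3)                 # same draws for both conditions
    deg = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j and rng.random() < P[i, j]:
                if not (block_obstacle and crosses_obstacle(pos[i], pos[j])):
                    deg[i] += 1
    return deg

free, blocked = wire(False), wire(True)
print(blocked.mean() < free.mean())   # True: the obstacle shifts out-degrees lower
```

Because both conditions use identical random draws, the obstacle can only remove connections, reproducing in miniature the degree-distribution shift reported above.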
Affiliation(s)
- Adriaan-Alexander Ludl
- Computational Biology Unit, Department of Informatics, University of Bergen, Bergen, Norway
- Departament de Física de la Matèria Condensada, Universitat de Barcelona, Barcelona, Spain
- Universitat de Barcelona Institute of Complex Systems (UBICS), Barcelona, Spain
- Jordi Soriano
- Departament de Física de la Matèria Condensada, Universitat de Barcelona, Barcelona, Spain
- Universitat de Barcelona Institute of Complex Systems (UBICS), Barcelona, Spain
9
Gutzen R, von Papen M, Trensch G, Quaglio P, Grün S, Denker M. Reproducible Neural Network Simulations: Statistical Methods for Model Validation on the Level of Network Activity Data. Front Neuroinform 2018; 12:90. [PMID: 30618696; PMCID: PMC6305903; DOI: 10.3389/fninf.2018.00090]
Abstract
Computational neuroscience relies on simulations of neural network models to bridge the gap between the theory of neural networks and the experimentally observed activity dynamics in the brain. The rigorous validation of simulation results against reference data is thus an indispensable part of any simulation workflow. Moreover, the availability of different simulation environments and levels of model description also requires validating model implementations against each other to evaluate their equivalence. Despite rapid advances in the formalized description of models, data, and analysis workflows, there is no accepted consensus regarding the terminology and practical implementation of validation workflows in the context of neural simulations. This situation prevents the generic, unbiased comparison between published models, which is a key element of enhancing reproducibility of computational research in neuroscience. In this study, we argue for the establishment of standardized statistical test metrics that enable the quantitative validation of network models on the level of the population dynamics. Despite the importance of validating the elementary components of a simulation, such as single cell dynamics, building networks from validated building blocks does not entail the validity of the simulation on the network scale. Therefore, we introduce a corresponding set of validation tests and present an example workflow that practically demonstrates the iterative model validation of a spiking neural network model against its reproduction on the SpiNNaker neuromorphic hardware system. We formally implement the workflow using a generic Python library that we introduce for validation tests on neural network activity data. Together with the companion study (Trensch et al., 2018), the work presents a consistent definition, formalization, and implementation of the verification and validation process for neural network simulations.
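A minimal instance of the network-level statistical validation advocated here, offered as an illustration rather than the paper's own workflow or library, is a two-sample Kolmogorov-Smirnov test on the firing-rate distributions of a reference simulation and a reimplementation (the gamma-distributed rates are synthetic stand-ins):

```python
import numpy as np
from scipy import stats

# Compare a "reference" simulation against two reimplementations on the level
# of their firing-rate distributions; rates here are synthetic, not real data.
rng = np.random.default_rng(4)
ref_rates = rng.gamma(shape=2.0, scale=2.0, size=1000)    # reference network
good_rates = rng.gamma(shape=2.0, scale=2.0, size=1000)   # faithful reimplementation
bad_rates = rng.gamma(shape=2.0, scale=3.0, size=1000)    # mis-calibrated reimplementation

stat_good, p_good = stats.ks_2samp(ref_rates, good_rates)
stat_bad, p_bad = stats.ks_2samp(ref_rates, bad_rates)
print(f"faithful:       KS={stat_good:.3f}, p={p_good:.3g}")
print(f"mis-calibrated: KS={stat_bad:.3f}, p={p_bad:.3g}")
# The mis-calibrated run shows a much larger KS distance and far smaller p-value.
```

A full validation suite of the kind argued for would apply batteries of such tests to rates, correlations, and higher-order statistics rather than a single distribution.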
Affiliation(s)
- Robin Gutzen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Michael von Papen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Guido Trensch
- Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Jülich Research Centre, Jülich, Germany
- Pietro Quaglio
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Michael Denker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
10
Senk J, Carde C, Hagen E, Kuhlen TW, Diesmann M, Weyers B. VIOLA-A Multi-Purpose and Web-Based Visualization Tool for Neuronal-Network Simulation Output. Front Neuroinform 2018; 12:75. [PMID: 30467469; PMCID: PMC6236002; DOI: 10.3389/fninf.2018.00075]
Abstract
Neuronal network models and corresponding computer simulations are invaluable tools to aid the interpretation of the relationship between neuron properties, connectivity, and measured activity in cortical tissue. Spatiotemporal patterns of activity propagating across the cortical surface as observed experimentally can for example be described by neuronal network models with layered geometry and distance-dependent connectivity. In order to cover the surface area captured by today's experimental techniques and to achieve sufficient self-consistency, such models contain millions of nerve cells. The interpretation of the resulting stream of multi-modal and multi-dimensional simulation data calls for integrating interactive visualization steps into existing simulation-analysis workflows. Here, we present a set of interactive visualization concepts called views for the visual analysis of activity data in topological network models, and a corresponding reference implementation VIOLA (VIsualization Of Layer Activity). The software is a lightweight, open-source, web-based, and platform-independent application combining and adapting modern interactive visualization paradigms, such as coordinated multiple views, for massively parallel neurophysiological data. For a use-case demonstration we consider spiking activity data of a two-population, layered point-neuron network model incorporating distance-dependent connectivity subject to a spatially confined excitation originating from an external population. With the multiple coordinated views, an explorative and qualitative assessment of the spatiotemporal features of neuronal activity can be performed upfront of a detailed quantitative data analysis of specific aspects of the data. Interactive multi-view analysis therefore assists existing data analysis workflows. 
Furthermore, ongoing efforts including the European Human Brain Project aim at providing online user portals for integrated model development, simulation, analysis, and provenance tracking, wherein interactive visual analysis tools are one component. Browser-compatible, web-technology based solutions are therefore required. Within this scope, with VIOLA we provide a first prototype.
Affiliation(s)
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Corto Carde
- Visual Computing Institute, RWTH Aachen University, Aachen, Germany
- JARA - High-Performance Computing, Aachen, Germany
- IMT Atlantique Bretagne-Pays de la Loire, Brest, France
- Espen Hagen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, University of Oslo, Oslo, Norway
- Torsten W. Kuhlen
- Visual Computing Institute, RWTH Aachen University, Aachen, Germany
- JARA - High-Performance Computing, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Benjamin Weyers
- Visual Computing Institute, RWTH Aachen University, Aachen, Germany
- JARA - High-Performance Computing, Aachen, Germany
11
Concurrence of form and function in developing networks and its role in synaptic pruning. Nat Commun 2018; 9:2236. [PMID: 29884799; PMCID: PMC5993834; DOI: 10.1038/s41467-018-04537-6]
Abstract
A fundamental question in neuroscience is how structure and function of neural systems are related. We study this interplay by combining a familiar auto-associative neural network with an evolving mechanism for the birth and death of synapses. A feedback loop then arises, leading to two qualitatively different types of behaviour. In one, the network structure becomes heterogeneous and disassortative, and the system displays good memory performance; furthermore, the structure is optimised for the particular memory patterns stored during the process. In the other, the structure remains homogeneous and incapable of pattern retrieval. These findings provide an inspiring picture of brain structure and dynamics that is compatible with experimental results on early brain development, and may help to explain synaptic pruning. Other evolving networks—such as those of protein interactions—might share the basic ingredients for this feedback loop, and indeed many of their structural features are as predicted by our model. How structure and function coevolve in developing brains is little understood. Here, the authors study a coupled model of network development and memory, and find that, due to the feedback, networks with some initial memory capacity evolve into heterogeneous structures with high memory performance.
12
Rankin J, Chavane F. Neural field model to reconcile structure with function in primary visual cortex. PLoS Comput Biol 2017; 13:e1005821. [PMID: 29065120 PMCID: PMC5669491 DOI: 10.1371/journal.pcbi.1005821] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2017] [Revised: 11/03/2017] [Accepted: 10/14/2017] [Indexed: 11/19/2022] Open
Abstract
Voltage-sensitive dye imaging experiments in primary visual cortex (V1) have shown that local, oriented visual stimuli elicit stable orientation-selective activation within the stimulus retinotopic footprint. The cortical activation dynamically extends far beyond the retinotopic footprint, but the peripheral spread stays non-selective, a surprising finding given a number of anatomo-functional studies showing the orientation specificity of long-range connections. Here we use a computational model to investigate this apparent discrepancy by studying the expected population response using known published anatomical constraints. The dynamics of input-driven localized states were simulated in a planar neural field model with multiple sub-populations encoding orientation. The realistic connectivity profile has parameters controlling the clustering of long-range connections and their orientation bias. We found substantial overlap between the anatomically relevant parameter range and a steep decay in orientation-selective activation that is consistent with the imaging experiments. In this way our study reconciles the reported orientation bias of long-range connections with the functional expression of orientation-selective neural activity. Our results demonstrate that this sharp decay is contingent on three factors: that long-range connections are sufficiently diffuse, that the orientation bias of these connections is in an intermediate range (consistent with anatomy), and that excitation is sufficiently balanced by inhibition. Conversely, our modelling results predict that, for reduced inhibition strength, spurious orientation-selective activation could be generated through long-range lateral connections. Furthermore, if the orientation bias of lateral connections is very strong, or if inhibition is particularly weak, the network operates close to an instability leading to unbounded cortical activation.
Affiliation(s)
- James Rankin
- Department of Mathematics, University of Exeter, Exeter, United Kingdom
- Center for Neural Science, New York University, New York, New York, United States of America
- Frédéric Chavane
- Institut de Neurosciences de la Timone, CNRS & Aix-Marseille Université, Faculté de Médecine, Marseille, France
13
Hahne J, Dahmen D, Schuecker J, Frommer A, Bolten M, Helias M, Diesmann M. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator. Front Neuroinform 2017; 11:34. [PMID: 28596730 PMCID: PMC5442232 DOI: 10.3389/fninf.2017.00034] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2016] [Accepted: 05/01/2017] [Indexed: 01/21/2023] Open
Abstract
Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
Affiliation(s)
- Jan Hahne
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Jannis Schuecker
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Andreas Frommer
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Matthias Bolten
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
14
Taouali W, Benvenuti G, Wallisch P, Chavane F, Perrinet LU. Testing the odds of inherent vs. observed overdispersion in neural spike counts. J Neurophysiol 2016; 115:434-44. [PMID: 26445864 PMCID: PMC4760471 DOI: 10.1152/jn.00194.2015] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2015] [Accepted: 10/04/2015] [Indexed: 01/15/2023] Open
Abstract
The repeated presentation of an identical visual stimulus in the receptive field of a neuron may evoke different spiking patterns at each trial. Probabilistic methods are essential to understand the functional role of this variance within the neural activity. In this context, the Poisson process is the most common model of trial-to-trial variability. For a Poisson process, the variance of the spike count is constrained to be equal to the mean, irrespective of the duration of measurements. Numerous studies have shown that this relationship does not generally hold. Specifically, a majority of electrophysiological recordings show an "overdispersion" effect: responses that exhibit more intertrial variability than expected from a Poisson process alone. A model that is particularly well suited to quantify overdispersion is the Negative-Binomial distribution model. This model is well-studied and widely used but has only recently been applied to neuroscience. In this article, we address three main issues. First, we describe how the Negative-Binomial distribution provides a model apt to account for overdispersed spike counts. Second, we quantify the significance of this model for any neurophysiological data by proposing a statistical test, which quantifies the odds that overdispersion could be due to the limited number of repetitions (trials). We apply this test to three neurophysiological data sets along the visual pathway. Finally, we compare the performance of this model to the Poisson model on a population decoding task. We show that the decoding accuracy is improved when accounting for overdispersion, especially under the hypothesis of tuned overdispersion.
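The Poisson mean-variance constraint and the Negative-Binomial alternative described in this abstract can be illustrated with a short stdlib-only simulation (an illustrative sketch, not the authors' code; the trial count, mean rate, and dispersion parameter are arbitrary assumptions). It uses the Gamma-Poisson mixture construction of the Negative Binomial, which directly models "inherent" rate variability across trials:

```python
import math
import random
from statistics import mean, variance

random.seed(42)

def poisson(lam):
    # Knuth's algorithm: multiply uniform draws until the product falls below exp(-lam)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def nb_count(r, m):
    # Negative Binomial as a Gamma-Poisson mixture: draw a trial-specific
    # firing rate from Gamma(shape=r, mean=m), then a Poisson count given
    # that rate. Marginally: mean m, variance m + m**2 / r (overdispersed).
    rate = random.gammavariate(r, m / r)
    return poisson(rate)

n_trials, m, r = 20_000, 5.0, 2.0
poisson_counts = [poisson(m) for _ in range(n_trials)]
nb_counts = [nb_count(r, m) for _ in range(n_trials)]

def fano(counts):
    # Fano factor: variance / mean. Expected ~1 for Poisson, ~1 + m / r for the NB model.
    return variance(counts) / mean(counts)
```

With these hypothetical parameters the Poisson Fano factor stays near 1 while the Negative-Binomial counts show a Fano factor near 1 + m/r = 3.5, the kind of overdispersion the paper's test is designed to detect.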
Affiliation(s)
- Wahiba Taouali
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
- Giacomo Benvenuti
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
- Pascal Wallisch
- Center for Neural Science, New York University, New York, New York
- Frédéric Chavane
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
15
A geometric network model of intrinsic grey-matter connectivity of the human brain. Sci Rep 2015; 5:15397. [PMID: 26503036 PMCID: PMC4621526 DOI: 10.1038/srep15397] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2015] [Accepted: 09/16/2015] [Indexed: 01/26/2023] Open
Abstract
Network science provides a general framework for analysing the large-scale brain networks that naturally arise from modern neuroimaging studies, and a key goal in theoretical neuroscience is to understand the extent to which these neural architectures influence the dynamical processes they sustain. To date, brain network modelling has largely been conducted at the macroscale level (i.e. white-matter tracts), despite growing evidence of the role that local grey matter architecture plays in a variety of brain disorders. Here, we present a new model of intrinsic grey matter connectivity of the human connectome. Importantly, the new model incorporates detailed information on cortical geometry to construct 'shortcuts' through the thickness of the cortex, thus enabling spatially distant brain regions, as measured along the cortical surface, to communicate. Our study indicates that structures based on human brain surface information differ significantly, both in terms of their topological network characteristics and activity propagation properties, when compared against a variety of alternative geometries and generative algorithms. In particular, this might help explain histological patterns of grey matter connectivity, highlighting that observed connection distances may have arisen to maximise information processing ability, and that such gains are consistent with (and enhanced by) the presence of short-cut connections.
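The communication benefit of "shortcut" connections described in this abstract can be illustrated with a toy graph experiment (an illustrative sketch, not the paper's cortical model; the ring size, neighbourhood width, and shortcut count are arbitrary assumptions): adding a handful of random long-range edges to a ring lattice sharply reduces the average shortest-path length.

```python
import random
from collections import deque

random.seed(3)

def ring_lattice(n, k):
    # Undirected ring of n nodes, each linked to its k nearest neighbours per side
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[i].add((i - j) % n)
    return adj

def avg_path_length(adj):
    # Mean shortest-path length over all ordered node pairs, via BFS from each source
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

adj = ring_lattice(200, 2)
base = avg_path_length(adj)
for _ in range(20):  # add 20 random shortcut edges
    a, b = random.sample(range(200), 2)
    adj[a].add(b)
    adj[b].add(a)
shortcut = avg_path_length(adj)
```

In this sketch `base` is roughly n/(4k), about 25 hops, and the 20 shortcuts cut the average substantially, mirroring how through-cortex connections let surface-distant regions communicate.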
16
Role of input correlations in shaping the variability and noise correlations of evoked activity in the neocortex. J Neurosci 2015; 35:8611-25. [PMID: 26041927 DOI: 10.1523/jneurosci.4536-14.2015] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Recent analysis of evoked activity recorded across different brain regions and tasks revealed a marked decrease in noise correlations and trial-by-trial variability. Given the importance of correlations and variability for information processing within the rate coding paradigm, several mechanisms have been proposed to explain the reduction in these quantities despite an increase in firing rates. These models suggest that anatomical clusters and/or tightly balanced excitation-inhibition can generate intrinsic network dynamics that may exhibit a reduction in noise correlations and trial-by-trial variability when perturbed by an external input. Such mechanisms, based on recurrent feedback, crucially ignore the contribution of the feedforward input to the statistics of the evoked activity. Therefore, we investigated how statistical properties of the feedforward input shape the statistics of the evoked activity. Specifically, we focused on the effect of the input correlation structure on noise correlations and trial-by-trial variability. We show that the ability of neurons to transfer the input firing rate, correlation, and variability to the output depends on the correlations within the presynaptic pool of a neuron, and that an input with even weak within-correlations can be sufficient to reduce noise correlations and trial-by-trial variability, without requiring any specific recurrent connectivity structure. In general, depending on the ongoing activity state, feedforward input could either increase or decrease noise correlations and trial-by-trial variability. Thus, we propose that evoked activity statistics are jointly determined by the feedforward and feedback inputs.
17
Schmeltzer C, Kihara AH, Sokolov IM, Rüdiger S. Degree Correlations Optimize Neuronal Network Sensitivity to Sub-Threshold Stimuli. PLoS One 2015; 10:e0121794. [PMID: 26115374 PMCID: PMC4482728 DOI: 10.1371/journal.pone.0121794] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2014] [Accepted: 01/02/2015] [Indexed: 11/19/2022] Open
Abstract
Information processing in the brain crucially depends on the topology of the neuronal connections. We investigate how the topology influences the response of a population of leaky integrate-and-fire neurons to a stimulus. We devise a method to calculate firing rates from a self-consistent system of equations taking into account the degree distribution and degree correlations in the network. We show that assortative degree correlations strongly improve the sensitivity to weak stimuli and propose that such networks possess an advantage in signal processing. We moreover find that an optimum exists at an intermediate level of assortativity, leading to a maximum in input/output mutual information.
Affiliation(s)
- Sten Rüdiger
- Institut für Physik, Humboldt-Universität zu Berlin, Germany
18
Roy D, Sigala R, Breakspear M, McIntosh AR, Jirsa VK, Deco G, Ritter P. Using the Virtual Brain to Reveal the Role of Oscillations and Plasticity in Shaping Brain's Dynamical Landscape. Brain Connect 2014; 4:791-811. [DOI: 10.1089/brain.2014.0252] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023] Open
Affiliation(s)
- Dipanjan Roy
- Department of Neurology, Charité—University Medicine, Berlin, Germany
- Bernstein Focus State Dependencies of Learning & Bernstein Center for Computational Neuroscience, Berlin, Germany
- Rodrigo Sigala
- Department of Neurology, Charité—University Medicine, Berlin, Germany
- Bernstein Focus State Dependencies of Learning & Bernstein Center for Computational Neuroscience, Berlin, Germany
- Michael Breakspear
- Division of Mental Health Research, Queensland Institute of Medical Research, Brisbane, QLD, Australia
- School of Psychiatry, University of New South Wales and The Black Dog Institute, Sydney, NSW, Australia
- The Royal Brisbane and Women's Hospital, Brisbane, QLD, Australia
- Viktor K. Jirsa
- Institut de Neurosciences des Systèmes UMR INSERM 1106, Aix-Marseille Université Faculté de Médecine, Marseille, France
- Gustavo Deco
- Center for Brain and Cognition, Universitat Pompeu Fabra, ICREA (Institut Catala Recerca i Estudis Avancats), Barcelona, Spain
- Petra Ritter
- Department of Neurology, Charité—University Medicine, Berlin, Germany
- Bernstein Focus State Dependencies of Learning & Bernstein Center for Computational Neuroscience, Berlin, Germany
- Minerva Research Group BrainModes, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Berlin School of Mind and Brain & Mind and Brain Institute, Humboldt University, Berlin, Germany
19
McDonnell MD, Ward LM. Small modifications to network topology can induce stochastic bistable spiking dynamics in a balanced cortical model. PLoS One 2014; 9:e88254. [PMID: 24743633 PMCID: PMC3990528 DOI: 10.1371/journal.pone.0088254] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2013] [Accepted: 01/06/2014] [Indexed: 12/27/2022] Open
Abstract
Directed random graph models frequently are used successfully in modeling the population dynamics of networks of cortical neurons connected by chemical synapses. Experimental results consistently reveal that neuronal network topology is complex, however, in the sense that it differs statistically from a random network, and differs for classes of neurons that are physiologically different. This suggests that complex network models whose subnetworks have distinct topological structure may be a useful, and more biologically realistic, alternative to random networks. Here we demonstrate that the balanced excitation and inhibition frequently observed in small cortical regions can transiently disappear in otherwise standard neuronal-scale models of fluctuation-driven dynamics, solely because the random network topology was replaced by a complex clustered one, whilst not changing the in-degree of any neurons. In this network, a small subset of cells whose inhibition comes only from outside their local cluster are the cause of bistable population dynamics, where different clusters of these cells irregularly switch back and forth from a sparsely firing state to a highly active state. Transitions to the highly active state occur when a cluster of these cells spikes sufficiently often to cause strong unbalanced positive feedback to each other. Transitions back to the sparsely firing state rely on occasional large fluctuations in the amount of non-local inhibition received. Neurons in the model are homogeneous in their intrinsic dynamics and in-degrees, but differ in the abundance of various directed feedback motifs in which they participate. 
Our findings suggest that (i) models and simulations should take into account complex structure that varies for neuron and synapse classes; (ii) differences in the dynamics of neurons with similar intrinsic properties may be caused by their membership in distinctive local networks; (iii) it is important to identify neurons that share physiological properties and location, but differ in their connectivity.
Collapse
Affiliation(s)
- Mark D. McDonnell
- Computational and Theoretical Neuroscience Laboratory, Institute for Telecommunications Research, University of South Australia, Mawson Lakes, South Australia, Australia
- Lawrence M. Ward
- Department of Psychology and Brain Research Centre, University of British Columbia, Vancouver, British Columbia, Canada
20
van Ooyen A, Carnell A, de Ridder S, Tarigan B, Mansvelder HD, Bijma F, de Gunst M, van Pelt J. Independently outgrowing neurons and geometry-based synapse formation produce networks with realistic synaptic connectivity. PLoS One 2014; 9:e85858. [PMID: 24454938 PMCID: PMC3894200 DOI: 10.1371/journal.pone.0085858] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2013] [Accepted: 12/03/2013] [Indexed: 11/18/2022] Open
Abstract
Neuronal signal integration and information processing in cortical networks critically depend on the organization of synaptic connectivity. During development, neurons can form synaptic connections when their axonal and dendritic arborizations come within close proximity of each other. Although many signaling cues are thought to be involved in guiding neuronal extensions, the extent to which accidental appositions between axons and dendrites can already account for synaptic connectivity remains unclear. To investigate this, we generated a local network of cortical L2/3 neurons that grew out independently of each other and that were not guided by any extracellular cues. Synapses were formed when axonal and dendritic branches came by chance within a threshold distance of each other. Despite the absence of guidance cues, we found that the emerging synaptic connectivity showed a good agreement with available experimental data on spatial locations of synapses on dendrites and axons, number of synapses by which neurons are connected, connection probability between neurons, distance between connected neurons, and pattern of synaptic connectivity. The connectivity pattern had a small-world topology but was not scale free. Together, our results suggest that baseline synaptic connectivity in local cortical circuits may largely result from accidentally overlapping axonal and dendritic branches of independently outgrowing neurons.
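The proximity rule at the heart of this study can be sketched as a toy computation (an illustrative sketch under arbitrary assumptions about point counts, spatial spread, and the threshold distance; not the authors' morphological model): scatter axonal and dendritic branch points in 3-D and count appositions closer than a threshold.

```python
import math
import random

random.seed(0)

def branch_points(n, centre, spread):
    # Hypothetical stand-in for an independently grown arborization:
    # n branch points scattered with a Gaussian spread around a soma position (um)
    return [tuple(random.gauss(c, spread) for c in centre) for _ in range(n)]

axon = branch_points(300, (0.0, 0.0, 0.0), 50.0)   # neuron A's axonal points
dend = branch_points(300, (30.0, 0.0, 0.0), 50.0)  # neuron B's dendritic points
threshold = 2.0                                    # um; illustrative value only

def n_synapses(axon_pts, dend_pts, thr):
    # A candidate synapse forms wherever an axonal and a dendritic point
    # come within the threshold distance of each other
    return sum(
        1
        for a in axon_pts
        for d in dend_pts
        if math.dist(a, d) < thr
    )
```

Repeating such counts over a population of neuron pairs would yield the kinds of connectivity statistics (synapse numbers, connection probability versus distance) that the paper compares against experimental data.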
Affiliation(s)
- Arjen van Ooyen
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, The Netherlands
- Andrew Carnell
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, The Netherlands
- Sander de Ridder
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, The Netherlands
- Bernadetta Tarigan
- Department of Mathematics, VU University Amsterdam, Amsterdam, The Netherlands
- Huibert D. Mansvelder
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, The Netherlands
- Fetsje Bijma
- Department of Mathematics, VU University Amsterdam, Amsterdam, The Netherlands
- Mathisca de Gunst
- Department of Mathematics, VU University Amsterdam, Amsterdam, The Netherlands
- Jaap van Pelt
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, The Netherlands
21
Kutchko KM, Fröhlich F. Emergence of metastable state dynamics in interconnected cortical networks with propagation delays. PLoS Comput Biol 2013; 9:e1003304. [PMID: 24204238 PMCID: PMC3812055 DOI: 10.1371/journal.pcbi.1003304] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2013] [Accepted: 09/11/2013] [Indexed: 01/01/2023] Open
Abstract
The importance of the large number of thin-diameter and unmyelinated axons that connect different cortical areas is unknown. The pronounced propagation delays in these axons may prevent synchronization of cortical networks and therefore hinder efficient information integration and processing. Yet, such global information integration across cortical areas is vital for higher cognitive function. We hypothesized that delays in communication between cortical areas can disrupt synchronization and therefore enhance the set of activity trajectories and computations interconnected networks can perform. To evaluate this hypothesis, we studied the effect of long-range cortical projections with propagation delays in interconnected large-scale cortical networks that exhibited spontaneous rhythmic activity. Long-range connections with delays caused the emergence of metastable, spatio-temporally distinct activity states between which the networks spontaneously transitioned. Interestingly, the observed activity patterns correspond to macroscopic network dynamics such as globally synchronized activity, propagating wave fronts, and spiral waves that have been previously observed in neurophysiological recordings from humans and animal models. Transient perturbations with simulated transcranial alternating current stimulation (tACS) confirmed the multistability of the interconnected networks by switching the networks between these metastable states. Our model thus proposes that slower long-range connections enrich the landscape of activity states and represent a parsimonious mechanism for the emergence of multistability in cortical networks. These results further provide a mechanistic link between the known deficits in connectivity and cortical state dynamics in neuropsychiatric illnesses such as schizophrenia and autism, and suggest non-invasive brain stimulation as an effective treatment for these illnesses.
Affiliation(s)
- Katrina M. Kutchko
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
- Curriculum in Bioinformatics and Computational Biology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
- Flavio Fröhlich
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
- Curriculum in Bioinformatics and Computational Biology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
- Department of Cell Biology and Physiology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
- Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
22
Kaplan BA, Lansner A, Masson GS, Perrinet LU. Anisotropic connectivity implements motion-based prediction in a spiking neural network. Front Comput Neurosci 2013; 7:112. [PMID: 24062680 PMCID: PMC3775506 DOI: 10.3389/fncom.2013.00112] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2013] [Accepted: 07/25/2013] [Indexed: 12/28/2022] Open
Abstract
Predictive coding hypothesizes that the brain explicitly infers upcoming sensory input to establish a coherent representation of the world. Although it is becoming generally accepted, it is not clear on which level spiking neural networks may implement predictive coding and what function their connectivity may have. We present a network model of conductance-based integrate-and-fire neurons inspired by the architecture of retinotopic cortical areas that assumes predictive coding is implemented through network connectivity, namely in the connection delays and in selectiveness for the tuning properties of source and target cells. We show that the applied connection pattern leads to motion-based prediction in an experiment tracking a moving dot. In contrast to our proposed model, a network with random or isotropic connectivity fails to predict the path when the moving dot disappears. Furthermore, we show that a simple linear decoding approach is sufficient to transform neuronal spiking activity into a probabilistic estimate for reading out the target trajectory.
Affiliation(s)
- Bernhard A. Kaplan
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Guillaume S. Masson
- Institut de Neurosciences de la Timone, UMR7289, Centre National de la Recherche Scientifique & Aix-Marseille Université, Marseille, France
- Laurent U. Perrinet
- Institut de Neurosciences de la Timone, UMR7289, Centre National de la Recherche Scientifique & Aix-Marseille Université, Marseille, France
23
Srinivasa N, Jiang Q. Stable learning of functional maps in self-organizing spiking neural networks with continuous synaptic plasticity. Front Comput Neurosci 2013; 7:10. [PMID: 23450808 PMCID: PMC3583036 DOI: 10.3389/fncom.2013.00010] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2012] [Accepted: 02/09/2013] [Indexed: 11/13/2022] Open
Abstract
This study describes a spiking model that self-organizes for stable formation and maintenance of orientation and ocular dominance maps in the visual cortex (V1). This self-organization process simulates three development phases: an early experience-independent phase, a late experience-independent phase, and a subsequent refinement phase during which experience acts to shape the map properties. The ocular dominance maps that emerge accommodate the two sets of monocular inputs that arise from the lateral geniculate nucleus (LGN) to layer 4 of V1. The orientation selectivity maps that emerge feature well-developed iso-orientation domains and fractures. During the last two phases of development, the orientation preferences at some locations appear to rotate continuously through ±180° along circular paths, forming pinwheel-like patterns, but without any corresponding point discontinuities in the orientation gradient maps. The formation of these functional maps is driven by balanced excitatory and inhibitory currents that are established via synaptic plasticity based on spike timing for both excitatory and inhibitory synapses. The stability and maintenance of the formed maps with continuous synaptic plasticity is enabled by homeostasis caused by inhibitory plasticity. However, a prolonged exposure to repeated stimuli does alter the formed maps over time due to plasticity. The results from this study suggest that continuous synaptic plasticity in both excitatory neurons and interneurons could play a critical role in the formation, stability, and maintenance of functional maps in the cortex.
Affiliation(s)
- Narayan Srinivasa
- Center for Neural and Emergent Systems, HRL Laboratories LLC, Malibu, CA, USA