1
Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. [PMID: 38145591] [DOI: 10.1016/j.plrev.2023.12.006]
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex networks, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure, and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy.
- J M Buldú
- Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain
2
Nardin M, Csicsvari J, Tkačik G, Savin C. The Structure of Hippocampal CA1 Interactions Optimizes Spatial Coding across Experience. J Neurosci 2023; 43:8140-8156. [PMID: 37758476] [PMCID: PMC10697404] [DOI: 10.1523/jneurosci.0194-23.2023]
Abstract
Although much is known about how single neurons in the hippocampus represent an animal's position, how circuit interactions contribute to spatial coding is less well understood. Using a novel statistical estimator and theoretical modeling, both developed in the framework of maximum entropy models, we reveal highly structured CA1 cell-cell interactions in male rats during open field exploration. The statistics of these interactions depend on whether the animal is in a familiar or novel environment. In both conditions the circuit interactions optimize the encoding of spatial information, but for regimes that differ in the informativeness of their spatial inputs. This structure facilitates linear decodability, making the information easy to read out by downstream circuits. Overall, our findings suggest that the efficient coding hypothesis is not only applicable to individual neuron properties in the sensory periphery, but also to neural interactions in the central brain.

Significance Statement: Local circuit interactions play a key role in neural computation and are dynamically shaped by experience. However, measuring and assessing their effects during behavior remains a challenge. Here, we combine techniques from statistical physics and machine learning to develop new tools for determining the effects of local network interactions on neural population activity. This approach reveals highly structured local interactions between hippocampal neurons, which make the neural code more precise and easier to read out by downstream circuits, across different levels of experience. More generally, the novel combination of theory and data analysis in the framework of maximum entropy models enables traditional neural coding questions to be asked in naturalistic settings.
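The maximum entropy framework the authors work in can be sketched in a few lines. The toy below is an illustration under stated assumptions, not the paper's estimator: it fits a pairwise Ising-type model to binary population words by exact-enumeration gradient ascent, so it only scales to a handful of cells.

```python
import itertools
import numpy as np

def fit_ising(samples, n_iter=3000, lr=0.2):
    """Fit fields h and couplings J of a pairwise maximum entropy
    (Ising-type) model to 0/1 data by likelihood gradient ascent,
    with exact enumeration over all 2^n states (small n only)."""
    n = samples.shape[1]
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    m_data = samples.mean(axis=0)                # measured firing rates
    c_data = samples.T @ samples / len(samples)  # measured pairwise moments
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(n_iter):
        E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(E - E.max())
        p /= p.sum()                             # model distribution
        m_model = p @ states
        c_model = states.T @ (states * p[:, None])
        h += lr * (m_data - m_model)             # nudge moments toward data
        J += lr * (c_data - c_model)
        np.fill_diagonal(J, 0.0)                 # means are handled by h
    return h, J, states, p
```

At the fixed point the model reproduces the measured rates and pairwise correlations, which is the starting point for interaction analyses of the kind summarized above.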
Affiliation(s)
- Michele Nardin
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147
- Jozsef Csicsvari
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Gašper Tkačik
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Cristina Savin
- Center for Neural Science, New York University, New York, New York 10003
- Center for Data Science, New York University, New York, New York 10011
3
Rahman M, Nemenman I. Inferring local structure from pairwise correlations. Phys Rev E 2023; 108:034410. [PMID: 37849214] [DOI: 10.1103/physreve.108.034410]
Abstract
To construct models of large, multivariate complex systems, such as those in biology, one needs to constrain which variables are allowed to interact. This can be viewed as detecting "local" structures among the variables. In the context of a simple toy model of two-dimensional natural and synthetic images, we show that pairwise correlations between the variables, even when severely undersampled, provide enough information to recover local relations, including the dimensionality of the data, and to reconstruct the arrangement of pixels in fully scrambled images. This proves successful even though higher-order interaction structures are present in our data. We build intuition for this success, which we hope might contribute to modeling complex, multivariate systems and to explaining the success of modern attention-based machine learning approaches.
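The core claim, that pairwise correlations suffice to recover locality and unscramble pixels, can be illustrated with a toy one-dimensional version. Everything here (the boxcar smoothing, sample sizes, and top-k rule) is an assumption for illustration, not the paper's model:

```python
import numpy as np

def neighbors_from_correlations(X, k=2):
    """For each column of X, return the indices of the k other columns
    it is most strongly correlated with; under local structure these
    are candidates for its spatial neighbors."""
    C = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(C, -1.0)                 # exclude self-correlation
    return np.argsort(C, axis=1)[:, -k:]

# Toy 1-D "image": smooth white noise with a 3-tap boxcar so only nearby
# pixels correlate, then scramble the pixel order.
rng = np.random.default_rng(1)
noise = rng.standard_normal((2000, 34))
rows = noise[:, :-2] + noise[:, 1:-1] + noise[:, 2:]  # 32 correlated pixels
perm = rng.permutation(32)                             # hidden scrambling
scrambled = rows[:, perm]
nbrs = neighbors_from_correlations(scrambled)
# Scrambled column j holds original pixel perm[j]; its two strongest
# partners should sit at original distance 1 (away from the edges).
```

Chaining these recovered neighbor relations is what allows a fully scrambled image to be laid out again.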
Affiliation(s)
- Mahajabin Rahman
- Department of Physics, Emory University, Atlanta, Georgia 30322, USA
- Ilya Nemenman
- Department of Physics, Department of Biology, and Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia 30322, USA
4
Hernández DG, Sober SJ, Nemenman I. Unsupervised Bayesian Ising Approximation for decoding neural activity and other biological dictionaries. eLife 2022; 11:e68192. [PMID: 35315769] [PMCID: PMC8989415] [DOI: 10.7554/elife.68192]
Abstract
The problem of deciphering how low-level patterns (action potentials in the brain, amino acids in a protein, etc.) drive high-level biological features (sensorimotor behavior, enzymatic function) represents the central challenge of quantitative biology. The lack of general methods for doing so from the size of datasets that can be collected experimentally severely limits our understanding of the biological world. For example, in neuroscience, some sensory and motor codes have been shown to consist of precisely timed multi-spike patterns. However, the combinatorial complexity of such pattern codes has precluded development of methods for their comprehensive analysis. Thus, just as it is hard to predict a protein's function based on its sequence, we still do not understand how to accurately predict an organism's behavior based on neural activity. Here we introduce the unsupervised Bayesian Ising Approximation (uBIA) for solving this class of problems. We demonstrate its utility in an application to neural data, detecting precisely timed spike patterns that code for specific motor behaviors in a songbird vocal system. In data recorded during singing from neurons in a vocal control region, our method detects such codewords with an arbitrary number of spikes, does so from small data sets, and accounts for dependencies in occurrences of codewords. Detecting such comprehensive motor control dictionaries can improve our understanding of skilled motor control and the neural bases of sensorimotor learning in animals. To further illustrate the utility of uBIA, we used it to identify the distinct sets of activity patterns that encode vocal motor exploration versus typical song production. Crucially, our method can be used not only for analysis of neural systems, but also for understanding the structure of correlations in other biological and nonbiological datasets.
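uBIA itself performs Bayesian model selection over an Ising decomposition; as a much simpler sketch in the same spirit, one can flag candidate two-unit "codewords" whose joint occurrence exceeds an independent null. The threshold and toy data below are illustrative assumptions, not the authors' method:

```python
import numpy as np
from itertools import combinations

def overrepresented_pairs(words, z_thresh=4.0):
    """Flag pairs of binary units whose joint activation occurs far more
    often than an independent (product-of-marginals) null predicts."""
    n_samples, n_units = words.shape
    p = words.mean(axis=0)
    hits = []
    for i, j in combinations(range(n_units), 2):
        p_null = p[i] * p[j]                          # independent prediction
        k = int(np.sum(words[:, i] * words[:, j]))    # observed joint count
        sd = np.sqrt(n_samples * p_null * (1 - p_null)) + 1e-12
        z = (k - n_samples * p_null) / sd             # binomial z-score
        if z > z_thresh:
            hits.append((i, j))
    return hits
```

The full method generalizes this idea to codewords of arbitrary size while regularizing against the combinatorial explosion that defeats naive counting.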
Affiliation(s)
- Damián G Hernández
- Department of Medical Physics, Centro Atómico Bariloche and Instituto Balseiro, Bariloche, Argentina
- Samuel J Sober
- Department of Biology, Emory University, Atlanta, United States
- Ilya Nemenman
- Department of Physics, Emory University, Atlanta, United States
5
Juanico DEO. Neuronal Population Transitions Across a Quiescent-to-Active Frontier and Bifurcation. Front Physiol 2022; 13:840546. [PMID: 35222095] [PMCID: PMC8867020] [DOI: 10.3389/fphys.2022.840546]
Abstract
The mechanistic understanding of why neuronal population activity hovers near criticality remains unresolved despite the availability of experimental results. Without a coherent mathematical framework, the presence of power-law scaling is not straightforward to reconcile with findings implying epileptiform activity. Although multiple pictures have been proposed to relate the power-law scaling of avalanche statistics to phase transitions, the existence of a phase boundary in parameter space has until now been an assumption. Herein, a framework based on differential inclusions, which departs from approaches constructed from differential equations, is shown to offer an adequate consolidation of the evidence connected to criticality with that linked to hyperexcitability. Through this framework, the phase boundary is elucidated in a parameter space spanned by variables representing levels of excitation and inhibition in a neuronal network. The interpretation of neuronal populations based on this approach offers insights into the role of pharmacological and endocrinal signaling in the homeostatic regulation of neuronal population activity.
Affiliation(s)
- Drandreb Earl O. Juanico
- DataSc/ense TechnoCoRe, Technological Institute of the Philippines, Quezon City, Philippines
- NICER Program, Center for Advanced Batteries, Quezon City, Philippines
6
Parametric Copula-GP model for analyzing multidimensional neuronal and behavioral relationships. PLoS Comput Biol 2022; 18:e1009799. [PMID: 35089913] [PMCID: PMC8827448] [DOI: 10.1371/journal.pcbi.1009799]
Abstract
One of the main goals of current systems neuroscience is to understand how neuronal populations integrate sensory information to inform behavior. However, estimating stimulus or behavioral information that is encoded in high-dimensional neuronal populations is challenging. We propose a method based on parametric copulas which allows modeling joint distributions of neuronal and behavioral variables characterized by different statistics and timescales. To account for temporal or spatial changes in dependencies between variables, we model varying copula parameters by means of Gaussian Processes (GP). We validate the resulting Copula-GP framework on synthetic data and on neuronal and behavioral recordings obtained in awake mice. We show that the use of a parametric description of the high-dimensional dependence structure in our method provides better accuracy in mutual information estimation in higher dimensions compared to other non-parametric methods. Moreover, by quantifying the redundancy between neuronal and behavioral variables, our model exposed the location of the reward zone in an unsupervised manner (i.e., without using any explicit cues about the task structure). These results demonstrate that the Copula-GP framework is particularly useful for the analysis of complex multidimensional relationships between neuronal, sensory and behavioral variables.

Understanding the relationship between a set of variables is a common problem in many fields, such as weather forecasting or stock market analysis. In neuroscience, one of the main challenges is to characterize the dependencies between neuronal activity, sensory stimuli and behavioral outputs. A method of choice for modeling such statistical dependencies is based on copulas, which disentangle the dependencies from the single-variable statistics. To account for changes in dependencies, we model changes in copula parameters by means of Gaussian Processes, conditioned on a task-related variable. The novelty of our approach includes 1) explicit modeling of the dependencies; and 2) combining different copulas to describe experimentally observed variability. We validate the goodness-of-fit as well as information estimates on synthetic data and on recordings from the visual cortex of mice performing a behavioral task. Our parametric model demonstrates significantly better performance in describing high-dimensional dependencies compared to other commonly used techniques. We demonstrate that our model can estimate information and predict behaviorally relevant parameters of the task without providing any explicit cues to the model. Our results indicate that our model is interpretable in the context of neuroscience applications, scalable to large datasets and suitable for accurate statistical modeling and information estimation.
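The central copula idea, separating the dependence structure from the single-variable marginals, can be shown with ranks alone. This sketch is not the Copula-GP model (no parametric copula families, no GP-varying parameters); the rank transform and toy data are assumptions for illustration:

```python
import numpy as np

def empirical_copula_corr(x, y):
    """Correlation of rank-transformed pseudo-observations: a crude
    copula-based dependence measure that ignores the marginals."""
    n = len(x)
    u = np.argsort(np.argsort(x)) / (n - 1)   # empirical CDF values of x
    v = np.argsort(np.argsort(y)) / (n - 1)
    return np.corrcoef(u, v)[0, 1]

rng = np.random.default_rng(3)
z = rng.standard_normal(4000)                 # shared latent signal
x = z + 0.5 * rng.standard_normal(4000)
y = z + 0.5 * rng.standard_normal(4000)
r_raw = empirical_copula_corr(x, y)
# Monotone warps of the marginals leave the ranks, hence the estimate, unchanged.
r_warped = empirical_copula_corr(np.exp(x), y ** 3)
```

Because only ranks enter, the dependence estimate is identical for `x` and `exp(x)`: marginals and dependence are disentangled, which is what lets a copula framework mix variables with very different statistics and timescales.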
7
Hurwitz C, Kudryashova N, Onken A, Hennig MH. Building population models for large-scale neural recordings: Opportunities and pitfalls. Curr Opin Neurobiol 2021; 70:64-73. [PMID: 34411907] [DOI: 10.1016/j.conb.2021.07.003]
Abstract
Modern recording technologies now enable simultaneous recording from large numbers of neurons. This has driven the development of new statistical models for analyzing and interpreting neural population activity. Here, we provide a broad overview of recent developments in this area. We compare and contrast different approaches, highlight strengths and limitations, and discuss biological and mechanistic insights that these methods provide.
Affiliation(s)
- Cole Hurwitz
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
- Nina Kudryashova
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
- Arno Onken
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
- Matthias H Hennig
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
8
Zhao X, Plata G, Dixit PD. SiGMoiD: A super-statistical generative model for binary data. PLoS Comput Biol 2021; 17:e1009275. [PMID: 34358223] [PMCID: PMC8372922] [DOI: 10.1371/journal.pcbi.1009275]
Abstract
In modern computational biology, there is great interest in building probabilistic models to describe collections of a large number of co-varying binary variables. However, current approaches to building generative models rely on the modeler's identification of constraints and are computationally expensive to infer when the number of variables is large (N~100). Here, we address both these issues with the Super-statistical Generative Model for binary Data (SiGMoiD). SiGMoiD is a maximum entropy-based framework where we imagine the data as arising from a super-statistical system; individual binary variables in a given sample are coupled to the same 'bath' whose intensive variables vary from sample to sample. Importantly, unlike standard maximum entropy approaches where the modeler specifies the constraints, the SiGMoiD algorithm infers them directly from the data. Owing to this optimal choice of constraints, SiGMoiD allows us to model collections of a very large number (N>1000) of binary variables. Finally, SiGMoiD offers a reduced-dimensional description of the data, allowing us to identify clusters of similar data points as well as of binary variables. We illustrate the versatility of SiGMoiD using multiple datasets spanning several time- and length-scales.

Collectively varying binary variables are ubiquitous in modern biology. Given that the number of possible configurations of these systems typically far exceeds the number of available samples, generative models have become an essential tool in quantitative descriptions of binary data. The state-of-the-art approaches to building generative models have several conceptual limitations. Specifically, they rely on the modeler choosing system-appropriate constraints, which can be challenging in systems with many complex interactions. Moreover, they are computationally expensive to infer when the number of variables is large (N~100). To address these issues, we propose a theoretical generalization of the maximum entropy approach that allows us to model very high-dimensional data, at least an order of magnitude higher than what is currently possible. This framework will be a significant advancement in the computational analysis of covarying binary variables.
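The super-statistical picture, in which each sample is drawn while coupled to its own 'bath' of intensive variables, can be sketched as a forward sampler. This is only the generative idea; SiGMoiD's actual contribution, inferring the constraints from data, is not implemented here, and the shapes and weights below are illustrative assumptions:

```python
import numpy as np

def sample_superstatistical(n_samples, W, rng):
    """Sample binary words from a latent 'bath' model: each sample draws
    its own intensive variables z, and unit i is active with probability
    sigmoid(W[i] @ z), so units covary through the shared bath."""
    n_units, n_latent = W.shape
    Z = rng.standard_normal((n_samples, n_latent))   # per-sample bath state
    P = 1.0 / (1.0 + np.exp(-(Z @ W.T)))             # activation probabilities
    return (rng.random((n_samples, n_units)) < P).astype(int)

rng = np.random.default_rng(4)
W = np.full((6, 2), 1.5)   # all units coupled to the same two bath variables
X = sample_superstatistical(20000, W, rng)
```

Even though units are conditionally independent given the bath, the sample-to-sample variation of the bath induces strong pairwise correlations, which is the structure such models are built to capture.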
Affiliation(s)
- Xiaochuan Zhao
- Department of Physics, University of Florida, Gainesville, Florida, United States of America
- Germán Plata
- Elanco Animal Health, Greenfield, Indiana, United States of America
- Purushottam D. Dixit
- Department of Physics, University of Florida, Gainesville, Florida, United States of America
- Genetics Institute, University of Florida, Gainesville, Florida, United States of America
9
Korhonen O, Zanin M, Papo D. Principles and open questions in functional brain network reconstruction. Hum Brain Mapp 2021; 42:3680-3711. [PMID: 34013636] [PMCID: PMC8249902] [DOI: 10.1002/hbm.25462]
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network representation often involves covert theoretical assumptions and methodological choices which affect the way networks are reconstructed from experimental data, and ultimately the resulting network properties and their interpretation. Here, we review some fundamental conceptual underpinnings and technical issues associated with brain network reconstruction, and discuss how their mutual influence contributes to clarifying the organization of brain function.
Affiliation(s)
- Onerva Korhonen
- Department of Computer Science, Aalto University School of Science, Helsinki, Finland
- Centre for Biomedical Technology, Universidad Politécnica de Madrid, Pozuelo de Alarcón, Spain
- Massimiliano Zanin
- Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC-UIB), Campus UIB, Palma de Mallorca, Spain
- David Papo
- Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy
10
Bittner SR, Palmigiano A, Piet AT, Duan CA, Brody CD, Miller KD, Cunningham J. Interrogating theoretical models of neural computation with emergent property inference. eLife 2021; 10:e56265. [PMID: 34323690] [PMCID: PMC8321557] [DOI: 10.7554/elife.56265]
Abstract
A cornerstone of theoretical neuroscience is the circuit model: a system of equations that captures a hypothesized neural mechanism. Such models are valuable when they give rise to an experimentally observed phenomenon (whether behavioral or a pattern of neural activity) and thus can offer insights into neural computation. The operation of these circuits, like all models, critically depends on the choice of model parameters. A key step is then to identify the model parameters consistent with observed phenomena: to solve the inverse problem. In this work, we present a novel technique, emergent property inference (EPI), that brings the modern probabilistic modeling toolkit to theoretical neuroscience. When theorizing circuit models, theoreticians predominantly focus on reproducing computational properties rather than a particular dataset. Our method uses deep neural networks to learn parameter distributions with these computational properties. This methodology is introduced through a motivational example of parameter inference in the stomatogastric ganglion. EPI is then shown to allow precise control over the behavior of inferred parameters and to scale in parameter dimension better than alternative techniques. In the remainder of this work, we present novel theoretical findings in models of primary visual cortex and superior colliculus, which were gained through the examination of complex parametric structure captured by EPI. Beyond its scientific contribution, this work illustrates the variety of analyses possible once deep learning is harnessed towards solving theoretical inverse problems.
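The inverse problem EPI addresses (find parameters producing an emergent property, rather than fitting a dataset) can be stated with a brute-force sketch. EPI replaces the rejection step below with a learned deep generative distribution over parameters; the 2x2 stability example is an illustrative assumption, not one of the paper's circuit models:

```python
import numpy as np

def consistent_parameters(simulate, property_ok, prior_sample, n_draw=2000, rng=None):
    """Brute-force take on the inverse problem: draw circuit parameters
    from a prior, simulate, and keep those whose emergent property holds."""
    if rng is None:
        rng = np.random.default_rng(0)
    kept = [theta for theta in (prior_sample(rng) for _ in range(n_draw))
            if property_ok(simulate(theta))]
    return np.array(kept)

# Toy circuit: a 2x2 linear connectivity; emergent property = stability.
prior = lambda r: r.uniform(-2, 2, size=4)
simulate = lambda th: np.linalg.eigvals(th.reshape(2, 2))
stable = lambda ev: ev.real.max() < 0
kept = consistent_parameters(simulate, stable, prior)
```

Rejection sampling scales terribly with parameter dimension, which is precisely the motivation for learning the consistent-parameter distribution directly.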
Affiliation(s)
- Sean R Bittner
- Department of Neuroscience, Columbia University, New York, United States
- Alex T Piet
- Princeton Neuroscience Institute, Princeton, United States
- Princeton University, Princeton, United States
- Allen Institute for Brain Science, Seattle, United States
- Chunyu A Duan
- Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Carlos D Brody
- Princeton Neuroscience Institute, Princeton, United States
- Princeton University, Princeton, United States
- Howard Hughes Medical Institute, Chevy Chase, United States
- Kenneth D Miller
- Department of Neuroscience, Columbia University, New York, United States
- John Cunningham
- Department of Statistics, Columbia University, New York, United States
11
Bondanelli G, Deneux T, Bathellier B, Ostojic S. Network dynamics underlying OFF responses in the auditory cortex. eLife 2021; 10:e53151. [PMID: 33759763] [PMCID: PMC8057817] [DOI: 10.7554/elife.53151]
Abstract
Across sensory systems, complex spatio-temporal patterns of neural activity arise following the onset (ON) and offset (OFF) of stimuli. While ON responses have been widely studied, the mechanisms generating OFF responses in cortical areas have so far not been fully elucidated. We examine here the hypothesis that OFF responses are single-cell signatures of recurrent interactions at the network level. To test this hypothesis, we performed population analyses of two-photon calcium recordings in the auditory cortex of awake mice listening to auditory stimuli, and compared them to linear single-cell and network models. While the single-cell model explained some prominent features of the data, it could not capture the structure across stimuli and trials. In contrast, the network model accounted for the low-dimensional organization of population responses and their global structure across stimuli, where distinct stimuli activated mostly orthogonal dimensions in the neural state-space.
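The network hypothesis tested here (OFF responses as transients of recurrent linear dynamics) has a textbook minimal example: a non-normal two-unit circuit whose activity grows after the input is removed before decaying. The connectivity and integration scheme below are illustrative assumptions, not the fitted model:

```python
import numpy as np

def off_transient(W, r0, n_steps=100, dt=0.1):
    """Euler-integrate the linear rate network dr/dt = -r + W r from the
    state r0 left at stimulus offset, returning the population norm."""
    r = r0.astype(float).copy()
    norms = [np.linalg.norm(r)]
    for _ in range(n_steps):
        r = r + dt * (-r + W @ r)
        norms.append(np.linalg.norm(r))
    return np.array(norms)

# Non-normal (feedforward) coupling: unit 0 drives unit 1 strongly, so the
# norm transiently grows after the input is switched off, then decays.
W = np.array([[0.0, 0.0],
              [8.0, 0.0]])
norms = off_transient(W, np.array([1.0, 0.0]))
```

Even though both eigenvalues of the effective dynamics are stable, the norm rises well above its offset value before relaxing: a single-cell decay model cannot produce this shared transient structure.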
Affiliation(s)
- Giulio Bondanelli
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d'études cognitives, ENS, PSL University, INSERM, Paris, France
- Neural Computation Laboratory, Center for Human Technologies, Istituto Italiano di Tecnologia (IIT), Genoa, Italy
- Thomas Deneux
- Département de Neurosciences Intégratives et Computationnelles (ICN), Institut des Neurosciences Paris-Saclay (NeuroPSI), UMR 9197 CNRS, Université Paris-Sud, Gif-sur-Yvette, France
- Brice Bathellier
- Département de Neurosciences Intégratives et Computationnelles (ICN), Institut des Neurosciences Paris-Saclay (NeuroPSI), UMR 9197 CNRS, Université Paris-Sud, Gif-sur-Yvette, France
- Institut Pasteur, INSERM, Institut de l'Audition, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d'études cognitives, ENS, PSL University, INSERM, Paris, France
12
Regonia PR, Takamura M, Nakano T, Ichikawa N, Fermin A, Okada G, Okamoto Y, Yamawaki S, Ikeda K, Yoshimoto J. Modeling Heterogeneous Brain Dynamics of Depression and Melancholia Using Energy Landscape Analysis. Front Psychiatry 2021; 12:780997. [PMID: 34899435] [PMCID: PMC8656401] [DOI: 10.3389/fpsyt.2021.780997]
Abstract
Our current understanding of melancholic depression is shaped by its position in the depression spectrum. The lack of consensus on how it should be treated (whether as a subtype of depression, or as a distinct disorder altogether) interferes with the recovery of suffering patients. In this study, we analyzed brain-state energy landscape models of melancholic depression, in contrast to healthy and non-melancholic energy landscapes. Our analyses showed significant group differences in basin energy, basin frequency, and transition dynamics in several functional brain networks, such as the basal ganglia, dorsal default mode, and left executive control networks. Furthermore, we found evidence suggesting a connection between energy landscape characteristics (basin characteristics) and depressive symptom scores (BDI-II and SHAPS). These results indicate that melancholic depression is distinguishable from its non-melancholic counterpart, not only in terms of depression severity, but also in brain dynamics.
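Energy landscape analysis assigns each binary brain state an energy from a fitted pairwise model and studies the local minima (basins) and transitions among them. Below is a minimal sketch of the basin-finding step, using a hand-picked two-well energy rather than parameters fitted to fMRI data:

```python
import numpy as np
from itertools import product

def local_minima(h, J):
    """Enumerate all binary states of the pairwise energy
    E(s) = -h.s - s.J.s/2 and return those below all single-flip
    neighbors: the basin minima of the landscape."""
    n = len(h)
    energy = lambda s: -(h @ s) - 0.5 * (s @ J @ s)
    minima = []
    for bits in product([0, 1], repeat=n):
        s = np.array(bits, dtype=float)
        e = energy(s)
        flips = []
        for i in range(n):
            t = s.copy()
            t[i] = 1.0 - t[i]
            flips.append(energy(t))
        if all(e < f for f in flips):
            minima.append(bits)
    return minima

# Hand-built two-well landscape: weak negative fields plus uniform positive
# couplings make the all-off and all-on states the two basins.
n = 4
h = -0.5 * np.ones(n)
J = np.ones((n, n)) - np.eye(n)
basins = local_minima(h, J)
```

Group comparisons like the one above then quantify how often resting-state data visit each basin and how easily activity crosses the barrier between them.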
Affiliation(s)
- Paul Rossener Regonia
- Division of Information Science, Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, Japan
- Department of Computer Science, College of Engineering, University of the Philippines Diliman, Quezon City, Philippines
- Masahiro Takamura
- Center for Brain, Mind and KANSEI Research Sciences, Hiroshima University, Hiroshima, Japan
- Department of Neurology, Faculty of Medicine, Shimane University, Izumo, Japan
- Takashi Nakano
- Division of Information Science, Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, Japan
- School of Medicine, Fujita Health University, Toyoake, Japan
- Naho Ichikawa
- Center for Brain, Mind and KANSEI Research Sciences, Hiroshima University, Hiroshima, Japan
- Alan Fermin
- Center for Brain, Mind and KANSEI Research Sciences, Hiroshima University, Hiroshima, Japan
- Go Okada
- Department of Psychiatry and Neurosciences, Hiroshima University, Hiroshima, Japan
- Yasumasa Okamoto
- Center for Brain, Mind and KANSEI Research Sciences, Hiroshima University, Hiroshima, Japan
- Department of Psychiatry and Neurosciences, Hiroshima University, Hiroshima, Japan
- Shigeto Yamawaki
- Center for Brain, Mind and KANSEI Research Sciences, Hiroshima University, Hiroshima, Japan
- Kazushi Ikeda
- Division of Information Science, Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, Japan
- Junichiro Yoshimoto
- Division of Information Science, Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, Japan
13
Wilting J, Priesemann V. 25 years of criticality in neuroscience - established results, open controversies, novel concepts. Curr Opin Neurobiol 2019; 58:105-111. [PMID: 31546053] [DOI: 10.1016/j.conb.2019.08.002]
Abstract
Twenty-five years ago, Dunkelmann and Radons (1994) showed that neural networks can self-organize to a critical state. In models, the critical state offers a number of computational advantages. Thus this hypothesis, and in particular the experimental work by Beggs and Plenz (2003), has triggered an avalanche of research, with thousands of studies referring to it. Nonetheless, experimental results are still contradictory. How is it possible that a hypothesis has attracted active research for decades but nonetheless remains controversial? We discuss the experimental and conceptual controversy, and then present a parsimonious solution that (i) unifies the contradictory experimental results, (ii) avoids disadvantages of a critical state, and (iii) enables rapid, adaptive tuning of network properties to task requirements.
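One concrete handle on "how close to critical" is the branching parameter m of population activity, with m approaching 1 at the critical point. The sketch below simulates a driven branching process and recovers m by naive autoregression; this is an illustration under fully sampled conditions, and under spatial subsampling such naive estimators are known to be biased, which is part of the controversy the review discusses:

```python
import numpy as np

def branching_ratio(activity):
    """Estimate the branching parameter m by regressing A[t+1] on A[t];
    m -> 1 marks the critical point of the population dynamics."""
    x = activity[:-1] - activity[:-1].mean()
    y = activity[1:] - activity[1:].mean()
    return (x @ y) / (x @ x)

# Driven branching process: each active unit triggers on average m_true
# units in the next time step, plus external drive; fully sampled here.
rng = np.random.default_rng(5)
m_true, drive, T = 0.9, 10.0, 50000
A = np.zeros(T)
A[0] = drive / (1.0 - m_true)      # start near the stationary mean
for t in range(T - 1):
    A[t + 1] = rng.poisson(m_true * A[t] + drive)
m_est = branching_ratio(A)
```

In this idealized setting the regression recovers m_true; the interesting regime is m slightly below 1, where the network sits in a reverberating state rather than exactly at criticality.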
Affiliation(s)
- J Wilting
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
- V Priesemann
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein-Center for Computational Neuroscience, Göttingen, Germany
14
Saxena S, Cunningham JP. Towards the neural population doctrine. Curr Opin Neurobiol 2019; 55:103-111. [DOI: 10.1016/j.conb.2019.02.002]
15
Paninski L, Cunningham JP. Neural data science: accelerating the experiment-analysis-theory cycle in large-scale neuroscience. Curr Opin Neurobiol 2019; 50:232-241. [PMID: 29738986] [DOI: 10.1016/j.conb.2018.04.007]
Abstract
Modern large-scale multineuronal recording methodologies, including multielectrode arrays, calcium imaging, and optogenetic techniques, produce single-neuron resolution data of a magnitude and precision that were the realm of science fiction twenty years ago. The major bottlenecks in systems and circuit neuroscience no longer lie simply in collecting data from large neural populations, but in understanding these data: developing novel scientific questions, with corresponding analysis techniques and experimental designs to fully harness these new capabilities and meaningfully interrogate these questions. Advances in methods for signal processing, network analysis, dimensionality reduction, and optimal control, developed in lockstep with advances in experimental neurotechnology, promise major breakthroughs in multiple fundamental neuroscience problems. These trends are clear in a broad array of subfields of modern neuroscience; this review focuses on recent advances in methods for analyzing neural time-series data with single-neuronal precision.
Affiliation(s)
- L Paninski
- Department of Statistics, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States; Department of Neuroscience, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States.
- J P Cunningham
- Department of Statistics, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States