1. Li Y, Zhu X, Qi Y, Wang Y. Revealing unexpected complex encoding but simple decoding mechanisms in motor cortex via separating behaviorally relevant neural signals. eLife 2024; 12:RP87881. PMID: 39120996; PMCID: PMC11315449; DOI: 10.7554/elife.87881.
Abstract
In motor cortex, behaviorally relevant neural responses are entangled with irrelevant signals, which complicates the study of encoding and decoding mechanisms. It remains unclear whether behaviorally irrelevant signals could conceal some critical truth. One solution is to accurately separate behaviorally relevant and irrelevant signals at both single-neuron and single-trial levels, but this approach remains elusive due to the unknown ground truth of behaviorally relevant signals. Therefore, we propose a framework to define, extract, and validate behaviorally relevant signals. Analyzing separated signals in three monkeys performing different reaching tasks, we found that neural responses previously considered to contain little information actually encode rich behavioral information in complex nonlinear ways. These responses are critical for neuronal redundancy and reveal that movement behaviors occupy a higher-dimensional neural space than previously expected. Surprisingly, when incorporating often-ignored neural dimensions, behaviorally relevant signals can be decoded linearly with comparable performance to nonlinear decoding, suggesting linear readout may be performed in motor cortex. Our findings suggest that separating behaviorally relevant signals may help uncover more hidden cortical mechanisms.
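As a toy illustration of the linear-versus-nonlinear decoding comparison described above (not the authors' signal-separation framework, whose details are not reproduced here), the following sketch fits a linear and a nonlinear decoder to synthetic population activity in which a few low-variance dimensions carry behavioral information nonlinearly; all dimensions and parameters are made up.

```python
# Illustrative only: compares a linear and a nonlinear decoder on synthetic
# "population activity"; this is not the paper's signal-separation method.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 1000, 60
velocity = rng.uniform(-1, 1, size=(n_trials, 2))          # hypothetical 2D hand velocity
weights = rng.normal(size=(2, n_neurons))
rates = velocity @ weights                                  # large linear component
rates[:, :10] += 0.5 * (velocity[:, :1] ** 2) @ rng.normal(size=(1, 10))  # small nonlinear dims
spikes = rates + 0.5 * rng.normal(size=rates.shape)         # noisy single-trial responses

X_tr, X_te, y_tr, y_te = train_test_split(spikes, velocity, random_state=0)
linear = Ridge(alpha=1.0).fit(X_tr, y_tr)
nonlinear = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                         random_state=0).fit(X_tr, y_tr)
print("linear R^2:   ", r2_score(y_te, linear.predict(X_te)))
print("nonlinear R^2:", r2_score(y_te, nonlinear.predict(X_te)))
```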
Affiliation(s)
- Yangang Li
- Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, China
- Nanhu Brain-Computer Interface Institute, Hangzhou, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- Xinyun Zhu
- Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, China
- Nanhu Brain-Computer Interface Institute, Hangzhou, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- Yu Qi
- Nanhu Brain-Computer Interface Institute, Hangzhou, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- Affiliated Mental Health Center & Hangzhou Seventh People’s Hospital and the MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University School of Medicine, Hangzhou, China
- Yueming Wang
- Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, China
- Nanhu Brain-Computer Interface Institute, Hangzhou, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- Affiliated Mental Health Center & Hangzhou Seventh People’s Hospital and the MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University School of Medicine, Hangzhou, China
2. Hoshal BD, Holmes CM, Bojanek K, Salisbury J, Berry MJ, Marre O, Palmer SE. Stimulus invariant aspects of the retinal code drive discriminability of natural scenes. bioRxiv [Preprint] 2024:2023.08.08.552526. PMID: 37609259; PMCID: PMC10441377; DOI: 10.1101/2023.08.08.552526.
Abstract
Everything that the brain sees must first be encoded by the retina, which maintains a reliable representation of the visual world in many different, complex natural scenes while also adapting to stimulus changes. This study quantifies whether and how the brain selectively encodes stimulus features about scene identity in complex naturalistic environments. While a wealth of previous work has dug into the static and dynamic features of the population code in retinal ganglion cells, less is known about how populations form both flexible and reliable encoding in natural moving scenes. We record from the larval salamander retina responding to five different natural movies, over many repeats, and use these data to characterize the population code in terms of single-cell fluctuations in rate and pairwise couplings between cells. Decomposing the population code into independent and cell-cell interactions reveals how broad scene structure is encoded in the retinal output. While the single-cell activity adapts to different stimuli, the population structure captured in the sparse, strong couplings is consistent across natural movies as well as synthetic stimuli. We show that these interactions contribute to encoding scene identity. We also demonstrate that this structure likely arises in part from shared bipolar cell input as well as from gap junctions between retinal ganglion cells and amacrine cells.
3. Fortunato C, Bennasar-Vázquez J, Park J, Chang JC, Miller LE, Dudman JT, Perich MG, Gallego JA. Nonlinear manifolds underlie neural population activity during behaviour. bioRxiv [Preprint] 2024:2023.07.18.549575. PMID: 37503015; PMCID: PMC10370078; DOI: 10.1101/2023.07.18.549575.
Abstract
There is rich variety in the activity of single neurons recorded during behaviour. Yet, these diverse single neuron responses can be well described by relatively few patterns of neural co-modulation. The study of such low-dimensional structure of neural population activity has provided important insights into how the brain generates behaviour. Virtually all of these studies have used linear dimensionality reduction techniques to estimate these population-wide co-modulation patterns, constraining them to a flat "neural manifold". Here, we hypothesised that since neurons have nonlinear responses and make thousands of distributed and recurrent connections that likely amplify such nonlinearities, neural manifolds should be intrinsically nonlinear. Combining neural population recordings from monkey, mouse, and human motor cortex, and mouse striatum, we show that: 1) neural manifolds are intrinsically nonlinear; 2) their nonlinearity becomes more evident during complex tasks that require more varied activity patterns; and 3) manifold nonlinearity varies across architecturally distinct brain regions. Simulations using recurrent neural network models confirmed the proposed relationship between circuit connectivity and manifold nonlinearity, including the differences across architecturally distinct regions. Thus, neural manifolds underlying the generation of behaviour are inherently nonlinear, and properly accounting for such nonlinearities will be critical as neuroscientists move towards studying numerous brain regions involved in increasingly complex and naturalistic behaviours.
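A minimal way to see why a flat (linear) manifold estimate can overstate dimensionality, assuming synthetic data rather than any of the recordings analyzed above: activity confined to a one-dimensional nonlinear manifold embedded across many "neurons" still requires several principal components.

```python
# Toy demonstration (not the paper's analysis): activity confined to a 1-D
# nonlinear manifold embedded in 30 "neurons" requires several principal
# components, because PCA can only fit flat manifolds.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, size=2000)                        # single latent variable
basis = rng.normal(size=(3, 30))
activity = np.c_[np.cos(t), np.sin(2 * t), t / np.pi] @ basis   # nonlinear 1-D curve in 30-D
activity += 0.05 * rng.normal(size=activity.shape)              # observation noise

var = PCA().fit(activity).explained_variance_ratio_
n_linear = int(np.searchsorted(np.cumsum(var), 0.95)) + 1
print(f"intrinsic dimension: 1; PCA dimensions for 95% variance: {n_linear}")
```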
Affiliation(s)
- Cátia Fortunato
- Department of Bioengineering, Imperial College London, London, UK
- Junchol Park
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Joanna C. Chang
- Department of Bioengineering, Imperial College London, London, UK
- Lee E. Miller
- Department of Neurosciences, Northwestern University, Chicago, IL, USA
- Department of Biomedical Engineering, Northwestern University, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA, and Shirley Ryan Ability Lab, Chicago, IL, USA
- Joshua T. Dudman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Matthew G. Perich
- Department of Neurosciences, Faculté de médecine, Université de Montréal, Montréal, Québec, Canada
- Québec Artificial Intelligence Institute (MILA), Montréal, Québec, Canada
- Juan A. Gallego
- Department of Bioengineering, Imperial College London, London, UK
4. Camaglia F, Nemenman I, Mora T, Walczak AM. Bayesian estimation of the Kullback-Leibler divergence for categorical systems using mixtures of Dirichlet priors. Phys Rev E 2024; 109:024305. PMID: 38491647; DOI: 10.1103/physreve.109.024305.
Abstract
In many applications in biology, engineering, and economics, identifying similarities and differences between distributions of data from complex processes requires comparing finite categorical samples of discrete counts. Statistical divergences quantify the difference between two distributions. However, their estimation is very difficult and empirical methods often fail, especially when the samples are small. We develop a Bayesian estimator of the Kullback-Leibler divergence between two probability distributions that makes use of a mixture of Dirichlet priors on the distributions being compared. We study the properties of the estimator on two examples: probabilities drawn from Dirichlet distributions and random strings of letters drawn from Markov chains. We extend the approach to the squared Hellinger divergence. Both estimators outperform other estimation techniques, with better results for data with a large number of categories and for higher values of divergences.
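A hedged sketch of the problem setting, not the mixture-of-Dirichlets estimator itself: the snippet below contrasts a naive plug-in Kullback-Leibler estimate with a single-Dirichlet (pseudocount-smoothed) plug-in estimate on small categorical samples, illustrating why the empirical estimate can fail badly when some categories go unobserved.

```python
# Illustrative contrast between a naive plug-in KL estimate and a simple
# Dirichlet-smoothed plug-in estimate; NOT the mixture-of-Dirichlets
# estimator developed in the paper.
import numpy as np

rng = np.random.default_rng(2)
K = 50                                   # number of categories
p = rng.dirichlet(np.ones(K))
q = rng.dirichlet(np.ones(K))
true_kl = np.sum(p * np.log(p / q))

def plugin_kl(counts_p, counts_q, alpha):
    """Plug-in KL with symmetric Dirichlet(alpha) pseudocounts."""
    ph = (counts_p + alpha) / (counts_p.sum() + alpha * len(counts_p))
    qh = (counts_q + alpha) / (counts_q.sum() + alpha * len(counts_q))
    return np.sum(ph * np.log(ph / qh))

n = 100                                  # small-sample regime: many categories unobserved
counts_p = rng.multinomial(n, p)
counts_q = rng.multinomial(n, q)
# tiny pseudocount avoids division by zero but the naive estimate still blows
# up whenever counts_q is zero where counts_p is not
print("true KL:            ", round(true_kl, 3))
print("naive plug-in:      ", round(plugin_kl(counts_p, counts_q, alpha=1e-12), 3))
print("Dirichlet-smoothed: ", round(plugin_kl(counts_p, counts_q, alpha=0.5), 3))
```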
Affiliation(s)
- Francesco Camaglia
- Laboratoire de physique de l'École normale supérieure, CNRS, PSL University, Sorbonne Université and Université de Paris, 75005 Paris, France
- Ilya Nemenman
- Department of Physics, Department of Biology, and Initiative for Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia 30322, USA
- Thierry Mora
- Laboratoire de physique de l'École normale supérieure, CNRS, PSL University, Sorbonne Université and Université de Paris, 75005 Paris, France
- Aleksandra M Walczak
- Laboratoire de physique de l'École normale supérieure, CNRS, PSL University, Sorbonne Université and Université de Paris, 75005 Paris, France
5. Mahuas G, Marre O, Mora T, Ferrari U. Small-correlation expansion to quantify information in noisy sensory systems. Phys Rev E 2023; 108:024406. PMID: 37723816; DOI: 10.1103/physreve.108.024406.
Abstract
Neural networks encode information through their collective spiking activity in response to external stimuli. This population response is noisy and strongly correlated, with a complex interplay between correlations induced by the stimulus, and correlations caused by shared noise. Understanding how these correlations affect information transmission has so far been limited to pairs or small groups of neurons, because the curse of dimensionality impedes the evaluation of mutual information in larger populations. Here, we develop a small-correlation expansion to compute the stimulus information carried by a large population of neurons, yielding interpretable analytical expressions in terms of the neurons' firing rates and pairwise correlations. We validate the approximation on synthetic data and demonstrate its applicability to electrophysiological recordings in the vertebrate retina, allowing us to quantify the effects of noise correlations between neurons and of memory in single neurons.
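For context, the quantity that such an expansion approximates is the standard Shannon mutual information between stimulus and population response; the paper's analytical expansion in firing rates and pairwise correlations is not reproduced here.

```latex
% Mutual information between stimulus S and population response
% r = (r_1, ..., r_N), the quantity approximated by the small-correlation expansion.
I(S;R) \;=\; \sum_{s} P(s) \sum_{\mathbf{r}} P(\mathbf{r}\mid s)\,
\log_2 \frac{P(\mathbf{r}\mid s)}{P(\mathbf{r})},
\qquad
P(\mathbf{r}) \;=\; \sum_{s} P(s)\,P(\mathbf{r}\mid s).
```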
Affiliation(s)
- Gabriel Mahuas
- Institut de la Vision, Sorbonne Université, CNRS, INSERM, 17 rue Moreau, 75012 Paris, France
- Laboratoire de Physique de l'École Normale Supérieure, CNRS, PSL University, Sorbonne University, Université Paris-Cité, 24 rue Lhomond, 75005 Paris, France
- Olivier Marre
- Institut de la Vision, Sorbonne Université, CNRS, INSERM, 17 rue Moreau, 75012 Paris, France
- Thierry Mora
- Laboratoire de Physique de l'École Normale Supérieure, CNRS, PSL University, Sorbonne University, Université Paris-Cité, 24 rue Lhomond, 75005 Paris, France
- Ulisse Ferrari
- Institut de la Vision, Sorbonne Université, CNRS, INSERM, 17 rue Moreau, 75012 Paris, France
6.
Affiliation(s)
- Max Dabagia
- School of Computer Science, Georgia Institute of Technology, Atlanta, GA, USA
- Konrad P Kording
- Department of Biomedical Engineering, University of Pennsylvania, Philadelphia, PA, USA
- Eva L Dyer
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
7.
Abstract
An ultimate goal in retina science is to understand how the neural circuit of the retina processes natural visual scenes. Yet most studies in laboratories have long been performed with simple, artificial visual stimuli such as full-field illumination, spots of light, or gratings. The underlying assumption is that the features of the retina thus identified carry over to the more complex scenario of natural scenes. As the application of corresponding natural settings is becoming more commonplace in experimental investigations, this assumption is being put to the test and opportunities arise to discover processing features that are triggered by specific aspects of natural scenes. Here, we review how natural stimuli have been used to probe, refine, and complement knowledge accumulated under simplified stimuli, and we discuss challenges and opportunities along the way toward a comprehensive understanding of the encoding of natural scenes.
Affiliation(s)
- Dimokratis Karamanlis
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany; International Max Planck Research School for Neurosciences, Göttingen, Germany
- Helene Marianne Schreyer
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
8. Hernández DG, Sober SJ, Nemenman I. Unsupervised Bayesian Ising Approximation for decoding neural activity and other biological dictionaries. eLife 2022; 11:e68192. PMID: 35315769; PMCID: PMC8989415; DOI: 10.7554/elife.68192.
Abstract
The problem of deciphering how low-level patterns (action potentials in the brain, amino acids in a protein, etc.) drive high-level biological features (sensorimotor behavior, enzymatic function) represents the central challenge of quantitative biology. The lack of general methods for doing so from datasets of the size that can be collected experimentally severely limits our understanding of the biological world. For example, in neuroscience, some sensory and motor codes have been shown to consist of precisely timed multi-spike patterns. However, the combinatorial complexity of such pattern codes has precluded the development of methods for their comprehensive analysis. Thus, just as it is hard to predict a protein's function based on its sequence, we still do not understand how to accurately predict an organism's behavior based on neural activity. Here, we introduce the unsupervised Bayesian Ising Approximation (uBIA) for solving this class of problems. We demonstrate its utility in an application to neural data, detecting precisely timed spike patterns that code for specific motor behaviors in a songbird vocal system. In data recorded during singing from neurons in a vocal control region, our method detects such codewords with an arbitrary number of spikes, does so from small data sets, and accounts for dependencies in occurrences of codewords. Detecting such comprehensive motor control dictionaries can improve our understanding of skilled motor control and the neural bases of sensorimotor learning in animals. To further illustrate the utility of uBIA, we used it to identify the distinct sets of activity patterns that encode vocal motor exploration versus typical song production. Crucially, our method can be used not only for analysis of neural systems, but also for understanding the structure of correlations in other biological and nonbiological datasets.
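The sketch below only illustrates the representation such dictionary analyses start from: binarizing spiking into timing-resolved binary words and counting how often each candidate codeword occurs. It is not the uBIA inference itself, and all sizes are arbitrary.

```python
# Illustrative preprocessing only: binarize spike times into "codewords"
# (patterns of spikes and silences across time bins) and count their
# occurrences. This is the representation dictionary analyses operate on;
# it is not the uBIA inference method itself.
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
n_trials, n_bins = 200, 10            # e.g., 10 bins of 2 ms around a premotor window
p_spike = np.linspace(0.05, 0.3, n_bins)
spikes = (rng.random((n_trials, n_bins)) < p_spike).astype(int)

words = Counter("".join(map(str, trial)) for trial in spikes)
print("distinct codewords observed:", len(words))
print("most common patterns:", words.most_common(3))
```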
Affiliation(s)
- Damián G Hernández
- Department of Medical Physics, Centro Atómico Bariloche and Instituto Balseiro, Bariloche, Argentina
- Department of Physics, Emory University, Atlanta, United States
- Samuel J Sober
- Department of Biology, Emory University, Atlanta, United States
- Ilya Nemenman
- Department of Physics, Emory University, Atlanta, United States
- Department of Biology, Emory University, Atlanta, United States
- Initiative in Theory and Modeling of Living Systems, Atlanta, United States
9. Sokoloski S, Aschner A, Coen-Cagli R. Modelling the neural code in large populations of correlated neurons. eLife 2021; 10:e64615. PMID: 34608865; PMCID: PMC8577837; DOI: 10.7554/elife.64615.
Abstract
Neurons respond selectively to stimuli, and thereby define a code that associates stimuli with population response patterns. Certain correlations within population responses (noise correlations) significantly impact the information content of the code, especially in large populations. Understanding the neural code thus necessitates response models that quantify the coding properties of modelled populations, while fitting large-scale neural recordings and capturing noise correlations. In this paper, we propose a class of response models based on mixture models and exponential families. We show how to fit our models with expectation-maximization, and that they capture diverse variability and covariability in recordings of macaque primary visual cortex. We also show how they facilitate accurate Bayesian decoding, provide a closed-form expression for the Fisher information, and are compatible with theories of probabilistic population coding. Our framework could allow researchers to quantitatively validate the predictions of neural coding theories against both large-scale neural recordings and cognitive performance.
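A minimal sketch of the general model class (a mixture with exponential-family observations, here independent Poisson spike counts) fit by expectation-maximization on synthetic data; the paper's stimulus-conditioned mixtures and their closed-form information quantities are not reproduced.

```python
# Minimal EM fit of a mixture of independent-Poisson components to spike
# counts. Only a sketch of the model class; not the paper's conditional
# mixtures or model selection.
import numpy as np

rng = np.random.default_rng(4)
K, N, T = 3, 20, 1500
true_rates = rng.uniform(0.5, 8.0, size=(K, N))
z = rng.integers(K, size=T)
X = rng.poisson(true_rates[z])                         # trials x neurons spike counts

log_pi = np.full(K, -np.log(K))                        # mixing weights (log)
rates = X[rng.choice(T, K, replace=False)] + 0.5       # K x N, strictly positive init

for _ in range(100):
    # E-step: responsibilities in log space (the x! term is constant in k)
    log_lik = X @ np.log(rates).T - rates.sum(axis=1)  # T x K
    log_post = log_pi + log_lik
    log_post -= log_post.max(axis=1, keepdims=True)
    resp = np.exp(log_post)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and per-neuron rates
    Nk = resp.sum(axis=0)
    log_pi = np.log(Nk / T)
    rates = (resp.T @ X + 1e-6) / (Nk[:, None] + 1e-6)

print("estimated mixture weights:", np.round(np.exp(log_pi), 3))
```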
Affiliation(s)
- Sacha Sokoloski
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, United States; Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Amir Aschner
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, United States; Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
10. The geometry of neuronal representations during rule learning reveals complementary roles of cingulate cortex and putamen. Neuron 2021; 109:839-851.e9. DOI: 10.1016/j.neuron.2020.12.027.
11. Brackbill N, Rhoades C, Kling A, Shah NP, Sher A, Litke AM, Chichilnisky EJ. Reconstruction of natural images from responses of primate retinal ganglion cells. eLife 2020; 9:e58516. PMID: 33146609; PMCID: PMC7752138; DOI: 10.7554/elife.58516.
Abstract
The visual message conveyed by a retinal ganglion cell (RGC) is often summarized by its spatial receptive field, but in principle also depends on the responses of other RGCs and natural image statistics. This possibility was explored by linear reconstruction of natural images from responses of the four numerically-dominant macaque RGC types. Reconstructions were highly consistent across retinas. The optimal reconstruction filter for each RGC - its visual message - reflected natural image statistics, and resembled the receptive field only when nearby, same-type cells were included. ON and OFF cells conveyed largely independent, complementary representations, and parasol and midget cells conveyed distinct features. Correlated activity and nonlinearities had statistically significant but minor effects on reconstruction. Simulated reconstructions, using linear-nonlinear cascade models of RGC light responses that incorporated measured spatial properties and nonlinearities, produced similar results. Spatiotemporal reconstructions exhibited similar spatial properties, suggesting that the results are relevant for natural vision.
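A toy version of the linear reconstruction approach, assuming simulated rectified-linear responses rather than recorded RGC data: reconstruction filters are fit by ridge regression on a training set and evaluated on held-out stimuli.

```python
# Toy linear stimulus reconstruction (decoding) from population responses:
# reconstruction filters are fit by ridge regression. Purely illustrative;
# the stimuli, response model, and dimensions are made up and far smaller
# than in the actual experiments.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n_images, n_pixels, n_cells = 3000, 100, 40
images = rng.normal(size=(n_images, n_pixels))
receptive_fields = rng.normal(size=(n_cells, n_pixels)) / np.sqrt(n_pixels)
drive = images @ receptive_fields.T
responses = np.maximum(drive, 0) + 0.3 * rng.normal(size=drive.shape)  # rectified + noise

train, test = slice(0, 2500), slice(2500, None)
decoder = Ridge(alpha=10.0).fit(responses[train], images[train])
reconstruction = decoder.predict(responses[test])
corr = np.corrcoef(reconstruction.ravel(), images[test].ravel())[0, 1]
print(f"pixelwise correlation between reconstruction and stimulus: {corr:.2f}")
```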
Affiliation(s)
- Nora Brackbill
- Department of Physics, Stanford University, Stanford, United States
- Colleen Rhoades
- Department of Bioengineering, Stanford University, Stanford, United States
- Alexandra Kling
- Department of Neurosurgery, Stanford School of Medicine, Stanford, United States
- Department of Ophthalmology, Stanford University, Stanford, United States
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, United States
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, United States
- Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, United States
- EJ Chichilnisky
- Department of Neurosurgery, Stanford School of Medicine, Stanford, United States
- Department of Ophthalmology, Stanford University, Stanford, United States
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
12. Shah NP, Chichilnisky EJ. Computational challenges and opportunities for a bi-directional artificial retina. J Neural Eng 2020; 17:055002. PMID: 33089827; DOI: 10.1088/1741-2552/aba8b1.
Abstract
A future artificial retina that can restore high acuity vision in blind people will rely on the capability to both read (observe) and write (control) the spiking activity of neurons using an adaptive, bi-directional and high-resolution device. Although current research is focused on overcoming the technical challenges of building and implanting such a device, exploiting its capabilities to achieve more acute visual perception will also require substantial computational advances. Using high-density large-scale recording and stimulation in the primate retina with an ex vivo multi-electrode array lab prototype, we frame several of the major computational problems, and describe current progress and future opportunities in solving them. First, we identify cell types and locations from spontaneous activity in the blind retina, and then efficiently estimate their visual response properties by using a low-dimensional manifold of inter-retina variability learned from a large experimental dataset. Second, we estimate retinal responses to a large collection of relevant electrical stimuli by passing current patterns through an electrode array, spike sorting the resulting recordings and using the results to develop a model of evoked responses. Third, we reproduce the desired responses for a given visual target by temporally dithering a diverse collection of electrical stimuli within the integration time of the visual system. Together, these novel approaches may substantially enhance artificial vision in a next-generation device.
Affiliation(s)
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America; Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA, United States of America; Department of Neurosurgery, Stanford University, Stanford, CA, United States of America
13. Rozenblit F, Gollisch T. What the salamander eye has been telling the vision scientist's brain. Semin Cell Dev Biol 2020; 106:61-71. PMID: 32359891; PMCID: PMC7493835; DOI: 10.1016/j.semcdb.2020.04.010.
Abstract
Salamanders have been habitual residents of research laboratories for more than a century, and their history in science is tightly interwoven with vision research. Nevertheless, many vision scientists - even those working with salamanders - may be unaware of how much our knowledge about vision, and particularly the retina, has been shaped by studying salamanders. In this review, we take a tour through the salamander history in vision science, highlighting the main contributions of salamanders to our understanding of the vertebrate retina. We further point out specificities of the salamander visual system and discuss the perspectives of this animal system for future vision research.
Affiliation(s)
- Fernando Rozenblit
- Department of Ophthalmology, University Medical Center Göttingen, 37073 Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, 37077 Göttingen, Germany
- Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, 37073 Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, 37077 Göttingen, Germany
14. Bojanek K, Zhu Y, MacLean J. Cyclic transitions between higher order motifs underlie sustained asynchronous spiking in sparse recurrent networks. PLoS Comput Biol 2020; 16:e1007409. PMID: 32997658; PMCID: PMC7549833; DOI: 10.1371/journal.pcbi.1007409.
Abstract
A basic—yet nontrivial—function which neocortical circuitry must satisfy is the ability to maintain stable spiking activity over time. Stable neocortical activity is asynchronous, critical, and low rate, and these features of spiking dynamics contribute to efficient computation and optimal information propagation. However, it remains unclear how neocortex maintains this asynchronous spiking regime. Here we algorithmically construct spiking neural network models, each composed of 5000 neurons. Network construction synthesized topological statistics from neocortex with a set of objective functions identifying naturalistic low-rate, asynchronous, and critical activity. We find that simulations run on the same topology exhibit sustained asynchronous activity under certain sets of initial membrane voltages but truncated activity under others. Synchrony, rate, and criticality do not provide a full explanation of this dichotomy. Consequently, in order to achieve mechanistic understanding of sustained asynchronous activity, we summarized activity as functional graphs where edges between units are defined by pairwise spike dependencies. We then analyzed the intersection between functional edges and synaptic connectivity- i.e. recruitment networks. Higher-order patterns, such as triplet or triangle motifs, have been tied to cooperativity and integration. We find, over time in each sustained simulation, low-variance periodic transitions between isomorphic triangle motifs in the recruitment networks. We quantify the phenomenon as a Markov process and discover that if the network fails to engage this stereotyped regime of motif dominance “cycling”, spiking activity truncates early. Cycling of motif dominance generalized across manipulations of synaptic weights and topologies, demonstrating the robustness of this regime for maintenance of network activity. Our results point to the crucial role of excitatory higher-order patterns in sustaining asynchronous activity in sparse recurrent networks. They also provide a possible explanation why such connectivity and activity patterns have been prominently reported in neocortex. Neocortical spiking activity tends to be low-rate and non-rhythmic, and to operate near the critical point of a phase transition. It remains unclear how this kind of spiking activity can be maintained within a neuronal network. Neurons are leaky and individual synaptic connections are sparse and weak, making the maintenance of an asynchronous regime a nontrivial problem. Higher order patterns involving more than two units abound in neocortex, and several lines of evidence suggest that they may be instrumental for brain function. For example, stable activity in vivo displays elevated clustering dominated by specific three-node (triplet) motifs. In this study we demonstrate a link between the maintenance of asynchronous activity and triplet motifs. We algorithmically build spiking neural network models to mimic the topology of neocortex and the spiking statistics that characterize wakefulness. We show that higher order coordination of synapses is always present during sustained asynchronous activity. Coordination takes the form of transitions in time between specific triangle motifs. These motifs summarize the way spikes traverse the underlying synaptic topology. The results of our model are consistent with numerous experimental observations, and their generalizability to other weakly and sparsely connected networks is predicted.
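For readers unfamiliar with triad analysis, the sketch below counts three-node motif classes in a directed graph with networkx's triadic census; in the study the edges would be functional spike dependencies restricted to synaptic connections (recruitment networks), whereas here the graph is random.

```python
# Sketch of counting three-node (triplet) motifs in a directed graph using
# networkx's triadic census. The graph here is random and stands in for a
# recruitment network built from spike dependencies.
import networkx as nx

G = nx.gnp_random_graph(200, 0.03, seed=6, directed=True)
census = nx.triadic_census(G)          # counts of all 16 directed triad classes
print("feed-forward triangles (030T):", census["030T"])
print("cyclic triangles (030C):      ", census["030C"])
closed = ["030T", "030C", "120D", "120U", "120C", "210", "300"]
print("all fully connected triads:   ", sum(census[c] for c in closed))
```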
Affiliation(s)
- Kyle Bojanek
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
- Yuqing Zhu
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
- Jason MacLean
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
- Department of Neurobiology, University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, Chicago, Illinois, United States of America
15. Berry MJ, Tkačik G. Clustering of Neural Activity: A Design Principle for Population Codes. Front Comput Neurosci 2020; 14:20. PMID: 32231528; PMCID: PMC7082423; DOI: 10.3389/fncom.2020.00020.
Abstract
We propose that correlations among neurons are generically strong enough to organize neural activity patterns into a discrete set of clusters, which can each be viewed as a population codeword. Our reasoning starts with the analysis of retinal ganglion cell data using maximum entropy models, showing that the population is robustly in a frustrated, marginally sub-critical, or glassy, state. This leads to an argument that neural populations in many other brain areas might share this structure. Next, we use latent variable models to show that this glassy state possesses well-defined clusters of neural activity. Clusters have three appealing properties: (i) clusters exhibit error correction, i.e., they are reproducibly elicited by the same stimulus despite variability at the level of constituent neurons; (ii) clusters encode qualitatively different visual features than their constituent neurons; and (iii) clusters can be learned by downstream neural circuits in an unsupervised fashion. We hypothesize that these properties give rise to a "learnable" neural code which the cortical hierarchy uses to extract increasingly complex features without supervision or reinforcement.
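As a crude illustration of the "clusters as codewords" idea only (the paper relies on maximum entropy and latent variable models, not k-means), the snippet below generates binary population patterns from a few latent modes and checks that simple clustering recovers them.

```python
# Crude illustration of "clusters as codewords": binary population activity
# patterns generated from a few latent modes are grouped with k-means.
# This is not the paper's maximum entropy / latent-variable analysis.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_neurons, n_modes, n_samples = 60, 5, 2000
mode_prob = rng.uniform(0.02, 0.6, size=(n_modes, n_neurons))   # firing prob per mode
labels = rng.integers(n_modes, size=n_samples)
patterns = (rng.random((n_samples, n_neurons)) < mode_prob[labels]).astype(float)

km = KMeans(n_clusters=n_modes, n_init=10, random_state=0).fit(patterns)
# how well do recovered clusters line up with the generating modes?
purity = np.mean([
    np.bincount(km.labels_[labels == m]).max() / np.sum(labels == m)
    for m in range(n_modes)
])
print(f"average cluster purity with respect to generating modes: {purity:.2f}")
```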
Affiliation(s)
- Michael J. Berry
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States
- Gašper Tkačik
- Institute of Science and Technology Austria, Klosterneuburg, Austria
16. Betzel RF, Wood KC, Angeloni C, Neimark Geffen M, Bassett DS. Stability of spontaneous, correlated activity in mouse auditory cortex. PLoS Comput Biol 2019; 15:e1007360. PMID: 31815941; PMCID: PMC6968873; DOI: 10.1371/journal.pcbi.1007360.
Abstract
Neural systems can be modeled as complex networks in which neural elements are represented as nodes linked to one another through structural or functional connections. The resulting network can be analyzed using mathematical tools from network science and graph theory to quantify the system’s topological organization and to better understand its function. Here, we used two-photon calcium imaging to record spontaneous activity from the same set of cells in mouse auditory cortex over the course of several weeks. We reconstruct functional networks in which cells are linked to one another by edges weighted according to the correlation of their fluorescence traces. We show that the networks exhibit modular structure across multiple topological scales and that these multi-scale modules unfold as part of a hierarchy. We also show that, on average, network architecture becomes increasingly dissimilar over time, with similarity decaying monotonically with the distance (in time) between sessions. Finally, we show that a small fraction of cells maintain strongly-correlated activity over multiple days, forming a stable temporal core surrounded by a fluctuating and variable periphery. Our work indicates a framework for studying spontaneous activity measured by two-photon calcium imaging using computational methods and graphical models from network science. The methods are flexible and easily extended to additional datasets, opening the possibility of studying cellular level network organization of neural systems and how that organization is modulated by stimuli or altered in models of disease. Neurons coordinate their activity with one another, forming networks that help support adaptive, flexible behavior. Still, little is known about the organization of these networks at the cellular scale and their stability over time. Here, we reconstruct networks from calcium imaging data recorded in mouse primary auditory cortex. We show that these networks exhibit spatially constrained, hierarchical modular structure, which may facilitate specialized information processing. However, we show that connection weights and modular structure are also variable over time, changing on a timescale of days and adopting novel network configurations. Despite this, a small subset of neurons maintain their connections to one another and preserve their modular organization across time, forming a stable temporal core surrounded by a flexible periphery. These findings represent a conceptual bridge linking network analyses of macroscale and cellular-level neuroimaging data. They also represent a complementary approach to existing circuits- and systems-based interrogation of nervous system function, opening the door for deeper and more targeted analysis in the future.
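A minimal sketch of the pipeline described above, with synthetic traces standing in for the calcium imaging data: correlate fluorescence traces, threshold the resulting functional network, and look for modules; greedy modularity maximization is used here purely as one convenient community detection choice.

```python
# Minimal functional-network sketch: correlate synthetic fluorescence traces,
# keep strong positive edges, and detect modules. Not the paper's multi-scale
# modularity analysis.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(8)
n_cells, n_frames, n_groups = 60, 2000, 3
group = rng.integers(n_groups, size=n_cells)
shared = rng.normal(size=(n_groups, n_frames))
traces = shared[group] + 1.5 * rng.normal(size=(n_cells, n_frames))

corr = np.corrcoef(traces)
np.fill_diagonal(corr, 0.0)
G = nx.from_numpy_array(corr * (corr > 0.2))      # keep only strong positive edges
communities = greedy_modularity_communities(G, weight="weight")
print("number of detected modules:", len(communities))
```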
Affiliation(s)
- Richard F Betzel
- Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America; Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, United States of America; Cognitive Science Program, Indiana University, Bloomington, Indiana, United States of America; Program in Neuroscience, Indiana University, Bloomington, Indiana, United States of America; Network Science Institute, Indiana University, Bloomington, Indiana, United States of America
- Katherine C Wood
- Department of Otorhinolaryngology: HNS, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Christopher Angeloni
- Department of Otorhinolaryngology: HNS, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Maria Neimark Geffen
- Department of Otorhinolaryngology: HNS, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Danielle S Bassett
- Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America; Department of Electrical and Systems Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America; Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America; Department of Physics & Astronomy, College of Arts & Sciences, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America; Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America; Santa Fe Institute, Santa Fe, New Mexico, United States of America
17. Levy DR, Tamir T, Kaufman M, Parabucki A, Weissbrod A, Schneidman E, Yizhar O. Dynamics of social representation in the mouse prefrontal cortex. Nat Neurosci 2019; 22:2013-2022. PMID: 31768051; DOI: 10.1038/s41593-019-0531-z.
Abstract
The prefrontal cortex (PFC) plays an important role in regulating social functions in mammals, and its dysfunction has been linked to social deficits in neurodevelopmental disorders. Yet little is known of how the PFC encodes social information and how social representations may be altered in such disorders. Here, we show that neurons in the medial PFC of freely behaving male mice preferentially respond to socially relevant olfactory cues. Population activity patterns in this region differed between social and nonsocial stimuli and underwent experience-dependent refinement. In mice lacking the autism-associated gene Cntnap2, both the categorization of sensory stimuli and the refinement of social representations were impaired. Noise levels in spontaneous population activity were higher in Cntnap2 knockouts and correlated with the degree to which social representations were disrupted. Our findings elucidate the encoding of social sensory cues in the medial PFC and provide a link between altered prefrontal dynamics and autism-associated social dysfunction.
Affiliation(s)
- Dana Rubi Levy
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Tal Tamir
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Maya Kaufman
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Ana Parabucki
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Aharon Weissbrod
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Elad Schneidman
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Ofer Yizhar
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
18. Papo D. Gauging Functional Brain Activity: From Distinguishability to Accessibility. Front Physiol 2019; 10:509. PMID: 31139089; PMCID: PMC6517676; DOI: 10.3389/fphys.2019.00509.
Abstract
Standard neuroimaging techniques provide non-invasive access not only to human brain anatomy but also to its physiology. The activity recorded with these techniques is generally called functional imaging, but what is observed per se is an instance of dynamics, from which functional brain activity should be extracted. Distinguishing between bare dynamics and genuine function is a highly non-trivial task, but a crucially important one when comparing experimental observations and interpreting their significance. Here we illustrate how neuroimaging's ability to extract genuine functional brain activity is bounded by functional representations' structure. To do so, we first provide a simple definition of functional brain activity from a system-level brain imaging perspective. We then review how the properties of the space on which brain activity is represented induce relations on observed imaging data which allow determining the extent to which two observations are functionally distinguishable and quantifying how far apart they are. It is also proposed that genuine functional distances would require defining accessibility, i.e., how a given observed condition can be accessed from another given one, under the dynamics of some neurophysiological process. We show how these properties result from the structure defined on dynamical data and dynamics-to-function projections, and consider some implications that the way and extent to which these are defined have for the interpretation of experimental data from standard system-level brain recording techniques.
Affiliation(s)
- David Papo
- SCALab, UMR CNRS 9193, Université de Lille, Villeneuve d’Ascq, France
19. Medial Prefrontal Cortex Population Activity Is Plastic Irrespective of Learning. J Neurosci 2019; 39:3470-3483. PMID: 30814311; DOI: 10.1523/jneurosci.1370-17.2019.
Abstract
The prefrontal cortex (PFC) is thought to learn the relationships between actions and their outcomes. But little is known about what changes to population activity in PFC are specific to learning these relationships. Here we characterize the plasticity of population activity in the medial PFC (mPFC) of male rats learning rules on a Y-maze. First, we show that the population always changes its patterns of joint activity between the periods of sleep either side of a training session on the maze, regardless of successful rule learning during training. Next, by comparing the structure of population activity in sleep and training, we show that this population plasticity differs between learning and nonlearning sessions. In learning sessions, the changes in population activity in post-training sleep incorporate the changes to the population activity during training on the maze. In nonlearning sessions, the changes in sleep and training are unrelated. Finally, we show evidence that the nonlearning and learning forms of population plasticity are driven by different neuron-level changes, with the nonlearning form entirely accounted for by independent changes to the excitability of individual neurons, and the learning form also including changes to firing rate couplings between neurons. Collectively, our results suggest two different forms of population plasticity in mPFC during the learning of action-outcome relationships: one a persistent change in population activity structure decoupled from overt rule-learning, and the other a directional change driven by feedback during behavior. SIGNIFICANCE STATEMENT: The PFC is thought to represent our knowledge about what action is worth doing in which context. But we do not know how the activity of neurons in PFC collectively changes when learning which actions are relevant. Here we show, in a trial-and-error task, that population activity in PFC is persistently changing, regardless of learning. Only during episodes of clear learning of relevant actions are the accompanying changes to population activity carried forward into sleep, suggesting a long-lasting form of neural plasticity. Our results suggest that representations of relevant actions in PFC are acquired by reward imposing a direction onto ongoing population plasticity.
20. Gardella C, Marre O, Mora T. Modeling the Correlated Activity of Neural Populations: A Review. Neural Comput 2018; 31:233-269. PMID: 30576613; DOI: 10.1162/neco_a_01154.
Abstract
The principles of neural encoding and computations are inherently collective and usually involve large populations of interacting neurons with highly correlated activities. While theories of neural function have long recognized the importance of collective effects in populations of neurons, only in the past two decades has it become possible to record from many cells simultaneously using advanced experimental techniques with single-spike resolution and to relate these correlations to function and behavior. This review focuses on the modeling and inference approaches that have been recently developed to describe the correlated spiking activity of populations of neurons. We cover a variety of models describing correlations between pairs of neurons, as well as between larger groups, synchronous or delayed in time, with or without the explicit influence of the stimulus, and including or not latent variables. We discuss the advantages and drawbacks of each method, as well as the computational challenges related to their application to recordings of ever larger populations.
Affiliation(s)
- Christophe Gardella
- Laboratoire de physique statistique, CNRS, Sorbonne Université, Université Paris-Diderot, and École normale supérieure, 75005 Paris, France, and Institut de la Vision, INSERM, CNRS, and Sorbonne Université, 75012 Paris, France
- Olivier Marre
- Institut de la Vision, INSERM, CNRS, and Sorbonne Université, 75012 Paris, France
- Thierry Mora
- Laboratoire de physique statistique, CNRS, Sorbonne Université, Université Paris-Diderot, and École normale supérieure, 75005 Paris, France
21. Quaglio P, Rostami V, Torre E, Grün S. Methods for identification of spike patterns in massively parallel spike trains. Biol Cybern 2018; 112:57-80. PMID: 29651582; PMCID: PMC5908877; DOI: 10.1007/s00422-018-0755-0.
Abstract
Temporally precise correlations between simultaneously recorded neurons have been interpreted as signatures of cell assemblies, i.e., groups of neurons that form processing units. Evidence for this hypothesis was found on the level of pairwise correlations in simultaneous recordings of few neurons. Increasing the number of simultaneously recorded neurons increases the chances to detect cell assembly activity due to the larger sample size. Recent technological advances have enabled the recording of 100 or more neurons in parallel. However, these massively parallel spike train data require novel statistical tools to be analyzed for correlations, because they raise considerable combinatorial and multiple testing issues. Recently, various such methods have started to be developed. First approaches were based on population or pairwise measures of synchronization, and later led to methods for the detection of various types of higher-order synchronization and of spatio-temporal patterns. The latest techniques combine data mining with analysis of statistical significance. Here, we give a comparative overview of these methods, of their assumptions and of the types of correlations they can detect.
Affiliation(s)
- Pietro Quaglio
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Vahid Rostami
- Computational Systems Neuroscience, Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
- Emiliano Torre
- Chair of Risk, Safety and Uncertainty Quantification, ETH Zürich, Zurich, Switzerland
- Risk Center, ETH Zürich, Zurich, Switzerland
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
22.
Abstract
The brain has no direct access to physical stimuli but only to the spiking activity evoked in sensory organs. It is unclear how the brain can learn representations of the stimuli based on those noisy, correlated responses alone. Here we show how to build an accurate distance map of responses solely from the structure of the population activity of retinal ganglion cells. We introduce the Temporal Restricted Boltzmann Machine to learn the spatiotemporal structure of the population activity and use this model to define a distance between spike trains. We show that this metric outperforms existing neural distances at discriminating pairs of stimuli that are barely distinguishable. The proposed method provides a generic and biologically plausible way to learn to associate similar stimuli based on their spiking responses, without any other knowledge of these stimuli.
Affiliation(s)
- Christophe Gardella
- Laboratoire de physique statistique, Centre National de la Recherche Scientifique, Sorbonne University, University Paris-Diderot, École normale supérieure, PSL University, 75005 Paris, France
- Institut de la Vision, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Sorbonne University, 75012 Paris, France
- Olivier Marre
- Institut de la Vision, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Sorbonne University, 75012 Paris, France
- Thierry Mora
- Laboratoire de physique statistique, Centre National de la Recherche Scientifique, Sorbonne University, University Paris-Diderot, École normale supérieure, PSL University, 75005 Paris, France
23. Ioffe ML, Berry MJ. The structured 'low temperature' phase of the retinal population code. PLoS Comput Biol 2017; 13:e1005792. PMID: 29020014; PMCID: PMC5654267; DOI: 10.1371/journal.pcbi.1005792.
Abstract
Recent advances in experimental techniques have allowed the simultaneous recordings of populations of hundreds of neurons, fostering a debate about the nature of the collective structure of population neural activity. Much of this debate has focused on the empirical findings of a phase transition in the parameter space of maximum entropy models describing the measured neural probability distributions, interpreting this phase transition to indicate a critical tuning of the neural code. Here, we instead focus on the possibility that this is a first-order phase transition which provides evidence that the real neural population is in a 'structured', collective state. We show that this collective state is robust to changes in stimulus ensemble and adaptive state. We find that the pattern of pairwise correlations between neurons has a strength that is well within the strongly correlated regime and does not require fine tuning, suggesting that this state is generic for populations of 100+ neurons. We find a clear correspondence between the emergence of a phase transition, and the emergence of attractor-like structure in the inferred energy landscape. A collective state in the neural population, in which neural activity patterns naturally form clusters, provides a consistent interpretation for our results.
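For reference, the pairwise maximum entropy (Ising-like) model family discussed here has the standard form below, with fields and couplings fit to reproduce the measured firing rates and pairwise correlations; the paper's specific fitting and phase-transition analyses are not reproduced.

```latex
% Pairwise maximum entropy model over binary spike/silence variables
% sigma_i in {0,1}; h_i and J_ij are fit to match the measured means
% <sigma_i> and pairwise correlations <sigma_i sigma_j>.
P(\sigma_1,\dots,\sigma_N) \;=\; \frac{1}{Z}\,
\exp\!\Big( \sum_{i} h_i \sigma_i \;+\; \sum_{i<j} J_{ij}\,\sigma_i \sigma_j \Big)
```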
Affiliation(s)
- Mark L. Ioffe
- Department of Physics, Princeton University, Princeton, New Jersey, United States of America
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
- Michael J. Berry
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
24. Savin C, Tkačik G. Maximum entropy models as a tool for building precise neural controls. Curr Opin Neurobiol 2017; 46:120-126. DOI: 10.1016/j.conb.2017.08.001.
25. Loback A, Prentice J, Ioffe M, Berry MJ II. Noise-Robust Modes of the Retinal Population Code Have the Geometry of "Ridges" and Correspond to Neuronal Communities. Neural Comput 2017; 29:3119-3180. PMID: 28957022; DOI: 10.1162/neco_a_01011.
Abstract
An appealing new principle for neural population codes is that correlations among neurons organize neural activity patterns into a discrete set of clusters, which can each be viewed as a noise-robust population codeword. Previous studies assumed that these codewords corresponded geometrically with local peaks in the probability landscape of neural population responses. Here, we analyze multiple data sets of the responses of approximately 150 retinal ganglion cells and show that local probability peaks are absent under broad, nonrepeated stimulus ensembles, which are characteristic of natural behavior. However, we find that neural activity still forms noise-robust clusters in this regime, albeit clusters with a different geometry. We start by defining a soft local maximum, which is a local probability maximum when constrained to a fixed spike count. Next, we show that soft local maxima are robustly present and can, moreover, be linked across different spike count levels in the probability landscape to form a ridge. We found that these ridges comprise combinations of spiking and silence in the neural population such that all of the spiking neurons are members of the same neuronal community, a notion from network theory. We argue that a neuronal community shares many of the properties of Donald Hebb's classic cell assembly and show that a simple, biologically plausible decoding algorithm can recognize the presence of a specific neuronal community.
Affiliation(s)
- Adrianna Loback
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
| | - Jason Prentice
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
| | - Mark Ioffe
- Physics Department, Princeton University, Princeton, NJ 08544, U.S.A.
| | - Michael Berry II
- Princeton Neuroscience Institute and Molecular Biology Department, Princeton University, Princeton, NJ 08544, U.S.A.
| |
|
26
|
Gallego JA, Perich MG, Miller LE, Solla SA. Neural Manifolds for the Control of Movement. Neuron 2017; 94:978-984. [PMID: 28595054 DOI: 10.1016/j.neuron.2017.05.025] [Citation(s) in RCA: 278] [Impact Index Per Article: 39.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2016] [Revised: 05/11/2017] [Accepted: 05/18/2017] [Indexed: 10/19/2022]
Abstract
The analysis of neural dynamics in several brain cortices has consistently uncovered low-dimensional manifolds that capture a significant fraction of neural variability. These neural manifolds are spanned by specific patterns of correlated neural activity, the "neural modes." We discuss a model for neural control of movement in which the time-dependent activation of these neural modes is the generator of motor behavior. This manifold-based view of motor cortex may lead to a better understanding of how the brain controls movement.
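The "neural modes" spanning such a manifold are commonly estimated with linear dimensionality reduction applied to a neurons-by-time firing-rate matrix. The sketch below is a generic PCA-on-synthetic-data illustration of that idea, not the authors' pipeline; the latent dynamics, loadings, and noise level are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_time, n_modes = 100, 500, 3

# Synthetic firing rates driven by a few shared latent signals ("neural modes"):
# every neuron is a fixed linear combination of the same low-dimensional dynamics.
latents = np.cumsum(rng.normal(size=(n_modes, n_time)), axis=1)   # smooth-ish latent dynamics
loadings = rng.normal(size=(n_neurons, n_modes))                  # per-neuron mode weights
rates = loadings @ latents + 0.5 * rng.normal(size=(n_neurons, n_time))

# PCA via SVD of the mean-centered neurons x time matrix.
centered = rates - rates.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
print("variance explained by the first 5 PCs:", np.round(var_explained[:5], 3))

# The leading columns of U estimate the neural modes; the corresponding rows of
# S[:, None] * Vt give their time-dependent activation, i.e. the trajectory on
# the low-dimensional manifold.
```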
Affiliation(s)
- Juan A Gallego
- Department of Physiology, Northwestern University, Chicago, IL 60611, USA; Neural and Cognitive Engineering Group, Centre for Robotics and Automation CSIC-UPM, Arganda del Rey 28500, Spain
| | - Matthew G Perich
- Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208, USA
| | - Lee E Miller
- Department of Physiology, Northwestern University, Chicago, IL 60611, USA; Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208, USA; Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL 60611, USA
| | - Sara A Solla
- Department of Physiology, Northwestern University, Chicago, IL 60611, USA; Department of Physics and Astronomy, Northwestern University, Evanston, IL 60208, USA.
| |
|
27
|
Onken A, Liu JK, Karunasekara PPCR, Delis I, Gollisch T, Panzeri S. Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains. PLoS Comput Biol 2016; 12:e1005189. [PMID: 27814363 PMCID: PMC5096699 DOI: 10.1371/journal.pcbi.1005189] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2016] [Accepted: 10/11/2016] [Indexed: 11/21/2022] Open
Abstract
Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional, data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing, on the ten-millisecond scale, about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding.
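As a generic, library-free illustration of factorizing a trials × neurons × time tensor into spatial patterns, temporal patterns, and trial coefficients, the sketch below runs a plain CP/PARAFAC alternating-least-squares decomposition on toy Poisson data. It is not the method of the paper, which uses space-by-time, orthogonal, and non-negative decompositions that require constrained updates; all sizes and variable names are placeholders.

```python
import numpy as np

def cp_als(X, rank, n_iter=100, seed=0):
    """Unconstrained CP (PARAFAC) decomposition by alternating least squares.
    Returns one factor matrix per tensor mode (here: trials, neurons, time)."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    for _ in range(n_iter):
        A = X.reshape(I, J * K) @ np.einsum('jr,kr->jkr', B, C).reshape(J * K, rank) \
            @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12      # fix scale ambiguity
        B = X.transpose(1, 0, 2).reshape(J, I * K) \
            @ np.einsum('ir,kr->ikr', A, C).reshape(I * K, rank) \
            @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        B /= np.linalg.norm(B, axis=0, keepdims=True) + 1e-12
        C = X.transpose(2, 0, 1).reshape(K, I * J) \
            @ np.einsum('ir,jr->ijr', A, B).reshape(I * J, rank) \
            @ np.linalg.pinv((A.T @ A) * (B.T @ B))                # C absorbs the scale
    return A, B, C

# Toy trials x neurons x time tensor built from two ground-truth spatial and
# temporal firing patterns with trial-dependent activation, plus Poisson noise.
rng = np.random.default_rng(3)
n_trials, n_neurons, n_time, rank = 60, 30, 40, 2
coef = rng.random((n_trials, rank))          # trial-dependent activation strength
spatial = rng.random((n_neurons, rank))      # which neurons fire together
temporal = rng.random((n_time, rank))        # when those groups are active
rates = np.einsum('tr,nr,sr->tns', coef, spatial, temporal)
X = rng.poisson(3.0 * rates).astype(float)

A, B, C = cp_als(X, rank)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative reconstruction error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```

Dedicated tensor libraries provide the constrained (orthogonal or non-negative) variants compared in the paper; the numpy-only version above is kept unconstrained to stay short and dependency-free.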
Affiliation(s)
- Arno Onken
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
| | - Jian K. Liu
- Department of Ophthalmology, University Medical Center Goettingen, Goettingen, Germany
- Bernstein Center for Computational Neuroscience Goettingen, Goettingen, Germany
| | - P. P. Chamanthi R. Karunasekara
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
| | - Ioannis Delis
- Department of Biomedical Engineering, Columbia University, New York, New York, United States of America
| | - Tim Gollisch
- Department of Ophthalmology, University Medical Center Goettingen, Goettingen, Germany
- Bernstein Center for Computational Neuroscience Goettingen, Goettingen, Germany
| | - Stefano Panzeri
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
| |
|
28
|
Huang H, Toyoizumi T. Clustering of neural code words revealed by a first-order phase transition. Phys Rev E 2016; 93:062416. [PMID: 27415307 DOI: 10.1103/physreve.93.062416] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2016] [Indexed: 12/23/2022]
Abstract
A network of neurons in the central nervous system collectively represents information by its spiking activity states. Typically observed states, i.e., code words, occupy only a limited portion of the state space due to constraints imposed by network interactions. Geometrical organization of code words in the state space, critical for neural information processing, is poorly understood due to its high dimensionality. Here, we explore the organization of neural code words using retinal data by computing the entropy of code words as a function of Hamming distance from a particular reference codeword. Specifically, we report that the retinal code words in the state space are divided into multiple distinct clusters separated by entropy-gaps, and that this structure is shared with well-known associative memory networks in a recallable phase. Our analysis also elucidates a special nature of the all-silent state. The all-silent state is surrounded by the densest cluster of code words and located within a reachable distance from most code words. This code-word space structure quantitatively predicts typical deviation of a state-trajectory from its initial state. Altogether, our findings reveal a non-trivial heterogeneous structure of the code-word space that shapes information representation in a biological network.
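The central quantity here, the entropy of observed code words at a given Hamming distance from a reference word, can be illustrated in a few lines of numpy (below). This is a sketch on toy independent Bernoulli spikes, so no entropy gaps are expected; with real retinal data, and with proper entropy estimation for undersampled shells, this profile is what would reveal the clusters described in the abstract.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
N, T = 15, 50000

# Toy binary population activity; real data would be binned retinal spike trains.
words = (rng.random((T, N)) < 0.08).astype(int)

reference = np.zeros(N, dtype=int)            # e.g. the all-silent state
distance = (words != reference).sum(axis=1)   # Hamming distance of each word

# Entropy of the empirical code-word distribution at each distance from the reference.
for d in range(N + 1):
    shell = words[distance == d]
    if len(shell) == 0:
        continue
    counts = np.array(list(Counter(map(tuple, shell)).values()), dtype=float)
    p = counts / counts.sum()
    H = -(p * np.log2(p)).sum()
    print(f"d = {d:2d}: {len(shell):6d} samples, entropy = {H:.2f} bits")
```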
Affiliation(s)
- Haiping Huang
- RIKEN Brain Science Institute, Wako-shi, Saitama 351-0198, Japan
| | - Taro Toyoizumi
- RIKEN Brain Science Institute, Wako-shi, Saitama 351-0198, Japan
| |
|
29
|
Doiron B, Litwin-Kumar A, Rosenbaum R, Ocker GK, Josić K. The mechanics of state-dependent neural correlations. Nat Neurosci 2016; 19:383-93. [PMID: 26906505 DOI: 10.1038/nn.4242] [Citation(s) in RCA: 173] [Impact Index Per Article: 21.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2015] [Accepted: 01/12/2016] [Indexed: 12/12/2022]
Abstract
Simultaneous recordings from large neural populations are becoming increasingly common. An important feature of population activity is the trial-to-trial correlated fluctuation of spike train outputs from recorded neuron pairs. Similar to the firing rate of single neurons, correlated activity can be modulated by a number of factors, from changes in arousal and attentional state to learning and task engagement. However, the physiological mechanisms that underlie these changes are not fully understood. We review recent theoretical results that identify three separate mechanisms that modulate spike train correlations: changes in input correlations, internal fluctuations and the transfer function of single neurons. We first examine these mechanisms in feedforward pathways and then show how the same approach can explain the modulation of correlations in recurrent networks. Such mechanistic constraints on the modulation of population activity will be important in statistical analyses of high-dimensional neural data.
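A minimal numerical illustration of the third mechanism, correlation transfer through the single-neuron transfer function, is sketched below (a toy threshold-unit model, not taken from the review). Two units receive Gaussian inputs with a fixed correlation; raising the firing threshold, which changes the operating point of the transfer function, lowers the output spike correlation even though the input correlation is unchanged.

```python
import numpy as np

rng = np.random.default_rng(5)

def output_correlation(input_corr, threshold, n_samples=200_000):
    """Spike correlation of two threshold units driven by Gaussian inputs that
    share a correlation of `input_corr`; `threshold` sets the operating point."""
    shared = rng.normal(size=n_samples)
    private = rng.normal(size=(2, n_samples))
    x = np.sqrt(input_corr) * shared + np.sqrt(1.0 - input_corr) * private
    spikes = (x > threshold).astype(float)            # binary "spike counts"
    return np.corrcoef(spikes[0], spikes[1])[0, 1]

for theta in (0.0, 1.0, 2.0):
    rho = output_correlation(input_corr=0.3, threshold=theta)
    print(f"threshold {theta:.1f}: output correlation {rho:.3f} for input correlation 0.30")
```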
Affiliation(s)
- Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, USA
| | - Ashok Litwin-Kumar
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, USA; Center for Theoretical Neuroscience, Columbia University, New York, New York, USA
| | - Robert Rosenbaum
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, USA; Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana, USA; Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, Indiana, USA
| | - Gabriel K Ocker
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, USA; Allen Institute for Brain Science, Seattle, Washington, USA
| | - Krešimir Josić
- Department of Mathematics, University of Houston, Houston, Texas, USA; Department of Biology and Biochemistry, University of Houston, Houston, Texas, USA
| |
|
30
|
Schneidman E. Towards the design principles of neural population codes. Curr Opin Neurobiol 2016; 37:133-140. [PMID: 27016639 DOI: 10.1016/j.conb.2016.03.001] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2016] [Revised: 03/01/2016] [Accepted: 03/02/2016] [Indexed: 12/18/2022]
Abstract
The ability to record the joint activity of large groups of neurons would allow for direct study of information representation and computation at the level of whole circuits in the brain. The combinatorial space of potential population activity patterns and neural noise imply that it would be impossible to directly map the relations between stimuli and population responses. Understanding of large neural population codes therefore depends on identifying simplifying design principles. We review recent results showing that strongly correlated population codes can be explained using minimal models that rely on low order relations among cells. We discuss the implications for large populations and how such models allow for mapping the semantic organization of the neural codebook and stimulus space, as well as for decoding.
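As a toy numerical companion to the point that independent (first-order) descriptions are insufficient for correlated populations, the sketch below generates binary activity with a shared slow gain fluctuation and compares observed pattern frequencies with the predictions of a model that matches only the mean firing rates. The construction and numbers are arbitrary placeholders; the minimal models discussed in the review add low-order (e.g. pairwise) terms precisely to capture the excess silence and synchrony that the independent model misses.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(6)
N, T = 10, 300_000

# Toy correlated population: a shared slow 'up/down' gain fluctuation creates
# pairwise correlations among cells without any explicit coupling.
gain = rng.choice([0.02, 0.25], size=(T, 1))
spikes = (rng.random((T, N)) < gain).astype(int)

# First-order (independent) maximum entropy model: matches mean rates only.
p_i = spikes.mean(axis=0)

def p_independent(word):
    """Probability of a binary word if all cells were independent."""
    word = np.asarray(word)
    return float(np.prod(np.where(word == 1, p_i, 1.0 - p_i)))

counts = Counter(map(tuple, spikes))
print("pattern               observed      independent model")
for k in (0, 4, 6):
    word = tuple([1] * k + [0] * (N - k))      # first k cells spike together
    print(f"{k} leading cells on    {counts[word] / T:.2e}      {p_independent(word):.2e}")
```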
Affiliation(s)
- Elad Schneidman
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel.
| |
|