1
Stan PL, Smith MA. Recent Visual Experience Reshapes V4 Neuronal Activity and Improves Perceptual Performance. J Neurosci 2024; 44:e1764232024. [PMID: 39187380 PMCID: PMC11466072 DOI: 10.1523/jneurosci.1764-23.2024]
Abstract
Recent visual experience heavily influences our visual perception, but how neuronal activity is reshaped to alter and improve perceptual discrimination remains unknown. We recorded from populations of neurons in visual cortical area V4 while two male rhesus macaque monkeys performed a natural image change detection task under different experience conditions. We found that maximizing the recent experience with a particular image led to an improvement in the ability to detect a change in that image. This improvement was associated with decreased neural responses to the image, consistent with neuronal changes previously seen in studies of adaptation and expectation. We found that the magnitude of behavioral improvement was correlated with the magnitude of response suppression. Furthermore, this suppression of activity led to an increase in signal separation, providing evidence that a reduction in activity can improve stimulus encoding. Within populations of neurons, greater recent experience was associated with decreased trial-to-trial shared variability, indicating that a reduction in variability is a key means by which experience influences perception. Taken together, the results of our study contribute to an understanding of how recent visual experience can shape our perception and behavior through modulating activity patterns in the mid-level visual cortex.
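The population-level quantity at the center of this abstract, trial-to-trial shared variability, is commonly estimated with factor analysis, which splits each neuron's spike-count variance into a shared part (captured by latent factors) and a private part. The sketch below illustrates that decomposition on simulated spike counts; the data and every parameter are invented, and this is not the paper's analysis pipeline.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 30

# Toy spike counts: one shared latent scales every neuron's response
# (shared variability), plus independent per-neuron noise.
loadings = rng.uniform(0.5, 1.5, n_neurons)        # per-neuron coupling
shared = rng.normal(0.0, 1.0, n_trials)            # one latent value per trial
counts = (10.0 + loadings[None, :] * shared[:, None]
          + rng.normal(0.0, 1.0, (n_trials, n_neurons)))

# One-factor model: covariance ~ L L^T + diag(psi). The diagonal of L L^T
# is each neuron's shared variance; psi is its private variance.
fa = FactorAnalysis(n_components=1).fit(counts)
shared_var = (fa.components_ ** 2).sum(axis=0)
percent_shared = shared_var / (shared_var + fa.noise_variance_)
```

An experience-dependent decrease in shared variability, as reported here, would appear as a drop in `percent_shared` across conditions.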
Affiliation(s)
- Patricia L Stan
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Matthew A Smith
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, Pennsylvania 15213
2
Wu S, Huang C, Snyder AC, Smith MA, Doiron B, Yu BM. Automated customization of large-scale spiking network models to neuronal population activity. Nat Comput Sci 2024; 4:690-705. [PMID: 39285002 DOI: 10.1038/s43588-024-00688-3]
Abstract
Understanding brain function is facilitated by constructing computational models that accurately reproduce aspects of brain activity. Networks of spiking neurons capture the underlying biophysics of neuronal circuits, yet their activity's dependence on model parameters is notoriously complex. As a result, heuristic methods have been used to configure spiking network models, which can lead to an inability to discover activity regimes complex enough to match large-scale neuronal recordings. Here we propose an automatic procedure, Spiking Network Optimization using Population Statistics (SNOPS), to customize spiking network models that reproduce the population-wide covariability of large-scale neuronal recordings. We first confirmed that SNOPS accurately recovers simulated neural activity statistics. Then, we applied SNOPS to recordings in macaque visual and prefrontal cortices and discovered previously unknown limitations of spiking network models. Taken together, SNOPS can guide the development of network models, thereby enabling deeper insight into how networks of neurons give rise to brain function.
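The statistic-matching idea behind SNOPS can be illustrated with a deliberately tiny stand-in: search the parameters of a doubly stochastic Poisson population until its mean count and mean pairwise covariance hit a target. SNOPS itself customizes spiking network models using Bayesian optimization; the random search, the toy model, and every parameter below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_stats(gain, shared_sd, n_trials=2000, n_neurons=20):
    """Doubly stochastic Poisson toy: one shared lognormal latent scales
    all rates. Returns (mean spike count, mean pairwise covariance)."""
    latent = np.exp(shared_sd * rng.standard_normal(n_trials))
    rates = gain * latent[:, None] * np.ones(n_neurons)
    counts = rng.poisson(rates)
    c = np.cov(counts, rowvar=False)
    off_diag = c[~np.eye(n_neurons, dtype=bool)]
    return counts.mean(), off_diag.mean()

target = (8.0, 2.0)   # desired (mean rate, mean covariance)

# Random search over (gain, shared_sd), keeping the best candidate --
# a crude stand-in for the Bayesian optimization SNOPS uses.
best, best_cost = None, np.inf
for _ in range(200):
    gain = rng.uniform(1.0, 15.0)
    shared_sd = rng.uniform(0.01, 0.8)
    m, cov = simulate_stats(gain, shared_sd)
    cost = (m - target[0]) ** 2 + (cov - target[1]) ** 2
    if cost < best_cost:
        best, best_cost = (gain, shared_sd), cost
```

The same cost-function structure (distance between simulated and target population statistics) is what makes the customization automatic rather than heuristic.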
Affiliation(s)
- Shenghao Wu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Chengcheng Huang
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Adam C Snyder
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Matthew A Smith
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Brent Doiron
- Department of Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
3
Horrocks EAB, Rodrigues FR, Saleem AB. Flexible neural population dynamics govern the speed and stability of sensory encoding in mouse visual cortex. Nat Commun 2024; 15:6415. [PMID: 39080254 PMCID: PMC11289260 DOI: 10.1038/s41467-024-50563-y]
Abstract
Time courses of neural responses underlie real-time sensory processing and perception. How these temporal dynamics change may be fundamental to how sensory systems adapt to different perceptual demands. By simultaneously recording from hundreds of neurons in mouse primary visual cortex, we examined neural population responses to visual stimuli at sub-second timescales, during different behavioural states. We discovered that during active behavioural states characterised by locomotion, single neurons shift from transient to sustained response modes, facilitating rapid emergence of visual stimulus tuning. Differences in single-neuron response dynamics were associated with changes in temporal dynamics of neural correlations, including faster stabilisation of stimulus-evoked changes in the structure of correlations during locomotion. Using Factor Analysis, we examined temporal dynamics of latent population responses and discovered that trajectories of population activity make more direct transitions between baseline and stimulus-encoding neural states during locomotion. This could be partly explained by dampening of oscillatory dynamics present during stationary behavioural states. Functionally, changes in temporal response dynamics collectively enabled faster, more stable and more efficient encoding of new visual information during locomotion. These findings reveal a principle of how sensory systems adapt to perceptual demands, where flexible neural population dynamics govern the speed and stability of sensory encoding.
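The Factor Analysis step described here can be sketched in a few lines: fit a one-factor model to binned population activity and read out the latent trajectory as it transitions from a baseline to a stimulus-encoding state. The simulated step response and all parameters below are invented, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_bins, n_neurons = 200, 40

# A 1D latent population state steps from baseline to a stimulus-encoding
# value at bin 100 (a crude stand-in for stimulus onset).
latent = np.zeros(n_bins)
latent[100:] = 3.0
loadings = rng.normal(0.0, 1.0, n_neurons)
activity = np.outer(latent, loadings) + rng.normal(0.0, 1.0, (n_bins, n_neurons))

# Factor analysis recovers the latent trajectory from population activity.
fa = FactorAnalysis(n_components=1).fit(activity)
traj = fa.transform(activity)[:, 0]

# Separation between baseline and stimulus-encoding neural states
# (absolute value because the latent's sign is arbitrary).
sep = abs(traj[100:].mean() - traj[:100].mean())
```

How directly (and how quickly) `traj` moves between the two states is the kind of population-level quantity the paper compares across behavioural states.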
Affiliation(s)
- Edward A B Horrocks
- Institute of Behavioural Neuroscience, University College London, London, WC1V 0AP, UK
- Fabio R Rodrigues
- Institute of Behavioural Neuroscience, University College London, London, WC1V 0AP, UK
- Aman B Saleem
- Institute of Behavioural Neuroscience, University College London, London, WC1V 0AP, UK
4
Morales-Gregorio A, Kurth AC, Ito J, Kleinjohann A, Barthélemy FV, Brochier T, Grün S, van Albada SJ. Neural manifolds in V1 change with top-down signals from V4 targeting the foveal region. Cell Rep 2024; 43:114371. [PMID: 38923458 DOI: 10.1016/j.celrep.2024.114371]
Abstract
High-dimensional brain activity is often organized into lower-dimensional neural manifolds. However, the neural manifolds of the visual cortex remain understudied. Here, we study large-scale multi-electrode electrophysiological recordings of macaque (Macaca mulatta) areas V1, V4, and DP with a high spatiotemporal resolution. We find that the population activity of V1 contains two separate neural manifolds, which correlate strongly with eye closure (eyes open/closed) and have distinct dimensionalities. Moreover, we find strong top-down signals from V4 to V1, particularly to the foveal region of V1, which are significantly stronger during the eyes-open periods. Finally, in silico simulations of a balanced spiking neuron network qualitatively reproduce the experimental findings. Taken together, our analyses and simulations suggest that top-down signals modulate the population activity of V1. We postulate that the top-down modulation during the eyes-open periods prepares V1 for fast and efficient visual responses, resulting in a type of visual stand-by state.
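A standard way to compare the dimensionality of two manifolds, as done here for the eyes-open and eyes-closed states, is the participation ratio of the covariance eigenspectrum. A minimal sketch on simulated embeddings; the sizes and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_samples = 100, 5000

def participation_ratio(x):
    """(sum eig)^2 / sum(eig^2) of the covariance spectrum: a common
    effective dimensionality of the manifold occupied by the activity."""
    eig = np.clip(np.linalg.eigvalsh(np.cov(x, rowvar=False)), 0.0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

def embed(latent_dim):
    """Random low-dimensional latents mapped into the full population."""
    z = rng.normal(0.0, 1.0, (n_samples, latent_dim))
    w = rng.normal(0.0, 1.0, (latent_dim, n_neurons))
    return z @ w + 0.1 * rng.normal(0.0, 1.0, (n_samples, n_neurons))

# Two "states" whose population activity occupies manifolds of
# distinct dimensionality, as reported for eyes closed vs. open.
pr_low = participation_ratio(embed(3))
pr_high = participation_ratio(embed(30))
```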
Affiliation(s)
- Aitor Morales-Gregorio
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institute of Zoology, University of Cologne, Cologne, Germany
- Anno C Kurth
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Junji Ito
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
- Alexander Kleinjohann
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Frédéric V Barthélemy
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institut de Neurosciences de la Timone (INT), CNRS and Aix-Marseille Université, Marseille, France
- Thomas Brochier
- Institut de Neurosciences de la Timone (INT), CNRS and Aix-Marseille Université, Marseille, France
- Sonja Grün
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany; JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Sacha J van Albada
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institute of Zoology, University of Cologne, Cologne, Germany
5
Stan PL, Smith MA. Recent visual experience reshapes V4 neuronal activity and improves perceptual performance. bioRxiv 2024:2023.08.27.555026. [PMID: 37693510 PMCID: PMC10491105 DOI: 10.1101/2023.08.27.555026]
Abstract
Recent visual experience heavily influences our visual perception, but how the reshaping of neuronal activity alters and improves perceptual discrimination remains unknown. We recorded from populations of neurons in visual cortical area V4 while monkeys performed a natural image change detection task under different experience conditions. We found that maximizing the recent experience with a particular image led to an improvement in the ability to detect a change in that image. This improvement was associated with decreased neural responses to the image, consistent with neuronal changes previously seen in studies of adaptation and expectation. We found that the magnitude of behavioral improvement was correlated with the magnitude of response suppression. Furthermore, this suppression of activity led to an increase in signal separation, providing evidence that a reduction in activity can improve stimulus encoding. Within populations of neurons, greater recent experience was associated with decreased trial-to-trial shared variability, indicating that a reduction in variability is a key means by which experience influences perception. Taken together, the results of our study contribute to an understanding of how recent visual experience can shape our perception and behavior through modulating activity patterns in mid-level visual cortex.
6
Manley J, Lu S, Barber K, Demas J, Kim H, Meyer D, Traub FM, Vaziri A. Simultaneous, cortex-wide dynamics of up to 1 million neurons reveal unbounded scaling of dimensionality with neuron number. Neuron 2024; 112:1694-1709.e5. [PMID: 38452763 PMCID: PMC11098699 DOI: 10.1016/j.neuron.2024.02.011]
Abstract
The brain's remarkable properties arise from the collective activity of millions of neurons. Widespread application of dimensionality reduction to multi-neuron recordings implies that neural dynamics can be approximated by low-dimensional "latent" signals reflecting neural computations. However, can such low-dimensional representations truly explain the vast range of brain activity, and if not, what is the appropriate resolution and scale of recording to capture them? Imaging neural activity at cellular resolution and near-simultaneously across the mouse cortex, we demonstrate an unbounded scaling of dimensionality with neuron number in populations up to 1 million neurons. Although half of the neural variance is contained within sixteen dimensions correlated with behavior, our discovered scaling of dimensionality corresponds to an ever-increasing number of neuronal ensembles without immediate behavioral or sensory correlates. The activity patterns underlying these higher dimensions are fine grained and cortex wide, highlighting that large-scale, cellular-resolution recording is required to uncover the full substrates of neuronal computations.
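The paper's central measurement, dimensionality as a function of neuron count, can be caricatured by subsampling "neurons" from data whose latent variances decay slowly, so that no small set of dimensions saturates the variance. The paper uses cross-validated methods on real recordings; everything below is an invented toy:

```python
import numpy as np

rng = np.random.default_rng(4)
n_total, n_samples, n_latents = 1000, 3000, 400

# Many latent dimensions with slowly decaying variances: no small set of
# factors captures all the variance (a cartoon of the paper's finding).
variances = 1.0 / np.arange(1, n_latents + 1)
z = rng.normal(0.0, 1.0, (n_samples, n_latents)) * np.sqrt(variances)
w = rng.normal(0.0, 1.0, (n_latents, n_total)) / np.sqrt(n_latents)
x = z @ w

def dims_for_variance(data, frac=0.95):
    """Number of principal components needed to reach `frac` of variance."""
    eig = np.sort(np.linalg.eigvalsh(np.cov(data, rowvar=False)))[::-1]
    csum = np.cumsum(eig) / eig.sum()
    return int(np.searchsorted(csum, frac)) + 1

# Dimensionality keeps growing as more "neurons" are sampled.
counts = [50, 200, 800]
dims = [dims_for_variance(x[:, :n]) for n in counts]
```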
Affiliation(s)
- Jason Manley
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Sihao Lu
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Kevin Barber
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Jeffrey Demas
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Hyewon Kim
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- David Meyer
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Francisca Martínez Traub
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
7
Pan X, Coen-Cagli R, Schwartz O. Probing the Structure and Functional Properties of the Dropout-Induced Correlated Variability in Convolutional Neural Networks. Neural Comput 2024; 36:621-644. [PMID: 38457752 PMCID: PMC11164410 DOI: 10.1162/neco_a_01652]
Abstract
Computational neuroscience studies have shown that the structure of neural variability to an unchanged stimulus affects the amount of information encoded. Some artificial deep neural networks, such as those with Monte Carlo dropout layers, also have variable responses when the input is fixed. However, the structure of the trial-by-trial neural covariance in neural networks with dropout has not been studied, and its role in decoding accuracy is unknown. We studied the above questions in a convolutional neural network model with dropout in both the training and testing phases. We found that trial-by-trial correlation between neurons (i.e., noise correlation) is positive and low dimensional. Neurons that are close in a feature map have larger noise correlation. These properties are surprisingly similar to the findings in the visual cortex. We further analyzed the alignment of the main axes of the covariance matrix. We found that different images share a common trial-by-trial noise covariance subspace, and they are aligned with the global signal covariance. This evidence that the noise covariance is aligned with signal covariance suggests that noise covariance in dropout neural networks reduces network accuracy, which we further verified directly with a trial-shuffling procedure commonly used in neuroscience. These findings highlight a previously overlooked aspect of dropout layers that can affect network performance. Such dropout networks could also potentially be a computational model of neural variability.
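The trial-shuffling control mentioned at the end is simple to state: permute trials independently for each unit within a stimulus condition, which preserves each unit's marginal statistics but destroys across-unit noise correlations. A toy sketch with signal-aligned correlated noise, the harmful geometry described in the abstract, so that shuffling improves decoding; all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_neurons = 500, 20
signal = np.ones(n_neurons)            # signal axis: all units co-modulate

def make_trials(mean_shift):
    """Responses to one stimulus: shared noise aligned with the signal
    axis, plus independent per-unit noise."""
    shared = rng.normal(0.0, 1.0, (n_trials, 1)) * signal
    return mean_shift * signal + shared + rng.normal(0.0, 0.5, (n_trials, n_neurons))

def accuracy(a, b):
    """Fit a mean-difference linear readout on half the trials, test on the rest."""
    half = n_trials // 2
    w = a[:half].mean(0) - b[:half].mean(0)
    thresh = (a[:half].mean(0) + b[:half].mean(0)) @ w / 2.0
    return ((a[half:] @ w > thresh).mean() + (b[half:] @ w <= thresh).mean()) / 2.0

def trial_shuffle(x):
    """Permute trials independently per unit: marginals intact,
    noise correlations destroyed."""
    return np.column_stack([rng.permutation(x[:, i]) for i in range(x.shape[1])])

a, b = make_trials(+0.5), make_trials(-0.5)
acc_raw = accuracy(a, b)
acc_shuffled = accuracy(trial_shuffle(a), trial_shuffle(b))
```

The accuracy gain after shuffling is the signature that the noise covariance was aligned with the signal covariance, the configuration the paper reports for dropout networks.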
Affiliation(s)
- Xu Pan
- Department of Computer Science, University of Miami, Coral Gables, FL 33146, U.S.A.
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Dominick Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY 10461, U.S.A.
- Odelia Schwartz
- Department of Computer Science, University of Miami, Coral Gables, FL 33146, U.S.A.
8
Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024; 36:227-270. [PMID: 38101328 DOI: 10.1162/neco_a_01631]
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to provide plausible explanations for these problems. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. Regarding the dynamics arising in these manifolds, it is proved they are topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can bear the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
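The claim that low-rank connectivity produces globally attracting low-dimensional manifolds can be checked numerically in a small rate model: with rank-2 connectivity and input confined to the same plane, the leak term decays away all orthogonal activity. A sketch with invented parameters (gain, input direction, network size):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 200
m = rng.normal(0.0, 1.0, (N, 2))           # left connectivity vectors
nvec = rng.normal(0.0, 1.0, (N, 2))        # right connectivity vectors
J = 2.0 * (m @ nvec.T) / N                 # rank-2 connectivity
u = m @ np.array([1.0, 0.5])               # input also in span(m)

# Leaky firing-rate dynamics: dx/dt = -x + J tanh(x) + u.
x = rng.normal(0.0, 1.0, N)                # random initial state
traj = []
for _ in range(400):
    x = x + 0.1 * (-x + J @ np.tanh(x) + u)
    traj.append(x.copy())
traj = np.array(traj)[200:]                # discard the transient

# Activity collapses onto the 2D manifold spanned by the columns of m:
# J and u map everything into span(m), and the leak kills the rest.
q, _ = np.linalg.qr(m)
frac = ((traj @ q) ** 2).sum() / (traj ** 2).sum()
```

Whatever attractor the embedded 2D dynamics settle into (fixed point or cycle), `frac` stays near 1, which is the invariant-manifold statement in miniature.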
Affiliation(s)
- Joan Gort
- Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193, Bellaterra, Barcelona, Spain
9
Manley J, Demas J, Kim H, Traub FM, Vaziri A. Simultaneous, cortex-wide and cellular-resolution neuronal population dynamics reveal an unbounded scaling of dimensionality with neuron number. bioRxiv 2024:2024.01.15.575721. [PMID: 38293036 PMCID: PMC10827059 DOI: 10.1101/2024.01.15.575721]
Abstract
The brain's remarkable properties arise from collective activity of millions of neurons. Widespread application of dimensionality reduction to multi-neuron recordings implies that neural dynamics can be approximated by low-dimensional "latent" signals reflecting neural computations. However, what would be the biological utility of such a redundant and metabolically costly encoding scheme and what is the appropriate resolution and scale of neural recording to understand brain function? Imaging the activity of one million neurons at cellular resolution and near-simultaneously across mouse cortex, we demonstrate an unbounded scaling of dimensionality with neuron number. While half of the neural variance lies within sixteen behavior-related dimensions, we find this unbounded scaling of dimensionality to correspond to an ever-increasing number of internal variables without immediate behavioral correlates. The activity patterns underlying these higher dimensions are fine-grained and cortex-wide, highlighting that large-scale recording is required to uncover the full neural substrates of internal and potentially cognitive processes.
Affiliation(s)
- Jason Manley
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Jeffrey Demas
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Hyewon Kim
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Francisca Martínez Traub
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Lead Contact
10
Wu S, Huang C, Snyder A, Smith M, Doiron B, Yu B. Automated customization of large-scale spiking network models to neuronal population activity. bioRxiv 2023:2023.09.21.558920. [PMID: 37790533 PMCID: PMC10542160 DOI: 10.1101/2023.09.21.558920]
Abstract
Understanding brain function is facilitated by constructing computational models that accurately reproduce aspects of brain activity. Networks of spiking neurons capture the underlying biophysics of neuronal circuits, yet the dependence of their activity on model parameters is notoriously complex. As a result, heuristic methods have been used to configure spiking network models, which can lead to an inability to discover activity regimes complex enough to match large-scale neuronal recordings. Here we propose an automatic procedure, Spiking Network Optimization using Population Statistics (SNOPS), to customize spiking network models that reproduce the population-wide covariability of large-scale neuronal recordings. We first confirmed that SNOPS accurately recovers simulated neural activity statistics. Then, we applied SNOPS to recordings in macaque visual and prefrontal cortices and discovered previously unknown limitations of spiking network models. Taken together, SNOPS can guide the development of network models and thereby enable deeper insight into how networks of neurons give rise to brain function.
Affiliation(s)
- Shenghao Wu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Chengcheng Huang
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Adam Snyder
- Department of Neuroscience, University of Rochester, Rochester, NY, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Matthew Smith
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Brent Doiron
- Department of Neurobiology, University of Chicago, Chicago, IL, USA
- Department of Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Byron Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
11
Rouse TC, Ni AM, Huang C, Cohen MR. Topological insights into the neural basis of flexible behavior. Proc Natl Acad Sci U S A 2023; 120:e2219557120. [PMID: 37279273 PMCID: PMC10268229 DOI: 10.1073/pnas.2219557120]
Abstract
It is widely accepted that there is an inextricable link between neural computations, biological mechanisms, and behavior, but it is challenging to simultaneously relate all three. Here, we show that topological data analysis (TDA) provides an important bridge between these approaches to studying how brains mediate behavior. We demonstrate that cognitive processes change the topological description of the shared activity of populations of visual neurons. These topological changes constrain and distinguish between competing mechanistic models, are connected to subjects' performance on a visual change detection task, and, via a link with network control theory, reveal a tradeoff between improving sensitivity to subtle visual stimulus changes and increasing the chance that the subject will stray off task. These connections provide a blueprint for using TDA to uncover the biological and computational mechanisms by which cognition affects behavior in health and disease.
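A flavor of the topological summaries used here: 0-dimensional persistent homology (connected components) of a point cloud can be computed with nothing more than sorted pairwise distances and union-find, since component merges in the Vietoris-Rips filtration are exactly single-linkage merges. The paper's TDA goes well beyond this toy, and the point clouds below are invented:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

def h0_deaths(points):
    """Death scales of H0 features in the Vietoris-Rips filtration: the
    edge lengths at which connected components merge (single-linkage
    clustering implemented with union-find)."""
    n = len(points)
    edges = sorted((float(np.linalg.norm(points[i] - points[j])), i, j)
                   for i, j in combinations(range(n), 2))
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)                # one component dies at scale d
    return deaths

one_blob = rng.normal(0.0, 0.3, (40, 2))
two_blobs = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
                       rng.normal(5.0, 0.3, (20, 2))])

# Two well-separated clusters leave a component alive until a large scale;
# a single blob's components all die early.
gap_one = max(h0_deaths(one_blob))
gap_two = max(h0_deaths(two_blobs))
```

Long-lived features like `gap_two` are the topological signatures that survive noise, which is what lets TDA distinguish between competing mechanistic models of the same population activity.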
Affiliation(s)
- Tevin C. Rouse
- Division of Biological Sciences, Department of Neurobiology, University of Chicago, Chicago, IL 60637
- Amy M. Ni
- Dietrich School of Arts and Sciences, Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA 15260
- Chengcheng Huang
- Dietrich School of Arts and Sciences, Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA 15260
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260
- Marlene R. Cohen
- Division of Biological Sciences, Department of Neurobiology, University of Chicago, Chicago, IL 60637
12
DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023; 111:631-649.e10. [PMID: 36630961 PMCID: PMC10118067 DOI: 10.1016/j.neuron.2022.12.007]
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
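One ingredient of the relationship described here, reliable continuous factors arising from stochastic spiking, can be seen in a minimal simulation: Poisson spikes whose rates follow low-dimensional factors, read out linearly. This is not the paper's training method (which trains recurrent spiking networks against factors); it is only an illustration, with all parameters invented:

```python
import numpy as np

rng = np.random.default_rng(8)
T, n_neurons, n_factors = 500, 100, 2

# Continuous, low-dimensional factors: slow sinusoids.
t = np.linspace(0.0, 4.0 * np.pi, T)
factors = np.stack([np.sin(t), np.cos(2.0 * t)], axis=1)

# Each neuron emits Poisson spikes whose rate is set by the factors,
# so any single neuron's spiking is discrete and variable.
W = rng.normal(0.0, 1.0, (n_factors, n_neurons))
rates = np.exp(0.5 * (factors @ W))
spikes = rng.poisson(rates)

# A linear readout recovers smooth, reliable factors from the
# seemingly stochastic population spiking.
readout, *_ = np.linalg.lstsq(spikes.astype(float), factors, rcond=None)
recovered = spikes @ readout
r = np.corrcoef(recovered[:, 0], factors[:, 0])[0, 1]
```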
Affiliation(s)
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- L F Abbott
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
13
Koh TH, Bishop WE, Kawashima T, Jeon BB, Srinivasan R, Mu Y, Wei Z, Kuhlman SJ, Ahrens MB, Chase SM, Yu BM. Dimensionality reduction of calcium-imaged neuronal population activity. Nat Comput Sci 2023; 3:71-85. [PMID: 37476302 PMCID: PMC10358781 DOI: 10.1038/s43588-022-00390-2]
Abstract
Calcium imaging has been widely adopted for its ability to record from large neuronal populations. To summarize the time course of neural activity, dimensionality reduction methods, which have been applied extensively to population spiking activity, may be particularly useful. However, it is unclear if the dimensionality reduction methods applied to spiking activity are appropriate for calcium imaging. We thus carried out a systematic study of design choices based on standard dimensionality reduction methods. We also developed a method to perform deconvolution and dimensionality reduction simultaneously (Calcium Imaging Linear Dynamical System, CILDS). CILDS most accurately recovered the single-trial, low-dimensional time courses from simulated calcium imaging data. CILDS also outperformed the other methods on calcium imaging recordings from larval zebrafish and mice. More broadly, this study represents a foundation for summarizing calcium imaging recordings of large neuronal populations using dimensionality reduction in diverse experimental settings.
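The design choice CILDS addresses can be made concrete by writing out the obvious two-stage baseline: deconvolve each fluorescence trace by inverting an AR(1) calcium kernel, then apply factor analysis; CILDS instead estimates both jointly. A sketch on simulated traces, where the decay constant, noise levels, and latent are all invented:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(9)
T, n_neurons, gamma = 1000, 30, 0.95       # gamma: calcium decay per frame

# Ground truth: one slow latent drives the whole population's activity.
latent = 3.0 * np.convolve(rng.normal(0.0, 1.0, T), np.ones(20) / 20, mode="same")
loadings = rng.uniform(0.5, 1.5, n_neurons)
activity = np.outer(latent, loadings) + 0.3 * rng.normal(0.0, 1.0, (T, n_neurons))

# Calcium imaging modeled as an AR(1) filter of activity plus measurement noise.
fluor = np.zeros_like(activity)
fluor[0] = activity[0]
for k in range(1, T):
    fluor[k] = gamma * fluor[k - 1] + activity[k]
fluor += 0.3 * rng.normal(0.0, 1.0, (T, n_neurons))

# Two-stage baseline: invert the AR(1) kernel, then fit factor analysis.
# (CILDS performs the deconvolution and latent estimation jointly.)
deconv = fluor.copy()
deconv[1:] -= gamma * fluor[:-1]
z = FactorAnalysis(n_components=1).fit_transform(deconv)[:, 0]
r = abs(np.corrcoef(z, latent)[0, 1])
```

The deconvolution step amplifies measurement noise, which is one reason a joint model like CILDS can recover single-trial latent time courses more accurately than the two-stage pipeline.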
Affiliation(s)
- Tze Hui Koh: Department of Biomedical Engineering, Carnegie Mellon University, PA; Center for the Neural Basis of Cognition, PA
- William E. Bishop: Center for the Neural Basis of Cognition, PA; Department of Machine Learning, Carnegie Mellon University, PA; Janelia Research Campus, Howard Hughes Medical Institute, VA
- Takashi Kawashima: Janelia Research Campus, Howard Hughes Medical Institute, VA; Department of Brain Sciences, Weizmann Institute of Science, Israel
- Brian B. Jeon: Department of Biomedical Engineering, Carnegie Mellon University, PA; Center for the Neural Basis of Cognition, PA
- Ranjani Srinivasan: Department of Biomedical Engineering, Carnegie Mellon University, PA; Department of Electrical and Computer Engineering, Johns Hopkins University, MD
- Yu Mu: Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, China
- Ziqiang Wei: Janelia Research Campus, Howard Hughes Medical Institute, VA
- Sandra J. Kuhlman: Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, PA; Department of Biological Sciences, Carnegie Mellon University, PA
- Misha B. Ahrens: Janelia Research Campus, Howard Hughes Medical Institute, VA
- Steven M. Chase: Department of Biomedical Engineering, Carnegie Mellon University, PA; Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, PA
- Byron M. Yu: Department of Biomedical Engineering, Carnegie Mellon University, PA; Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, PA; Department of Electrical and Computer Engineering, Carnegie Mellon University, PA
14
Mosheiff N, Ermentrout B, Huang C. Chaotic dynamics in spatially distributed neuronal networks generate population-wide shared variability. PLoS Comput Biol 2023; 19:e1010843. [PMID: 36626362 PMCID: PMC9870129 DOI: 10.1371/journal.pcbi.1010843]
Abstract
Neural activity in the cortex is highly variable in response to repeated stimuli. Population recordings across the cortex demonstrate that the variability of neuronal responses is shared among large groups of neurons and concentrates in a low dimensional space. However, the source of the population-wide shared variability is unknown. In this work, we analyzed the dynamical regimes of spatially distributed networks of excitatory and inhibitory neurons. We found chaotic spatiotemporal dynamics in networks with similar excitatory and inhibitory projection widths, an anatomical feature of the cortex. The chaotic solutions contain broadband frequency power in rate variability and have distance-dependent and low-dimensional correlations, in agreement with experimental findings. In addition, rate chaos can be induced by globally correlated noisy inputs. These results suggest that spatiotemporal chaos in cortical networks can explain the shared variability observed in neuronal population responses.
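The distance-dependent correlation structure can be illustrated with a linearized toy model (not the paper's spiking, chaotic simulations): a ring network with narrow excitation and broad inhibition, whose stationary covariance follows from the Lyapunov equation for a linear stochastic system. All widths and gains below are assumed for illustration.

```python
import numpy as np

n = 60                                   # rate units on a ring
idx = np.arange(n)
D = np.abs(idx[:, None] - idx[None, :])
D = np.minimum(D, n - D)                 # ring distance between units

# Narrow excitatory and broad inhibitory projection kernels (widths assumed).
ke = np.exp(-D**2 / (2 * 3.0**2)); ke /= ke[0].sum()
ki = np.exp(-D**2 / (2 * 7.0**2)); ki /= ki[0].sum()
W = 0.8 * ke - 0.6 * ki                  # net recurrent coupling, stable (eigenvalues < 1)

# Linear rate dynamics dr = (W - I) r dt + dB driven by independent noise have
# stationary covariance solving (W - I) S + S (W - I)^T = -I; for symmetric W,
# S = inv(I - W) / 2.
S = 0.5 * np.linalg.inv(np.eye(n) - W)
sd = np.sqrt(np.diag(S))
C = S / np.outer(sd, sd)                 # correlation matrix

corr_near = C[D == 1].mean()
corr_far = C[D == n // 2].mean()
print(f"nearby pairs: {corr_near:.3f}, distant pairs: {corr_far:.3f}")
```

Even though every unit receives independent noise, similar recurrent footprints give nearby units positive correlations that decay with distance, the static analog of the distance-dependent shared variability the chaotic networks produce dynamically.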
Affiliation(s)
- Noga Mosheiff: Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America
- Bard Ermentrout: Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Chengcheng Huang: Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America; Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
15
Guidolin A, Desroches M, Victor JD, Purpura KP, Rodrigues S. Geometry of spiking patterns in early visual cortex: a topological data analytic approach. J R Soc Interface 2022; 19:20220677. [PMID: 36382589 PMCID: PMC9667368 DOI: 10.1098/rsif.2022.0677]
Abstract
In the brain, spiking patterns live in a high-dimensional space of neurons and time. Thus, determining the intrinsic structure of this space presents a theoretical and experimental challenge. To address this challenge, we introduce a new framework for applying topological data analysis (TDA) to spike train data and use it to determine the geometry of spiking patterns in the visual cortex. Key to our approach is a parametrized family of distances based on the timing of spikes that quantifies the dissimilarity between neuronal responses. We applied TDA to visually driven single-unit and multiple single-unit spiking activity in macaque V1 and V2. TDA across timescales reveals a common geometry for spiking patterns in V1 and V2 which, among simple models, is most similar to that of a low-dimensional space endowed with Euclidean or hyperbolic geometry with modest curvature. Remarkably, the inferred geometry depends on timescale and is clearest for the timescales that are important for encoding contrast, orientation and spatial correlations.
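The "parametrized family of distances based on the timing of spikes" is in the spirit of the classic Victor-Purpura edit distance, a minimal version of which is sketched below; the cost parameter `q` sets the timescale at which spike timing matters. Details of the paper's distance family may differ, so treat this as an illustrative sketch.

```python
import numpy as np

def victor_purpura(s1, s2, q):
    """Victor-Purpura spike-time distance: minimal cost of transforming one
    spike train into the other, with insert/delete cost 1 per spike and a
    shift of a spike by dt costing q*|dt|. Timescale of sensitivity ~ 2/q."""
    n, m = len(s1), len(s2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)   # delete all spikes of s1
    G[0, :] = np.arange(m + 1)   # insert all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1,                                  # delete
                          G[i, j - 1] + 1,                                  # insert
                          G[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1])) # shift
    return G[n, m]

a = [10.0, 20.0, 30.0]
b = [10.0, 25.0, 30.0]
print(victor_purpura(a, b, q=0.0))   # q=0: only spike counts matter -> 0.0
print(victor_purpura(a, b, q=0.1))   # shifting 20 -> 25 costs 0.1*5 = 0.5
```

Sweeping `q` yields the family of dissimilarity matrices to which TDA is then applied, which is how the inferred geometry can depend on timescale.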
Affiliation(s)
- Andrea Guidolin: MCEN Team, BCAM – Basque Center for Applied Mathematics, 48009 Bilbao, Basque Country, Spain; Department of Mathematics, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden
- Mathieu Desroches: MathNeuro Team, Inria at Université Côte d’Azur, 06902 Sophia Antipolis, France
- Jonathan D. Victor: Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY 10065, USA
- Keith P. Purpura: Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY 10065, USA
- Serafim Rodrigues: MCEN Team, BCAM – Basque Center for Applied Mathematics, 48009 Bilbao, Basque Country, Spain; Ikerbasque – The Basque Foundation for Science, 48009 Bilbao, Basque Country, Spain
16
Hazon O, Minces VH, Tomàs DP, Ganguli S, Schnitzer MJ, Jercog PE. Noise correlations in neural ensemble activity limit the accuracy of hippocampal spatial representations. Nat Commun 2022; 13:4276. [PMID: 35879320 PMCID: PMC9314334 DOI: 10.1038/s41467-022-31254-y]
Abstract
Neurons in the CA1 area of the mouse hippocampus encode the position of the animal in an environment. However, given the variability in individual neurons' responses, the accuracy of this code is still poorly understood. It was proposed that downstream areas could achieve high spatial accuracy by integrating the activity of thousands of neurons, but theoretical studies point to shared fluctuations in firing rate as a potential limitation. Using high-throughput calcium imaging in freely moving mice, we characterized the factors that limit the accuracy of the CA1 spatial code. We found that noise correlations in the hippocampus bound the estimation error of spatial coding to ~10 cm (the size of a mouse). Maximal accuracy was obtained using approximately 300-1400 neurons, depending on the animal. These findings reveal intrinsic limits in the brain's representations of space and suggest that single neurons downstream of the hippocampus can extract maximal spatial information from several hundred inputs.
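A hedged sketch of why shared fluctuations bound decoding accuracy while private noise averages out: compute the optimal linear estimator's error from the linear Fisher information under each noise model. The tuning slopes, units, and noise strength below are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 4e-4                                   # strength of shared fluctuations (assumed)

def decoding_error(n):
    """Error (std) of the optimal linear position estimate from n cells, with
    private noise only vs. with added shared noise eps*f'f'^T along the signal."""
    fp = rng.normal(1.0, 0.3, size=n)        # hypothetical tuning-curve slopes
    cov_ind = np.eye(n)
    cov_cor = cov_ind + eps * np.outer(fp, fp)
    info_ind = fp @ np.linalg.solve(cov_ind, fp)   # linear Fisher information
    info_cor = fp @ np.linalg.solve(cov_cor, fp)
    return 1 / np.sqrt(info_ind), 1 / np.sqrt(info_cor)

for n in (100, 400, 1600):
    e_ind, e_cor = decoding_error(n)
    print(f"n={n}: independent noise {e_ind:.3f}, correlated noise {e_cor:.3f}")
```

With independent noise the error keeps shrinking as 1/sqrt(n); with shared noise along the signal direction it saturates at sqrt(eps), so adding neurons beyond a few hundred buys essentially nothing, which mirrors the saturation the authors report.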
Affiliation(s)
- David P Tomàs: Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Pablo E Jercog: Stanford University, Stanford, CA, USA; Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
17
Bagi B, Brecht M, Sanguinetti-Scheck JI. Unsupervised discovery of behaviorally relevant brain states in rats playing hide-and-seek. Curr Biol 2022; 32:2640-2653.e4. [PMID: 35588745 PMCID: PMC9245901 DOI: 10.1016/j.cub.2022.04.068]
Abstract
In classical neuroscience experiments, neural activity is measured across many identical trials of animals performing simple tasks and is then analyzed, associating neural responses to pre-defined experimental parameters. This type of analysis is not suitable for patterns of behavior that unfold freely, such as play behavior. Here, we attempt an alternative approach for exploratory data analysis on a single-trial level, applicable in more complex and naturalistic behavioral settings in which no two trials are identical. We analyze neural population activity in the prefrontal cortex (PFC) of rats playing hide-and-seek and show that it is possible to discover what aspects of the task are reflected in the recorded activity with a limited number of simultaneously recorded cells (≤ 31). Using hidden Markov models, we cluster population activity in the PFC into a set of neural states, each associated with a pattern of neural activity. Despite high variability in behavior, relating the inferred states to the events of the hide-and-seek game reveals neural states that consistently appear at the same phases of the game. Furthermore, we show that by applying the segmentation inferred from neural data to the animals' behavior, we can explore and discover novel correlations between neural activity and behavior. Finally, we replicate the results in a second dataset and show that population activity in the PFC displays distinct sets of states during playing hide-and-seek and observing others play the game. Overall, our results reveal robust, state-like representations in the rat PFC during unrestrained playful behavior and showcase the applicability of population analyses in naturalistic neuroscience.
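State segmentation with an HMM can be illustrated with a minimal Poisson-emission Viterbi decoder on synthetic population counts. Unlike the paper, which fits the HMM parameters to PFC recordings (and works with far fewer, noisier cells), this sketch assumes the firing rates and transition matrix are known; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_neurons, T = 2, 8, 300
rates = rng.uniform(1.0, 10.0, size=(n_states, n_neurons))  # spikes per bin, per state
trans = np.array([[0.95, 0.05], [0.05, 0.95]])              # sticky transitions

# Sample a hidden state path and Poisson population counts.
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(n_states, p=trans[states[t - 1]])
counts = rng.poisson(rates[states])                          # shape (T, n_neurons)

# Viterbi decoding with Poisson emissions (log-likelihoods up to a constant).
loglik = counts @ np.log(rates).T - rates.sum(axis=1)        # shape (T, n_states)
logtrans = np.log(trans)
delta = np.zeros((T, n_states))
psi = np.zeros((T, n_states), dtype=int)
delta[0] = np.log(0.5) + loglik[0]
for t in range(1, T):
    scores = delta[t - 1][:, None] + logtrans                # predecessor x successor
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + loglik[t]
path = np.zeros(T, dtype=int)
path[-1] = delta[-1].argmax()
for t in range(T - 2, -1, -1):
    path[t] = psi[t + 1, path[t + 1]]

accuracy = (path == states).mean()
print(f"fraction of bins assigned to the correct state: {accuracy:.2f}")
```

In the unsupervised setting of the paper, the emission rates and transitions are themselves learned (e.g., by EM), and the inferred states are only afterwards related to game events such as hiding or seeking.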
Affiliation(s)
- Bence Bagi: Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Philippstr. 13, Haus 6, 10115 Berlin, Germany; Department of Bioengineering, Imperial College London, London, UK
- Michael Brecht: Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Philippstr. 13, Haus 6, 10115 Berlin, Germany; NeuroCure Cluster of Excellence, Humboldt-Universität zu Berlin, Berlin, Germany
- Juan Ignacio Sanguinetti-Scheck: Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Philippstr. 13, Haus 6, 10115 Berlin, Germany; Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA
18
Ni AM, Huang C, Doiron B, Cohen MR. A general decoding strategy explains the relationship between behavior and correlated variability. eLife 2022; 11:e67258. [PMID: 35660134 PMCID: PMC9170243 DOI: 10.7554/elife.67258]
Abstract
Improvements in perception are frequently accompanied by decreases in correlated variability in sensory cortex. This relationship is puzzling because overall changes in correlated variability should minimally affect optimal information coding. We hypothesize that this relationship arises because instead of using optimal strategies for decoding the specific stimuli at hand, observers prioritize generality: a single set of neuronal weights to decode any stimuli. We tested this using a combination of multineuron recordings in the visual cortex of behaving rhesus monkeys and a cortical circuit model. We found that general decoders optimized for broad rather than narrow sets of visual stimuli better matched the animals’ decoding strategy, and that their performance was more related to the magnitude of correlated variability. In conclusion, the inverse relationship between perceptual performance and correlated variability can be explained by observers using a general decoding strategy, capable of decoding neuronal responses to the variety of stimuli encountered in natural vision.
Affiliation(s)
- Amy M Ni: Department of Neuroscience, University of Pittsburgh, Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, United States
- Chengcheng Huang: Department of Neuroscience, University of Pittsburgh, Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, United States; Department of Mathematics, University of Pittsburgh, Pittsburgh, United States
- Brent Doiron: Center for the Neural Basis of Cognition, Pittsburgh, United States; Department of Mathematics, University of Pittsburgh, Pittsburgh, United States
- Marlene R Cohen: Department of Neuroscience, University of Pittsburgh, Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, United States
19
Huang C, Pouget A, Doiron B. Internally generated population activity in cortical networks hinders information transmission. Sci Adv 2022; 8:eabg5244. [PMID: 35648863 PMCID: PMC9159697 DOI: 10.1126/sciadv.abg5244]
Abstract
How neuronal variability affects sensory coding is a central question in systems neuroscience, often with complex and model-dependent answers. Many studies explore population models with a parametric structure for response tuning and variability, preventing an analysis of how synaptic circuitry establishes neural codes. We study stimulus coding in networks of spiking neuron models with spatially ordered excitatory and inhibitory connectivity. The wiring structure is capable of producing rich population-wide shared neuronal variability that agrees with many features of recorded cortical activity. While both the spatial scales of feedforward and recurrent projections strongly affect noise correlations, only recurrent projections, and in particular inhibitory projections, can introduce correlations that limit the stimulus information available to a decoder. Using a spatial neural field model, we relate the recurrent circuit conditions for information limiting noise correlations to how recurrent excitation and inhibition can form spatiotemporal patterns of population-wide activity.
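The distinction between correlations that do and do not limit information can be checked directly with the linear Fisher information f'^T Sigma^{-1} f': shared fluctuations along a random axis cost almost nothing, while fluctuations along the signal direction f' cap the information a decoder can extract. This is a generic sketch of that textbook decomposition, not the paper's network model; sizes and noise strengths are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 800
fprime = rng.normal(1.0, 0.5, size=n)    # stimulus derivative of tuning (assumed)
u = rng.normal(size=n)                   # a random shared-fluctuation axis

def linear_fisher(cov):
    """Linear Fisher information f'^T cov^{-1} f'."""
    return fprime @ np.linalg.solve(cov, fprime)

info_private = linear_fisher(np.eye(n))                                 # private noise only
info_random = linear_fisher(np.eye(n) + 0.02 * np.outer(u, u))          # shared, off-signal
info_diff = linear_fisher(np.eye(n) + 0.02 * np.outer(fprime, fprime))  # shared, along f'

print(f"private {info_private:.0f}, random-axis {info_random:.0f}, "
      f"signal-aligned {info_diff:.0f}")
```

This is why, in the paper, only particular recurrent (especially inhibitory) projection structures introduce correlations that actually limit the stimulus information available to a decoder: the shared variability must project onto the signal direction.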
Affiliation(s)
- Chengcheng Huang: Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Alexandre Pouget: Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland
- Brent Doiron: Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA; Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
20
Brinkman BAW, Yan H, Maffei A, Park IM, Fontanini A, Wang J, La Camera G. Metastable dynamics of neural circuits and networks. Appl Phys Rev 2022; 9:011313. [PMID: 35284030 PMCID: PMC8900181 DOI: 10.1063/5.0062603]
Abstract
Cortical neurons emit seemingly erratic trains of action potentials or "spikes," and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which emerge spontaneously or in response to incoming activity produced by sensory inputs. In this Review, we focus on neural dynamics that is best understood as a sequence of repeated activations of a number of discrete hidden states. These transiently occupied states are termed "metastable" and have been linked to important sensory and cognitive functions. In the rodent gustatory cortex, for instance, metastable dynamics have been associated with stimulus coding, with states of expectation, and with decision making. In frontal, parietal, and motor areas of macaques, metastable activity has been related to behavioral performance, choice behavior, task difficulty, and attention. In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits. These approaches include (i) a theoretical framework based on non-equilibrium statistical physics for network dynamics; (ii) statistical approaches to extract information about metastable states from a variety of neural signals; and (iii) recent neural network approaches, informed by experimental results, to model the emergence of metastable dynamics. By discussing these topics, we aim to provide a cohesive view of how transitions between different states of activity may provide the neural underpinnings for essential functions such as perception, memory, expectation, or decision making, and more generally, how the study of metastable neural activity may advance our understanding of neural circuit function in health and disease.
Affiliation(s)
- H. Yan: State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, Jilin 130022, People's Republic of China
- J. Wang and G. La Camera: authors to whom correspondence should be addressed
21
Altan E, Solla SA, Miller LE, Perreault EJ. Estimating the dimensionality of the manifold underlying multi-electrode neural recordings. PLoS Comput Biol 2021; 17:e1008591. [PMID: 34843461 PMCID: PMC8659648 DOI: 10.1371/journal.pcbi.1008591]
Abstract
It is generally accepted that the number of neurons in a given brain area far exceeds the number of neurons needed to carry any specific function controlled by that area. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms' accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the "Joint Autoencoder", which significantly improved subsequent dimensionality estimation. Critically, we found that all algorithms failed when the intrinsic dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.
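The noise-driven overestimation described above is easy to reproduce with the simplest linear estimator, a PCA variance-threshold criterion: it recovers the true dimensionality of noise-free, linearly embedded data but inflates it once observation noise is added. The dimensions, noise level, and 95% threshold below are illustrative assumptions, not the paper's benchmark settings.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, intrinsic_dim, ambient_dim = 2000, 3, 50

latent = rng.normal(size=(n_samples, intrinsic_dim))     # known intrinsic dimensionality
embed = rng.normal(size=(intrinsic_dim, ambient_dim))    # linear embedding into 50-D
clean = latent @ embed
noisy = clean + 0.5 * rng.normal(size=clean.shape)       # per-channel observation noise

def pca_dim(data, var_threshold=0.95):
    """Smallest number of principal components explaining var_threshold of variance."""
    x = data - data.mean(axis=0)
    evals = np.linalg.svd(x, compute_uv=False) ** 2
    frac = np.cumsum(evals) / evals.sum()
    return int(np.searchsorted(frac, var_threshold) + 1)

print("noise-free estimate:", pca_dim(clean))
print("noisy estimate:", pca_dim(noisy))
```

The noisy estimate lands well above 3 because isotropic noise spreads variance across all 50 ambient dimensions, which is why the paper interposes denoising (their Joint Autoencoder) before dimensionality estimation.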
Affiliation(s)
- Ege Altan: Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America; Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Sara A. Solla: Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America; Department of Physics and Astronomy, Northwestern University, Evanston, Illinois, United States of America
- Lee E. Miller: Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America; Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America; Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America; Shirley Ryan AbilityLab, Chicago, Illinois, United States of America
- Eric J. Perreault: Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America; Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America; Shirley Ryan AbilityLab, Chicago, Illinois, United States of America
22
Umakantha A, Morina R, Cowley BR, Snyder AC, Smith MA, Yu BM. Bridging neuronal correlations and dimensionality reduction. Neuron 2021; 109:2740-2754.e12. [PMID: 34293295 PMCID: PMC8505167 DOI: 10.1016/j.neuron.2021.06.028]
Abstract
Two commonly used approaches to study interactions among neurons are spike count correlation, which describes pairs of neurons, and dimensionality reduction, applied to a population of neurons. Although both approaches have been used to study trial-to-trial neuronal variability correlated among neurons, they are often used in isolation and have not been directly related. We first established concrete mathematical and empirical relationships between pairwise correlation and metrics of population-wide covariability based on dimensionality reduction. Applying these insights to macaque V4 population recordings, we found that the previously reported decrease in mean pairwise correlation associated with attention stemmed from three distinct changes in population-wide covariability. Overall, our work builds the intuition and formalism to bridge between pairwise correlation and population-wide covariability and presents a cautionary tale about the inferences one can make about population activity by using a single statistic, whether it be mean pairwise correlation or dimensionality.
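The cautionary point about relying on mean pairwise correlation alone can be made concrete with a one-factor covariance model: two populations with identical percent shared variance, but different loading-sign structure, yield very different mean correlations. The loading values below are assumptions chosen for illustration, not the paper's V4 estimates.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
private = np.eye(n)                       # unit private (independent) variance per neuron

def mean_rsc(shared):
    """Mean off-diagonal correlation of covariance = shared + private."""
    cov = shared + private
    sd = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sd, sd)
    return corr[~np.eye(n, dtype=bool)].mean()

# (a) One latent factor with similar loadings across neurons: high mean correlation.
l_similar = np.full(n, 0.7)
rsc_a = mean_rsc(np.outer(l_similar, l_similar))

# (b) Same percent shared variance, but loadings of mixed sign: mean correlation near 0.
l_mixed = 0.7 * rng.choice([-1.0, 1.0], size=n)
rsc_b = mean_rsc(np.outer(l_mixed, l_mixed))

print(f"similar loadings: {rsc_a:.3f}, mixed-sign loadings: {rsc_b:.3f}")
```

Both populations devote the same fraction of variance (0.49/1.49, about 33%) to the shared factor, yet their mean pairwise correlations differ drastically, which is the sense in which a single summary statistic can mislead.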
Affiliation(s)
- Akash Umakantha: Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Rudina Morina: Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Benjamin R Cowley: Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Adam C Snyder: Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14642, USA; Department of Neuroscience, University of Rochester, Rochester, NY 14642, USA; Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Matthew A Smith: Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Byron M Yu: Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
23
Chen ZS, Pesaran B. Improving scalability in systems neuroscience. Neuron 2021; 109:1776-1790. [PMID: 33831347 PMCID: PMC8178195 DOI: 10.1016/j.neuron.2021.03.025]
Abstract
Emerging technologies to acquire data at increasingly greater scales promise to transform discovery in systems neuroscience. However, the current exponential growth in the scale of data acquisition is a double-edged sword. Scaling up data acquisition can speed up the cycle of discovery, but it can also lead to misinterpretation of results, or even slow the cycle, because of the challenges presented by the curse of high-dimensional data. Active, adaptive, closed-loop experimental paradigms use hardware and algorithms optimized for time-critical computation to provide feedback that interprets the observations and tests hypotheses, actively updating the stimulus or stimulation parameters. In this perspective, we review important concepts of active and adaptive experiments and discuss how selectively constraining the dimensionality and optimizing strategies at different stages of the discovery loop can help mitigate the curse of high-dimensional data. Active and adaptive closed-loop experimental paradigms can speed up discovery despite an exponentially increasing data scale, offering a road map to timely and iterative hypothesis revision and discovery in an era of exponential growth in neuroscience.
Affiliation(s)
- Zhe Sage Chen: Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY 10016, USA; Neuroscience Institute, NYU School of Medicine, New York, NY 10016, USA
- Bijan Pesaran: Neuroscience Institute, NYU School of Medicine, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA; Department of Neurology, New York University School of Medicine, New York, NY 10016, USA
24
Hennig JA, Oby ER, Golub MD, Bahureksa LA, Sadtler PT, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Chase SM, Yu BM. Learning is shaped by abrupt changes in neural engagement. Nat Neurosci 2021; 24:727-736. [PMID: 33782622 DOI: 10.1038/s41593-021-00822-8]
Abstract
Internal states such as arousal, attention and motivation modulate brain-wide neural activity, but how these processes interact with learning is not well understood. During learning, the brain modifies its neural activity to improve behavior. How do internal states affect this process? Using a brain-computer interface learning paradigm in monkeys, we identified large, abrupt fluctuations in neural population activity in motor cortex indicative of arousal-like internal state changes, which we term 'neural engagement.' In a brain-computer interface, the causal relationship between neural activity and behavior is known, allowing us to understand how neural engagement impacted behavioral performance for different task goals. We observed stereotyped changes in neural engagement that occurred regardless of how they impacted performance. This allowed us to predict how quickly different task goals were learned. These results suggest that changes in internal states, even those seemingly unrelated to goal-seeking behavior, can systematically influence how behavior improves with learning.
Affiliation(s)
- Jay A Hennig: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Emily R Oby: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Matthew D Golub: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Lindsay A Bahureksa: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Patrick T Sadtler: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Kristin M Quick: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Stephen I Ryu: Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA, USA
- Elizabeth C Tyler-Kabara: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neurosurgery, Dell Medical School, University of Texas at Austin, Austin, TX, USA
- Aaron P Batista: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Steven M Chase: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Byron M Yu: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
25
Zhou J, Huang H. Weakly correlated synapses promote dimension reduction in deep neural networks. Phys Rev E 2021; 103:012315. [PMID: 33601541 DOI: 10.1103/physreve.103.012315]
Abstract
By controlling synaptic and neural correlations, deep learning has achieved empirical successes in improving classification performance. How synaptic correlations affect neural correlations to produce disentangled hidden representations remains elusive. Here we propose a simplified model of dimension reduction, taking into account pairwise correlations among synapses, to reveal how synaptic correlations affect dimension reduction. Our theory determines the scaling of synaptic correlations requiring only mathematical self-consistency for both binary and continuous synapses. The theory also predicts that weakly correlated synapses encourage dimension reduction compared to their orthogonal counterparts. In addition, these synapses attenuate the decorrelation process along the network depth. These two computational roles are explained by a proposed mean-field equation. The theoretical predictions are in excellent agreement with numerical simulations, and the key features are also captured by deep learning with Hebbian rules.
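The dimension-reduction effect described in this abstract can be illustrated with a toy experiment. The sketch below is an assumption-laden stand-in, not the authors' model: it measures effective dimensionality with the participation ratio of the activation covariance, and introduces synaptic correlations as a shared rank-one weight component (layer sizes, depth, and correlation strength are arbitrary choices for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_ratio(X):
    """PR = (sum lam)^2 / sum(lam^2) over covariance eigenvalues:
    a standard proxy for the effective dimensionality of activity X."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

def deep_activations(X, depth, corr):
    """Propagate X through tanh layers; corr > 0 mixes a shared rank-one
    component into each weight matrix, correlating the synapses."""
    n = X.shape[1]
    H = X
    for _ in range(depth):
        W = rng.standard_normal((n, n)) / np.sqrt(n)
        if corr > 0:
            u = rng.standard_normal((n, 1))
            v = rng.standard_normal((1, n))
            W = np.sqrt(1 - corr) * W + np.sqrt(corr) * (u @ v) / np.sqrt(n)
        H = np.tanh(H @ W.T)
    return H

X = rng.standard_normal((500, 100))
pr_iid = participation_ratio(deep_activations(X, depth=4, corr=0.0))
pr_corr = participation_ratio(deep_activations(X, depth=4, corr=0.2))
```

With the shared rank-one component, activity concentrates along fewer covariance modes, so the participation ratio drops relative to the network with independent weights.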
Affiliation(s)
- Jianwen Zhou: PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Haiping Huang: PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
26
Feulner B, Clopath C. Neural manifold under plasticity in a goal driven learning behaviour. PLoS Comput Biol 2021; 17:e1008621. [PMID: 33544700 PMCID: PMC7864452 DOI: 10.1371/journal.pcbi.1008621]
Abstract
Neural activity is often low dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.
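The notion of a neural manifold, and of a pattern lying within versus outside it, can be made concrete with a small simulation. This is a hedged sketch with made-up dimensions, not the authors' recurrent-network model: low-dimensional latents drive a population, PCA recovers the manifold, and we score how much of a candidate pattern falls inside it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population: 80 neurons whose activity is driven by 8 latent patterns.
n_neurons, n_latents, n_trials = 80, 8, 600
L = rng.standard_normal((n_neurons, n_latents))                # loading matrix
activity = rng.standard_normal((n_trials, n_latents)) @ L.T
activity += 0.1 * rng.standard_normal((n_trials, n_neurons))   # private noise

# Estimate the neural manifold as the top principal axes of the activity.
activity -= activity.mean(axis=0)
_, _, Vt = np.linalg.svd(activity, full_matrices=False)
manifold = Vt[:n_latents]                                      # (8, 80) basis

def in_manifold_fraction(pattern, basis):
    """Fraction of a pattern's energy captured by the manifold basis."""
    proj = basis.T @ (basis @ pattern)
    return np.sum(proj ** 2) / np.sum(pattern ** 2)

within = in_manifold_fraction(L @ rng.standard_normal(n_latents), manifold)
outside = in_manifold_fraction(rng.standard_normal(n_neurons), manifold)
```

A pattern built from the latents projects almost entirely into the estimated manifold, while a random population pattern mostly falls outside it.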
Affiliation(s)
- Barbara Feulner: Department of Bioengineering, Imperial College London, London, United Kingdom
- Claudia Clopath: Department of Bioengineering, Imperial College London, London, United Kingdom
27
Kafashan M, Jaffe AW, Chettih SN, Nogueira R, Arandia-Romero I, Harvey CD, Moreno-Bote R, Drugowitsch J. Scaling of sensory information in large neural populations shows signatures of information-limiting correlations. Nat Commun 2021; 12:473. [PMID: 33473113 PMCID: PMC7817840 DOI: 10.1038/s41467-020-20722-y]
Abstract
How is information distributed across large neuronal populations within a given brain area? Information may be distributed roughly evenly across neuronal populations, so that total information scales linearly with the number of recorded neurons. Alternatively, the neural code might be highly redundant, meaning that total information saturates. Here we investigate how sensory information about the direction of a moving visual stimulus is distributed across hundreds of simultaneously recorded neurons in mouse primary visual cortex. We show that information scales sublinearly due to correlated noise in these populations. We compartmentalized noise correlations into information-limiting and nonlimiting components, then extrapolated to predict how information grows with even larger neural populations. We predict that tens of thousands of neurons encode 95% of the information about visual stimulus direction, much less than the number of neurons in primary visual cortex. These findings suggest that the brain uses a widely distributed, but nonetheless redundant code that supports recovering most sensory information from smaller subpopulations.
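The split into information-limiting and nonlimiting correlation components has a compact analytic illustration using linear Fisher information, I = f'^T C^-1 f'. In the sketch below (numpy; the population size and correlation strength 0.05 are arbitrary assumptions, not values from the paper), only the noise mode aligned with the signal direction f' limits information; an equally strong mode on an orthogonal axis leaves it untouched.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400
fp = rng.standard_normal(N)                 # signal direction f' = df/ds
u = rng.standard_normal(N)
u -= (u @ fp) / (fp @ fp) * fp              # noise axis orthogonal to f'

def linear_info(C, f):
    """Linear Fisher information f'^T C^-1 f'."""
    return f @ np.linalg.solve(C, f)

C0 = np.eye(N)                              # independent noise
C_lim = C0 + 0.05 * np.outer(fp, fp)        # information-limiting component
C_non = C0 + 0.05 * np.outer(u, u)          # same strength, harmless axis

I_ind = linear_info(C0, fp)                 # grows with N (about |f'|^2)
I_lim = linear_info(C_lim, fp)              # saturates below 1/0.05 = 20
I_non = linear_info(C_non, fp)              # unaffected: the axis misses f'
```

Here I_lim stays bounded by the reciprocal of the limiting-mode strength no matter how many neurons are added, while the orthogonal ("nonlimiting") mode leaves information identical to the independent-noise case.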
Affiliation(s)
- Anna W Jaffe: Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
- Selmaan N Chettih: Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
- Ramon Nogueira: Center for Theoretical Neuroscience, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Iñigo Arandia-Romero: ISAAC Lab, Aragón Institute of Engineering Research, University of Zaragoza, Zaragoza, Spain; IAS-Research Center for Life, Mind, and Society, Department of Logic and Philosophy of Science, University of the Basque Country, UPV-EHU, Donostia-San Sebastián, Spain
- Rubén Moreno-Bote: Center for Brain and Cognition and Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; Serra Húnter Fellow Programme and ICREA Academia, Universitat Pompeu Fabra, Barcelona, Spain
- Jan Drugowitsch: Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
28
Ruff DA, Xue C, Kramer LE, Baqai F, Cohen MR. Low rank mechanisms underlying flexible visual representations. Proc Natl Acad Sci U S A 2020; 117:29321-29329. [PMID: 33229536 PMCID: PMC7703603 DOI: 10.1073/pnas.2005797117]
Abstract
Neuronal population responses to sensory stimuli are remarkably flexible. The responses of neurons in visual cortex have heterogeneous dependence on stimulus properties (e.g., contrast), processes that affect all stages of visual processing (e.g., adaptation), and cognitive processes (e.g., attention or task switching). Understanding whether these processes affect similar neuronal populations and whether they have similar effects on entire populations can provide insight into whether they utilize analogous mechanisms. In particular, it has recently been demonstrated that attention has low rank effects on the covariability of populations of visual neurons, which impacts perception and strongly constrains mechanistic models. We hypothesized that measuring changes in population covariability associated with other sensory and cognitive processes could clarify whether they utilize similar mechanisms or computations. Our experimental design included measurements in multiple visual areas using four distinct sensory and cognitive processes. We found that contrast, adaptation, attention, and task switching affect the variability of responses of populations of neurons in primate visual cortex in a similarly low rank way. These results suggest that a given circuit may use similar mechanisms to perform many forms of modulation and likely reflects a general principle that applies to a wide range of brain areas and sensory, cognitive, and motor processes.
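"Low rank" here means that the change in the population covariance matrix between conditions is dominated by one (or a few) eigen-dimensions. A minimal simulation of that idea (toy numbers, assuming the modulation adds a single shared fluctuation; not the authors' fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_trials = 60, 4000

# Condition A: private trial-to-trial noise only.
cond_a = rng.standard_normal((n_trials, n_neurons))

# Condition B: private noise plus one shared fluctuation mode,
# i.e. a rank-one change in covariability.
g = rng.standard_normal((n_trials, 1))          # shared gain-like signal
mode = rng.standard_normal((1, n_neurons))      # pattern it lives on
cond_b = rng.standard_normal((n_trials, n_neurons)) + g @ mode

# Eigen-spectrum of the covariance *difference*: one dimension dominates.
diff = np.cov(cond_b, rowvar=False) - np.cov(cond_a, rowvar=False)
eig = np.sort(np.abs(np.linalg.eigvalsh(diff)))[::-1]
top_share = eig[0] / eig.sum()
```

In this construction the leading eigenvalue of the covariance difference carries most of the change, which is the signature a low-rank modulation leaves in data.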
Affiliation(s)
- Douglas A Ruff: Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA 15260
- Cheng Xue: Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA 15260
- Lily E Kramer: Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA 15260
- Faisal Baqai: Program in Neural Computation, Carnegie Mellon University, Pittsburgh, PA 15260
- Marlene R Cohen: Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA 15260; Program in Neural Computation, Carnegie Mellon University, Pittsburgh, PA 15260
29
Cowley BR, Snyder AC, Acar K, Williamson RC, Yu BM, Smith MA. Slow Drift of Neural Activity as a Signature of Impulsivity in Macaque Visual and Prefrontal Cortex. Neuron 2020; 108:551-567.e8. [PMID: 32810433 PMCID: PMC7822647 DOI: 10.1016/j.neuron.2020.07.021]
Abstract
An animal's decision depends not only on incoming sensory evidence but also on its fluctuating internal state. This state embodies multiple cognitive factors, such as arousal and fatigue, but it is unclear how these factors influence the neural processes that encode sensory stimuli and form a decision. We discovered that, unprompted by task conditions, animals slowly shifted their likelihood of detecting stimulus changes over the timescale of tens of minutes. Neural population activity from visual area V4, as well as from prefrontal cortex, slowly drifted together with these behavioral fluctuations. We found that this slow drift, rather than altering the encoding of the sensory stimulus, acted as an impulsivity signal, overriding sensory evidence to dictate the final decision. Overall, this work uncovers an internal state embedded in population activity across multiple brain areas and sheds further light on how internal states contribute to the decision-making process.
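A simplified version of extracting such a slow drift: heavily smooth residual population activity across a session and take the first principal component of the smoothed data. The simulated drift timescale, smoothing window, and bin counts below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bins, n_neurons = 2000, 50          # e.g. trial-binned counts in a session

# Simulated residual activity: a slow shared drift along one axis + noise.
t = np.linspace(0, 4 * np.pi, n_bins)
drift = np.sin(t / 4)[:, None]        # one slow excursion over the session
axis = rng.standard_normal((1, n_neurons))
resid = drift @ axis + rng.standard_normal((n_bins, n_neurons))

# Smooth each neuron with a wide boxcar, then take the first PC.
w = 101
kernel = np.ones(w) / w
smooth = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 0, resid)
smooth -= smooth.mean(axis=0)
_, _, Vt = np.linalg.svd(smooth, full_matrices=False)
slow_drift = resid @ Vt[0]            # the slow drift signal, bin by bin

# The recovered axis should line up with the simulated drift axis.
alignment = abs(Vt[0] @ axis.ravel()) / np.linalg.norm(axis)
```

Smoothing suppresses fast trial-to-trial noise, so the first PC of the smoothed activity recovers the axis along which the slow fluctuation lives.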
Affiliation(s)
- Benjamin R Cowley: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Adam C Snyder: Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14642, USA; Department of Neuroscience, University of Rochester, Rochester, NY 14642, USA; Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Katerina Acar: Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for Neuroscience, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Ryan C Williamson: Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA 15213, USA; University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Byron M Yu: Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Matthew A Smith: Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA 15213, USA
30
Jin C, Chen W, Cao Y, Xu Z, Tan Z, Zhang X, Deng L, Zheng C, Zhou J, Shi H, Feng J. Development and evaluation of an artificial intelligence system for COVID-19 diagnosis. Nat Commun 2020; 11:5088. [PMID: 33037212 DOI: 10.1101/823377]
Abstract
Early detection of COVID-19 based on chest CT enables timely treatment of patients and helps control the spread of the disease. We proposed an artificial intelligence (AI) system for rapid COVID-19 detection and performed extensive statistical analysis of CTs of COVID-19 based on the AI system. We developed and evaluated our system on a large dataset with more than 10,000 CT volumes from COVID-19, influenza-A/B, non-viral community-acquired pneumonia (CAP) and non-pneumonia subjects. In this difficult multi-class diagnosis task, our deep convolutional neural network-based system achieves an area under the receiver operating characteristic curve (AUC) of 97.81% for multi-way classification on a test cohort of 3,199 scans, and AUCs of 92.99% and 93.25% on two publicly available datasets, CC-CCII and MosMedData, respectively. In a reader study involving five radiologists, the AI system outperformed all of the radiologists on the more challenging tasks, at a speed two orders of magnitude faster than theirs. We also compared the diagnostic performance of chest X-ray (CXR) with that of CT, and performed a detailed interpretation of the deep network to relate its outputs to CT presentations. The code is available at https://github.com/ChenWWWeixiang/diagnosis_covid19 .
Affiliation(s)
- Cheng Jin: Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Weixiang Chen: Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Yukun Cao: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Zhanwei Xu: Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Zimeng Tan: Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Xin Zhang: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Lei Deng: Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Chuansheng Zheng: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Jie Zhou: Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Heshui Shi: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Jianjiang Feng: Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
31
Prefrontal cortex exhibits multidimensional dynamic encoding during decision-making. Nat Neurosci 2020; 23:1410-1420. [PMID: 33020653 PMCID: PMC7610668 DOI: 10.1038/s41593-020-0696-5]
Abstract
Recent work has suggested that prefrontal cortex (PFC) plays a key role in context-dependent perceptual decision-making. Here we address that role using a new method for identifying task-relevant dimensions of neural population activity. Specifically, we show that PFC has a multi-dimensional code for context, decisions, and both relevant and irrelevant sensory information. Moreover, these representations evolve in time, with an early linear accumulation phase followed by a phase with rotational dynamics. We identify the dimensions of neural activity associated with these phases, and show that they arise not from distinct populations, but from a single population with broad tuning characteristics. Finally, we use model-based decoding to show that the transition from linear to rotational dynamics coincides with a plateau in decoding accuracy, revealing that rotational dynamics in PFC preserve sensory choice information for the duration of the stimulus integration period.
32
Tauste Campo A. Inferring neural information flow from spiking data. Comput Struct Biotechnol J 2020; 18:2699-2708. [PMID: 33101608 PMCID: PMC7548302 DOI: 10.1016/j.csbj.2020.09.007]
Abstract
The brain can be regarded as an information processing system in which neurons store and propagate information about external stimuli and internal processes. Therefore, estimating interactions between neural activity at the cellular scale has significant implications in understanding how neuronal circuits encode and communicate information across brain areas to generate behavior. While the number of simultaneously recorded neurons is growing exponentially, current methods relying only on pairwise statistical dependencies still suffer from a number of conceptual and technical challenges that preclude experimental breakthroughs describing neural information flows. In this review, we examine the evolution of the field over the years, starting from descriptive statistics to model-based and model-free approaches. Then, we discuss in detail the Granger Causality framework, which includes many popular state-of-the-art methods and we highlight some of its limitations from a conceptual and practical estimation perspective. Finally, we discuss directions for future research, including the development of theoretical information flow models and the use of dimensionality reduction techniques to extract relevant interactions from large-scale recording datasets.
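A minimal, model-based example of the Granger causality framework the review discusses: fit autoregressive models of a target signal with and without the candidate source's past, and compare residual variances. This is a bare-bones least-squares sketch on simulated continuous signals (real spiking-data pipelines add model-order selection, significance testing, and point-process variants).

```python
import numpy as np

rng = np.random.default_rng(5)

# Two coupled signals: x drives y with a one-step lag; y never drives x.
T = 5000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def ar_design(sig, p):
    """Lagged design matrix: row i holds sig[t-1], ..., sig[t-p] for t = p+i."""
    return np.column_stack([sig[p - k - 1:len(sig) - k - 1] for k in range(p)])

def granger(target, source, p=2):
    """GC source -> target: log ratio of residual variances of AR models
    fit without vs with the source's past (least squares, lag order p)."""
    z = target[p:]
    X_own = ar_design(target, p)
    X_full = np.column_stack([X_own, ar_design(source, p)])
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
        r = z - X @ beta
        return r @ r
    return np.log(rss(X_own) / rss(X_full))

gc_x_to_y = granger(y, x)     # clearly positive: x's past helps predict y
gc_y_to_x = granger(x, y)     # near zero: y's past adds nothing for x
```

The asymmetry between the two directed measures is the basic Granger signature of directed information flow.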
Affiliation(s)
- Adrià Tauste Campo: Centre for Brain and Cognition, Universitat Pompeu Fabra, Ramon Trias Fargas 25, 08018 Barcelona, Spain
33
Thivierge JP. Frequency-separated principal component analysis of cortical population activity. J Neurophysiol 2020; 124:668-681. [PMID: 32727265 DOI: 10.1152/jn.00167.2020]
Abstract
A hallmark of neocortical activity is the presence of low-dimensional fluctuations in firing rate that are coordinated across neurons. However, the impact of these fluctuations on sensory processing remains unclear. Here, we examined fluctuations in populations of orientation-selective neurons from anesthetized macaque primary visual cortex (V1) during stimulus viewing as well as spontaneous activity. We introduce a novel approach termed frequency-separated principal component analysis (FS-PCA) to characterize these fluctuations. This method unveiled a distribution of components with a broad range of frequencies whose eigenvalues and variance followed an approximate power law. During stimulus viewing, subpopulations of V1 neurons correlated either positively or negatively with low-dimensional fluctuations. These two subpopulations displayed distinct activation properties and noise correlations in response to sensory input. Together, results suggest that slow, low-dimensional fluctuations in V1 population activity shape the response of individual neurons to oriented stimuli and may impact the transmission of sensory information to downstream regions of the primary visual system.

NEW & NOTEWORTHY: A method termed frequency-separated principal component analysis (FS-PCA) is introduced for analyzing populations of simultaneously recorded neurons. This framework extends standard principal component analysis by extracting components of activity delimited to specific frequency bands. FS-PCA revealed that circuits of the primary visual cortex generate a broad range of components dominated by low-frequency activity. Furthermore, low-dimensional fluctuations in population activity modulated the response of individual neurons to sensory input.
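The core of FS-PCA, as described, is to band-limit each neuron's activity and then apply PCA within each band. Below is a minimal sketch using FFT masking in numpy; the sampling rate, band edges, and simulated shared components are assumptions for illustration, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 100.0                                  # sampling rate in Hz (assumed)
T, n_neurons = 2000, 30
t = np.arange(T) / fs

# Population with a shared slow (1 Hz) and a shared fast (10 Hz) component.
slow = np.sin(2 * np.pi * 1.0 * t)[:, None] @ rng.standard_normal((1, n_neurons))
fast = np.sin(2 * np.pi * 10.0 * t)[:, None] @ rng.standard_normal((1, n_neurons))
rates = slow + fast + 0.2 * rng.standard_normal((T, n_neurons))

def bandpass(X, lo, hi):
    """Zero FFT coefficients outside [lo, hi] Hz, independently per neuron."""
    F = np.fft.rfft(X, axis=0)
    freqs = np.fft.rfftfreq(X.shape[0], d=1 / fs)
    F[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(F, n=X.shape[0], axis=0)

def top_pc_share(X):
    """Variance fraction captured by the first principal component."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
    return lam[0] / lam.sum()

low_band = top_pc_share(bandpass(rates, 0.5, 2.0))    # isolates the 1 Hz mode
high_band = top_pc_share(bandpass(rates, 8.0, 12.0))  # isolates the 10 Hz mode
```

PCA on the broadband data mixes both shared modes, whereas within each band a single component dominates, which is the separation the method is after.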
Affiliation(s)
- Jean-Philippe Thivierge: School of Psychology, University of Ottawa, Ottawa, Ontario, Canada; Brain and Mind Research Institute, University of Ottawa, Ottawa, Ontario, Canada
34
Electrical coupling controls dimensionality and chaotic firing of inferior olive neurons. PLoS Comput Biol 2020; 16:e1008075. [PMID: 32730255 PMCID: PMC7419012 DOI: 10.1371/journal.pcbi.1008075]
Abstract
We previously proposed, on theoretical grounds, that the cerebellum must regulate the dimensionality of its neuronal activity during motor learning and control to cope with the low firing frequency of inferior olive neurons, which form one of two major inputs to the cerebellar cortex. Such dimensionality regulation is possible via modulation of electrical coupling through the gap junctions between inferior olive neurons by inhibitory GABAergic synapses. In addition, we previously showed in simulations that intermediate coupling strengths induce chaotic firing of inferior olive neurons and increase their information carrying capacity. However, there is no in vivo experimental data supporting these two theoretical predictions. Here, we computed the levels of synchrony, dimensionality, and chaos of the inferior olive code by analyzing in vivo recordings of Purkinje cell complex spike activity in three different coupling conditions: carbenoxolone (gap junctions blocker), control, and picrotoxin (GABA-A receptor antagonist). To examine the effect of electrical coupling on dimensionality and chaotic dynamics, we first determined the physiological range of effective coupling strengths between inferior olive neurons in the three conditions using a combination of a biophysical network model of the inferior olive and a novel Bayesian model averaging approach. We found that effective coupling co-varied with synchrony and was inversely related to the dimensionality of inferior olive firing dynamics, as measured via a principal component analysis of the spike trains in each condition. Furthermore, for both the model and the data, we found an inverted U-shaped relationship between coupling strengths and complexity entropy, a measure of chaos for spiking neural data. 
These results are consistent with our hypothesis according to which electrical coupling regulates the dimensionality and the complexity in the inferior olive neurons in order to optimize both motor learning and control of high dimensional motor systems by the cerebellum.

Computational theory suggests that the cerebellum must decrease the dimensionality of its neuronal activity to learn and control high dimensional motor systems effectively, while being constrained by the low firing frequency of inferior olive neurons, one of the two major sources of input signals to the cerebellum. We previously proposed that the cerebellum adaptively controls the dimensionality of inferior olive firing by adjusting the level of synchrony and that such control is made possible by modulating the electrical coupling strength between inferior olive neurons. Here, we developed a novel method that uses a biophysical model of the inferior olive to accurately estimate the effective coupling strengths between inferior olive neurons from in vivo recordings of spike activity in three different coupling conditions. We found that high coupling strengths induce synchronous firing and decrease the dimensionality of inferior olive firing dynamics. In contrast, intermediate coupling strengths lead to chaotic firing and increase the dimensionality of the firing dynamics. Thus, electrical coupling is a feasible mechanism to control dimensionality and chaotic firing of inferior olive neurons. In sum, our results provide insights into possible mechanisms underlying cerebellar function and, in general, a biologically plausible framework to control the dimensionality of neural coding.
35
Bartolo R, Saunders RC, Mitz AR, Averbeck BB. Dimensionality, information and learning in prefrontal cortex. PLoS Comput Biol 2020; 16:e1007514. [PMID: 32330126 PMCID: PMC7202668 DOI: 10.1371/journal.pcbi.1007514]
Abstract
Learning leads to changes in population patterns of neural activity. In this study we wanted to examine how these changes in patterns of activity affect the dimensionality of neural responses and information about choices. We addressed these questions by carrying out high channel count recordings in dorsal-lateral prefrontal cortex (dlPFC; 768 electrodes) while monkeys performed a two-armed bandit reinforcement learning task. The high channel count recordings allowed us to study population coding while monkeys learned choices between actions or objects. We found that the dimensionality of neural population activity was higher across blocks in which animals learned the values of novel pairs of objects, than across blocks in which they learned the values of actions. The increase in dimensionality with learning in object blocks was related to less shared information across blocks, and therefore patterns of neural activity that were less similar, when compared to learning in action blocks. Furthermore, these differences emerged with learning, and were not a simple function of the choice of a visual image or action. Therefore, learning the values of novel objects increases the dimensionality of neural representations in dlPFC. In this study we found that learning to choose rewarding objects increased the diversity of patterns of activity, measured as the dimensionality of the response, observed in dorsal-lateral prefrontal cortex. The dimensionality increase for learning to choose rewarding objects was larger than the dimensionality increase for learning to choose rewarding actions. The dimensionality increase was not a simple function of the diverse set of images used, as the patterns of activity only appeared after learning.
Affiliation(s)
- Ramon Bartolo: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, United States of America
- Richard C. Saunders: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, United States of America
- Andrew R. Mitz: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, United States of America
- Bruno B. Averbeck: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, United States of America
36
Stabilization of a brain-computer interface via the alignment of low-dimensional spaces of neural activity. Nat Biomed Eng 2020; 4:672-685. [PMID: 32313100 DOI: 10.1038/s41551-020-0542-9]
Abstract
The instability of neural recordings can render clinical brain-computer interfaces (BCIs) uncontrollable. Here, we show that the alignment of low-dimensional neural manifolds (low-dimensional spaces that describe specific correlation patterns between neurons) can be used to stabilize neural activity, thereby maintaining BCI performance in the presence of recording instabilities. We evaluated the stabilizer with non-human primates during online cursor control via intracortical BCIs in the presence of severe and abrupt recording instabilities. The stabilized BCIs recovered proficient control under different instability conditions and across multiple days. The stabilizer does not require knowledge of user intent and can outperform supervised recalibration. It stabilized BCIs even when neural activity contained little information about the direction of cursor movement. The stabilizer may be applicable to other neural interfaces and may improve the clinical viability of BCIs.
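The alignment idea can be sketched with orthogonal Procrustes: estimate a low-dimensional basis for the neural activity on each day and find the rotation that best maps one basis onto the other. This toy stand-in (made-up dimensions, stable units, no instability model) omits the unit loss and supervision-free machinery of the published stabilizer; it only illustrates the subspace-alignment step.

```python
import numpy as np

rng = np.random.default_rng(7)
n_units, d, T = 90, 10, 1500

# Two recording sessions sharing the same underlying manifold.
L0 = rng.standard_normal((n_units, d))                 # true loading matrix
day0 = rng.standard_normal((T, d)) @ L0.T + 0.1 * rng.standard_normal((T, n_units))
day1 = rng.standard_normal((T, d)) @ L0.T + 0.1 * rng.standard_normal((T, n_units))

def latent_basis(X, d):
    """Estimate an orthonormal basis for the top-d subspace of X."""
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:d].T                                     # (n_units, d)

B0 = latent_basis(day0, d)
B1 = latent_basis(day1, d)

# Orthogonal Procrustes: rotation R that best maps the day-1 basis onto day-0.
U, _, Vt = np.linalg.svd(B1.T @ B0)
R = U @ Vt

align_before = np.trace(B1.T @ B0) / d
align_after = np.trace((B1 @ R).T @ B0) / d            # close to 1
```

PCA bases are only defined up to rotation, so the raw day-to-day alignment can be arbitrary; the Procrustes rotation resolves that ambiguity, which is what lets a decoder trained in one session's latent coordinates keep working in another's.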
37
Information-Limiting Correlations in Large Neural Populations. J Neurosci 2020; 40:1668-1678. [PMID: 31941667 DOI: 10.1523/jneurosci.2072-19.2019]
Abstract
Understanding the neural code requires understanding how populations of neurons code information. Theoretical models predict that information may be limited by correlated noise in large neural populations. Nevertheless, analyses based on tens of neurons have failed to find evidence of saturation. Moreover, some studies have shown that noise correlations can be very small, and therefore may not affect information coding. To determine whether information-limiting correlations exist, we implanted eight Utah arrays in prefrontal cortex (PFC; area 46) of two male macaque monkeys, recording >500 neurons simultaneously. We estimated information in PFC about saccades as a function of ensemble size. Noise correlations were, on average, small (~10^-3). However, information scaled strongly sublinearly with ensemble size. After shuffling trials, destroying noise correlations, information was a linear function of ensemble size. Thus, we provide evidence for the existence of information-limiting noise correlations in large populations of PFC neurons.

SIGNIFICANCE STATEMENT: Recent theoretical work has shown that even small correlations can limit information if they are "differential correlations," which are difficult to measure directly. However, they can be detected through decoding analyses on recordings from a large number of neurons over a large number of trials. We have achieved both by collecting neural activity in dorsal-lateral prefrontal cortex of macaques using eight microelectrode arrays (768 electrodes), from which we were able to compute accurate information estimates. We show, for the first time, strong evidence for information-limiting correlations. Despite pairwise correlations being small (on the order of 10^-3), they affect information coding in populations of hundreds of neurons.
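The shuffle analysis has a simple analytic counterpart using linear Fisher information with differential correlations, C = I + eps * outer(f', f'). The sketch below is illustrative only; the population sizes and eps = 2e-3 (chosen to match the order of magnitude of the reported pairwise correlations) are assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(8)
N = 500
fp = rng.standard_normal(N)         # tuning derivatives across the array
eps = 2e-3                          # differential-correlation strength

def linear_info(n, shuffle):
    """Linear Fisher information for the first n neurons; shuffling keeps
    each neuron's variance but removes all correlations (diagonal C)."""
    f = fp[:n]
    C = np.eye(n) + eps * np.outer(f, f)
    if shuffle:
        C = np.diag(np.diag(C))
    return f @ np.linalg.solve(C, f)

sizes = [50, 150, 500]
intact = [linear_info(n, shuffle=False) for n in sizes]
shuffled = [linear_info(n, shuffle=True) for n in sizes]
```

With the correlations intact, information grows sublinearly and is bounded by 1/eps no matter how many neurons are added; with the correlations shuffled away, it grows essentially linearly with ensemble size, mirroring the paper's key contrast.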
38
Engel TA, Steinmetz NA. New perspectives on dimensionality and variability from large-scale cortical dynamics. Curr Opin Neurobiol 2019; 58:181-190. [PMID: 31585331 PMCID: PMC6859189 DOI: 10.1016/j.conb.2019.09.003]
Abstract
The neocortex is a multi-scale network, with intricate local circuitry interwoven into a global mesh of long-range connections. Neural activity propagates within this network on a wide range of temporal and spatial scales. At the micro scale, neurophysiological recordings reveal coordinated dynamics in local neural populations, which support behaviorally relevant computations. At the macro scale, neuroimaging modalities measure global activity fluctuations organized into spatiotemporal patterns across the entire brain. Here we review recent advances linking the local and global scales of cortical dynamics and their relationship to behavior. We argue that diverse experimental observations on the dimensionality and variability of neural activity can be reconciled by considering how activity propagates in space and time on multiple spatial scales.
Affiliation(s)
- Tatiana A Engel
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, United States.
- Nicholas A Steinmetz
- Department of Biological Structure, University of Washington, Seattle, WA 98195, United States.
39
La Camera G, Fontanini A, Mazzucato L. Cortical computations via metastable activity. Curr Opin Neurobiol 2019; 58:37-45. [PMID: 31326722 DOI: 10.1016/j.conb.2019.06.007]
Abstract
Metastable brain dynamics are characterized by abrupt, jump-like modulations, so that the neural activity in single trials appears to unfold as a sequence of discrete, quasi-stationary 'states'. Evidence that cortical neural activity unfolds as a sequence of metastable states is accumulating at a fast pace. Metastable activity occurs both in response to an external stimulus and during ongoing, self-generated activity. These spontaneous metastable states are increasingly found to subserve internal representations that are not locked to external triggers, including states of deliberation, attention and expectation. Moreover, decoding stimuli or decisions via metastable states can be carried out trial-by-trial. Focusing on metastability will allow us to shift our perspective on neural coding from traditional concepts based on trial-averaging to models based on dynamic ensemble representations. Recent theoretical work has started to characterize the mechanistic origin and potential roles of metastable representations. In this article we review recent findings on metastable activity, how it may arise in biologically realistic models, and its potential role for representing internal states as well as relevant task variables.
Affiliation(s)
- Giancarlo La Camera
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States.
- Alfredo Fontanini
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States.
- Luca Mazzucato
- Departments of Biology and Mathematics and Institute of Neuroscience, University of Oregon, Eugene, OR 97403, United States.
40
Recanatesi S, Ocker GK, Buice MA, Shea-Brown E. Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLoS Comput Biol 2019; 15:e1006446. [PMID: 31299044 PMCID: PMC6655892 DOI: 10.1371/journal.pcbi.1006446]
Abstract
The dimensionality of a network's collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed low dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, the dimensionality is a better indicator than average correlations in determining how constrained neural activity is. Third, stimulus evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.
Affiliation(s)
- Stefano Recanatesi
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A. Buice
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
41
Distinct Sources of Variability Affect Eye Movement Preparation. J Neurosci 2019; 39:4511-4526. [PMID: 30914447 PMCID: PMC6554625 DOI: 10.1523/jneurosci.2329-18.2019]
Abstract
The sequence of events leading to an eye movement to a target begins the moment visual information has reached the brain, well in advance of the eye movement itself. The process by which visual information is encoded and used to generate a motor plan has been the focus of substantial interest, partly because of the rapid and reproducible nature of saccadic eye movements and the key role that they play in primate behavior. Signals related to eye movements are present in much of the primate brain, yet most neurophysiological studies of the transition from vision to eye movements have measured the activity of one neuron at a time. Less is known about how the coordinated action of populations of neurons contributes to the initiation of eye movements. One cortical area of particular interest in this process is the frontal eye fields, a region of prefrontal cortex that has descending projections to oculomotor control centers. We recorded from populations of frontal eye field neurons in macaque monkeys engaged in a memory-guided saccade task. We found a variety of neurons with visually evoked responses, saccade-aligned responses, and mixtures of both. We took advantage of the simultaneous nature of the recordings to measure variability in individual neurons and pairs of neurons from trial to trial, as well as the moment-to-moment population activity structure. We found that these measures were related to saccadic reaction times, suggesting that the population-level organization of frontal eye field activity is important for the transition from perception to movement.

SIGNIFICANCE STATEMENT The transition from perception to action involves coordination among neurons across the brain. In the case of eye movements, visual and motor signals coexist in individual neurons as well as in neighboring neurons. We used a task designed to compartmentalize the visual and motor aspects of this transition and studied populations of neurons in the frontal eye fields, a key cortical area containing neurons that are implicated in the transition from vision to eye movements. We found that the time required for subjects to produce an eye movement could be predicted from the statistics of the neuronal response of populations of frontal eye field neurons, suggesting that these neurons coordinate their activity to optimize the transition from perception to action.
42
Wärnberg E, Kumar A. Perturbing low dimensional activity manifolds in spiking neuronal networks. PLoS Comput Biol 2019; 15:e1007074. [PMID: 31150376 PMCID: PMC6586365 DOI: 10.1371/journal.pcbi.1007074]
Abstract
Several recent studies have shown that neural activity in vivo tends to be constrained to a low-dimensional manifold. Such activity does not arise in simulated neural networks with homogeneous connectivity and it has been suggested that it is indicative of some other connectivity pattern in neuronal networks. In particular, this connectivity pattern appears to be constraining learning so that only neural activity patterns falling within the intrinsic manifold can be learned and elicited. Here, we use three different models of spiking neural networks (echo-state networks, the Neural Engineering Framework and Efficient Coding) to demonstrate how the intrinsic manifold can be made a direct consequence of the circuit connectivity. Using this relationship between the circuit connectivity and the intrinsic manifold, we show that learning of patterns outside the intrinsic manifold corresponds to much larger changes in synaptic weights than learning of patterns within the intrinsic manifold. Assuming larger changes to synaptic weights requires extensive learning, this observation provides an explanation of why learning is easier when it does not require the neural activity to leave its intrinsic manifold.
Affiliation(s)
- Emil Wärnberg
- Dept. of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Dept. of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Arvind Kumar
- Dept. of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
43
Semedo JD, Zandvakili A, Machens CK, Yu BM, Kohn A. Cortical Areas Interact through a Communication Subspace. Neuron 2019; 102:249-259.e4. [PMID: 30770252 DOI: 10.1016/j.neuron.2019.01.026]
Abstract
Most brain functions involve interactions among multiple, distinct areas or nuclei. For instance, visual processing in primates requires the appropriate relaying of signals across many distinct cortical areas. Yet our understanding of how populations of neurons in interconnected brain areas communicate is in its infancy. Here we investigate how trial-to-trial fluctuations of population responses in primary visual cortex (V1) are related to simultaneously recorded population responses in area V2. Using dimensionality reduction methods, we find that V1-V2 interactions occur through a communication subspace: V2 fluctuations are related to a small subset of V1 population activity patterns, distinct from the largest fluctuations shared among neurons within V1. In contrast, interactions between subpopulations within V1 are less selective. We propose that the communication subspace may be a general, population-level mechanism by which activity can be selectively routed across brain areas.
Affiliation(s)
- João D Semedo
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal; Department of Electrical and Computer Engineering, Instituto Superior Técnico, Lisbon, Portugal.
- Amin Zandvakili
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Christian K Machens
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Byron M Yu
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Adam Kohn
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA; Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA; Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
44
Liu S, Iriarte-Diaz J, Hatsopoulos NG, Ross CF, Takahashi K, Chen Z. Dynamics of motor cortical activity during naturalistic feeding behavior. J Neural Eng 2019; 16:026038. [PMID: 30721881 DOI: 10.1088/1741-2552/ab0474]
Abstract
OBJECTIVE The orofacial primary motor cortex (MIo) plays a critical role in controlling tongue and jaw movements during oral motor functions, such as chewing, swallowing and speech. However, the neural mechanisms of MIo during naturalistic feeding are still poorly understood. There is a strong need for a systematic study of motor cortical dynamics during feeding behavior. APPROACH To investigate the neural dynamics and variability of MIo neuronal activity during naturalistic feeding, we used chronically implanted microelectrode arrays to simultaneously record ensembles of neuronal activity in the MIo of two monkeys (Macaca mulatta) while they ate various types of food. We developed a Bayesian nonparametric latent variable model to reveal latent structures of neuronal population activity of the MIo and identify the complex mapping between MIo ensemble spike activity and high-dimensional kinematics. MAIN RESULTS Rhythmic neuronal firing patterns and oscillatory dynamics are evident in single-unit activity. At the population level, we uncovered the neural dynamics of rhythmic chewing, and quantified the neural variability at multiple timescales (complete feeding sequences, chewing sequence stages, chewing gape cycle phases) across food types. Our approach accommodates time-warping of chewing sequences and automatic model selection, and maps the latent states to chewing behaviors at fine timescales. SIGNIFICANCE Our work shows that neural representations of MIo ensembles display spatiotemporal patterns in chewing gape cycles at different chew sequence stages, and these patterns vary in a stage-dependent manner. Unsupervised learning and decoding analysis may reveal the link between complex MIo spatiotemporal patterns and chewing kinematics.
Affiliation(s)
- Shizhao Liu
- Department of Psychiatry, Department of Neuroscience & Physiology, New York University School of Medicine, New York, NY 10016, United States of America; Department of Biomedical Engineering, Tsinghua University, Beijing, People's Republic of China
45
Williamson RC, Doiron B, Smith MA, Yu BM. Bridging large-scale neuronal recordings and large-scale network models using dimensionality reduction. Curr Opin Neurobiol 2019; 55:40-47. [PMID: 30677702 DOI: 10.1016/j.conb.2018.12.009]
Abstract
A long-standing goal in neuroscience has been to bring together neuronal recordings and neural network modeling to understand brain function. Neuronal recordings can inform the development of network models, and network models can in turn provide predictions for subsequent experiments. Traditionally, neuronal recordings and network models have been related using single-neuron and pairwise spike train statistics. We review here recent studies that have begun to relate neuronal recordings and network models based on the multi-dimensional structure of neuronal population activity, as identified using dimensionality reduction. This approach has been used to study working memory, decision making, motor control, and more. Dimensionality reduction has provided common ground for incisive comparisons and tight interplay between neuronal recordings and network models.
Affiliation(s)
- Ryan C Williamson
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA, USA; School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Brent Doiron
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Matthew A Smith
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Byron M Yu
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.
46
Huang C, Ruff DA, Pyle R, Rosenbaum R, Cohen MR, Doiron B. Circuit Models of Low-Dimensional Shared Variability in Cortical Networks. Neuron 2018; 101:337-348.e4. [PMID: 30581012 DOI: 10.1016/j.neuron.2018.11.034]
Abstract
Trial-to-trial variability is a reflection of the circuitry and cellular physiology that make up a neuronal network. A pervasive yet puzzling feature of cortical circuits is that despite their complex wiring, population-wide shared spiking variability is low dimensional. Previous model cortical networks cannot explain this global variability, and rather assume it is from external sources. We show that if the spatial and temporal scales of inhibitory coupling match known physiology, networks of model spiking neurons internally generate low-dimensional shared variability that captures population activity recorded in vivo. Shifting spatial attention into the receptive field of visual neurons has been shown to differentially modulate shared variability within and between brain areas. A top-down modulation of inhibitory neurons in our network provides a parsimonious mechanism for this attentional modulation. Our work provides a critical link between observed cortical circuit structure and realistic shared neuronal variability and its modulation.
Affiliation(s)
- Chengcheng Huang
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Douglas A Ruff
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Ryan Pyle
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
- Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA; Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, IN, USA
- Marlene R Cohen
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA.
47
Hennig JA, Golub MD, Lund PJ, Sadtler PT, Oby ER, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Yu BM, Chase SM. Constraints on neural redundancy. eLife 2018; 7:36774. [PMID: 30109848 PMCID: PMC6130976 DOI: 10.7554/elife.36774]
Abstract
Millions of neurons drive the activity of hundreds of muscles, meaning many different neural population activity patterns could generate the same movement. Studies have suggested that these redundant (i.e. behaviorally equivalent) activity patterns may be beneficial for neural computation. However, it is unknown what constraints may limit the selection of different redundant activity patterns. We leveraged a brain-computer interface, allowing us to define precisely which neural activity patterns were redundant. Rhesus monkeys made cursor movements by modulating neural activity in primary motor cortex. We attempted to predict the observed distribution of redundant neural activity. Principles inspired by work on muscular redundancy did not accurately predict these distributions. Surprisingly, the distributions of redundant neural activity and task-relevant activity were coupled, which enabled accurate predictions of the distributions of redundant activity. This suggests limits on the extent to which redundancy may be exploited by the brain for computation.

When you swing a tennis racket, muscles in your arm contract in a specific sequence. For this to happen, millions of neurons in your brain and spinal cord must fire to make those muscles contract. If you swing the racket a second time, the same muscles in your arm will contract again. But the firing pattern of the underlying neurons will probably be different. This phenomenon, in which different patterns of neural activity generate the same outcome, is called neural redundancy. Neural redundancy allows a set of neurons to perform multiple tasks at once. For example, the same neurons may drive an arm movement while simultaneously planning the next activity. But does performing a given task constrain how often different patterns of neural activity can be produced? If so, this would limit whether other tasks could be carried out at the same time. To address this, Hennig et al. trained macaque monkeys to use a brain-computer interface (BCI). This is a device that reads out electrical brain activity and converts it into signals that can be used to control another device. The key advantage of a BCI is that the redundant activity patterns are precisely known. The monkeys learned to use their brain activity, via the BCI, to move a cursor on a computer screen in different directions. The results revealed that monkeys could only produce a limited number of different patterns of brain activity for a given BCI cursor movement. This suggests that the ability of a group of neurons to multitask is restricted. For example, if the same set of neurons is involved in both planning and performing movements, then an animal's ability to plan a future movement will depend on the one it is currently performing. BCIs can help patients who have suffered stroke or paralysis. They enable patients to use their brain activity to control a computer or even robotic limbs. Understanding how the brain controls BCIs will help us improve their performance and deepen our knowledge of how the brain plans and performs movements. This might include designing BCIs that allow users to multitask more effectively.
Affiliation(s)
- Jay A Hennig
- Program in Neural Computation, Carnegie Mellon University, Pittsburgh, United States; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Machine Learning Department, Carnegie Mellon University, Pittsburgh, United States
- Matthew D Golub
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, United States
- Peter J Lund
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Machine Learning Department, Carnegie Mellon University, Pittsburgh, United States; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, United States
- Patrick T Sadtler
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
- Emily R Oby
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
- Kristin M Quick
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
- Stephen I Ryu
- Department of Neurosurgery, Palo Alto Medical Foundation, California, United States; Department of Electrical Engineering, Stanford University, California, United States
- Elizabeth C Tyler-Kabara
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, United States; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, United States
- Aaron P Batista
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
- Byron M Yu
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, United States; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, United States
- Steven M Chase
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, United States
48
Mastrogiuseppe F, Ostojic S. Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks. Neuron 2018; 99:609-623.e29. [PMID: 30057201 DOI: 10.1016/j.neuron.2018.07.003]
Abstract
Large-scale neural recordings have established that the transformation of sensory stimuli into motor outputs relies on low-dimensional dynamics at the population level, while individual neurons exhibit complex selectivity. Understanding how low-dimensional computations on mixed, distributed representations emerge from the structure of the recurrent connectivity and inputs to cortical networks is a major challenge. Here, we study a class of recurrent network models in which the connectivity is a sum of a random part and a minimal, low-dimensional structure. We show that, in such networks, the dynamics are low dimensional and can be directly inferred from connectivity using a geometrical approach. We exploit this understanding to determine minimal connectivity required to implement specific computations and find that the dynamical range and computational capacity quickly increase with the dimensionality of the connectivity structure. This framework produces testable experimental predictions for the relationship between connectivity, low-dimensional dynamics, and computational features of recorded neurons.
Affiliation(s)
- Francesca Mastrogiuseppe
- Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, 75005 Paris, France; Laboratoire de Physique Statistique, CNRS UMR 8550, École Normale Supérieure - PSL Research University, 75005 Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, 75005 Paris, France.
49
Downey JE, Schwed N, Chase SM, Schwartz AB, Collinger JL. Intracortical recording stability in human brain–computer interface users. J Neural Eng 2018; 15:046016. [PMID: 29553484 DOI: 10.1088/1741-2552/aab7a0]
50
Athalye VR, Santos FJ, Carmena JM, Costa RM. Evidence for a neural law of effect. Science 2018; 359:1024-1029. [PMID: 29496877 DOI: 10.1126/science.aao6058]
Abstract
Thorndike's law of effect states that actions that lead to reinforcements tend to be repeated more often. Accordingly, neural activity patterns leading to reinforcement are also reentered more frequently. Reinforcement relies on dopaminergic activity in the ventral tegmental area (VTA), and animals shape their behavior to receive dopaminergic stimulation. Seeking evidence for a neural law of effect, we found that mice learn to more frequently reenter motor cortical activity patterns that trigger optogenetic VTA self-stimulation. Learning was accompanied by gradual shaping of these patterns, with participating neurons progressively increasing and aligning their covariance to that of the target pattern. Motor cortex patterns that lead to phasic dopaminergic VTA activity are progressively reinforced and shaped, suggesting a mechanism by which animals select and shape actions to reliably achieve reinforcement.
Affiliation(s)
- Vivek R Athalye
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon 1400-038, Portugal; Department of Electrical Engineering and Computer Sciences, University of California-Berkeley, Berkeley, CA 94720, USA
- Fernando J Santos
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon 1400-038, Portugal
- Jose M Carmena
- Department of Electrical Engineering and Computer Sciences, University of California-Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California-Berkeley, Berkeley, CA 94720, USA; Joint Graduate Group in Bioengineering, University of California-Berkeley and University of California-San Francisco, Berkeley, CA 94720, USA
- Rui M Costa
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon 1400-038, Portugal; Departments of Neuroscience and Neurology, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10032, USA