1
Koch D, Nandan A, Ramesan G, Koseska A. Biological computations: Limitations of attractor-based formalisms and the need for transients. Biochem Biophys Res Commun 2024; 720:150069. PMID: 38754165. DOI: 10.1016/j.bbrc.2024.150069.
Abstract
Living systems, from single cells to higher vertebrates, receive a continuous stream of non-stationary inputs that they sense, e.g., via cell surface receptors or sensory organs. By integrating this time-varying, multi-sensory, and often noisy information with memory using complex molecular or neuronal networks, they generate a variety of responses beyond simple stimulus-response associations, including avoidance behavior, life-long learning, and social interactions. In a broad sense, these processes can be understood as a type of biological computation. Taking as a basis generic features of biological computations, such as real-time responsiveness or robustness and flexibility of the computation, we highlight the limitations of the current attractor-based framework for understanding computations in biological systems. We argue that frameworks based on transient dynamics away from attractors are better suited for the description of computations performed by neuronal and signaling networks. In particular, we discuss how quasi-stable transient dynamics from ghost states that emerge at criticality hold promise for developing an integrated framework of computations that can help us understand how living systems actively process information and learn from their continuously changing environment.
Affiliation(s)
- Daniel Koch
- Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
- Akhilesh Nandan
- Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
- Gayathri Ramesan
- Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
- Aneta Koseska
- Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
2
Koch D, Nandan A, Ramesan G, Tyukin I, Gorban A, Koseska A. Ghost channels and ghost cycles guiding long transients in dynamical systems. Phys Rev Lett 2024; 133:047202. PMID: 39121409. DOI: 10.1103/PhysRevLett.133.047202.
Abstract
Dynamical descriptions and modeling of natural systems have generally focused on fixed points, with saddles and saddle-based phase-space objects such as heteroclinic channels or cycles being the central concepts behind the emergence of quasistable long transients. The reliable and robust transient dynamics observed in real, inherently noisy systems are, however, not reproduced by saddle-based dynamics, as demonstrated here. Generalizing the notion of ghost states, we provide a complementary framework that does not rely on the precise knowledge or existence of (un)stable fixed points, but rather on slow directed flows organized by ghost sets into ghost channels and ghost cycles. Moreover, we show that the appearance of these novel objects is an emergent property of a broad class of models typically used for the description of natural systems.
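The slow flow near a "ghost" can be seen in a one-line model. A minimal sketch (not from the paper; model and parameters are illustrative): for dx/dt = a + x² just past a saddle-node bifurcation, the fixed points have vanished for a > 0, yet trajectories still linger near x = 0, producing a long quasi-stable transient with no attractor involved.

```python
# Illustrative toy (not from the paper): dx/dt = a + x**2 just past a
# saddle-node bifurcation. For a > 0 there are no fixed points, but the
# flow through the "ghost" region near x = 0 is extremely slow.

def transit_time(a, x0=-2.0, x1=2.0, dt=1e-3):
    """Euler-integrate dx/dt = a + x**2 from x0 until x exceeds x1."""
    x, t = x0, 0.0
    while x < x1:
        x += (a + x * x) * dt
        t += dt
    return t

# The transit time scales like 1/sqrt(a): the transient becomes
# arbitrarily long as the bifurcation is approached.
times = {a: transit_time(a) for a in (0.1, 0.01, 0.001)}
```

The 1/√a divergence of the passage time is the hallmark of ghost-dominated slow flow that the framework above generalizes to channels and cycles of such regions.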
3
Jiang X, Dimitriou E, Grabe V, Sun R, Chang H, Zhang Y, Gershenzon J, Rybak J, Hansson BS, Sachse S. Ring-shaped odor coding in the antennal lobe of migratory locusts. Cell 2024; 187:3973-3991.e24. PMID: 38897195. DOI: 10.1016/j.cell.2024.05.036.
Abstract
The representation of odors in the locust antennal lobe with its >2,000 glomeruli has long remained a perplexing puzzle. We employed the CRISPR-Cas9 system to generate transgenic locusts expressing the genetically encoded calcium indicator GCaMP in olfactory sensory neurons. Using two-photon functional imaging, we mapped the spatial activation patterns representing a wide range of ecologically relevant odors across all six developmental stages. Our findings reveal a functionally ring-shaped organization of the antennal lobe composed of specific glomerular clusters. This configuration establishes an odor-specific chemotopic representation by encoding different chemical classes and ecologically distinct odors in the form of glomerular rings. The ring-shaped glomerular arrangement, which we confirm by selective targeting of OR70a-expressing sensory neurons, occurs throughout development, and the odor-coding pattern within the glomerular population is consistent across developmental stages. Mechanistically, this unconventional spatial olfactory code reflects the locust-specific and multiplexed glomerular innervation pattern of the antennal lobe.
Affiliation(s)
- Xingcong Jiang
- Department of Evolutionary Neuroethology, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany; Research Group Olfactory Coding, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Eleftherios Dimitriou
- Department of Evolutionary Neuroethology, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Veit Grabe
- Microscopic Service Group, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Ruo Sun
- Department of Biochemistry, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Hetan Chang
- Department of Evolutionary Neuroethology, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Yifu Zhang
- Department of Evolutionary Neuroethology, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Jonathan Gershenzon
- Department of Biochemistry, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Jürgen Rybak
- Department of Evolutionary Neuroethology, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Bill S Hansson
- Department of Evolutionary Neuroethology, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
- Silke Sachse
- Research Group Olfactory Coding, Max Planck Institute for Chemical Ecology, 07745 Jena, Germany
4
Do J, James O, Kim YJ. Choice-dependent delta-band neural trajectory during semantic category decision making in the human brain. iScience 2024; 27:110173. PMID: 39040068. PMCID: PMC11260863. DOI: 10.1016/j.isci.2024.110173.
Abstract
Recent human brain imaging studies have identified widely distributed cortical areas that represent information about the meaning of language. Yet the dynamic nature of widespread neural activity as a correlate of semantic information processing remains poorly explored. Our state-space analysis of electroencephalograms (EEGs) recorded during a semantic match-to-category task shows that, depending on the semantic category and decision path chosen by participants, whole-brain delta-band dynamics follow distinct trajectories that are correlated with participants' response times on a trial-by-trial basis. In particular, the proximity of the neural trajectory to the category-decision-specific region of state space was predictive of participants' decision-making reaction times. We also found that posterolateral regions primarily encoded word categories while postero-central regions encoded category decisions. Our results demonstrate the role of neural dynamics embedded in the evolving multivariate delta-band activity patterns in processing the semantic relatedness of words and semantic category-based decision-making.
Affiliation(s)
- Jongrok Do
- Center for Cognition and Sociality, Institute for Basic Science, Daejeon 34126, Republic of Korea
- Oliver James
- Center for Cognition and Sociality, Institute for Basic Science, Daejeon 34126, Republic of Korea
- Yee-Joon Kim
- Center for Cognition and Sociality, Institute for Basic Science, Daejeon 34126, Republic of Korea
5
Petelski I, Günzel Y, Sayin S, Kraus S, Couzin-Fuchs E. Synergistic olfactory processing for social plasticity in desert locusts. Nat Commun 2024; 15:5476. PMID: 38942759. PMCID: PMC11213921. DOI: 10.1038/s41467-024-49719-7.
Abstract
Desert locust plagues threaten the food security of millions. Central to their formation is crowding-induced plasticity, with social phenotypes changing from cryptic (solitarious) to swarming (gregarious). Here, we elucidate the implications of this transition on foraging decisions and corresponding neural circuits. We use behavioral experiments and Bayesian modeling to decompose the multi-modal facets of foraging, revealing olfactory social cues as critical. To this end, we investigate how corresponding odors are encoded in the locust olfactory system using in-vivo calcium imaging. We discover crowding-dependent synergistic interactions between food-related and social odors distributed across stable combinatorial response maps. The observed synergy was specific to the gregarious phase and manifested in distinct odor response motifs. Our results suggest a crowding-induced modulation of the locust olfactory system that enhances food detection in swarms. Overall, we demonstrate how linking sensory adaptations to behaviorally relevant tasks can improve our understanding of social modulation in non-model organisms.
Affiliation(s)
- Inga Petelski
- International Max Planck Research School for Quantitative Behavior, Ecology and Evolution from lab to field, 78464 Konstanz, Germany
- Department of Biology, University of Konstanz, 78464 Konstanz, Germany
- Department of Collective Behavior, Max Planck Institute of Animal Behavior, 78464 Konstanz, Germany
- Yannick Günzel
- International Max Planck Research School for Quantitative Behavior, Ecology and Evolution from lab to field, 78464 Konstanz, Germany
- Department of Biology, University of Konstanz, 78464 Konstanz, Germany
- Department of Collective Behavior, Max Planck Institute of Animal Behavior, 78464 Konstanz, Germany
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, 78464 Konstanz, Germany
- Sercan Sayin
- Department of Biology, University of Konstanz, 78464 Konstanz, Germany
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, 78464 Konstanz, Germany
- Susanne Kraus
- Department of Biology, University of Konstanz, 78464 Konstanz, Germany
- Einat Couzin-Fuchs
- Department of Biology, University of Konstanz, 78464 Konstanz, Germany
- Department of Collective Behavior, Max Planck Institute of Animal Behavior, 78464 Konstanz, Germany
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, 78464 Konstanz, Germany
6
Puri P, Wu ST, Su CY, Aljadeff J. Peripheral preprocessing in Drosophila facilitates odor classification. Proc Natl Acad Sci U S A 2024; 121:e2316799121. PMID: 38753511. PMCID: PMC11126917. DOI: 10.1073/pnas.2316799121.
Abstract
The mammalian brain implements sophisticated sensory processing algorithms along multilayered ("deep") neural networks. Strategies that insects use to meet similar computational demands, while relying on smaller nervous systems with shallow architectures, remain elusive. Using Drosophila as a model, we uncover the algorithmic role of odor preprocessing by a shallow network of compartmentalized olfactory receptor neurons. Each compartment operates as a ratiometric unit for specific odor mixtures. This computation arises from a simple mechanism: electrical coupling between two differently sized neurons. We demonstrate that downstream synaptic connectivity is shaped to optimally leverage amplification of a hedonic value signal in the periphery. Furthermore, peripheral preprocessing is shown to markedly improve novel odor classification in a higher brain center. Together, our work highlights a far-reaching functional role of the sensory periphery for downstream processing. By elucidating the implementation of powerful computations by a shallow network, we provide insights into general principles of efficient sensory processing algorithms.
Affiliation(s)
- Palka Puri
- Department of Physics, University of California, San Diego, La Jolla, CA 92093
- Shiuan-Tze Wu
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093
- Chih-Ying Su
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093
- Johnatan Aljadeff
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093
7
Morrell MC, Nemenman I, Sederberg A. Neural criticality from effective latent variables. eLife 2024; 12:RP89337. PMID: 38470471. PMCID: PMC10957169. DOI: 10.7554/eLife.89337.
Abstract
Observations of power laws in neural activity data have raised the intriguing notion that brains may operate in a critical state. One example of this critical state is 'avalanche criticality', which has been observed in various systems, including cultured neurons, zebrafish, rodent cortex, and human EEG. More recently, power laws were also observed in neural populations in the mouse under an activity coarse-graining procedure, and they were explained as a consequence of the neural activity being coupled to multiple latent dynamical variables. An intriguing possibility is that avalanche criticality emerges due to a similar mechanism. Here, we determine the conditions under which latent dynamical variables give rise to avalanche criticality. We find that populations coupled to multiple latent variables produce critical behavior across a broader parameter range than those coupled to a single, quasi-static latent variable, but in both cases, avalanche criticality is observed without fine-tuning of model parameters. We identify two regimes of avalanches, both critical but differing in the amount of information carried about the latent variable. Our results suggest that avalanche criticality arises in neural systems in which activity is effectively modeled as a population driven by a few dynamical variables and these variables can be inferred from the population activity.
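The latent-variable mechanism can be sketched with a toy simulation (all parameters are illustrative choices, not the paper's): a population of conditionally independent neurons is driven by one slow latent variable, and population spike counts are segmented into avalanches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent-variable model (parameters illustrative, not the paper's):
# N neurons spike independently given a slow latent variable h(t) that
# modulates their common firing probability.
N, T = 100, 20000
h = np.zeros(T)
for t in range(1, T):                      # slow AR(1) latent variable
    h[t] = 0.99 * h[t - 1] + 0.1 * rng.normal()
p = 1.0 / (1.0 + np.exp(-(h - 5.0)))       # low, latent-modulated rate
counts = rng.binomial(N, p)                # population spike count per bin

# An "avalanche" is a maximal run of bins with at least one spike;
# its size is the total number of spikes in that run.
sizes, current = [], 0
for c in counts:
    if c > 0:
        current += c
    elif current > 0:
        sizes.append(current)
        current = 0
```

Because the latent variable dwells in high- and low-rate epochs, the size distribution acquires a heavy tail; the paper's question is precisely when such latent modulation reproduces avalanche criticality without fine-tuning.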
Affiliation(s)
- Mia C Morrell
- Department of Physics, New York University, New York, United States
- Ilya Nemenman
- Department of Physics, Department of Biology, Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, United States
- Audrey Sederberg
- Department of Neuroscience, University of Minnesota Medical School, Minneapolis, United States
8
Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. J Neural Eng 2024; 21:026001. PMID: 38016450. PMCID: PMC10913727. DOI: 10.1088/1741-2552/ad1053.
Abstract
Objective. Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction of multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during a naturalistic reach and grasp behavior. Main results. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower training time while being better at identifying the dynamical modes and having better or similar accuracy in predicting neural activity and behavior. Significance. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs that track non-stationary dynamics or for reducing offline training time in neuroscience investigations.
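The combined Poisson-Gaussian observation model at the core of the problem can be written generatively. This is an illustrative simulation only (dimensions, matrices, and noise levels are arbitrary choices, not the paper's), showing the two modalities such an algorithm must fuse:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative generative model (not from the paper): a shared linear
# latent state x drives both discrete spike counts and continuous
# field potentials.
#   x[t+1]    = A x[t] + w[t]
#   spikes[t] ~ Poisson(exp(C_s x[t] + b))   (discrete modality)
#   fields[t] = C_f x[t] + v[t]              (continuous modality)
nx, n_spk, n_fld, T = 2, 10, 5, 500
A = np.array([[0.95, 0.1], [-0.1, 0.95]])   # stable rotational dynamics
C_s = rng.normal(scale=0.5, size=(n_spk, nx))
C_f = rng.normal(size=(n_fld, nx))
b = -1.0                                    # baseline log firing rate

x = np.zeros((T, nx))
for t in range(1, T):
    x[t] = A @ x[t - 1] + 0.1 * rng.normal(size=nx)

spikes = rng.poisson(np.exp(x @ C_s.T + b))             # count data
fields = x @ C_f.T + 0.2 * rng.normal(size=(T, n_fld))  # LFP-like data
```

Subspace identification then recovers the dynamics matrix and latent dimension from second-order statistics of both modalities jointly; the paper's contribution is an analytical method for this mixed discrete-continuous case with valid noise statistics.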
Affiliation(s)
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
9
Crosser JT, Brinkman BAW. Applications of information geometry to spiking neural network activity. Phys Rev E 2024; 109:024302. PMID: 38491696. DOI: 10.1103/PhysRevE.109.024302.
Abstract
The space of possible behaviors that complex biological systems may exhibit is unimaginably vast, and these systems often appear to be stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, although the brain may not make use of all of them. Understanding which of these possible patterns are actually used by the brain, and how those sets of patterns change as properties of neural circuitry change is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model outputs change as a function of their parameters, giving a quantitative notion of "distances" between outputs. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive, and demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.
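The basic object behind such model-manifold embeddings is the Fisher information metric. A minimal, hypothetical example (not the paper's excitatory-inhibitory network model): for independent Bernoulli spiking units, the metric is diagonal and largest where the firing probability is 0.5.

```python
import numpy as np

# Hypothetical minimal example (not the paper's E-I network): for
# independent Bernoulli spiking units with firing probabilities
# p_i = sigmoid(theta_i), the Fisher information matrix is diagonal
# with F_ii = p_i (1 - p_i). The FIM is the metric that information
# geometry places on the model manifold: it quantifies how
# distinguishable nearby parameter settings are from observed spikes.

def fisher_information(theta):
    p = 1.0 / (1.0 + np.exp(-np.asarray(theta, dtype=float)))
    return np.diag(p * (1.0 - p))

F = fisher_information([0.0, 2.0])
# Sensitivity is maximal at p = 0.5 (theta = 0) and shrinks as a unit
# saturates toward always- or never-firing.
```

Ranking parameter directions by eigenvalues of such a metric is, in spirit, how the paper identifies the statistical coordinates to which network behavior is most sensitive.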
Affiliation(s)
- Jacob T Crosser
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA and Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA and Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
10
Sun K, Ray S, Gupta N, Aldworth Z, Stopfer M. Olfactory system structure and function in newly hatched and adult locusts. Sci Rep 2024; 14:2608. PMID: 38297144. PMCID: PMC10830560. DOI: 10.1038/s41598-024-52879-7.
Abstract
An important question in neuroscience is how sensory systems change as animals grow and interact with the environment. Exploring sensory systems in animals as they develop can reveal how networks of neurons process information as the neurons themselves grow and the needs of the animal change. Here we compared the structure and function of peripheral parts of the olfactory pathway in newly hatched and adult locusts. We found that populations of olfactory sensory neurons (OSNs) in hatchlings and adults responded with similar tunings to a panel of odors. The morphologies of local neurons (LNs) and projection neurons (PNs) in the antennal lobes (ALs) were very similar in both age groups; though smaller in hatchlings, they were proportional to overall brain size. The odor-evoked responses of LNs and PNs were also very similar in both age groups, characterized by complex patterns of activity including oscillatory synchronization. Notably, in hatchlings, spontaneous and odor-evoked firing rates of PNs were lower, and LFP oscillations were lower in frequency, than in adults. Hatchlings have smaller antennae with fewer OSNs; removing antennal segments from adults also reduced LFP oscillation frequency. Thus, consistent with earlier computational models, the developmental increase in frequency is due to increasing intensity of input to the oscillation circuitry. Overall, our results show that locusts hatch with a fully formed olfactory system that structurally and functionally matches that of the adult, despite its small size and lack of prior experience with olfactory stimuli.
Affiliation(s)
- Kui Sun
- Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Bethesda, MD, USA
- Subhasis Ray
- Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Bethesda, MD, USA
- Plaksha University, Sahibzada Ajit Singh Nagar, Punjab, India
- Nitin Gupta
- Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Bethesda, MD, USA
- Indian Institute of Technology Kanpur, Kanpur, 208016, India
- Zane Aldworth
- Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Bethesda, MD, USA
- Mark Stopfer
- Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Bethesda, MD, USA
11
Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024; 18:1273053. PMID: 38348287. PMCID: PMC10859875. DOI: 10.3389/fncom.2024.1273053.
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding), as well as for linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural system to be time-invariant, making them inadequate for modeling the nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas, decoding transient neuronal sensitivity, and linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insights into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying various behaviors in different regions of the brain.
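The distinction between a standard and a time-varying Poisson GLM can be sketched in a few lines. This toy is our construction (names and parameters are illustrative, not from the review): the stimulus weight drifts across the trial, which a time-invariant fit would average away.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative toy (not from the review): a Poisson neuron whose
# stimulus sensitivity beta(t) drifts within the trial,
#     log rate[t] = beta[t] * s[t] + b,
# whereas a standard GLM assumes a single constant beta throughout.
T = 1000
s = rng.normal(size=T)                  # stimulus time series
beta = np.linspace(0.2, 1.5, T)         # slowly increasing sensitivity
b = -0.5                                # baseline log rate
rates = np.exp(beta * s + b)
spikes = rng.poisson(rates)

# A time-invariant GLM fit to (s, spikes) would estimate one averaged
# beta and misattribute the late-trial gain; time-varying GLMs
# estimate beta[t] directly, e.g. via basis expansions or state-space
# smoothing over the coefficients.
```

This nonstationary-sensitivity setting is exactly where the time-varying extensions reviewed above differ from the classical point-process GLM.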
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States
12
Margolles P, Elosegi P, Mei N, Soto D. Unconscious manipulation of conceptual representations with decoded neurofeedback impacts search behavior. J Neurosci 2024; 44:e1235232023. PMID: 37985180. PMCID: PMC10866193. DOI: 10.1523/JNEUROSCI.1235-23.2023.
Abstract
The necessity of conscious awareness in human learning has been a long-standing topic in psychology and neuroscience. Previous research on non-conscious associative learning is limited by the low signal-to-noise ratio of the subliminal stimulus, and the evidence remains controversial, including failures to replicate. Using functional MRI decoded neurofeedback (DecNef), we guided participants of both sexes to generate neural patterns akin to those observed when visually perceiving real-world entities (e.g., dogs). Importantly, participants remained unaware of the actual content represented by these patterns. We utilized an associative DecNef approach to imbue perceptual meaning (e.g., dogs) into Japanese hiragana characters that held no inherent meaning for our participants, bypassing a conscious link between the characters and the dog concept. Despite their lack of awareness regarding the neurofeedback objective, participants successfully learned to activate the target perceptual representations in the bilateral fusiform. The behavioral significance of the training was evaluated in a visual search task. DecNef and control participants searched for dog or scissors targets that were pre-cued by the hiragana used during DecNef training or by a control hiragana. The DecNef hiragana did not prime search for its associated target but, strikingly, participants were impaired at searching for the targeted perceptual category. Hence, conscious awareness may function to support higher-order associative learning. Meanwhile, lower-level forms of re-learning, modification, or plasticity in existing neural representations can occur unconsciously, with behavioral consequences outside the original training context. The work also provides an account of DecNef effects in terms of neural representational drift.
Affiliation(s)
- Pedro Margolles
- Basque Center on Cognition, Brain and Language (BCBL), Donostia - San Sebastián, Gipuzkoa 20009, Spain
- Universidad del País Vasco/Euskal Herriko Unibertsitatea (UPV/EHU), Leioa, Bizkaia 48940, Spain
- Patxi Elosegi
- Basque Center on Cognition, Brain and Language (BCBL), Donostia - San Sebastián, Gipuzkoa 20009, Spain
- Universidad del País Vasco/Euskal Herriko Unibertsitatea (UPV/EHU), Leioa, Bizkaia 48940, Spain
- Ning Mei
- Basque Center on Cognition, Brain and Language (BCBL), Donostia - San Sebastián, Gipuzkoa 20009, Spain
- David Soto
- Basque Center on Cognition, Brain and Language (BCBL), Donostia - San Sebastián, Gipuzkoa 20009, Spain
- Ikerbasque, Basque Foundation for Science, Bilbao, Bizkaia 48009, Spain
13
Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. bioRxiv [Preprint] 2024:2024.01.03.573543. PMID: 38260549. PMCID: PMC10802336. DOI: 10.1101/2024.01.03.573543.
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally-occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
14
Rizzoglio F, Altan E, Ma X, Bodkin KL, Dekleva BM, Solla SA, Kennedy A, Miller LE. From monkeys to humans: observation-based EMG brain-computer interface decoders for humans with paralysis. J Neural Eng 2023; 20:056040. PMID: 37844567; PMCID: PMC10618714; DOI: 10.1088/1741-2552/ad038e.
Abstract
Objective. Intracortical brain-computer interfaces (iBCIs) aim to enable individuals with paralysis to control the movement of virtual limbs and robotic arms. Because patients' paralysis prevents training a direct neural activity to limb movement decoder, most iBCIs rely on 'observation-based' decoding in which the patient watches a moving cursor while mentally envisioning making the movement. However, this reliance on observed target motion for decoder development precludes its application to the prediction of unobservable motor output like muscle activity. Here, we ask whether recordings of muscle activity from a surrogate individual performing the same movement as the iBCI patient can be used as a target for an iBCI decoder. Approach. We test two possible approaches, each using data from a human iBCI user and a monkey, both performing similar motor actions. In one approach, we trained a decoder to predict the electromyographic (EMG) activity of a monkey from neural signals recorded from a human. We then contrast this with a second approach, based on the hypothesis that the low-dimensional 'latent' neural representations of motor behavior, known to be preserved across time for a given behavior, might also be preserved across individuals. We 'transferred' an EMG decoder trained solely on monkey data to the human iBCI user after using Canonical Correlation Analysis (CCA) to align the human latent signals to those of the monkey. Main results. We found that both direct and transfer decoding approaches allowed accurate EMG predictions between two monkeys and from a monkey to a human. Significance. Our findings suggest that these latent representations of behavior are consistent across animals and even primate species. These methods are an important initial step in the development of iBCI decoders that generate EMG predictions that could serve as signals for a biomimetic decoder controlling motion and impedance of a prosthetic arm, or even muscle force directly through functional electrical stimulation.
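The transfer step described in the Approach, aligning one individual's latent signals to another's with Canonical Correlation Analysis before reusing a decoder, can be sketched as follows. This is an illustrative toy on synthetic latents, not the authors' pipeline; the function name `cca_align` and the data shapes are assumptions.

```python
import numpy as np

def cca_align(X, Y):
    """Map latent signals X into the coordinate frame of Y via CCA.
    X, Y: (timepoints, latent_dims) arrays sampled on a matched time base."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    # QR-based CCA: orthonormalize each dataset, then SVD the cross-product
    Qx, Rx = np.linalg.qr(Xc)
    Qy, Ry = np.linalg.qr(Yc)
    U, s, Vt = np.linalg.svd(Qx.T @ Qy)
    Wx = np.linalg.solve(Rx, U)     # canonical weights for X
    Wy = np.linalg.solve(Ry, Vt.T)  # canonical weights for Y
    # push X through its canonical space, then back out through Y's weights
    return (Xc @ Wx) @ np.linalg.pinv(Wy) + Y.mean(0)

# toy check: the "monkey" latents Y are a rotated copy of the "human" latents X,
# so alignment should recover them almost exactly
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
R = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # unknown rotation
Y = X @ R
X_aligned = cca_align(X, Y)
# a decoder fitted on Y can now be applied to X_aligned
```

In the noiseless linear case the canonical correlations are all one, so the aligned signals match exactly; with real neural latents the alignment is only as good as the shared correlational structure.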
Affiliation(s)
- Fabio Rizzoglio
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Ege Altan
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States of America
- Xuan Ma
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Kevin L Bodkin
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Brian M Dekleva
- Rehab Neural Engineering Labs, Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, United States of America
- Sara A Solla
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Department of Physics and Astronomy, Northwestern University, Evanston, IL, United States of America
- Ann Kennedy
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Lee E Miller
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States of America
- Shirley Ryan AbilityLab, Chicago, IL, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, United States of America
15
Boucher PO, Wang T, Carceroni L, Kane G, Shenoy KV, Chandrasekaran C. Initial conditions combine with sensory evidence to induce decision-related dynamics in premotor cortex. Nat Commun 2023; 14:6510. PMID: 37845221; PMCID: PMC10579235; DOI: 10.1038/s41467-023-41752-2.
Abstract
We used a dynamical systems perspective to understand decision-related neural activity, a fundamentally unresolved problem. This perspective posits that time-varying neural activity is described by a state equation with an initial condition and evolves in time by combining, at each time step, recurrent activity and inputs. We hypothesized various dynamical mechanisms of decisions, simulated them in models to derive predictions, and evaluated these predictions by examining firing rates of neurons in the dorsal premotor cortex (PMd) of monkeys performing a perceptual decision-making task. Prestimulus neural activity (i.e., the initial condition) predicted poststimulus neural trajectories and covaried with reaction time (RT) and the outcome of the previous trial, but not with choice. Poststimulus dynamics depended on both the sensory evidence and the initial condition, with easier stimuli and fast initial conditions leading to the fastest choice-related dynamics. Together, these results suggest that initial conditions combine with sensory evidence to induce decision-related dynamics in PMd.
Affiliation(s)
- Pierre O Boucher
- Department of Biomedical Engineering, Boston University, Boston, MA 02115, USA
- Tian Wang
- Department of Biomedical Engineering, Boston University, Boston, MA 02115, USA
- Laura Carceroni
- Undergraduate Program in Neuroscience, Boston University, Boston, MA 02115, USA
- Gary Kane
- Department of Psychological and Brain Sciences, Boston University, Boston, MA 02115, USA
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA
- Howard Hughes Medical Institute, HHMI, Chevy Chase, MD 20815-6789, USA
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
- Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, USA
- Bio-X Program, Stanford University, Stanford, CA 94305, USA
- Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
- Chandramouli Chandrasekaran
- Department of Biomedical Engineering, Boston University, Boston, MA 02115, USA
- Department of Psychological and Brain Sciences, Boston University, Boston, MA 02115, USA
- Center for Systems Neuroscience, Boston University, Boston, MA 02115, USA
- Department of Anatomy & Neurobiology, Boston University, Boston, MA 02118, USA
16
De A, Chaudhuri R. Common population codes produce extremely nonlinear neural manifolds. Proc Natl Acad Sci U S A 2023; 120:e2305853120. PMID: 37733742; PMCID: PMC10523500; DOI: 10.1073/pnas.2305853120.
Abstract
Populations of neurons represent sensory, motor, and cognitive variables via patterns of activity distributed across the population. The size of the population used to encode a variable is typically much greater than the dimension of the variable itself, and thus the corresponding neural population activity occupies lower-dimensional subsets of the full set of possible activity states. Given population activity data with such lower-dimensional structure, a fundamental question is how close the low-dimensional data lie to a linear subspace. The linearity or nonlinearity of the low-dimensional structure reflects important computational features of the encoding, such as robustness and generalizability. Moreover, identifying such linear structure underlies common data analysis methods such as Principal Component Analysis (PCA). Here, we show that for data drawn from many common population codes, the resulting point clouds and manifolds are exceedingly nonlinear, with the dimension of the best-fitting linear subspace growing at least exponentially with the true dimension of the data. Consequently, linear methods like PCA fail dramatically at identifying the true underlying structure, even in the limit of arbitrarily many data points and no noise.
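A minimal numerical illustration of this effect (my own toy, not the paper's analysis): a one-dimensional circular variable encoded by bump-like tuning curves yields a point cloud whose best-fitting linear subspace needs several times more dimensions than the single variable actually encoded.

```python
import numpy as np

# A 1-D circular variable encoded by N tuning-curve neurons: the true
# (intrinsic) dimension is 1, but the point cloud is a highly curved ring.
N = 100                                                  # neurons
theta = np.linspace(0, 2 * np.pi, 500, endpoint=False)   # encoded angle
centers = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred angles
width = 0.2                                              # narrow tuning -> strong curvature
# von Mises-like tuning curves, shape (samples, neurons)
activity = np.exp((np.cos(theta[:, None] - centers[None, :]) - 1) / width)

# PCA: how many linear dimensions are needed to capture 95% of the variance?
Xc = activity - activity.mean(0)
s = np.linalg.svd(Xc, compute_uv=False)
var = s**2 / np.sum(s**2)
k95 = int(np.searchsorted(np.cumsum(var), 0.95)) + 1
print(f"true dimension: 1, linear dimension at 95% variance: {k95}")
```

Narrowing `width` (sharper tuning) inflates `k95` further, while the intrinsic dimension stays one, which is the gap between linear and nonlinear dimensionality the abstract describes.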
Affiliation(s)
- Anandita De
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Physics, University of California, Davis, CA 95616
- Rishidev Chaudhuri
- Center for Neuroscience, University of California, Davis, CA 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, CA 95616
- Department of Mathematics, University of California, Davis, CA 95616
17
Chandak R, Raman B. Neural manifolds for odor-driven innate and acquired appetitive preferences. Nat Commun 2023; 14:4719. PMID: 37543628; PMCID: PMC10404252; DOI: 10.1038/s41467-023-40443-2.
Abstract
Sensory stimuli evoke spiking neural responses that innately or after learning drive suitable behavioral outputs. How are these spiking activities intrinsically patterned to encode for innate preferences, and could the neural response organization impose constraints on learning? We examined this issue in the locust olfactory system. Using a diverse odor panel, we found that ensemble activities both during ('ON response') and after stimulus presentations ('OFF response') could be linearly mapped onto overall appetitive preference indices. Although diverse, ON and OFF response patterns generated by innately appetitive odorants (higher palp-opening responses) were still limited to a low-dimensional subspace (a 'neural manifold'). Similarly, innately non-appetitive odorants evoked responses that were separable yet confined to another neural manifold. Notably, only odorants that evoked neural response excursions in the appetitive manifold could be associated with gustatory reward. In sum, these results provide insights into how encoding for innate preferences can also impact associative learning.
Affiliation(s)
- Rishabh Chandak
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, 63130, USA
- Baranidharan Raman
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, 63130, USA
18
Nandan A, Koseska A. Non-asymptotic transients away from steady states determine cellular responsiveness to dynamic spatial-temporal signals. PLoS Comput Biol 2023; 19:e1011388. PMID: 37578988; PMCID: PMC10449117; DOI: 10.1371/journal.pcbi.1011388.
Abstract
The majority of the theory on cell polarization, and of our understanding of cellular sensing and responsiveness to localized chemical cues, has been based on the idea that non-polarized and polarized cell states can be represented as stable states, with asymptotic switching between them. The existing model classes that describe the dynamics of signaling networks underlying polarization are formulated within the framework of autonomous systems. However, these models do not simultaneously capture both robust maintenance of the polarized state for longer than the signal duration and retained responsiveness to signals with complex spatial-temporal distributions. Based on recent experimental evidence that biochemical networks are organized at criticality, we challenge the current concepts and demonstrate that non-asymptotic signaling dynamics arising at criticality uniquely ensure optimal responsiveness to changing chemoattractant fields. We provide a framework to characterize the non-asymptotic dynamics of the system's state trajectories through a non-autonomous treatment of the system, further emphasizing the importance of (long) transient dynamics, as well as the necessity of changing the mathematical formalism when describing biological systems that operate in changing environments.
Affiliation(s)
- Akhilesh Nandan
- Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behavior – caesar, Bonn, Germany
- Aneta Koseska
- Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behavior – caesar, Bonn, Germany
19
Kirchherr S, Mildiner Moraga S, Coudé G, Bimbi M, Ferrari PF, Aarts E, Bonaiuto JJ. Bayesian multilevel hidden Markov models identify stable state dynamics in longitudinal recordings from macaque primary motor cortex. Eur J Neurosci 2023; 58:2787-2806. PMID: 37382060; DOI: 10.1111/ejn.16065.
Abstract
Neural populations, rather than single neurons, may be the fundamental unit of cortical computation. Analysing chronically recorded neural population activity is challenging not only because of the high dimensionality of activity but also because of changes in the signal that may or may not be due to neural plasticity. Hidden Markov models (HMMs) are a promising technique for analysing such data in terms of discrete latent states, but previous approaches have not considered the statistical properties of neural spiking data, have not been adaptable to longitudinal data, or have not modelled condition-specific differences. We present a multilevel Bayesian HMM that addresses these shortcomings by incorporating multivariate Poisson log-normal emission probability distributions, multilevel parameter estimation and trial-specific condition covariates. We applied this framework to multi-unit neural spiking data recorded using chronically implanted multi-electrode arrays from macaque primary motor cortex during a cued reaching, grasping and placing task. We show that, in line with previous work, the model identifies latent neural population states which are tightly linked to behavioural events, despite the model being trained without any information about event timing. The association between these states and corresponding behaviour is consistent across multiple days of recording. Notably, this consistency is not observed in the case of a single-level HMM, which fails to generalise across distinct recording sessions. The utility and stability of this approach are demonstrated using a previously learned task, but this multilevel Bayesian HMM framework would be especially suited for future studies of long-term plasticity in neural populations.
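As a much-simplified sketch of the core modelling ingredient (plain Poisson emissions and a single level, unlike the paper's multivariate Poisson log-normal, multilevel formulation), the forward-algorithm log-likelihood of multi-unit spike counts under a discrete-state HMM can be computed as follows; the function name and toy parameters are assumptions for illustration.

```python
import numpy as np
from math import lgamma

def poisson_hmm_loglik(counts, rates, trans, init):
    """Forward-algorithm log-likelihood of spike counts under a Poisson HMM.
    counts: (T, units) spike counts; rates: (states, units) firing rates;
    trans: (states, states) transition matrix; init: (states,) state prior."""
    lg = np.vectorize(lgamma)
    # log P(counts_t | state s), summed over units -> shape (T, states)
    loge = (counts[:, None, :] * np.log(rates)[None]
            - rates[None] - lg(counts + 1.0)[:, None, :]).sum(-1)
    loga = np.log(init) + loge[0]          # log forward messages at t = 0
    for t in range(1, len(counts)):
        m = loga.max()                     # log-sum-exp for stability
        loga = m + np.log(np.exp(loga - m) @ trans) + loge[t]
    m = loga.max()
    return m + np.log(np.exp(loga - m).sum())

# toy example: two latent states (low/high firing) observed on two units
counts = np.array([[2, 0], [3, 1], [6, 4], [5, 3]])
rates = np.array([[1.0, 0.5],    # state 0: low rates
                  [3.0, 2.0]])   # state 1: high rates
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
init = np.array([0.6, 0.4])
ll = poisson_hmm_loglik(counts, rates, trans, init)
```

Fitting the rates and transitions (e.g., by EM or, as in the paper, by multilevel Bayesian estimation) then yields latent states that can be compared against behavioural event timing.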
Affiliation(s)
- Sebastien Kirchherr
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
| | | | - Gino Coudé
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
- Inovarion, Paris, France
| | - Marco Bimbi
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
| | - Pier F Ferrari
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
| | - Emmeke Aarts
- Department of Methodology and Statistics, Universiteit Utrecht, Utrecht, Netherlands
| | - James J Bonaiuto
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Bron, France
- Université Claude Bernard Lyon 1, Université de Lyon, France
| |
20
Naik S, Adibpour P, Dubois J, Dehaene-Lambertz G, Battaglia D. Event-related variability is modulated by task and development. Neuroimage 2023; 276:120208. PMID: 37268095; DOI: 10.1016/j.neuroimage.2023.120208.
Abstract
In carefully designed experimental paradigms, cognitive scientists interpret mean event-related potentials (ERPs) in terms of cognitive operations. However, the huge signal variability from one trial to the next calls into question how representative such mean events are. We explored here whether this variability is unwanted noise or an informative part of the neural response. We took advantage of the rapid changes in the visual system during human infancy and analyzed the variability of visual responses to central and lateralized faces in 2- to 6-month-old infants compared to adults using high-density electroencephalography (EEG). We observed that neural trajectories of individual trials always remain very far from ERP components, only moderately bending their direction with a substantial temporal jitter across trials. However, single-trial trajectories displayed characteristic patterns of acceleration and deceleration when approaching ERP components, as if they were under the active influence of steering forces causing transient attraction and stabilization. These dynamic events could only partly be accounted for by induced microstate transitions or phase-reset phenomena. Importantly, these structured modulations of response variability, both between and within trials, had a rich sequential organization, which, in infants, was modulated by task difficulty and age. Our approaches to characterize Event-Related Variability (ERV) expand on classic ERP analyses and provide the first evidence for the functional role of ongoing neural variability in human infants.
Affiliation(s)
- Shruti Naik
- Cognitive Neuroimaging Unit U992, NeuroSpin Center, F-91190 Gif/Yvette, France
| | - Parvaneh Adibpour
- Cognitive Neuroimaging Unit U992, NeuroSpin Center, F-91190 Gif/Yvette, France
| | - Jessica Dubois
- Cognitive Neuroimaging Unit U992, NeuroSpin Center, F-91190 Gif/Yvette, France; Université de Paris, NeuroDiderot, Inserm, F-75019 Paris, France
| | | | - Demian Battaglia
- Institute for System Neuroscience U1106, Aix-Marseille Université, F-13005 Marseille, France; University of Strasbourg Institute for Advanced Studies (USIAS), F-67000 Strasbourg, France.
| |
21
Puri P, Wu ST, Su CY, Aljadeff J. Shallow networks run deep: Peripheral preprocessing facilitates odor classification. bioRxiv 2023:2023.07.23.550211. PMID: 37546820; PMCID: PMC10401955; DOI: 10.1101/2023.07.23.550211.
Abstract
The mammalian brain implements sophisticated sensory processing algorithms along multilayered ('deep') neural-networks. Strategies that insects use to meet similar computational demands, while relying on smaller nervous systems with shallow architectures, remain elusive. Using Drosophila as a model, we uncover the algorithmic role of odor preprocessing by a shallow network of compartmentalized olfactory receptor neurons. Each compartment operates as a ratiometric unit for specific odor-mixtures. This computation arises from a simple mechanism: electrical coupling between two differently-sized neurons. We demonstrate that downstream synaptic connectivity is shaped to optimally leverage amplification of a hedonic value signal in the periphery. Furthermore, peripheral preprocessing is shown to markedly improve novel odor classification in a higher brain center. Together, our work highlights a far-reaching functional role of the sensory periphery for downstream processing. By elucidating the implementation of powerful computations by a shallow network, we provide insights into general principles of efficient sensory processing algorithms.
Affiliation(s)
- Palka Puri
- Department of Physics, University of California San Diego, La Jolla, CA, 92093, USA
| | - Shiuan-Tze Wu
- Department of Neurobiology, University of California San Diego, La Jolla, CA, 92093, USA
| | - Chih-Ying Su
- Department of Neurobiology, University of California San Diego, La Jolla, CA, 92093, USA
| | - Johnatan Aljadeff
- Department of Neurobiology, University of California San Diego, La Jolla, CA, 92093, USA
| |
22
Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. bioRxiv 2023:2023.05.26.542509. PMID: 37398400; PMCID: PMC10312539; DOI: 10.1101/2023.05.26.542509.
Abstract
Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical subspace identification method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and spike-LFP population activity recorded during a naturalistic reach and grasp behavior. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower computational cost while being better in identifying the dynamical modes and having a better or similar accuracy in predicting neural activity. 
Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest.
23
Yiling Y, Shapcott K, Peter A, Klon-Lipok J, Xuhui H, Lazar A, Singer W. Robust encoding of natural stimuli by neuronal response sequences in monkey visual cortex. Nat Commun 2023; 14:3021. PMID: 37231014; DOI: 10.1038/s41467-023-38587-2.
Abstract
Parallel multisite recordings in the visual cortex of trained monkeys revealed that the responses of spatially distributed neurons to natural scenes are ordered in sequences. The rank order of these sequences is stimulus-specific and maintained even if the absolute timing of the responses is modified by manipulating stimulus parameters. The stimulus specificity of these sequences was highest when they were evoked by natural stimuli and deteriorated for stimulus versions in which certain statistical regularities were removed. This suggests that the response sequences result from a matching operation between sensory evidence and priors stored in the cortical network. Decoders trained on sequence order performed as well as decoders trained on rate vectors but the former could decode stimulus identity from considerably shorter response intervals than the latter. A simulated recurrent network reproduced similarly structured stimulus-specific response sequences, particularly once it was familiarized with the stimuli through non-supervised Hebbian learning. We propose that recurrent processing transforms signals from stationary visual scenes into sequential responses whose rank order is the result of a Bayesian matching operation. If this temporal code were used by the visual system it would allow for ultrafast processing of visual scenes.
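The decoding comparison described above (rank order versus rate vectors) can be illustrated with a toy rank-order decoder; this is my own construction on synthetic latencies, not the authors' analysis, and the jitter model and parameters are assumptions.

```python
import numpy as np

def rank_corr(a, b):
    """Spearman rank correlation between two response-latency vectors."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
n_units, n_stim, n_trials = 20, 4, 50
# each stimulus evokes a characteristic order in which units respond
templates = np.array([rng.permutation(n_units) for _ in range(n_stim)], float)
# single trials: absolute latencies jitter, but the rank order largely survives
trials = np.vstack([templates[s] + rng.normal(0, 2.0, n_units)
                    for s in range(n_stim) for _ in range(n_trials)])
labels = np.repeat(np.arange(n_stim), n_trials)
# decode each trial as the template with the highest rank correlation
pred = np.array([np.argmax([rank_corr(tr, t) for t in templates])
                 for tr in trials])
accuracy = (pred == labels).mean()
```

Because the decoder uses only the ordering, it is insensitive to the absolute-timing manipulations the abstract describes, which is the property that makes sequence order a robust code.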
Affiliation(s)
- Yang Yiling
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
- International Max Planck Research School (IMPRS) for Neural Circuits, 60438, Frankfurt am Main, Germany
- Faculty of Biological Sciences, Goethe-University Frankfurt am Main, 60438, Frankfurt am Main, Germany
| | - Katharine Shapcott
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
- Frankfurt Institute for Advanced Studies, 60438, Frankfurt am Main, Germany
| | - Alina Peter
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
- International Max Planck Research School (IMPRS) for Neural Circuits, 60438, Frankfurt am Main, Germany
- Faculty of Biological Sciences, Goethe-University Frankfurt am Main, 60438, Frankfurt am Main, Germany
| | - Johanna Klon-Lipok
- Max Planck Institute for Brain Research, 60438, Frankfurt am Main, Germany
| | - Huang Xuhui
- Intelligent Science and Technology Academy, China Aerospace Science and Industry Corporation (CASIC), 100144, Beijing, China
- Institute of Automation, Chinese Academy of Sciences, 100190, Beijing, China
| | - Andreea Lazar
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
| | - Wolf Singer
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany.
- Frankfurt Institute for Advanced Studies, 60438, Frankfurt am Main, Germany.
- Max Planck Institute for Brain Research, 60438, Frankfurt am Main, Germany.
| |
24
Naik S, Dehaene-Lambertz G, Battaglia D. Repairing Artifacts in Neural Activity Recordings Using Low-Rank Matrix Estimation. Sensors (Basel) 2023; 23:4847. PMID: 37430760; DOI: 10.3390/s23104847.
Abstract
Electrophysiology recordings are frequently affected by artifacts (e.g., subject motion or eye movements), which reduces the number of available trials and affects the statistical power. When artifacts are unavoidable and data are scarce, signal reconstruction algorithms that allow for the retention of sufficient trials become crucial. Here, we present one such algorithm that makes use of large spatiotemporal correlations in neural signals and solves the low-rank matrix completion problem, to fix artifactual entries. The method uses a gradient descent algorithm in lower dimensions to learn the missing entries and provide faithful reconstruction of signals. We carried out numerical simulations to benchmark the method and estimate optimal hyperparameters for actual EEG data. The fidelity of reconstruction was assessed by detecting event-related potentials (ERP) from a highly artifacted EEG time series from human infants. The proposed method significantly improved the standardized error of the mean in ERP group analysis and a between-trial variability analysis compared to a state-of-the-art interpolation technique. This improvement increased the statistical power and revealed significant effects that would have been deemed insignificant without reconstruction. The method can be applied to any time-continuous neural signal where artifacts are sparse and spread out across epochs and channels, increasing data retention and statistical power.
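The core reconstruction idea, treating artifacted entries as missing values of an approximately low-rank spatiotemporal matrix and learning them by gradient descent in a low-dimensional factorization, can be sketched as follows. This is a minimal sketch on synthetic data; the plain factored-gradient-descent variant, hyperparameters, and function name are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def complete_low_rank(X, mask, rank, lr=0.01, n_iter=2000, seed=0):
    """Fill entries where mask is False by gradient descent on a
    rank-`rank` factorization U @ V.T, fit to observed entries only."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = 0.1 * rng.normal(size=(n, rank))
    V = 0.1 * rng.normal(size=(m, rank))
    for _ in range(n_iter):
        R = (U @ V.T - X) * mask                     # residual on clean entries
        U, V = U - lr * (R @ V), V - lr * (R.T @ U)  # simultaneous GD step
    return np.where(mask, X, U @ V.T)  # keep clean samples, repair the rest

# toy check: a rank-2 "recording" with ~20% of entries artifacted
rng = np.random.default_rng(42)
A, B = rng.normal(size=(30, 2)), rng.normal(size=(30, 2))
X_true = A @ B.T
mask = rng.random(X_true.shape) < 0.8        # True = clean entry
X_repaired = complete_low_rank(X_true * mask, mask, rank=2)
```

Because the factorization has far fewer parameters than the matrix has observed entries, fitting only the clean entries constrains the missing ones, which is what makes this usable when artifacts are sparse and spread out across epochs and channels.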
Affiliation(s)
- Shruti Naik
- Cognitive Neuroimaging Unit, Centre National de la Recherche Scientifique (CNRS), Institut National de la Santé et de la Recherche Médicale (INSERM), CEA, Université Paris-Saclay, NeuroSpin Center, F-91190 Gif-sur-Yvette, France
| | - Ghislaine Dehaene-Lambertz
- Cognitive Neuroimaging Unit, Centre National de la Recherche Scientifique (CNRS), Institut National de la Santé et de la Recherche Médicale (INSERM), CEA, Université Paris-Saclay, NeuroSpin Center, F-91190 Gif-sur-Yvette, France
| | - Demian Battaglia
- Institut de Neurosciences des Systèmes, U1106, Centre National de la Recherche Scientifique (CNRS) Aix-Marseille Université, F-13005 Marseille, France
- Institute for Advanced Studies, University of Strasbourg, (USIAS), F-67000 Strasbourg, France
| |
25
Affiliation(s)
- Max Dabagia
- School of Computer Science, Georgia Institute of Technology, Atlanta, GA, USA
- Konrad P Kording
- Department of Biomedical Engineering, University of Pennsylvania, Philadelphia, PA, USA
- Eva L Dyer
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
26
Mitchell-Heggs R, Prado S, Gava GP, Go MA, Schultz SR. Neural manifold analysis of brain circuit dynamics in health and disease. J Comput Neurosci 2023; 51:1-21. PMID: 36522604; PMCID: PMC9840597; DOI: 10.1007/s10827-022-00839-3.
Abstract
Recent developments in experimental neuroscience make it possible to simultaneously record the activity of thousands of neurons. However, the development of analysis approaches for such large-scale neural recordings has been slower than that of approaches applicable to single-cell experiments. One approach that has gained recent popularity is neural manifold learning. This approach takes advantage of the fact that, even though neural datasets may be very high dimensional, the dynamics of neural activity tend to traverse a much lower-dimensional space. The topological structures formed by these low-dimensional neural subspaces are referred to as "neural manifolds", and may provide insight linking neural circuit dynamics with cognitive function and behavioral performance. In this paper we review a number of linear and nonlinear approaches to neural manifold learning, including principal component analysis (PCA), multi-dimensional scaling (MDS), Isomap, locally linear embedding (LLE), Laplacian eigenmaps (LEM), t-SNE, and uniform manifold approximation and projection (UMAP). We outline these methods under a common mathematical nomenclature and compare their advantages and disadvantages with respect to their use for neural data analysis. We apply them to a number of datasets from the published literature, comparing the manifolds that result from their application to hippocampal place cells, motor cortical neurons during a reaching task, and prefrontal cortical neurons during a multi-behavior task. We find that in many circumstances linear algorithms produce results similar to those of nonlinear methods, although in particular cases where the behavioral complexity is greater, nonlinear methods tend to find lower-dimensional manifolds, at a possible expense of interpretability. We demonstrate that these methods are applicable to the study of neurological disorders through simulation of a mouse model of Alzheimer's disease, and speculate that neural manifold analysis may help us to understand the circuit-level consequences of molecular and cellular neuropathology.
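As a minimal illustration of the linear end of the toolbox reviewed above, the following sketch runs PCA (via SVD) on synthetic population activity driven by a 2-D latent trajectory; the data, dimensions, and noise level are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "population activity": 100 neurons driven by a 2-D latent
# trajectory (a noisy circle), mimicking a low-dimensional neural manifold.
T = 500
theta = np.linspace(0, 4 * np.pi, T)
latents = np.column_stack([np.cos(theta), np.sin(theta)])      # (T, 2)
mixing = rng.standard_normal((2, 100))                         # latents -> neurons
X = latents @ mixing + 0.1 * rng.standard_normal((T, 100))     # (T, 100)

# PCA by SVD of the mean-centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
embedding = Xc @ Vt[:2].T    # projection onto the top-2 PC "neural manifold"
```

With truly 2-D latent structure, the top two components capture nearly all variance; the nonlinear methods reviewed in the paper matter when the manifold is curved rather than a linear subspace.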
Affiliation(s)
- Rufus Mitchell-Heggs
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ, United Kingdom
- Centre for Discovery Brain Sciences, The University of Edinburgh, Edinburgh, EH8 9XD, United Kingdom
- Seigfred Prado
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ, United Kingdom
- Department of Electronics Engineering, University of Santo Tomas, Manila, Philippines
- Giuseppe P. Gava
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ, United Kingdom
- Mary Ann Go
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ, United Kingdom
- Simon R. Schultz
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ, United Kingdom

27
Galgali AR, Sahani M, Mante V. Residual dynamics resolves recurrent contributions to neural computation. Nat Neurosci 2023; 26:326-338. [PMID: 36635498 DOI: 10.1038/s41593-022-01230-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2021] [Accepted: 11/08/2022] [Indexed: 01/14/2023]
Abstract
Relating neural activity to behavior requires an understanding of how neural computations arise from the coordinated dynamics of distributed, recurrently connected neural populations. However, inferring the nature of recurrent dynamics from partial recordings of a neural circuit presents considerable challenges. Here we show that some of these challenges can be overcome by a fine-grained analysis of the dynamics of neural residuals-that is, trial-by-trial variability around the mean neural population trajectory for a given task condition. Residual dynamics in macaque prefrontal cortex (PFC) in a saccade-based perceptual decision-making task reveals recurrent dynamics that is time dependent, but consistently stable, and suggests that pronounced rotational structure in PFC trajectories during saccades is driven by inputs from upstream areas. The properties of residual dynamics restrict the possible contributions of PFC to decision-making and saccade generation and suggest a path toward fully characterizing distributed neural computations with large-scale neural recordings and targeted causal perturbations.
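The core residual-dynamics idea, regressing trial-by-trial deviations from the condition-mean trajectory at one time step onto the next, can be caricatured on simulated data. This is a bare-bones sketch with fabricated ground truth, not the authors' estimator (which must also handle partial observation of the circuit):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, T, dim = 200, 50, 3
A_true = 0.9 * np.eye(dim)    # stable ground-truth residual dynamics
A_true[0, 1] = 0.2            # a little shear, eigenvalues still inside unit circle

# Simulate trials: shared condition-mean trajectory plus propagated residuals.
mean_traj = 0.1 * np.cumsum(rng.standard_normal((T, dim)), axis=0)
X = np.empty((n_trials, T, dim))
for k in range(n_trials):
    r = rng.standard_normal(dim)
    for t in range(T):
        X[k, t] = mean_traj[t] + r
        r = A_true @ r + 0.1 * rng.standard_normal(dim)

# Residuals = trial-by-trial deviation from the across-trial mean trajectory.
resid = X - X.mean(axis=0)                  # (n_trials, T, dim)

# Time-resolved least-squares fit of resid[t+1] ~ A_t @ resid[t].
A_hat = np.empty((T - 1, dim, dim))
for t in range(T - 1):
    W, *_ = np.linalg.lstsq(resid[:, t], resid[:, t + 1], rcond=None)
    A_hat[t] = W.T
```

The eigenvalues of the fitted matrices are what distinguish stable, unstable, or rotational recurrent dynamics in this kind of analysis.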
Affiliation(s)
- Aniruddh R Galgali
- Institute of Neuroinformatics, University of Zurich & ETH Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich & ETH Zurich, Zurich, Switzerland
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Valerio Mante
- Institute of Neuroinformatics, University of Zurich & ETH Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich & ETH Zurich, Zurich, Switzerland

28
Koh TH, Bishop WE, Kawashima T, Jeon BB, Srinivasan R, Mu Y, Wei Z, Kuhlman SJ, Ahrens MB, Chase SM, Yu BM. Dimensionality reduction of calcium-imaged neuronal population activity. NATURE COMPUTATIONAL SCIENCE 2023; 3:71-85. [PMID: 37476302 PMCID: PMC10358781 DOI: 10.1038/s43588-022-00390-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 12/05/2022] [Indexed: 07/22/2023]
Abstract
Calcium imaging has been widely adopted for its ability to record from large neuronal populations. To summarize the time course of neural activity, dimensionality reduction methods, which have been applied extensively to population spiking activity, may be particularly useful. However, it is unclear if the dimensionality reduction methods applied to spiking activity are appropriate for calcium imaging. We thus carried out a systematic study of design choices based on standard dimensionality reduction methods. We also developed a method to perform deconvolution and dimensionality reduction simultaneously (Calcium Imaging Linear Dynamical System, CILDS). CILDS most accurately recovered the single-trial, low-dimensional time courses from simulated calcium imaging data. CILDS also outperformed the other methods on calcium imaging recordings from larval zebrafish and mice. More broadly, this study represents a foundation for summarizing calcium imaging recordings of large neuronal populations using dimensionality reduction in diverse experimental settings.
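For intuition about why the calcium indicator matters before dimensionality reduction, here is a toy version of a two-stage baseline (invert an AR(1) calcium kernel, then PCA). This is not CILDS itself, which performs deconvolution and dimensionality reduction jointly; the data, decay constant, and dimensions are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, gamma = 400, 50, 0.9      # gamma: per-bin calcium decay

# A 1-D latent drives spiking-like activity in all neurons.
latent = np.abs(np.sin(np.linspace(0, 6 * np.pi, T)))            # (T,)
rates = np.outer(latent, rng.uniform(0.5, 1.5, n_neurons))
activity = rates + 0.05 * rng.standard_normal((T, n_neurons))

# Calcium dynamics: c[t] = gamma * c[t-1] + activity[t]  (AR(1) smoothing).
calcium = np.empty_like(activity)
calcium[0] = activity[0]
for t in range(1, T):
    calcium[t] = gamma * calcium[t - 1] + activity[t]

# Two-stage baseline: invert the AR(1) filter, then PCA the result.
deconv = calcium.copy()
deconv[1:] -= gamma * calcium[:-1]      # exact inverse of the AR(1) step
Xc = deconv - deconv.mean(axis=0)
_, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                        # recovered single-trial time course
```

With a known, noise-free kernel the inversion is exact; the paper's point is that with realistic noise and unknown kernels, doing both steps jointly recovers the latent time courses more faithfully.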
Affiliation(s)
- Tze Hui Koh
- Department of Biomedical Engineering, Carnegie Mellon University, PA
- Center for the Neural Basis of Cognition, PA
- William E. Bishop
- Center for the Neural Basis of Cognition, PA
- Department of Machine Learning, Carnegie Mellon University, PA
- Janelia Research Campus, Howard Hughes Medical Institute, VA
- Takashi Kawashima
- Janelia Research Campus, Howard Hughes Medical Institute, VA
- Department of Brain Sciences, Weizmann Institute of Science, Israel
- Brian B. Jeon
- Department of Biomedical Engineering, Carnegie Mellon University, PA
- Center for the Neural Basis of Cognition, PA
- Ranjani Srinivasan
- Department of Biomedical Engineering, Carnegie Mellon University, PA
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD
- Yu Mu
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, China
- Ziqiang Wei
- Janelia Research Campus, Howard Hughes Medical Institute, VA
- Sandra J. Kuhlman
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, PA
- Department of Biological Sciences, Carnegie Mellon University, PA
- Misha B. Ahrens
- Janelia Research Campus, Howard Hughes Medical Institute, VA
- Steven M. Chase
- Department of Biomedical Engineering, Carnegie Mellon University, PA
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, PA
- Byron M. Yu
- Department of Biomedical Engineering, Carnegie Mellon University, PA
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, PA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, PA

29
Farnum A, Parnas M, Hoque Apu E, Cox E, Lefevre N, Contag CH, Saha D. Harnessing insect olfactory neural circuits for detecting and discriminating human cancers. Biosens Bioelectron 2023; 219:114814. [PMID: 36327558 DOI: 10.1016/j.bios.2022.114814] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 10/04/2022] [Accepted: 10/11/2022] [Indexed: 11/06/2022]
Abstract
There is overwhelming evidence that the presence of cancer alters cellular metabolic processes, and these changes are manifested in the emitted volatile organic compound (VOC) compositions of cancer cells. Here, we take a novel forward-engineering approach by developing an insect olfactory neural circuit-based VOC sensor for cancer detection. We obtained oral cancer cell culture VOC-evoked extracellular neural responses from in vivo insect (locust) antennal lobe neurons. We employed the biological neural computations of the antennal lobe circuitry to generate spatiotemporal neuronal response templates corresponding to each cell culture VOC mixture, and used these templates to distinguish oral cancer cell lines (SAS, Ca9-22, and HSC-3) from a non-cancer cell line (HaCaT). Our results demonstrate that three different human oral cancers can be robustly distinguished from each other and from a non-cancer oral cell line. By using high-dimensional population neuronal response analysis and a leave-one-trial-out methodology, our approach yielded high classification success for each cell line tested. Our analyses achieved 76-100% success in identifying cell lines using the population neural response (n = 194) collected for the entire duration of the cell culture study. We also demonstrate that this cancer detection technique can distinguish between different types of oral cancers and non-cancer at time-matched points of growth. This brain-based cancer detection approach is fast, differentiating between VOC mixtures within 250 ms of stimulus onset. Our system comprises a novel VOC sensing methodology that incorporates an entire biological chemosensory array, biological signal transduction, and neuronal computations in the form of forward-engineered technology for cancer VOC detection.
Affiliation(s)
- Alexander Farnum
- Department of Biomedical Engineering and the Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI, USA
- Michael Parnas
- Department of Biomedical Engineering and the Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI, USA
- Ehsanul Hoque Apu
- Department of Biomedical Engineering and the Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI, USA; Division of Hematology and Oncology, Department of Internal Medicine, Michigan Medicine, University of Michigan, Ann Arbor, MI, 48108, USA
- Elyssa Cox
- Department of Biomedical Engineering and the Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI, USA
- Noël Lefevre
- Department of Biomedical Engineering and the Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI, USA
- Christopher H Contag
- Department of Biomedical Engineering and the Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI, USA; Department of Microbiology and Molecular Genetics, Michigan State University, East Lansing, MI, USA
- Debajit Saha
- Department of Biomedical Engineering and the Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI, USA

30
Patel M, Kulkarni N, Lei HH, Lai K, Nematova O, Wei K, Lei H. Experimental and theoretical probe on mechano- and chemosensory integration in the insect antennal lobe. Front Physiol 2022; 13:1004124. [PMID: 36406994 PMCID: PMC9667105 DOI: 10.3389/fphys.2022.1004124] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Accepted: 10/19/2022] [Indexed: 11/06/2022] Open
Abstract
In nature, olfactory signals are delivered to detectors (for example, insect antennae) by means of turbulent air, which exerts concurrent chemical and mechanical stimulation on the detectors. The antennal lobe, which is traditionally viewed as a chemosensory module, sits downstream of antennal inputs. We review experimental evidence showing that, in addition to being a chemosensory structure, antennal lobe neurons also respond to mechanosensory input in the form of wind speed. Benchmarked against empirical data, we constructed a dynamical model to simulate bimodal integration in the antennal lobe; the model dynamics yield insights such as a positive correlation between the strength of mechanical input and the capacity to follow high-frequency odor pulses, an important task in tracking odor sources. Furthermore, we combine experimental and theoretical results to develop a conceptual framework for viewing the functional significance of sensory integration within the antennal lobe. We formulate the testable hypothesis that the antennal lobe alternates between two distinct dynamical regimes, one that benefits odor plume tracking and one that promotes odor discrimination. We postulate that the strength of mechanical input, which correlates with behavioral contexts such as being mid-flight versus hovering near a flower, triggers the transition from one regime to the other.
Affiliation(s)
- Mainak Patel
- Department of Mathematics, College of William & Mary, Williamsburg, VA, United States
- Nisha Kulkarni
- School of Life Sciences, Arizona State University, Tempe, AZ, United States
- Harry H. Lei
- School of Life Sciences, Arizona State University, Tempe, AZ, United States
- Kaitlyn Lai
- School of Life Sciences, Arizona State University, Tempe, AZ, United States
- Omina Nematova
- School of Life Sciences, Arizona State University, Tempe, AZ, United States
- Katherine Wei
- School of Life Sciences, Arizona State University, Tempe, AZ, United States
- Hong Lei
- School of Life Sciences, Arizona State University, Tempe, AZ, United States

31
Gonzalez-Suarez AD, Zavatone-Veth JA, Chen J, Matulis CA, Badwan BA, Clark DA. Excitatory and inhibitory neural dynamics jointly tune motion detection. Curr Biol 2022; 32:3659-3675.e8. [PMID: 35868321 PMCID: PMC9474608 DOI: 10.1016/j.cub.2022.06.075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Revised: 05/03/2022] [Accepted: 06/24/2022] [Indexed: 11/26/2022]
Abstract
Neurons integrate excitatory and inhibitory signals to produce their outputs, but the role of input timing in this integration remains poorly understood. Motion detection is a paradigmatic example of this integration, since theories of motion detection rely on different delays in visual signals. These delays allow circuits to compare scenes at different times to calculate the direction and speed of motion. Different motion detection circuits have different velocity sensitivity, but it remains untested how the response dynamics of individual cell types drive this tuning. Here, we sped up or slowed down specific neuron types in Drosophila's motion detection circuit by manipulating ion channel expression. Altering the dynamics of individual neuron types upstream of motion detectors increased their sensitivity to fast or slow visual motion, exposing distinct roles for excitatory and inhibitory dynamics in tuning directional signals, including a role for the amacrine cell CT1. A circuit model constrained by functional data and anatomy qualitatively reproduced the observed tuning changes. Overall, these results reveal how excitatory and inhibitory dynamics together tune a canonical circuit computation.
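The delay-and-compare logic that motion-detection theories rely on can be illustrated with the classic Hassenstein-Reichardt correlator. This is a textbook model, not the specific circuit model constrained in this study, and all parameters below are arbitrary:

```python
import numpy as np

def lowpass(x, tau):
    """First-order low-pass filter (leaky integrator), acting as a delay line."""
    y = np.zeros_like(x)
    a = 1.0 / tau
    for t in range(1, len(x)):
        y[t] = y[t - 1] + a * (x[t] - y[t - 1])
    return y

def hrc_response(stimulus, tau=10):
    """Mean Hassenstein-Reichardt correlator output for a (time, 2-point) stimulus.
    Each arm multiplies a delayed signal from one point with the undelayed
    signal from its neighbor; the mirror-symmetric arms are subtracted."""
    left, right = stimulus[:, 0], stimulus[:, 1]
    out = lowpass(left, tau) * right - left * lowpass(right, tau)
    return out.mean()

# Drifting sinusoid sampled at two nearby points; the phase offset sets direction.
t = np.arange(2000)
phase = 2 * np.pi * t / 50.0
rightward = np.column_stack([np.sin(phase), np.sin(phase - 0.5)])
leftward = np.column_stack([np.sin(phase), np.sin(phase + 0.5)])
```

Changing `tau` shifts which motion speeds produce the strongest output, which is a toy analog of the paper's manipulation of neuronal response dynamics to retune velocity sensitivity.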
Affiliation(s)
- Jacob A Zavatone-Veth
- Department of Physics, Harvard University, Cambridge, MA 02138, USA; Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
- Juyue Chen
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Bara A Badwan
- School of Engineering and Applied Science, Yale University, New Haven, CT 06511, USA
- Damon A Clark
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA; Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA

32
Krishnamurthy K, Hermundstad AM, Mora T, Walczak AM, Balasubramanian V. Disorder and the Neural Representation of Complex Odors. Front Comput Neurosci 2022; 16:917786. [PMID: 36003684 PMCID: PMC9393645 DOI: 10.3389/fncom.2022.917786] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Accepted: 05/17/2022] [Indexed: 11/25/2022] Open
Abstract
Animals smelling in the real world use a small number of receptors to sense a vast number of natural molecular mixtures, and proceed to learn arbitrary associations between odors and valences. Here, we propose how the architecture of olfactory circuits leverages disorder, diffuse sensing and redundancy in representation to meet these immense complementary challenges. First, the diffuse and disordered binding of receptors to many molecules compresses a vast but sparsely-structured odor space into a small receptor space, yielding an odor code that preserves similarity in a precise sense. Introducing any order/structure in the sensing degrades similarity preservation. Next, lateral interactions further reduce the correlation present in the low-dimensional receptor code. Finally, expansive disordered projections from the periphery to the central brain reconfigure the densely packed information into a high-dimensional representation, which contains multiple redundant subsets from which downstream neurons can learn flexible associations and valences. Moreover, introducing any order in the expansive projections degrades the ability to recall the learned associations in the presence of noise. We test our theory empirically using data from Drosophila. Our theory suggests that the neural processing of sparse but high-dimensional olfactory information differs from the other senses in its fundamental use of disorder.
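The similarity-preservation claim for disordered, diffuse sensing is essentially a random-projection argument, and can be checked numerically on synthetic sparse "odors". The dimensions, sparsity, and receptor count below are invented for the sketch and are not fit to fly data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_odors, n_molecules, n_receptors = 200, 1000, 50

# Sparse, high-dimensional "odors": each contains a small subset of molecules.
odors = (rng.random((n_odors, n_molecules)) < 0.01).astype(float)

# Disordered, diffuse sensing: every receptor binds many molecules with
# random weights (a dense random projection into receptor space).
W = rng.standard_normal((n_molecules, n_receptors)) / np.sqrt(n_receptors)
codes = odors @ W                        # (n_odors, n_receptors)

def pairwise_sq_dists(X):
    sq = np.sum(X**2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * X @ X.T

# Compare odor-odor distances before and after the disordered compression.
iu = np.triu_indices(n_odors, k=1)
d_hi = pairwise_sq_dists(odors)[iu]
d_lo = pairwise_sq_dists(codes)[iu]
corr = np.corrcoef(d_hi, d_lo)[0, 1]
```

The positive correlation between high- and low-dimensional distances is the "similarity preservation" in the abstract; imposing structure on `W` (e.g., making receptors selective for disjoint molecule blocks) degrades it.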
Affiliation(s)
- Kamesh Krishnamurthy
- Joseph Henry Laboratories of Physics and Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States
- Ann M. Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Thierry Mora
- Laboratoire de Physique Statistique, UMR8550, CNRS, UPMC and École Normale Supérieure, Paris, France
- Aleksandra M. Walczak
- Laboratoire de Physique Théorique, UMR8549, CNRS, UPMC and École Normale Supérieure, Paris, France
- Vijay Balasubramanian
- David Rittenhouse and Richards Laboratories, University of Pennsylvania, Philadelphia, PA, United States

33
Hennig MH. The sloppy relationship between neural circuit structure and function. J Physiol 2022. [PMID: 35876720 DOI: 10.1113/jp282757] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 07/20/2022] [Indexed: 11/08/2022] Open
Abstract
Investigating and describing the relationships between the structure of a circuit and its function has a long tradition in neuroscience. Since neural circuits acquire their structure through sophisticated developmental programmes, and memories and experiences are maintained through synaptic modification, it is to be expected that structure is closely linked to function. Recent findings challenge this hypothesis from three different angles: function does not strongly constrain circuit parameters, many parameters in neural circuits are irrelevant and contribute little to function, and circuit parameters are unstable and subject to constant random drift. At the same time, however, recent work has also shown that dynamics in neural circuit activity related to function are robust over time and across individuals. Here this apparent contradiction is addressed by considering the properties of neural manifolds, which restrict circuit activity to functionally relevant subspaces, and it is suggested that degenerate, anisotropic and unstable parameter spaces are closely related to the structure and implementation of functionally relevant neural manifolds. Abstract figure legend: What are the relationships between noisy and highly variable microscopic neural circuit variables on the one hand and the generation of behaviour on the other? Here it is proposed that an intermediate level of description exists where this relationship can be understood in terms of low-dimensional dynamics. Recordings of neural activity during unconstrained behaviour and the development of new machine learning methods will help to uncover these links.
Affiliation(s)
- Matthias H Hennig
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh

34
Oscillations and variability in neuronal systems: interplay of autonomous transient dynamics and fast deterministic fluctuations. J Comput Neurosci 2022; 50:331-355. [PMID: 35653072 DOI: 10.1007/s10827-022-00819-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Revised: 02/03/2022] [Accepted: 03/14/2022] [Indexed: 10/18/2022]
Abstract
Neuronal systems are subject to rapid fluctuations, both intrinsic and external. These fluctuations can be disruptive or constructive. We investigate the dynamic mechanisms underlying the interactions between rapidly fluctuating signals and the intrinsic properties of the target cells that produce variable and/or coherent responses. We use linearized and nonlinear conductance-based models and piecewise constant (PWC) inputs with short-duration pieces. The amplitude distributions of the constant pieces consist of arbitrary permutations of a baseline PWC function. In each trial within a given protocol we use one of these permutations, and each protocol consists of a subset of all possible permutations, which is the only source of uncertainty in the protocol. We show that sustained oscillatory behavior can be generated in response to various forms of PWC inputs independently of whether the stable equilibria of the corresponding unperturbed systems are foci or nodes. The oscillatory voltage responses are amplified by the model nonlinearities and attenuated for conductance-based PWC inputs as compared to current-based PWC inputs, consistent with previous theoretical and experimental work. In addition, the voltage responses to PWC inputs exhibit variability across trials, reminiscent of the variability generated by stochastic noise (e.g., Gaussian white noise). Our analysis demonstrates that both the oscillations and the variability result from the interaction between the PWC input and the target cell's autonomous transient dynamics, with little to no contribution from the dynamics in the vicinity of the steady state, and do not require input stochasticity.
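A linearized caricature of the protocol described here shows how trial-to-trial variability can arise deterministically from permutations of the same piecewise-constant amplitudes. This two-dimensional linear system with made-up parameters stands in for the conductance-based models in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stable 2-D linear "neuron" with a node (two real negative eigenvalues).
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
dt, n_pieces, piece_len = 0.01, 100, 20         # 100 pieces, 0.2 time units each
baseline = np.linspace(-1.0, 1.0, n_pieces)     # baseline PWC amplitude set

def run_trial(amplitudes):
    """Forward-Euler integration of dx/dt = A x + amp(t) * [1, 1]."""
    x = np.zeros(2)
    v = np.empty(len(amplitudes) * piece_len)
    i = 0
    for amp in amplitudes:
        for _ in range(piece_len):
            x = x + dt * (A @ x + amp * np.ones(2))
            v[i] = x[0]                          # "voltage" = first coordinate
            i += 1
    return v

# Trials differ only in the random permutation of the same amplitudes,
# which is the only source of uncertainty, as in the paper's protocols.
trials = np.array([run_trial(rng.permutation(baseline)) for _ in range(50)])
across_trial_sd = trials.std(axis=0)             # variability from permutations alone
```

Even though every trial uses identical amplitude values and the system is deterministic, the responses fluctuate persistently and differ across trials, driven entirely by the transient dynamics between pieces.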
35
Miller CT, Gire D, Hoke K, Huk AC, Kelley D, Leopold DA, Smear MC, Theunissen F, Yartsev M, Niell CM. Natural behavior is the language of the brain. Curr Biol 2022; 32:R482-R493. [PMID: 35609550 PMCID: PMC10082559 DOI: 10.1016/j.cub.2022.03.031] [Citation(s) in RCA: 54] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
The breadth and complexity of natural behaviors inspires awe. Understanding how our perceptions, actions, and internal thoughts arise from evolved circuits in the brain has motivated neuroscientists for generations. Researchers have traditionally approached this question by focusing on stereotyped behaviors, either natural or trained, in a limited number of model species. This approach has allowed for the isolation and systematic study of specific brain operations, which has greatly advanced our understanding of the circuits involved. At the same time, the emphasis on experimental reductionism has left most aspects of the natural behaviors that have shaped the evolution of the brain largely unexplored. However, emerging technologies and analytical tools make it possible to comprehensively link natural behaviors to neural activity across a broad range of ethological contexts and timescales, heralding new modes of neuroscience focused on natural behaviors. Here we describe a three-part roadmap that aims to leverage the wealth of behaviors in their naturally occurring distributions, linking their variance with that of underlying neural processes to understand how the brain is able to successfully navigate the everyday challenges of animals' social and ecological landscapes. To achieve this aim, experimenters must harness one challenge faced by all neurobiological systems, namely variability, in order to gain new insights into the language of the brain.
Affiliation(s)
- Cory T Miller
- Cortical Systems and Behavior Laboratory, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92039, USA
- David Gire
- Department of Psychology, University of Washington, Guthrie Hall, Seattle, WA 98105, USA
- Kim Hoke
- Department of Biology, Colorado State University, 1878 Campus Delivery, Fort Collins, CO 80523, USA
- Alexander C Huk
- Center for Perceptual Systems, Departments of Neuroscience and Psychology, University of Texas at Austin, 116 Inner Campus Drive, Austin, TX 78712, USA
- Darcy Kelley
- Department of Biological Sciences, Columbia University, 1212 Amsterdam Avenue, New York, NY 10027, USA
- David A Leopold
- Section of Cognitive Neurophysiology and Imaging, National Institute of Mental Health, 49 Convent Drive, Bethesda, MD 20892, USA
- Matthew C Smear
- Department of Psychology and Institute of Neuroscience, University of Oregon, 1227 University Street, Eugene, OR 97403, USA
- Frederic Theunissen
- Department of Psychology, University of California Berkeley, 2121 Berkeley Way, Berkeley, CA 94720, USA
- Michael Yartsev
- Department of Bioengineering, University of California Berkeley, 306 Stanley Hall, Berkeley, CA 94720, USA
- Cristopher M Niell
- Department of Biology and Institute of Neuroscience, University of Oregon, 222 Huestis Hall, Eugene, OR 97403, USA

36
Waaga T, Agmon H, Normand VA, Nagelhus A, Gardner RJ, Moser MB, Moser EI, Burak Y. Grid-cell modules remain coordinated when neural activity is dissociated from external sensory cues. Neuron 2022; 110:1843-1856.e6. [PMID: 35385698 PMCID: PMC9235855 DOI: 10.1016/j.neuron.2022.03.011] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 01/25/2022] [Accepted: 03/09/2022] [Indexed: 11/30/2022]
Abstract
The representation of an animal’s position in the medial entorhinal cortex (MEC) is distributed across several modules of grid cells, each characterized by a distinct spatial scale. The population activity within each module is tightly coordinated and preserved across environments and behavioral states. Little is known, however, about the coordination of activity patterns across modules. We analyzed the joint activity patterns of hundreds of grid cells simultaneously recorded in animals that were foraging either in the light, when sensory cues could stabilize the representation, or in darkness, when such stabilization was disrupted. We found that the states of different modules are tightly coordinated, even in darkness, when the internal representation of position within the MEC deviates substantially from the true position of the animal. These findings suggest that internal brain mechanisms dynamically coordinate the representation of position in different modules, ensuring that they jointly encode a coherent and smooth trajectory.
Highlights:
- Hundreds of grid cells were recorded simultaneously from multiple grid modules
- Coordination between grid modules was assessed in rats that foraged in darkness
- Coordination persists despite relative drift of the represented versus true position
- This suggests that internal network mechanisms maintain inter-module coordination
Affiliation(s)
- Torgeir Waaga
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- Haggai Agmon
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Valentin A Normand
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- Anne Nagelhus
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- Richard J Gardner
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- May-Britt Moser
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- Edvard I Moser
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
- Yoram Burak
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel

37
Brinkman BAW, Yan H, Maffei A, Park IM, Fontanini A, Wang J, La Camera G. Metastable dynamics of neural circuits and networks. APPLIED PHYSICS REVIEWS 2022; 9:011313. [PMID: 35284030 PMCID: PMC8900181 DOI: 10.1063/5.0062603] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 01/31/2022] [Indexed: 05/14/2023]
Abstract
Cortical neurons emit seemingly erratic trains of action potentials or "spikes," and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which emerge spontaneously or in response to incoming activity produced by sensory inputs. In this Review, we focus on neural dynamics that is best understood as a sequence of repeated activations of a number of discrete hidden states. These transiently occupied states are termed "metastable" and have been linked to important sensory and cognitive functions. In the rodent gustatory cortex, for instance, metastable dynamics have been associated with stimulus coding, with states of expectation, and with decision making. In frontal, parietal, and motor areas of macaques, metastable activity has been related to behavioral performance, choice behavior, task difficulty, and attention. In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits. These approaches include (i) a theoretical framework based on non-equilibrium statistical physics for network dynamics; (ii) statistical approaches to extract information about metastable states from a variety of neural signals; and (iii) recent neural network approaches, informed by experimental results, to model the emergence of metastable dynamics. By discussing these topics, we aim to provide a cohesive view of how transitions between different states of activity may provide the neural underpinnings for essential functions such as perception, memory, expectation, or decision making, and more generally, how the study of metastable neural activity may advance our understanding of neural circuit function in health and disease.
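A minimal generative picture of metastable activity, discrete hidden states that are each a fixed population firing-rate pattern with long dwell times, can be simulated directly. All rates, dwell parameters, and dimensions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_neurons, T = 4, 20, 3000

# Each hidden state = a fixed firing-rate pattern across the population.
state_rates = rng.uniform(2.0, 20.0, size=(n_states, n_neurons))   # Hz

# Metastability: a Markov chain over states that switches only rarely,
# producing long dwell times punctuated by abrupt transitions.
p_stay = 0.995
states = np.empty(T, dtype=int)
states[0] = 0
for t in range(1, T):
    if rng.random() < p_stay:
        states[t] = states[t - 1]
    else:
        states[t] = rng.integers(n_states)

# Poisson spike counts in 10 ms bins, conditioned on the current state.
dt = 0.01
counts = rng.poisson(state_rates[states] * dt)                     # (T, n_neurons)

n_switches = np.sum(states[1:] != states[:-1])
mean_dwell = T / (n_switches + 1)    # average dwell time in bins
```

Fitting a hidden Markov model to `counts` would attempt to invert this process, which is the logic behind the statistical approaches to state extraction reviewed in the article.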
Affiliation(s)
- H. Yan
- State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, Jilin 130022, People's Republic of China
- J. Wang (corresponding author)
- G. La Camera (corresponding author)

38
Abstract
The smell of coffee is the same whether it is smelled in a coffee shop or grocery shop (different backgrounds), on a hot day or a cold day (different ambient conditions), after lunch or dinner (different temporal contexts), or using a deep inhalation or normal inhalation (different stimulus dynamics). This feat of pattern recognition that is still difficult to achieve in artificial chemical sensing systems is performed by most sensory systems for their survival. How is this capability achieved? We explored this issue. We found that there are two orthogonal ensembles of neurons, one activated during stimulus presence (ON neurons) and one activated after its termination (OFF neurons), and both contribute to this important computation in a complementary fashion. Invariant stimulus recognition is a challenging pattern-recognition problem that must be dealt with by all sensory systems. Since neural responses evoked by a stimulus are perturbed in a multitude of ways, how can this computational capability be achieved? We examine this issue in the locust olfactory system. We find that locusts trained in an appetitive-conditioning assay robustly recognize the trained odorant independent of variations in stimulus durations, dynamics, or history, or changes in background and ambient conditions. However, individual- and population-level neural responses vary unpredictably with many of these variations. Our results indicate that linear statistical decoding schemes, which assign positive weights to ON neurons and negative weights to OFF neurons, resolve this apparent confound between neural variability and behavioral stability. Furthermore, simplification of the decoder using only ternary weights ({+1, 0, −1}) (i.e., an “ON-minus-OFF” approach) does not compromise performance, thereby striking a fine balance between simplicity and robustness.
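The "ON-minus-OFF" decoder described above assigns positive weights to ON neurons and negative weights to OFF neurons, with ternary weights ({+1, 0, −1}) sufficing. The following is a hedged sketch of that idea on synthetic data; the neuron counts, firing rates, and noise level are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Sketch of ternary ON-minus-OFF decoding on synthetic responses.
# ON neurons respond more strongly during the trained odorant; OFF neurons
# respond more strongly after other stimuli. The decoder sums ON responses
# and subtracts OFF responses, then thresholds.
rng = np.random.default_rng(1)

n_on, n_off = 20, 20

def responses(is_trained, noise=1.0):
    on = rng.normal(5.0 if is_trained else 1.0, noise, n_on)
    off = rng.normal(1.0 if is_trained else 5.0, noise, n_off)
    return np.concatenate([on, off])

# Ternary weights: +1 for ON neurons, -1 for OFF neurons (0 would drop a neuron).
w = np.concatenate([np.ones(n_on), -np.ones(n_off)])

def decode(r, threshold=0.0):
    return r @ w > threshold  # True -> "trained odorant present"

trials_pos = [decode(responses(True)) for _ in range(200)]
trials_neg = [decode(responses(False)) for _ in range(200)]
accuracy = (np.mean(trials_pos) + 1 - np.mean(trials_neg)) / 2
print(f"ternary decoder accuracy: {accuracy:.2f}")
```

The appeal of the ternary scheme is that it needs no finely tuned weights: the sign pattern alone carries the discrimination.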
39
Parker JR, Klishko AN, Prilutsky BI, Cymbalyuk GS. Asymmetric and transient properties of reciprocal activity of antagonists during the paw-shake response in the cat. PLoS Comput Biol 2021; 17:e1009677. [PMID: 34962927 PMCID: PMC8759665 DOI: 10.1371/journal.pcbi.1009677] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Revised: 01/14/2022] [Accepted: 11/22/2021] [Indexed: 12/24/2022] Open
Abstract
Mutually inhibitory populations of neurons, half-center oscillators (HCOs), are commonly involved in the dynamics of the central pattern generators (CPGs) driving various rhythmic movements. Previously, we developed a multifunctional, multistable symmetric HCO model which produced slow locomotor-like and fast paw-shake-like activity patterns. Here, we describe asymmetric features of paw-shake responses in a symmetric HCO model and test these predictions experimentally. We considered bursting properties of the two model half-centers during transient paw-shake-like responses to short perturbations during locomotor-like activity. We found that when a current pulse was applied during the spiking phase of one half-center (call it #1), the consecutive burst durations (BDs) of that half-center increased throughout the paw-shake response, while BDs of the other half-center (#2) changed only slightly. In contrast, the consecutive interburst intervals (IBIs) of half-center #1 changed little, while IBIs of half-center #2 increased. We demonstrated that this asymmetry between the half-centers depends on the phase of the locomotor-like rhythm at which the perturbation was applied. We suggest that the fast transient response reflects functional asymmetries of slow processes that underlie the locomotor-like pattern, e.g., asymmetric levels of inactivation across the two half-centers for a slowly inactivating inward current. We compared model results with those of in-vivo paw-shake responses evoked in locomoting cats and found similar asymmetries. Electromyographic (EMG) BDs of anterior hindlimb muscles with flexor-related activity increased in consecutive paw-shake cycles, while BDs of posterior muscles with extensor-related activity did not change, and vice versa for IBIs of anterior flexors and posterior extensors. We conclude that EMG activity patterns during paw-shaking are consistent with the proposed mechanism producing transient paw-shake-like bursting patterns found in our multistable HCO model. We suggest that the described asymmetry of paw-shake responses could implicate a multifunctional CPG controlling both locomotion and paw-shaking. The existence of multifunctional central pattern generators (CPGs), circuits which control more than one rhythmic motor behavior, is an intriguing hypothesis. We suggest that the cat paw-shake response could be a transient response of the locomotor CPG. Our general prediction is that this CPG is multifunctional and, in addition to the locomotor rhythm, can generate a transient, ten-times-faster, paw-shake-like response to a stimulus. In our multistable half-center oscillator (HCO) CPG model, we applied perturbations to the locomotor pattern which resulted in a transient paw-shake-like pattern that eventually returned to the locomotor pattern. We showed that inactivation of the slow inward current that drives the locomotor rhythm produced asymmetry of the transient flexor and extensor activity in a symmetric HCO model. To test predictions from our model about the transient nature of the paw-shake response, we compared burst durations (BDs) and interburst intervals (IBIs) of the model half-centers in consecutive cycles of paw-shake-like responses with the BDs and IBIs of electromyographic (EMG) activity bursts of cat hindlimb flexors and extensors recorded during a paw-shake response. In both cases, we found similar asymmetric trends in BD and IBI throughout a paw-shake response.
Affiliation(s)
- Jessica R. Parker
- Neuroscience Institute, Georgia State University, Atlanta, Georgia, United States of America
- Alexander N. Klishko
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Boris I. Prilutsky (corresponding author)
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Gennady S. Cymbalyuk (corresponding author)
- Neuroscience Institute, Georgia State University, Atlanta, Georgia, United States of America

40
Tiraboschi E, Leonardelli L, Segata G, Haase A. Parallel Processing of Olfactory and Mechanosensory Information in the Honey Bee Antennal Lobe. Front Physiol 2021; 12:790453. [PMID: 34950059 PMCID: PMC8691435 DOI: 10.3389/fphys.2021.790453] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2021] [Accepted: 11/18/2021] [Indexed: 11/13/2022] Open
Abstract
In insects, neuronal responses to clean air have so far been reported only episodically in moths. Here we present results obtained by fast two-photon calcium imaging in the honey bee Apis mellifera, indicating a substantial involvement of the antennal lobe, the first olfactory neuropil, in the processing of mechanical stimuli. Clean air pulses generate a complex pattern of glomerular activation that provides a code for stimulus intensity and dynamics with a similar level of stereotypy as observed for the olfactory code. Overlapping the air pulses with odor stimuli reveals a superposition of mechanosensory and odor response codes with high contrast. On the mechanosensitive signal, modulations were observed in the same frequency regime as the oscillatory motion of the antennae, suggesting a possible way to detect odorless airflow directions. The transduction of mechanosensory information via the insect antennae has so far been attributed primarily to Johnston's organ in the pedicel of the antenna. The possibility that the antennal lobe activation by clean air originates from Johnston's organ could be ruled out, as the signal is suppressed by covering the surfaces of the otherwise freely moving and bending antennae, which should leave Johnston's organ unaffected. The tuning curves of individual glomeruli indicate increased sensitivity at low-frequency mechanical oscillations as produced by the abdominal motion in waggle dance communication, suggesting a further potential function of this mechanosensory code. The discovery that the olfactory system can sense both odors and mechanical stimuli has recently been made also in mammals. The results presented here give hope that studies on insects can make a fundamental contribution to the cross-taxa understanding of this dual function, as only a few thousand neurons are involved in their brains, all of which are accessible by in vivo optical imaging.
Affiliation(s)
- Ettore Tiraboschi
- Department of Physics, University of Trento, Trento, Italy; Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy
- Luana Leonardelli
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy; Department of Electrical, Electronic, and Information Engineering, University of Bologna, Bologna, Italy
- Albrecht Haase
- Department of Physics, University of Trento, Trento, Italy; Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy

41
Wu YK, Zenke F. Nonlinear transient amplification in recurrent neural networks with short-term plasticity. eLife 2021; 10:e71263. [PMID: 34895468 PMCID: PMC8820736 DOI: 10.7554/elife.71263] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Accepted: 12/10/2021] [Indexed: 11/24/2022] Open
Abstract
To rapidly process information, neural circuits have to amplify specific activity patterns transiently. How the brain performs this nonlinear operation remains elusive. Hebbian assemblies are one possibility whereby strong recurrent excitatory connections boost neuronal activity. However, such Hebbian amplification is often associated with dynamical slowing of network dynamics, non-transient attractor states, and pathological run-away activity. Feedback inhibition can alleviate these effects but typically linearizes responses and reduces amplification gain. Here, we study nonlinear transient amplification (NTA), a plausible alternative mechanism that reconciles strong recurrent excitation with rapid amplification while avoiding the above issues. NTA has two distinct temporal phases. Initially, positive feedback excitation selectively amplifies inputs that exceed a critical threshold. Subsequently, short-term plasticity quenches the run-away dynamics into an inhibition-stabilized network state. By characterizing NTA in supralinear network models, we establish that the resulting onset transients are stimulus selective and well-suited for speedy information processing. Further, we find that excitatory-inhibitory co-tuning widens the parameter regime in which NTA is possible in the absence of persistent activity. In summary, NTA provides a parsimonious explanation for how excitatory-inhibitory co-tuning and short-term plasticity collaborate in recurrent networks to achieve transient amplification.
Affiliation(s)
- Yue Kris Wu
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Faculty of Natural Sciences, University of Basel, Basel, Switzerland
- Max Planck Institute for Brain Research, Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, Freising, Germany
- Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Faculty of Natural Sciences, University of Basel, Basel, Switzerland

42
White PA. Perception of Happening: How the Brain Deals with the No-History Problem. Cogn Sci 2021; 45:e13068. [PMID: 34865252 DOI: 10.1111/cogs.13068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2020] [Revised: 09/01/2021] [Accepted: 11/04/2021] [Indexed: 11/30/2022]
Abstract
In physics, the temporal dimension has units of infinitesimally brief duration. Given this, how is it possible to perceive things, such as motion, music, and vibrotactile stimulation, that involve extension across many units of time? To address this problem, it is proposed that there is what is termed an "information construct of happening" (ICOH), a simultaneous representation of recent, temporally differentiated perceptual information on the millisecond time scale. The main features of the ICOH are (i) time marking, semantic labeling of all information in the ICOH with ordinal temporal information and distance from what is informationally identified as the present moment, (ii) vector informational features that specify kind, direction, and rate of change for every feature in a percept, and (iii) connectives, information relating vector informational features at adjacent temporal locations in the ICOH. The ICOH integrates products of perceptual processing with recent historical information in sensory memory on the subsecond time scale. Perceptual information about happening in informational sensory memory is encoded in semantic form that preserves connected semantic trails of vector and timing information. The basic properties of the ICOH must be supported by a general and widespread timing mechanism that generates ordinal and interval timing information and it is suggested that state-dependent networks may suffice for that purpose. Happening, therefore, is perceived at a moment and is constituted by an information structure of connected recent historical information.
43
Altan E, Solla SA, Miller LE, Perreault EJ. Estimating the dimensionality of the manifold underlying multi-electrode neural recordings. PLoS Comput Biol 2021; 17:e1008591. [PMID: 34843461 PMCID: PMC8659648 DOI: 10.1371/journal.pcbi.1008591] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Revised: 12/09/2021] [Accepted: 11/11/2021] [Indexed: 01/07/2023] Open
Abstract
It is generally accepted that the number of neurons in a given brain area far exceeds the number of neurons needed to carry any specific function controlled by that area. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms' accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the "Joint Autoencoder", which significantly improved subsequent dimensionality estimation. Critically, we found that all algorithms failed when the intrinsic dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.
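The paper's benchmarking setup can be illustrated with the simplest case it covers: a known low-dimensional latent signal linearly embedded into many "neurons", with a PCA variance-capture estimate of dimensionality. This is only one of the estimators the authors compare, and the sample sizes, noise level, and variance threshold below are invented for the sketch.

```python
import numpy as np

# Synthetic recording with known intrinsic dimensionality, plus a basic
# linear (PCA) dimensionality estimate via a cumulative-variance threshold.
rng = np.random.default_rng(2)

def synthetic_recording(n_samples=2000, n_neurons=100, intrinsic_dim=5, noise=0.05):
    """Low-dimensional latent signal linearly embedded into many 'neurons'."""
    latent = rng.normal(size=(n_samples, intrinsic_dim))
    embedding = rng.normal(size=(intrinsic_dim, n_neurons))
    return latent @ embedding + noise * rng.normal(size=(n_samples, n_neurons))

def pca_dimensionality(X, var_threshold=0.99):
    """Number of principal components needed to capture var_threshold of variance."""
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending
    frac = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(frac, var_threshold) + 1)

X = synthetic_recording(intrinsic_dim=5)
print("estimated dimensionality:", pca_dimensionality(X))
```

With low noise and a linear embedding this estimator recovers the intrinsic dimensionality; the paper's point is that it degrades with nonlinear embeddings and high noise, motivating denoising and nonlinear estimators.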
Affiliation(s)
- Ege Altan
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Sara A. Solla
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Physics and Astronomy, Northwestern University, Evanston, Illinois, United States of America
- Lee E. Miller
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America
- Shirley Ryan AbilityLab, Chicago, Illinois, United States of America
- Eric J. Perreault
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America
- Shirley Ryan AbilityLab, Chicago, Illinois, United States of America

44
Basu R, Gebauer R, Herfurth T, Kolb S, Golipour Z, Tchumatchenko T, Ito HT. The orbitofrontal cortex maps future navigational goals. Nature 2021; 599:449-452. [PMID: 34707289 PMCID: PMC8599015 DOI: 10.1038/s41586-021-04042-9] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Accepted: 09/20/2021] [Indexed: 11/09/2022]
Abstract
Accurate navigation to a desired goal requires consecutive estimates of spatial relationships between the current position and future destination throughout the journey. Although neurons in the hippocampal formation can represent the position of an animal as well as its nearby trajectories [1-7], their role in determining the destination of the animal has been questioned [8,9]. It is, thus, unclear whether the brain can possess a precise estimate of target location during active environmental exploration. Here we describe neurons in the rat orbitofrontal cortex (OFC) that form spatial representations persistently pointing to the subsequent goal destination of an animal throughout navigation. This destination coding emerges before the onset of navigation, without direct sensory access to a distal goal, and even predicts the incorrect destination of an animal at the beginning of an error trial. Goal representations in the OFC are maintained by destination-specific neural ensemble dynamics, and their brief perturbation at the onset of a journey led to a navigational error. These findings suggest that the OFC is part of the internal goal map of the brain, enabling animals to navigate precisely to a chosen destination that is beyond the range of sensory perception.
Affiliation(s)
- Raunak Basu
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Robert Gebauer
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Tim Herfurth
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Simon Kolb
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Zahra Golipour
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Tatjana Tchumatchenko
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany; Institute of Experimental Epileptology and Cognition Research, Life and Brain Center, Universitätsklinikum Bonn, Bonn, Germany
- Hiroshi T. Ito
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany

45
Devineni AV, Deere JU, Sun B, Axel R. Individual bitter-sensing neurons in Drosophila exhibit both ON and OFF responses that influence synaptic plasticity. Curr Biol 2021; 31:5533-5546.e7. [PMID: 34731675 DOI: 10.1016/j.cub.2021.10.020] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 09/04/2021] [Accepted: 10/08/2021] [Indexed: 01/07/2023]
Abstract
The brain generates internal representations that translate sensory stimuli into appropriate behavior. In the taste system, different tastes activate distinct populations of sensory neurons. We investigated the temporal properties of taste responses in Drosophila and discovered that different types of taste sensory neurons show striking differences in their response dynamics. Strong responses to stimulus onset (ON responses) and offset (OFF responses) were observed in bitter-sensing neurons in the labellum, whereas bitter neurons in the leg and other classes of labellar taste neurons showed only an ON response. Individual labellar bitter neurons generate both ON and OFF responses through a cell-intrinsic mechanism that requires canonical bitter receptors. A single receptor complex likely generates both ON and OFF responses to a given bitter ligand. These ON and OFF responses in the periphery are propagated to dopaminergic neurons that mediate aversive learning, and the presence of the OFF response impacts synaptic plasticity when bitter is used as a reinforcement cue. These studies reveal previously unknown features of taste responses that impact neural circuit function and may be important for behavior. Moreover, these studies show that OFF responses can dramatically influence timing-based synaptic plasticity, which is thought to underlie associative learning.
Affiliation(s)
- Anita V. Devineni
- Zuckerman Mind Brain Behavior Institute, Columbia University, 3227 Broadway, New York, NY 10027, USA
- Julia U. Deere
- Zuckerman Mind Brain Behavior Institute, Columbia University, 3227 Broadway, New York, NY 10027, USA
- Bei Sun
- Zuckerman Mind Brain Behavior Institute, Columbia University, 3227 Broadway, New York, NY 10027, USA
- Richard Axel
- Zuckerman Mind Brain Behavior Institute, Columbia University, 3227 Broadway, New York, NY 10027, USA; Howard Hughes Medical Institute, Columbia University, New York, NY 10032, USA

46
Sohn H, Narain D. Neural implementations of Bayesian inference. Curr Opin Neurobiol 2021; 70:121-129. [PMID: 34678599 DOI: 10.1016/j.conb.2021.09.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 08/18/2021] [Accepted: 09/09/2021] [Indexed: 10/20/2022]
Abstract
Bayesian inference has emerged as a general framework that captures how organisms make decisions under uncertainty. Recent experimental findings reveal disparate mechanisms for how the brain generates behaviors predicted by normative Bayesian theories. Here, we identify two broad classes of neural implementations for Bayesian inference: a modular class, where each probabilistic component of Bayesian computation is independently encoded and a transform class, where uncertain measurements are converted to Bayesian estimates through latent processes. Many recent experimental neuroscience findings studying probabilistic inference broadly fall into these classes. We identify potential avenues for synthesis across these two classes and the disparities that, at present, cannot be reconciled. We conclude that to distinguish among implementation hypotheses for Bayesian inference, we require greater engagement among theoretical and experimental neuroscientists in an effort that spans different scales of analysis, circuits, tasks, and species.
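The normative computation whose neural implementations the review classifies can be stated in a few lines: for Gaussian prior and Gaussian likelihood, the Bayesian estimate is a reliability-weighted average of the measurement and the prior mean. The numbers below are purely illustrative.

```python
# Minimal example of the Gaussian prior x Gaussian likelihood computation
# that normative Bayesian theories of behavior predict.
def bayes_estimate(measurement, sigma_m, prior_mean, sigma_p):
    """Posterior mean: reliability-weighted average of measurement and prior."""
    w = sigma_p**2 / (sigma_p**2 + sigma_m**2)  # weight on the measurement
    return w * measurement + (1 - w) * prior_mean

# With equal prior and measurement uncertainty, a measurement of 1.2 is
# pulled halfway toward the prior mean of 1.0, giving 1.1.
print(bayes_estimate(1.2, sigma_m=0.1, prior_mean=1.0, sigma_p=0.1))
```

The review's "modular" versus "transform" distinction concerns whether circuits represent the prior, likelihood, and posterior separately or convert measurements to estimates through latent processes; the arithmetic itself is the same in either case.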
Affiliation(s)
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Devika Narain
- Department of Neuroscience, Erasmus University Medical Center, Rotterdam, 3015 CN, the Netherlands

47
Sirio Carmantini G, Schittler Neves F, Timme M, Rodrigues S. Stochastic facilitation in heteroclinic communication channels. Chaos 2021; 31:093130. [PMID: 34598472 DOI: 10.1063/5.0054485] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Accepted: 08/26/2021] [Indexed: 06/13/2023]
Abstract
Biological neural systems encode and transmit information as patterns of activity tracing complex trajectories in high-dimensional state spaces, inspiring alternative paradigms of information processing. Heteroclinic networks, naturally emerging in artificial neural systems, are networks of saddles in state space that provide a transparent approach to generate complex trajectories via controlled switches among interconnected saddles. External signals induce specific switching sequences, thus dynamically encoding inputs as trajectories. Recent works have focused either on computational aspects of heteroclinic networks, i.e., Heteroclinic Computing, or their stochastic properties under noise. Yet, how well such systems may transmit information remains an open question. Here, we investigate the information transmission properties of heteroclinic networks, studying them as communication channels. Choosing a tractable but representative system exhibiting a heteroclinic network, we investigate the mutual information rate (MIR) between input signals and the resulting sequences of states as the level of noise varies. Intriguingly, MIR does not decrease monotonically with increasing noise. Intermediate noise levels indeed maximize the information transmission capacity by promoting an increased yet controlled exploration of the underlying network of states. Complementing standard stochastic resonance, these results highlight the constructive effect of stochastic facilitation (i.e., noise-enhanced information transfer) on heteroclinic communication channels and possibly on more general dynamical systems exhibiting complex trajectories in state space.
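The non-monotonic effect of noise described above is easiest to see in the standard stochastic-resonance setting the authors say their results complement. The toy below is not the paper's heteroclinic model: a subthreshold sine wave plus noise drives a simple threshold detector, and transmission quality peaks at an intermediate noise level. All amplitudes and noise levels are invented for the sketch.

```python
import numpy as np

# Classic stochastic-resonance toy: a subthreshold periodic signal is
# transmitted through a threshold nonlinearity best at intermediate noise.
rng = np.random.default_rng(3)

t = np.linspace(0, 20 * np.pi, 20_000)
signal = 0.8 * np.sin(t)   # subthreshold: never crosses threshold on its own
threshold = 1.0

def transmission(noise_sd, n_rep=20):
    """Mean correlation between the signal and the thresholded noisy input."""
    corrs = []
    for _ in range(n_rep):
        out = (signal + noise_sd * rng.normal(size=t.size)) > threshold
        corrs.append(np.corrcoef(signal, out.astype(float))[0, 1])
    return float(np.mean(corrs))

levels = [0.1, 0.4, 5.0]
scores = [transmission(s) for s in levels]
# Too little noise: the threshold is almost never crossed. Too much: crossings
# become signal-independent. Intermediate noise transmits the signal best.
print(dict(zip(levels, np.round(scores, 3))))
```

The paper's contribution is showing an analogous constructive role of noise for switching sequences in heteroclinic networks, quantified by the mutual information rate rather than a simple correlation.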
Affiliation(s)
- Fabio Schittler Neves
- Center for Advancing Electronics Dresden (cfaed) and Institute for Theoretical Physics, TU Dresden, 01062 Dresden, Germany
- Marc Timme
- Center for Advancing Electronics Dresden (cfaed) and Institute for Theoretical Physics, TU Dresden, 01062 Dresden, Germany
- Serafim Rodrigues
- BCAM-Basque Center for Applied Mathematics, 48009 Bilbao, Bizkaia, Spain

48
Umakantha A, Morina R, Cowley BR, Snyder AC, Smith MA, Yu BM. Bridging neuronal correlations and dimensionality reduction. Neuron 2021; 109:2740-2754.e12. [PMID: 34293295 PMCID: PMC8505167 DOI: 10.1016/j.neuron.2021.06.028] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Revised: 05/05/2021] [Accepted: 06/25/2021] [Indexed: 01/01/2023]
Abstract
Two commonly used approaches to study interactions among neurons are spike count correlation, which describes pairs of neurons, and dimensionality reduction, applied to a population of neurons. Although both approaches have been used to study trial-to-trial neuronal variability correlated among neurons, they are often used in isolation and have not been directly related. We first established concrete mathematical and empirical relationships between pairwise correlation and metrics of population-wide covariability based on dimensionality reduction. Applying these insights to macaque V4 population recordings, we found that the previously reported decrease in mean pairwise correlation associated with attention stemmed from three distinct changes in population-wide covariability. Overall, our work builds the intuition and formalism to bridge between pairwise correlation and population-wide covariability and presents a cautionary tale about the inferences one can make about population activity by using a single statistic, whether it be mean pairwise correlation or dimensionality.
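The bridge the paper formalizes, between mean pairwise spike-count correlation and population-wide covariability from dimensionality reduction, can be sketched with the simplest generative case: one shared latent fluctuation driving all neurons. The trial counts, population size, and loading strength below are invented; this reproduces only the intuition, not the paper's derivations.

```python
import numpy as np

# One shared trial-to-trial fluctuation loading uniformly onto all neurons:
# both the pairwise view (mean correlation) and the population view (variance
# along the dominant dimension) then reflect the same shared variability.
rng = np.random.default_rng(4)

n_trials, n_neurons = 5000, 50
s = 0.6  # shared loading strength

latent = rng.normal(size=(n_trials, 1))            # shared fluctuation
counts = latent @ (s * np.ones((1, n_neurons))) + rng.normal(size=(n_trials, n_neurons))

# Pairwise view: mean spike-count correlation across all neuron pairs.
C = np.corrcoef(counts, rowvar=False)
iu = np.triu_indices(n_neurons, k=1)
mean_rsc = C[iu].mean()

# Population view: fraction of variance along the dominant (shared) dimension.
eigvals = np.linalg.eigvalsh(np.cov(counts, rowvar=False))[::-1]
shared_var_frac = eigvals[0] / eigvals.sum()

# For this model the expected pairwise correlation is s^2 / (s^2 + 1).
print(f"mean pairwise correlation: {mean_rsc:.3f} (theory {s**2 / (s**2 + 1):.3f})")
print(f"top-eigenvector variance fraction: {shared_var_frac:.3f}")
```

The paper's cautionary tale is that a single summary statistic like `mean_rsc` can stay fixed while the underlying population-wide covariability changes in several distinct ways (e.g., loading similarity, shared variance, dimensionality).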
Affiliation(s)
- Akash Umakantha
- Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Rudina Morina
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Benjamin R. Cowley
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Adam C. Snyder
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14642, USA; Department of Neuroscience, University of Rochester, Rochester, NY 14642, USA; Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Matthew A. Smith
- Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Byron M. Yu
- Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA

49
Abstract
Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
Affiliation(s)
- Saurabh Vyas
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- Matthew D Golub
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Google AI, Google Inc., Mountain View, California 94305, USA
- Krishna V Shenoy
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Department of Neurobiology, Bio-X Institute, Neurosciences Program, and Howard Hughes Medical Institute, Stanford University, Stanford, California 94305, USA
50
Singh V, Tchernookov M, Balasubramanian V. What the odor is not: Estimation by elimination. Phys Rev E 2021; 104:024415. [PMID: 34525542 PMCID: PMC8892575 DOI: 10.1103/physreve.104.024415] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Accepted: 08/02/2021] [Indexed: 11/07/2022]
Abstract
Olfactory systems use a small number of broadly sensitive receptors to combinatorially encode a vast number of odors. We propose a method of decoding such distributed representations by exploiting a statistical fact: Receptors that do not respond to an odor carry more information than receptors that do because they signal the absence of all odorants that bind to them. Thus, it is easier to identify what the odor is not rather than what the odor is. For realistic numbers of receptors, response functions, and odor complexity, this method of elimination turns an underconstrained decoding problem into a solvable one, allowing accurate determination of odorants in a mixture and their concentrations. We construct a neural network realization of our algorithm based on the structure of the olfactory pathway.
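The elimination step described in this abstract has a compact linear-algebra sketch. The binding matrix, mixture size, and sparsity below are illustrative assumptions, not parameters from the paper: a receptor that stays silent rules out every odorant it binds, and the candidate set shrinks to the odorants no silent receptor touches.

```python
import numpy as np

rng = np.random.default_rng(0)
n_receptors, n_odorants = 30, 100

# Hypothetical binary sensitivity matrix: S[i, j] = 1 if receptor i binds odorant j.
S = (rng.random((n_receptors, n_odorants)) < 0.15).astype(int)

# A sparse mixture: a few odorants present at unit concentration.
true_odorants = rng.choice(n_odorants, size=3, replace=False)
x = np.zeros(n_odorants)
x[true_odorants] = 1.0

# A receptor responds iff it binds at least one mixture component.
active = (S @ x) > 0

# Elimination: any odorant bound by a silent receptor cannot be in the mixture.
eliminated = S[~active].any(axis=0)
candidates = np.flatnonzero(~eliminated)

# A present odorant never binds a silent receptor, so it always survives elimination.
assert set(true_odorants) <= set(candidates)
```

Once elimination has pruned the candidate set, the remaining decoding problem is far smaller than the original underconstrained one, and concentrations can be estimated from the active receptors' responses by standard means (e.g. least squares) on the surviving odorants only.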
Affiliation(s)
- Vijay Singh
- Department of Physics, North Carolina A&T State University, Greensboro, NC 27410, USA; Department of Physics & Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA 19104, USA
- Martin Tchernookov
- Department of Physics, University of Wisconsin-Whitewater, Whitewater, WI 53190, USA
- Vijay Balasubramanian
- Department of Physics & Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA 19104, USA