1
Lyamzin DR, Alamia A, Abdolrahmani M, Aoki R, Benucci A. Regularizing hyperparameters of interacting neural signals in the mouse cortex reflect states of arousal. PLoS Comput Biol 2024; 20:e1012478. PMID: 39405361; PMCID: PMC11527387; DOI: 10.1371/journal.pcbi.1012478.
Abstract
In natural behaviors, multiple neural signals simultaneously drive activation across overlapping brain networks. Due to limitations in the amount of data that can be acquired in common experimental designs, these interactions are commonly inferred via modeling approaches, which reduce overfitting by finding appropriate regularizing hyperparameters. However, it is unclear whether these hyperparameters can also be related to any aspect of the underlying biological phenomena and help interpret them. We applied a state-of-the-art regularization procedure, automatic locality determination, to interacting neural activations in the mouse posterior cortex associated with movements of the body and eyes. As expected, regularization significantly improved the determination and interpretability of the response interactions. However, regularizing hyperparameters also changed considerably, and seemingly unpredictably, from animal to animal. We found that these variations were not random; rather, they correlated with the variability in visually evoked responses and with the variability in the state of arousal of the animals measured by pupillometry, both pieces of information that were not included in the modeling framework. These observations could be generalized to another commonly used, but potentially less informative, regularization method: ridge regression. Our findings demonstrate that optimal model hyperparameters can be discovery tools that are informative of factors not a priori included in the model's design.
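The ridge-regression case generalized to here can be made concrete: the regularizing hyperparameter is typically chosen to minimize held-out error, and its selected value tracks a latent property of the data, here the noise level. A minimal sketch with synthetic data (all names and constants ours, not the authors'):

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X'X + lam*I)^{-1} X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def best_lambda(X_tr, y_tr, X_val, y_val, lams):
    # Choose the hyperparameter minimizing held-out squared error
    errs = [np.mean((X_val @ ridge_fit(X_tr, y_tr, lam) - y_val) ** 2)
            for lam in lams]
    return lams[int(np.argmin(errs))]

rng = np.random.default_rng(0)
w_true = rng.normal(size=20)
X_tr = rng.normal(size=(100, 20))
X_val = rng.normal(size=(2000, 20))
lams = np.logspace(-3, 3, 13)

# Same ground truth, two noise regimes: noisier responses favor stronger
# regularization, so the selected hyperparameter reflects the noise level.
y_tr_lo = X_tr @ w_true + 0.1 * rng.normal(size=100)
y_val_lo = X_val @ w_true + 0.1 * rng.normal(size=2000)
y_tr_hi = X_tr @ w_true + 5.0 * rng.normal(size=100)
y_val_hi = X_val @ w_true + 5.0 * rng.normal(size=2000)

lam_lo = best_lambda(X_tr, y_tr_lo, X_val, y_val_lo, lams)
lam_hi = best_lambda(X_tr, y_tr_hi, X_val, y_val_hi, lams)
```

Here the optimal hyperparameter carries information (the noise level) that was never handed to the model explicitly, which is the sense in which the paper treats hyperparameters as discovery tools.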
Affiliation(s)
- Andrea Alamia
- Centre de Recherche Cerveau et Cognition, CNRS, Université de Toulouse, Toulouse, France
- Ryo Aoki
- RIKEN Center for Brain Science, Wako-shi, Saitama, Japan
- Andrea Benucci
- Queen Mary University of London, School of Biological and Behavioural Sciences, London, United Kingdom
2
Hermansen E, Klindt DA, Dunn BA. Uncovering 2-D toroidal representations in grid cell ensemble activity during 1-D behavior. Nat Commun 2024; 15:5429. PMID: 38926360; PMCID: PMC11208534; DOI: 10.1038/s41467-024-49703-1.
Abstract
Minimal experiments, such as head-fixed wheel-running and sleep, offer experimental advantages but restrict the amount of observable behavior, making it difficult to classify functional cell types. Arguably, the grid cell, and its striking periodicity, would not have been discovered without the perspective provided by free behavior in an open environment. Here, we show that by shifting the focus from single neurons to populations, we change the minimal experimental complexity required. We identify grid cell modules and show that the activity covers a similar, stable toroidal state space during wheel running as in open field foraging. Trajectories on grid cell tori correspond to single trial runs in virtual reality and path integration in the dark, and the alignment of the representation rapidly shifts with changes in experimental conditions. Thus, we provide a methodology to discover and study complex internal representations in even the simplest of experiments.
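The population-level shift described above can be illustrated in miniature. With a 1-D circular latent variable (a ring rather than the paper's 2-D torus, for brevity), the shared low-dimensional structure of synthetic periodic tuning curves is recoverable from population activity alone, with no reference to behavior. All parameters here are made up:

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_samples = 50, 400
phases = rng.uniform(0, 2 * np.pi, n_neurons)     # preferred phases
theta = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)

# Each neuron is cosine-tuned to a shared 1-D circular latent variable
rates = np.cos(theta[:, None] - phases[None, :])  # (samples, neurons)

# The population activity is exactly rank 2: it lies on a closed loop
# spanned by the cos(theta) and sin(theta) components.
X = rates - rates.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
proj = X @ Vt[:2].T                               # top-2 subspace
radii = np.linalg.norm(proj, axis=1)              # roughly constant on a ring
```

The ring emerges from the population geometry regardless of how the latent variable was sampled, which parallels the paper's point that the toroidal state space is visible even under restricted 1-D behavior.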
Affiliation(s)
- Erik Hermansen
- Department of Mathematical Sciences, NTNU, Trondheim, Norway.
- David A Klindt
- Department of Mathematical Sciences, NTNU, Trondheim, Norway
- Cold Spring Harbor Laboratory, Cold Spring Harbor, Laurel Hollow, New York, USA
- Benjamin A Dunn
- Department of Mathematical Sciences, NTNU, Trondheim, Norway.
3
Piet A, Ponvert N, Ollerenshaw D, Garrett M, Groblewski PA, Olsen S, Koch C, Arkhipov A. Behavioral strategy shapes activation of the Vip-Sst disinhibitory circuit in visual cortex. Neuron 2024; 112:1876-1890.e4. PMID: 38447579; PMCID: PMC11156560; DOI: 10.1016/j.neuron.2024.02.008.
Abstract
In complex environments, animals can adopt diverse strategies to find rewards. How distinct strategies differentially engage brain circuits is not well understood. Here, we investigate this question, focusing on the cortical Vip-Sst disinhibitory circuit between vasoactive intestinal peptide-positive (Vip) interneurons and somatostatin-positive (Sst) interneurons. We characterize the behavioral strategies used by mice during a visual change detection task. Using a dynamic logistic regression model, we find that individual mice use mixtures of a visual comparison strategy and a statistical timing strategy. Separately, mice also have periods of task engagement and disengagement. Two-photon calcium imaging shows large strategy-dependent differences in neural activity in excitatory, Sst inhibitory, and Vip inhibitory cells in response to both image changes and image omissions. In contrast, task engagement has limited effects on neural population activity. We find that the diversity of neural correlates of strategy can be understood parsimoniously as the increased activation of the Vip-Sst disinhibitory circuit during the visual comparison strategy, which facilitates task-appropriate responses.
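The strategy-index idea can be sketched with a static stand-in for the paper's dynamic logistic regression (the paper fits time-varying weights; we hold them constant for brevity, and all constants are ours): fit a logistic regression with a visual-change regressor and a timing regressor, and compare the recovered weights.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    # Plain logistic regression via gradient descent
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(1)
n = 2000
visual = rng.integers(0, 2, n).astype(float)  # 1 if the image changed
timing = rng.normal(size=n)                   # elapsed-time regressor (z-scored)
X = np.column_stack([np.ones(n), visual, timing])

# Simulate a "visual comparison" mouse: responses driven by image changes
logits = -2.0 + 3.0 * visual + 0.2 * timing
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

w = fit_logistic(X, y)  # w[1]: visual weight, w[2]: timing weight
```

A "timing" mouse would be simulated with the weights swapped; the fitted weight ratio then serves as a per-animal strategy index.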
Affiliation(s)
- Alex Piet
- Allen Institute, Mindscope Program, Seattle, WA, USA.
- Nick Ponvert
- Allen Institute, Mindscope Program, Seattle, WA, USA
- Shawn Olsen
- Allen Institute, Mindscope Program, Seattle, WA, USA
- Christof Koch
- Allen Institute, Mindscope Program, Seattle, WA, USA
4
Segraves MA. Using Natural Scenes to Enhance our Understanding of the Cerebral Cortex's Role in Visual Search. Annu Rev Vis Sci 2023; 9:435-454. PMID: 37164028; DOI: 10.1146/annurev-vision-100720-124033.
Abstract
Using natural scenes is an approach to studying the visual and eye movement systems that approximates how these systems function in everyday life. This review examines results from behavioral and neurophysiological studies using natural scene viewing in humans and monkeys. The use of natural scenes for the study of cerebral cortical activity is relatively new and presents challenges for data analysis. Methods and results from the use of natural scenes to study the visual and eye movement cortex are presented, with emphasis on the new insights this method provides beyond what is known about these cortical regions from conventional methods.
Affiliation(s)
- Mark A Segraves
- Department of Neurobiology, Northwestern University, Evanston, Illinois, USA
5
Maisson DJN, Cervera RL, Voloh B, Conover I, Zambre M, Zimmermann J, Hayden BY. Widespread coding of navigational variables in prefrontal cortex. Curr Biol 2023; 33:3478-3488.e3. PMID: 37541250; PMCID: PMC10984098; DOI: 10.1016/j.cub.2023.07.024.
Abstract
To navigate effectively, we must represent information about our location in the environment. Traditional research highlights the role of the hippocampal complex in this process. Spurred by recent research highlighting the widespread cortical encoding of cognitive and motor variables previously thought to have localized function, we hypothesized that navigational variables would be likewise encoded widely, especially in the prefrontal cortex, which is associated with volitional behavior. We recorded neural activity from six prefrontal regions while macaques performed a foraging task in an open enclosure. In all regions, we found strong encoding of allocentric position, allocentric head direction, boundary distance, and linear and angular velocity. These encodings were not accounted for by distance, time to reward, or motor factors. The strength of coding of all variables increased along a ventral-to-dorsal gradient. Together, these results argue that encoding of navigational variables is not localized to the hippocampus and support the hypothesis that navigation is continuous with other forms of flexible cognition in the service of action.
Affiliation(s)
- David J-N Maisson
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Roberto Lopez Cervera
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Benjamin Voloh
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Indirah Conover
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Mrunal Zambre
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Jan Zimmermann
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Benjamin Y Hayden
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA.
6
Angeloni CF, Młynarski W, Piasini E, Williams AM, Wood KC, Garami L, Hermundstad AM, Geffen MN. Dynamics of cortical contrast adaptation predict perception of signals in noise. Nat Commun 2023; 14:4817. PMID: 37558677; PMCID: PMC10412650; DOI: 10.1038/s41467-023-40477-6.
Abstract
Neurons throughout the sensory pathway adapt their responses depending on the statistical structure of the sensory environment. Contrast gain control is a form of adaptation in the auditory cortex, but it is unclear whether the dynamics of gain control reflect efficient adaptation, and whether they shape behavioral perception. Here, we trained mice to detect a target presented in background noise shortly after a change in the contrast of the background. The observed changes in cortical gain and behavioral detection followed the dynamics of a normative model of efficient contrast gain control; specifically, target detection and sensitivity improved slowly in low contrast, but degraded rapidly in high contrast. Auditory cortex was required for this task, and cortical responses were not only similarly affected by contrast but predicted variability in behavioral performance. Combined, our results demonstrate that dynamic gain adaptation supports efficient coding in auditory cortex and predicts the perception of sounds in noise.
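The asymmetric dynamics described here (slow improvement after a drop to low contrast, rapid degradation after a rise to high contrast) can be caricatured by a gain variable that tracks inverse contrast with state-dependent time constants. This is an illustrative sketch, not the authors' normative model, and the constants are arbitrary:

```python
import numpy as np

def adapt_gain(contrast, tau_fast=2.0, tau_slow=20.0, dt=1.0):
    # Gain tracks inverse background contrast with asymmetric dynamics:
    # fast when contrast rises (target gain drops), slow when it falls.
    g = 1.0 / contrast[0]
    out = np.empty(len(contrast))
    for t, c in enumerate(contrast):
        target = 1.0 / c
        tau = tau_fast if target < g else tau_slow
        g += (target - g) * dt / tau
        out[t] = g
    return out

# Low -> high -> low contrast blocks
contrast = np.concatenate([np.full(50, 0.5), np.full(50, 2.0), np.full(50, 0.5)])
gain = adapt_gain(contrast)
# Gain collapses quickly after the high-contrast switch (index 50)
# and recovers only slowly after the switch back (index 100).
```

With gain as a multiplier on target-evoked responses, this asymmetry reproduces the behavioral pattern in the abstract: detection improves slowly after a drop to low contrast and degrades rapidly after a rise to high contrast.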
Affiliation(s)
- Christopher F Angeloni
- Psychology Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Wiktor Młynarski
- Faculty of Biology, Ludwig Maximilian University of Munich, Munich, Germany
- Bernstein Center for Computational Neuroscience, Munich, Germany
- Eugenio Piasini
- International School for Advanced Studies (SISSA), Trieste, Italy
- Aaron M Williams
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Katherine C Wood
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Linda Garami
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Ann M Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Maria N Geffen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA.
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA.
- Department of Neuroscience, Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA.
7
Barabási DL, Beynon T, Katona Á, Perez-Nieves N. Complex computation from developmental priors. Nat Commun 2023; 14:2226. PMID: 37076523; PMCID: PMC10115783; DOI: 10.1038/s41467-023-37980-1.
Abstract
Machine learning (ML) models have long overlooked innateness: how strong pressures for survival lead to the encoding of complex behaviors in the nascent wiring of a brain. Here, we derive a neurodevelopmental encoding of artificial neural networks that considers the weight matrix of a neural network to be emergent from well-studied rules of neuronal compatibility. Rather than updating the network's weights directly, we improve task fitness by updating the neurons' wiring rules, thereby mirroring evolutionary selection on brain development. We find that our model (1) provides sufficient representational power for high accuracy on ML benchmarks while also compressing parameter count, and (2) can act as a regularizer, selecting simple circuits that provide stable and adaptive performance on metalearning tasks. In summary, by introducing neurodevelopmental considerations into ML frameworks, we not only model the emergence of innate behaviors, but also define a discovery process for structures that promote complex computations.
Affiliation(s)
- Ádám Katona
- Electrical and Electronic Engineering, Imperial College London, London, UK
8
Gopinath N. Artificial intelligence and neuroscience: An update on fascinating relationships. Process Biochem 2023. DOI: 10.1016/j.procbio.2022.12.011.
9
Barbosa J, Stein H, Zorowitz S, Niv Y, Summerfield C, Soto-Faraco S, Hyafil A. A practical guide for studying human behavior in the lab. Behav Res Methods 2023; 55:58-76. PMID: 35262897; DOI: 10.3758/s13428-022-01793-9.
Abstract
In the last few decades, the field of neuroscience has witnessed major technological advances that have allowed researchers to measure and control neural activity with great detail. Yet, behavioral experiments in humans remain an essential approach to investigate the mysteries of the mind. Their relatively modest technological and economic requisites make behavioral research an attractive and accessible experimental avenue for neuroscientists with very diverse backgrounds. However, like any experimental enterprise, it has its own inherent challenges that may pose practical hurdles, especially to less experienced behavioral researchers. Here, we aim to provide a practical guide for a steady walk through the workflow of a typical behavioral experiment with human subjects. This primer concerns the design of an experimental protocol, research ethics, and subject care, as well as best practices for data collection, analysis, and sharing. The goal is to provide clear instructions for both beginners and experienced researchers from diverse backgrounds in planning behavioral experiments.
Affiliation(s)
- Joao Barbosa
- Brain Circuits & Behavior lab, IDIBAPS, Barcelona, Spain.
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Supérieure - PSL Research University, 75005, Paris, France.
- Heike Stein
- Brain Circuits & Behavior lab, IDIBAPS, Barcelona, Spain
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Supérieure - PSL Research University, 75005, Paris, France
- Sam Zorowitz
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Yael Niv
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Department of Psychology, Princeton University, Princeton, USA
- Salvador Soto-Faraco
- Multisensory Research Group, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain, and Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
10
Monaco JD, Hwang GM. Neurodynamical Computing at the Information Boundaries of Intelligent Systems. Cognit Comput 2022; 16:1-13. PMID: 39129840; PMCID: PMC11306504; DOI: 10.1007/s12559-022-10081-9.
Abstract
Artificial intelligence has not achieved defining features of biological intelligence despite models boasting more parameters than neurons in the human brain. In this perspective article, we synthesize historical approaches to understanding intelligent systems and argue that methodological and epistemic biases in these fields can be resolved by shifting away from cognitivist brain-as-computer theories and recognizing that brains exist within large, interdependent living systems. Integrating the dynamical systems view of cognition with the massive distributed feedback of perceptual control theory highlights a theoretical gap in our understanding of nonreductive neural mechanisms. Cell assemblies (properly conceived as reentrant dynamical flows, not merely as identified groups of neurons) may fill that gap by providing a minimal supraneuronal level of organization that establishes a neurodynamical base layer for computation. By considering information streams from physical embodiment and situational embedding, we discuss this computational base layer in terms of conserved oscillatory and structural properties of cortical-hippocampal networks. Our synthesis of embodied cognition, based in dynamical systems and perceptual control, aims to bypass the neurosymbolic stalemates that have arisen in artificial intelligence, cognitive science, and computational neuroscience.
Affiliation(s)
- Joseph D. Monaco
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Grace M. Hwang
- Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
11
Aitken K, Garrett M, Olsen S, Mihalas S. The geometry of representational drift in natural and artificial neural networks. PLoS Comput Biol 2022; 18:e1010716. PMID: 36441762; PMCID: PMC9731438; DOI: 10.1371/journal.pcbi.1010716.
Abstract
Neurons in sensory areas encode and represent stimuli. Surprisingly, recent studies have suggested that, even while performance remains stable, these representations change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging, and we corroborate previous findings that such representations change as experimental trials are repeated across days. This phenomenon has been termed "representational drift". In this study we geometrically characterize the properties of representational drift in the primary visual cortex of mice in two open datasets from the Allen Institute and propose a potential mechanism behind such drift. We observe representational drift both for passively presented stimuli and for stimuli that are behaviorally relevant. Across experiments, the drift differs from in-session variance and most often occurs along directions that have the most in-class variance, leading to a significant turnover in the neurons used for a given representation. Interestingly, despite this significant change due to drift, linear classifiers trained to distinguish neuronal representations show little to no degradation in performance across days. The features we observe in the neural data are similar to properties of artificial neural networks in which representations are updated by continual learning in the presence of dropout, i.e., a random masking of nodes/weights, but not other types of noise. We therefore conclude that representational drift in biological networks may be driven by an underlying dropout-like noise during continual learning, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g., by preventing overfitting.
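A minimal simulation of the proposed mechanism (our own toy, not the authors' code): a linear readout continually retrained with dropout keeps moving in weight space across "sessions" while its classification accuracy stays essentially unchanged, because the class signal is carried redundantly by many units.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 40
y = rng.integers(0, 2, n).astype(float)
# Redundant population code: every unit carries the same class signal
X = (2.0 * y[:, None] - 1.0) + rng.normal(size=(n, d))

def train_session(w, X, y, p_drop=0.5, lr=0.05, steps=200):
    # Continual learning with dropout: each step a random mask silences
    # half the inputs -- the noise source proposed to drive drift.
    for _ in range(steps):
        mask = (rng.random(X.shape[1]) > p_drop).astype(float)
        p = 1.0 / (1.0 + np.exp(-((X * mask) @ w)))
        w = w - lr * (X * mask).T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == (y > 0.5)))

w = np.zeros(d)
snapshots = []
for _ in range(5):              # five "sessions" of continued training
    w = train_session(w, X, y)
    snapshots.append(w.copy())

drift = float(np.linalg.norm(snapshots[-1] - snapshots[0]))
accs = [accuracy(ws, X, y) for ws in snapshots]
# The readout weights keep changing, yet decoding stays accurate.
```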
Affiliation(s)
- Kyle Aitken
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Marina Garrett
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Shawn Olsen
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Stefan Mihalas
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
12
Bowles S, Hickman J, Peng X, Williamson WR, Huang R, Washington K, Donegan D, Welle CG. Vagus nerve stimulation drives selective circuit modulation through cholinergic reinforcement. Neuron 2022; 110:2867-2885.e7. PMID: 35858623; DOI: 10.1016/j.neuron.2022.06.017.
Abstract
Vagus nerve stimulation (VNS) is a neuromodulation therapy for a broad and expanding set of neurologic conditions. However, the mechanism through which VNS influences central nervous system circuitry is not well described, limiting therapeutic optimization. VNS leads to widespread brain activation, but the effects on behavior are remarkably specific, indicating plasticity unique to behaviorally engaged neural circuits. To understand how VNS can lead to specific circuit modulation, we leveraged genetic tools including optogenetics and in vivo calcium imaging in mice learning a skilled reach task. We find that VNS enhances skilled motor learning in healthy animals via a cholinergic reinforcement mechanism, producing a rapid consolidation of an expert reach trajectory. In primary motor cortex (M1), VNS drives precise temporal modulation of neurons that respond to behavioral outcome. This suggests that VNS may accelerate motor refinement in M1 via cholinergic signaling, opening new avenues for optimizing VNS to target specific disease-relevant circuitry.
Affiliation(s)
- Spencer Bowles
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80045, USA; Department of Neurosurgery, University of Colorado School of Medicine, Aurora, CO 80045, USA
- Jordan Hickman
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80045, USA
- Xiaoyu Peng
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80045, USA; Department of Neurosurgery, University of Colorado School of Medicine, Aurora, CO 80045, USA
- W Ryan Williamson
- IDEA Core, University of Colorado School of Medicine, Aurora, CO 80045, USA
- Rongchen Huang
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80045, USA; Department of Neurosurgery, University of Colorado School of Medicine, Aurora, CO 80045, USA
- Kayden Washington
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80045, USA; Department of Neurosurgery, University of Colorado School of Medicine, Aurora, CO 80045, USA
- Dane Donegan
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80045, USA; Department of Neurosurgery, University of Colorado School of Medicine, Aurora, CO 80045, USA
- Cristin G Welle
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80045, USA; Department of Neurosurgery, University of Colorado School of Medicine, Aurora, CO 80045, USA.
13
Delis I, Ince RAA, Sajda P, Wang Q. Neural Encoding of Active Multi-Sensing Enhances Perceptual Decision-Making via a Synergistic Cross-Modal Interaction. J Neurosci 2022; 42:2344-2355. PMID: 35091504; PMCID: PMC8936614; DOI: 10.1523/jneurosci.0861-21.2022.
Abstract
Most perceptual decisions rely on the active acquisition of evidence from the environment involving stimulation from multiple senses. However, our understanding of the neural mechanisms underlying this process is limited. Crucially, it remains elusive how different sensory representations interact in the formation of perceptual decisions. To answer these questions, we used an active sensing paradigm coupled with neuroimaging, multivariate analysis, and computational modeling to probe how the human brain processes multisensory information to make perceptual judgments. Participants of both sexes actively sensed to discriminate two texture stimuli using visual (V) or haptic (H) information or the two sensory cues together (VH). Crucially, information acquisition was under the participants' control, who could choose where to sample information from and for how long on each trial. To understand the neural underpinnings of this process, we first characterized where and when active sensory experience (movement patterns) is encoded in human brain activity (EEG) in the three sensory conditions. Then, to offer a neurocomputational account of active multisensory decision formation, we used these neural representations of active sensing to inform a drift diffusion model of decision-making behavior. This revealed a multisensory enhancement of the neural representation of active sensing, which led to faster and more accurate multisensory decisions. We then dissected the interactions between the V, H, and VH representations using a novel information-theoretic methodology. Ultimately, we identified a synergistic neural interaction between the two unisensory (V, H) representations over contralateral somatosensory and motor locations that predicted multisensory (VH) decision-making performance.
SIGNIFICANCE STATEMENT In real-world settings, perceptual decisions are made during active behaviors, such as crossing the road on a rainy night, and include information from different senses (e.g., car lights, slippery ground). Critically, it remains largely unknown how sensory evidence is combined and translated into perceptual decisions in such active scenarios. Here we address this knowledge gap. First, we show that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements. Second, the neural representation of active sensing modulates the evidence available for decision; and importantly, multi-sensing yields faster evidence accumulation. Finally, we identify a cross-modal interaction in the human brain that correlates with multisensory performance, constituting a putative neural mechanism for forging active multisensory perception.
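The drift diffusion component of this account can be sketched directly; the drift rates below are invented for illustration, with the multisensory (VH) condition assigned a higher rate than the unisensory ones, consistent with the abstract's report of faster and more accurate multisensory decisions:

```python
import numpy as np

def simulate_ddm(drift, n_trials=2000, bound=1.0, dt=0.01, noise=1.0, seed=3):
    # Drift diffusion: accumulate noisy evidence until a bound is hit.
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t)
        correct.append(x > 0)   # upper bound = correct choice
    return float(np.mean(rts)), float(np.mean(correct))

# Invented drift rates: a stronger evidence stream under multi-sensing
rt_uni, acc_uni = simulate_ddm(drift=0.5)      # unisensory (V or H)
rt_multi, acc_multi = simulate_ddm(drift=1.5)  # multisensory (VH)
```

A higher drift rate simultaneously shortens response times and raises accuracy, which is why informing the drift rate from neural representations of active sensing can explain both behavioral effects at once.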
Affiliation(s)
- Ioannis Delis
- School of Biomedical Sciences, University of Leeds, Leeds, LS2 9JT, United Kingdom
- Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, G12 8QQ, United Kingdom
- Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, New York 10027
- Data Science Institute, Columbia University, New York, New York 10027
- Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, New York 10027
14
Robson DN, Li JM. A dynamical systems view of neuroethology: Uncovering stateful computation in natural behaviors. Curr Opin Neurobiol 2022; 73:102517. PMID: 35217311; DOI: 10.1016/j.conb.2022.01.002.
Abstract
State-dependent computation is key to cognition in both biological and artificial systems. Alan Turing recognized the power of stateful computation when he created the Turing machine with theoretically infinite computational capacity in 1936. Independently, by 1950, ethologists such as Tinbergen and Lorenz also began to implicitly embed rudimentary forms of state-dependent computation to create qualitative models of internal drives and naturally occurring animal behaviors. Here, we reformulate core ethological concepts in explicitly dynamical systems terms for stateful computation. We examine, based on a wealth of recent neural data collected during complex innate behaviors across species, the neural dynamics that determine the temporal structure of internal states. We will also discuss the degree to which the brain can be hierarchically partitioned into nested dynamical systems and the need for a multi-dimensional state-space model of the neuromodulatory system that underlies motivational and affective states.
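A minimal, hypothetical illustration of stateful computation in the ethological sense (all names ours): the same stimulus maps onto different actions depending on an internal motivational state, which is the property a stateless stimulus-response mapping cannot capture.

```python
# Hypothetical two-state controller in the spirit of Tinbergen/Lorenz
# drive models: output depends on (stimulus, internal state), not on
# the stimulus alone.
class ForagingController:
    def __init__(self, satiety_threshold=3):
        self.food_eaten = 0
        self.satiety_threshold = satiety_threshold

    @property
    def state(self):
        return "hungry" if self.food_eaten < self.satiety_threshold else "sated"

    def respond(self, stimulus):
        if stimulus == "food":
            if self.state == "hungry":
                self.food_eaten += 1   # consuming food updates the state
                return "eat"
            return "ignore"
        return "explore"

agent = ForagingController()
actions = [agent.respond("food") for _ in range(5)]
# Identical stimuli, diverging actions once the internal state switches.
```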
Affiliation(s)
- Drew N Robson
- Max Planck Institute for Biological Cybernetics, Tuebingen, Germany
- Jennifer M Li
- Max Planck Institute for Biological Cybernetics, Tuebingen, Germany
15
Urai AE, Doiron B, Leifer AM, Churchland AK. Large-scale neural recordings call for new insights to link brain and behavior. Nat Neurosci 2022; 25:11-19. [PMID: 34980926] [DOI: 10.1038/s41593-021-00980-9]
Abstract
Neuroscientists today can measure activity from more neurons than ever before, and are facing the challenge of connecting these brain-wide neural recordings to computation and behavior. In the present review, we first describe emerging tools and technologies being used to probe large-scale brain activity and new approaches to characterize behavior in the context of such measurements. We next highlight insights obtained from large-scale neural recordings in diverse model systems, and argue that some of these pose a challenge to traditional theoretical frameworks. Finally, we elaborate on existing modeling frameworks to interpret these data, and argue that the interpretation of brain-wide neural recordings calls for new theoretical approaches that may depend on the desired level of understanding. These advances in both neural recordings and theory development will pave the way for critical advances in our understanding of the brain.
Affiliation(s)
- Anne E Urai
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Cognitive Psychology Unit, Leiden University, Leiden, The Netherlands
- Anne K Churchland
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- University of California Los Angeles, Los Angeles, CA, USA
16
Kragel JE, Voss JL. Looking for the neural basis of memory. Trends Cogn Sci 2022; 26:53-65. [PMID: 34836769] [PMCID: PMC8678329] [DOI: 10.1016/j.tics.2021.10.010]
Abstract
Memory neuroscientists often measure neural activity during task trials designed to recruit specific memory processes. Behavior is championed as crucial for deciphering brain-memory linkages but is impoverished in typical experiments that rely on summary judgments. We criticize this approach as being blind to the multiple cognitive, neural, and behavioral processes that occur rapidly within a trial to support memory. Instead, time-resolved behaviors such as eye movements occur at the speed of cognition and neural activity. We highlight successes using eye-movement tracking with in vivo electrophysiology to link rapid hippocampal oscillations to encoding and retrieval processes that interact over hundreds of milliseconds. This approach will improve research on the neural basis of memory because it pinpoints discrete moments of brain-behavior-cognition correspondence.
Affiliation(s)
- James E Kragel
- Department of Neurology, The University of Chicago, 5841 South Maryland Avenue, Chicago, IL 60637, USA
- Joel L Voss
- Department of Neurology, The University of Chicago, 5841 South Maryland Avenue, Chicago, IL 60637, USA
17
Functional ultrasound imaging: A useful tool for functional connectomics? Neuroimage 2021; 245:118722. [PMID: 34800662] [DOI: 10.1016/j.neuroimage.2021.118722]
Abstract
Functional ultrasound (fUS) is a hemodynamic-based functional neuroimaging technique, primarily used in animal models, that combines a high spatiotemporal resolution, a large field of view, and compatibility with behavior. These assets make fUS especially suited to interrogating brain activity at the systems level. In this review, we describe the technical capabilities offered by fUS and discuss how this technique can contribute to the field of functional connectomics. First, fUS can be used to study intrinsic functional connectivity, namely patterns of correlated activity between brain regions. In this area, fUS has made the most impact by following connectivity changes in disease models, across behavioral states, or dynamically. Second, fUS can also be used to map brain-wide pathways associated with an external event. For example, fUS has helped obtain finer descriptions of several sensory systems, and uncover new pathways implicated in specific behaviors. Additionally, combining fUS with direct circuit manipulations such as optogenetics is an attractive way to map the brain-wide connections of defined neuronal populations. Finally, technological improvements and the application of new analytical tools promise to boost fUS capabilities. As brain coverage and the range of behavioral contexts that can be addressed with fUS keep on increasing, we believe that fUS-guided connectomics will only expand in the future. In this regard, we consider the incorporation of fUS into multimodal studies combining diverse techniques and behavioral tasks to be the most promising research avenue.
18
Hennig JA, Oby ER, Losey DM, Batista AP, Yu BM, Chase SM. How learning unfolds in the brain: toward an optimization view. Neuron 2021; 109:3720-3735. [PMID: 34648749] [PMCID: PMC8639641] [DOI: 10.1016/j.neuron.2021.09.005]
Abstract
How do changes in the brain lead to learning? To answer this question, consider an artificial neural network (ANN), where learning proceeds by optimizing a given objective or cost function. This "optimization framework" may provide new insights into how the brain learns, as many idiosyncratic features of neural activity can be recapitulated by an ANN trained to perform the same task. Nevertheless, there are key features of how neural population activity changes throughout learning that cannot be readily explained in terms of optimization and are not typically features of ANNs. Here we detail three of these features: (1) the inflexibility of neural variability throughout learning, (2) the use of multiple learning processes even during simple tasks, and (3) the presence of large task-nonspecific activity changes. We propose that understanding the role of these features in the brain will be key to describing biological learning using an optimization framework.
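The "optimization framework" described in this abstract can be made concrete with a minimal sketch: a toy network whose weights are adjusted by gradient descent on a fixed objective (cost) function. The task, parameter values, and variable names below are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

# Minimal illustration of learning as optimization: gradient descent on a
# fixed mean-squared-error objective. Toy task and all names are
# illustrative, not from the paper.
rng = np.random.default_rng(0)

# Toy task: map random inputs x to targets y = W_true @ x.
W_true = rng.normal(size=(2, 5))
X = rng.normal(size=(100, 5))
Y = X @ W_true.T

W = np.zeros((2, 5))           # "synaptic weights" to be learned
lr = 0.05                      # learning rate

def loss(W):
    """Mean squared error: the objective being optimized."""
    return np.mean((X @ W.T - Y) ** 2)

losses = [loss(W)]
for _ in range(200):
    # Gradient of the squared-error objective with respect to W
    # (scaled by sample count; a constant factor only rescales lr).
    grad = 2 * (X @ W.T - Y).T @ X / len(X)
    W -= lr * grad             # one optimization step
    losses.append(loss(W))

print(losses[0], losses[-1])   # the objective decreases as learning proceeds
```

In this framing, "how learning unfolds" is the trajectory of `losses`; the features the authors highlight are ways in which biological learning trajectories deviate from such idealized descent.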
Affiliation(s)
- Jay A Hennig
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Emily R Oby
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Darby M Losey
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Aaron P Batista
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Steven M Chase
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
19
Chronic nicotine increases midbrain dopamine neuron activity and biases individual strategies towards reduced exploration in mice. Nat Commun 2021; 12:6945. [PMID: 34836948] [PMCID: PMC8635406] [DOI: 10.1038/s41467-021-27268-7]
Abstract
Long-term exposure to nicotine alters brain circuits and induces profound changes in decision-making strategies, affecting behaviors both related and unrelated to drug seeking and consumption. Using an intracranial self-stimulation reward-based foraging task, we investigated in mice the impact of chronic nicotine on midbrain dopamine neuron activity and its consequence on the trade-off between exploitation and exploration. Model-based and archetypal analysis revealed substantial inter-individual variability in decision-making strategies, with mice passively exposed to nicotine shifting toward a more exploitative profile compared to non-exposed animals. We then mimicked the effect of chronic nicotine on the tonic activity of dopamine neurons using optogenetics, and found that photo-stimulated mice adopted a behavioral phenotype similar to that of mice exposed to chronic nicotine. Our results reveal a key role of tonic midbrain dopamine in the exploration/exploitation trade-off and highlight a potential mechanism by which nicotine affects the exploration/exploitation balance and decision-making.
20
Macpherson T, Churchland A, Sejnowski T, DiCarlo J, Kamitani Y, Takahashi H, Hikida T. Natural and Artificial Intelligence: A brief introduction to the interplay between AI and neuroscience research. Neural Netw 2021; 144:603-613. [PMID: 34649035] [DOI: 10.1016/j.neunet.2021.09.018]
Abstract
Neuroscience and artificial intelligence (AI) share a long history of collaboration. Advances in neuroscience, alongside huge leaps in computer processing power over the last few decades, have given rise to a new generation of in silico neural networks inspired by the architecture of the brain. These AI systems are now capable of many of the advanced perceptual and cognitive abilities of biological systems, including object recognition and decision making. Moreover, AI is now increasingly being employed as a tool for neuroscience research and is transforming our understanding of brain functions. In particular, deep learning has been used to model how convolutional layers and recurrent connections in the brain's cerebral cortex control important functions, including visual processing, memory, and motor control. Excitingly, the use of neuroscience-inspired AI also holds great promise for understanding how changes in brain networks result in psychopathologies, and could even be utilized in treatment regimes. Here we discuss recent advancements in four areas in which the relationship between neuroscience and AI has led to major progress: (1) AI models of working memory, (2) AI visual processing, (3) AI analysis of big neuroscience datasets, and (4) computational psychiatry.
Affiliation(s)
- Tom Macpherson
- Laboratory for Advanced Brain Functions, Institute for Protein Research, Osaka University, Osaka, Japan
- Anne Churchland
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
- Terry Sejnowski
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, CA, USA; Division of Biological Sciences, University of California San Diego, CA, USA
- James DiCarlo
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, MA, USA
- Yukiyasu Kamitani
- Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, Kyoto, Japan; Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Hidehiko Takahashi
- Department of Psychiatry and Behavioral Sciences, Tokyo Medical and Dental University Graduate School, Tokyo, Japan
- Takatoshi Hikida
- Laboratory for Advanced Brain Functions, Institute for Protein Research, Osaka University, Osaka, Japan
21
McCullough MH, Goodhill GJ. Unsupervised quantification of naturalistic animal behaviors for gaining insight into the brain. Curr Opin Neurobiol 2021; 70:89-100. [PMID: 34482006] [DOI: 10.1016/j.conb.2021.07.014]
Abstract
Neural computation has evolved to optimize the behaviors that enable our survival. Although much previous work in neuroscience has focused on constrained task behaviors, recent advances in computer vision are fueling a trend toward the study of naturalistic behaviors. Automated tracking of fine-scale behaviors is generating rich datasets for animal models including rodents, fruit flies, zebrafish, and worms. However, extracting meaning from these large and complex data often requires sophisticated computational techniques. Here we review the latest methods and modeling approaches providing new insights into the brain from behavior. We focus on unsupervised methods for identifying stereotyped behaviors and for resolving details of the structure and dynamics of behavioral sequences.
Affiliation(s)
- Michael H McCullough
- Queensland Brain Institute, The University of Queensland, Brisbane, Queensland, 4072, Australia
- Geoffrey J Goodhill
- Queensland Brain Institute, The University of Queensland, Brisbane, Queensland, 4072, Australia; School of Mathematics and Physics, The University of Queensland, Brisbane, Queensland, 4072, Australia
22
Lanore F, Cayco-Gajic NA, Gurnani H, Coyle D, Silver RA. Cerebellar granule cell axons support high-dimensional representations. Nat Neurosci 2021; 24:1142-1150. [PMID: 34168340] [PMCID: PMC7611462] [DOI: 10.1038/s41593-021-00873-x]
Abstract
In classical theories of cerebellar cortex, high-dimensional sensorimotor representations are used to separate neuronal activity patterns, improving associative learning and motor performance. Recent experimental studies suggest that cerebellar granule cell (GrC) population activity is low-dimensional. To examine sensorimotor representations from the point of view of downstream Purkinje cell 'decoders', we used three-dimensional acousto-optic lens two-photon microscopy to record from hundreds of GrC axons. Here we show that GrC axon population activity is high dimensional and distributed with little fine-scale spatial structure during spontaneous behaviors. Moreover, distinct behavioral states are represented along orthogonal dimensions in neuronal activity space. These results suggest that the cerebellar cortex supports high-dimensional representations and segregates behavioral state-dependent computations into orthogonal subspaces, as reported in the neocortex. Our findings match the predictions of cerebellar pattern separation theories and suggest that the cerebellum and neocortex use population codes with common features, despite their vastly different circuit structures.
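One standard way to quantify the low- versus high-dimensional distinction discussed in this abstract is the participation ratio of the eigenvalues of the population covariance. The abstract does not state which dimensionality measure the authors used, so the surrogate data and this particular metric are our own illustration.

```python
import numpy as np

# Participation ratio of covariance eigenvalues: a common scalar summary
# of the dimensionality of population activity. Surrogate data and the
# choice of metric are illustrative, not taken from the paper.
rng = np.random.default_rng(3)

n_axons, n_timepoints = 200, 1000

# Low-dimensional surrogate: 200 "axons" driven by 3 shared latent signals.
latents = rng.normal(size=(3, n_timepoints))
low_d = rng.normal(size=(n_axons, 3)) @ latents \
        + 0.1 * rng.normal(size=(n_axons, n_timepoints))

# High-dimensional surrogate: independent activity in every axon.
high_d = rng.normal(size=(n_axons, n_timepoints))

def participation_ratio(X):
    """(sum of eigenvalues)^2 / (sum of squared eigenvalues) of cov(X)."""
    lam = np.linalg.eigvalsh(np.cov(X))   # rows of X are variables
    return lam.sum() ** 2 / (lam ** 2).sum()

print(participation_ratio(low_d))    # close to 3: dominated by the 3 latents
print(participation_ratio(high_d))   # a large fraction of n_axons
```

By this measure, "high-dimensional" population activity is activity whose variance is spread over many covariance eigenvalues rather than concentrated in a few.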
Affiliation(s)
- Frederic Lanore
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
- University of Bordeaux, CNRS, Interdisciplinary Institute for Neuroscience, IINS, UMR 5297, Bordeaux, France
- N Alex Cayco-Gajic
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
- Group for Neural Theory, Laboratoire de neurosciences cognitives et computationnelles, Département d'études cognitives, École normale supérieure, INSERM U960, Université Paris Sciences et Lettres, Paris, France
- Harsha Gurnani
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
- Diccon Coyle
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
- R Angus Silver
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
23
Meijer GT, Arlandis J, Urai AE. There is no mouse: using a virtual mouse to generate training data for video-based pose estimation. Lab Anim (NY) 2021; 50:172-173. [PMID: 34117377] [DOI: 10.1038/s41684-021-00794-z]
Affiliation(s)
- Anne E Urai
- Cognitive Psychology Unit, Institute of Psychology and Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands
24
Márton CD, Schultz SR, Averbeck BB. Learning to select actions shapes recurrent dynamics in the corticostriatal system. Neural Netw 2020; 132:375-393. [PMID: 32992244] [PMCID: PMC7685243] [DOI: 10.1016/j.neunet.2020.09.008]
Abstract
Learning to select appropriate actions based on their values is fundamental to adaptive behavior. This form of learning is supported by fronto-striatal systems. The dorsal-lateral prefrontal cortex (dlPFC) and the dorsal striatum (dSTR), which are strongly interconnected, are key nodes in this circuitry. Substantial experimental evidence, including neurophysiological recordings, has shown that neurons in these structures represent key aspects of learning. The computational mechanisms that shape the neurophysiological responses, however, are not clear. To examine this, we developed a recurrent neural network (RNN) model of the dlPFC-dSTR circuit and trained it on an oculomotor sequence learning task. We compared the activity generated by the model to activity recorded from monkey dlPFC and dSTR in the same task. The network consisted of a striatal component that encoded action values and a prefrontal component that selected appropriate actions. After training, this system was able to autonomously represent and update action values and select actions, closely approximating the representational structure in corticostriatal recordings. We found that learning to select the correct actions drove action-sequence representations further apart in activity space, both in the model and in the neural data; this growing separation makes it more likely that the appropriate action sequence is selected as learning develops. Our model thus supports the hypothesis that learning drives the neural representations of actions further apart, increasing the probability that the network generates correct actions as learning proceeds. Altogether, this study advances our understanding of how dynamics in the corticostriatal system support task learning.
Affiliation(s)
- Christian D Márton
- Centre for Neurotechnology & Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK; Laboratory of Neuropsychology, Section on Learning and Decision Making, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Simon R Schultz
- Centre for Neurotechnology & Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
- Bruno B Averbeck
- Laboratory of Neuropsychology, Section on Learning and Decision Making, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
25
Pollock E, Jazayeri M. Engineering recurrent neural networks from task-relevant manifolds and dynamics. PLoS Comput Biol 2020; 16:e1008128. [PMID: 32785228] [PMCID: PMC7446915] [DOI: 10.1371/journal.pcbi.1008128]
Abstract
Many cognitive processes involve transformations of distributed representations in neural populations, creating a need for population-level models. Recurrent neural network models fulfill this need, but there are many open questions about how their connectivity gives rise to dynamics that solve a task. Here, we present a method for finding the connectivity of networks for which the dynamics are specified to solve a task in an interpretable way. We apply our method to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold. We also use our method to demonstrate how inputs can be used to control network dynamics for cognitive flexibility and explore the relationship between representation geometry and network capacity. Our work fits within the broader context of understanding neural computations as dynamics over relatively low-dimensional manifolds formed by correlated patterns of neurons.
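The target dynamics mentioned in this abstract, a drift-diffusion process over a ring-shaped manifold, can be simulated directly in a few lines. This sketches only the specified dynamics, not the authors' method for synthesizing a recurrent network that implements them, and all parameter values below are arbitrary choices of ours.

```python
import numpy as np

# Illustrative simulation of drift-diffusion on a ring-shaped manifold,
# as in the abstract's working-memory example. Parameters are arbitrary.
rng = np.random.default_rng(1)

dt = 0.01          # time step (s)
drift = 0.5        # deterministic angular drift (rad/s)
sigma = 0.2        # diffusion strength (rad/sqrt(s))
theta = 0.0        # angular state on the ring

trajectory = [theta]
for _ in range(1000):
    # Euler-Maruyama step: drift plus Gaussian diffusion noise.
    theta += drift * dt + sigma * np.sqrt(dt) * rng.normal()
    theta %= 2 * np.pi             # wrap around the ring
    trajectory.append(theta)

# The ring can be embedded in neural state space as (cos θ, sin θ): a
# one-dimensional manifold traced out by correlated population activity.
embedding = np.column_stack([np.cos(trajectory), np.sin(trajectory)])
print(embedding.shape)             # (1001, 2)
```

The authors' contribution is the inverse problem: given such a manifold and flow field, find recurrent connectivity whose network dynamics reproduce them.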
Affiliation(s)
- Eli Pollock
- Department of Brain & Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Mehrdad Jazayeri
- Department of Brain & Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
26
Bermudez-Contreras E, Clark BJ, Wilber A. The Neuroscience of Spatial Navigation and the Relationship to Artificial Intelligence. Front Comput Neurosci 2020; 14:63. [PMID: 32848684] [PMCID: PMC7399088] [DOI: 10.3389/fncom.2020.00063]
Abstract
Recent advances in artificial intelligence (AI) and neuroscience are impressive. In AI, this includes the development of computer programs that can beat a grandmaster at Go or outperform human radiologists at cancer detection. Much of this technological development is directly related to progress in artificial neural networks, initially inspired by our knowledge about how the brain carries out computation. In parallel, neuroscience has also experienced significant advances in understanding the brain. For example, in the field of spatial navigation, work on the mechanisms and brain regions involved in neural computations of cognitive maps (an internal representation of space) was recently recognized with the Nobel Prize in medicine. Much of the recent progress in neuroscience has been due in part to the development of technology used to record from very large populations of neurons in multiple regions of the brain, with exquisite temporal and spatial resolution, in behaving animals. With the vast quantities of data that these techniques allow us to collect, there has been an increased interest in the intersection between AI and neuroscience; many of these intersections involve using AI as a novel tool to explore and analyze these large data sets. However, given the common initial motivation (to understand the brain), these disciplines could be more strongly linked, and currently much of this potential synergy is not being realized. We propose that spatial navigation is an excellent area in which these two disciplines can converge to help advance what we know about the brain. In this review, we first summarize progress in the neuroscience of spatial navigation and reinforcement learning. We then discuss how spatial navigation has been modeled using descriptive, mechanistic, and normative approaches, and the use of AI in such models. Next, we discuss how AI can advance neuroscience, how neuroscience can advance AI, and the limitations of these approaches. We conclude by highlighting promising lines of research in which spatial navigation can be the point of intersection between neuroscience and AI, and how this can contribute to our understanding of intelligent behavior.
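The reinforcement-learning framing of spatial navigation that this review summarizes can be illustrated with a minimal tabular Q-learning agent on a toy linear track. The environment, parameters, and names below are our own illustrative choices, unrelated to any specific model in the review.

```python
import numpy as np

# Minimal tabular Q-learning agent navigating a toy linear track:
# a sketch of the reinforcement-learning view of spatial navigation.
rng = np.random.default_rng(2)

n_states, n_actions = 5, 2        # track positions; actions: 0 = left, 1 = right
goal = n_states - 1               # reward delivered at the rightmost position
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2 # learning rate, discount, exploration rate

def step(s, a):
    """Move left or right along the track; reward 1 on reaching the goal."""
    s2 = min(n_states - 1, s + 1) if a == 1 else max(0, s - 1)
    return s2, float(s2 == goal)

for _ in range(500):                          # training episodes
    s = int(rng.integers(n_states - 1))       # random non-goal start
    while s != goal:
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Temporal-difference update toward the Bellman target; the goal
        # state is terminal, so its future value is zero.
        target = r + gamma * np.max(Q[s2]) * (s2 != goal)
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1))       # greedy policy: move right from every non-goal state
```

Normative navigation models in this spirit treat place representations as a state space over which such value updates operate.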
Affiliation(s)
- Benjamin J. Clark
- Department of Psychology, University of New Mexico, Albuquerque, NM, United States
- Aaron Wilber
- Department of Psychology, Program in Neuroscience, Florida State University, Tallahassee, FL, United States