1
Aldarondo D, Merel J, Marshall JD, Hasenclever L, Klibaite U, Gellis A, Tassa Y, Wayne G, Botvinick M, Ölveczky BP. A virtual rodent predicts the structure of neural activity across behaviours. Nature 2024; 632:594-602. [PMID: 38862024] [DOI: 10.1038/s41586-024-07633-4]
Abstract
Animals have exquisite control of their bodies, allowing them to perform a diverse range of behaviours. How such control is implemented by the brain, however, remains unclear. Advancing our understanding requires models that can relate principles of control to the structure of neural activity in behaving animals. Here, to facilitate this, we built a 'virtual rodent', in which an artificial neural network actuates a biomechanically realistic model of the rat [1] in a physics simulator [2]. We used deep reinforcement learning [3-5] to train the virtual agent to imitate the behaviour of freely moving rats, thus allowing us to compare neural activity recorded in real rats to the network activity of a virtual rodent mimicking their behaviour. We found that neural activity in the sensorimotor striatum and motor cortex was better predicted by the virtual rodent's network activity than by any features of the real rat's movements, consistent with both regions implementing inverse dynamics [6]. Furthermore, the network's latent variability predicted the structure of neural variability across behaviours and afforded robustness in a way consistent with the minimal intervention principle of optimal feedback control [7]. These results demonstrate how physical simulation of biomechanically realistic virtual animals can help interpret the structure of neural activity across behaviour and relate it to theoretical principles of motor control.
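The core comparison described above — asking whether a network's internal activity predicts recorded neural activity better than movement features do — can be illustrated with a toy encoding-model analysis. This is a minimal sketch on synthetic data, not the authors' pipeline: the data generation, the ridge regressor, and the train/test split below are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: latent network activity Z drives the recorded
# "neural" activity Y, while movement features K carry only partial,
# nonlinearly transformed information about the same latents.
T, n_latents, n_neurons = 400, 8, 5
Z = rng.standard_normal((T, n_latents))
W = rng.standard_normal((n_latents, n_neurons))
Y = Z @ W + 0.1 * rng.standard_normal((T, n_neurons))
K = np.tanh(Z[:, :3])  # impoverished kinematic features

def ridge_r2(X, Y, alpha=1.0, split=200):
    """Fit ridge regression on the first `split` samples, return held-out R^2."""
    Xtr, Xte, Ytr, Yte = X[:split], X[split:], Y[:split], Y[split:]
    W_hat = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ Ytr)
    resid = Yte - Xte @ W_hat
    return 1.0 - resid.var() / Yte.var()

r2_network = ridge_r2(Z, Y)
r2_kinematics = ridge_r2(K, Y)
print(f"R^2 from network latents:   {r2_network:.3f}")
print(f"R^2 from movement features: {r2_kinematics:.3f}")
```

In this synthetic setup the latent-based model wins by construction; in the paper, the analogous comparison is made against real recordings from sensorimotor striatum and motor cortex.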
Affiliation(s)
- Diego Aldarondo
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA.
- Fauna Robotics, New York, NY, USA.
- Josh Merel
- DeepMind, Google, London, UK
- Fauna Robotics, New York, NY, USA
- Jesse D Marshall
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Reality Labs, Meta, New York, NY, USA
- Ugne Klibaite
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Amanda Gellis
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Matthew Botvinick
- DeepMind, Google, London, UK
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Bence P Ölveczky
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA.
2
Cutler B, Haesemeyer M. Vertebrate behavioral thermoregulation: knowledge and future directions. Neurophotonics 2024; 11:033409. [PMID: 38769950] [PMCID: PMC11105118] [DOI: 10.1117/1.nph.11.3.033409]
Abstract
Thermoregulation is critical for survival across species. In animals, the nervous system detects external and internal temperatures, integrates this information with internal states, and ultimately forms a decision on appropriate thermoregulatory actions. Recent work has identified critical molecules and sensory and motor pathways controlling thermoregulation. However, especially with regard to behavioral thermoregulation, many open questions remain. Here, we aim to both summarize the current state of research, the "knowledge," as well as what in our mind is still largely missing, the "future directions." Given the host of circuit entry points that have been discovered, we specifically see that the time is ripe for a neuro-computational perspective on thermoregulation. Such a perspective is largely lacking but is increasingly fueled and made possible by the development of advanced tools and modeling strategies.
Affiliation(s)
- Bradley Cutler
- Graduate program in Molecular, Cellular and Developmental Biology, Columbus, Ohio, United States
- The Ohio State University, Columbus, Ohio, United States
3
Marin Vargas A, Bisi A, Chiappa AS, Versteeg C, Miller LE, Mathis A. Task-driven neural network models predict neural dynamics of proprioception. Cell 2024; 187:1745-1761.e19. [PMID: 38518772] [DOI: 10.1016/j.cell.2024.02.036]
Abstract
Proprioception tells the brain the state of the body based on distributed sensory neurons. Yet, the principles that govern proprioceptive processing are poorly understood. Here, we employ a task-driven modeling approach to investigate the neural code of proprioceptive neurons in cuneate nucleus (CN) and somatosensory cortex area 2 (S1). We simulated muscle spindle signals through musculoskeletal modeling and generated a large-scale movement repertoire to train neural networks based on 16 hypotheses, each representing different computational goals. We found that the emerging, task-optimized internal representations generalize from synthetic data to predict neural dynamics in CN and S1 of primates. Computational tasks that aim to predict the limb position and velocity were the best at predicting the neural activity in both areas. Since task optimization develops representations that better predict neural activity during active than passive movements, we postulate that neural activity in the CN and S1 is top-down modulated during goal-directed movements.
Affiliation(s)
- Alessandro Marin Vargas
- Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland; NeuroX Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
- Axel Bisi
- Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland; NeuroX Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
- Alberto S Chiappa
- Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland; NeuroX Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
- Chris Versteeg
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Biomedical Engineering, McCormick School of Engineering, Northwestern University, Evanston, IL 60208, USA; Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Lee E Miller
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Biomedical Engineering, McCormick School of Engineering, Northwestern University, Evanston, IL 60208, USA; Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Alexander Mathis
- Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland; NeuroX Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland.
4
Ichikawa K, Kaneko K. Bayesian inference is facilitated by modular neural networks with different time scales. PLoS Comput Biol 2024; 20:e1011897. [PMID: 38478575] [PMCID: PMC10962854] [DOI: 10.1371/journal.pcbi.1011897]
Abstract
Various animals, including humans, have been suggested to perform Bayesian inferences to handle noisy, time-varying external information. In performing Bayesian inference by the brain, the prior distribution must be acquired and represented by sampling noisy external inputs. However, the mechanism by which neural activities represent such distributions has not yet been elucidated. Our findings reveal that networks with modular structures, composed of fast and slow modules, are adept at representing this prior distribution, enabling more accurate Bayesian inferences. Specifically, the modular network that consists of a main module connected with input and output layers and a sub-module with slower neural activity connected only with the main module outperformed networks with uniform time scales. Prior information was represented specifically by the slow sub-module, which could integrate observed signals over an appropriate period and represent input means and variances. Accordingly, the neural network could effectively predict the time-varying inputs. Furthermore, by training the time scales of neurons starting from networks with uniform time scales and without modular structure, the above slow-fast modular network structure and the division of roles in which prior knowledge is selectively represented in the slow sub-modules spontaneously emerged. These results explain how the prior distribution for Bayesian inference is represented in the brain, provide insight into the relevance of modular structure with time scale hierarchy to information processing, and elucidate the significance of brain areas with slower time scales.
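The fast/slow division of labour described above can be illustrated with two leaky integrators reading the same noisy input stream. This is a toy sketch, not the authors' trained networks: the update rule, time constants, and input statistics below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stream of noisy observations x_t ~ N(mu, sigma^2).
mu, sigma, T = 1.5, 1.0, 4000
x = mu + sigma * rng.standard_normal(T)

def leaky_trace(x, tau):
    """Leaky integration: r_t = (1 - 1/tau) * r_{t-1} + (1/tau) * x_t."""
    r, out = 0.0, np.empty_like(x)
    for t, xt in enumerate(x):
        r += (xt - r) / tau
        out[t] = r
    return out

fast = leaky_trace(x, tau=2.0)    # tracks the instantaneous, noisy signal
slow = leaky_trace(x, tau=100.0)  # averages over a long window

# How well does each unit estimate the true input mean (the "prior"),
# after a burn-in period?
err_fast = np.abs(fast[1000:] - mu).mean()
err_slow = np.abs(slow[1000:] - mu).mean()
print(f"mean |error| fast unit: {err_fast:.3f}, slow unit: {err_slow:.3f}")
```

The slow unit's long time constant averages out observation noise and holds a stable estimate of the input mean — the role the paper attributes to the slow sub-module's representation of the prior, while fast units follow the time-varying signal.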
Affiliation(s)
- Kohei Ichikawa
- Department of Basic Science, Graduate School of Arts and Sciences, University of Tokyo, Meguro-ku, Tokyo, Japan
- Kunihiko Kaneko
- Research Center for Complex Systems Biology, University of Tokyo, Bunkyo-ku, Tokyo, Japan
- The Niels Bohr Institute, University of Copenhagen, Blegdamsvej, Copenhagen, Denmark
5
Sandbrink KJ, Mamidanna P, Michaelis C, Bethge M, Mathis MW, Mathis A. Contrasting action and posture coding with hierarchical deep neural network models of proprioception. eLife 2023; 12:e81499. [PMID: 37254843] [PMCID: PMC10361732] [DOI: 10.7554/elife.81499]
Abstract
Biological motor control is versatile, efficient, and depends on proprioceptive feedback. Muscles are flexible and undergo continuous changes, requiring distributed adaptive control mechanisms that continuously account for the body's state. The canonical role of proprioception is representing the body state. We hypothesize that the proprioceptive system could also be critical for high-level tasks such as action recognition. To test this theory, we pursued a task-driven modeling approach, which allowed us to isolate the study of proprioception. We generated a large synthetic dataset of human arm trajectories tracing characters of the Latin alphabet in 3D space, together with muscle activities obtained from a musculoskeletal model and model-based muscle spindle activity. Next, we compared two classes of tasks: trajectory decoding and action recognition, which allowed us to train hierarchical models to decode either the position and velocity of the end-effector of one's posture or the character (action) identity from the spindle firing patterns. We found that artificial neural networks could robustly solve both tasks, and the networks' units show tuning properties similar to neurons in the primate somatosensory cortex and the brainstem. Remarkably, we found uniformly distributed directional selective units only with the action-recognition-trained models and not the trajectory-decoding-trained models. This suggests that proprioceptive encoding is additionally associated with higher-level functions such as action recognition and therefore provides new, experimentally testable hypotheses of how proprioception aids in adaptive motor control.
Affiliation(s)
- Kai J Sandbrink
- The Rowland Institute at Harvard, Harvard University, Cambridge, United States
- Pranav Mamidanna
- Tübingen AI Center, Eberhard Karls Universität Tübingen & Institute for Theoretical Physics, Tübingen, Germany
- Claudio Michaelis
- Tübingen AI Center, Eberhard Karls Universität Tübingen & Institute for Theoretical Physics, Tübingen, Germany
- Matthias Bethge
- Tübingen AI Center, Eberhard Karls Universität Tübingen & Institute for Theoretical Physics, Tübingen, Germany
- Mackenzie Weygandt Mathis
- The Rowland Institute at Harvard, Harvard University, Cambridge, United States
- Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Genève, Switzerland
- Alexander Mathis
- The Rowland Institute at Harvard, Harvard University, Cambridge, United States
- Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Genève, Switzerland
6
Blevins AS, Bassett DS, Scott EK, Vanwalleghem GC. From calcium imaging to graph topology. Netw Neurosci 2022; 6:1125-1147. [PMID: 38800465] [PMCID: PMC11117109] [DOI: 10.1162/netn_a_00262]
Abstract
Systems neuroscience is facing an ever-growing mountain of data. Recent advances in protein engineering and microscopy have together led to a paradigm shift in neuroscience; using fluorescence, we can now image the activity of every neuron through the whole brain of behaving animals. Even in larger organisms, the number of neurons that we can record simultaneously is increasing exponentially with time. This increase in the dimensionality of the data is being met with an explosion of computational and mathematical methods, each using disparate terminology, distinct approaches, and diverse mathematical concepts. Here we collect, organize, and explain multiple data analysis techniques that have been, or could be, applied to whole-brain imaging, using larval zebrafish as an example model. We begin with methods such as linear regression that are designed to detect relations between two variables. Next, we progress through network science and applied topological methods, which focus on the patterns of relations among many variables. Finally, we highlight the potential of generative models that could provide testable hypotheses on wiring rules and network progression through time, or disease progression. While we use examples of imaging from larval zebrafish, these approaches are suitable for any population-scale neural network modeling, and indeed, to applications beyond systems neuroscience. Computational approaches from network science and applied topology are not limited to larval zebrafish, or even to systems neuroscience, and we therefore conclude with a discussion of how such methods can be applied to diverse problems across the biological sciences.
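The step from activity traces to "patterns of relations among many variables" can be sketched with the simplest graph-construction method in this family: thresholded pairwise correlations. This is an illustrative example on synthetic traces, not taken from the review; the two-group structure, noise level, and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "calcium traces": 10 neurons in two functional groups,
# each group driven by its own shared latent signal plus private noise.
T, n_per_group = 500, 5
drivers = rng.standard_normal((2, T))
traces = np.vstack([
    drivers[g] + 0.2 * rng.standard_normal((n_per_group, T))
    for g in range(2)
])  # shape (10, T): one row per neuron

# Functional connectivity graph: edge where |correlation| exceeds threshold.
corr = np.corrcoef(traces)
adj = (np.abs(corr) > 0.5).astype(int)
np.fill_diagonal(adj, 0)  # no self-edges

degree = adj.sum(axis=1)
print("degree sequence:", degree)
```

With this construction each neuron connects only to the other members of its group. Real pipelines typically choose the threshold with surrogate data or multiple-comparison correction rather than a fixed value; the resulting adjacency matrix is then the input to the network-science and topological analyses the review surveys.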
Affiliation(s)
- Ann S. Blevins
- Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Dani S. Bassett
- Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Electrical and Systems Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Department of Physics and Astronomy, College of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, USA
- Santa Fe Institute, Santa Fe, NM, USA
- Ethan K. Scott
- Queensland Brain Institute, University of Queensland, Brisbane, Australia
- Department of Anatomy and Physiology, School of Biomedical Sciences, University of Melbourne, Parkville, Australia
- Gilles C. Vanwalleghem
- Danish Research Institute of Translational Neuroscience (DANDRITE), Nordic EMBL Partnership for Molecular Medicine, Aarhus University, Aarhus, Denmark
- Department of Molecular Biology and Genetics, Aarhus University, Aarhus, Denmark
7
Hennig JA, Oby ER, Losey DM, Batista AP, Yu BM, Chase SM. How learning unfolds in the brain: toward an optimization view. Neuron 2021; 109:3720-3735. [PMID: 34648749] [PMCID: PMC8639641] [DOI: 10.1016/j.neuron.2021.09.005]
Abstract
How do changes in the brain lead to learning? To answer this question, consider an artificial neural network (ANN), where learning proceeds by optimizing a given objective or cost function. This "optimization framework" may provide new insights into how the brain learns, as many idiosyncratic features of neural activity can be recapitulated by an ANN trained to perform the same task. Nevertheless, there are key features of how neural population activity changes throughout learning that cannot be readily explained in terms of optimization and are not typically features of ANNs. Here we detail three of these features: (1) the inflexibility of neural variability throughout learning, (2) the use of multiple learning processes even during simple tasks, and (3) the presence of large task-nonspecific activity changes. We propose that understanding the role of these features in the brain will be key to describing biological learning using an optimization framework.
Affiliation(s)
- Jay A Hennig
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA.
- Emily R Oby
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Darby M Losey
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Aaron P Batista
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Steven M Chase
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
8
Mano O, Creamer MS, Badwan BA, Clark DA. Predicting individual neuron responses with anatomically constrained task optimization. Curr Biol 2021; 31:4062-4075.e4. [PMID: 34324832] [DOI: 10.1016/j.cub.2021.06.090]
Abstract
Artificial neural networks trained to solve sensory tasks can develop statistical representations that match those in biological circuits. However, it remains unclear whether they can reproduce properties of individual neurons. Here, we investigated how artificial networks predict individual neuron properties in the visual motion circuits of the fruit fly Drosophila. We trained anatomically constrained networks to predict movement in natural scenes, solving the same inference problem as fly motion detectors. Units in the artificial networks adopted many properties of analogous individual neurons, even though they were not explicitly trained to match these properties. Among these properties was the split into ON and OFF motion detectors, which is not predicted by classical motion detection models. The match between model and neurons was closest when models were trained to be robust to noise. These results demonstrate how anatomical, task, and noise constraints can explain properties of individual neurons in a small neural network.
Affiliation(s)
- Omer Mano
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA
- Matthew S Creamer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Bara A Badwan
- School of Engineering and Applied Science, Yale University, New Haven, CT 06511, USA
- Damon A Clark
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA.
9
Hausmann SB, Vargas AM, Mathis A, Mathis MW. Measuring and modeling the motor system with machine learning. Curr Opin Neurobiol 2021; 70:11-23. [PMID: 34116423] [DOI: 10.1016/j.conb.2021.04.004]
Abstract
The utility of machine learning in understanding the motor system is promising a revolution in how to collect, measure, and analyze data. The field of movement science already elegantly incorporates theory and engineering principles to guide experimental work, and in this review we discuss the growing use of machine learning: from pose estimation, kinematic analyses, dimensionality reduction, and closed-loop feedback, to its use in understanding neural correlates and untangling sensorimotor systems. We also give our perspective on new avenues, where markerless motion capture combined with biomechanical modeling and neural networks could be a new platform for hypothesis-driven research.
Affiliation(s)
- Alexander Mathis
- EPFL, Swiss Federal Institute of Technology, Lausanne, Switzerland.
10
Abstract
Thermoregulation is critical for survival and animals therefore employ strategies to keep their body temperature within a physiological range. As ectotherms, fish exclusively rely on behavioral strategies for thermoregulation. Different species of fish seek out their specific optimal temperatures through thermal navigation by biasing behavioral output based on experienced environmental temperatures. Like other vertebrates, fish sense water temperature using thermoreceptors in trigeminal and dorsal root ganglia neurons that innervate the skin. Recent research in larval zebrafish has revealed how neural circuits subsequently transform this sensation of temperature into thermoregulatory behaviors. Across fish species, thermoregulatory strategies rely on a modulation of swim vigor based on current temperature and a modulation of turning based on temperature change. Interestingly, temperature preferences are not fixed but depend on other environmental cues and internal states. The following review is intended as an overview on the current knowledge as well as open questions in fish thermoregulation.
Affiliation(s)
- Martin Haesemeyer
- The Ohio State University College of Medicine, Department of Neuroscience, Columbus, OH, USA.
11
Biswas T, Bishop WE, Fitzgerald JE. Theoretical principles for illuminating sensorimotor processing with brain-wide neuronal recordings. Curr Opin Neurobiol 2020; 65:138-145. [PMID: 33248437] [PMCID: PMC8754199] [DOI: 10.1016/j.conb.2020.10.021]
Abstract
Modern recording techniques now permit brain-wide sensorimotor circuits to be observed at single neuron resolution in small animals. Extracting theoretical understanding from these recordings requires principles that organize findings and guide future experiments. Here we review theoretical principles that shed light onto brain-wide sensorimotor processing. We begin with an analogy that conceptualizes principles as streetlamps that illuminate the empirical terrain, and we illustrate the analogy by showing how two familiar principles apply in new ways to brain-wide phenomena. We then focus the bulk of the review on describing three more principles that have wide utility for mapping brain-wide neural activity, making testable predictions from highly parameterized mechanistic models, and investigating the computational determinants of neuronal response patterns across the brain.
Affiliation(s)
- Tirthabir Biswas
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- William E Bishop
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA.
12
Loring MD, Thomson EE, Naumann EA. Whole-brain interactions underlying zebrafish behavior. Curr Opin Neurobiol 2020; 65:88-99. [PMID: 33221591] [PMCID: PMC10697041] [DOI: 10.1016/j.conb.2020.09.011]
Abstract
Detailed quantification of neural dynamics across the entire brain will be the key to genuinely understanding perception and behavior. With the recent developments in microscopy and biosensor engineering, the zebrafish has made a grand entrance in neuroscience as its small size and optical transparency enable imaging access to its entire brain at cellular and even subcellular resolution. However, until recently many neurobiological insights were largely correlational or provided little mechanistic insight into the brain-wide population dynamics generated by diverse types of neurons. Now with increasingly sophisticated behavioral, imaging, and causal intervention paradigms, zebrafish are revealing how entire vertebrate brains function. Here we review recent research that fulfills promises made by the early wave of technical advances. These studies reveal new features of brain-wide neural processing and the importance of integrative investigation and computational modelling. Moreover, we outline the future tools necessary for solving broader brain-scale circuit problems.
Affiliation(s)
- Matthew D Loring
- Duke School of Medicine, Department of Neurobiology, Durham, NC 27710, United States
- Eric E Thomson
- Duke School of Medicine, Department of Neurobiology, Durham, NC 27710, United States
- Eva A Naumann
- Duke School of Medicine, Department of Neurobiology, Durham, NC 27710, United States.
13
Ahrens MB. Zebrafish Neuroscience: Using Artificial Neural Networks to Help Understand Brains. Curr Biol 2019; 29:R1138-R1140. [PMID: 31689401] [DOI: 10.1016/j.cub.2019.09.039]
Abstract
Brains are notoriously hard to understand, and neuroscientists need all the tools they can get their hands on to have a realistic shot at it. Advances in machine learning are proving instrumental, illustrated by their recent use to shed light on navigational strategies implemented by zebrafish brains.
Affiliation(s)
- Misha B Ahrens
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA.
14
Musall S, Urai AE, Sussillo D, Churchland AK. Harnessing behavioral diversity to understand neural computations for cognition. Curr Opin Neurobiol 2019; 58:229-238. [PMID: 31670073] [PMCID: PMC6931281] [DOI: 10.1016/j.conb.2019.09.011]
Abstract
With the increasing acquisition of large-scale neural recordings comes the challenge of inferring the computations they perform and understanding how these give rise to behavior. Here, we review emerging conceptual and technological advances that begin to address this challenge, garnering insights from both biological and artificial neural networks. We argue that neural data should be recorded during rich behavioral tasks, to model cognitive processes and estimate latent behavioral variables. Careful quantification of animal movements can also provide a more complete picture of how movements shape neural dynamics and reflect changes in brain state, such as arousal or stress. Artificial neural networks (ANNs) could serve as artificial model organisms to connect neural dynamics and rich behavioral data. ANNs have already begun to reveal how a wide range of different behaviors can be implemented, generating hypotheses about how observed neural activity might drive behavior and explaining diversity in behavioral strategies.
Affiliation(s)
- Simon Musall
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
- Anne E Urai
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
- David Sussillo
- Google AI, Google, Inc., Mountain View, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
- Anne K Churchland
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA.