1
Zhang R, Pitkow X, Angelaki DE. Inductive biases of neural network modularity in spatial navigation. Sci Adv 2024; 10:eadk1256. [PMID: 39028809] [PMCID: PMC11259174] [DOI: 10.1126/sciadv.adk1256]
Abstract
The brain may have evolved a modular architecture for daily tasks, with circuits featuring functionally specialized modules that match the task structure. We hypothesize that this architecture enables better learning and generalization than architectures with less specialized modules. To test this, we trained reinforcement learning agents with various neural architectures on a naturalistic navigation task. We found that the modular agent, with an architecture that segregates computations of state representation, value, and action into specialized modules, achieved better learning and generalization. Its learned state representation combines prediction and observation, weighted by their relative uncertainty, akin to recursive Bayesian estimation. This agent's behavior also resembles macaques' behavior more closely. Our results shed light on the possible rationale for the brain's modularity and suggest that artificial systems can use this insight from neuroscience to improve learning and generalization in natural tasks.
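As an illustration of the uncertainty-weighted combination of prediction and observation that this abstract alludes to, here is a minimal one-dimensional Python sketch of recursive Bayesian estimation (a Kalman-style update). The dynamics, noise levels, and function names are illustrative assumptions, not the paper's agent architecture.

```python
import numpy as np

def bayes_update(pred_mean, pred_var, obs, obs_var):
    # Weight prediction and observation by their relative uncertainty
    # (inverse variance), as in a one-dimensional Kalman filter.
    gain = pred_var / (pred_var + obs_var)   # more weight on the observation
                                             # when the prediction is uncertain
    mean = pred_mean + gain * (obs - pred_mean)
    var = (1.0 - gain) * pred_var            # posterior is tighter than either source
    return mean, var

rng = np.random.default_rng(0)
true_pos, est_mean, est_var = 0.0, 0.0, 1.0
for t in range(50):
    true_pos += 0.1                          # constant self-motion per step
    pred_mean = est_mean + 0.1               # predict forward with the internal model
    pred_var = est_var + 0.05                # prediction uncertainty grows over time
    obs = true_pos + rng.normal(0.0, 0.3)    # noisy observation of position
    est_mean, est_var = bayes_update(pred_mean, pred_var, obs, 0.3 ** 2)
print(f"final estimate {est_mean:.2f} vs true position {true_pos:.2f}")
```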
Affiliation(s)
- Ruiyi Zhang
- Tandon School of Engineering, New York University, New York, NY, USA
- Xaq Pitkow
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Dora E. Angelaki
- Tandon School of Engineering, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
2
Noel JP, Balzani E, Savin C, Angelaki DE. Context-invariant beliefs are supported by dynamic reconfiguration of single unit functional connectivity in prefrontal cortex of male macaques. Nat Commun 2024; 15:5738. [PMID: 38982106] [PMCID: PMC11233555] [DOI: 10.1038/s41467-024-50203-5]
Abstract
Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here, we have male macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals, or by more heavily relying on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and the dorso-lateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grain statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area's ability for cross-context decoding. Lastly, correlational analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less population codes and behavior were impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between neurons in prefrontal cortex maintains a stable population code and context-invariant beliefs during naturalistic behavior.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, USA.
- Department of Neuroscience, University of Minnesota, Minneapolis, MN, USA.
- Edoardo Balzani
- Center for Neural Science, New York University, New York City, NY, USA
- Flatiron Institute, Simons Foundation, New York, NY, USA
- Cristina Savin
- Center for Neural Science, New York University, New York City, NY, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, NY, USA
3
Cisek P, Green AM. Toward a neuroscience of natural behavior. Curr Opin Neurobiol 2024; 86:102859. [PMID: 38583263] [DOI: 10.1016/j.conb.2024.102859]
Abstract
One of the most exciting new developments in systems neuroscience is the progress being made toward neurophysiological experiments that move beyond simplified laboratory settings and address the richness of natural behavior. This is enabled by technological advances such as wireless recording in freely moving animals, automated quantification of behavior, and new methods for analyzing large data sets. Beyond new empirical methods and data, however, there is also a need for new theories and concepts to interpret those data. Such theories need to address the particular challenges of natural behavior, which often differ significantly from the scenarios studied in traditional laboratory settings. Here, we discuss some strategies for developing such novel theories and concepts and highlight some example hypotheses that have been proposed.
Affiliation(s)
- Paul Cisek
- Department of Neuroscience, University of Montréal, Montréal, Québec, Canada.
- Andrea M Green
- Department of Neuroscience, University of Montréal, Montréal, Québec, Canada
4
Ambrad Giovannetti E, Rancz E. Behind mouse eyes: The function and control of eye movements in mice. Neurosci Biobehav Rev 2024; 161:105671. [PMID: 38604571] [DOI: 10.1016/j.neubiorev.2024.105671]
Abstract
The mouse visual system has become the most popular model to study the cellular and circuit mechanisms of sensory processing. However, the importance of eye movements only started to be appreciated recently. Eye movements provide a basis for predictive sensing and deliver insights into various brain functions and dysfunctions. A plethora of knowledge on the central control of eye movements and their role in perception and behaviour arose from work on primates. However, an overview of various eye movements in mice and a comparison to primates is missing. Here, we review the eye movement types described to date in mice and compare them to those observed in primates. We discuss the central neuronal mechanisms for their generation and control. Furthermore, we review the mounting literature on eye movements in mice during head-fixed and freely moving behaviours. Finally, we highlight gaps in our understanding and suggest future directions for research.
Affiliation(s)
- Ede Rancz
- INMED, INSERM, Aix-Marseille University, Marseille, France.
5
Müller MM, Scherer J, Unterbrink P, Bertrand OJN, Egelhaaf M, Boeddeker N. The Virtual Navigation Toolbox: Providing tools for virtual navigation experiments. PLoS One 2023; 18:e0293536. [PMID: 37943845] [PMCID: PMC10635524] [DOI: 10.1371/journal.pone.0293536]
Abstract
Spatial navigation research in humans increasingly relies on experiments using virtual reality (VR) tools, which allow for the creation of highly flexible and immersive study environments that can react to participant interaction in real time. Despite the popularity of VR, tools simplifying the creation and data management of such experiments are rare and often restricted to a specific scope, limiting usability and comparability. To overcome those limitations, we introduce the Virtual Navigation Toolbox (VNT), a collection of interchangeable and independent tools for the development of spatial navigation VR experiments using the popular Unity game engine. The VNT's features are packaged in loosely coupled and reusable modules, facilitating convenient implementation of diverse experimental designs. Here, we show how the VNT fulfils the feature requirements of different VR environments and experiments, walking through the implementation and execution of a showcase study using the toolbox. The presented showcase study reveals that homing performance in a classic triangle completion task is invariant to the translation velocity of the participant's avatar, but highly sensitive to the number of landmarks. The VNT is freely available under a Creative Commons license, and we invite researchers to contribute by extending and improving its tools using the provided repository.
Affiliation(s)
- Martin M. Müller
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Jonas Scherer
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Patrick Unterbrink
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Martin Egelhaaf
- Department of Neurobiology, Bielefeld University, Bielefeld, NRW, Germany
- Norbert Boeddeker
- Department of Cognitive Neuroscience, Bielefeld University, Bielefeld, NRW, Germany
6
Falconbridge M, Stamps RL, Edwards M, Badcock DR. Continuous psychophysics for two-variable experiments; A new "Bayesian participant" approach. Iperception 2023; 14:20416695231214440. [PMID: 38690062] [PMCID: PMC11058635] [DOI: 10.1177/20416695231214440]
Abstract
Interest in continuous psychophysical approaches as a means of collecting data quickly under natural conditions is growing. Such approaches require stimuli to be changed randomly on a continuous basis so that participants cannot guess future stimulus states. Participants are generally tasked with responding continuously using a continuum of response options. These features introduce variability in the data that is not present in traditional trial-based experiments. Given the unique weaknesses and strengths of continuous psychophysical approaches, we propose that they are well suited to quickly mapping out relationships between above-threshold stimulus variables, such as the perceived direction of a moving target as a function of the direction of the background against which the target is moving. We show that modelling the participant in such a two-variable experiment using a novel "Bayesian participant" model facilitates the conversion of the noisy continuous data into a less-noisy form that resembles data from an equivalent trial-based experiment. We also show that adaptation can result from longer-than-usual stimulus exposure times during continuous experiments, even to features that the participant is unaware of. Methods for mitigating the effects of adaptation are discussed.
Affiliation(s)
- Mark Edwards
- Research School of Psychology, Australian National University, Canberra, ACT, Australia
- David R. Badcock
- School of Psychology, University of Western Australia, Crawley, WA, Australia
7
Jerjian SJ, Harsch DR, Fetsch CR. Self-motion perception and sequential decision-making: where are we heading? Philos Trans R Soc Lond B Biol Sci 2023; 378:20220333. [PMID: 37545301] [PMCID: PMC10404932] [DOI: 10.1098/rstb.2022.0333]
Abstract
To navigate and guide adaptive behaviour in a dynamic environment, animals must accurately estimate their own motion relative to the external world. This is a fundamentally multisensory process involving integration of visual, vestibular and kinesthetic inputs. Ideal observer models, paired with careful neurophysiological investigation, helped to reveal how visual and vestibular signals are combined to support perception of linear self-motion direction, or heading. Recent work has extended these findings by emphasizing the dimension of time, both with regard to stimulus dynamics and the trade-off between speed and accuracy. Both time and certainty (i.e. the degree of confidence in a multisensory decision) are essential to the ecological goals of the system: terminating a decision process is necessary for timely action, and predicting one's accuracy is critical for making multiple decisions in a sequence, as in navigation. Here, we summarize a leading model for multisensory decision-making, then show how the model can be extended to study confidence in heading discrimination. Lastly, we preview ongoing efforts to bridge self-motion perception and navigation per se, including closed-loop virtual reality and active self-motion. The design of unconstrained, ethologically inspired tasks, accompanied by large-scale neural recordings, holds promise for a deeper understanding of spatial perception and decision-making in the behaving animal. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Steven J. Jerjian
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Devin R. Harsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Center for Neuroscience and Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Christopher R. Fetsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
8
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220344. [PMID: 37545300] [PMCID: PMC10404925] [DOI: 10.1098/rstb.2022.0344]
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
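For reference, the generic causal-inference computation over the source of retinal motion is commonly written as a posterior over causal structures; this is the standard textbook form, not necessarily the paper's exact model:

```latex
% Posterior probability that retinal motion x was caused by self-motion alone,
% given priors P(self) and P(object) over the two causal structures:
P(\mathrm{self} \mid x) =
  \frac{p(x \mid \mathrm{self})\, P(\mathrm{self})}
       {p(x \mid \mathrm{self})\, P(\mathrm{self}) + p(x \mid \mathrm{object})\, P(\mathrm{object})}
```

In such models, object velocity is typically estimated under each causal hypothesis and the estimates are combined, weighted by these posteriors (model averaging).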
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA
- Johannes Bill
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Department of Psychology, Harvard University, Boston, MA 02115, USA
- Haoran Ding
- Center for Neural Science, New York University, New York, NY 10003, USA
- John Vastola
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14611, USA
- Dora E. Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tandon School of Engineering, New York University, New York, NY 10003, USA
- Jan Drugowitsch
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Center for Brain Science, Harvard University, Boston, MA 02115, USA
9
Noel JP, Balzani E, Savin C, Angelaki DE. Context-invariant beliefs are supported by dynamic reconfiguration of single unit functional connectivity in prefrontal cortex. bioRxiv 2023:2023.07.30.551169. [PMID: 37577498] [PMCID: PMC10418097] [DOI: 10.1101/2023.07.30.551169]
Abstract
Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here we have macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals, or by more heavily relying on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and the dorso-lateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grain statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area's ability for cross-context decoding. Lastly, correlation analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less population codes and behavior were impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between prefrontal cortex neurons maintains a stable population code and context-invariant beliefs during naturalistic behavior with closed action-perception loops.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, USA
- Edoardo Balzani
- Center for Neural Science, New York University, New York City, NY, USA
- Cristina Savin
- Center for Neural Science, New York University, New York City, NY, USA
- Dora E. Angelaki
- Center for Neural Science, New York University, New York City, NY, USA
10
Zhu SL, Lakshminarasimhan KJ, Angelaki DE. Computational cross-species views of the hippocampal formation. Hippocampus 2023; 33:586-599. [PMID: 37038890] [PMCID: PMC10947336] [DOI: 10.1002/hipo.23535]
Abstract
The discovery of place cells and head direction cells in the hippocampal formation of freely foraging rodents has led to an emphasis on its role in encoding allocentric spatial relationships. In contrast, studies in head-fixed primates have additionally found representations of spatial views. We review recent experiments in freely moving monkeys that expand upon these findings and show that postural variables such as eye/head movements strongly influence neural activity in the hippocampal formation, suggesting that the function of the hippocampus depends on where the animal looks. We interpret these results in the light of recent studies in humans performing challenging navigation tasks which suggest that, depending on the context, eye/head movements serve one of two roles: gathering information about the structure of the environment (active sensing) or externalizing the contents of internal beliefs/deliberation (embodied cognition). These findings prompt future experimental investigations into the information carried by signals flowing between the hippocampal formation and the brain regions controlling postural variables, and constitute a basis for updating computational theories of the hippocampal system to accommodate the influence of eye/head movements.
Affiliation(s)
- Seren L Zhu
- Center for Neural Science, New York University, New York, New York, USA
- Kaushik J Lakshminarasimhan
- Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, New York, USA
- Mechanical and Aerospace Engineering, Tandon School of Engineering, New York University, New York, New York, USA
11
Lakshminarasimhan KJ, Avila E, Pitkow X, Angelaki DE. Dynamical latent state computation in the male macaque posterior parietal cortex. Nat Commun 2023; 14:1832. [PMID: 37005470] [PMCID: PMC10067966] [DOI: 10.1038/s41467-023-37400-4]
Abstract
Success in many real-world tasks depends on our ability to dynamically track hidden states of the world. We hypothesized that neural populations estimate these states by processing sensory history through recurrent interactions which reflect the internal model of the world. To test this, we recorded brain activity in posterior parietal cortex (PPC) of monkeys navigating by optic flow to a hidden target location within a virtual environment, without explicit position cues. In addition to sequential neural dynamics and strong interneuronal interactions, we found that the hidden state (the monkey's displacement from the goal) was encoded in single neurons, and could be dynamically decoded from population activity. The decoded estimates predicted navigation performance on individual trials. Task manipulations that perturbed the world model induced substantial changes in neural interactions, and modified the neural representation of the hidden state, while representations of sensory and motor variables remained stable. The findings were recapitulated by a task-optimized recurrent neural network model, suggesting that task demands shape the neural interactions in PPC, leading them to embody a world model that consolidates information and tracks task-relevant hidden states.
Affiliation(s)
- Eric Avila
- Center for Neural Science, New York University, New York City, NY, USA
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Electrical & Computer Engineering, Rice University, Houston, TX, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, NY, USA
- Department of Mechanical and Aerospace Engineering, New York University, New York City, NY, USA
12
Kang YHR, Wolpert DM, Lengyel M. Spatial uncertainty and environmental geometry in navigation. bioRxiv 2023:2023.01.30.526278. [PMID: 36778354] [PMCID: PMC9915518] [DOI: 10.1101/2023.01.30.526278]
Abstract
Variations in the geometry of the environment, such as the shape and size of an enclosure, have profound effects on navigational behavior and its neural underpinning. Here, we show that these effects arise as a consequence of a single, unifying principle: to navigate efficiently, the brain must maintain and update the uncertainty about one's location. We developed an image-computable Bayesian ideal observer model of navigation, continually combining noisy visual and self-motion inputs, and a neural encoding model optimized to represent the location uncertainty computed by the ideal observer. Through mathematical analysis and numerical simulations, we show that the ideal observer accounts for a diverse range of sometimes paradoxical distortions of human homing behavior in anisotropic and deformed environments, including 'boundary tethering', and its neural encoding accounts for distortions of rodent grid cell responses under identical environmental manipulations. Our results demonstrate that spatial uncertainty plays a key role in navigation.
Affiliation(s)
- Yul HR Kang
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Department of Biological and Experimental Psychology, Queen Mary University of London, London, UK
- Daniel M Wolpert
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University, New York, NY, USA
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
13
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. bioRxiv 2023:2023.01.27.525974. [PMID: 36778376] [PMCID: PMC9915492] [DOI: 10.1101/2023.01.27.525974]
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of Bayesian Causal Inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief over (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modeling results, we show that humans report targets as stationary and steer toward their initial rather than final position more often when they are themselves moving, suggesting a misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results confirm both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets are largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, United States
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Haoran Ding
- Center for Neural Science, New York University, New York City, NY, United States
- John Vastola
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, United States
- Dora E. Angelaki
- Center for Neural Science, New York University, New York City, NY, United States
- Tandon School of Engineering, New York University, New York City, NY, United States
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Center for Brain Science, Harvard University, Boston, MA, United States
14
Thurley K. Naturalistic neuroscience and virtual reality. Front Syst Neurosci 2022; 16:896251. [PMID: 36467978] [PMCID: PMC9712202] [DOI: 10.3389/fnsys.2022.896251]
Abstract
Virtual reality (VR) is one of the techniques that became particularly popular in neuroscience over the past few decades. VR experiments feature a closed loop between sensory stimulation and behavior. Participants interact with the stimuli and not just passively perceive them. Several senses can be stimulated at once, and large-scale environments as well as social interactions can be simulated. All of this makes VR experiences more natural than those in traditional lab paradigms. Compared to the situation in field research, a VR simulation is highly controllable and reproducible, as required of a laboratory technique used in the search for neural correlates of perception and behavior. VR is therefore considered a middle ground between ecological validity and experimental control. In this review, I explore the potential of VR in eliciting naturalistic perception and behavior in humans and non-human animals. In this context, I give an overview of recent virtual reality approaches used in neuroscientific research.
Affiliation(s)
- Kay Thurley
- Faculty of Biology, Ludwig-Maximilians-Universität München, Munich, Germany
- Bernstein Center for Computational Neuroscience Munich, Munich, Germany
15
Maisson DJN, Wikenheiser A, Noel JPG, Keinath AT. Making Sense of the Multiplicity and Dynamics of Navigational Codes in the Brain. J Neurosci 2022; 42:8450-8459. [PMID: 36351831] [PMCID: PMC9665915] [DOI: 10.1523/jneurosci.1124-22.2022]
Abstract
Since the discovery of conspicuously spatially tuned neurons in the hippocampal formation over 50 years ago, characterizing which, where, and how neurons encode navigationally relevant variables has been a major thrust of navigational neuroscience. While much of this effort has centered on the hippocampal formation and functionally adjacent structures, recent work suggests that spatial codes, in some form or another, can be found throughout the brain, even in areas traditionally associated with sensation, movement, and executive function. In this review, we highlight these unexpected results, draw insights from comparisons of these codes across contexts, regions, and species, and finally suggest an avenue for future work to make sense of these diverse and dynamic navigational codes.
Affiliation(s)
- David J-N Maisson
- Department of Neuroscience, University of Minnesota, Minneapolis, Minnesota 55455
- Andrew Wikenheiser
- Department of Psychology, University of California, Los Angeles, California 90024
- Jean-Paul G Noel
- Center for Neural Science, New York University, New York, New York 10003
- Alexandra T Keinath
- Department of Psychiatry, Douglas Hospital Research Centre, McGill University, Verdun, Quebec H3A 0G4, Canada
- Department of Psychology, University of Illinois Chicago, Chicago, Illinois 60607
16
Mao D. Neural Correlates of Spatial Navigation in Primate Hippocampus. Neurosci Bull 2022; 39:315-327. [PMID: 36319893] [PMCID: PMC9905402] [DOI: 10.1007/s12264-022-00968-w]
Abstract
The hippocampus has been extensively implicated in spatial navigation in rodents and more recently in bats. Numerous studies have revealed that various kinds of spatial information are encoded across hippocampal regions. In contrast, investigations of spatial behavioral correlates in the primate hippocampus are scarce and have been mostly limited to head-restrained subjects during virtual navigation. However, recent advances made in freely-moving primates suggest marked differences in spatial representations from rodents, albeit with some similarities. Here, we review empirical studies examining the neural correlates of spatial navigation in the primate (including human) hippocampus at the levels of local field potentials and single units. Lower-frequency theta oscillations are often intermittent. Single neuron responses are highly mixed and task-dependent. We also discuss neuronal selectivity in the eye and head coordinates. Finally, we propose that future studies should focus on investigating both intrinsic and extrinsic population activity and examining spatial coding properties in large-scale hippocampal-neocortical networks across tasks.
Affiliation(s)
- Dun Mao
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
17
Noel JP, Balzani E, Avila E, Lakshminarasimhan KJ, Bruni S, Alefantis P, Savin C, Angelaki DE. Coding of latent variables in sensory, parietal, and frontal cortices during closed-loop virtual navigation. eLife 2022; 11:e80280. [PMID: 36282071] [PMCID: PMC9668339] [DOI: 10.7554/elife.80280]
Abstract
We do not understand how neural nodes operate and coordinate within the recurrent action-perception loops that characterize naturalistic self-environment interactions. Here, we record single-unit spiking activity and local field potentials (LFPs) simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and dorsolateral prefrontal cortex (dlPFC) as monkeys navigate in virtual reality to 'catch fireflies'. This task requires animals to actively sample from a closed-loop virtual environment while concurrently computing continuous latent variables: (i) the distance and angle travelled (i.e., path integration) and (ii) the distance and angle to a memorized firefly location (i.e., a hidden spatial goal). We observed a patterned mixed selectivity, with the prefrontal cortex most prominently coding for latent variables, parietal cortex coding for sensorimotor variables, and MSTd most often coding for eye movements. However, even the traditionally considered sensory area (i.e., MSTd) tracked latent variables, demonstrating path integration and vector coding of hidden spatial goals. Further, global encoding profiles and unit-to-unit coupling (i.e., noise correlations) suggested a functional subnetwork composed of MSTd and dlPFC, rather than between either of these areas and 7a, as anatomy would suggest. We show that the greater the unit-to-unit coupling between MSTd and dlPFC, the more the animals' gaze position was indicative of the ongoing location of the hidden spatial goal. We suggest this MSTd-dlPFC subnetwork reflects the monkeys' natural and adaptive task strategy wherein they continuously gaze toward the location of the (invisible) target. Together, these results highlight the distributed nature of neural coding during closed action-perception loops and suggest that fine-grain functional subnetworks may be dynamically established to subserve (embodied) task strategies.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, United States
- Edoardo Balzani
- Center for Neural Science, New York University, New York City, United States
- Eric Avila
- Center for Neural Science, New York University, New York City, United States
- Kaushik J Lakshminarasimhan
- Center for Neural Science, New York University, New York City, United States
- Center for Theoretical Neuroscience, Columbia University, New York, United States
- Stefania Bruni
- Center for Neural Science, New York University, New York City, United States
- Panos Alefantis
- Center for Neural Science, New York University, New York City, United States
- Cristina Savin
- Center for Neural Science, New York University, New York City, United States
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, United States
18
Causal contribution of optic flow signal in Macaque extrastriate visual cortex for roll perception. Nat Commun 2022; 13:5479. [PMID: 36123363] [PMCID: PMC9485245] [DOI: 10.1038/s41467-022-33245-5]
Abstract
Optic flow is a powerful cue for inferring self-motion status, which is critical for postural control, spatial orientation, locomotion and navigation. In primates, neurons in extrastriate visual cortex (MSTd) are predominantly modulated by high-order optic flow patterns (e.g., spiral), yet a functional link to direct perception is lacking. Here, we applied electrical microstimulation to selectively manipulate populations of MSTd neurons while macaques discriminated the direction of rotation around the line of sight (roll) or the direction of linear translation (heading), two tasks that were orthogonal in 3D spiral coordinates, using a four-alternative forced-choice paradigm. Microstimulation frequently biased the animal's roll perception toward the labeled lines encoded by the artificially stimulated neurons, in either context, with spiral or pure-rotation stimuli. Choice frequency was also altered between roll and translation flow patterns. Our results provide direct causal evidence that roll signals in MSTd, although often mixed with translation signals, can be extracted by downstream areas for the perception of rotation relative to the gravity vertical.
19
Tseng SY, Chettih SN, Arlt C, Barroso-Luque R, Harvey CD. Shared and specialized coding across posterior cortical areas for dynamic navigation decisions. Neuron 2022; 110:2484-2502.e16. [PMID: 35679861] [PMCID: PMC9357051] [DOI: 10.1016/j.neuron.2022.05.012]
Abstract
Animals adaptively integrate sensation, planning, and action to navigate toward goal locations in ever-changing environments, but the functional organization of cortex supporting these processes remains unclear. We characterized encoding in approximately 90,000 neurons across the mouse posterior cortex during a virtual navigation task with rule switching. The encoding of task and behavioral variables was highly distributed across cortical areas but differed in magnitude, resulting in three spatial gradients for visual cue, spatial position plus dynamics of choice formation, and locomotion, with peaks respectively in visual, retrosplenial, and parietal cortices. Surprisingly, the conjunctive encoding of these variables in single neurons was similar throughout the posterior cortex, creating high-dimensional representations in all areas instead of revealing computations specialized for each area. We propose that, for guiding navigation decisions, the posterior cortex operates in parallel rather than hierarchically, and collectively generates a state representation of the behavior and environment, with each area specialized in handling distinct information modalities.
Affiliation(s)
- Shih-Yi Tseng
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
- Selmaan N Chettih
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
- Charlotte Arlt
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
20
Alefantis P, Lakshminarasimhan K, Avila E, Noel JP, Pitkow X, Angelaki DE. Sensory Evidence Accumulation Using Optic Flow in a Naturalistic Navigation Task. J Neurosci 2022; 42:5451-5462. [PMID: 35641186] [PMCID: PMC9270913] [DOI: 10.1523/jneurosci.2203-21.2022]
Abstract
Sensory evidence accumulation is considered a hallmark of decision-making in noisy environments. Integration of sensory inputs has been traditionally studied using passive stimuli, segregating perception from action. Lessons learned from this approach, however, may not generalize to ethological behaviors like navigation, where there is an active interplay between perception and action. We designed a sensory-based sequential decision task in virtual reality in which humans and monkeys navigated to a memorized location by integrating optic flow generated by their own joystick movements. A major challenge in such closed-loop tasks is that subjects' actions will determine future sensory input, causing ambiguity about whether they rely on sensory input rather than expectations based solely on a learned model of the dynamics. To test whether subjects integrated optic flow over time, we used three independent experimental manipulations: unpredictable optic flow perturbations, which pushed subjects off their trajectory; gain manipulation of the joystick controller, which changed the consequences of actions; and manipulation of the optic flow density, which changed the information borne by sensory evidence. Our results suggest that both macaques (male) and humans (female/male) relied heavily on optic flow, thereby demonstrating a critical role for sensory evidence accumulation during naturalistic action-perception closed-loop tasks. SIGNIFICANCE STATEMENT: The temporal integration of evidence is a fundamental component of mammalian intelligence. Yet, it has traditionally been studied using experimental paradigms that fail to capture the closed-loop interaction between actions and sensations inherent in real-world continuous behaviors. These conventional paradigms use binary decision tasks and passive stimuli with statistics that remain stationary over time. Instead, we developed a naturalistic visuomotor navigation paradigm that mimics the causal structure of real-world sensorimotor interactions and probed the extent to which participants integrate sensory evidence by adding task manipulations that reveal complementary aspects of the computation.
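To make the logic of the gain manipulation concrete, here is a minimal Python simulation contrasting a subject who integrates optic flow with one who relies on a learned motor model; all parameters and variable names are illustrative assumptions, not the task's actual values.

```python
# Minimal sketch of the logic behind the joystick-gain manipulation
# (illustrative parameters, not the task's actual values).
dt, target, u = 0.1, 3.0, 1.0        # time step, target distance, constant joystick push
for gain in (1.0, 1.5):              # true velocity = gain * joystick command
    pos_true, belief_motor = 0.0, 0.0
    while pos_true < target:
        pos_true += gain * u * dt            # optic flow reports this true velocity
        belief_motor += 1.0 * u * dt         # a learned (gain = 1) motor model
    # A flow integrator stops here, at the true target distance; a motor-model
    # strategy would keep going until belief_motor reached the target.
    print(f"gain={gain}: stopped at {pos_true:.2f}; "
          f"motor-model distance estimate {belief_motor:.2f}")
```

Under an increased gain, the two strategies predict different stopping distances, which is what makes the manipulation diagnostic of optic flow integration.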
Affiliation(s)
- Panos Alefantis
- Center for Neural Science, New York University, New York, New York 10003
- Eric Avila
- Center for Neural Science, New York University, New York, New York 10003
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, New York 10003
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005-1892
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas 77030
- Dora E Angelaki
- Center for Neural Science, New York University, New York, New York 10003
- Tandon School of Engineering, New York University, New York, New York 11201
21
Zhu S, Lakshminarasimhan KJ, Arfaei N, Angelaki DE. Eye movements reveal spatiotemporal dynamics of visually-informed planning in navigation. eLife 2022; 11:e73097. [PMID: 35503099] [PMCID: PMC9135400] [DOI: 10.7554/elife.73097]
Abstract
Goal-oriented navigation is widely understood to depend upon internal maps. Although this may be the case in many settings, humans tend to rely on vision in complex, unfamiliar environments. To study the nature of gaze during visually-guided navigation, we tasked humans to navigate to transiently visible goals in virtual mazes of varying levels of difficulty, observing that they took near-optimal trajectories in all arenas. By analyzing participants' eye movements, we gained insights into how they performed visually-informed planning. The spatial distribution of gaze revealed that environmental complexity mediated a striking trade-off in the extent to which attention was directed towards two complementary aspects of the world model: the reward location and task-relevant transitions. The temporal evolution of gaze revealed rapid, sequential prospection of the future path, evocative of neural replay. These findings suggest that the spatiotemporal characteristics of gaze during navigation are significantly shaped by the unique cognitive computations underlying real-world, sequential decision making.
Affiliation(s)
- Seren Zhu
- Center for Neural Science, New York University, New York, United States
- Nastaran Arfaei
- Department of Psychology, New York University, New York, United States
- Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
- Department of Mechanical and Aerospace Engineering, New York University, New York, United States
22
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated, a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
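The normative account of cue combination referenced here is standardly expressed as reliability-weighted averaging; for reference, the textbook form (not this review's specific derivation) is:

```latex
% Reliability-weighted (maximum-likelihood) combination of visual and
% vestibular self-motion cues; weights are normalized inverse variances:
\hat{s} = w_{\mathrm{vis}}\hat{s}_{\mathrm{vis}} + w_{\mathrm{vest}}\hat{s}_{\mathrm{vest}},
\qquad
w_{i} = \frac{1/\sigma_{i}^{2}}{1/\sigma_{\mathrm{vis}}^{2} + 1/\sigma_{\mathrm{vest}}^{2}},
\qquad
\sigma_{\mathrm{comb}}^{2} = \frac{\sigma_{\mathrm{vis}}^{2}\sigma_{\mathrm{vest}}^{2}}{\sigma_{\mathrm{vis}}^{2}+\sigma_{\mathrm{vest}}^{2}}
```

The combined variance is smaller than that of either cue alone, which is the behavioral signature typically tested in these studies.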
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tandon School of Engineering, New York University, New York, NY 11201, USA
23
Mao D, Avila E, Caziot B, Laurens J, Dickman JD, Angelaki DE. Spatial modulation of hippocampal activity in freely moving macaques. Neuron 2021; 109:3521-3534.e6. [PMID: 34644546] [DOI: 10.1016/j.neuron.2021.09.032]
Abstract
The hippocampal formation is linked to spatial navigation, but there is little corroboration from freely moving primates with concurrent monitoring of head and gaze stances. We recorded neural activity across hippocampal regions in rhesus macaques during free foraging in an open environment while tracking their head and eye. Theta activity was intermittently present at movement onset and modulated by saccades. Many neurons were phase-locked to theta, with few showing phase precession. Most neurons encoded a mixture of spatial variables beyond place and grid tuning. Spatial representations were dominated by facing location and allocentric direction, mostly in head, rather than gaze, coordinates. Importantly, eye movements strongly modulated neural activity in all regions. These findings reveal that the macaque hippocampal formation represents three-dimensional (3D) space using a multiplexed code, with head orientation and eye movement properties being dominant during free exploration.
Affiliation(s)
- Dun Mao
- Center for Neural Science, New York University, New York, NY 10003, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Eric Avila
- Center for Neural Science, New York University, New York, NY 10003, USA
- Baptiste Caziot
- Center for Neural Science, New York University, New York, NY 10003, USA
- Jean Laurens
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- J David Dickman
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Tandon School of Engineering, New York University, New York, NY 11201, USA
24
Noel JP, Caziot B, Bruni S, Fitzgerald NE, Avila E, Angelaki DE. Supporting generalization in non-human primate behavior by tapping into structural knowledge: Examples from sensorimotor mappings, inference, and decision-making. Prog Neurobiol 2021; 201:101996. [PMID: 33454361] [PMCID: PMC8096669] [DOI: 10.1016/j.pneurobio.2021.101996]
Abstract
The complex behaviors we ultimately wish to understand are far from those currently used in systems neuroscience laboratories. A salient difference is the closed loops between action and perception that are prominent in natural but not laboratory behaviors. The framework of reinforcement learning and control naturally wades across action and perception, and thus is poised to inform the neurosciences of tomorrow, not only as a data analysis and modeling framework, but also in guiding experimental design. We argue that this theoretical framework emphasizes active sensing, dynamical planning, and the leveraging of structural regularities as key operations for intelligent behavior within uncertain, time-varying environments. Similarly, we argue that we may study natural task strategies and their neural circuits without over-training animals when the tasks we use tap into our animals' structural knowledge. As proof-of-principle, we teach animals to navigate through a virtual environment (i.e., to explore a well-defined and repetitive structure governed by the laws of physics) using a joystick. Once these animals have learned to 'drive', without further training they naturally (i) show zero- or one-shot learning of novel sensorimotor contingencies, (ii) infer the evolving path of dynamically changing latent variables, and (iii) make decisions consistent with maximizing reward rate. Such task designs allow for the study of flexible and generalizable, yet controlled, behaviors. In turn, they allow for the exploitation of the pillars of intelligence (flexibility, prediction, and generalization), properties whose neural underpinnings have remained elusive.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, USA
- Baptiste Caziot
- Center for Neural Science, New York University, New York, USA
- Stefania Bruni
- Center for Neural Science, New York University, New York, USA
- Eric Avila
- Center for Neural Science, New York University, New York, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, USA
- Tandon School of Engineering, New York University, New York, USA
25
Chow HM, Knöll J, Madsen M, Spering M. Look where you go: Characterizing eye movements toward optic flow. J Vis 2021; 21:19. [PMID: 33735378] [PMCID: PMC7991960] [DOI: 10.1167/jov.21.3.19]
Abstract
When we move through our environment, objects in the visual scene create optic flow patterns on the retina. Even though optic flow is ubiquitous in everyday life, it is not well understood how our eyes naturally respond to it. In small groups of human and non-human primates, optic flow triggers intuitive, uninstructed eye movements to the focus of expansion of the pattern (Knöll, Pillow, & Huk, 2018). Here, we investigate whether such intuitive oculomotor responses to optic flow are generalizable to a larger group of human observers and how eye movements are affected by motion signal strength and task instructions. Observers (N = 43) viewed expanding or contracting optic flow constructed by a cloud of moving dots radiating from or converging toward a focus of expansion that could randomly shift. Results show that 84% of observers tracked the focus of expansion with their eyes without being explicitly instructed to track. Intuitive tracking was tuned to motion signal strength: Saccades landed closer to the focus of expansion, and smooth tracking was more accurate when dot contrast, motion coherence, and translational speed were high. Under explicit tracking instruction, the eyes aligned with the focus of expansion more closely than without instruction. Our results highlight the sensitivity of intuitive eye movements as indicators of visual motion processing in dynamic contexts.
Affiliation(s)
- Hiu Mei Chow
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Jonas Knöll
- Institute of Animal Welfare and Animal Husbandry, Friedrich-Loeffler-Institut, Celle, Germany
- Matthew Madsen
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
- Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada
26
Kwon M, Daptardar S, Schrater P, Pitkow X. Inverse Rational Control with Partially Observable Continuous Nonlinear Dynamics. Adv Neural Inf Process Syst 2020; 33:7898-7909. [PMID: 34712038] [PMCID: PMC8549572]
Abstract
A fundamental question in neuroscience is how the brain creates an internal model of the world to guide actions using sequences of ambiguous sensory information. This is naturally formulated as a reinforcement learning problem under partial observations, where an agent must estimate relevant latent variables in the world from its evidence, anticipate possible future states, and choose actions that optimize total expected reward. This problem can be solved by control theory, which allows us to find the optimal actions for a given system dynamics and objective function. However, animals often appear to behave suboptimally. Why? We hypothesize that animals have their own flawed internal model of the world, and choose actions with the highest expected subjective reward according to that flawed model. We describe this behavior as rational but not optimal. The problem of Inverse Rational Control (IRC) aims to identify which internal model would best explain an agent's actions. Our contribution here generalizes past work on Inverse Rational Control which solved this problem for discrete control in partially observable Markov decision processes. Here we accommodate continuous nonlinear dynamics and continuous actions, and impute sensory observations corrupted by unknown noise that is private to the animal. We first build an optimal Bayesian agent that learns an optimal policy generalized over the entire model space of dynamics and subjective rewards using deep reinforcement learning. Crucially, this allows us to compute a likelihood over models for experimentally observable action trajectories acquired from a suboptimal agent. We then find the model parameters that maximize the likelihood using gradient ascent. Our method successfully recovers the true model of rational agents. This approach provides a foundation for interpreting the behavioral and neural dynamics of animal brains during complex tasks.
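As a toy illustration of the IRC idea (inferring an agent's subjective internal model by maximizing the likelihood of its observed actions), here is a heavily simplified Python sketch. The paper's method uses deep reinforcement learning over a whole model space; this example instead assumes a closed-form policy and recovers a single belief parameter by finite-difference gradient ascent, with all function and variable names invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_belief_true, sigma_action, n = 0.8, 0.05, 500
x = rng.normal(0.0, 1.0, n)                 # latent states (standard normal prior)
obs = x + rng.normal(0.0, 0.5, n)           # agent's private observations (true noise 0.5)

def gain(sigma_b):
    # Bayes-optimal weight on the observation if the agent *believes*
    # observation noise has std sigma_b (its flawed internal model)
    return 1.0 / (1.0 + sigma_b ** 2)

# The agent acts on its subjective posterior mean, plus motor noise.
actions = gain(sigma_belief_true) * obs + rng.normal(0.0, sigma_action, n)

def log_lik(sigma_b):
    # Log-likelihood of the observed actions under a candidate belief parameter
    resid = actions - gain(sigma_b) * obs
    return -0.5 * np.sum(resid ** 2) / sigma_action ** 2

sigma_hat, lr, eps = 0.3, 1e-5, 1e-4
for _ in range(2000):                       # finite-difference gradient ascent
    grad = (log_lik(sigma_hat + eps) - log_lik(sigma_hat - eps)) / (2 * eps)
    sigma_hat += lr * grad
print(f"recovered belief sigma ~= {sigma_hat:.2f} (true {sigma_belief_true})")
```

Note the agent here is rational but not optimal: its believed noise (0.8) differs from the true noise (0.5), and the fit recovers the belief, not the truth, which is exactly the distinction IRC targets.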
Affiliation(s)
- Minhae Kwon
- School of Electronic Engineering, Soongsil University, Seoul, Republic of Korea
- Paul Schrater
- Department of Computer Science, University of Minnesota, Minneapolis, MN, USA
- Xaq Pitkow
- Electrical and Computer Engineering, Rice University, Houston, TX, USA
27
Increased variability but intact integration during visual navigation in Autism Spectrum Disorder. Proc Natl Acad Sci U S A 2020; 117:11158-11166. [PMID: 32358192] [DOI: 10.1073/pnas.2000216117]
Abstract
Autism Spectrum Disorder (ASD) is a common neurodevelopmental disturbance afflicting a variety of functions. The recent computational focus suggesting aberrant Bayesian inference in ASD has yielded promising but conflicting results in attempting to explain a wide variety of phenotypes by canonical computations. Here, we used a naturalistic visual path integration task that combines continuous action with active sensing and allows tracking of subjects' dynamic belief states. Both groups showed a previously documented bias pattern by overshooting the radial distance and angular eccentricity of targets. For both control and ASD groups, these errors were driven by misestimated velocity signals due to a nonuniform speed prior rather than imperfect integration. We tracked participants' beliefs and found no difference in the speed prior, but there was heightened variability in the ASD group. Both end point variance and trajectory irregularities correlated with ASD symptom severity. With feedback, variance was reduced, and ASD performance approached that of controls. These findings highlight the need for both more naturalistic tasks and a broader computational perspective to understand the ASD phenotype and pathology.