1. Hacohen-Brown S, Gilboa-Schechtman E, Zaidel A. Modality-specific effects of threat on self-motion perception. BMC Biol 2024;22:120. PMID: 38783286; PMCID: PMC11119305; DOI: 10.1186/s12915-024-01911-3.
Abstract
BACKGROUND: Threat, and individual differences in threat processing, bias the perception of stimuli in the environment. Yet their effect on the perception of one's own (body-based) self-motion in space is unknown. Here, we tested the effects of threat on self-motion perception using a multisensory motion simulator with concurrent threatening or neutral auditory stimuli.
RESULTS: Strikingly, threat had opposite effects on vestibular and visual self-motion perception, leading to overestimation of vestibular, but underestimation of visual, self-motion. Trait anxiety tended to be associated with an enhanced effect of threat on self-motion estimates in both modalities.
CONCLUSIONS: Enhanced vestibular perception under threat may stem from shared neural substrates with emotional processing, whereas diminished visual self-motion perception may indicate that a threatening stimulus diverts attention away from optic flow integration. Thus, threat induces modality-specific biases in everyday experiences of self-motion.
Affiliation(s)
- Shira Hacohen-Brown
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, 5290002, Ramat Gan, Israel
- Eva Gilboa-Schechtman
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, 5290002, Ramat Gan, Israel
- Department of Psychology, Bar-Ilan University, 5290002, Ramat Gan, Israel
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, 5290002, Ramat Gan, Israel
2. Liu B, Shan J, Gu Y. Temporal and spatial properties of vestibular signals for perception of self-motion. Front Neurol 2023;14:1266513. PMID: 37780704; PMCID: PMC10534010; DOI: 10.3389/fneur.2023.1266513.
Abstract
It is well recognized that the vestibular system is involved in numerous important cognitive functions, including self-motion perception, spatial orientation, locomotion, and vector-based navigation, in addition to basic reflexes such as oculomotor and postural control. Consistent with this, vestibular signals are distributed broadly in the brain, including several regions of the cerebral cortex, potentially allowing tight coordination with other sensory systems to improve the accuracy and precision of perception or action during self-motion. Recent neurophysiological studies in animal models at single-cell resolution indicate that vestibular signals exhibit complex spatiotemporal dynamics, making it challenging to identify their exact functions and how they are integrated with signals from other modalities. For example, vestibular and optic flow signals can be congruent or incongruent with respect to spatial tuning functions, reference frames, and temporal dynamics. Comprehensive studies, including behavioral tasks, neural recordings across sensory and sensory-motor association areas, and causal manipulations, have provided some insight into the neural mechanisms underlying multisensory self-motion perception.
Affiliation(s)
- Bingyu Liu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Jiayu Shan
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
3. Keshavarzi S, Velez-Fort M, Margrie TW. Cortical Integration of Vestibular and Visual Cues for Navigation, Visual Processing, and Perception. Annu Rev Neurosci 2023;46:301-320. PMID: 37428601; DOI: 10.1146/annurev-neuro-120722-100503.
Abstract
Despite increasing evidence of its involvement in several key functions of the cerebral cortex, the vestibular sense rarely enters our consciousness. Indeed, the extent to which these internal signals are incorporated within cortical sensory representations, and how they might be relied upon for sensory-driven decision-making during, for example, spatial navigation, is yet to be understood. Recent experimental approaches in rodents have probed both the physiological and behavioral significance of vestibular signals and indicate that their widespread integration with vision improves both the cortical representation and perceptual accuracy of self-motion and orientation. Here, we summarize these recent findings with a focus on cortical circuits involved in visual perception and spatial navigation, and highlight the major remaining knowledge gaps. We suggest that vestibulo-visual integration reflects a process of constant updating of the status of self-motion, and that access to such information by the cortex is used for sensory perception and for predictions that may be implemented in rapid, navigation-related decision-making.
Affiliation(s)
- Sepiedeh Keshavarzi
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, United Kingdom
- Mateo Velez-Fort
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, United Kingdom
- Troy W Margrie
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, United Kingdom
4. Horrocks EAB, Mareschal I, Saleem AB. Walking humans and running mice: perception and neural encoding of optic flow during self-motion. Philos Trans R Soc Lond B Biol Sci 2023;378:20210450. PMID: 36511417; PMCID: PMC9745880; DOI: 10.1098/rstb.2021.0450.
Abstract
Locomotion produces full-field optic flow that often dominates the visual motion inputs to an observer. The perception of optic flow is in turn important for animals to guide their heading and interact with moving objects. Understanding how locomotion influences optic flow processing and perception is therefore essential to understand how animals successfully interact with their environment. Here, we review research investigating how perception and neural encoding of optic flow are altered during self-motion, focusing on locomotion. Self-motion has been found to influence estimation and sensitivity for optic flow speed and direction. Nonvisual self-motion signals also increase compensation for self-driven optic flow when parsing the visual motion of moving objects. The integration of visual and nonvisual self-motion signals largely follows principles of Bayesian inference and can improve the precision and accuracy of self-motion perception. The calibration of visual and nonvisual self-motion signals is dynamic, reflecting the changing visuomotor contingencies across different environmental contexts. Throughout this review, we consider experimental research using humans, non-human primates and mice. We highlight experimental challenges and opportunities afforded by each of these species and draw parallels between experimental findings. These findings reveal a profound influence of locomotion on optic flow processing and perception across species. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Edward A. B. Horrocks
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Isabelle Mareschal
- School of Biological and Behavioural Sciences, Queen Mary, University of London, London E1 4NS, UK
- Aman B. Saleem
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
5. Ruehl RM, Flanagin VL, Ophey L, Raiser TM, Seiderer K, Ertl M, Conrad J, zu Eulenburg P. The human egomotion network. Neuroimage 2022;264:119715. PMID: 36334557; DOI: 10.1016/j.neuroimage.2022.119715.
Abstract
All volitional movement in three-dimensional space requires multisensory integration, in particular of visual and vestibular signals. Where and how the human brain processes and integrates self-motion signals remains enigmatic. Here, we applied visual and vestibular self-motion stimulation with fast and precise whole-brain neuroimaging to delineate and characterize the entire cortical and subcortical egomotion network in a substantial cohort (n = 131). Our results identify a core egomotion network consisting of areas in the cingulate sulcus (CSv, PcM/pCi), the cerebellum (uvula), and the temporo-parietal cortex, including area VPS and an unnamed region in the supramarginal gyrus. Based on its cerebral connectivity pattern and anatomical localization, we propose that this region represents the human homologue of macaque area 7a. Whole-brain connectivity and gradient analyses imply an essential role for the connections between the cingulate sulcus and the cerebellar uvula in egomotion perception, possibly via feedback loops involved in updating visuo-spatial and vestibular information. The unique functional connectivity pattern of PcM/pCi hints at a central role in the multisensory integration essential for self-referential spatial awareness. All cortical egomotion hubs showed modular functional connectivity with other visual, vestibular, somatosensory, and higher-order motor areas, underlining their mutual function in general sensorimotor integration.
Affiliation(s)
- Ria Maxine Ruehl
- Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Virginia L Flanagin
- Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Graduate School of Systemic Neurosciences, Department of Biology II and Neurobiology, Ludwig-Maximilians-University Munich, Großhaderner Str. 2, 82151 Planegg-Martinsried, Germany
- Leoni Ophey
- German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Theresa Marie Raiser
- Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Katharina Seiderer
- German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Matthias Ertl
- Institute of Psychology, University of Bern, and Inselspital, Fabrikstrasse 8, 3012 Bern, Switzerland
- Julian Conrad
- Department of Neurology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Department of Neurology, Medical Faculty Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Peter zu Eulenburg
- German Center for Vertigo and Balance Disorders, IFB-LMU, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
- Graduate School of Systemic Neurosciences, Department of Biology II and Neurobiology, Ludwig-Maximilians-University Munich, Großhaderner Str. 2, 82151 Planegg-Martinsried, Germany
- Institute for Neuroradiology, University Hospital Munich, Ludwig-Maximilians-University Munich, Marchionini Str. 15, 81377 Munich, Germany
6. Causal contribution of optic flow signal in Macaque extrastriate visual cortex for roll perception. Nat Commun 2022;13:5479. PMID: 36123363; PMCID: PMC9485245; DOI: 10.1038/s41467-022-33245-5.
Abstract
Optic flow is a powerful cue for inferring self-motion status, which is critical for postural control, spatial orientation, locomotion, and navigation. In primates, neurons in extrastriate visual cortex (area MSTd) are predominantly modulated by high-order optic flow patterns (e.g., spiral), yet a functional link to direct perception has been lacking. Here, we applied electrical microstimulation to selectively manipulate populations of MSTd neurons while macaques discriminated the direction of rotation around the line of sight (roll) or the direction of linear translation (heading), two tasks that were orthogonal in 3D spiral coordinates, using a four-alternative forced-choice paradigm. Microstimulation frequently biased the animals' roll perception towards the labeled lines encoded by the stimulated neurons in either context, with spiral or pure-rotation stimuli. Choice frequency was also altered between roll and translation flow patterns. Our results provide direct causal evidence that roll signals in MSTd, despite often being mixed with translation signals, can be extracted by downstream areas for the perception of rotation relative to the gravity vertical.
7. Falconbridge M, Hewitt K, Haille J, Badcock DR, Edwards M. The induced motion effect is a high-level visual phenomenon: Psychophysical evidence. Iperception 2022;13:20416695221118111. PMID: 36092511; PMCID: PMC9459461; DOI: 10.1177/20416695221118111.
Abstract
Induced motion is the illusory motion of a target away from the direction of motion of the unattended background. If it results from assigning background motion to self-motion and judging target motion relative to the scene, as suggested by the flow-parsing hypothesis, then the effect must be mediated in higher levels of the visual motion pathway, where self-motion is assessed. We provide evidence for a high-level mechanism in two broad ways. First, we show that the effect is insensitive to a set of low-level spatial aspects of the scene, namely the spatial arrangement, the spatial frequency content, and the orientation content of the background relative to the target. Second, we show that the effect is the same whether the target and background are composed of the same kind of local elements, one-dimensional (1D) or two-dimensional (2D), or one is composed of one kind and the other of the other. The latter finding is significant because 1D and 2D local elements are integrated by two different mechanisms, so the induced motion effect is likely mediated in a visual motion processing area downstream of the two separate integration mechanisms. Area medial superior temporal in monkeys, and its human equivalent, is suggested as a viable site. We present a simple flow-parsing-inspired model and demonstrate a good fit to our data and to data from a previous induced motion study.
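The flow-parsing account summarized in this abstract reduces to a vector subtraction: attribute the global background motion to self-motion, then judge the target relative to the scene. A minimal sketch, with made-up velocity values (this is an illustration of the hypothesis, not the authors' fitted model):

```python
import numpy as np

# Toy flow-parsing sketch: 2D retinal velocities in deg/s (values invented).
background = np.array([2.0, 0.0])       # unattended background drifts rightward
target_retinal = np.array([0.0, 0.0])   # target is physically stationary

# Flow parsing: the background motion is attributed to self-motion and
# subtracted before the target's scene-relative motion is judged.
self_motion_estimate = background
target_scene_relative = target_retinal - self_motion_estimate

# The stationary target is perceived to drift leftward, away from the
# background's direction: the induced motion illusion.
print(target_scene_relative)
```

A partial-parsing variant would scale `self_motion_estimate` by a gain below 1, which is how such models are typically fit to psychophysical data.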
8. Sarel A, Palgi S, Blum D, Aljadeff J, Las L, Ulanovsky N. Natural switches in behaviour rapidly modulate hippocampal coding. Nature 2022;609:119-127. PMID: 36002570; PMCID: PMC9433324; DOI: 10.1038/s41586-022-05112-2.
Abstract
Throughout their daily lives, animals and humans often switch between different behaviours. However, neuroscience research typically studies the brain while the animal performs one behavioural task at a time, and little is known about how brain circuits represent switches between behaviours. Here we tested this question in an ethological setting: two bats flew together in a 135-m-long tunnel and switched between navigation when flying alone (solo) and collision avoidance as they flew past each other (cross-over). Bats increased their echolocation click rate before each cross-over, indicating attention to the other bat. Hippocampal CA1 neurons represented the bat's own position when flying alone (place coding). Notably, during cross-overs, neurons switched rapidly to jointly represent the interbat distance together with self-position. This neuronal switch was very fast, as fast as 100 ms, which could be revealed owing to the very rapid natural behavioural switch. The neuronal switch correlated with the attention signal, as indexed by echolocation. Interestingly, the different place fields of the same neuron often exhibited very different tuning to interbat distance, creating a complex, non-separable coding of position by distance. Theoretical analysis showed that this complex representation yields more efficient coding. Overall, our results suggest that during dynamic natural behaviour, hippocampal neurons can rapidly switch their core computation to represent the relevant behavioural variables, supporting behavioural flexibility.
Affiliation(s)
- Ayelet Sarel
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Shaked Palgi
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Dan Blum
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Johnatan Aljadeff
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Department of Neurobiology, University of California, San Diego, CA, USA
- Liora Las
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Nachum Ulanovsky
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
9. Zhang J, Gu Y, Chen A, Yu Y. Unveiling Dynamic System Strategies for Multisensory Processing: From Neuronal Fixed-Criterion Integration to Population Bayesian Inference. Research (Wash D C) 2022;2022:9787040. PMID: 36072271; PMCID: PMC9422331; DOI: 10.34133/2022/9787040.
Abstract
Multisensory processing is of vital importance for survival in the external world. Brain circuits can both integrate and separate visual and vestibular signals to infer self-motion and the motion of other objects. However, how multisensory brain regions process such information, and whether they follow a Bayesian strategy in doing so, remains debated. Here, we combined macaque physiological recordings in the dorsal medial superior temporal area (MST-d) with modeling of synaptically coupled multilayer continuous attractor neural networks (CANNs) to study the underlying neuronal circuit mechanisms. In contrast to previous theoretical studies that focused on unisensory direction preference, our analysis showed that synaptic coupling induced cooperation and competition in the multisensory circuit and caused single MST-d neurons to switch between sensory integration and separation modes based on a fixed-criterion causal strategy determined by the synaptic coupling strength. Furthermore, the prior of sensory reliability was represented by pooling diversified criteria at the MST-d population level, and the Bayesian strategy was achieved in downstream neurons whose causal inference flexibly changed with the prior. The CANN model also showed that synaptic input balance is the dynamic origin of neuronal direction preference formation and further explained the misalignment between direction preference and inference observed in previous studies. This work provides a computational framework for a new brain-inspired algorithm underlying multisensory computation.
Affiliation(s)
- Jiawei Zhang
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, Human Phenome Institute, Shanghai 200433, China
- Yong Gu
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Yuguo Yu
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, Human Phenome Institute, Shanghai 200433, China
10. Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022;39:125-137. PMID: 35821337; PMCID: PMC9849545; DOI: 10.1007/s12264-022-00916-8.
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional views of the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future studies.
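The "statistically Bayesian-optimal" integration invoked here (and in several of the entries above) is, for two independent Gaussian cues, a reliability-weighted average: each cue is weighted by its inverse variance, and the fused estimate is more precise than either cue alone. A minimal sketch with invented heading values:

```python
def fuse_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Maximum-likelihood fusion of two independent Gaussian cues.

    Weights are proportional to reliability (inverse variance); the fused
    variance is smaller than either single-cue variance.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return mu, var

# Illustrative heading estimates in degrees (numbers are made up):
# the vestibular cue is twice as reliable, so the fused heading of 2 deg
# lies closer to the vestibular estimate, with variance 4/3 < 2.
mu, var = fuse_cues(mu_vis=-2.0, var_vis=4.0, mu_vest=4.0, var_vest=2.0)
```

This is the standard cue-combination rule against which the psychophysical studies cited here test behaviour, not a model taken from this particular review.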
11. Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022;11:e74971. PMID: 35642599; PMCID: PMC9159750; DOI: 10.7554/eLife.74971.
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
Affiliation(s)
- HyungGoo R Kim
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
12. McFadyen JR, Heider B, Karkhanis AN, Cloherty SL, Muñoz F, Siegel RM, Morris AP. Robust Coding of Eye Position in Posterior Parietal Cortex despite Context-Dependent Tuning. J Neurosci 2022;42:4116-4130. PMID: 35410881; PMCID: PMC9121829; DOI: 10.1523/jneurosci.0674-21.2022.
Abstract
Neurons in posterior parietal cortex (PPC) encode many aspects of the sensory world (e.g., scene structure), the posture of the body, and plans for action. For a downstream computation, however, only some of these dimensions are relevant; the rest are "nuisance variables" because their influence on neural activity changes with sensory and behavioral context, potentially corrupting the read-out of relevant information. Here we show that a key postural variable for vision (eye position) is represented robustly in male macaque PPC across a range of contexts, although the tuning of single neurons depended strongly on context. Contexts were defined by different stages of a visually guided reaching task: (1) a visually sparse epoch; (2) a visually rich epoch; (3) a "go" epoch in which the reach was cued; and (4) the reach itself. Eye position was constant within trials but varied across trials in a 3 × 3 grid spanning 24° × 24°. Using demixed principal component analysis of neural spike counts, we found that the subspace of the population response encoding eye position is orthogonal to that encoding task context. Accordingly, a context-naive (fixed-parameter) decoder was able to estimate eye position reliably across contexts. Errors were small given the sample size (∼1.78°) and would likely be even smaller with larger populations. Moreover, they were comparable to those of decoders optimized for each context. Our results suggest that population codes in PPC shield encoded signals from crosstalk to support robust sensorimotor transformations across contexts.
Significance statement: Neurons in posterior parietal cortex (PPC) that are sensitive to gaze direction are thought to play a key role in spatial perception and behavior (e.g., reaching, navigation), and provide a potential substrate for brain-controlled prosthetics. Many, however, change their tuning under different sensory and behavioral contexts, raising the prospect that they provide unreliable representations of egocentric space. Here, we analyze the structure of encoding dimensions for gaze direction and context in PPC during different stages of a visually guided reaching task. We use demixed dimensionality reduction and decoding techniques to show that the coding of gaze direction in PPC is mostly invariant to context. This suggests that PPC can provide reliable spatial information across sensory and behavioral contexts.
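The logic of the fixed-parameter decoder result can be illustrated with a toy simulation: if the context signal lives in a subspace orthogonal to the gaze-encoding directions, a decoder fit in one context generalizes to another without refitting. Everything below (tuning model, noise level, grid, plain least-squares decoding) is invented for illustration; the study itself used recorded PPC populations and demixed PCA, not this pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 60, 400

# Invented linear gaze tuning: two population encoding directions for (x, y).
gain = rng.normal(size=(2, n_neurons))

# Context shifts the population along a direction projected out of the
# gaze-encoding subspace, mimicking the orthogonal subspaces described above.
shift = rng.normal(size=n_neurons)
coef, *_ = np.linalg.lstsq(gain.T, shift, rcond=None)
shift = shift - gain.T @ coef  # now orthogonal to both gaze directions

def rates(eye, context):
    """Population responses: gaze tuning + context offset + noise."""
    noise = 0.1 * rng.normal(size=(eye.shape[0], n_neurons))
    return eye @ gain + context * shift + noise

# Eye positions on a 3 x 3 grid spanning 24 deg x 24 deg, as in the task.
grid = np.linspace(-12.0, 12.0, 3)
eye = np.stack([rng.choice(grid, n_trials), rng.choice(grid, n_trials)], axis=1)

# Fit a fixed-parameter (context-naive) linear decoder in context 0 ...
X0 = np.c_[rates(eye, 0.0), np.ones(n_trials)]
W, *_ = np.linalg.lstsq(X0, eye, rcond=None)

# ... then decode eye position in context 1 without refitting.
X1 = np.c_[rates(eye, 1.0), np.ones(n_trials)]
err = np.mean(np.linalg.norm(X1 @ W - eye, axis=1))  # stays small: no crosstalk
```

If the orthogonalization step is removed, the context shift leaks into the decoder's read-out directions and the cross-context error grows, which is the failure mode the orthogonal population code avoids.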
Affiliation(s)
- Jamie R McFadyen
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC 3800, Australia
- Barbara Heider
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102
- Anushree N Karkhanis
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102
- Shaun L Cloherty
- School of Engineering, RMIT University, Melbourne, VIC 3001, Australia
- Fabian Muñoz
- Department of Neuroscience, Columbia University, New York, NY 10027
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Ralph M Siegel
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102
- Adam P Morris
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC 3800, Australia
- Monash Data Futures Institute, Monash University, Clayton, VIC 3800, Australia
13. Hennestad E, Witoelar A, Chambers AR, Vervaeke K. Mapping vestibular and visual contributions to angular head velocity tuning in the cortex. Cell Rep 2021;37:110134. PMID: 34936869; PMCID: PMC8721284; DOI: 10.1016/j.celrep.2021.110134.
Abstract
Neurons that signal the angular velocity of head movements (AHV cells) are important for processing visual and spatial information. However, it has been challenging to isolate the sensory modality that drives them and to map their cortical distribution. To address this, we develop a method that enables rotating awake, head-fixed mice under a two-photon microscope in a visual environment. Starting in layer 2/3 of the retrosplenial cortex, a key area for vision and navigation, we find that 10% of neurons report angular head velocity (AHV). Their tuning properties depend on vestibular input with a smaller contribution of vision at lower speeds. Mapping the spatial extent, we find AHV cells in all cortical areas that we explored, including motor, somatosensory, visual, and posterior parietal cortex. Notably, the vestibular and visual contributions to AHV are area dependent. Thus, many cortical circuits have access to AHV, enabling a diverse integration with sensorimotor and cognitive information.
Affiliation(s)
- Eivind Hennestad
- Institute of Basic Medical Sciences, Section of Physiology, University of Oslo, Oslo, Norway
- Aree Witoelar
- Institute of Basic Medical Sciences, Section of Physiology, University of Oslo, Oslo, Norway
- Anna R Chambers
- Institute of Basic Medical Sciences, Section of Physiology, University of Oslo, Oslo, Norway
- Koen Vervaeke
- Institute of Basic Medical Sciences, Section of Physiology, University of Oslo, Oslo, Norway.
14
Di Marco S, Sulpizio V, Bellagamba M, Fattori P, Galati G, Galletti C, Lappe M, Maltempo T, Pitzalis S. Multisensory integration in cortical regions responding to locomotion-related visual and somatomotor signals. Neuroimage 2021; 244:118581. [PMID: 34543763 DOI: 10.1016/j.neuroimage.2021.118581] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Revised: 09/08/2021] [Accepted: 09/14/2021] [Indexed: 11/18/2022] Open
Abstract
During real-world locomotion, in order to be able to move along a path or avoid an obstacle, continuous changes in self-motion direction (i.e. heading) are needed. Control of heading changes during locomotion requires the integration of multiple signals (i.e., visual, somatomotor, vestibular). Recent fMRI studies have shown that both somatomotor areas (human PEc [hPEc], human PE [hPE], primary somatosensory cortex [S-I]) and egomotion visual regions (cingulate sulcus visual area [CSv], posterior cingulate area [pCi], posterior insular cortex [PIC]) respond to both leg movements and egomotion-compatible visual stimulation, suggesting a role in the analysis of both visual attributes of egomotion and somatomotor signals with the aim of guiding locomotion. However, whether these regions are able to integrate egomotion-related visual signals with somatomotor inputs coming from leg movements during heading changes remains an open question. Here we used a combined approach of individual functional localizers and task-evoked activity by fMRI. In thirty subjects we first localized three egomotion areas (CSv, pCi, PIC) and three somatomotor regions (S-I, hPE, hPEc). Then, we tested their responses in a multisensory integration experiment combining visual and somatomotor signals relevant to locomotion in congruent or incongruent trials. We used an fMR-adaptation paradigm to explore the sensitivity to the repeated presentation of these bimodal stimuli in the six regions of interest. Results revealed that hPE, S-I and CSv showed an adaptation effect regardless of congruency, while PIC, pCi and hPEc showed sensitivity to congruency. PIC exhibited a preference for congruent trials compared to incongruent trials. Areas pCi and hPEc exhibited an adaptation effect only for congruent and incongruent trials, respectively.
PIC, pCi and hPEc sensitivity to the congruency relationship between visual (locomotion-compatible) cues and (leg-related) somatomotor inputs suggests that these regions are involved in multisensory integration processes, likely in order to guide/adjust leg movements during heading changes.
Affiliation(s)
- Sara Di Marco
- Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy.
- Valentina Sulpizio
- Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Martina Bellagamba
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Gaspare Galati
- Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Markus Lappe
- Institute for Psychology, University of Muenster, Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Teresa Maltempo
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Sabrina Pitzalis
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
15
Noel JP, Angelaki DE. Cognitive, Systems, and Computational Neurosciences of the Self in Motion. Annu Rev Psychol 2021; 73:103-129. [PMID: 34546803 DOI: 10.1146/annurev-psych-021021-103038] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated, a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
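The normative cue-combination account this review summarizes is commonly formalized as reliability-weighted averaging of independent Gaussian cue estimates. A minimal sketch (function name and example values are illustrative, not taken from the review):

```python
# Reliability-weighted (maximum-likelihood) combination of two
# independent Gaussian cue estimates, e.g. visual and vestibular heading.
# Each cue is weighted by its inverse variance; the fused estimate is
# more reliable (lower variance) than either cue alone.

def combine_cues(est_a, var_a, est_b, var_b):
    """Fuse two cue estimates; returns (combined estimate, combined variance)."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    combined_est = w_a * est_a + w_b * est_b
    combined_var = (var_a * var_b) / (var_a + var_b)
    return combined_est, combined_var

# Example: visual heading 10 deg (variance 4), vestibular 20 deg (variance 4).
# Equal reliabilities give a simple average with halved variance.
est, var = combine_cues(10.0, 4.0, 20.0, 4.0)  # -> (15.0, 2.0)
```

When one cue is noisier, the fused estimate shifts toward the more reliable cue, which is the behavioral signature the review describes.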
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA; Tandon School of Engineering, New York University, New York, NY 11201, USA
16
Downer JD, Verhein JR, Rapone BC, O'Connor KN, Sutter ML. An Emergent Population Code in Primary Auditory Cortex Supports Selective Attention to Spectral and Temporal Sound Features. J Neurosci 2021; 41:7561-7577. [PMID: 34210783 PMCID: PMC8425978 DOI: 10.1523/jneurosci.0693-20.2021] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Revised: 05/19/2021] [Accepted: 05/28/2021] [Indexed: 11/21/2022] Open
Abstract
Textbook descriptions of primary sensory cortex (PSC) revolve around single neurons' representation of low-dimensional sensory features, such as visual object orientation in primary visual cortex (V1), location of somatic touch in primary somatosensory cortex (S1), and sound frequency in primary auditory cortex (A1). Typically, studies of PSC measure neurons' responses along a few (one or two) stimulus and/or behavioral dimensions. However, real-world stimuli usually vary along many feature dimensions and behavioral demands change constantly. In order to illuminate how A1 supports flexible perception in rich acoustic environments, we recorded from A1 neurons while rhesus macaques (one male, one female) performed a feature-selective attention task. We presented sounds that varied along spectral and temporal feature dimensions (carrier bandwidth and temporal envelope, respectively). Within a block, subjects attended to one feature of the sound in a selective change detection task. We found that single neurons tend to be high-dimensional, in that they exhibit substantial mixed selectivity for both sound features, as well as task context. We found no overall enhancement of single-neuron coding of the attended feature, as attention could either diminish or enhance this coding. However, a population-level analysis reveals that ensembles of neurons exhibit enhanced encoding of attended sound features, and this population code tracks subjects' performance. Importantly, surrogate neural populations with intact single-neuron tuning but shuffled higher-order correlations among neurons fail to yield the attention-related effects observed in the intact data. These results suggest that an emergent population code not measurable at the single-neuron level might constitute the functional unit of sensory representation in PSC. SIGNIFICANCE STATEMENT The ability to adapt to a dynamic sensory environment promotes a range of important natural behaviors.
We recorded from single neurons in monkey primary auditory cortex (A1), while subjects attended to either the spectral or temporal features of complex sounds. Surprisingly, we found no average increase in responsiveness to, or encoding of, the attended feature across single neurons. However, when we pooled the activity of the sampled neurons via targeted dimensionality reduction (TDR), we found enhanced population-level representation of the attended feature and suppression of the distractor feature. This dissociation of the effects of attention at the level of single neurons versus the population highlights the synergistic nature of cortical sound encoding and enriches our understanding of sensory cortical function.
Affiliation(s)
- Joshua D Downer
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Otolaryngology, Head and Neck Surgery, University of California, San Francisco, California 94143
- Jessica R Verhein
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- School of Medicine, Stanford University, Stanford, California 94305
- Brittany C Rapone
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- School of Social Sciences, Oxford Brookes University, Oxford, OX4 0BP, United Kingdom
- Kevin N O'Connor
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
- Mitchell L Sutter
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
17
Wild B, Treue S. Primate extrastriate cortical area MST: a gateway between sensation and cognition. J Neurophysiol 2021; 125:1851-1882. [PMID: 33656951 DOI: 10.1152/jn.00384.2020] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023] Open
Abstract
Primate visual cortex consists of dozens of distinct brain areas, each providing a highly specialized component to the sophisticated task of encoding the incoming sensory information and creating a representation of our visual environment that underlies our perception and action. One such area is the medial superior temporal cortex (MST), a motion-sensitive, direction-selective part of the primate visual cortex. It receives most of its input from the middle temporal (MT) area, but MST cells have larger receptive fields and respond to more complex motion patterns. The finding that MST cells are tuned for optic flow patterns has led to the suggestion that the area plays an important role in the perception of self-motion. This hypothesis has received further support from studies showing that some MST cells also respond selectively to vestibular cues. Furthermore, the area is part of a network that controls the planning and execution of smooth pursuit eye movements, and its activity is modulated by cognitive factors such as attention and working memory. This review of more than 90 studies aims to bring clarity to the heterogeneous findings on MST in the macaque cortex and its putative homolog in the human cortex. From this analysis of MST's unique anatomical and functional position in the hierarchy of areas and processing steps in primate visual cortex, the area emerges as a gateway between perception, cognition, and action planning. Given this pivotal role, it represents an ideal model system for studying the transition from sensation to cognition.
Affiliation(s)
- Benedict Wild
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Goettingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Goettingen, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany
18
Abstract
Flow parsing is a way to estimate the direction of scene-relative motion of independently moving objects during self-motion of the observer. So far, this has been tested for simple geometric shapes such as dots or bars. Whether further cues such as prior knowledge about typical directions of an object’s movement, e.g., typical human motion, are considered in the estimations is currently unclear. Here, we adjudicated between the theory that the direction of scene-relative motion of humans is estimated exclusively by flow parsing, just like for simple geometric objects, and the theory that prior knowledge about biological motion affects estimation of perceived direction of scene-relative motion of humans. We placed a human point-light walker in optic flow fields that simulated forward motion of the observer. We introduced conflicts between biological features of the walker (i.e., facing and articulation) and the direction of scene-relative motion. We investigated whether perceived direction of scene-relative motion was biased towards biological features and compared the results to perceived direction of scene-relative motion of scrambled walkers and dot clouds. We found that for humans the perceived direction of scene-relative motion was biased towards biological features. Additionally, we found larger flow parsing gain for humans compared to the other walker types. This indicates that flow parsing is not the only visual mechanism relevant for estimating the direction of scene-relative motion of independently moving objects during self-motion: observers also rely on prior knowledge about typical object motion, such as typical facing and articulation of humans.
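At its core, the flow-parsing mechanism this abstract builds on can be caricatured as subtracting the self-motion flow component at the object's retinal location from the object's retinal motion. A minimal sketch, where the function name, example vectors, and the gain parameter are assumptions for illustration:

```python
# Flow parsing as vector subtraction: the scene-relative motion of an
# object is estimated by removing the optic-flow component caused by the
# observer's self-motion from the object's retinal motion. A gain below
# 1.0 models incomplete subtraction, as measured in flow-parsing studies.

def flow_parse(retinal_motion, self_motion_flow, gain=1.0):
    """Subtract the (gain-scaled) self-motion flow at the object's
    location from its retinal motion, component by component."""
    return tuple(r - gain * s for r, s in zip(retinal_motion, self_motion_flow))

# Forward self-motion makes a leftward-placed object stream leftward on
# the retina; parsing removes that component, leaving the object's
# scene-relative (here, upward) motion.
object_retinal = (-3.0, 1.0)   # deg/s on the retina
self_flow_here = (-3.0, 0.0)   # flow due to self-motion at that location
world_motion = flow_parse(object_retinal, self_flow_here)  # -> (0.0, 1.0)
```

The study's point is that for human walkers the perceived direction deviates from this pure subtraction, biased toward the walker's facing and articulation.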
19
The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. [PMID: 33127626 PMCID: PMC7688306 DOI: 10.1523/eneuro.0259-20.2020] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2020] [Revised: 10/17/2020] [Accepted: 10/22/2020] [Indexed: 12/03/2022] Open
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
20
Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 2020; 23:1004-1015. [PMID: 32541964 PMCID: PMC7474851 DOI: 10.1038/s41593-020-0656-0] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Accepted: 05/14/2020] [Indexed: 12/28/2022]
Abstract
Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. We examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates, but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.
21
Field DT, Biagi N, Inman LA. The role of the ventral intraparietal area (VIP/pVIP) in the perception of object-motion and self-motion. Neuroimage 2020; 213:116679. [DOI: 10.1016/j.neuroimage.2020.116679] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Revised: 01/15/2020] [Accepted: 02/23/2020] [Indexed: 10/24/2022] Open
22
A model of how depth facilitates scene-relative object motion perception. PLoS Comput Biol 2019; 15:e1007397. [PMID: 31725723 PMCID: PMC6879150 DOI: 10.1371/journal.pcbi.1007397] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Revised: 11/26/2019] [Accepted: 09/12/2019] [Indexed: 12/02/2022] Open
Abstract
Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer's retina and radically influences an object's retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object—otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object's retinal motion, improving the accuracy of the object's movement direction represented by motion signals. Research has shown that the accuracy with which humans perceive object motion during self-motion improves in the presence of stereo cues. Using a neural modelling approach, we explore whether this finding can be explained through improved estimation of the retinal motion induced by self-motion.
Our results show that depth cues that provide information about scene structure may have a large effect on the specificity with which the neural mechanisms for motion perception represent the visual self-motion signal. This in turn enables effective removal of the retinal motion due to self-motion when the goal is to perceive object motion relative to the stationary world. These results reveal a hitherto unknown critical function of stereo tuning in the MT-MST complex, and shed important light on how the brain may recruit signals from upstream and downstream brain areas to simultaneously perceive self-motion and object motion.
23
Zhang WH, Wang H, Chen A, Gu Y, Lee TS, Wong KM, Wu S. Complementary congruent and opposite neurons achieve concurrent multisensory integration and segregation. eLife 2019; 8:43753. [PMID: 31120416 PMCID: PMC6565362 DOI: 10.7554/elife.43753] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2018] [Accepted: 05/22/2019] [Indexed: 11/13/2022] Open
Abstract
Our brain perceives the world by exploiting multisensory cues to extract information about various aspects of external stimuli. The sensory cues from the same stimulus should be integrated to improve perception, and otherwise segregated to distinguish different stimuli. In reality, however, the brain faces the challenge of recognizing stimuli without knowing in advance the sources of sensory cues. To address this challenge, we propose that the brain conducts integration and segregation concurrently with complementary neurons. Studying the inference of heading-direction via visual and vestibular cues, we develop a network model with two reciprocally connected modules modeling interacting visual-vestibular areas. In each module, there are two groups of neurons whose tunings under each sensory cue are either congruent or opposite. We show that congruent neurons implement integration, while opposite neurons compute cue disparity information for segregation, and the interplay between two groups of neurons achieves efficient multisensory information processing.
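The division of labor proposed here can be illustrated with a toy readout in which a congruent-style signal supports fusion while an opposite-style signal carries the cue conflict used for segregation. The linear readout and example values below are purely illustrative assumptions, not the paper's recurrent network model:

```python
# Toy illustration of complementary congruent/opposite readouts for
# heading estimation from visual and vestibular cues (values in degrees).

def integrate(visual, vestibular, w_visual=0.5):
    """Congruent-style readout: weighted combination of the two cues."""
    return w_visual * visual + (1.0 - w_visual) * vestibular

def disparity(visual, vestibular):
    """Opposite-style readout: signed cue conflict, which downstream
    circuits can use to decide whether cues share a common source."""
    return visual - vestibular

# Small conflict -> the integrated estimate is trustworthy;
# large conflict -> the cues likely arise from different stimuli.
heading_est = integrate(12.0, 8.0)   # -> 10.0
conflict = disparity(12.0, 8.0)      # -> 4.0
```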
Affiliation(s)
- Wen-Hao Zhang
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong; Center of the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States
- He Wang
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong
- Aihua Chen
- Key Laboratory of Brain Functional Genomics, Primate Research Center, East China Normal University, Shanghai, China
- Yong Gu
- Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Tai Sing Lee
- Center of the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States
- Ky Michael Wong
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong
- Si Wu
- School of Electronics Engineering and Computer Science, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
24
Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. [PMID: 30996126 DOI: 10.1073/pnas.1820373116] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
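The stationarity judgment in this dual-report task can be sketched as Bayesian model comparison between a common-cause hypothesis (the object is stationary and its image motion is due to self-motion) and an independent-motion hypothesis. The Gaussian likelihoods and all parameter values below are assumptions for illustration, not the study's fitted model:

```python
# Causal-inference sketch: posterior probability that an object is
# stationary in the world, given its apparent world-frame speed.
import math

def posterior_stationary(observed_speed, sigma=1.0, prior_stationary=0.6,
                         moving_speed_sd=5.0):
    """Compare a 'stationary' hypothesis (speed ~ 0 up to sensory noise
    sigma) against a 'moving' hypothesis (broad speed distribution)."""
    # Likelihood under the stationary (common-cause) hypothesis
    like_stat = math.exp(-0.5 * (observed_speed / sigma) ** 2) \
        / (sigma * math.sqrt(2 * math.pi))
    # Likelihood under the independently-moving hypothesis
    s = math.hypot(sigma, moving_speed_sd)
    like_move = math.exp(-0.5 * (observed_speed / s) ** 2) \
        / (s * math.sqrt(2 * math.pi))
    num = prior_stationary * like_stat
    return num / (num + (1.0 - prior_stationary) * like_move)

# Slow apparent object motion favors "stationary" (attribute the image
# motion to self-motion); fast motion favors "moving in the world".
```

This captures the qualitative prediction the study tests: stationarity reports, and the heading biases they accompany, depend on object speed.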
25
Sasaki R, Angelaki DE, DeAngelis GC. Processing of object motion and self-motion in the lateral subdivision of the medial superior temporal area in macaques. J Neurophysiol 2019; 121:1207-1221. [PMID: 30699042 DOI: 10.1152/jn.00497.2018] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals. Moreover, little is known about how self-motion and object motion signals interact in MSTl and how this differs from interactions in MSTd. We compared the visual and vestibular heading tuning of neurons in MSTl and MSTd using identical stimuli. Our findings reveal that both visual and vestibular heading signals are weaker in MSTl than in MSTd, suggesting that MSTl is less well suited to participate in self-motion perception than MSTd. We also tested neurons in both areas with a variety of combinations of object motion and self-motion. Our findings reveal that vestibular signals improve the separability of coding of heading and object direction in both areas, albeit more strongly in MSTd due to the greater strength of vestibular signals. Based on a marginalization technique, population decoding reveals that heading and object direction can be more effectively dissociated from MSTd responses than MSTl responses. Our findings help to clarify the respective contributions that MSTl and MSTd make to processing of object motion and self-motion, although our conclusions may be somewhat specific to the multipart moving objects that we employed. NEW & NOTEWORTHY Retinal image motion reflects contributions from both the observer's self-motion and the movement of objects in the environment. The neural mechanisms by which the brain dissociates self-motion and object motion remain unclear. 
This study provides the first systematic examination of how the lateral subdivision of area MST (MSTl) contributes to dissociating object motion and self-motion. We also examine, for the first time, how MSTl neurons represent translational self-motion based on both vestibular and visual cues.
Affiliation(s)
- Ryo Sasaki
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
26
Nagy AJ, Takeuchi Y, Berényi A. Coding of self-motion-induced and self-independent visual motion in the rat dorsomedial striatum. PLoS Biol 2018; 16:e2004712. [PMID: 29939998 PMCID: PMC6034886 DOI: 10.1371/journal.pbio.2004712] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2017] [Revised: 07/06/2018] [Accepted: 06/11/2018] [Indexed: 11/21/2022] Open
Abstract
Evolutionary development of vision has provided us with the capacity to detect moving objects. Concordant shifts of visual features suggest movements of the observer, whereas discordant changes are more likely to indicate independently moving objects, such as predators or prey. Such distinction helps us to focus attention, adapt our behavior, and adjust our motor patterns to meet behavioral challenges. However, the neural basis of distinguishing self-induced from self-independent visual motion has not yet been clarified in unrestrained animals. In this study, we investigated the presence and origin of motion-related visual information in the striatum of rats, a hub of action selection and procedural memory. We found that while almost half of the neurons in the dorsomedial striatum are sensitive to visual motion congruent with locomotion (and many of them also code for spatial location), only a small subset, composed of fast-firing interneurons, also responds to self-independent visual stimuli. These latter cells receive their visual input at least partially from the secondary visual cortex (V2). This differential visual sensitivity may be an important support in adjusting behavior to salient environmental events. It emphasizes the importance of investigating visual motion perception in unrestrained animals.
Affiliation(s)
- Anett J. Nagy
- MTA-SZTE “Momentum” Oscillatory Neuronal Networks Research Group, Department of Physiology, University of Szeged, Szeged, Hungary
- Yuichi Takeuchi
- MTA-SZTE “Momentum” Oscillatory Neuronal Networks Research Group, Department of Physiology, University of Szeged, Szeged, Hungary
- Antal Berényi
- MTA-SZTE “Momentum” Oscillatory Neuronal Networks Research Group, Department of Physiology, University of Szeged, Szeged, Hungary
- Neuroscience Institute, New York University, New York, New York, United States of America
27
How Does the Brain Tell Self-Motion from Object Motion? J Neurosci 2018; 38:3875-3877. [PMID: 29669798 DOI: 10.1523/jneurosci.0039-18.2018] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2018] [Revised: 02/22/2018] [Accepted: 03/01/2018] [Indexed: 11/21/2022] Open