1. Gebehart C, Büschges A. The processing of proprioceptive signals in distributed networks: insights from insect motor control. J Exp Biol 2024;227:jeb246182. PMID: 38180228. DOI: 10.1242/jeb.246182.
Abstract
The integration of sensory information is required to maintain body posture and to generate robust yet flexible locomotion through unpredictable environments. To anticipate required adaptations in limb posture and enable compensation of sudden perturbations, an animal's nervous system assembles external (exteroception) and internal (proprioception) cues. Coherent neuronal representations of the proprioceptive context of the body and the appendages arise from the concerted action of multiple sense organs monitoring body kinetics and kinematics. This multimodal proprioceptive information, together with exteroceptive signals and brain-derived descending motor commands, converges onto premotor networks - i.e. the local neuronal circuitry controlling motor output and movements - within the ventral nerve cord (VNC), the insect equivalent of the vertebrate spinal cord. This Review summarizes existing knowledge and recent advances in understanding how local premotor networks in the VNC use convergent information to generate contextually appropriate activity, focusing on the example of posture control. We compare the role and advantages of distributed sensory processing over dedicated neuronal pathways, and the challenges of multimodal integration in distributed networks. We discuss how the gain of distributed networks may be tuned to enable the behavioral repertoire of these systems, and argue that insect premotor networks might compensate for their limited neuronal population size by, in comparison to vertebrate networks, relying more heavily on the specificity of their connections. At a time in which connectomics and physiological recording techniques enable anatomical and functional circuit dissection at an unprecedented resolution, insect motor systems offer unique opportunities to identify the mechanisms underlying multimodal integration for flexible motor control.
Affiliation(s)
- Corinna Gebehart: Champalimaud Foundation, Champalimaud Research, 1400-038 Lisbon, Portugal
- Ansgar Büschges: Department of Animal Physiology, Institute of Zoology, Biocenter Cologne, University of Cologne, Zülpicher Strasse 47b, 50674 Cologne, Germany

2. Falconbridge M, Stamps RL, Edwards M, Badcock DR. Target motion misjudgments reflect a misperception of the background; revealed using continuous psychophysics. Iperception 2023;14:20416695231214439. PMID: 38680843. PMCID: PMC11046177. DOI: 10.1177/20416695231214439.
Abstract
Determining the velocities of target objects as we navigate complex environments is made more difficult by the fact that our own motion adds systematic motion signals to the visual scene. The flow-parsing hypothesis asserts that the background motion is subtracted from visual scenes in such cases as a way for the visual system to determine target motions relative to the scene. Here, we address the question of why backgrounds are only partially subtracted in lab settings. At the same time, we probe a much-neglected aspect of scene perception in flow-parsing studies: the perception of the background itself. We present results from three experienced psychophysical participants and one inexperienced participant who took part in three continuous psychophysics experiments. We show that, when the background optic flow pattern is composed of local elements whose motions are congruent with the global optic flow pattern, the incompleteness of the background subtraction can be entirely accounted for by a misperception of the background. When the local velocities comprising the background are randomly dispersed around the average global velocity, an additional factor is needed to explain the subtraction incompleteness. We show that a model in which background perception results from the brain attempting to infer scene motion due to self-motion can account for these results.
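
A compact way to state the flow-parsing account tested here (a standard formalization with an assumed gain parameter, not necessarily the authors' notation): the perceived scene-relative target velocity is the retinal velocity minus a gain-scaled copy of the background flow,

$$\hat{v}_{\mathrm{target}} = v_{\mathrm{retinal}} - g\,v_{\mathrm{background}}, \qquad 0 < g \le 1,$$

where a flow-parsing gain g < 1 corresponds to incomplete background subtraction. The experiments above ask whether the apparent shortfall instead reflects a misperceived v_background rather than a low g.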
Affiliation(s)
- Michael Falconbridge: School of Psychology, University of Western Australia, Crawley, Western Australia, Australia
- Robert L. Stamps: Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba, Canada
- Mark Edwards: Research School of Psychology, Australian National University, Canberra, Australia
- David R. Badcock: School of Psychology, University of Western Australia, Crawley, Western Australia, Australia

3. Lin R, Zeng F, Wang Q, Chen A. Cross-Modal Plasticity during Self-Motion Perception. Brain Sci 2023;13:1504. PMID: 38002465. PMCID: PMC10669852. DOI: 10.3390/brainsci13111504.
Abstract
To maintain stable and coherent perception in an ever-changing environment, the brain needs to continuously and dynamically calibrate information from multiple sensory sources, using sensory and non-sensory information in a flexible manner. Here, we review how the vestibular and visual signals are recalibrated during self-motion perception. We illustrate two different types of recalibration: one long-term cross-modal (visual-vestibular) recalibration concerning how multisensory cues recalibrate over time in response to a constant cue discrepancy, and one rapid-term cross-modal (visual-vestibular) recalibration concerning how recent prior stimuli and choices differentially affect subsequent self-motion decisions. In addition, we highlight the neural substrates of long-term visual-vestibular recalibration, with profound differences observed in neuronal recalibration across multisensory cortical areas. We suggest that multisensory recalibration is a complex process in the brain, is modulated by many factors, and requires the coordination of many distinct cortical areas. We hope this review will shed some light on research into the neural circuits of visual-vestibular recalibration and help develop a more generalized theory for cross-modal plasticity.
Affiliation(s)
- Rushi Lin, Fu Zeng, Qingjun Wang: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Aihua Chen: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai 200122, China

4. Johnston WJ, Freedman DJ. Redundant representations are required to disambiguate simultaneously presented complex stimuli. PLoS Comput Biol 2023;19:e1011327. PMID: 37556470. PMCID: PMC10442167. DOI: 10.1371/journal.pcbi.1011327.
Abstract
A pedestrian crossing a street during rush hour often looks and listens for potential danger. When they hear several different horns, they localize the cars that are honking and decide whether or not they need to modify their motor plan. How does the pedestrian use this auditory information to pick out the corresponding cars in visual space? The integration of distributed representations like these is called the assignment problem, and it must be solved to integrate distinct representations not only across but also within sensory modalities. Here, we identify and analyze a solution to the assignment problem: the representation of one or more common stimulus features in pairs of relevant brain regions. For example, estimates of the spatial position of cars are represented in both the visual and auditory systems. We characterize how the reliability of this solution depends on different features of the stimulus set (e.g., the size of the set and the complexity of the stimuli) and the details of the split representations (e.g., the precision of each stimulus representation and the amount of overlapping information). Next, we implement this solution in a biologically plausible receptive field code and show how constraints on the number of neurons and spikes used by the code force the brain to navigate a tradeoff between local and catastrophic errors. We show that, when many spikes and neurons are available, representing stimuli from a single sensory modality can be done more reliably across multiple brain regions, despite the risk of assignment errors. Finally, we show that a feedforward neural network can learn the optimal solution to the assignment problem, even when it receives inputs in two distinct representational formats. We also discuss relevant results on assignment errors from the human working memory literature and show that several key predictions of our theory already have support.
Affiliation(s)
- W. Jeffrey Johnston: Graduate Program in Computational Neuroscience and the Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America; Center for Theoretical Neuroscience and Mortimer B. Zuckerman Mind, Brain and Behavior Institute, Columbia University, New York, New York, United States of America
- David J. Freedman: Graduate Program in Computational Neuroscience and the Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America; Neuroscience Institute, The University of Chicago, Chicago, Illinois, United States of America

5. Keshavarzi S, Velez-Fort M, Margrie TW. Cortical Integration of Vestibular and Visual Cues for Navigation, Visual Processing, and Perception. Annu Rev Neurosci 2023;46:301-320. PMID: 37428601. DOI: 10.1146/annurev-neuro-120722-100503.
Abstract
Despite increasing evidence of its involvement in several key functions of the cerebral cortex, the vestibular sense rarely enters our consciousness. Indeed, the extent to which these internal signals are incorporated within cortical sensory representation and how they might be relied upon for sensory-driven decision-making, during, for example, spatial navigation, is yet to be understood. Recent novel experimental approaches in rodents have probed both the physiological and behavioral significance of vestibular signals and indicate that their widespread integration with vision improves both the cortical representation and perceptual accuracy of self-motion and orientation. Here, we summarize these recent findings with a focus on cortical circuits involved in visual perception and spatial navigation and highlight the major remaining knowledge gaps. We suggest that vestibulo-visual integration reflects a process of constant updating regarding the status of self-motion, and access to such information by the cortex is used for sensory perception and predictions that may be implemented for rapid, navigation-related decision-making.
Affiliation(s)
- Sepiedeh Keshavarzi, Mateo Velez-Fort, Troy W Margrie: The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom

6. Yan M, Zhang WH, Wang H, Wong KYM. Bimodular continuous attractor neural networks with static and moving stimuli. Phys Rev E 2023;107:064302. PMID: 37464697. DOI: 10.1103/physreve.107.064302.
Abstract
We investigated the dynamical behaviors of bimodular continuous attractor neural networks, each processing a modality of sensory input and interacting with each other. We found that when bumps coexist in both modules, the position of each bump is shifted towards the other input when the intermodular couplings are excitatory and is shifted away when inhibitory. When one intermodular coupling is excitatory while another is moderately inhibitory, temporally modulated population spikes can be generated. On further increase of the inhibitory coupling, momentary spikes will emerge. In the regime of bump coexistence, bump heights are primarily strengthened by excitatory intermodular couplings, but there is a lesser weakening effect due to a bump being displaced from the direct input. When bimodular networks serve as decoders of multisensory integration, we extend the Bayesian framework to show that excitatory and inhibitory couplings encode attractive and repulsive priors, respectively. At low disparity, the bump positions decode the posterior means in the Bayesian framework, whereas at high disparity, multiple steady states exist. In the regime of multiple steady states, the less stable state can be accessed if the input causing the more stable state arrives after a sufficiently long delay. When one input is moving, the bump in the corresponding module is pinned when the moving stimulus is weak, unpinned at intermediate stimulus strength, and tracks the input at strong stimulus strength, and the stimulus strengths for these transitions increase with the velocity of the moving stimulus. These results are important to understanding multisensory integration of static and dynamic stimuli.
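
The bump dynamics described above follow the standard divisively normalized CANN formalism. Below is a minimal, illustrative sketch (assumed parameters and a generic rate model, not the paper's exact equations) of two ring-attractor modules whose intermodular coupling J_x can be set excitatory (attractive prior) or inhibitory (repulsive prior):

```python
import numpy as np

# Two coupled ring-attractor modules; all parameters are illustrative assumptions.
N = 180
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)  # preferred stimulus angles

def ring_gauss(d, width=0.5):
    d = np.angle(np.exp(1j * d))            # wrap angular differences to (-pi, pi]
    return np.exp(-d**2 / (2 * width**2))

D = theta[:, None] - theta[None, :]
J_rec = 1.0 * ring_gauss(D) / N             # recurrent coupling within each module
J_x = 0.3 * ring_gauss(D) / N               # intermodular coupling (>0 attractive, <0 repulsive)

def simulate(cue1, cue2, steps=3000, dt=0.01, tau=1.0, k=0.05, amp=0.8):
    u1 = np.zeros(N)
    u2 = np.zeros(N)
    I1 = amp * ring_gauss(theta - cue1)     # sensory input to module 1
    I2 = amp * ring_gauss(theta - cue2)     # sensory input to module 2
    for _ in range(steps):
        r1 = np.maximum(u1, 0.0) ** 2
        r1 /= 1.0 + k * r1.sum()            # divisive normalization
        r2 = np.maximum(u2, 0.0) ** 2
        r2 /= 1.0 + k * r2.sum()
        u1 += dt / tau * (-u1 + J_rec @ r1 + J_x @ r2 + I1)
        u2 += dt / tau * (-u2 + J_rec @ r2 + J_x @ r1 + I2)
    # decode each bump position with a population vector
    decode = lambda u: np.angle(np.sum(np.maximum(u, 0.0) ** 2 * np.exp(1j * theta)))
    return decode(u1), decode(u2)

# With J_x > 0 the two bumps are pulled toward each other (attractive prior);
# flipping the sign of J_x pushes them apart (repulsive prior).
print(simulate(cue1=-0.3, cue2=+0.3))
```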
Affiliation(s)
- Min Yan: Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China
- Wen-Hao Zhang: Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, Texas 75390, USA; O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, Texas 75390, USA
- He Wang: Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China; Hong Kong University of Science and Technology, Shenzhen Research Institute, Shenzhen 518057, China
- K Y Michael Wong: Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China

7. Zeng F, Zaidel A, Chen A. Contrary neuronal recalibration in different multisensory cortical areas. eLife 2023;12:82895. PMID: 36877555. PMCID: PMC9988259. DOI: 10.7554/elife.82895.
Abstract
The adult brain demonstrates remarkable multisensory plasticity by dynamically recalibrating itself based on information from multiple sensory sources. After a systematic visual-vestibular heading offset is experienced, the unisensory perceptual estimates for subsequently presented stimuli are shifted toward each other (in opposite directions) to reduce the conflict. The neural substrate of this recalibration is unknown. Here, we recorded single-neuron activity from the dorsal medial superior temporal (MSTd), parietoinsular vestibular cortex (PIVC), and ventral intraparietal (VIP) areas in three male rhesus macaques during this visual-vestibular recalibration. Both visual and vestibular neuronal tuning curves in MSTd shifted, each according to their respective cues' perceptual shifts. Tuning of vestibular neurons in PIVC also shifted in the same direction as vestibular perceptual shifts (cells were not robustly tuned to the visual stimuli). By contrast, VIP neurons demonstrated a unique phenomenon: both vestibular and visual tuning shifted in accordance with vestibular perceptual shifts, such that visual tuning shifted, surprisingly, contrary to visual perceptual shifts. Therefore, while unsupervised recalibration (to reduce cue conflict) occurs in early multisensory cortices, higher-level VIP reflects only a global shift in vestibular space.
Affiliation(s)
- Fu Zeng: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China
- Adam Zaidel: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Aihua Chen: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China

8. Gao W, Lin Y, Shen J, Han J, Song X, Lu Y, Zhan H, Li Q, Ge H, Lin Z, Shi W, Drugowitsch J, Tang H, Chen X. Diverse effects of gaze direction on heading perception in humans. Cereb Cortex 2023. PMID: 36734278. DOI: 10.1093/cercor/bhac541.
Abstract
Gaze changes can misalign the spatial reference frames in which visual and vestibular signals are encoded in cortex, which may affect heading discrimination. Here, by systematically manipulating the eye-in-head and head-on-body positions to change subjects' gaze direction, we tested heading discrimination with visual, vestibular, and combined stimuli in a reaction-time task, in which subjects controlled their own response times. Gaze changes induced substantial biases in perceived heading and increased discrimination thresholds and reaction times in all stimulus conditions. For the visual stimulus, the gaze effects were induced by changing the eye-in-world position, and the perceived heading was biased in the direction opposite to the gaze. In contrast, the vestibular gaze effects were induced by changing the eye-in-head position, and the perceived heading was biased in the same direction as the gaze. Although the bias was reduced when the visual and vestibular stimuli were combined, integration of the two signals deviated substantially from the predictions of an extended diffusion model that accumulates evidence optimally over time and across sensory modalities. These findings reveal diverse gaze effects on heading discrimination and suggest that the transformation of spatial reference frames may underlie these effects.
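
For reference, the benchmark invoked above (optimal accumulation across time and modalities) is commonly written as a drift-diffusion process on the combined evidence; the notation below is a generic formalization assumed here, not taken from the paper:

$$dz(t) = \big[k_{\mathrm{vis}}\,c_{\mathrm{vis}}(t) + k_{\mathrm{ves}}\,c_{\mathrm{ves}}(t)\big]\,s\,dt + dW(t),$$

where s is the heading, c_m(t) is the momentary reliability of modality m, k_m its sensitivity, and W a standard Wiener process; a choice is made when z(t) reaches a decision bound. The reported result is that combined-cue behavior deviates systematically from this model's predictions.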
Affiliation(s)
- Wei Gao, Yipeng Lin, Yukun Lu, Huijia Zhan, Qianbing Li, Haoting Ge, Xiaodong Chen: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Jiangrong Shen, Jianing Han, Huajin Tang: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaoxiao Song: Department of Liberal Arts, School of Art Administration and Education, China Academy of Art, 218 Nanshan Road, Shangcheng District, Hangzhou 310002, China
- Zheng Lin: Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou 310009, China
- Wenlei Shi: Center for the Study of the History of Chinese Language and Center for the Study of Language and Cognition, Zhejiang University, 866 Yuhangtang Road, Xihu District, Hangzhou 310058, China
- Jan Drugowitsch: Department of Neurobiology, Harvard Medical School, Longwood Avenue 220, Boston, MA 02116, United States

9. Horrocks EAB, Mareschal I, Saleem AB. Walking humans and running mice: perception and neural encoding of optic flow during self-motion. Philos Trans R Soc Lond B Biol Sci 2023;378:20210450. PMID: 36511417. PMCID: PMC9745880. DOI: 10.1098/rstb.2021.0450.
Abstract
Locomotion produces full-field optic flow that often dominates the visual motion inputs to an observer. The perception of optic flow is in turn important for animals to guide their heading and interact with moving objects. Understanding how locomotion influences optic flow processing and perception is therefore essential to understand how animals successfully interact with their environment. Here, we review research investigating how perception and neural encoding of optic flow are altered during self-motion, focusing on locomotion. Self-motion has been found to influence estimation and sensitivity for optic flow speed and direction. Nonvisual self-motion signals also increase compensation for self-driven optic flow when parsing the visual motion of moving objects. The integration of visual and nonvisual self-motion signals largely follows principles of Bayesian inference and can improve the precision and accuracy of self-motion perception. The calibration of visual and nonvisual self-motion signals is dynamic, reflecting the changing visuomotor contingencies across different environmental contexts. Throughout this review, we consider experimental research using humans, non-human primates and mice. We highlight experimental challenges and opportunities afforded by each of these species and draw parallels between experimental findings. These findings reveal a profound influence of locomotion on optic flow processing and perception across species. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Edward A. B. Horrocks: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Isabelle Mareschal: School of Biological and Behavioural Sciences, Queen Mary, University of London, London E1 4NS, UK
- Aman B. Saleem: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK

10. Layton OW, Parade MS, Fajen BR. The accuracy of object motion perception during locomotion. Front Psychol 2023;13:1068454. PMID: 36710725. PMCID: PMC9878598. DOI: 10.3389/fpsyg.2022.1068454.
Abstract
Human observers are capable of perceiving the motion of moving objects relative to the stationary world, even while undergoing self-motion. Perceiving world-relative object motion is complicated because the local optical motion of objects is influenced by both observer and object motion, and reflects object motion in observer coordinates. It has been proposed that observers recover world-relative object motion using global optic flow to factor out the influence of self-motion. However, object-motion judgments during simulated self-motion are biased, as if the visual system cannot completely compensate for the influence of self-motion. Recently, Xie et al. demonstrated that humans are capable of accurately judging world-relative object motion when self-motion is real, actively generated by walking, and accompanied by optic flow. However, the conditions used in that study differ from those found in the real world in that the moving object was a small dot with negligible optical expansion that moved at a fixed speed in retinal (rather than world) coordinates and was only visible for 500 ms. The present study investigated the accuracy of object motion perception under more ecologically valid conditions. Subjects judged the trajectory of an object that moved through a virtual environment viewed through a head-mounted display. Judgments exhibited bias in the case of simulated self-motion but were accurate when self-motion was real, actively generated, and accompanied by optic flow. The findings are largely consistent with the conclusions of Xie et al. and demonstrate that observers are capable of accurately perceiving world-relative object motion under ecologically valid conditions.
Affiliation(s)
- Oliver W. Layton: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States; Department of Computer Science, Colby College, Waterville, ME, United States
- Melissa S. Parade: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States
- Brett R. Fajen: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States

11. Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022;11:74971. PMID: 35642599. PMCID: PMC9159750. DOI: 10.7554/elife.74971.
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
Affiliation(s)
- HyungGoo R Kim: Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki: Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States

12. Noel JP, Shivkumar S, Dokka K, Haefner RM, Angelaki DE. Aberrant causal inference and presence of a compensatory mechanism in autism spectrum disorder. eLife 2022;11:71866. PMID: 35579424. PMCID: PMC9170250. DOI: 10.7554/elife.71866.
Abstract
Autism spectrum disorder (ASD) is characterized by a panoply of social, communicative, and sensory anomalies. As such, a central goal of computational psychiatry is to ascribe the heterogenous phenotypes observed in ASD to a limited set of canonical computations that may have gone awry in the disorder. Here, we posit causal inference - the process of inferring a causal structure linking sensory signals to hidden world causes - as one such computation. We show that audio-visual integration is intact in ASD and in line with optimal models of cue combination, yet multisensory behavior is anomalous in ASD because this group operates under an internal model favoring integration (vs. segregation). Paradoxically, during explicit reports of common cause across spatial or temporal disparities, individuals with ASD were less and not more likely to report common cause, particularly at small cue disparities. Formal model fitting revealed differences in both the prior probability for common cause (p-common) and choice biases, which are dissociable in implicit but not explicit causal inference tasks. Together, this pattern of results suggests (i) different internal models in attributing world causes to sensory signals in ASD relative to neurotypical individuals given identical sensory cues, and (ii) the presence of an explicit compensatory mechanism in ASD, with these individuals putatively having learned to compensate for their bias to integrate in explicit reports.
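
The causal-inference computation referred to here follows the canonical Bayesian form (standard notation; the study's exact parameterization may differ): given sensory measurements x_A and x_V, the posterior probability of a common cause is

$$p(C=1 \mid x_A, x_V) = \frac{p(x_A, x_V \mid C=1)\,p_{\mathrm{common}}}{p(x_A, x_V \mid C=1)\,p_{\mathrm{common}} + p(x_A, x_V \mid C=2)\,(1 - p_{\mathrm{common}})},$$

with the final estimate mixing the fused (C = 1) and segregated (C = 2) estimates according to this posterior. An elevated fitted p-common corresponds to the internal model favoring integration described above.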
Affiliation(s)
- Jean-Paul Noel: Center for Neural Science, New York University, New York City, United States
- Kalpana Dokka: Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Ralf M Haefner: Brain and Cognitive Sciences, University of Rochester, Rochester, United States
- Dora E Angelaki: Center for Neural Science, New York University, New York City, United States; Department of Neuroscience, Baylor College of Medicine, Houston, United States

13. Chaudhary S, Saywell N, Taylor D. The Differentiation of Self-Motion From External Motion Is a Prerequisite for Postural Control: A Narrative Review of Visual-Vestibular Interaction. Front Hum Neurosci 2022;16:697739. PMID: 35210998. PMCID: PMC8860980. DOI: 10.3389/fnhum.2022.697739.
Abstract
The visual system is a source of sensory information that perceives environmental stimuli and interacts with other sensory systems to generate visual and postural responses that maintain postural stability. Although the three sensory systems (visual, vestibular, and somatosensory) work concurrently to maintain postural control, the interaction between the visual and vestibular systems is vital for differentiating self-motion from external motion. The visual system influences postural control, playing a key role in perceiving the information required for this differentiation. Its main afferent information consists of optic flow and retinal slip, which lead to the generation of visual and postural responses. Visual fixations generated by the visual system interact with this afferent information and with the vestibular system to maintain visual and postural stability. This review synthesizes the roles of the visual system and its interaction with the vestibular system in maintaining postural stability.

14. Motyka P, Akbal M, Litwin P. Forward optic flow is prioritised in visual awareness independently of walking direction. PLoS One 2021;16:e0250905. PMID: 33945563. PMCID: PMC8096117. DOI: 10.1371/journal.pone.0250905.
Abstract
When two different images are presented separately to each eye, one experiences smooth transitions between them, a phenomenon called binocular rivalry. Previous studies have shown that exposure to signals from other senses can enhance the access of stimulation-congruent images to conscious perception. However, despite our ability to infer perceptual consequences from bodily movements, evidence that action can have an analogous influence on visual awareness is scarce and mainly limited to hand movements. Here, we investigated whether one's direction of locomotion affects perceptual access to optic flow patterns during binocular rivalry. Participants walked forwards and backwards on a treadmill while viewing highly realistic visualisations of self-motion in a virtual environment. We hypothesised that visualisations congruent with walking direction would predominate in visual awareness over incongruent ones, and that this effect would increase with the precision of one's active proprioception. These predictions were not confirmed: optic flow consistent with forward locomotion was prioritised in visual awareness independently of walking direction and proprioceptive abilities. Our findings suggest a limited role of kinaesthetic-proprioceptive information in disambiguating the visually perceived direction of self-motion and indicate that vision might be tuned to the (expanding) optic flow patterns prevalent in everyday life.
Affiliation(s)
- Paweł Motyka: Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Mert Akbal: Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Academy of Fine Arts Saar, Saarbrücken, Germany
- Piotr Litwin: Faculty of Psychology, University of Warsaw, Warsaw, Poland

15. Wild B, Treue S. Primate extrastriate cortical area MST: a gateway between sensation and cognition. J Neurophysiol 2021;125:1851-1882. PMID: 33656951. DOI: 10.1152/jn.00384.2020.
Abstract
Primate visual cortex consists of dozens of distinct brain areas, each providing a highly specialized component to the sophisticated task of encoding the incoming sensory information and creating a representation of our visual environment that underlies our perception and action. One such area is the medial superior temporal cortex (MST), a motion-sensitive, direction-selective part of the primate visual cortex. It receives most of its input from the middle temporal (MT) area, but MST cells have larger receptive fields and respond to more complex motion patterns. The finding that MST cells are tuned for optic flow patterns has led to the suggestion that the area plays an important role in the perception of self-motion. This hypothesis has received further support from studies showing that some MST cells also respond selectively to vestibular cues. Furthermore, the area is part of a network that controls the planning and execution of smooth pursuit eye movements and its activity is modulated by cognitive factors, such as attention and working memory. This review of more than 90 studies focuses on providing clarity of the heterogeneous findings on MST in the macaque cortex and its putative homolog in the human cortex. From this analysis of the unique anatomical and functional position in the hierarchy of areas and processing steps in primate visual cortex, MST emerges as a gateway between perception, cognition, and action planning. Given this pivotal role, this area represents an ideal model system for the transition from sensation to cognition.
Affiliation(s)
- Benedict Wild: Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Goettingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Goettingen, Goettingen, Germany
- Stefan Treue: Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany

16.
Abstract
Flow parsing is a way to estimate the direction of scene-relative motion of independently moving objects during self-motion of the observer. So far, this has been tested for simple geometric shapes such as dots or bars. Whether further cues such as prior knowledge about typical directions of an object’s movement, e.g., typical human motion, are considered in the estimations is currently unclear. Here, we adjudicated between the theory that the direction of scene-relative motion of humans is estimated exclusively by flow parsing, just like for simple geometric objects, and the theory that prior knowledge about biological motion affects estimation of perceived direction of scene-relative motion of humans. We placed a human point-light walker in optic flow fields that simulated forward motion of the observer. We introduced conflicts between biological features of the walker (i.e., facing and articulation) and the direction of scene-relative motion. We investigated whether perceived direction of scene-relative motion was biased towards biological features and compared the results to perceived direction of scene-relative motion of scrambled walkers and dot clouds. We found that for humans the perceived direction of scene-relative motion was biased towards biological features. Additionally, we found larger flow parsing gain for humans compared to the other walker types. This indicates that flow parsing is not the only visual mechanism relevant for estimating the direction of scene-relative motion of independently moving objects during self-motion: observers also rely on prior knowledge about typical object motion, such as typical facing and articulation of humans.

17. The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020;7:ENEURO.0259-20.2020. PMID: 33127626. PMCID: PMC7688306. DOI: 10.1523/eneuro.0259-20.2020.
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.

18. Mohl JT, Pearson JM, Groh JM. Monkeys and humans implement causal inference to simultaneously localize auditory and visual stimuli. J Neurophysiol 2020;124:715-727. PMID: 32727263. DOI: 10.1152/jn.00046.2020.
Abstract
The environment is sampled by multiple senses, which are woven together to produce a unified perceptual state. However, optimally unifying such signals requires assigning particular signals to the same or different underlying objects or events. Many prior studies (especially in animals) have assumed fusion of cross-modal information, whereas recent work in humans has begun to probe the appropriateness of this assumption. Here we present results from a novel behavioral task in which both monkeys (Macaca mulatta) and humans localized visual and auditory stimuli and reported their perceived sources through saccadic eye movements. When the locations of visual and auditory stimuli were widely separated, subjects made two saccades, while when the two stimuli were presented at the same location they made only a single saccade. Intermediate levels of separation produced mixed response patterns: a single saccade to an intermediate position on some trials or separate saccades to both locations on others. The distribution of responses was well described by a hierarchical causal inference model that accurately predicted both the explicit "same vs. different" source judgments as well as biases in localization of the source(s) under each of these conditions. The results from this task are broadly consistent with prior work in humans across a wide variety of analogous tasks, extending the study of multisensory causal inference to nonhuman primates and to a natural behavioral task with both a categorical assay of the number of perceived sources and a continuous report of the perceived position of the stimuli.

NEW & NOTEWORTHY We developed a novel behavioral paradigm for the study of multisensory causal inference in both humans and monkeys and found that both species make causal judgments in the same Bayes-optimal fashion. To our knowledge, this is the first demonstration of behavioral causal inference in animals, and this cross-species comparison lays the groundwork for future experiments using neuronal recording techniques that are impractical or impossible in human subjects.
Affiliation(s)
- Jeff T Mohl: Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina
- John M Pearson: Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Biostatistics and Bioinformatics, Duke University Medical School, Durham, North Carolina
- Jennifer M Groh: Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina

19. Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 2020;23:1004-1015. PMID: 32541964. PMCID: PMC7474851. DOI: 10.1038/s41593-020-0656-0.
Abstract
Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. We examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates, but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.

20. White O, Gaveau J, Bringoux L, Crevecoeur F. The gravitational imprint on sensorimotor planning and control. J Neurophysiol 2020;124:4-19. PMID: 32348686. DOI: 10.1152/jn.00381.2019.
Abstract
Humans excel at learning complex tasks, and elite performers such as musicians or athletes develop motor skills that defy biomechanical constraints. All actions require the movement of massive bodies. Of particular interest in the process of sensorimotor learning and control is the impact of gravitational forces on the body. Indeed, efficient control and accurate internal representations of the body configuration in space depend on our ability to feel and anticipate the action of gravity. Here we review studies on perception and sensorimotor control in both normal and altered gravity. Behavioral and modeling studies together suggested that the nervous system develops efficient strategies to take advantage of gravitational forces across a wide variety of tasks. However, when the body was exposed to altered gravity, the rate and amount of adaptation exhibited substantial variation from one experiment to another and sometimes led to partial adjustment only. Overall, these results support the hypothesis that the brain uses a multimodal and flexible representation of the effect of gravity on our body and movements. Future work is necessary to better characterize the nature of this internal representation and the extent to which it can adapt to novel contexts.
Affiliation(s)
- O White: INSERM UMR1093-CAPS, UFR des Sciences du Sport, Université Bourgogne Franche-Comté, Dijon, France
- J Gaveau: INSERM UMR1093-CAPS, UFR des Sciences du Sport, Université Bourgogne Franche-Comté, Dijon, France
- L Bringoux: Institut des Sciences du Mouvement, CNRS, Aix Marseille Université, Marseille, France
- F Crevecoeur: Institute of Communication and Information Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium; Institute of Neuroscience (IoNS), UCLouvain, Belgium

21. Yakubovich S, Israeli-Korn S, Halperin O, Yahalom G, Hassin-Baer S, Zaidel A. Visual self-motion cues are impaired yet overweighted during visual-vestibular integration in Parkinson's disease. Brain Commun 2020;2:fcaa035. PMID: 32954293. PMCID: PMC7425426. DOI: 10.1093/braincomms/fcaa035.
Abstract
Parkinson's disease is prototypically a movement disorder. Although perceptual and motor functions are highly interdependent, much less is known about perceptual deficits in Parkinson's disease, which are less observable by nature, and might go unnoticed if not tested directly. It is therefore imperative to seek and identify these, to fully understand the challenges facing patients with Parkinson's disease. Also, perceptual deficits may be related to motor symptoms. Posture, gait and balance, affected in Parkinson's disease, rely on veridical perception of one's own motion (self-motion) in space. Yet it is not known whether self-motion perception is impaired in Parkinson's disease. Using a well-established multisensory paradigm of heading discrimination (that has not been previously applied to Parkinson's disease), we tested unisensory visual and vestibular self-motion perception, as well as multisensory integration of visual and vestibular cues, in 19 Parkinson's disease, 23 healthy age-matched and 20 healthy young-adult participants. After experiencing vestibular (on a motion platform), visual (optic flow) or multisensory (combined visual-vestibular) self-motion stimuli at various headings, participants reported whether their perceived heading was to the right or left of straight ahead. Parkinson's disease participants and age-matched controls were tested twice (Parkinson's disease participants on and off medication). Parkinson's disease participants demonstrated significantly impaired visual self-motion perception compared with age-matched controls on both visits, irrespective of medication status. Young controls performed slightly (but not significantly) better than age-matched controls and significantly better than the Parkinson's disease group. The visual self-motion perception impairment in Parkinson's disease correlated significantly with clinical disease severity. By contrast, vestibular performance was unimpaired in Parkinson's disease. Remarkably, despite impaired visual self-motion perception, Parkinson's disease participants significantly overweighted the visual cues during multisensory (visual-vestibular) integration (compared with Bayesian predictions of optimal integration) and significantly more than controls. These findings indicate that self-motion perception in Parkinson's disease is affected by impaired visual cues and by suboptimal visual-vestibular integration (overweighting of visual cues). Notably, vestibular self-motion perception was unimpaired. Thus, visual self-motion perception is specifically impaired in early-stage Parkinson's disease. This can impact Parkinson's disease diagnosis and subtyping. Overweighting of visual cues could reflect a general multisensory integration deficit in Parkinson's disease, or specific overestimation of visual cue reliability. Finally, impaired self-motion perception in Parkinson's disease may contribute to impaired balance and gait control. Future investigation into this connection might open up new avenues of alternative therapies to better treat these difficult symptoms.
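
The Bayesian prediction of optimal integration used as the benchmark here is the standard reliability-weighted combination (generic notation):

$$\hat{s} = w_{\mathrm{vis}}\hat{s}_{\mathrm{vis}} + w_{\mathrm{ves}}\hat{s}_{\mathrm{ves}}, \qquad w_{\mathrm{vis}} = \frac{1/\sigma_{\mathrm{vis}}^{2}}{1/\sigma_{\mathrm{vis}}^{2} + 1/\sigma_{\mathrm{ves}}^{2}}, \quad w_{\mathrm{ves}} = 1 - w_{\mathrm{vis}},$$

where the variances are estimated from the unisensory heading thresholds. Because visual thresholds were elevated in the Parkinson's group, optimality predicts a reduced visual weight; "overweighting" means the empirically measured visual weight exceeded this prediction.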
Affiliation(s)
- Sol Yakubovich: Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Simon Israeli-Korn: Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel; The Neurology and Neurosurgery Department, The Sackler School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Orly Halperin: Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Gilad Yahalom: Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel; Department of Neurology, Movement Disorders Clinic, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Sharon Hassin-Baer: Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel; The Neurology and Neurosurgery Department, The Sackler School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Adam Zaidel: Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel

22. Cortical circuits for integration of self-motion and visual-motion signals. Curr Opin Neurobiol 2019;60:122-128. PMID: 31869592. DOI: 10.1016/j.conb.2019.11.013.
Abstract
The cerebral cortex contains cells which respond to movement of the head, and these cells are thought to be involved in the perception of self-motion. In particular, studies in the primary visual cortex of mice show that both running speed and passive whole-body rotation modulates neuronal activity, and modern genetically targeted viral tracing approaches have begun to identify previously unknown circuits that underlie these responses. Here we review recent experimental findings and provide a road map for future work in mice to elucidate the functional architecture and emergent properties of a cortical network potentially involved in the generation of egocentric-based visual representations for navigation.

23. A model of how depth facilitates scene-relative object motion perception. PLoS Comput Biol 2019;15:e1007397. PMID: 31725723. PMCID: PMC6879150. DOI: 10.1371/journal.pcbi.1007397.
Abstract
Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer's retina and radically influences an object's retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object, since otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object's retinal motion, improving the accuracy with which the model's motion signals represented the object's movement direction. Research has shown that the accuracy with which humans perceive object motion during self-motion improves in the presence of stereo cues. Using a neural modelling approach, we explore whether this finding can be explained through improved estimation of the retinal motion induced by self-motion. Our results show that depth cues that provide information about scene structure may have a large effect on the specificity with which the neural mechanisms for motion perception represent the visual self-motion signal. This in turn enables effective removal of the retinal motion due to self-motion when the goal is to perceive object motion relative to the stationary world. These results reveal a hitherto unknown critical function of stereo tuning in the MT-MST complex, and shed important light on how the brain may recruit signals from upstream and downstream brain areas to simultaneously perceive self-motion and object motion.
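Why compensation requires depth can be illustrated with the standard pinhole flow equations, in which translational flow scales with inverse depth. A toy Python sketch under simplifying assumptions (pure forward translation, no rotation, invented numbers):

```python
import numpy as np

# Pinhole-camera flow for a translating observer (focal length f = 1).
# For pure observer translation along the optical axis at speed Tz, a point
# at image position (x, y) and depth Z moves on the retina at (x, y) * Tz / Z.
def self_motion_flow(x, y, Z, Tz):
    return np.array([x, y]) * Tz / Z

Tz = 1.0                        # observer speed (arbitrary units) -- assumed
x, y, Z_true = 0.2, 0.1, 5.0    # object's image position and true depth -- assumed
obj_world = np.array([0.05, 0.0])  # object's world-relative retinal motion -- assumed

# Retinal motion confounds the two components.
retinal = self_motion_flow(x, y, Z_true, Tz) + obj_world

# Flow parsing: subtract the predicted self-motion component.
# With the correct depth the object motion is recovered exactly...
recovered_good = retinal - self_motion_flow(x, y, Z_true, Tz)
# ...but with a misestimated depth (e.g., ambiguous monocular viewing),
# a residual of uncompensated flow contaminates the estimate.
Z_wrong = 8.0
recovered_bad = retinal - self_motion_flow(x, y, Z_wrong, Tz)

print(recovered_good)  # [0.05 0.  ]
print(recovered_bad)   # biased toward the uncompensated flow direction
```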
Collapse
|
24
|
Anson ER, Ehrenburg MR, Wei EX, Bakar D, Simonsick E, Agrawal Y. Saccular function is associated with both angular and distance errors on the triangle completion test. Clin Neurophysiol 2019; 130:2137-2143. [PMID: 31569041 PMCID: PMC6874399 DOI: 10.1016/j.clinph.2019.08.027] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2018] [Revised: 08/22/2019] [Accepted: 08/24/2019] [Indexed: 11/22/2022]
Abstract
OBJECTIVE The present study was designed to determine whether healthy older adults with age-related vestibular loss have deficits in spatial navigation. METHODS 154 adults participating in the Baltimore Longitudinal Study of Aging were tested for semicircular canal, saccular, and utricular function and spatial navigation ability using the blindfolded Triangle Completion Test (TCT). Multiple linear regression was used to investigate the relationships between each measure of vestibular function and performance on the TCT (angular error, end point error, and distance walked) while controlling for age and sex. RESULTS Individuals with abnormal saccular function made larger angular errors (β = 4.2°, p < 0.05) and larger end point errors (β = 13.6 cm, p < 0.05). Independent of vestibular function, older age was associated with larger angular (β's = 2.2-2.8°, p's < 0.005) and end point errors (β's = 7.5-9.0 cm, p's < 0.005) for each decade increment in age. CONCLUSIONS Saccular function appears to play a prominent role in accurate spatial navigation during a blindfolded navigation task. SIGNIFICANCE We hypothesize that gravitational cues detected by the saccule may be integrated into estimation of place as well as heading direction.
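The regression structure described above (error measures modeled on vestibular function while controlling for age and sex) can be sketched as follows. The data here are fabricated purely to show the model form, and all variable names and coefficients are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 154  # sample size matching the study; everything else is invented

df = pd.DataFrame({
    "abnormal_saccule": rng.integers(0, 2, n),  # 1 = abnormal saccular function
    "age_decade": rng.uniform(2.6, 9.5, n),     # age expressed in decades
    "female": rng.integers(0, 2, n),
})
# Fabricated angular error (deg), built with effects of roughly the
# magnitude reported above, plus noise.
df["angular_error"] = (4.2 * df.abnormal_saccule + 2.5 * df.age_decade
                       + rng.normal(0, 8, n))

fit = smf.ols("angular_error ~ abnormal_saccule + age_decade + female", df).fit()
print(fit.params)  # betas comparable in form to those reported above
```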
Collapse
Affiliation(s)
- E R Anson
- Department of Otolaryngology - Head & Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Otolaryngology, University of Rochester, Rochester, NY, USA.
| | - M R Ehrenburg
- Department of Otolaryngology - Head & Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - E X Wei
- Department of Otolaryngology - Head & Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - D Bakar
- Department of Otolaryngology - Head & Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA; School of Medicine, Brown University, Providence, RI, USA
| | - E Simonsick
- Longitudinal Studies Section, National Institute on Aging, Baltimore, MD, USA
| | - Y Agrawal
- Department of Otolaryngology - Head & Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| |
Collapse
|
25
|
Impaired cerebellar Purkinje cell potentiation generates unstable spatial map orientation and inaccurate navigation. Nat Commun 2019; 10:2251. [PMID: 31113954 PMCID: PMC6529420 DOI: 10.1038/s41467-019-09958-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Accepted: 04/05/2019] [Indexed: 12/29/2022] Open
Abstract
Cerebellar activity supported by PKC-dependent long-term depression in Purkinje cells (PCs) is involved in the stabilization of self-motion-based hippocampal representation, but the existence of cerebellar processes underlying the integration of allocentric cues remains unclear. Using mutant mice lacking PP2B in PCs (L7-PP2B mice), we here assess the role of PP2B-dependent PC potentiation in hippocampal representation and spatial navigation. L7-PP2B mice display higher susceptibility to spatial map instability relative to the allocentric cue and impaired allocentric as well as self-motion goal-directed navigation. These results indicate that PP2B-dependent potentiation in PCs contributes to maintaining a stable hippocampal representation of a familiar environment in an allocentric reference frame as well as to supporting an optimal trajectory toward a goal during navigation. It is known that Purkinje cell PKC-dependent depression is involved in the stabilization of self-motion-based hippocampal representation. Here the authors describe decreased stability of hippocampal place cells based on allocentric cues in mice lacking Purkinje cell PP2B-dependent potentiation.
Collapse
|
26
|
Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. [PMID: 30996126 DOI: 10.1073/pnas.1820373116] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
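The causal inference computation tested here can be sketched in one dimension: the observer weighs the likelihood that a measured discrepancy between observed and predicted object motion arises from noise alone (object stationary) against the likelihood that the object is genuinely moving. A minimal sketch with hypothetical parameters, not the authors' fitted model:

```python
import numpy as np
from scipy.stats import norm

# One-dimensional toy: d is the discrepancy between the object's image motion
# and the motion predicted from the current heading estimate.
sigma_m = 1.0       # measurement noise on the discrepancy -- assumed
sigma_obj = 3.0     # prior spread of real object motion in the world -- assumed
p_stationary = 0.6  # prior probability that the object is stationary -- assumed

def posterior_stationary(d):
    # Likelihood of the measured discrepancy under each causal structure.
    like_stat = norm.pdf(d, 0.0, sigma_m)                       # object stationary
    like_move = norm.pdf(d, 0.0, np.hypot(sigma_m, sigma_obj))  # object moving
    post = like_stat * p_stationary
    return post / (post + like_move * (1 - p_stationary))

for d in [0.0, 1.0, 3.0, 6.0]:
    print(f"discrepancy {d:.0f} -> P(stationary) = {posterior_stationary(d):.2f}")

# Small discrepancies are attributed to self-motion (biasing heading);
# large ones are attributed to object motion, so the heading bias declines,
# consistent with the dual-report data described above.
```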
Collapse
|
27
|
Britton Z, Arshad Q. Vestibular and Multi-Sensory Influences Upon Self-Motion Perception and the Consequences for Human Behavior. Front Neurol 2019; 10:63. [PMID: 30899238 PMCID: PMC6416181 DOI: 10.3389/fneur.2019.00063] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2018] [Accepted: 01/17/2019] [Indexed: 11/16/2022] Open
Abstract
In this manuscript, we comprehensively review both the human and animal literature regarding vestibular and multi-sensory contributions to self-motion perception. This covers the anatomical basis and how and where the signals are processed at all levels from the peripheral vestibular system to the brainstem and cerebellum and finally to the cortex. Further, we consider how and where these vestibular signals are integrated with other sensory cues to facilitate self-motion perception. We conclude by demonstrating the wide-ranging influences of the vestibular system and self-motion perception upon behavior, namely eye movement, postural control, and spatial awareness as well as new discoveries that such perception can impact upon numerical cognition, human affect, and bodily self-consciousness.
Collapse
Affiliation(s)
- Zelie Britton
- Department of Neuro-Otology, Charing Cross Hospital, Imperial College London, London, United Kingdom
| | - Qadeer Arshad
- Department of Neuro-Otology, Charing Cross Hospital, Imperial College London, London, United Kingdom
| |
Collapse
|
28
|
Meijer GT, Mertens PEC, Pennartz CMA, Olcese U, Lansink CS. The circuit architecture of cortical multisensory processing: Distinct functions jointly operating within a common anatomical network. Prog Neurobiol 2019; 174:1-15. [PMID: 30677428 DOI: 10.1016/j.pneurobio.2019.01.004] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2017] [Revised: 12/21/2018] [Accepted: 01/21/2019] [Indexed: 12/16/2022]
Abstract
Our perceptual systems continuously process sensory inputs from different modalities and organize these streams of information such that our subjective representation of the outside world is a unified experience. By doing so, they also enable further cognitive processing and behavioral action. While cortical multisensory processing has been extensively investigated in terms of psychophysics and mesoscale neural correlates, an in depth understanding of the underlying circuit-level mechanisms is lacking. Previous studies on circuit-level mechanisms of multisensory processing have predominantly focused on cue integration, i.e. the mechanism by which sensory features from different modalities are combined to yield more reliable stimulus estimates than those obtained by using single sensory modalities. In this review, we expand the framework on the circuit-level mechanisms of cortical multisensory processing by highlighting that multisensory processing is a family of functions - rather than a single operation - which involves not only the integration but also the segregation of modalities. In addition, multisensory processing not only depends on stimulus features, but also on cognitive resources, such as attention and memory, as well as behavioral context, to determine the behavioral outcome. We focus on rodent models as a powerful instrument to study the circuit-level bases of multisensory processes, because they enable combining cell-type-specific recording and interventional techniques with complex behavioral paradigms. We conclude that distinct multisensory processes share overlapping anatomical substrates, are implemented by diverse neuronal micro-circuitries that operate in parallel, and are flexibly recruited based on factors such as stimulus features and behavioral constraints.
Collapse
Affiliation(s)
- Guido T Meijer
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Paul E C Mertens
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Cyriel M A Pennartz
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Umberto Olcese
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| | - Carien S Lansink
- Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, the Netherlands.
| |
Collapse
|
29
|
Community-dwelling adults with a history of falling report lower perceived postural stability during a foam eyes closed test than non-fallers. Exp Brain Res 2019; 237:769-776. [PMID: 30604020 DOI: 10.1007/s00221-018-5458-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2018] [Accepted: 12/18/2018] [Indexed: 01/27/2023]
Abstract
Perceived postural stability has been reported to decrease as sway area increases on firm surfaces. However, changes in perceived stability under increasingly challenging conditions (e.g., removal of sensory inputs) and the relationship with sway area are not well characterized. Moreover, whether perceived stability varies as a function of age or history of falls is unknown. Here we investigate how perceived postural stability is related to sway area and whether this relationship varies as a function of age and fall history while visual and proprioceptive information are manipulated. Sway area was measured in 427 participants from the Baltimore Longitudinal Study of Aging while standing with eyes open and eyes closed on the floor and a foam cushion. Participants rated their stability [0 (completely unstable) to 10 (completely stable)] after each condition, and reported whether they had fallen in the past year. Perceived stability was negatively associated with sway area (cm2) such that individuals who swayed more felt less stable across all conditions (β = - 0.53, p < 0.001). Perceived stability decreased with increasing age (β = - 0.019, p < 0.001), independent of sway area. Fallers had a greater decline in perceived stability across conditions (F = 2.76, p = 0.042) compared to non-fallers, independent of sway area. Perceived postural stability declined as sway area increased during a multisensory balance test. A history of falling negatively impacts perceived postural stability when vision and proprioception are simultaneously challenged. Perceived postural stability may provide additional information useful for identifying individuals at risk of falls.
Collapse
|
30
|
Kirsch V, Boegle R, Keeser D, Kierig E, Ertl-Wagner B, Brandt T, Dieterich M. Handedness-dependent functional organizational patterns within the bilateral vestibular cortical network revealed by fMRI connectivity based parcellation. Neuroimage 2018; 178:224-237. [DOI: 10.1016/j.neuroimage.2018.05.018] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2018] [Revised: 05/02/2018] [Accepted: 05/05/2018] [Indexed: 12/19/2022] Open
|
31
|
Acerbi L, Dokka K, Angelaki DE, Ma WJ. Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception. PLoS Comput Biol 2018; 14:e1006110. [PMID: 30052625 PMCID: PMC6063401 DOI: 10.1371/journal.pcbi.1006110] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2017] [Accepted: 03/28/2018] [Indexed: 11/18/2022] Open
Abstract
The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers' performance in an explicit cause attribution task and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, findings were robust across a number of variants of models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
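The model-comparison logic can be illustrated by fitting two candidate observer models to the same explicit-report data and comparing penalized likelihoods. The sketch below uses fabricated responses and deliberately simplified observers (a disparity-sensitive, causal-inference-style model versus a disparity-blind forced-fusion model); it is not the paper's actual framework:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Fake dataset: visual-vestibular disparities and binary "common cause"
# reports, generated here only to exercise the fitting code.
disparity = rng.uniform(-20, 20, 200)
reports = (np.abs(disparity) + rng.normal(0, 5, 200) < 10).astype(float)

def nll_causal(params):
    # Causal-inference-style observer: P(common) falls with |disparity|.
    crit, log_noise = params
    p = norm.cdf((crit - np.abs(disparity)) / np.exp(log_noise))
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -np.sum(reports * np.log(p) + (1 - reports) * np.log(1 - p))

def nll_fusion(params):
    # Forced-fusion observer: fixed report rate, ignoring disparity.
    p = np.clip(norm.cdf(params[0]), 1e-6, 1 - 1e-6)
    return -np.sum(reports * np.log(p) + (1 - reports) * np.log(1 - p))

fit_ci = minimize(nll_causal, x0=[10.0, 1.0], method="Nelder-Mead")
fit_ff = minimize(nll_fusion, x0=[0.0], method="Nelder-Mead")

# BIC = 2*NLL + k*log(n); lower is better.
n = len(reports)
bic_ci = 2 * fit_ci.fun + 2 * np.log(n)
bic_ff = 2 * fit_ff.fun + 1 * np.log(n)
print(f"BIC causal inference: {bic_ci:.1f}, forced fusion: {bic_ff:.1f}")
```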
Collapse
Affiliation(s)
- Luigi Acerbi
- Center for Neural Science, New York University, New York, NY, United States of America
| | - Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
| | - Dora E. Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
| | - Wei Ji Ma
- Center for Neural Science, New York University, New York, NY, United States of America
- Department of Psychology, New York University, New York, NY, United States of America
| |
Collapse
|
32
|
Yu X, Hou H, Spillmann L, Gu Y. Causal Evidence of Motion Signals in Macaque Middle Temporal Area Weighted-Pooled for Global Heading Perception. Cereb Cortex 2018; 28:612-624. [PMID: 28057722 DOI: 10.1093/cercor/bhw402] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Accepted: 12/13/2016] [Indexed: 11/14/2022] Open
Abstract
Accurate heading perception relies on visual information integrated across a wide field, that is, optic flow. Numerous computational studies have speculated how local visual information might be pooled by the brain to compute heading, but these hypotheses lack direct neurophysiological support. In the current study, we instructed human and monkey subjects to judge heading directions based on global optic flow. We showed that a local perturbation cue applied within only a small part of the visual field could bias the subjects' heading judgments and, at the same time, shift the neuronal tuning in the macaque middle temporal (MT) area. Electrical microstimulation in MT significantly biased the animals' heading judgments in a manner predictable from the tuning of the stimulated neurons. Masking the visual stimuli within these neurons' receptive fields could not remove the stimulation effect, indicating that the MT signals pooled by downstream neurons are sufficient for global heading estimation. Interestingly, this pooling is not homogeneous, because stimulating neurons with excitatory surrounds produced relatively larger effects than stimulating neurons with inhibitory surrounds. Thus our data provide not only direct causal evidence but also new insights into the neural mechanisms of pooling local motion information for global heading estimation.
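The weighted-pooling account suggested by these results can be sketched as a weighted average of local heading votes, in which perturbing a small, strongly weighted subset shifts the global estimate. An illustrative sketch with invented weights and tuning:

```python
import numpy as np

rng = np.random.default_rng(1)

# Local MT-like units: each votes for a heading azimuth (deg) based on the
# optic flow in its receptive field; all vote for the true heading here.
n_units = 200
true_heading = 0.0
votes = np.full(n_units, true_heading) + rng.normal(0, 2.0, n_units)

# Pooling weights: units with excitatory surrounds weighted more heavily than
# units with inhibitory surrounds (2:1 ratio assumed for illustration).
surround_excitatory = rng.random(n_units) < 0.5
weights = np.where(surround_excitatory, 2.0, 1.0)

def pooled_heading(votes, weights):
    return np.sum(weights * votes) / np.sum(weights)

print(f"baseline estimate: {pooled_heading(votes, weights):+.2f} deg")

# "Microstimulation": drive 10 units as if tuned to a +20 deg heading.
stim = np.arange(10)
votes_stim = votes.copy()
votes_stim[stim] = 20.0
print(f"stimulated estimate: {pooled_heading(votes_stim, weights):+.2f} deg")

# The shift scales with the stimulated units' pooling weights, so stimulating
# excitatory-surround units produces the larger bias -- the asymmetry reported above.
```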
Collapse
Affiliation(s)
- Xuefei Yu
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Han Hou
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Lothar Spillmann
- On leave of absence from Department of Neurology, University of Freiburg, Freiburg 79110, Germany
| | - Yong Gu
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
| |
Collapse
|
33
|
Dissociation of Self-Motion and Object Motion by Linear Population Decoding That Approximates Marginalization. J Neurosci 2017; 37:11204-11219. [PMID: 29030435 DOI: 10.1523/jneurosci.1177-17.2017] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2017] [Revised: 10/02/2017] [Accepted: 10/06/2017] [Indexed: 11/21/2022] Open
Abstract
We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both single neuron and population levels, that vestibular signals help to dissociate self-motion and object motion.
SIGNIFICANCE STATEMENT: The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. The brain can solve this problem by marginalizing over irrelevant properties to estimate the property of interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd.
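The population-decoding idea can be sketched with a toy MSTd-like population in which congruent and opposite cells carry object motion with opposite signs, so a linear readout can cancel it. A schematic sketch with invented tuning, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy MSTd population: responses depend linearly on heading h and object
# motion o. "Congruent" cells sum the visual and vestibular heading signals,
# so the object-motion component adds to heading; "opposite" cells subtract it.
n_cells, n_trials = 60, 2000
congruent = rng.random(n_cells) < 0.5
gain = rng.uniform(0.5, 1.5, n_cells)

h = rng.uniform(-30, 30, n_trials)   # heading (deg)
o = rng.uniform(-30, 30, n_trials)   # object motion (deg/s)

sign = np.where(congruent, 1.0, -1.0)
R = gain[None, :] * (h[:, None] + sign[None, :] * o[:, None])
R += rng.normal(0, 1.0, R.shape)     # additive response noise

# Linear decoder for heading, trained by least squares: because congruent and
# opposite cells carry object motion with opposite signs, the decoder can
# cancel it -- an approximate marginalization over object motion.
w, *_ = np.linalg.lstsq(R, h, rcond=None)
h_hat = R @ w
err = h_hat - h
print(f"heading RMSE: {np.sqrt(np.mean(err**2)):.2f} deg")
print(f"residual correlation with object motion: {np.corrcoef(err, o)[0, 1]:+.3f}")
```

With only one cell type, heading and object motion would enter every cell's response with the same sign and no linear readout could separate them; the mixed population makes the cancellation possible.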
Collapse
|
34
|
Jetzschke S, Ernst MO, Froehlich J, Boeddeker N. Finding Home: Landmark Ambiguity in Human Navigation. Front Behav Neurosci 2017; 11:132. [PMID: 28769773 PMCID: PMC5513971 DOI: 10.3389/fnbeh.2017.00132] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2017] [Accepted: 07/03/2017] [Indexed: 11/26/2022] Open
Abstract
Memories of places often include landmark cues, i.e., information provided by the spatial arrangement of distinct objects with respect to the target location. To study how humans combine landmark information for navigation, we conducted two experiments: participants were either provided with auditory landmarks while walking in a large sports hall or with visual landmarks while walking on a virtual-reality treadmill setup. We found that participants cannot reliably locate their home position due to ambiguities in the spatial arrangement when only one or two uniform landmarks provide cues with respect to the target. With three visual landmarks that look alike, the task is solved without ambiguity, while audio landmarks need to play three unique sounds for a similar performance. This reduction in ambiguity through integration of landmark information from 1, 2, and 3 landmarks is well modeled using a probabilistic approach based on maximum likelihood estimation. Unlike any deterministic model of human navigation (based, e.g., on distance or angle information), this probabilistic model predicted both the precision and accuracy of the human homing performance. To further examine how landmark cues are integrated, we introduced systematic conflicts in the visual landmark configuration between training of the home position and tests of the homing performance. The participants integrated the spatial information from each landmark near-optimally to reduce spatial variability. When the conflict becomes large, this integration breaks down and precision is sacrificed for accuracy. That is, participants return closer to the home position again, because they start ignoring the deviant third landmark. Relying on two instead of three landmarks, however, goes along with responses that are scattered over a larger area, thus leading to higher variability. To model the breakdown of integration with increasing conflict, the probabilistic model based on a simple Gaussian distribution used for Experiment 1 needed a slight extension in the form of a mixture of Gaussians. All parameters for the Mixture Model were fixed based on the homing performance in the baseline condition, which contained a single landmark (the 1-Landmark Condition). In this way we found that the Mixture Model could predict the integration performance and its breakdown with no additional free parameters. Overall these data suggest that humans use similar optimal probabilistic strategies in visual and auditory navigation, integrating landmark information to improve homing precision and balancing homing precision with homing accuracy.
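The maximum-likelihood integration and its mixture-of-Gaussians extension can be sketched in one dimension. The sketch below uses invented positions and spreads; the robust "rejection" of a deviant landmark emerges from the outlier component of the mixture:

```python
import numpy as np

# Each landmark yields a Gaussian likelihood over home position (1-D sketch).
# MLE integration: inverse-variance-weighted mean.
def mle_combine(mu, sigma):
    mu = np.asarray(mu, float)
    w = 1 / np.asarray(sigma, float)**2
    return np.sum(w * mu) / np.sum(w), np.sqrt(1 / np.sum(w))

# Three consistent landmarks: variability shrinks (sd falls by 1/sqrt(3)).
mu_hat, sd_hat = mle_combine([0.1, -0.2, 0.05], [1.0, 1.0, 1.0])
print(f"consistent: mu = {mu_hat:+.2f}, sd = {sd_hat:.2f}")

# Conflict: third landmark displaced. A plain Gaussian model always gets
# dragged along; the mixture model lets the deviant landmark be discounted.
def mixture_posterior(x, mu, sigma, p_valid=0.8, outlier_sd=20.0):
    # Pointwise posterior density under a valid/outlier mixture per landmark
    # -- a robust-integration sketch, not the paper's exact fit.
    dens = np.ones_like(x)
    for m, s in zip(mu, sigma):
        valid = np.exp(-0.5 * ((x - m) / s)**2) / s
        outlier = np.exp(-0.5 * ((x - m) / outlier_sd)**2) / outlier_sd
        dens *= p_valid * valid + (1 - p_valid) * outlier
    return dens

x = np.linspace(-10, 30, 4001)
for shift in [2.0, 15.0]:  # small vs. large conflict of landmark 3
    post = mixture_posterior(x, [0.0, 0.0, shift], [1.0, 1.0, 1.0])
    print(f"conflict {shift:4.1f} -> MAP estimate {x[np.argmax(post)]:+.2f}")

# Small conflict: the estimate shifts toward the deviant (integration).
# Large conflict: the mixture flattens the deviant's pull and the estimate
# returns toward the two consistent landmarks -- the breakdown described above.
```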
Collapse
Affiliation(s)
- Simon Jetzschke
- Department of Biology, Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
- Cognitive Interaction Technology–Cluster of Excellence, Bielefeld University, Bielefeld, Germany
| | - Marc O. Ernst
- Cognitive Interaction Technology–Cluster of Excellence, Bielefeld University, Bielefeld, Germany
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
| | - Julia Froehlich
- Department of Biology, Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
| | - Norbert Boeddeker
- Department of Biology, Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
- Cognitive Interaction Technology–Cluster of Excellence, Bielefeld University, Bielefeld, Germany
| |
Collapse
|
35
|
Smith AT, Greenlee MW, DeAngelis GC, Angelaki DE. Distributed Visual–Vestibular Processing in the Cerebral Cortex of Man and Macaque. Multisens Res 2017. [DOI: 10.1163/22134808-00002568] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Recent advances in understanding the neurobiological underpinnings of visual–vestibular interactions underlying self-motion perception are reviewed with an emphasis on comparisons between the macaque and human brains. In both species, several distinct cortical regions have been identified that are active during both visual and vestibular stimulation and in some of these there is clear evidence for sensory integration. Several possible cross-species homologies between cortical regions are identified. A key feature of cortical organization is that the same information is apparently represented in multiple, anatomically diverse cortical regions, suggesting that information about self-motion is used for different purposes in different brain regions.
Collapse
Affiliation(s)
- Andrew T. Smith
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
| | - Mark W. Greenlee
- Institute of Experimental Psychology, University of Regensburg, 93053 Regensburg, Germany
| | - Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
| | - Dora E. Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA
| |
Collapse
|
36
|
Layton OW, Fajen BR. Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow. PLoS Comput Biol 2016; 12:e1004942. [PMID: 27341686 PMCID: PMC4920404 DOI: 10.1371/journal.pcbi.1004942] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2015] [Accepted: 04/22/2016] [Indexed: 11/18/2022] Open
Abstract
Human heading perception based on optic flow is not only accurate but also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning, which is similar to the other models except that it includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes in the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of heading perception.
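The stabilizing role of recurrent competition can be sketched with a ring of heading-tuned units receiving near excitation and broad inhibition. The parameters below are invented for illustration and are not those of the published model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Heading units tuned to azimuths (deg); soft winner-take-all recurrence:
# local excitation, broad inhibition, driven by a noisy feedforward signal.
prefs = np.linspace(-60, 60, 61)
tau, dt = 0.1, 0.01
exc = np.exp(-0.5 * ((prefs[:, None] - prefs[None, :]) / 10.0)**2)
W = 0.9 * exc / exc.sum(1, keepdims=True) - 0.3 / len(prefs)

def feedforward(heading):
    return np.exp(-0.5 * ((prefs - heading) / 15.0)**2)

def decode(r):
    return prefs[np.argmax(r)]

r = np.zeros_like(prefs)
estimates = []
for t in range(300):
    heading = 0.0
    if 100 <= t < 130:       # transient: an object crosses the future path,
        heading = 25.0       # briefly corrupting the feedforward signal
    inp = feedforward(heading) + rng.normal(0, 0.05, prefs.shape)
    drive = inp + W @ r
    r += dt / tau * (-r + np.maximum(drive, 0.0))
    estimates.append(decode(r))

print("estimate before/during/after transient (deg):",
      estimates[99], estimates[115], estimates[299])

# The recurrent competition keeps the activity peak near 0 deg through the
# transient, where a purely feedforward readout would jump to 25 deg.
```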
Collapse
Affiliation(s)
- Oliver W. Layton
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America
| | - Brett R. Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, New York, United States of America
| |
Collapse
|
37
|
Kim HR, Pitkow X, Angelaki DE, DeAngelis GC. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons. J Neurophysiol 2016; 116:1449-67. [PMID: 27334948 DOI: 10.1152/jn.00005.2016] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2016] [Accepted: 06/16/2016] [Indexed: 11/22/2022] Open
Abstract
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: "congruent" and "opposite" cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs.
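The suggestion that the optimal linear decoder could be learned by simple reinforcement-style mechanisms can be sketched with a delta rule on a toy congruent/opposite population, similar in construction to the decoding sketch earlier in this list (invented parameters throughout):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy population: congruent cells (sign +1) add object motion to heading,
# opposite cells (sign -1) subtract it; gains and noise are invented.
n_cells = 40
sign = np.where(rng.random(n_cells) < 0.5, 1.0, -1.0)
gain = rng.uniform(0.5, 1.5, n_cells)

def population_response(h, o):
    return gain * (h + sign * o) + rng.normal(0, 1.0, n_cells)

# Delta rule: trial-by-trial feedback on the heading report nudges the
# readout weights toward the marginalizing solution.
w = np.zeros(n_cells)
eta = 1e-5
for trial in range(100_000):
    h = rng.uniform(-30, 30)       # heading: the variable to report
    o = rng.uniform(-30, 30)       # object motion: to be marginalized out
    r = population_response(h, o)
    err = h - w @ r                # feedback (error) signal
    w += eta * err * r             # delta-rule weight update

# Test: heading estimates should track h and be insensitive to o.
h_test = rng.uniform(-30, 30, 500)
o_test = rng.uniform(-30, 30, 500)
est = np.array([w @ population_response(h, o) for h, o in zip(h_test, o_test)])
print(f"heading RMSE: {np.sqrt(np.mean((est - h_test)**2)):.2f} deg")
print(f"residual correlation with object motion: "
      f"{np.corrcoef(est - h_test, o_test)[0, 1]:+.3f}")
```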
Collapse
Affiliation(s)
- HyungGoo R Kim
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
| | - Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
| | - Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
| | - Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York;
| |
Collapse
|