1
Steel A, Silson EH, Garcia BD, Robertson CE. A retinotopic code structures the interaction between perception and memory systems. Nat Neurosci 2024; 27:339-347. [PMID: 38168931] [PMCID: PMC10923171] [DOI: 10.1038/s41593-023-01512-3]
Abstract
Conventional views of brain organization suggest that regions at the top of the cortical hierarchy process internally oriented information using an abstract, amodal neural code. Despite this, recent reports have described the presence of retinotopic coding at the cortical apex, including the default mode network. What is the functional role of retinotopic coding atop the cortical hierarchy? Here we report that retinotopic coding structures interactions between internally oriented (mnemonic) and externally oriented (perceptual) brain areas. Using functional magnetic resonance imaging, we observed robust inverted (negative) retinotopic coding in category-selective memory areas at the cortical apex, which is functionally linked to the classic (positive) retinotopic coding in category-selective perceptual areas in high-level visual cortex. These functionally linked retinotopic populations in mnemonic and perceptual areas exhibit spatially specific opponent responses during both bottom-up perception and top-down recall, suggesting that these areas are interlocked in a mutually inhibitory dynamic. These results show that retinotopic coding structures interactions between perceptual and mnemonic neural systems, providing a scaffold for their dynamic interaction.
Affiliation(s)
- Adam Steel
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Edward H Silson
- School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh, UK
- Brenda D Garcia
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Caroline E Robertson
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
2
Zheng Q, Gu Y. From Multisensory Integration to Multisensory Decision-Making. Adv Exp Med Biol 2024; 1437:23-35. [PMID: 38270851] [DOI: 10.1007/978-981-99-7611-9_2]
Abstract
Organisms live in a dynamic environment in which sensory information from multiple sources is ever changing. A conceptually complex task for organisms is to accumulate evidence across sensory modalities and over time, a process known as multisensory decision-making. This concept is relatively new, in that previous research has largely been conducted in two parallel disciplines: either sensory integration across modalities using activity summed over a duration of time, or decision-making with a single sensory modality that evolves over time. Recently, a few neurophysiological studies have emerged that examine how information from different sensory modalities is processed, accumulated, and integrated over time in decision-related areas such as the parietal and frontal lobes of mammals. In this review, we summarize and comment on these studies, which combine the two long-standing parallel fields of multisensory integration and decision-making, and show how the new findings provide a more complete understanding of the neural mechanisms mediating multisensory information processing.
Affiliation(s)
- Qihao Zheng
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Yong Gu
- Systems Neuroscience, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
3
Steel A, Silson EH, Garcia BD, Robertson CE. A retinotopic code structures the interaction between perception and memory systems. bioRxiv [Preprint] 2023:2023.05.15.540807. [PMID: 37292758] [PMCID: PMC10245578] [DOI: 10.1101/2023.05.15.540807]
Abstract
Conventional views of brain organization suggest that the cortical apex processes internally-oriented information using an abstract, amodal neural code. Yet, recent reports have described the presence of retinotopic coding at the cortical apex, including the default mode network. What is the functional role of retinotopic coding atop the cortical hierarchy? Here, we report that retinotopic coding structures interactions between internally-oriented (mnemonic) and externally-oriented (perceptual) brain areas. Using fMRI, we observed robust, inverted (negative) retinotopic coding in category-selective memory areas at the cortical apex, which is functionally linked to the classic (positive) retinotopic coding in category-selective perceptual areas in high-level visual cortex. Specifically, these functionally-linked retinotopic populations in mnemonic and perceptual areas exhibit spatially-specific opponent responses during both bottom-up perception and top-down recall, suggesting that these areas are interlocked in a mutually-inhibitory dynamic. Together, these results show that retinotopic coding structures interactions between perceptual and mnemonic neural systems, thereby scaffolding their dynamic interaction.
Affiliation(s)
- Adam Steel
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, 03755
- Edward H. Silson
- Psychology, School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh, UK EH8 9JZ
- Brenda D. Garcia
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, 03755
4
Jerjian SJ, Harsch DR, Fetsch CR. Self-motion perception and sequential decision-making: where are we heading? Philos Trans R Soc Lond B Biol Sci 2023; 378:20220333. [PMID: 37545301] [PMCID: PMC10404932] [DOI: 10.1098/rstb.2022.0333]
Abstract
To navigate and guide adaptive behaviour in a dynamic environment, animals must accurately estimate their own motion relative to the external world. This is a fundamentally multisensory process involving integration of visual, vestibular and kinesthetic inputs. Ideal observer models, paired with careful neurophysiological investigation, helped to reveal how visual and vestibular signals are combined to support perception of linear self-motion direction, or heading. Recent work has extended these findings by emphasizing the dimension of time, both with regard to stimulus dynamics and the trade-off between speed and accuracy. Both time and certainty (i.e. the degree of confidence in a multisensory decision) are essential to the ecological goals of the system: terminating a decision process is necessary for timely action, and predicting one's accuracy is critical for making multiple decisions in a sequence, as in navigation. Here, we summarize a leading model for multisensory decision-making, then show how the model can be extended to study confidence in heading discrimination. Lastly, we preview ongoing efforts to bridge self-motion perception and navigation per se, including closed-loop virtual reality and active self-motion. The design of unconstrained, ethologically inspired tasks, accompanied by large-scale neural recordings, holds promise for a deeper understanding of spatial perception and decision-making in the behaving animal. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
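Editor's note: the "leading model" summarized in this review belongs, in related work, to the family of bounded evidence-accumulation (drift-diffusion) models that pool momentary evidence across modalities. The sketch below illustrates that general class of model, not the authors' implementation; the gains `k_ves`/`k_vis`, the bound, and the confidence proxy are all illustrative assumptions.

```python
# A minimal sketch (under stated assumptions) of reliability-weighted,
# bounded evidence accumulation for a 2AFC heading task with two modalities.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(heading_deg, k_ves=0.8, k_vis=1.2, bound=1.5, dt=0.005, t_max=2.0):
    """Accumulate weighted vestibular + visual evidence to a decision bound.

    Returns (choice, reaction_time, confidence_proxy);
    choice: +1 rightward, -1 leftward, None if the bound is never reached.
    """
    drift_signal = np.sin(np.deg2rad(heading_deg))  # signed heading evidence
    # Weights proportional to each cue's reliability (sensitivity squared).
    w_ves = k_ves**2 / (k_ves**2 + k_vis**2)
    w_vis = 1.0 - w_ves
    dv, t = 0.0, 0.0
    while t < t_max:
        # Momentary evidence per modality: drift plus unit-variance noise.
        e_ves = k_ves * drift_signal * dt + rng.normal(0.0, np.sqrt(dt))
        e_vis = k_vis * drift_signal * dt + rng.normal(0.0, np.sqrt(dt))
        dv += w_ves * e_ves + w_vis * e_vis
        t += dt
        if abs(dv) >= bound:
            # Crude confidence proxy: faster bound crossings -> higher confidence.
            return np.sign(dv), t, 1.0 / (1.0 + t)
    return None, t_max, 0.0

trials = [simulate_trial(2.0) for _ in range(200)]
print("P(rightward):", np.mean([c == 1 for c, _, _ in trials]))
```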
Affiliation(s)
- Steven J. Jerjian
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Devin R. Harsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Center for Neuroscience and Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Christopher R. Fetsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
5
Liu B, Shan J, Gu Y. Temporal and spatial properties of vestibular signals for perception of self-motion. Front Neurol 2023; 14:1266513. [PMID: 37780704] [PMCID: PMC10534010] [DOI: 10.3389/fneur.2023.1266513]
Abstract
It is well recognized that the vestibular system is involved in numerous important cognitive functions, including self-motion perception, spatial orientation, locomotion, and vector-based navigation, in addition to basic reflexes such as oculomotor and body postural control. Consistent with this, vestibular signals are found broadly in the brain, including in several regions of the cerebral cortex, potentially allowing tight coordination with other sensory systems to improve the accuracy and precision of perception or action during self-motion. Recent neurophysiological studies in animal models, conducted at single-cell resolution, indicate that vestibular signals exhibit complex spatiotemporal dynamics, which poses challenges for identifying their exact functions and how they are integrated with signals from other modalities. For example, vestibular and optic-flow signals can be congruent or incongruent in their spatial tuning functions, reference frames, and temporal dynamics. Comprehensive studies, including behavioral tasks, neural recordings across sensory and sensorimotor association areas, and causal manipulations, have provided some insight into the neural mechanisms underlying multisensory self-motion perception.
Affiliation(s)
- Bingyu Liu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Jiayu Shan
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
6
Gao W, Shen J, Lin Y, Wang K, Lin Z, Tang H, Chen X. Sequential sparse autoencoder for dynamic heading representation in ventral intraparietal area. Comput Biol Med 2023; 163:107114. [PMID: 37329620] [DOI: 10.1016/j.compbiomed.2023.107114]
Abstract
To navigate in space, it is important to predict headings in real time from the brain's neural responses to vestibular and visual signals, and the ventral intraparietal area (VIP) is one of the critical brain areas involved. However, how heading perception is represented in VIP at the population level remains unexplored, and no commonly used methods are suitable for decoding headings from VIP population responses, given the large spatiotemporal dynamics and heterogeneity of the neural responses. Here, responses were recorded from 210 VIP neurons in three rhesus monkeys performing a heading perception task. By separately modelling the temporal and spatial dynamics with sparse representations, we built a sequential sparse autoencoder (SSAE) to decode the recorded population responses and to maximize decoding performance. The SSAE relies on a three-layer sparse autoencoder to extract temporal and spatial heading features from the dataset via unsupervised learning, and on a softmax classifier to decode the headings. Compared with other population decoding methods, the SSAE achieves a leading accuracy of 96.8% ± 2.1% and offers robustness and low storage and computational burden for real-time prediction. Our SSAE model therefore performs well in learning neurobiologically plausible features comprising dynamic navigational information.
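Editor's note: for a concrete picture of the architecture described above, here is a minimal sketch of a sparse autoencoder feeding a softmax readout. It is an assumption-laden illustration, not the authors' code: the layer sizes, the L1 sparsity weight, the joint (rather than unsupervised-then-supervised) training, and the random stand-in data are all placeholders.

```python
# Minimal sparse-autoencoder-plus-softmax decoder sketch in the spirit of the
# SSAE described above; all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAEDecoder(nn.Module):
    def __init__(self, n_inputs=210, n_hidden=64, n_headings=8, l1_weight=1e-3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_inputs)       # reconstruction head
        self.classifier = nn.Linear(n_hidden, n_headings)  # softmax readout
        self.l1_weight = l1_weight

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), self.classifier(h), h

    def loss(self, x, labels):
        recon, logits, h = self(x)
        return (nn.functional.mse_loss(recon, x)             # unsupervised term
                + nn.functional.cross_entropy(logits, labels)
                + self.l1_weight * h.abs().mean())           # sparsity penalty

# Toy usage: 1000 pseudo-trials of a 210-neuron population, 8 heading classes.
x = torch.randn(1000, 210)
y = torch.randint(0, 8, (1000,))
model = SparseAEDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = model.loss(x, y)
    loss.backward()
    opt.step()
```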
Affiliation(s)
- Wei Gao
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou, 310029, China
- Jiangrong Shen
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou, 310027, China
- Yipeng Lin
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou, 310029, China
- Kejun Wang
- School of Software Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou, 310027, China
- Zheng Lin
- Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou, 310009, China
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou, 310027, China
- Xiaodong Chen
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou, 310029, China
7
Keshavarzi S, Velez-Fort M, Margrie TW. Cortical Integration of Vestibular and Visual Cues for Navigation, Visual Processing, and Perception. Annu Rev Neurosci 2023; 46:301-320. [PMID: 37428601] [PMCID: PMC7616138] [DOI: 10.1146/annurev-neuro-120722-100503]
Abstract
Despite increasing evidence of its involvement in several key functions of the cerebral cortex, the vestibular sense rarely enters our consciousness. Indeed, the extent to which these internal signals are incorporated within cortical sensory representations, and how they might be relied upon for sensory-driven decision-making during, for example, spatial navigation, is yet to be understood. Recent experimental approaches in rodents have probed both the physiological and behavioral significance of vestibular signals and indicate that their widespread integration with vision improves both the cortical representation and perceptual accuracy of self-motion and orientation. Here, we summarize these recent findings with a focus on cortical circuits involved in visual perception and spatial navigation, and we highlight the major remaining knowledge gaps. We suggest that vestibulo-visual integration reflects a process of constant updating regarding the status of self-motion, and that access to such information by the cortex is used for sensory perception and for predictions that may be implemented for rapid, navigation-related decision-making.
Affiliation(s)
- Sepiedeh Keshavarzi
- The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
- Mateo Velez-Fort
- The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
- Troy W Margrie
- The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
8
DiRisio GF, Ra Y, Qiu Y, Anzai A, DeAngelis GC. Neurons in Primate Area MSTd Signal Eye Movement Direction Inferred from Dynamic Perspective Cues in Optic Flow. J Neurosci 2023; 43:1888-1904. [PMID: 36725323] [PMCID: PMC10027048] [DOI: 10.1523/jneurosci.1885-22.2023]
Abstract
Smooth eye movements are common during natural viewing; we frequently rotate our eyes to track moving objects or to maintain fixation on an object during self-movement. Reliable information about smooth eye movements is crucial to various neural computations, such as estimating heading from optic flow or judging depth from motion parallax. While it is well established that extraretinal signals (e.g., efference copies of motor commands) carry critical information about eye velocity, the rotational optic flow field produced by eye rotations also carries valuable information. Although previous work has shown that dynamic perspective cues in optic flow can be used in computations that require estimates of eye velocity, it has remained unclear where and how the brain processes these visual cues and how they are integrated with extraretinal signals regarding eye rotation. We examined how neurons in the dorsal region of the medial superior temporal area (MSTd) of two male rhesus monkeys represent the direction of smooth pursuit eye movements based on both visual cues (dynamic perspective) and extraretinal signals. We find that most MSTd neurons have matched preferences for the direction of eye rotation based on visual and extraretinal signals. Moreover, neural responses to combinations of these signals are well predicted by a weighted linear summation model. These findings demonstrate a neural substrate for representing the velocity of smooth eye movements based on rotational optic flow and establish area MSTd as a key node for integrating visual and extraretinal signals into a more generalized representation of smooth eye movements.

SIGNIFICANCE STATEMENT We frequently rotate our eyes to smoothly track objects of interest during self-motion. Information about eye velocity is crucial for a variety of computations performed by the brain, including depth perception and heading perception. Traditionally, information about eye rotation has been thought to arise mainly from extraretinal signals, such as efference copies of motor commands. Previous work shows that eye velocity can also be inferred from rotational optic flow that accompanies smooth eye movements, but the neural origins of these visual signals about eye rotation have remained unknown. We demonstrate that macaque neurons signal the direction of smooth eye rotation based on visual signals, and that they integrate both visual and extraretinal signals regarding eye rotation in a congruent fashion.
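Editor's note: the weighted linear summation model mentioned above amounts to fitting r_combined ≈ w_v·r_visual + w_e·r_extraretinal (plus an intercept) across stimulus conditions. A hedged sketch of that fit with synthetic firing rates; the weights and condition count below are illustrative, not values from the paper.

```python
# Least-squares fit of a weighted linear summation model on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_conditions = 16
r_visual = rng.gamma(shape=2.0, scale=5.0, size=n_conditions)        # spikes/s
r_extraretinal = rng.gamma(shape=2.0, scale=5.0, size=n_conditions)  # spikes/s
r_combined = (0.7 * r_visual + 0.4 * r_extraretinal + 3.0
              + rng.normal(0.0, 1.0, n_conditions))  # ground truth + noise

# Design matrix with an intercept column; solve ordinary least squares.
X = np.column_stack([r_visual, r_extraretinal, np.ones(n_conditions)])
(w_v, w_e, b), *_ = np.linalg.lstsq(X, r_combined, rcond=None)
print(f"fitted weights: w_v={w_v:.2f}, w_e={w_e:.2f}, intercept={b:.2f}")
```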
Affiliation(s)
- Grace F DiRisio
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Department of Neurobiology, University of Chicago, Chicago, Illinois 60637
- Yongsoo Ra
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts 02115
- Yinghui Qiu
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- College of Veterinary Medicine, Cornell University, Ithaca, New York 14853-6401
- Akiyuki Anzai
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
9
Gao W, Lin Y, Shen J, Han J, Song X, Lu Y, Zhan H, Li Q, Ge H, Lin Z, Shi W, Drugowitsch J, Tang H, Chen X. Diverse effects of gaze direction on heading perception in humans. Cereb Cortex 2023. [PMID: 36734278] [DOI: 10.1093/cercor/bhac541]
Abstract
Gaze changes can misalign the spatial reference frames that encode visual and vestibular signals in cortex, which may affect heading discrimination. Here, by systematically manipulating the eye-in-head and head-on-body positions to change subjects' gaze direction, we tested heading discrimination with visual, vestibular, and combined stimuli in a reaction-time task, in which reaction time was under the subjects' control. Gaze changes induced substantial biases in perceived heading and increased subjects' discrimination thresholds and reaction times in all stimulus conditions. For the visual stimulus, the gaze effects were induced by changing the eye-in-world position, and the perceived heading was biased in the direction opposite to gaze. In contrast, the vestibular gaze effects were induced by changing the eye-in-head position, and the perceived heading was biased in the same direction as gaze. Although the bias was reduced when the visual and vestibular stimuli were combined, integration of the two signals deviated substantially from the predictions of an extended diffusion model that accumulates evidence optimally over time and across sensory modalities. These findings reveal diverse gaze effects on heading discrimination and suggest that the transformation of spatial reference frames may underlie these effects.
Affiliation(s)
- Wei Gao
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Yipeng Lin
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Jiangrong Shen
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Jianing Han
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaoxiao Song
- Department of Liberal Arts, School of Art Administration and Education, China Academy of Art, 218 Nanshan Road, Shangcheng District, Hangzhou 310002, China
- Yukun Lu
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Huijia Zhan
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Qianbing Li
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Haoting Ge
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Zheng Lin
- Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou 310009, China
- Wenlei Shi
- Center for the Study of the History of Chinese Language and Center for the Study of Language and Cognition, Zhejiang University, 866 Yuhangtang Road, Xihu District, Hangzhou 310058, China
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Longwood Avenue 220, Boston, MA 02116, United States
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaodong Chen
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
10
Zhou Y, Mohan K, Freedman DJ. Abstract Encoding of Categorical Decisions in Medial Superior Temporal and Lateral Intraparietal Cortices. J Neurosci 2022; 42:9069-9081. [PMID: 36261285] [PMCID: PMC9732825] [DOI: 10.1523/jneurosci.0017-22.2022]
Abstract
Categorization is an essential cognitive and perceptual process for decision-making and recognition. The posterior parietal cortex, particularly the lateral intraparietal (LIP) area, has been suggested to transform visual feature encoding into abstract categorical representations. By contrast, areas closer to sensory input, such as the middle temporal (MT) area, encode stimulus features but not more abstract categorical information during categorization tasks. Here, we compare the contributions of the medial superior temporal (MST) and LIP areas in category computation by recording neuronal activity in both areas from two male rhesus macaques trained to perform a visual motion categorization task. MST is a core motion-processing region interconnected with MT and is often considered an intermediate processing stage between MT and LIP. We show that MST exhibits robust decision-correlated motion category encoding and working memory encoding similar to LIP, suggesting that MST plays a substantial role in cognitive computation, extending beyond its widely recognized role in visual motion processing.

SIGNIFICANCE STATEMENT Categorization requires assigning incoming sensory stimuli into behaviorally relevant groups. Previous work found that parietal area LIP shows a strong encoding of the learned category membership of visual motion stimuli, while visual area MT shows strong direction tuning but not category tuning during a motion direction categorization task. Here we show that the medial superior temporal (MST) area, a visual motion-processing region interconnected with both LIP and MT, shows strong visual category encoding similar to that observed in LIP. This suggests that MST plays a greater role in abstract cognitive functions, extending beyond its well known role in visual motion processing.
Affiliation(s)
- Yang Zhou
- Department of Neurobiology, The University of Chicago, Chicago, Illinois 60637
- PKU-IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, People's Republic of China
- Krithika Mohan
- Department of Neurobiology, The University of Chicago, Chicago, Illinois 60637
- David J Freedman
- Department of Neurobiology, The University of Chicago, Chicago, Illinois 60637
- The University of Chicago Neuroscience Institute, The University of Chicago, Chicago, Illinois 60637
11
Zhang J, Gu Y, Chen A, Yu Y. Unveiling Dynamic System Strategies for Multisensory Processing: From Neuronal Fixed-Criterion Integration to Population Bayesian Inference. Research (Wash D C) 2022; 2022:9787040. [PMID: 36072271] [PMCID: PMC9422331] [DOI: 10.34133/2022/9787040]
Abstract
Multisensory processing is of vital importance for survival in the external world. Brain circuits can both integrate and separate visual and vestibular senses to infer self-motion and the motion of other objects. However, it is largely debated how multisensory brain regions process such multisensory information and whether they follow the Bayesian strategy in this process. Here, we combined macaque physiological recordings in the dorsal medial superior temporal area (MST-d) with modeling of synaptically coupled multilayer continuous attractor neural networks (CANNs) to study the underlying neuronal circuit mechanisms. In contrast to previous theoretical studies that focused on unisensory direction preference, our analysis showed that synaptic coupling induced cooperation and competition in the multisensory circuit and caused single MST-d neurons to switch between sensory integration or separation modes based on the fixed-criterion causal strategy, which is determined by the synaptic coupling strength. Furthermore, the prior of sensory reliability was represented by pooling diversified criteria at the MST-d population level, and the Bayesian strategy was achieved in downstream neurons whose causal inference flexibly changed with the prior. The CANN model also showed that synaptic input balance is the dynamic origin of neuronal direction preference formation and further explained the misalignment between direction preference and inference observed in previous studies. This work provides a computational framework for a new brain-inspired algorithm underlying multisensory computation.
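Editor's note: as a concrete reference point for the CANN machinery invoked above, the sketch below implements a single-layer ring attractor with Gaussian recurrent coupling and divisive global inhibition, a common textbook formulation. It is not the paper's multilayer visual-vestibular model; all parameters are assumptions chosen only to show a stimulus-driven activity bump forming at the stimulated direction.

```python
# Minimal single-layer continuous attractor (ring) network sketch.
import numpy as np

n = 180                                          # neurons tiling direction space
theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
a, J0, k, tau, dt = 0.5, 10.0, 0.01, 10.0, 1.0   # width, coupling, inhibition, ms

# Translation-invariant Gaussian recurrent kernel on the ring
# (wrapped angular differences in [-pi, pi]).
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
W = (J0 / n) * np.exp(-d**2 / (2 * a**2))

u = np.zeros(n)
# External input centered on a stimulus direction of 0.5 rad.
stim = np.exp(-np.angle(np.exp(1j * (theta - 0.5)))**2 / (2 * a**2))

for _ in range(2000):
    r = np.maximum(u, 0.0) ** 2
    r = r / (1.0 + k * r.sum())                  # divisive global inhibition
    u += (dt / tau) * (-u + W @ r + stim)        # rate dynamics (Euler step)

print("bump peak (rad):", theta[np.argmax(u)])   # ~0.5, the stimulated direction
```

In multilayer variants such as the one studied here, the strength of the coupling between such rings is what moves single neurons between integration and separation regimes.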
Affiliation(s)
- Jiawei Zhang
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, Human Phenome Institute, Shanghai 200433, China
- Yong Gu
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Yuguo Yu
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, Human Phenome Institute, Shanghai 200433, China
12
Chaudhary S, Saywell N, Taylor D. The Differentiation of Self-Motion From External Motion Is a Prerequisite for Postural Control: A Narrative Review of Visual-Vestibular Interaction. Front Hum Neurosci 2022; 16:697739. [PMID: 35210998] [PMCID: PMC8860980] [DOI: 10.3389/fnhum.2022.697739]
Abstract
The visual system is a source of sensory information that perceives environmental stimuli and interacts with the other sensory systems to generate visual and postural responses that maintain postural stability. Although the three sensory systems (visual, vestibular, and somatosensory) work concurrently to maintain postural control, the interaction between the visual and vestibular systems is vital for differentiating self-motion from external motion and thus maintaining postural stability. The visual system influences postural control, playing a key role in perceiving the information required for this differentiation. The visual system's main afferent information consists of optic flow and retinal slip, which lead to the generation of visual and postural responses. Visual fixations generated by the visual system interact with this afferent information and with the vestibular system to maintain visual and postural stability. This review synthesizes the roles of the visual system and its interaction with the vestibular system in maintaining postural stability.
13
Wild B, Treue S. Primate extrastriate cortical area MST: a gateway between sensation and cognition. J Neurophysiol 2021; 125:1851-1882. [PMID: 33656951] [DOI: 10.1152/jn.00384.2020]
Abstract
Primate visual cortex consists of dozens of distinct brain areas, each providing a highly specialized component of the sophisticated task of encoding incoming sensory information and creating a representation of our visual environment that underlies perception and action. One such area is the medial superior temporal cortex (MST), a motion-sensitive, direction-selective part of the primate visual cortex. It receives most of its input from the middle temporal (MT) area, but MST cells have larger receptive fields and respond to more complex motion patterns. The finding that MST cells are tuned for optic flow patterns has led to the suggestion that the area plays an important role in the perception of self-motion. This hypothesis has received further support from studies showing that some MST cells also respond selectively to vestibular cues. Furthermore, the area is part of a network that controls the planning and execution of smooth pursuit eye movements, and its activity is modulated by cognitive factors such as attention and working memory. This review of more than 90 studies aims to bring clarity to the heterogeneous findings on MST in the macaque cortex and its putative homolog in the human cortex. From this analysis of MST's unique anatomical and functional position in the hierarchy of areas and processing steps in primate visual cortex, it emerges as a gateway between perception, cognition, and action planning. Given this pivotal role, the area represents an ideal model system for studying the transition from sensation to cognition.
Affiliation(s)
- Benedict Wild
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Goettingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Goettingen, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany
- Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany
- Bernstein Center for Computational Neuroscience, Goettingen, Germany
14
The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. [PMID: 33127626] [PMCID: PMC7688306] [DOI: 10.1523/eneuro.0259-20.2020]
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
15
Gallagher M, Choi R, Ferrè ER. Multisensory Interactions in Virtual Reality: Optic Flow Reduces Vestibular Sensitivity, but Only for Congruent Planes of Motion. Multisens Res 2020; 33:625-644. [PMID: 31972542] [DOI: 10.1163/22134808-20201487]
Abstract
During exposure to Virtual Reality (VR) a sensory conflict may be present, whereby the visual system signals that the user is moving in a certain direction with a certain acceleration, while the vestibular system signals that the user is stationary. In order to reduce this conflict, the brain may down-weight vestibular signals, which may in turn affect vestibular contributions to self-motion perception. Here we investigated whether vestibular perceptual sensitivity is affected by VR exposure. Participants' ability to detect artificial vestibular inputs was measured during optic flow or random motion stimuli on a VR head-mounted display. Sensitivity to vestibular signals was significantly reduced when optic flow stimuli were presented, but importantly this was only the case when both visual and vestibular cues conveyed information on the same plane of self-motion. Our results suggest that the brain dynamically adjusts the weight given to incoming sensory cues for self-motion in VR; however this is dependent on the congruency of visual and vestibular cues.
Affiliation(s)
- Reno Choi
- Royal Holloway, University of London, Egham, UK
16
Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 2020; 23:1004-1015. [PMID: 32541964] [PMCID: PMC7474851] [DOI: 10.1038/s41593-020-0656-0]
Abstract
Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. We examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates, but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.
17
Shayman CS, Peterka RJ, Gallun FJ, Oh Y, Chang NYN, Hullar TE. Frequency-dependent integration of auditory and vestibular cues for self-motion perception. J Neurophysiol 2020; 123:936-944. [PMID: 31940239] [DOI: 10.1152/jn.00307.2019]
Abstract
Recent evidence has shown that auditory information may be used to improve postural stability, spatial orientation, navigation, and gait, suggesting an auditory component of self-motion perception. To determine how auditory and other sensory cues integrate for self-motion perception, we measured motion perception during yaw rotations of the body and the auditory environment. Psychophysical thresholds in humans were measured over a range of frequencies (0.1-1.0 Hz) during self-rotation without spatial auditory stimuli, rotation of a sound source around a stationary listener, and self-rotation in the presence of an earth-fixed sound source. Unisensory perceptual thresholds and the combined multisensory thresholds were found to be frequency dependent. Auditory thresholds were better at lower frequencies, and vestibular thresholds were better at higher frequencies. Expressed in terms of peak angular velocity, multisensory vestibular and auditory thresholds ranged from 0.39°/s at 0.1 Hz to 0.95°/s at 1.0 Hz and were significantly better over low frequencies than either the auditory-only (0.54°/s to 2.42°/s at 0.1 and 1.0 Hz, respectively) or vestibular-only (2.00°/s to 0.75°/s at 0.1 and 1.0 Hz, respectively) unisensory conditions. Monaurally presented auditory cues were less effective than binaural cues in lowering multisensory thresholds. Frequency-independent thresholds were derived, assuming that vestibular thresholds depended on a weighted combination of velocity and acceleration cues, whereas auditory thresholds depended on displacement and velocity cues. These results elucidate fundamental mechanisms for the contribution of audition to balance and help explain previous findings, indicating its significance in tasks requiring self-orientation.

NEW & NOTEWORTHY Auditory information can be integrated with visual, proprioceptive, and vestibular signals to improve balance, orientation, and gait, but this process is poorly understood. Here, we show that auditory cues significantly improve sensitivity to self-motion perception below 0.5 Hz, whereas vestibular cues contribute more at higher frequencies. Motion thresholds are determined by a weighted combination of displacement, velocity, and acceleration information. These findings may help understand and treat imbalance, particularly in people with sensory deficits.
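Editor's note: the benchmark for combined thresholds in studies like this is the standard optimal-integration rule: with independent Gaussian noise, reliabilities (1/threshold²) add. A small sketch of that computation, using the unisensory values quoted in the abstract as example inputs; the rule itself is the generic prediction, not this paper's model.

```python
# Optimal-integration prediction: T_combined = (1/T_aud^2 + 1/T_vest^2) ** -0.5

def predicted_combined_threshold(t_aud, t_vest):
    """Combined threshold under independent Gaussian noise (deg/s)."""
    return (1.0 / t_aud**2 + 1.0 / t_vest**2) ** -0.5

# (frequency Hz, auditory, vestibular, observed combined) from the abstract.
for freq, t_aud, t_vest, t_obs in [(0.1, 0.54, 2.00, 0.39), (1.0, 2.42, 0.75, 0.95)]:
    t_pred = predicted_combined_threshold(t_aud, t_vest)
    print(f"{freq} Hz: predicted {t_pred:.2f} deg/s, observed {t_obs:.2f} deg/s")
```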
Affiliation(s)
- Corey S Shayman
- Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- School of Medicine, University of Utah, Salt Lake City, Utah
- Robert J Peterka
- Department of Neurology, Oregon Health and Science University, Portland, Oregon
- National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon
- Frederick J Gallun
- National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon
- Oregon Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- Yonghee Oh
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida
- Nai-Yuan N Chang
- Department of Preventive and Restorative Dental Sciences, Division of Bioengineering and Biomaterials, University of California, San Francisco, San Francisco, California
- Timothy E Hullar
- Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- Department of Neurology, Oregon Health and Science University, Portland, Oregon
- National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon
18
Retinal Stabilization Reveals Limited Influence of Extraretinal Signals on Heading Tuning in the Medial Superior Temporal Area. J Neurosci 2019; 39:8064-8078. [PMID: 31488610] [DOI: 10.1523/jneurosci.0388-19.2019]
Abstract
Heading perception in primates depends heavily on visual optic-flow cues. Yet during self-motion, heading percepts remain stable, even though smooth-pursuit eye movements often distort optic flow. According to theoretical work, self-motion can be represented accurately by compensating for these distortions in two ways: via retinal mechanisms or via extraretinal efference-copy signals, which predict the sensory consequences of movement. Psychophysical evidence strongly supports the efference-copy hypothesis, but physiological evidence remains inconclusive. Neurons that signal the true heading direction during pursuit are found in visual areas of monkey cortex, including the dorsal medial superior temporal area (MSTd). Here we measured heading tuning in MSTd using a novel stimulus paradigm, in which we stabilize the optic-flow stimulus on the retina during pursuit. This approach isolates the effects on neuronal heading preferences of extraretinal signals, which remain active while the retinal stimulus is prevented from changing. Our results from 3 female monkeys demonstrate a significant but small influence of extraretinal signals on the preferred heading directions of MSTd neurons. Under our stimulus conditions, which are rich in retinal cues, we find that retinal mechanisms dominate physiological corrections for pursuit eye movements, suggesting that extraretinal cues, such as predictive efference-copy mechanisms, have a limited role under naturalistic conditions.

SIGNIFICANCE STATEMENT Sensory systems discount stimulation caused by an animal's own behavior. For example, eye movements cause irrelevant retinal signals that could interfere with motion perception. The visual system compensates for such self-generated motion, but how this happens is unclear. Two theoretical possibilities are a purely visual calculation or one using an internal signal of eye movements to compensate for their effects. The latter can be isolated by experimentally stabilizing the image on a moving retina, but this approach has never been adopted to study motion physiology. Using this method, we find that extraretinal signals have little influence on activity in visual cortex, whereas visually based corrections for ongoing eye movements have stronger effects and are likely most important under real-world conditions.
19
Going with the Flow: The Neural Mechanisms Underlying Illusions of Complex-Flow Motion. J Neurosci 2019; 39:2664-2685. [PMID: 30777886] [DOI: 10.1523/jneurosci.2112-18.2019]
Abstract
Studying the mismatch between perception and reality helps us better understand the constructive nature of the visual brain. The Pinna-Brelstaff motion illusion is a compelling example illustrating how a complex moving pattern can generate an illusory motion perception. When an observer moves toward (expansion) or away (contraction) from the Pinna-Brelstaff figure, the figure appears to rotate. The neural mechanisms underlying the illusory complex-flow motion of rotation, expansion, and contraction remain unknown. We studied this question at both perceptual and neuronal levels in behaving male macaques by using carefully parametrized Pinna-Brelstaff figures that induce the above motion illusions. We first demonstrate that macaques perceive illusory motion in a manner similar to that of human observers. Neurophysiological recordings were subsequently performed in the middle temporal area (MT) and the dorsal portion of the medial superior temporal area (MSTd). We find that subgroups of MSTd neurons encoding a particular global pattern of real complex-flow motion (rotation, expansion, contraction) also represent illusory motion patterns of the same class. They require an extra 15 ms to reliably discriminate the illusion. In contrast, MT neurons encode both real and illusory local motions with similar temporal delays. These findings reveal that illusory complex-flow motion is first represented in MSTd by the same neurons that normally encode real complex-flow motion. However, the extraction of global illusory motion in MSTd from other classes of real complex-flow motion requires extra processing time. Our study illustrates a cascaded integration mechanism from MT to MSTd underlying the transformation from external physical to internal nonveridical flow-motion perception.

SIGNIFICANCE STATEMENT The neural basis of the transformation from objective reality to illusory percepts of rotation, expansion, and contraction remains unknown. We demonstrate psychophysically that macaques perceive these illusory complex-flow motions in a manner similar to that of human observers. At the neural level, we show that medial superior temporal (MSTd) neurons represent illusory flow motions as if they were real by globally integrating middle temporal area (MT) local motion signals. Furthermore, while MT neurons reliably encode real and illusory local motions with similar temporal delays, MSTd neurons take a significantly longer time to process the signals associated with illusory percepts. Our work extends previous complex-flow motion studies by providing the first detailed analysis of the neuron-specific mechanisms underlying complex forms of illusory motion integration from MT to MSTd.
20
Prior expectation of objects in space is dependent on the direction of gaze. Cognition 2019; 182:220-226. [DOI: 10.1016/j.cognition.2018.10.011]
21
Abstract
Detection of the state of self-motion, such as the instantaneous heading direction, the traveled trajectory, and the traveled distance or time, is critical for efficient spatial navigation. Numerous psychophysical studies have indicated that the vestibular system, originating from the otolith organs and semicircular canals in our inner ears, provides robust signals for different aspects of self-motion perception. In addition, vestibular signals interact with other sensory signals, such as visual optic flow, to facilitate natural navigation. These behavioral results are consistent with recent findings in neurophysiological studies. In particular, vestibular activity in response to translation or rotation of the head or body in darkness has been revealed in a growing number of cortical regions, many of which are also sensitive to visual motion stimuli. The temporal dynamics of vestibular activity in the central nervous system can vary widely, ranging from acceleration-dominant to velocity-dominant. Signals with different temporal dynamics may be decoded by higher-level areas for different functions. For example, the acceleration signals during translation of the body in the horizontal plane may be used by the brain to estimate heading directions. Although translation and rotation signals arise from independent peripheral organs, that is, the otolith organs and the semicircular canals, respectively, they frequently converge onto single neurons in the central nervous system, including both the brainstem and the cerebral cortex. The convergent neurons typically exhibit stronger responses during a combined curved motion trajectory, which may serve as the neural correlate of complex path perception. During spatial navigation, traveled distance or time may be encoded by different populations of neurons in multiple regions, including the hippocampal-entorhinal system, posterior parietal cortex, and frontal cortex.
Affiliation(s)
- Zhixian Cheng
- Department of Neuroscience, Yale School of Medicine, New Haven, CT, United States
- Yong Gu
- Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
22
Rigutti S, Stragà M, Jez M, Baldassi G, Carnaghi A, Miceu P, Fantoni C. Don't worry, be active: how to facilitate the detection of errors in immersive virtual environments. PeerJ 2018; 6:e5844. [PMID: 30397547] [PMCID: PMC6211266] [DOI: 10.7717/peerj.5844]
Abstract
The current research studies the link between the type of vision experienced in a collaborative immersive virtual environment (active vs. multiple passive), the type of error one looks for during a cooperative multi-user exploration of a design project (affordance vs. perceptual violations), and the type of setting in which the users perform (field in Experiment 1 vs. laboratory in Experiment 2). The relevance of this link is backed by the lack of conclusive evidence on an active vs. passive vision advantage in cooperative search tasks within software based on immersive virtual reality (IVR). Using a yoking paradigm based on the mixed usage of simultaneous active and multiple passive viewings, we found that the likelihood of error detection in a complex 3D environment showed an active vs. multi-passive viewing advantage depending on: (1) the degree of knowledge dependence of the type of error the passive/active observers were looking for (low for perceptual violations vs. high for affordance violations), as the advantage tended to manifest itself irrespective of the setting for affordance, but not for perceptual, violations; and (2) the degree of social desirability possibly induced by the setting in which the task was performed, as the advantage occurred irrespective of the type of error in the laboratory (Experiment 2) but not in the field (Experiment 1). The results are relevant to the future development of cooperative IVR software for supporting design review. A multi-user design review experience in which designers, engineers, and end users all cooperate actively within the IVR, each wearing their own head-mounted display, seems more suitable for detecting relevant errors than standard systems characterized by a mixed usage of active and passive viewing.
Affiliation(s)
- Sara Rigutti: Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Marta Stragà: Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Marco Jez: Area Science Park, Arsenal S.r.L, Trieste, Italy
- Giulio Baldassi: Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Andrea Carnaghi: Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy
- Piero Miceu: Area Science Park, Arsenal S.r.L, Trieste, Italy
- Carlo Fantoni: Department of Life Sciences, Psychology Unit "Gaetano Kanizsa", University of Trieste, Trieste, Italy

23
Gu Y. Vestibular signals in primate cortex for self-motion perception. Curr Opin Neurobiol 2018; 52:10-17. [DOI: 10.1016/j.conb.2018.04.004] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2018] [Revised: 03/12/2018] [Accepted: 04/07/2018] [Indexed: 10/17/2022]
24
Effect of vibration during visual-inertial integration on human heading perception during eccentric gaze. PLoS One 2018; 13:e0199097. [PMID: 29902253 PMCID: PMC6002115 DOI: 10.1371/journal.pone.0199097] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2017] [Accepted: 05/31/2018] [Indexed: 11/21/2022] Open
Abstract
Heading direction is determined from visual and inertial cues. Visual headings use retinal coordinates while inertial headings use body coordinates. Thus during eccentric gaze the same heading may be perceived differently by the visual and inertial modalities. Stimulus weights depend on the relative reliability of these stimuli, but previous work suggests that the inertial heading may be given more weight than predicted. Those experiments only varied the visual stimulus reliability, and it is unclear what occurs with variation in inertial reliability. Five human subjects completed a heading discrimination task using 2 s of translation with a peak velocity of 16 cm/s. Eye position was ±25° left/right with visual, inertial, or combined motion. The visual motion coherence was 50%. Inertial stimuli included 6 Hz vertical vibration with 0, 0.10, 0.15, or 0.20 cm amplitude. Subjects reported perceived heading relative to the midline. With an inertial heading, perception was biased 3.6° towards the gaze direction. Visual headings biased perception 9.6° opposite gaze. The inertial threshold without vibration was 4.8°, which increased significantly to 8.8° with vibration, although the amplitude of vibration did not influence reliability. With visual-inertial headings, empirical stimulus weights were calculated from the bias and compared with the optimal weights calculated from the thresholds. In two subjects empirical weights were near optimal, while in the remaining three subjects the inertial stimuli were weighted more than optimal predictions. On average the inertial stimulus was weighted more than predicted. These results indicate that multisensory integration may not be a function of stimulus reliability when inertial stimulus reliability is varied.
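As context for the comparison of empirical and optimal weights described above, here is a minimal sketch of the standard maximum-likelihood cue-weighting computation, in which weights are normalized inverse variances derived from unisensory thresholds. The numeric threshold values below are illustrative stand-ins, not the paper's fitted data.

```python
import numpy as np

def optimal_weights(sigma_visual, sigma_inertial):
    """Reliability-based (maximum-likelihood) cue weights.

    Reliability is the inverse variance of each unisensory heading
    estimate; the weights are the normalized reliabilities.
    """
    r_vis, r_inert = 1 / sigma_visual**2, 1 / sigma_inertial**2
    w_vis = r_vis / (r_vis + r_inert)
    return w_vis, 1 - w_vis

# Illustrative thresholds (deg); the inertial value mimics the
# vibration condition reported above (4.8 deg rising to 8.8 deg).
w_vis, w_inert = optimal_weights(sigma_visual=6.0, sigma_inertial=8.8)
print(f"optimal visual weight:   {w_vis:.2f}")
print(f"optimal inertial weight: {w_inert:.2f}")
```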
25
Flexible egocentric and allocentric representations of heading signals in parietal cortex. Proc Natl Acad Sci U S A 2018; 115:E3305-E3312. [PMID: 29555744 DOI: 10.1073/pnas.1715625115] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
By systematically manipulating head position relative to the body and eye position relative to the head, previous studies have shown that vestibular tuning curves of neurons in the ventral intraparietal (VIP) area remain invariant when expressed in body-/world-centered coordinates. However, body orientation relative to the world was not manipulated; thus, an egocentric, body-centered representation could not be distinguished from an allocentric, world-centered reference frame. We manipulated the orientation of the body relative to the world such that we could distinguish whether vestibular heading signals in VIP are organized in body- or world-centered reference frames. We found a hybrid representation, depending on gaze direction. When gaze remained fixed relative to the body, the vestibular heading tuning of VIP neurons shifted systematically with body orientation, indicating an egocentric, body-centered reference frame. In contrast, when gaze remained fixed relative to the world, this representation changed to be intermediate between body- and world-centered. We conclude that the neural representation of heading in posterior parietal cortex is flexible, depending on gaze and possibly attentional demands.
26
Shao M, DeAngelis GC, Angelaki DE, Chen A. Clustering of heading selectivity and perception-related activity in the ventral intraparietal area. J Neurophysiol 2018; 119:1113-1126. [PMID: 29187554 PMCID: PMC5899310 DOI: 10.1152/jn.00556.2017] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2017] [Revised: 11/29/2017] [Accepted: 11/29/2017] [Indexed: 11/22/2022] Open
Abstract
The ventral intraparietal area (VIP) of the macaque brain is a multimodal cortical region, with many cells tuned to both optic flow and vestibular stimuli. Responses of many VIP neurons also show robust correlations with perceptual judgments during a fine heading discrimination task. Previous studies have shown that heading tuning based on optic flow is represented in a clustered fashion in VIP. However, it is unknown whether vestibular self-motion selectivity is clustered in VIP. Moreover, it is not known whether stimulus- and choice-related signals in VIP show clustering in the context of a heading discrimination task. To address these issues, we compared the response characteristics of isolated single units (SUs) with those of the undifferentiated multiunit (MU) activity corresponding to several neighboring neurons recorded from the same microelectrode. We find that MU activity typically shows selectivity similar to that of simultaneously recorded SUs, for both the vestibular and visual stimulus conditions. In addition, the choice-related activity of MU signals, as quantified using choice probabilities, is correlated with the choice-related activity of SUs. Overall, these findings suggest that both sensory and choice-related signals regarding self-motion are clustered in VIP.

NEW & NOTEWORTHY We demonstrate, for the first time, that the vestibular tuning of ventral intraparietal area (VIP) neurons in response to both translational and rotational motion is clustered. In addition, heading discriminability and choice-related activity are also weakly clustered in VIP.
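The choice-probability metric mentioned above is conventionally computed as the area under an ROC curve comparing firing-rate distributions sorted by the animal's choice. A minimal sketch of that computation on synthetic rates follows; it is illustrative only, not the paper's analysis code.

```python
import numpy as np

def choice_probability(rates_pref, rates_null):
    """Choice probability: area under the ROC curve comparing
    firing rates on preferred-choice vs. null-choice trials,
    computed via the Mann-Whitney U statistic. 0.5 means no
    choice-related activity."""
    rates_pref = np.asarray(rates_pref, dtype=float)
    rates_null = np.asarray(rates_null, dtype=float)
    # Count pairs where the preferred-choice rate exceeds (or ties)
    # the null-choice rate, then normalize by the number of pairs.
    greater = (rates_pref[:, None] > rates_null[None, :]).sum()
    ties = (rates_pref[:, None] == rates_null[None, :]).sum()
    return (greater + 0.5 * ties) / (rates_pref.size * rates_null.size)

rng = np.random.default_rng(0)
cp = choice_probability(rng.normal(22, 5, 80), rng.normal(20, 5, 80))
print(f"choice probability = {cp:.2f}")
```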
Affiliation(s)
- Mengmeng Shao: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
- Dora E Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, Texas
- Aihua Chen: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China

27
Role of Rostral Fastigial Neurons in Encoding a Body-Centered Representation of Translation in Three Dimensions. J Neurosci 2018; 38:3584-3602. [PMID: 29487123 DOI: 10.1523/jneurosci.2116-17.2018] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2017] [Revised: 02/01/2018] [Accepted: 02/20/2018] [Indexed: 11/21/2022] Open
Abstract
Many daily behaviors rely critically on estimates of our body motion. Such estimates must be computed by combining neck proprioceptive signals with vestibular signals that have been transformed from a head- to a body-centered reference frame. Recent studies showed that deep cerebellar neurons in the rostral fastigial nucleus (rFN) reflect these computations, but whether they explicitly encode estimates of body motion remains unclear. A key limitation in addressing this question is that, to date, cell tuning properties have only been characterized for a restricted set of motions across head-re-body orientations in the horizontal plane. Here we examined, for the first time, how 3D spatiotemporal tuning for translational motion varies with head-re-body orientation in both horizontal and vertical planes in the rFN of male macaques. While vestibular coding was profoundly influenced by head-re-body position in both planes, neurons typically reflected at most a partial transformation. However, their tuning shifts were not random but followed the specific spatial trajectories predicted for a 3D transformation. We show that these properties facilitate the linear decoding of fully body-centered motion representations in 3D with a broad range of temporal characteristics from small groups of 5-7 cells. These results demonstrate that the vestibular reference frame transformation required to compute body motion is indeed encoded by cerebellar neurons. We propose that maintaining partially transformed rFN responses with different spatiotemporal properties facilitates the creation of downstream body motion representations with a range of dynamic characteristics, consistent with the functional requirements for tasks such as postural control and reaching.

SIGNIFICANCE STATEMENT Estimates of body motion are essential for many daily activities. Vestibular signals are important contributors to such estimates but must be transformed from a head- to a body-centered reference frame. Here, we provide the first direct demonstration that the cerebellum computes this transformation fully in 3D. We show that the output of these computations is reflected in the tuning properties of deep cerebellar rostral fastigial nucleus neurons in a specific distributed fashion that facilitates the efficient creation of body-centered translation estimates with a broad range of temporal properties (i.e., from acceleration to position). These findings support an important role for the rostral fastigial nucleus as a source of body translation estimates functionally relevant for behaviors ranging from postural control to perception.
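To illustrate the decoding idea described above (not the authors' actual analysis), the following sketch builds a small population of model cells that only partially transform a head-centered signal toward body coordinates, then recovers the body-centered signal with a least-squares linear readout. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
body_motion = np.sin(2 * np.pi * t)           # target body-centered signal
head_motion = np.sin(2 * np.pi * t + 0.6)     # head-centered counterpart

# Six model cells, each a partial mixture of body- and head-centered
# components (partial reference-frame transformation) plus noise.
mix = rng.uniform(0.2, 0.8, size=6)
responses = np.outer(body_motion, mix) + np.outer(head_motion, 1 - mix)
responses += rng.normal(0, 0.05, responses.shape)

# Least-squares linear readout of the body-centered signal
w, *_ = np.linalg.lstsq(responses, body_motion, rcond=None)
r = np.corrcoef(responses @ w, body_motion)[0, 1]
print(f"reconstruction correlation: {r:.3f}")
```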
28
Yang L, Gu Y. Distinct spatial coordinate of visual and vestibular heading signals in macaque FEFsem and MSTd. eLife 2017; 6. [PMID: 29134944 PMCID: PMC5685470 DOI: 10.7554/elife.29809] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2017] [Accepted: 11/03/2017] [Indexed: 11/17/2022] Open
Abstract
Precise heading estimation requires integration of visual optic flow and vestibular inertial motion, which originate in distinct spatial coordinates (eye- and head-centered, respectively). To explore whether the two heading signals share a common reference frame along the hierarchy of cortical stages, we examined two multisensory areas in macaques: the smooth pursuit area of the frontal eye field (FEFsem), closer to the motor side, and the dorsal portion of the medial superior temporal area (MSTd), closer to the sensory side. In both areas, vestibular signals are head-centered, whereas visual signals are mainly eye-centered. However, visual signals in FEFsem are shifted more towards head coordinates than in MSTd. These results are robust, being largely independent of: (1) smooth pursuit eye movements, (2) motion parallax cues, and (3) the behavioral context of active heading estimation, indicating that visual and vestibular heading signals are represented in distinct spatial coordinates in sensory cortices.
Affiliation(s)
- Lihua Yang: Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu: Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China

29
Kheradmand A, Winnick A. Perception of Upright: Multisensory Convergence and the Role of Temporo-Parietal Cortex. Front Neurol 2017; 8:552. [PMID: 29118736 PMCID: PMC5660972 DOI: 10.3389/fneur.2017.00552] [Citation(s) in RCA: 54] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2017] [Accepted: 09/28/2017] [Indexed: 12/18/2022] Open
Abstract
We inherently maintain a stable perception of the world despite frequent changes in the head, eye, and body positions. Such "orientation constancy" is a prerequisite for coherent spatial perception and sensorimotor planning. As a multimodal sensory reference, perception of upright represents neural processes that subserve orientation constancy through integration of sensory information encoding the eye, head, and body positions. Although perception of upright is distinct from perception of body orientation, they share similar neural substrates within the cerebral cortical networks involved in perception of spatial orientation. These cortical networks, mainly within the temporo-parietal junction, are crucial for multisensory processing and integration that generate sensory reference frames for coherent perception of self-position and extrapersonal space transformations. In this review, we focus on these neural mechanisms and discuss (i) neurobehavioral aspects of orientation constancy, (ii) sensory models that address the neurophysiology underlying perception of upright, and (iii) the current evidence for the role of cerebral cortex in perception of upright and orientation constancy, including findings from the neurological disorders that affect cortical function.
Affiliation(s)
- Amir Kheradmand: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, United States; Department of Otolaryngology – Head and Neck Surgery, The Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Ariel Winnick: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, United States

30
Bremmer F, Churan J, Lappe M. Heading representations in primates are compressed by saccades. Nat Commun 2017; 8:920. [PMID: 29030557 PMCID: PMC5640607 DOI: 10.1038/s41467-017-01021-5] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2016] [Accepted: 08/13/2017] [Indexed: 01/06/2023] Open
Abstract
Perceptual illusions help to understand how sensory signals are decoded in the brain. Here we report that the opposite approach is also applicable, i.e., results from decoding neural activity from monkey extrastriate visual cortex correctly predict a hitherto unknown perceptual illusion in humans. We record neural activity from monkey medial superior temporal (MST) and ventral intraparietal (VIP) area during presentation of self-motion stimuli and concurrent reflexive eye movements. A heading-decoder performs veridically during slow eye movements. During fast eye movements (saccades), however, the decoder erroneously reports compression of heading toward straight ahead. Functional equivalents of macaque areas MST and VIP have been identified in humans, implying a perceptual correlate (illusion) of this perisaccadic decoding error. Indeed, a behavioral experiment in humans shows that perceived heading is perisaccadically compressed toward the direction of gaze. Response properties of primate areas MST and VIP are consistent with being the substrate of the newly described visual illusion.

Macaque higher visual areas MST and VIP encode heading direction based on self-motion stimuli. Here the authors show that, while making saccades, the heading direction decoded from the neural responses is compressed toward straight-ahead, and independently demonstrate a perceptual illusion in humans based on this perisaccadic decoding error.
Affiliation(s)
- Frank Bremmer: Department of Neurophysics & Marburg Center for Mind, Brain and Behavior - MCMBB, Philipps-Universität Marburg, Karl-von-Frisch Straße 8a, 35043 Marburg, Germany
- Jan Churan: Department of Neurophysics & Marburg Center for Mind, Brain and Behavior - MCMBB, Philipps-Universität Marburg, Karl-von-Frisch Straße 8a, 35043 Marburg, Germany
- Markus Lappe: Department of Psychology & Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Fliednerstraße 21, 48149 Münster, Germany

31
Crane BT. Effect of eye position during human visual-vestibular integration of heading perception. J Neurophysiol 2017; 118:1609-1621. [PMID: 28615328 DOI: 10.1152/jn.00037.2017] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2017] [Revised: 06/13/2017] [Accepted: 06/13/2017] [Indexed: 11/22/2022] Open
Abstract
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while that of inertial stimuli is head centered, and it remains unclear how these are reconciled for combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli, such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems.

NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability.
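A compact sketch of the reliability-weighting prediction tested above: the combined-cue PSE is modeled as a variance-weighted average of the unisensory PSEs, so it migrates toward the visual PSE as coherence (and thus visual reliability) rises. The PSE values below come from the abstract; the noise parameters are illustrative assumptions.

```python
import numpy as np

def predicted_pse(pse_vis, pse_inert, sigma_vis, sigma_inert):
    """Combined-cue point of subjective equality predicted by
    reliability weighting: a variance-weighted average of the
    unisensory PSEs."""
    w_vis = sigma_inert**2 / (sigma_vis**2 + sigma_inert**2)
    return w_vis * pse_vis + (1 - w_vis) * pse_inert

# Visual PSE shifts opposite gaze (-10.2 deg), inertial with gaze
# (+4.6 deg); sigma values are illustrative, not fitted.
for coherence, sigma_vis in [(0.35, 9.0), (1.00, 3.0)]:
    pse = predicted_pse(pse_vis=-10.2, pse_inert=4.6,
                        sigma_vis=sigma_vis, sigma_inert=5.0)
    print(f"coherence {coherence:.2f}: predicted combined PSE {pse:+.1f} deg")
```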
Affiliation(s)
- Benjamin T Crane: Department of Otolaryngology, University of Rochester, Rochester, New York

32
Kuang S, Shi J, Wang Y, Zhang T. Where are you heading? Flexible integration of retinal and extra-retinal cues during self-motion perception. Psych J 2017; 6:141-152. [PMID: 28514063 DOI: 10.1002/pchj.165] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2016] [Revised: 01/26/2017] [Accepted: 02/02/2017] [Indexed: 11/08/2022]
Abstract
As we move forward in the environment, we experience a radial expansion of the retinal image, whose center corresponds to the instantaneous direction of self-motion. Humans can precisely perceive their heading direction even when the retinal motion is distorted by gaze shifts due to eye/body rotations. Previous studies have suggested that both retinal and extra-retinal strategies can compensate for this retinal image distortion. However, the relative contributions of each strategy remain unclear. To address this issue, we devised a two-alternative heading-discrimination task in which participants made either real or simulated pursuit eye movements. The two conditions provided the same retinal input, either with or without extra-retinal eye-movement signals. Thus, the behavioral difference between conditions served as a metric of the extra-retinal contribution. We systematically and independently manipulated pursuit speed, heading speed, and the reliability of retinal signals. We found that the level of extra-retinal contribution increased with increasing pursuit speed (stronger extra-retinal signal) and with decreasing heading speed (weaker retinal signal). In addition, extra-retinal contributions also increased as we corrupted retinal signals with noise. Our results reveal that the relative magnitudes of retinal and extra-retinal contributions are not fixed but are flexibly adjusted to each specific task condition. This task-dependent, flexible integration appears to take the form of a reliability-based weighting scheme that maximizes heading performance.
Affiliation(s)
- Shenbing Kuang: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jinfu Shi: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yang Wang: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Tao Zhang: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China

33
Strong SL, Silson EH, Gouws AD, Morland AB, McKeefry DJ. Differential processing of the direction and focus of expansion of optic flow stimuli in areas MST and V3A of the human visual cortex. J Neurophysiol 2017; 117:2209-2217. [PMID: 28298300 DOI: 10.1152/jn.00031.2017] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2017] [Revised: 03/02/2017] [Accepted: 03/09/2017] [Indexed: 11/22/2022] Open
Abstract
Human neuropsychological and neuroimaging studies have raised the possibility that different attributes of optic flow stimuli, namely radial direction and the position of the focus of expansion (FOE), are processed within separate cortical areas. In the human brain, visual areas V5/MT+ and V3A have been proposed as integral to the analysis of these different attributes of optic flow stimuli. To establish direct causal relationships between neural activity in human (h)V5/MT+ and V3A and the perception of radial motion direction and FOE position, we used transcranial magnetic stimulation (TMS) to disrupt cortical activity in these areas while participants performed behavioral tasks dependent on these different aspects of optic flow stimuli. The cortical regions of interest were identified in seven human participants using standard functional MRI retinotopic mapping techniques and functional localizers. TMS to area V3A was found to disrupt FOE positional judgments but not radial direction discrimination, whereas the application of TMS to an anterior subdivision of hV5/MT+, MST/TO-2, produced the reverse effects, disrupting radial direction discrimination but eliciting no effect on the FOE positional judgment task. This double dissociation demonstrates that FOE position and radial direction of optic flow stimuli are signaled independently by neural activity in areas hV5/MT+ and V3A.

NEW & NOTEWORTHY Optic flow constitutes a biologically relevant visual cue as we move through any environment. With the use of neuroimaging and brain-stimulation techniques, this study demonstrates that separate human brain areas are involved in the analysis of the direction of radial motion and the focus of expansion in optic flow. This dissociation reveals the existence of separate processing pathways for the analysis of different attributes of optic flow that are important for the guidance of self-locomotion and object avoidance.
Affiliation(s)
- Samantha L Strong: School of Optometry and Vision Science, University of Bradford, Bradford, West Yorkshire, United Kingdom; York Neuroimaging Centre, Department of Psychology, University of York, York, United Kingdom
- Edward H Silson: York Neuroimaging Centre, Department of Psychology, University of York, York, United Kingdom; Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland
- André D Gouws: York Neuroimaging Centre, Department of Psychology, University of York, York, United Kingdom
- Antony B Morland: York Neuroimaging Centre, Department of Psychology, University of York, York, United Kingdom; Centre for Neuroscience, Hull-York Medical School, University of York, York, United Kingdom
- Declan J McKeefry: School of Optometry and Vision Science, University of Bradford, Bradford, West Yorkshire, United Kingdom

34
Amemiya T, Beck B, Walsh V, Gomi H, Haggard P. Visual area V5/hMT+ contributes to perception of tactile motion direction: a TMS study. Sci Rep 2017; 7:40937. [PMID: 28106123 PMCID: PMC5247673 DOI: 10.1038/srep40937] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2016] [Accepted: 12/14/2016] [Indexed: 12/18/2022] Open
Abstract
Human imaging studies have reported activations associated with tactile motion perception in visual motion area V5/hMT+, primary somatosensory cortex (SI) and posterior parietal cortex (PPC; Brodmann areas 7/40). However, such studies cannot establish whether these areas are causally involved in tactile motion perception. We delivered double-pulse transcranial magnetic stimulation (TMS) while moving a single tactile point across the fingertip, and used signal detection theory to quantify perceptual sensitivity to motion direction. TMS over both SI and V5/hMT+, but not the PPC site, significantly reduced tactile direction discrimination. Our results show that V5/hMT+ plays a causal role in tactile direction processing, and strengthen the case for V5/hMT+ serving multimodal motion perception. Further, our findings are consistent with a serial model of cortical tactile processing, in which higher-order perceptual processing depends upon information received from SI. By contrast, our results do not provide clear evidence that the PPC site we targeted (Brodmann areas 7/40) contributes to tactile direction perception.
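For readers unfamiliar with the signal-detection analysis mentioned above, here is a minimal sketch of the d' sensitivity computation from trial counts (using scipy for the inverse normal CDF); the counts are invented for illustration and do not come from the paper.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') for a direction-discrimination task from
    trial counts, with a standard 1/(2N) correction to avoid
    infinite z-scores at perfect rates."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = np.clip(hits / n_signal,
                       1 / (2 * n_signal), 1 - 1 / (2 * n_signal))
    fa_rate = np.clip(false_alarms / n_noise,
                      1 / (2 * n_noise), 1 - 1 / (2 * n_noise))
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(f"no-TMS block: d' = {d_prime(70, 30, 25, 75):.2f}")  # made-up counts
print(f"TMS block:    d' = {d_prime(55, 45, 35, 65):.2f}")  # made-up counts
```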
Affiliation(s)
- Tomohiro Amemiya: Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, London WC1N 3AZ, United Kingdom; NTT Communication Science Laboratories, NTT Corporation, 3-1 Wakamiya, Morinosato, Atsugi-shi, Kanagawa 243-0198, Japan
- Brianna Beck: Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, London WC1N 3AZ, United Kingdom
- Vincent Walsh: Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, London WC1N 3AZ, United Kingdom
- Hiroaki Gomi: NTT Communication Science Laboratories, NTT Corporation, 3-1 Wakamiya, Morinosato, Atsugi-shi, Kanagawa 243-0198, Japan
- Patrick Haggard: Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, London WC1N 3AZ, United Kingdom

35
Smith AT, Greenlee MW, DeAngelis GC, Angelaki DE. Distributed Visual–Vestibular Processing in the Cerebral Cortex of Man and Macaque. Multisens Res 2017. [DOI: 10.1163/22134808-00002568] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Recent advances in understanding the neurobiological underpinnings of visual–vestibular interactions underlying self-motion perception are reviewed with an emphasis on comparisons between the macaque and human brains. In both species, several distinct cortical regions have been identified that are active during both visual and vestibular stimulation and in some of these there is clear evidence for sensory integration. Several possible cross-species homologies between cortical regions are identified. A key feature of cortical organization is that the same information is apparently represented in multiple, anatomically diverse cortical regions, suggesting that information about self-motion is used for different purposes in different brain regions.
Affiliation(s)
- Andrew T. Smith: Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Mark W. Greenlee: Institute of Experimental Psychology, University of Regensburg, 93053 Regensburg, Germany
- Gregory C. DeAngelis: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
- Dora E. Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA

36
Gu Y, Cheng Z, Yang L, DeAngelis GC, Angelaki DE. Multisensory Convergence of Visual and Vestibular Heading Cues in the Pursuit Area of the Frontal Eye Field. Cereb Cortex 2016; 26:3785-801. [PMID: 26286917 PMCID: PMC5004753 DOI: 10.1093/cercor/bhv183] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023] Open
Abstract
Both visual and vestibular sensory cues are important for perceiving one's direction of heading during self-motion. Previous studies have identified multisensory, heading-selective neurons in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP). Both MSTd and VIP have strong recurrent connections with the pursuit area of the frontal eye field (FEFsem), but whether FEFsem neurons contribute to multisensory heading perception remains unknown. We characterized the tuning of macaque FEFsem neurons to visual, vestibular, and multisensory heading stimuli. About two-thirds of FEFsem neurons exhibited significant heading selectivity based on either vestibular or visual stimulation. These multisensory neurons shared many properties, including distributions of tuning strength and heading preferences, with MSTd and VIP neurons. Fisher information analysis also revealed that the average FEFsem neuron was almost as sensitive as MSTd or VIP cells. Visual and vestibular heading preferences in FEFsem tended to be either matched (congruent cells) or discrepant (opposite cells), such that combined stimulation strengthened heading selectivity for congruent cells but weakened heading selectivity for opposite cells. These findings demonstrate that, in addition to oculomotor functions, FEFsem neurons also exhibit properties that may allow them to contribute to a cortical network that processes multisensory heading cues.
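The Fisher information analysis mentioned above quantifies how well a neuron's tuning curve constrains heading. A minimal sketch for a single model neuron with von Mises tuning and Poisson spiking follows; the tuning parameters are illustrative assumptions, not fitted FEFsem values.

```python
import numpy as np

def fisher_information(theta, r_max=40.0, kappa=2.0, pref=0.0):
    """Fisher information about heading theta (radians) for a model
    neuron with a von Mises tuning curve f(theta) and Poisson
    spiking over a unit time window: FI = f'(theta)^2 / f(theta)."""
    f = r_max * np.exp(kappa * (np.cos(theta - pref) - 1))
    f_prime = -r_max * kappa * np.sin(theta - pref) \
              * np.exp(kappa * (np.cos(theta - pref) - 1))
    return f_prime**2 / f

# FI is zero at the tuning peak and largest on the flanks, which is
# why discrimination is best for headings on a neuron's flank.
headings = np.deg2rad(np.arange(-90, 91, 30))
for h, fi in zip(headings, fisher_information(headings)):
    print(f"heading {np.rad2deg(h):+5.0f} deg: FI = {fi:7.2f}")
```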
Affiliation(s)
- Yong Gu: Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Institute of Neuroscience, Shanghai, China
- Zhixian Cheng: Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Institute of Neuroscience, Shanghai, China
- Lihua Yang: Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Institute of Neuroscience, Shanghai, China
- Gregory C. DeAngelis: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Dora E. Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA

37
3D Visual Response Properties of MSTd Emerge from an Efficient, Sparse Population Code. J Neurosci 2016; 36:8399-415. [PMID: 27511012 DOI: 10.1523/jneurosci.0396-16.2016] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2016] [Accepted: 06/15/2016] [Indexed: 11/21/2022] Open
Abstract
Neurons in the dorsal subregion of the medial superior temporal (MSTd) area of the macaque respond to large, complex patterns of retinal flow, implying a role in the analysis of self-motion. Some neurons are selective for the expanding radial motion that occurs as an observer moves through the environment ("heading"), and computational models can account for this finding. However, ample evidence suggests that MSTd neurons exhibit a continuum of visual response selectivity to large-field motion stimuli. Furthermore, the underlying computational principles by which these response properties are derived remain poorly understood. Here we describe a computational model of macaque MSTd based on the hypothesis that neurons in MSTd efficiently encode the continuum of large-field retinal flow patterns on the basis of inputs received from neurons in MT with receptive fields that resemble basis vectors recovered with non-negative matrix factorization. These assumptions are sufficient to quantitatively simulate neurophysiological response properties of MSTd cells, such as 3D translation and rotation selectivity, suggesting that these properties might simply be a byproduct of MSTd neurons performing dimensionality reduction on their inputs. At the population level, model MSTd accurately predicts eye velocity and heading using a sparse distributed code, consistent with the idea that biological MSTd might be well equipped to efficiently encode various self-motion variables. The present work aims to add some structure to the often contradictory findings about macaque MSTd, and offers a biologically plausible account of a wide range of visual response properties ranging from single-unit selectivity to population statistics.

SIGNIFICANCE STATEMENT Using a dimensionality reduction technique known as non-negative matrix factorization, we found that a variety of medial superior temporal (MSTd) neural response properties could be derived from MT-like input features. The responses that emerge from this technique, such as 3D translation and rotation selectivity, spiral tuning, and heading selectivity, can account for a number of empirical results. These findings (1) provide a further step toward a scientific understanding of the often nonintuitive response properties of MSTd neurons; (2) suggest that response properties, such as complex motion tuning and heading selectivity, might simply be a byproduct of MSTd neurons performing dimensionality reduction on their inputs; and (3) imply that motion perception in the cortex is consistent with ideas from the efficient-coding and free-energy principles.
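A minimal sketch of the modeling approach described above, using scikit-learn's non-negative matrix factorization in place of the authors' full pipeline; the input matrix here is random data standing in for MT population responses, so the output is purely illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in for MT population responses: rows are flow-field stimuli,
# columns are MT-like units (non-negative rates; purely synthetic).
rng = np.random.default_rng(0)
mt_responses = rng.gamma(shape=2.0, scale=1.0, size=(500, 64))

# Factorize into sparse activations ("model MSTd") and a
# non-negative basis over MT inputs.
model = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
weights = model.fit_transform(mt_responses)  # (stimuli x components)
basis = model.components_                    # (components x MT units)
print(weights.shape, basis.shape)            # (500, 16) (16, 64)
```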
38
Pfeiffer C, Grivaz P, Herbelin B, Serino A, Blanke O. Visual gravity contributes to subjective first-person perspective. Neurosci Conscious 2016; 2016:niw006. [PMID: 30109127 PMCID: PMC6084587 DOI: 10.1093/nc/niw006] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
Abstract
A fundamental component of conscious experience involves a first-person perspective (1PP), characterized by the experience of being a subject and of being directed at the world. Extending earlier work on multisensory perceptual mechanisms of 1PP, we here asked whether the experienced direction of the 1PP (i.e. the spatial direction of subjective experience of the world) depends on visual-tactile-vestibular conflicts, including the direction of gravity. Sixteen healthy subjects in supine position received visuo-tactile synchronous or asynchronous stroking to induce a full-body illusion. In the critical manipulation, we presented gravitational visual object motion directed toward or away from the participant’s body and thus congruent or incongruent with respect to the direction of vestibular and somatosensory gravitational cues. The results showed that multisensory gravitational conflict induced within-subject changes of the experienced direction of the 1PP that depended on the direction of visual gravitational cues. Participants experienced more often a downward direction of their 1PP (incongruent with respect to the participant’s physical body posture) when visual object motion was directed away rather than towards the participant’s body. These downward-directed 1PP experiences positively correlated with measures of elevated self-location. Together, these results show that visual gravitational cues contribute to the experienced direction of the 1PP, defining the subjective location and perspective from where humans experience to perceive the world.
Affiliation(s)
- Christian Pfeiffer: Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratoire de Recherche en Neuroimagerie (LREN), Department of Clinical Neuroscience, Lausanne University and University Hospital, Switzerland
- Petr Grivaz: Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Bruno Herbelin: Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Andrea Serino: Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Olaf Blanke: Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Department of Neurology, University Hospital Geneva, Switzerland

39
Abstract
Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, thus leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning for different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception.

SIGNIFICANCE STATEMENT To understand how we successfully navigate our world, it is important to understand which parts of the brain process cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. We provide the first demonstration that V6 neurons carry 3D visual heading signals, which are represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that V6 is unlikely to contribute to multisensory integration of heading signals, unlike other cortical areas. These findings provide important constraints on the roles of V6 in self-motion perception.
40
Joint representation of translational and rotational components of optic flow in parietal cortex. Proc Natl Acad Sci U S A 2016; 113:5077-82. [PMID: 27095846 DOI: 10.1073/pnas.1604818113] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals.
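The multiplicative-separability question above is commonly assessed with a singular value decomposition of the joint tuning matrix: a perfectly separable representation is rank 1. A minimal sketch on synthetic tuning data follows; the analysis choice and all values are illustrative, not the paper's exact method.

```python
import numpy as np

# Synthetic joint tuning: heading tuning (8 headings) multiplied by
# a rotation-velocity gain (5 velocities), plus measurement noise.
rng = np.random.default_rng(2)
heading_tuning = 1 + np.cos(np.deg2rad(np.arange(0, 360, 45)))
rotation_gain = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
tuning = np.outer(heading_tuning, rotation_gain)
tuning += rng.normal(0, 0.05, tuning.shape)

# Fraction of variance captured by the rank-1 (separable) term
u, s, vt = np.linalg.svd(tuning)
separability = s[0]**2 / np.sum(s**2)
print(f"separability index = {separability:.3f}")  # near 1 => separable
```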
41
Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object. J Neurosci 2016; 35:13599-607. [PMID: 26446214 DOI: 10.1523/jneurosci.2267-15.2015] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work suggesting that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion.

SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion.
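For reference, the optimal cue-integration prediction against which the measured multisensory thresholds are compared takes a standard closed form: variances combine harmonically, so the combined threshold is never worse than the better single cue. A minimal sketch with illustrative threshold values:

```python
import numpy as np

def combined_threshold(sigma_vis, sigma_vest):
    """Optimal-integration prediction for the combined-cue
    discrimination threshold: combined variance is the harmonic
    combination of the unisensory variances."""
    return np.sqrt((sigma_vis**2 * sigma_vest**2)
                   / (sigma_vis**2 + sigma_vest**2))

# Illustrative unisensory heading thresholds (deg), not the paper's data
print(f"predicted combined threshold: {combined_threshold(3.5, 4.2):.2f} deg")
```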
42
Abstract
The relative simplicity of the neural circuits that mediate vestibular reflexes is well suited for linking systems and cellular levels of analysis. Notably, a distinctive feature of the vestibular system is that neurons at the first central stage of sensory processing in the vestibular nuclei are premotor neurons; the same neurons that receive vestibular-nerve input also send direct projections to motor pathways. For example, the simplicity of the three-neuron pathway that mediates the vestibulo-ocular reflex allows the generation of compensatory eye movements within ~5 ms of a head movement. Similarly, relatively direct pathways between the labyrinth and spinal cord control vestibulospinal reflexes. A second distinctive feature of the vestibular system is that the first stage of central processing is strongly multimodal. This is because the vestibular nuclei receive inputs from a wide range of cortical, cerebellar, and other brainstem structures in addition to direct inputs from the vestibular nerve. Recent studies in alert animals have established how extravestibular signals shape these "simple" reflexes to meet the needs of the current behavioral goal. Moreover, multimodal interactions at higher levels, such as the vestibular cerebellum, thalamus, and cortex, play a vital role in ensuring accurate self-motion and spatial orientation perception.
Affiliation(s)
- K E Cullen: Department of Physiology, McGill University, Montreal, Quebec, Canada

43
Pfeiffer C, van Elk M, Bernasconi F, Blanke O. Distinct vestibular effects on early and late somatosensory cortical processing in humans. Neuroimage 2015; 125:208-219. [PMID: 26466979 DOI: 10.1016/j.neuroimage.2015.10.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2015] [Revised: 08/31/2015] [Accepted: 10/01/2015] [Indexed: 11/28/2022] Open
Abstract
In non-human primates several brain areas contain neurons that respond to both vestibular and somatosensory stimulation. In humans, vestibular stimulation activates several somatosensory brain regions and improves tactile perception. However, less is known about the spatio-temporal dynamics of such vestibular-somatosensory interactions in the human brain. To address this issue, we recorded high-density electroencephalography during left median nerve electrical stimulation to obtain somatosensory evoked potentials (SEPs). We analyzed SEPs during vestibular activation following sudden decelerations from constant-velocity (90°/s and 60°/s) earth-vertical axis yaw rotations, and SEPs during a non-vestibular control period. SEP analysis revealed two distinct temporal effects of vestibular activation: an early effect (28-32 ms post-stimulus) characterized by vestibular suppression of SEP response strength that depended on rotation velocity, and a later effect (97-112 ms post-stimulus) characterized by vestibular modulation of the SEP topographical pattern that was independent of rotation velocity. Source estimation localized these vestibular effects, during both time periods, to activation differences in a distributed cortical network including the right postcentral gyrus, right insula, left precuneus, and bilateral secondary somatosensory cortex. These results suggest that vestibular-somatosensory interactions in humans depend on processing in specific time periods in somatosensory and vestibular cortical regions.
Affiliation(s)
- Christian Pfeiffer: Laboratoire de Recherche en Neuroimagerie (LREN), Department of Clinical Neuroscience, Lausanne University and University Hospital, Lausanne, Switzerland; Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Michiel van Elk: Department of Psychology, University of Amsterdam, Netherlands
- Fosco Bernasconi: Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Olaf Blanke: Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Department of Neurology, University Hospital Geneva, Switzerland

44
Crane BT. Coordinates of Human Visual and Inertial Heading Perception. PLoS One 2015; 10:e0135539. [PMID: 26267865 PMCID: PMC4534459 DOI: 10.1371/journal.pone.0135539] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2015] [Accepted: 07/22/2015] [Indexed: 11/22/2022] Open
Abstract
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiological evidence has suggested that the reference frames remain separate even at higher levels of processing, but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model, which considered the relative sensitivity to lateral motion and the coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates that visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on perceived heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
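A population vector decoder of the general kind referred to above can be sketched in a few lines: each model cell votes with a unit vector at its preferred heading, weighted by its response. This sketch uses a synthetic cosine-tuned population; it is not the paper's fitted two-degree-of-freedom model.

```python
import numpy as np

def population_vector(preferred_deg, rates):
    """Population-vector readout of heading: sum unit vectors at
    each cell's preferred heading, weighted by firing rate, and
    take the angle of the resultant vector."""
    angles = np.deg2rad(preferred_deg)
    x = np.sum(rates * np.cos(angles))
    y = np.sum(rates * np.sin(angles))
    return np.rad2deg(np.arctan2(y, x)) % 360

# Cosine-tuned model population responding to a 30 deg heading
preferred = np.arange(0, 360, 5)
true_heading = 30.0
rates = 10 + 8 * np.cos(np.deg2rad(preferred - true_heading))
print(f"decoded heading = {population_vector(preferred, rates):.1f} deg")
```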
Affiliation(s)
- Benjamin Thomas Crane: Department of Otolaryngology, University of Rochester, Rochester, NY, United States of America; Department of Bioengineering, University of Rochester, Rochester, NY, United States of America; Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America

45
de Winkel KN, Katliar M, Bülthoff HH. Forced fusion in multisensory heading estimation. PLoS One 2015; 10:e0127104. [PMID: 25938235 PMCID: PMC4418840 DOI: 10.1371/journal.pone.0127104] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2015] [Accepted: 04/10/2015] [Indexed: 11/18/2022] Open
Abstract
It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and for stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2 s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging from 0° to 90° were introduced for the combined stimuli. In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, where sensitivity to detect discrepancies differs between people.
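To make the contrast between the two candidate models concrete, here is a simplified sketch of forced fusion versus model-averaging causal inference, using only the cue discrepancy to infer a common cause. This is a simplification of the full causal-inference model, not the authors' fitted implementation, and all parameter values are illustrative.

```python
import numpy as np

def fused_estimate(x_vis, x_inert, sigma_vis, sigma_inert):
    """Forced-fusion (full integration) heading estimate:
    reliability-weighted average of the two cues."""
    w = sigma_inert**2 / (sigma_vis**2 + sigma_inert**2)
    return w * x_vis + (1 - w) * x_inert

def causal_inference_estimate(x_vis, x_inert, sigma_vis, sigma_inert,
                              p_common=0.5, sigma_prior=45.0):
    """Model-averaging causal inference: weight the fused estimate
    by the posterior probability of a common cause, inferred here
    from the cue discrepancy alone (a simplification)."""
    d = x_vis - x_inert
    var_c1 = sigma_vis**2 + sigma_inert**2                 # common cause
    var_c2 = var_c1 + 2 * sigma_prior**2                   # independent causes
    like_c1 = np.exp(-d**2 / (2 * var_c1)) / np.sqrt(2 * np.pi * var_c1)
    like_c2 = np.exp(-d**2 / (2 * var_c2)) / np.sqrt(2 * np.pi * var_c2)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    fused = fused_estimate(x_vis, x_inert, sigma_vis, sigma_inert)
    # Under independent causes, report the inertial estimate (a choice)
    return post_c1 * fused + (1 - post_c1) * x_inert

# Small discrepancies are mostly fused; large ones are segregated.
for disc in (10, 45, 90):
    est = causal_inference_estimate(disc / 2, -disc / 2, 5.0, 5.0)
    print(f"discrepancy {disc:3d} deg -> estimate {est:+6.1f} deg")
```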
Collapse
Affiliation(s)
- Ksander N. de Winkel
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
| | - Mikhail Katliar
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
| | - Heinrich H. Bülthoff
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
- Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713, Korea
| |
Collapse
|
46
|
Sunkara A, DeAngelis GC, Angelaki DE. Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex. eLife 2015; 4:e04693. [PMID: 25693417 PMCID: PMC4337725 DOI: 10.7554/elife.04693] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2014] [Accepted: 01/20/2015] [Indexed: 11/16/2022] Open
Abstract
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001

When strolling along a path beside a busy street, we can look around without losing our stride. The things we see change as we walk forward, and our view also changes if we turn our head, for example to look at a passing car. Nevertheless, we can still tell that we are walking in a straight line because our brain is able to compute the direction in which we are heading by discounting the visual changes caused by rotating our head or eyes. It remains unclear how the brain gets the information about head and eye movements that it would need to do this. Many researchers had proposed that the brain estimates these rotations by using a copy of the neural signals that are sent to the muscles to move the eyes or head. However, it is possible that the brain can estimate head and eye rotations by directly analyzing the visual information from the eyes. One region of the brain that may contribute to this process is the ventral intraparietal area, or ‘area VIP’ for short. Sunkara et al. devised an experiment that can help distinguish the effects of visual cues from those of copies of neural signals sent to the muscles during eye rotations. This involved training monkeys to look at a 3D display of moving dots, which gives the impression of moving through space. Sunkara et al. then measured the electrical signals in area VIP either when the monkey moved its eyes (to follow a moving target) or when the display changed to give the monkey the same visual cues as if it had rotated its eyes, when in fact it had not. Sunkara et al. found that the electrical signals recorded in area VIP when the monkey was given the illusion of rotating its eyes were similar to the signals recorded when the monkey actually rotated its eyes. This suggests that visual cues play an important role in correcting for the effects of eye rotations and in correctly estimating the direction in which we are heading. Further research into the mechanisms behind this neural process could lead to new vision-based treatments for medical disorders that cause people to have balance problems. Similar research could also help to identify ways to improve navigation in automated vehicles, such as driverless cars. DOI: http://dx.doi.org/10.7554/eLife.04693.002
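The visual strategy at issue rests on a standard geometric fact that is easy to verify numerically: in the instantaneous optic-flow equations (Longuet-Higgins and Prazdny), the rotational field is independent of scene depth while the translational field scales with 1/Z, so depth variation (motion parallax) carries rotation-free heading information. A minimal sketch with illustrative values, not code from the study:

import numpy as np

def flow(x, y, Z, T, W, f=1.0):
    # Image velocity at pinhole image point (x, y) for a scene point at
    # depth Z, given observer translation T = (Tx, Ty, Tz) and rotation
    # W = (Wx, Wy, Wz), focal length f.
    Tx, Ty, Tz = T
    Wx, Wy, Wz = W
    u_t = (-f * Tx + x * Tz) / Z                           # depth-dependent
    v_t = (-f * Ty + y * Tz) / Z
    u_r = (x * y * Wx / f) - (f + x**2 / f) * Wy + y * Wz  # depth-independent
    v_r = (f + y**2 / f) * Wx - (x * y * Wy / f) - x * Wz
    return np.array([u_t + u_r, v_t + v_r])

# Two points at the same image location but different depths: subtracting
# their velocities cancels the rotational component, leaving a pure
# translational (heading) signal.
near = flow(0.1, 0.0, Z=1.0, T=(0.0, 0.0, 1.0), W=(0.0, 0.1, 0.0))
far  = flow(0.1, 0.0, Z=4.0, T=(0.0, 0.0, 1.0), W=(0.0, 0.1, 0.0))
print(near - far)  # parallax difference: rotation-free

Perspective distortions provide a complementary, global constraint, since a pure rotation produces a velocity field fully determined by (Wx, Wy, Wz) regardless of the scene.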
Collapse
Affiliation(s)
- Adhira Sunkara
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
| | - Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, United States
| | - Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
| |
Collapse
|
47
|
Rosenberg A, Angelaki DE. Gravity influences the visual representation of object tilt in parietal cortex. J Neurosci 2014; 34:14170-14180. [PMID: 25339732 DOI: 10.1523/JNEUROSCI.2030-14.2014]
Abstract
Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This comparison revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, in intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction.
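The logic of the reference-frame comparison can be made concrete with a small sketch; the von Mises tuning and all parameter values are illustrative assumptions, not the study's analysis code. If a unit encodes tilt relative to gravity, rolling the body shifts its egocentrically measured preferred tilt by the roll angle; no shift indicates egocentric coding, and partial shifts correspond to the intermediate frames reported above.

import numpy as np

def tuning(tilts_deg, pref_deg, kappa=2.0):
    # Von Mises tuning over the 360-degree space of planar tilt.
    return np.exp(kappa * np.cos(np.deg2rad(tilts_deg - pref_deg)))

def tuning_shift(resp_upright, resp_rolled, step_deg):
    # Circular cross-correlation: the displacement that best aligns the
    # rolled-condition curve with the upright curve.
    cc = [np.dot(resp_upright, np.roll(resp_rolled, s))
          for s in range(len(resp_upright))]
    return (int(np.argmax(cc)) * step_deg) % 360.0

tilts = np.arange(0.0, 360.0, 5.0)
roll = 30.0  # body roll between measurements, in degrees
ego_shift  = tuning_shift(tuning(tilts, 90.0), tuning(tilts, 90.0), 5.0)
grav_shift = tuning_shift(tuning(tilts, 90.0), tuning(tilts, 90.0 - roll), 5.0)
print(ego_shift / roll, grav_shift / roll)  # 0 -> egocentric, 1 -> gravity-centered

A shift-to-roll ratio between 0 and 1 then places each unit on the egocentric-to-gravity-centered continuum described in the abstract.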
Collapse
|
48
|
Hitier M, Besnard S, Smith PF. Vestibular pathways involved in cognition. Front Integr Neurosci 2014; 8:59. [PMID: 25100954 PMCID: PMC4107830 DOI: 10.3389/fnint.2014.00059] [Citation(s) in RCA: 209] [Impact Index Per Article: 20.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2013] [Accepted: 06/30/2014] [Indexed: 01/30/2023] Open
Abstract
Recent discoveries have emphasized the role of the vestibular system in cognitive processes such as memory, spatial navigation and bodily self-consciousness. A precise understanding of the vestibular pathways involved is essential for understanding the consequences of vestibular diseases for cognition, as well as for developing therapeutic strategies to facilitate recovery. Knowledge of the “vestibular cortical projection areas”, defined as the cortical areas activated by vestibular stimulation, has increased dramatically over the last several years from both anatomical and functional points of view. Four major pathways have been hypothesized to transmit vestibular information to the vestibular cortex: (1) the vestibulo-thalamo-cortical pathway, which probably transmits spatial information about the environment via the parietal, entorhinal and perirhinal cortices to the hippocampus and is associated with spatial representation and self-versus-object motion distinctions; (2) the pathway from the dorsal tegmental nucleus via the lateral mammillary nucleus and the anterodorsal nucleus of the thalamus to the entorhinal cortex, which transmits information for estimates of head direction; (3) the pathway via the nucleus reticularis pontis oralis, the supramammillary nucleus and the medial septum to the hippocampus, which transmits information supporting hippocampal theta rhythm and memory; and (4) a possible pathway via the cerebellum and the ventral lateral nucleus of the thalamus (perhaps to the parietal cortex), which transmits information for spatial learning. Finally, a new pathway is hypothesized via the basal ganglia, potentially involved in spatial learning and spatial memory. From these pathways, the anatomical network of vestibular cognition progressively emerges.
Collapse
Affiliation(s)
- Martin Hitier
- Inserm, U 1075 COMETE, Caen, France; Department of Pharmacology and Toxicology, Brain Health Research Center, University of Otago, Dunedin, New Zealand; Department of Anatomy, UNICAEN, Caen, France; Department of Otolaryngology Head and Neck Surgery, CHU de Caen, Caen, France
| | | | - Paul F Smith
- Department of Pharmacology and Toxicology, Brain Health Research Center, University of Otago, Dunedin, New Zealand
| |
Collapse
|
49
|
Ventre-Dominey J. Vestibular function in the temporal and parietal cortex: distinct velocity and inertial processing pathways. Front Integr Neurosci 2014; 8:53. [PMID: 25071481 PMCID: PMC4082317 DOI: 10.3389/fnint.2014.00053] [Citation(s) in RCA: 54] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2013] [Accepted: 06/05/2014] [Indexed: 11/13/2022] Open
Abstract
A number of behavioral and neuroimaging studies have reported converging data in favor of a cortical network for vestibular function, distributed between the temporo-parietal cortex and the prefrontal cortex in the primate. In this review, we focus on the role of the cerebral cortex in visuo-vestibular integration, including the motion-sensitive temporo-occipital region, i.e., the medial superior temporal area (MST), and the parietal cortex. Indeed, these two neighboring cortical regions, though both receive combined vestibular and visual information, play distinct roles in vestibular function. In sum, this review of the literature leads to the idea of two separate cortical vestibular subsystems: (1) a velocity pathway including MST and direct descending projections onto the vestibular nuclei; because it receives well-defined visual and vestibular velocity signals, this pathway is likely involved in heading perception and in the rapid top-down regulation of eye/head coordination; and (2) an inertial processing pathway involving the parietal cortex, in connection with the subcortical vestibular nuclear complex responsible for velocity-storage integration. This cortical vestibular pathway would be implicated in high-order multimodal integration and in cognitive functions, including world-space and self-referential processing.
Collapse
|
50
|
Pelah A, Barbur J, Thurrell A, Hock HS. The coupling of vision with locomotion in cortical blindness. Vision Res 2014; 110:286-294. [PMID: 24832646 DOI: 10.1016/j.visres.2014.04.015] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2013] [Revised: 04/29/2014] [Accepted: 04/30/2014] [Indexed: 10/25/2022]
Abstract
Maintaining or modifying the speed and direction of locomotion requires coupling locomotion with the retinal optic flow it generates. It is shown that this essential behavioral capability, which requires on-line neural control, is preserved in the cortically blind hemifield of a hemianope. In the experiments, optic flow stimuli were presented to either the normal or the blind hemifield while the patient walked on a treadmill. Little difference was found between the hemifields with respect to the coupling (i.e., co-dependency) of optic flow detection with locomotion. Even in the cortically blind hemifield, faster walking resulted in the perceptual slowing of detected optic flow, and self-selected locomotion speeds demonstrated behavioral discrimination between different optic flow speeds. The results indicate that the processing of optic flow, and thereby on-line visuo-locomotor coupling, can take place along neural pathways that function without processing in area V1, and thus in the absence of conscious intervention. These and earlier findings suggest that optic flow and object motion are processed in parallel, along with correlated non-visual locomotion signals. Extrastriate interactions may be responsible for discounting the optical effects of locomotion on the perceived direction of object motion and for maintaining visually guided self-motion.
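The reported coupling is consistent with a simple subtractive scheme, sketched below purely as an illustration; the linear form and the gain value are assumptions, not the paper's model. Flow predicted from walking speed is subtracted from retinal flow, so faster walking slows the perceived speed of an unchanged display.

def perceived_flow_speed(retinal_flow, walking_speed, gain=0.5):
    # gain < 1 models partial discounting of self-generated flow.
    return max(0.0, retinal_flow - gain * walking_speed)

# The same display (1.5 units of retinal flow) appears slower as treadmill
# walking speed increases, in either hemifield.
for walk in (0.8, 1.2, 1.6):  # walking speed, m/s
    print(walk, perceived_flow_speed(1.5, walk))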
Collapse
Affiliation(s)
- Adar Pelah
- Department of Electronics, University of York, York YO10 5DD, UK.
| | - John Barbur
- School of Health Sciences, City University London, London EC1V 0HB, UK
| | - Adrian Thurrell
- Girton College, University of Cambridge, Cambridge CB3 0JG, UK
| | - Howard S Hock
- Department of Psychology, The Center for Complex Systems and Brain Science, Florida Atlantic University, Boca Raton, FL 33486, USA
| |
Collapse
|