1. Peltier NE, Anzai A, Moreno-Bote R, DeAngelis GC. A neural mechanism for optic flow parsing in macaque visual cortex. Curr Biol 2024:S0960-9822(24)01241-7. PMID: 39389059. DOI: 10.1016/j.cub.2024.09.030.
Abstract
For the brain to compute object motion in the world during self-motion, it must discount the global patterns of image motion (optic flow) caused by self-motion. Optic flow parsing is a proposed visual mechanism for computing object motion in the world, and studies in both humans and monkeys have demonstrated perceptual biases consistent with the operation of a flow-parsing mechanism. However, the neural basis of flow parsing remains unknown. We demonstrate, at both the individual unit and population levels, that neural activity in macaque middle temporal (MT) area is biased by peripheral optic flow in a manner that can at least partially account for perceptual biases induced by flow parsing. These effects cannot be explained by conventional surround suppression mechanisms or choice-related activity and have substantial neural latency. Together, our findings establish the first neural basis for the computation of scene-relative object motion based on flow parsing.
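The subtraction at the heart of flow parsing can be illustrated in a few lines. This is a toy sketch, not the study's model: it assumes a purely radial flow field (forward translation toward a frontoparallel scene) with an assumed focus of expansion, object position, and speeds.

```python
import numpy as np

def self_motion_flow(xy, foe=np.zeros(2), speed=1.0):
    # Radial flow vector at retinal position xy for forward self-motion:
    # velocity points away from the focus of expansion (FOE) and grows
    # with eccentricity (a simplification valid for a constant-depth scene).
    return speed * (xy - foe)

# Hypothetical object at (2, 1) deg. Its measured retinal motion is the
# self-motion flow at that location plus a true world motion of (0.5, 0).
xy = np.array([2.0, 1.0])
world_motion = np.array([0.5, 0.0])
retinal_motion = self_motion_flow(xy) + world_motion

# Flow parsing: discount the component attributable to self-motion,
# leaving the object's scene-relative motion.
recovered = retinal_motion - self_motion_flow(xy)
print(recovered)  # equals world_motion
```

The perceptual biases reported above correspond to this subtraction being applied only partially, so the recovered vector lies between the retinal and world motion.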
Affiliation(s)
- Nicole E Peltier: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Akiyuki Anzai: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Rubén Moreno-Bote: Center for Brain and Cognition & Department of Engineering, Universitat Pompeu Fabra, Barcelona 08002, Spain; Serra Húnter Fellow Programme, Universitat Pompeu Fabra, Barcelona 08002, Spain
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
2. Gao Y, Cai YC, Liu DY, Yu J, Wang J, Li M, Xu B, Wang T, Chen G, Northoff G, Bai R, Song XM. GABAergic inhibition in human hMT+ predicts visuo-spatial intelligence mediated through the frontal cortex. eLife 2024; 13:RP97545. PMID: 39352734. PMCID: PMC11444681. DOI: 10.7554/elife.97545.
Abstract
The prevailing view emphasizes that the fronto-parietal network (FPN) is key in mediating general fluid intelligence (gF). Meanwhile, recent studies show that the human MT complex (hMT+), located at the occipito-temporal border and involved in 3D perception processing, also plays a key role in gF. However, the underlying mechanism remains unclear. To investigate this issue, our study targets visuo-spatial intelligence, which is considered to load highly on gF. We used ultra-high-field magnetic resonance spectroscopy (MRS) to measure GABA/Glu concentrations in hMT+, combined with resting-state fMRI functional connectivity (FC) and behavioral examinations, including an hMT+ perception suppression test and the visuo-spatial subtest of gF. Our findings show that both GABA in hMT+ and frontal-hMT+ functional connectivity significantly correlate with performance on visuo-spatial intelligence. Further, a serial mediation model demonstrates that the effect of hMT+ GABA on visuo-spatial gF is fully mediated by hMT+-frontal FC. Together, our findings highlight the importance of integrating sensory and frontal cortices in mediating the visuo-spatial component of general fluid intelligence.
Affiliation(s)
- Yuan Gao: Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou, China
- Yong-Chun Cai: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Dong-Yu Liu: Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou, China; Key Laboratory of Biomedical Engineering of Ministry of Education, Qiushi Academy for Advanced Studies, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Juan Yu: Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou, China; Key Laboratory of Biomedical Engineering of Ministry of Education, Qiushi Academy for Advanced Studies, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Jue Wang: Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou, China
- Ming Li: College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China
- Bin Xu: Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou, China
- Tengfei Wang: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Gang Chen: University of Ottawa Institute of Mental Health Research, University of Ottawa, Ottawa, Canada
- Georg Northoff: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Hangzhou, China
- Ruiliang Bai: MOE Frontier Science Center for Brain Science & Brain-Machine Integration, Zhejiang University, Hangzhou, China
- Xue Mei Song: Department of Neurosurgery of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou, China; Key Laboratory of Biomedical Engineering of Ministry of Education, Qiushi Academy for Advanced Studies, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
3. Kong L, Zeng F, Zhang Y, Li L, Chen A. The influence of form on motion signal processing in the ventral intraparietal area of macaque monkeys. Heliyon 2024; 10:e36913. PMID: 39286089. PMCID: PMC11402950. DOI: 10.1016/j.heliyon.2024.e36913.
Abstract
The visual system relies on both motion and form signals to perceive the direction of self-motion, yet the mechanisms coordinating these two elements remain elusive. In the current study, we employed heading perception as a model to examine the interaction between form and motion signals. We recorded the responses of neurons in the ventral intraparietal area (VIP), an area with strong heading selectivity, to motion-only, form-only, and combined stimuli simulating self-motion. Intriguingly, VIP neurons responded to form-only cues defined by Glass patterns, although they exhibited no tuning selectivity. In the combined condition, introducing a small offset between form and motion cues significantly enhanced neuronal sensitivity to motion cues; with a larger offset, the enhancement was comparatively smaller. Moreover, the influence of form cues on neuronal responses to motion cues was more pronounced in the later stage (1-2 s) of stimulation than in the early stage (0-1 s), suggesting a dynamic interaction between motion and form cues over time for heading perception. In summary, our study uncovered that in area VIP, form information plays a role in constructing accurate self-motion perception, adding valuable insight into how the brain integrates motion and form cues for the perception of one's own movements.
Affiliation(s)
- Lingqi Kong: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China
- Fu Zeng: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China
- Yingying Zhang: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China
- Li Li: Faculty of Arts and Science, New York University Shanghai, Shanghai 200122, China; New York University-East China Normal University Joint Research Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai 200062, China
- Aihua Chen: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China; New York University-East China Normal University Joint Research Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai 200062, China
4. Layton OW, Steinmetz ST. Accuracy optimized neural networks do not effectively model optic flow tuning in brain area MSTd. Front Neurosci 2024; 18:1441285. PMID: 39286477. PMCID: PMC11403719. DOI: 10.3389/fnins.2024.1441285.
Abstract
Accuracy-optimized convolutional neural networks (CNNs) have emerged as highly effective models for predicting neural responses along the primate ventral stream, but it is largely unknown whether they effectively model neurons in the complementary dorsal stream. We explored how well CNNs model the optic flow tuning properties of neurons in dorsal area MSTd and compared our results with the non-negative matrix factorization (NNMF) model, which successfully models many tuning properties of MSTd neurons. To better understand which computational properties of the NNMF model give rise to optic flow tuning that resembles that of MSTd neurons, we created additional CNN variants that implement key NNMF constraints: non-negative weights and sparse coding of optic flow. While the CNNs and NNMF models both accurately estimate the observer's self-motion from purely translational or rotational optic flow, NNMF and the CNNs with non-negative weights yield substantially less accurate estimates than the other CNNs when tested on more complex optic flow that combines observer translation and rotation. Despite its poor accuracy, NNMF gives rise to tuning properties that align more closely with those observed in primate MSTd than any of the accuracy-optimized CNNs. This work offers a step toward a deeper understanding of the computational properties and constraints that describe the optic flow tuning of primate area MSTd.
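The non-negativity constraint discussed above is the defining feature of NNMF. A minimal sketch using the classic Lee-Seung multiplicative updates shows how both factors stay non-negative throughout training; the data matrix here is a random stand-in, not actual optic flow, and the rank `k` is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((100, 64))   # stand-in data: 100 samples x 64 "pixels", all >= 0
k = 8                       # number of basis patterns (assumed for illustration)

# Random non-negative initialization of the factors V ~ W @ H.
W = rng.random((100, k)) + 1e-3
H = rng.random((k, 64)) + 1e-3
eps = 1e-9

for _ in range(200):
    # Lee-Seung multiplicative updates: ratios of non-negative quantities,
    # so W and H can never become negative (the NNMF constraint above).
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)  # relative reconstruction error; both factors remain non-negative
```

Because parts can only be added, never subtracted, the learned basis tends toward sparse, parts-based patterns, which is the property argued above to match MSTd tuning better than unconstrained CNN weights.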
Affiliation(s)
- Oliver W Layton: Department of Computer Science, Colby College, Waterville, ME, United States
- Scott T Steinmetz: Center for Computing Research, Sandia National Labs, Albuquerque, NM, United States
5. Pickard K, Davidson MJ, Kim S, Alais D. Incongruent active head rotations increase visual motion detection thresholds. Neurosci Conscious 2024; 2024:niae019. PMID: 38757119. PMCID: PMC11097904. DOI: 10.1093/nc/niae019.
Abstract
Attributing a visual motion signal to its correct source-be that external object motion, self-motion, or some combination of both-seems effortless, and yet often involves disentangling a complex web of motion signals. Existing literature focuses on either translational motion (heading) or eye movements, leaving much to be learnt about the influence of a wider range of self-motions, such as active head rotations, on visual motion perception. This study investigated how active head rotations affect visual motion detection thresholds, comparing conditions where visual motion and head-turn direction were either congruent or incongruent. Participants judged the direction of a visual motion stimulus while rotating their head or remaining stationary, using a fixation-locked virtual reality display with integrated head-movement recordings. Thresholds to perceive visual motion were higher in both active head-rotation conditions than when stationary, though no differences were found between the congruent and incongruent conditions. Participants also showed a significant bias to report seeing visual motion travelling in the same direction as the head rotation. Together, these results demonstrate that active head rotations increase visual motion perceptual thresholds, particularly in cases of incongruent visual and active vestibular stimulation.
Affiliation(s)
- Kate Pickard: School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- Matthew J Davidson: School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- Sujin Kim: School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- David Alais: School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
6. Hua A, Wang G, Bai J, Hao Z, Liu J, Meng J, Wang J. Nonlinear dynamics of postural control system under visual-vestibular habituation balance practice: evidence from EEG, EMG and center of pressure signals. Front Hum Neurosci 2024; 18:1371648. PMID: 38736529. PMCID: PMC11082324. DOI: 10.3389/fnhum.2024.1371648.
Abstract
The human postural control system is inherently complex, with nonlinear interactions among multiple subsystems; this complexity gives it the flexibility to adapt to challenging environments. Previous studies applied complexity-based methods to center of pressure (COP) data to explore the nonlinear dynamics of postural sway under changing environments, but direct evidence from the central nervous system or the muscular system is limited in the existing literature. We therefore assessed the fractal dimension of COP, surface electromyographic (sEMG), and electroencephalogram (EEG) signals during visual-vestibular habituation balance practice. We combined a rotating platform and a virtual reality headset to present visual-vestibular congruent or incongruent conditions, and asked participants to undergo repeated exposure to either the congruent (n = 14) or the incongruent condition (n = 13) five times while maintaining balance. We found that repeated practice under both conditions increased the complexity of the high-frequency (0.5-20 Hz) component of the COP data and of the sEMG data from the tibialis anterior muscle. In contrast, repeated practice under sensory conflict decreased the complexity of the low-frequency (<0.5 Hz) component of the COP data and of EEG data from the parietal and occipital lobes, whereas repeated practice under the congruent environment decreased the complexity of EEG data from the parietal and temporal lobes. These results suggest that the nonlinear dynamics of cortical activity differed after balance practice under congruent and incongruent environments. We also found positive correlations (1) between the complexity of the high-frequency component of COP and the complexity of sEMG signals from calf muscles, and (2) between the complexity of the low-frequency component of COP and the complexity of EEG signals. These results suggest that the low- and high-frequency components of COP may be related to central and muscular adjustment of postural control, respectively.
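Fractal dimension of a physiological time series, as used above for COP, sEMG, and EEG, is commonly estimated with Higuchi's method. The following is a standard-form sketch; the study's exact estimator and parameters are not given here, so `kmax` and the test signals are assumptions for illustration.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D series (standard estimator)."""
    x = np.asarray(x, float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsample with lag k, offset m
            if len(idx) < 2:
                continue
            # Normalized curve length of the subsampled series (Higuchi, 1988)
            lm = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k)
            lengths.append(lm / k)
        lk.append(np.mean(lengths))
    # FD is the slope of log L(k) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return slope

rng = np.random.default_rng(1)
fd_smooth = higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 1000)))  # near 1
fd_noise = higuchi_fd(rng.standard_normal(1000))                 # near 2
print(fd_smooth, fd_noise)
```

Higher values indicate a more irregular, "complex" signal, which is the sense in which the practice effects above increase or decrease complexity.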
Affiliation(s)
- Anke Hua: Department of Sports Science, Zhejiang University, Hangzhou, China; Sciences Cognitives et Sciences Affectives, University of Lille, Lille, France
- Guozheng Wang: College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China; Taizhou Key Laboratory of Medical Devices and Advanced Materials, Research Institute of Zhejiang University, Taizhou, China
- Jingyuan Bai: Department of Sports Science, Zhejiang University, Hangzhou, China
- Zengming Hao: Department of Rehabilitation Medicine, First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Jun Liu: College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Jun Meng: College of Control Science and Engineering, Zhejiang University, Hangzhou, China
- Jian Wang: Department of Sports Science, Zhejiang University, Hangzhou, China; Center for Psychological Science, Zhejiang University, Hangzhou, China
7. He X, Bao M. Neuroimaging evidence of visual-vestibular interaction accounting for perceptual mislocalization induced by head rotation. Neurophotonics 2024; 11:015005. PMID: 38298609. PMCID: PMC10828893. DOI: 10.1117/1.nph.11.1.015005.
Abstract
Significance: A fleeting flash aligned vertically with an object that remains stationary in head-centered space is perceived as lagging behind the object during the observer's horizontal head rotation. This perceptual mislocalization is an illusion named the head-rotation-induced flash-lag effect (hFLE). While many studies have investigated the neural mechanism of the classical visual FLE, the hFLE has hardly been investigated. Aim: We measured the cortical activity corresponding to the hFLE in participants experiencing passive head rotations, using functional near-infrared spectroscopy (fNIRS). Approach: Participants judged the relative position of a flash to a fixed reference while being horizontally rotated or staying static in a swivel chair; meanwhile, fNIRS signals were recorded over temporal-parietal areas. The flash duration was manipulated to provide control conditions. Results: Brain activity specific to the hFLE was found around the right middle/inferior temporal gyri and the bilateral supramarginal and superior temporal gyri. The activation was positively correlated with the participant's rotation velocity around the supramarginal gyrus and negatively related to hFLE intensity around the middle temporal gyrus. Conclusions: These results suggest that the mechanism underlying the hFLE involves multiple aspects of visual-vestibular interaction, including the processing of multisensory conflicts mediated by the temporoparietal junction and the modulation of vestibular signals on object position perception in the human middle temporal complex.
Affiliation(s)
- Xin He: Chinese Academy of Sciences, Institute of Psychology, CAS Key Laboratory of Behavioral Science, Beijing, China
- Min Bao: Chinese Academy of Sciences, Institute of Psychology, CAS Key Laboratory of Behavioral Science, Beijing, China; University of Chinese Academy of Sciences, Department of Psychology, Beijing, China; State Key Laboratory of Brain and Cognitive Science, Beijing, China
8. Zaidel A. Multisensory Calibration: A Variety of Slow and Fast Brain Processes Throughout the Lifespan. Adv Exp Med Biol 2024; 1437:139-152. PMID: 38270858. DOI: 10.1007/978-981-99-7611-9_9.
Abstract
From before we are born, throughout development, adulthood, and aging, we are immersed in a multisensory world. At each of these stages, our sensory cues are constantly changing, due to body, brain, and environmental changes. While integration of information from our different sensory cues improves precision, this only improves accuracy if the underlying cues are unbiased. Thus, multisensory calibration is a vital and ongoing process. To meet this grand challenge, our brains have evolved a variety of mechanisms. First, in response to a systematic discrepancy between sensory cues (without external feedback) the cues calibrate one another (unsupervised calibration). Second, multisensory function is calibrated to external feedback (supervised calibration). These two mechanisms superimpose. While the former likely reflects a lower level mechanism, the latter likely reflects a higher level cognitive mechanism. Indeed, neural correlates of supervised multisensory calibration in monkeys were found in higher level multisensory cortical area VIP, but not in the relatively lower level multisensory area MSTd. In addition, even without a cue discrepancy (e.g., when experiencing stimuli from different sensory cues in series) the brain monitors supra-modal statistics of events in the environment and adapts perception cross-modally. This too comprises a variety of mechanisms, including confirmation bias to prior choices, and lower level cross-sensory adaptation. Further research into the neuronal underpinnings of the broad and diverse functions of multisensory calibration, with improved synthesis of theories is needed to attain a more comprehensive understanding of multisensory brain function.
Affiliation(s)
- Adam Zaidel: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
9. Zhang WH. Decentralized Neural Circuits of Multisensory Information Integration in the Brain. Adv Exp Med Biol 2024; 1437:1-21. PMID: 38270850. DOI: 10.1007/978-981-99-7611-9_1.
Abstract
The brain combines multisensory inputs to obtain a complete and reliable description of the world. Recent experiments suggest that several interconnected multisensory brain areas are simultaneously involved in integrating multisensory information, but it was unknown how these mutually connected areas achieve integration. To answer this question, using biologically plausible neural circuit models we developed a decentralized system for information integration that comprises multiple interconnected multisensory brain areas. Through studying an example of integrating visual and vestibular cues to infer heading direction, we show that such a decentralized system is consistent with experimental observations. In particular, we demonstrate that this decentralized system can optimally integrate information by implementing sampling-based Bayesian inference. The Poisson variability of spike generation provides appropriate variability to drive sampling, and the interconnections between multisensory areas store the correlation prior between multisensory stimuli. The decentralized system predicts that optimally integrated information emerges locally from the dynamics of the communication between brain areas, shedding new light on the interpretation of the connectivity between multisensory brain areas.
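The idea of sampling-based Bayesian inference can be illustrated outside any circuit model: a stochastic (Langevin) process, whose noise plays the role of neural variability, draws samples whose distribution converges to the posterior over heading implied by two Gaussian cues. This is a generic sketch of the inference scheme, not the paper's network; all cue values and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x_vis, sig_vis = 10.0, 2.0   # visual heading cue (deg) and its noise s.d.
x_ves, sig_ves = 4.0, 1.0    # vestibular cue: more reliable here

# Closed-form Gaussian posterior (flat prior) for comparison.
post_var = 1.0 / (1.0 / sig_vis**2 + 1.0 / sig_ves**2)
post_mean = post_var * (x_vis / sig_vis**2 + x_ves / sig_ves**2)

# Langevin sampling: drift up the log-posterior gradient plus noise.
# The chain's stationary distribution is the posterior itself.
dt, n = 0.01, 50000
s = 0.0
samples = np.empty(n)
for i in range(n):
    grad = -(s - post_mean) / post_var       # d/ds log p(s | cues)
    s += 0.5 * dt * grad + np.sqrt(dt) * rng.standard_normal()
    samples[i] = s

est = samples[5000:]  # discard burn-in
print(est.mean(), est.var())  # approach post_mean and post_var
```

The sample mean lands near the reliability-weighted combination of the two cues, which is the sense in which sampling implements optimal integration.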
Affiliation(s)
- Wen-Hao Zhang: Lyda Hill Department of Bioinformatics and O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, TX, USA
10. Zheng Q, Gu Y. From Multisensory Integration to Multisensory Decision-Making. Adv Exp Med Biol 2024; 1437:23-35. PMID: 38270851. DOI: 10.1007/978-981-99-7611-9_2.
Abstract
Organisms live in a dynamic environment in which sensory information from multiple sources is ever changing. A conceptually complex task for an organism is to accumulate evidence across sensory modalities and over time, a process known as multisensory decision-making. This concept is relatively new, in that previous research has largely been conducted in parallel disciplines: much effort has been devoted either to sensory integration across modalities using activity summed over a duration of time, or to decision-making with only one sensory modality that evolves over time. Recently, a few neurophysiological studies have emerged that examine how information from different sensory modalities is processed, accumulated, and integrated over time in decision-related areas such as the parietal or frontal lobes of mammals. In this review, we summarize and comment on these studies, which combine the two long-separate fields of multisensory integration and decision-making, and show how the new findings provide insight into the neural mechanisms mediating multisensory information processing in a more complete way.
Affiliation(s)
- Qihao Zheng: Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Yong Gu: Systems Neuroscience, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
11. Lin R, Zeng F, Wang Q, Chen A. Cross-Modal Plasticity during Self-Motion Perception. Brain Sci 2023; 13:1504. PMID: 38002465. PMCID: PMC10669852. DOI: 10.3390/brainsci13111504.
Abstract
To maintain stable and coherent perception in an ever-changing environment, the brain needs to continuously and dynamically calibrate information from multiple sensory sources, using sensory and non-sensory information in a flexible manner. Here, we review how the vestibular and visual signals are recalibrated during self-motion perception. We illustrate two different types of recalibration: one long-term cross-modal (visual-vestibular) recalibration concerning how multisensory cues recalibrate over time in response to a constant cue discrepancy, and one rapid-term cross-modal (visual-vestibular) recalibration concerning how recent prior stimuli and choices differentially affect subsequent self-motion decisions. In addition, we highlight the neural substrates of long-term visual-vestibular recalibration, with profound differences observed in neuronal recalibration across multisensory cortical areas. We suggest that multisensory recalibration is a complex process in the brain, is modulated by many factors, and requires the coordination of many distinct cortical areas. We hope this review will shed some light on research into the neural circuits of visual-vestibular recalibration and help develop a more generalized theory for cross-modal plasticity.
Affiliation(s)
- Rushi Lin: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Fu Zeng: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Qingjun Wang: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Aihua Chen: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai 200122, China
12. Zeng Z, Zhang C, Gu Y. Visuo-vestibular heading perception: a model system to study multi-sensory decision making. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220334. PMID: 37545303. PMCID: PMC10404926. DOI: 10.1098/rstb.2022.0334.
Abstract
Integrating noisy signals across time as well as across sensory modalities, a process named multi-sensory decision making (MSDM), is an essential strategy for making more accurate and sensitive decisions in complex environments. Although this field is just emerging, recent work from different perspectives, including computational theory, psychophysical behaviour and neurophysiology, has begun to shed new light on MSDM. In the current review, we focus on MSDM using visuo-vestibular heading as a model system. Combining well-controlled behavioural paradigms on virtual-reality systems, single-unit recordings, causal manipulations and computational theory based on spiking activity, recent progress reveals that vestibular signals contain complex temporal dynamics in many brain regions, including unisensory, multi-sensory and sensory-motor association areas. This challenges the brain to integrate cues across time and across sensory modalities, such as optic flow, which mainly contains a motion velocity signal. In addition, new evidence from higher-level decision-related areas, mostly in posterior and frontal/prefrontal regions, helps revise the conventional view of how signals from different sensory modalities are processed, converged, and accumulated moment by moment through neural circuits to form a unified, optimal perceptual decision. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
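The moment-by-moment accumulation described above is often formalized as a drift-diffusion process. Below is a minimal sketch in which momentary visual and vestibular heading evidence is weighted by cue reliability and accumulated to a decision bound; all parameters are illustrative assumptions, not values from the reviewed work.

```python
import numpy as np

rng = np.random.default_rng(2)

def msdm_trial(drift_vis=0.5, drift_ves=0.5, sig_vis=2.0, sig_ves=1.0,
               bound=3.0, dt=0.01, max_t=10.0):
    # Static inverse-variance weight for the visual cue (vestibular gets 1 - w).
    w_vis = (1 / sig_vis**2) / (1 / sig_vis**2 + 1 / sig_ves**2)
    dv, t = 0.0, 0.0
    while abs(dv) < bound and t < max_t:
        # Momentary evidence samples from each modality.
        e_vis = drift_vis * dt + sig_vis * np.sqrt(dt) * rng.standard_normal()
        e_ves = drift_ves * dt + sig_ves * np.sqrt(dt) * rng.standard_normal()
        dv += w_vis * e_vis + (1 - w_vis) * e_ves  # reliability-weighted sum
        t += dt
    return (1 if dv > 0 else -1), t  # choice (+1 = rightward) and decision time

choices = [msdm_trial()[0] for _ in range(200)]
print(np.mean(choices))  # mostly +1, since the true heading drifts rightward
```

Raising either cue's reliability, or the bound, trades speed for accuracy, which is the speed-accuracy trade-off emphasized in this literature.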
Affiliation(s)
- Zhao Zeng: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China; University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Ce Zhang: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China; University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Yong Gu: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China; University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
13. Jerjian SJ, Harsch DR, Fetsch CR. Self-motion perception and sequential decision-making: where are we heading? Philos Trans R Soc Lond B Biol Sci 2023; 378:20220333. PMID: 37545301. PMCID: PMC10404932. DOI: 10.1098/rstb.2022.0333.
Abstract
To navigate and guide adaptive behaviour in a dynamic environment, animals must accurately estimate their own motion relative to the external world. This is a fundamentally multisensory process involving integration of visual, vestibular and kinesthetic inputs. Ideal observer models, paired with careful neurophysiological investigation, helped to reveal how visual and vestibular signals are combined to support perception of linear self-motion direction, or heading. Recent work has extended these findings by emphasizing the dimension of time, both with regard to stimulus dynamics and the trade-off between speed and accuracy. Both time and certainty-i.e. the degree of confidence in a multisensory decision-are essential to the ecological goals of the system: terminating a decision process is necessary for timely action, and predicting one's accuracy is critical for making multiple decisions in a sequence, as in navigation. Here, we summarize a leading model for multisensory decision-making, then show how the model can be extended to study confidence in heading discrimination. Lastly, we preview ongoing efforts to bridge self-motion perception and navigation per se, including closed-loop virtual reality and active self-motion. The design of unconstrained, ethologically inspired tasks, accompanied by large-scale neural recordings, raises promise for a deeper understanding of spatial perception and decision-making in the behaving animal. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Steven J. Jerjian: Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Devin R. Harsch: Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA; Center for Neuroscience and Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Christopher R. Fetsch: Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA

14
Liu B, Shan J, Gu Y. Temporal and spatial properties of vestibular signals for perception of self-motion. Front Neurol 2023; 14:1266513. [PMID: 37780704 PMCID: PMC10534010 DOI: 10.3389/fneur.2023.1266513] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Accepted: 08/29/2023] [Indexed: 10/03/2023] Open
Abstract
It is well recognized that the vestibular system is involved in numerous important cognitive functions, including self-motion perception, spatial orientation, locomotion, and vector-based navigation, in addition to basic reflexes, such as oculomotor or body postural control. Consistent with this rationale, vestibular signals exist broadly in the brain, including several regions of the cerebral cortex, potentially allowing tight coordination with other sensory systems to improve the accuracy and precision of perception or action during self-motion. Recent neurophysiological studies in animal models at single-cell resolution indicate that vestibular signals exhibit complex spatiotemporal dynamics, posing challenges for identifying their exact functions and how they are integrated with signals from other modalities. For example, vestibular and optic flow signals can be congruent or incongruent with respect to spatial tuning functions, reference frames, and temporal dynamics. Comprehensive studies, including behavioral tasks, neural recordings across sensory and sensory-motor association areas, and causal manipulations, have provided some insights into the neural mechanisms underlying multisensory self-motion perception.
Affiliation(s)
- Bingyu Liu: Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Jiayu Shan: Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu: Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China

15
Gao W, Shen J, Lin Y, Wang K, Lin Z, Tang H, Chen X. Sequential sparse autoencoder for dynamic heading representation in ventral intraparietal area. Comput Biol Med 2023; 163:107114. [PMID: 37329620 DOI: 10.1016/j.compbiomed.2023.107114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 05/12/2023] [Accepted: 05/30/2023] [Indexed: 06/19/2023]
Abstract
To navigate in space, it is important to predict headings in real time from neural responses to vestibular and visual signals, and the ventral intraparietal area (VIP) is one of the critical brain areas involved. However, it remains unexplored at the population level how heading perception is represented in VIP, and there are no commonly used methods suitable for decoding headings from population responses in VIP, given the large spatiotemporal dynamics and heterogeneity of the neural responses. Here, responses were recorded from 210 VIP neurons in three rhesus monkeys while they performed a heading perception task. By specifically and separately modelling both types of dynamics with sparse representations, we built a sequential sparse autoencoder (SSAE) to perform population decoding on the recorded dataset and tried to maximize the decoding performance. The SSAE relies on a three-layer sparse autoencoder to extract temporal and spatial heading features from the dataset via unsupervised learning, and a softmax classifier to decode the headings. Compared with other population decoding methods, the SSAE achieves a leading accuracy of 96.8% ± 2.1% and shows the advantages of robustness and a low storage and computing burden for real-time prediction. Therefore, our SSAE model performs well in learning neurobiologically plausible features comprising dynamic navigational information.
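The pipeline this abstract describes, unsupervised sparse feature extraction followed by a softmax readout, can be illustrated with a minimal single-hidden-layer sketch. This is not the authors' three-layer SSAE: the synthetic "population responses", network sizes, and training hyperparameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "population responses": each of 4 heading classes drives a
# distinct subset of 40 model neurons (entirely hypothetical data).
n_class, n_per, n_neur, n_hid = 4, 100, 40, 16
X = np.vstack([rng.normal(0.0, 0.3, (n_per, n_neur))
               + 2.0 * np.eye(n_class)[c].repeat(n_neur // n_class)
               for c in range(n_class)])
y = np.repeat(np.arange(n_class), n_per)

# --- Sparse autoencoder: reconstruct X from ReLU codes with an L1 penalty ---
W1 = rng.normal(0.0, 0.1, (n_neur, n_hid))
W2 = rng.normal(0.0, 0.1, (n_hid, n_neur))
lr, l1 = 0.01, 1e-3
for _ in range(200):
    H = np.maximum(X @ W1, 0.0)            # sparse hidden codes
    dR = 2.0 * (H @ W2 - X) / len(X)       # reconstruction-error gradient
    dH = dR @ W2.T + l1 * np.sign(H)       # plus L1 sparsity gradient
    dH[H <= 0.0] = 0.0                     # ReLU mask
    W2 -= lr * (H.T @ dR)
    W1 -= lr * (X.T @ dH)

# --- Softmax classifier decodes the heading class from the sparse codes ---
H = np.maximum(X @ W1, 0.0)
Wc = np.zeros((n_hid, n_class))
onehot = np.eye(n_class)[y]
for _ in range(300):
    Z = H @ Wc
    P = np.exp(Z - Z.max(axis=1, keepdims=True))   # stable softmax
    P /= P.sum(axis=1, keepdims=True)
    Wc -= 0.1 * (H.T @ (P - onehot)) / len(H)

acc = float((np.argmax(H @ Wc, axis=1) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

The two-stage design mirrors the idea in the abstract: the feature layer is trained without labels (reconstruction plus sparsity), and only the lightweight softmax readout is supervised, which keeps the decoder cheap enough for real-time use.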
Affiliation(s)
- Wei Gao: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Jiangrong Shen: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Yipeng Lin: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Kejun Wang: School of Software Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Zheng Lin: Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou 310009, China
- Huajin Tang: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaodong Chen: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China

16
Zhao B, Wang R, Zhu Z, Yang Q, Chen A. The computational rules of cross-modality suppression in the visual posterior sylvian area. iScience 2023; 26:106973. [PMID: 37378331 PMCID: PMC10291470 DOI: 10.1016/j.isci.2023.106973] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Revised: 03/13/2023] [Accepted: 05/23/2023] [Indexed: 06/29/2023] Open
Abstract
The macaque visual posterior sylvian area (VPS) contains neurons that respond selectively to heading direction in both visual and vestibular modalities, but how VPS neurons combine these two sensory signals is still unknown. In contrast to the subadditive characteristics of the medial superior temporal area (MSTd), responses in VPS were dominated by vestibular signals, approximating a winner-take-all competition. Conditional Fisher information analysis shows that the VPS neural population encodes information from distinct sensory modalities under large and small offset conditions, which differs from MSTd, whose neural population contains more information about visual stimuli in both conditions. However, the combined responses of single neurons in both areas can be well fit by weighted linear sums of the unimodal responses. Furthermore, a normalization model captured most vestibular-visual interaction characteristics for both VPS and MSTd, indicating that the divisive normalization mechanism is widespread in the cortex.
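The two model components mentioned above, a weighted linear sum of unimodal responses and divisive normalization, can be illustrated with hypothetical tuning curves. The tuning shapes, weights, and semisaturation constant `sigma_n` below are invented for illustration, not fitted values from the study.

```python
import numpy as np

headings = np.linspace(-180.0, 180.0, 73)

def tuning(pref, amp=30.0, sigma=40.0, base=5.0):
    """Gaussian heading tuning on the circle (hypothetical unimodal response)."""
    d = (headings - pref + 180.0) % 360.0 - 180.0
    return amp * np.exp(-d**2 / (2.0 * sigma**2)) + base

r_vest = tuning(0.0)    # vestibular response, preferred heading 0 deg
r_vis = tuning(30.0)    # visual response, preferred heading 30 deg

# Weighted linear sum of unimodal responses, with vestibular dominance
w_vest, w_vis = 0.9, 0.2
r_linear = w_vest * r_vest + w_vis * r_vis

# Divisive normalization: the weighted drive is divided by the total drive
sigma_n = 20.0
r_norm = 50.0 * (w_vest * r_vest + w_vis * r_vis) / (sigma_n + r_vest + r_vis)

# With strong vestibular weighting, the combined peak stays near the
# vestibular preference -- a winner-take-all-like outcome.
print(headings[np.argmax(r_linear)])  # prints 5.0
```

The point of the sketch is that a single linear-sum-plus-normalization scheme can produce either balanced integration or near winner-take-all behaviour, depending only on the relative weights.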
Affiliation(s)
- Bin Zhao: Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
- Rong Wang: Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
- Zhihua Zhu: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Qianli Yang: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Aihua Chen: Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China

17
Page WK, Sulon DW, Duffy CJ. Neural activity during monkey vehicular wayfinding. J Neurol Sci 2023; 446:120593. [PMID: 36827811 DOI: 10.1016/j.jns.2023.120593] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2022] [Revised: 02/06/2023] [Accepted: 02/14/2023] [Indexed: 02/19/2023]
Abstract
Navigation gets us from place to place, creating a path to arrive at a goal. We trained a monkey to steer a motorized cart in a large room, beginning at its trial-by-trial start location and ending at a trial-by-trial cued goal location. While the monkey steered its autonomously chosen path to its goal, we recorded neural activity simultaneously in both the hippocampus (HPC) and medial superior temporal (MST) cortex. Local field potentials (LFPs) in these sites show similar patterns of activity with the 15-30 Hz band highlighting specific room locations. In contrast, 30-100 Hz LFPs support a unified map of the behaviorally relevant start and goal locations. The single neuron responses (SNRs) do not substantially contribute to room or start-goal maps. Rather, the SNRs form a continuum from neurons that are most active when the monkey is moving on a path toward the goal, versus other neurons that are most active when the monkey deviates from paths toward the goal. Granger analyses suggest that HPC firing precedes MST firing during cueing at the trial start location, mainly mediated by off-path neurons. In contrast, MST precedes HPC firing during steering, mainly mediated by on-path neurons. Interactions between MST and HPC are mediated by the parallel activation of on-path and off-path neurons, selectively activated across stages of this wayfinding task.
Affiliation(s)
- William K Page: Dept. of Neurology, University of Rochester Medical Ctr., Rochester, NY 14642, USA
- David W Sulon: Dept. of Neurology, Penn State Health Medical Ctr., Hershey, PA 17036, USA
- Charles J Duffy: Dept. of Neurology, University of Rochester Medical Ctr., Rochester, NY 14642, USA; Dept. of Neurology, Penn State Health Medical Ctr., Hershey, PA 17036, USA; Dept. of Neurology, University Hospitals and Case Western Reserve University, Cleveland, OH 44122, USA

18
DiRisio GF, Ra Y, Qiu Y, Anzai A, DeAngelis GC. Neurons in Primate Area MSTd Signal Eye Movement Direction Inferred from Dynamic Perspective Cues in Optic Flow. J Neurosci 2023; 43:1888-1904. [PMID: 36725323 PMCID: PMC10027048 DOI: 10.1523/jneurosci.1885-22.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Revised: 01/18/2023] [Accepted: 01/24/2023] [Indexed: 02/03/2023] Open
Abstract
Smooth eye movements are common during natural viewing; we frequently rotate our eyes to track moving objects or to maintain fixation on an object during self-movement. Reliable information about smooth eye movements is crucial to various neural computations, such as estimating heading from optic flow or judging depth from motion parallax. While it is well established that extraretinal signals (e.g., efference copies of motor commands) carry critical information about eye velocity, the rotational optic flow field produced by eye rotations also carries valuable information. Although previous work has shown that dynamic perspective cues in optic flow can be used in computations that require estimates of eye velocity, it has remained unclear where and how the brain processes these visual cues and how they are integrated with extraretinal signals regarding eye rotation. We examined how neurons in the dorsal region of the medial superior temporal area (MSTd) of two male rhesus monkeys represent the direction of smooth pursuit eye movements based on both visual cues (dynamic perspective) and extraretinal signals. We find that most MSTd neurons have matched preferences for the direction of eye rotation based on visual and extraretinal signals. Moreover, neural responses to combinations of these signals are well predicted by a weighted linear summation model. These findings demonstrate a neural substrate for representing the velocity of smooth eye movements based on rotational optic flow and establish area MSTd as a key node for integrating visual and extraretinal signals into a more generalized representation of smooth eye movements. SIGNIFICANCE STATEMENT We frequently rotate our eyes to smoothly track objects of interest during self-motion. Information about eye velocity is crucial for a variety of computations performed by the brain, including depth perception and heading perception. Traditionally, information about eye rotation has been thought to arise mainly from extraretinal signals, such as efference copies of motor commands. Previous work shows that eye velocity can also be inferred from rotational optic flow that accompanies smooth eye movements, but the neural origins of these visual signals about eye rotation have remained unknown. We demonstrate that macaque neurons signal the direction of smooth eye rotation based on visual signals, and that they integrate both visual and extraretinal signals regarding eye rotation in a congruent fashion.
Affiliation(s)
- Grace F DiRisio: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627; Department of Neurobiology, University of Chicago, Chicago, Illinois 60637
- Yongsoo Ra: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627; Department of Neurobiology, Harvard Medical School, Boston, Massachusetts 02115
- Yinghui Qiu: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627; College of Veterinary Medicine, Cornell University, Ithaca, New York 14853-6401
- Akiyuki Anzai: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York 14627

19
Zeng F, Zaidel A, Chen A. Contrary neuronal recalibration in different multisensory cortical areas. eLife 2023; 12:82895. [PMID: 36877555 PMCID: PMC9988259 DOI: 10.7554/elife.82895] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 02/21/2023] [Indexed: 03/07/2023] Open
Abstract
The adult brain demonstrates remarkable multisensory plasticity by dynamically recalibrating itself based on information from multiple sensory sources. After a systematic visual-vestibular heading offset is experienced, the unisensory perceptual estimates for subsequently presented stimuli are shifted toward each other (in opposite directions) to reduce the conflict. The neural substrate of this recalibration is unknown. Here, we recorded single-neuron activity from the dorsal medial superior temporal (MSTd), parietoinsular vestibular cortex (PIVC), and ventral intraparietal (VIP) areas in three male rhesus macaques during this visual-vestibular recalibration. Both visual and vestibular neuronal tuning curves in MSTd shifted, each according to its respective cue's perceptual shift. Tuning of vestibular neurons in PIVC also shifted in the same direction as the vestibular perceptual shifts (these cells were not robustly tuned to the visual stimuli). By contrast, VIP neurons demonstrated a unique phenomenon: both vestibular and visual tuning shifted in accordance with the vestibular perceptual shifts, such that visual tuning shifted, surprisingly, contrary to the visual perceptual shifts. Therefore, while unsupervised recalibration (to reduce cue conflict) occurs in early multisensory cortices, higher-level VIP reflects only a global shift, in vestibular space.
Affiliation(s)
- Fu Zeng: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China
- Adam Zaidel: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Aihua Chen: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China

20
Gao W, Lin Y, Shen J, Han J, Song X, Lu Y, Zhan H, Li Q, Ge H, Lin Z, Shi W, Drugowitsch J, Tang H, Chen X. Diverse effects of gaze direction on heading perception in humans. Cereb Cortex 2023:7024719. [PMID: 36734278 DOI: 10.1093/cercor/bhac541] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Revised: 12/24/2022] [Accepted: 12/27/2022] [Indexed: 02/04/2023] Open
Abstract
Gaze changes can misalign the spatial reference frames encoding visual and vestibular signals in cortex, which may affect heading discrimination. Here, by systematically manipulating the eye-in-head and head-on-body positions to change subjects' gaze direction, heading discrimination was tested with visual, vestibular, and combined stimuli in a reaction-time task in which the reaction time was under the subjects' control. We found that gaze changes induced substantial biases in perceived heading and increased subjects' discrimination thresholds and reaction times in all stimulus conditions. For the visual stimulus, the gaze effects were induced by changing the eye-in-world position, and the perceived heading was biased in the direction opposite to the gaze. In contrast, the vestibular gaze effects were induced by changing the eye-in-head position, and the perceived heading was biased in the same direction as the gaze. Although the bias was reduced when the visual and vestibular stimuli were combined, integration of the two signals deviated substantially from the predictions of an extended diffusion model that accumulates evidence optimally over time and across sensory modalities. These findings reveal diverse gaze effects on heading discrimination and suggest that the transformation of spatial reference frames may underlie these effects.
Affiliation(s)
- Wei Gao: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Yipeng Lin: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Jiangrong Shen: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Jianing Han: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaoxiao Song: Department of Liberal Arts, School of Art Administration and Education, China Academy of Art, 218 Nanshan Road, Shangcheng District, Hangzhou 310002, China
- Yukun Lu: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Huijia Zhan: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Qianbing Li: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Haoting Ge: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Zheng Lin: Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou 310009, China
- Wenlei Shi: Center for the Study of the History of Chinese Language and Center for the Study of Language and Cognition, Zhejiang University, 866 Yuhangtang Road, Xihu District, Hangzhou 310058, China
- Jan Drugowitsch: Department of Neurobiology, Harvard Medical School, Longwood Avenue 220, Boston, MA 02116, United States
- Huajin Tang: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaodong Chen: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China

21
Horrocks EAB, Mareschal I, Saleem AB. Walking humans and running mice: perception and neural encoding of optic flow during self-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210450. [PMID: 36511417 PMCID: PMC9745880 DOI: 10.1098/rstb.2021.0450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Accepted: 08/30/2022] [Indexed: 12/15/2022] Open
Abstract
Locomotion produces full-field optic flow that often dominates the visual motion inputs to an observer. The perception of optic flow is in turn important for animals to guide their heading and interact with moving objects. Understanding how locomotion influences optic flow processing and perception is therefore essential to understand how animals successfully interact with their environment. Here, we review research investigating how perception and neural encoding of optic flow are altered during self-motion, focusing on locomotion. Self-motion has been found to influence estimation and sensitivity for optic flow speed and direction. Nonvisual self-motion signals also increase compensation for self-driven optic flow when parsing the visual motion of moving objects. The integration of visual and nonvisual self-motion signals largely follows principles of Bayesian inference and can improve the precision and accuracy of self-motion perception. The calibration of visual and nonvisual self-motion signals is dynamic, reflecting the changing visuomotor contingencies across different environmental contexts. Throughout this review, we consider experimental research using humans, non-human primates and mice. We highlight experimental challenges and opportunities afforded by each of these species and draw parallels between experimental findings. These findings reveal a profound influence of locomotion on optic flow processing and perception across species. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Edward A. B. Horrocks: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Isabelle Mareschal: School of Biological and Behavioural Sciences, Queen Mary, University of London, London E1 4NS, UK
- Aman B. Saleem: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK

22
Jeong W, Kim S, Park J, Lee J. Multivariate EEG activity reflects the Bayesian integration and the integrated Galilean relative velocity of sensory motion during sensorimotor behavior. Commun Biol 2023; 6:113. [PMID: 36709242 PMCID: PMC9884247 DOI: 10.1038/s42003-023-04481-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 01/12/2023] [Indexed: 01/29/2023] Open
Abstract
Humans integrate multiple sources of information for action-taking, using the reliability of each source to allocate weight to the data. This reliability-weighted information integration is a crucial property of Bayesian inference. In this study, participants were asked to perform a smooth pursuit eye movement task in which we independently manipulated the reliability of pursuit target motion and the direction-of-motion cue. Through an analysis of pursuit initiation and multivariate electroencephalography activity, we found neural and behavioral evidence of Bayesian information integration: more attraction toward the cue direction was generated when the target motion was weak and unreliable. Furthermore, using mathematical modeling, we found that the neural signature of Bayesian information integration had extra-retinal origins, although most of the multivariate electroencephalography activity patterns during pursuit were best correlated with the retinal velocity errors accumulated over time. Our results demonstrated neural implementation of Bayesian inference in human oculomotor behavior.
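The reliability-weighted integration at the heart of this study follows the standard Bayesian cue-combination rule for Gaussian likelihoods: each estimate is weighted by its inverse variance, so the combined percept is pulled toward the more reliable source. A minimal sketch (the direction and noise values are hypothetical, not the study's stimuli):

```python
import numpy as np

def integrate(mu_motion, sigma_motion, mu_cue, sigma_cue):
    """Inverse-variance (reliability-weighted) fusion of two direction
    estimates -- the Bayes-optimal combination for Gaussian likelihoods."""
    w_motion = 1.0 / sigma_motion**2
    w_cue = 1.0 / sigma_cue**2
    mu = (w_motion * mu_motion + w_cue * mu_cue) / (w_motion + w_cue)
    sigma = np.sqrt(1.0 / (w_motion + w_cue))
    return mu, sigma

# Reliable target motion: the percept stays near the retinal direction.
mu_hi, _ = integrate(0.0, 2.0, 20.0, 8.0)
# Weak, unreliable target motion: the percept is attracted toward the cue.
mu_lo, _ = integrate(0.0, 8.0, 20.0, 8.0)
print(round(mu_hi, 1), round(mu_lo, 1))  # prints 1.2 10.0
```

This reproduces the qualitative behavioral signature reported above: attraction toward the cue direction grows as the target-motion signal becomes less reliable.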
Affiliation(s)
- Woojae Jeong: Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea; Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, USA
- Seolmin Kim: Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- JeongJun Park: Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea; Division of Biology and Biomedical Sciences, Program in Neurosciences, Washington University in St. Louis, St. Louis, MO 63130, USA
- Joonyeol Lee: Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea; Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea

23
Alterations in Corticocortical Vestibular Network Functional Connectivity Are Associated with Decreased Balance Ability in Elderly Individuals with Mild Cognitive Impairment. Brain Sci 2022; 13:brainsci13010063. [PMID: 36672045 PMCID: PMC9856347 DOI: 10.3390/brainsci13010063] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2022] [Revised: 12/19/2022] [Accepted: 12/27/2022] [Indexed: 12/31/2022] Open
Abstract
The corticocortical vestibular network (CVN) plays an important role in maintaining balance and stability. To clarify the relationship between the CVN and the balance ability of patients with mild cognitive impairment (MCI), we recruited 30 community-dwelling MCI patients and matched them 1:1 by age and sex to 30 older adults with normal cognitive function. We evaluated balance ability and performed MRI scanning in both groups, analyzed functional connectivity within the CVN based on regions of interest, and then computed Pearson correlations between functional connectivity and Berg Balance Scale (BBS) scores. Compared with the control group, three functional connections (hMST_R−Premotor_R, PFcm_R−SMA_L, and hMST_L−VIP_R) were significantly decreased in the CVN of the MCI group (p < 0.05). Further correlation analysis showed a significant positive correlation between hMST_R−Premotor_R functional connectivity and BBS score (r = 0.364, p = 0.004). The decline in balance ability and increased fall risk in patients with MCI may be closely related to changes in the internal connectivity pattern of the corticocortical vestibular network.
24
Zhou Y, Mohan K, Freedman DJ. Abstract Encoding of Categorical Decisions in Medial Superior Temporal and Lateral Intraparietal Cortices. J Neurosci 2022; 42:9069-9081. [PMID: 36261285 PMCID: PMC9732825 DOI: 10.1523/jneurosci.0017-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 10/04/2022] [Accepted: 10/06/2022] [Indexed: 01/05/2023] Open
Abstract
Categorization is an essential cognitive and perceptual process for decision-making and recognition. The posterior parietal cortex, particularly the lateral intraparietal (LIP) area, has been suggested to transform visual feature encoding into abstract categorical representations. By contrast, areas closer to sensory input, such as the middle temporal (MT) area, encode stimulus features but not more abstract categorical information during categorization tasks. Here, we compare the contributions of the medial superior temporal (MST) and LIP areas in category computation by recording neuronal activity in both areas from two male rhesus macaques trained to perform a visual motion categorization task. MST is a core motion-processing region interconnected with MT and is often considered an intermediate processing stage between MT and LIP. We show that MST exhibits robust decision-correlated motion category encoding and working memory encoding similar to LIP, suggesting that MST plays a substantial role in cognitive computation, extending beyond its widely recognized role in visual motion processing.

SIGNIFICANCE STATEMENT: Categorization requires assigning incoming sensory stimuli into behaviorally relevant groups. Previous work found that parietal area LIP shows strong encoding of the learned category membership of visual motion stimuli, while visual area MT shows strong direction tuning but not category tuning during a motion direction categorization task. Here we show that the medial superior temporal (MST) area, a visual motion-processing region interconnected with both LIP and MT, shows strong visual category encoding similar to that observed in LIP. This suggests that MST plays a greater role in abstract cognitive functions, extending beyond its well-known role in visual motion processing.
Affiliation(s)
- Yang Zhou: Department of Neurobiology, The University of Chicago, Chicago, Illinois 60637; PKU-IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, People's Republic of China
- Krithika Mohan: Department of Neurobiology, The University of Chicago, Chicago, Illinois 60637
- David J Freedman: Department of Neurobiology, The University of Chicago, Chicago, Illinois 60637; The University of Chicago Neuroscience Institute, The University of Chicago, Chicago, Illinois 60637
25
Zhang J, Huang M, Gu Y, Chen A, Yu Y. Visual-Based Spatial Coordinate Dominates Probabilistic Multisensory Inference in Macaque MST-d Disparity Encoding. Brain Sci 2022; 12:1387. [PMID: 36291320] [PMCID: PMC9599195] [DOI: 10.3390/brainsci12101387]
Abstract
Numerous studies have demonstrated that animal brains accurately infer whether multisensory stimuli come from a common source or from separate sources. Previous work proposed that multisensory neurons in the dorsal medial superior temporal area (MST-d) serve as integration or separation encoders determined by the tuning-response ratio. However, it remains unclear whether MST-d neurons mainly take one sensory input as the spatial coordinate reference for carrying out multisensory integration or separation. Our analysis of macaque MST-d neuronal recordings shows that the preferred tuning response to visual input is generally larger than that to vestibular input. This may be crucial for establishing the base coordinate reference when the subject perceives motion-direction information from the two senses. By constructing a flexible Monte-Carlo probabilistic sampling (fMCS) model, we validate the hypothesis that visual and vestibular cues are more likely to be integrated into a visual-based coordinate frame than a vestibular one. Furthermore, the tuning gradient also affects the decision of whether the cues should be integrated. For the dominant modality, a steep response-tuning gradient of the corresponding neurons produces effective decisions, whereas for a subordinate modality a steep tuning gradient produces rigid decisions with a significant bias toward either integration or separation. This work proposes that tuning-response amplitude and tuning gradient jointly modulate which modality serves as the base coordinate for the reference frame and which modality's direction changes are decoded effectively.
Affiliation(s)
- Jiawei Zhang: Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Human Phenome Institute, Shanghai 200433, China
- Mingyi Huang: Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Human Phenome Institute, Shanghai 200433, China
- Yong Gu: Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Aihua Chen: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Yuguo Yu: Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Human Phenome Institute, Shanghai 200433, China
26
Wang G, Yang Y, Wang J, Hao Z, Luo X, Liu J. Dynamic changes of brain networks during standing balance control under visual conflict. Front Neurosci 2022; 16:1003996. [PMID: 36278015] [PMCID: PMC9581155] [DOI: 10.3389/fnins.2022.1003996]
Abstract
Standing balance control requires accurate tuning and combination of visual, vestibular, and proprioceptive inputs, and conflict among these sensory systems can induce postural instability and even falls. Although this phenomenon has been examined in many biomechanical and psychophysical studies, the effects of sensory conflict on brain networks and the underlying neural mechanisms remain unclear. Here, we combined a rotating platform and a virtual reality (VR) headset to control participants' physical and visual motion states, presenting them with incongruent (sensory conflict) or congruent (normal control) physical-visual stimuli. To investigate the effects of sensory conflict on stance stability and brain networks, we recorded and calculated the effective connectivity of source-level electroencephalogram (EEG) signals and the average velocity of the plantar center of pressure (COP) in healthy subjects (18 subjects: 10 males, 8 females). First, our results showed that sensory conflict had a detrimental effect on standing posture control [sensor F(1, 17) = 13.34, P = 0.0019], but this effect decreased over time [window*sensor F(2, 34) = 6.72, P = 0.0035]: humans show marked adaptation to sensory conflict. We also found that this adaptation was associated with changes in the cortical network. At stimulus onset, congruent and incongruent stimuli had similar effects on brain networks; in both cases there was a significant increase in information interaction centered on the frontal cortices (p < 0.05). Then, after a time window synchronized with the restoration of stance stability under conflict, the connectivity of large brain regions, including posterior parietal, visual, somatosensory, and motor cortices, was generally lower under sensory conflict than in controls (p < 0.05), while the influence of the superior temporal lobe on other cortices was significantly increased.
Overall, we speculate that a posterior parietal-centered cortical network may play a key role in integrating congruent sensory information. The dissociation of this network may reflect a flexible multisensory interaction strategy that is critical for human postural balance control in complex and changing environments, and the superior temporal lobe may play a key role in processing conflicting sensory information.
Affiliation(s)
- Guozheng Wang: College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Yi Yang: Department of Sports Science, College of Education, Zhejiang University, Hangzhou, China
- Jian Wang: Department of Sports Science, College of Education, Zhejiang University, Hangzhou, China
- Zengming Hao: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Xin Luo: Department of Sports Science, College of Education, Zhejiang University, Hangzhou, China
- Jun Liu (correspondence): College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
27
Causal contribution of optic flow signal in Macaque extrastriate visual cortex for roll perception. Nat Commun 2022; 13:5479. [PMID: 36123363] [PMCID: PMC9485245] [DOI: 10.1038/s41467-022-33245-5]
Abstract
Optic flow is a powerful cue for inferring self-motion status, which is critical for postural control, spatial orientation, locomotion, and navigation. In primates, neurons in extrastriate visual cortex (MSTd) are predominantly modulated by high-order optic flow patterns (e.g., spiral), yet a functional link to direct perception has been lacking. Here, we applied electrical microstimulation to selectively manipulate populations of MSTd neurons while macaques discriminated the direction of rotation around the line of sight (roll) or the direction of linear translation (heading), two tasks that are orthogonal in 3D spiral coordinates, using a four-alternative forced-choice paradigm. Microstimulation frequently biased the animals' roll perception toward the labeled lines encoded by the stimulated neurons, in contexts with either spiral or pure-rotation stimuli. Choice frequency was also altered between roll and translation flow patterns. Our results provide direct causal evidence that roll signals in MSTd, although often mixed with translation signals, can be extracted by downstream areas for the perception of rotation relative to the gravitational vertical.
28
Zhang J, Gu Y, Chen A, Yu Y. Unveiling Dynamic System Strategies for Multisensory Processing: From Neuronal Fixed-Criterion Integration to Population Bayesian Inference. Research (Wash D C) 2022; 2022:9787040. [PMID: 36072271] [PMCID: PMC9422331] [DOI: 10.34133/2022/9787040]
Abstract
Multisensory processing is of vital importance for survival in the external world. Brain circuits can both integrate and separate visual and vestibular senses to infer self-motion and the motion of other objects. However, it is largely debated how multisensory brain regions process such multisensory information and whether they follow the Bayesian strategy in this process. Here, we combined macaque physiological recordings in the dorsal medial superior temporal area (MST-d) with modeling of synaptically coupled multilayer continuous attractor neural networks (CANNs) to study the underlying neuronal circuit mechanisms. In contrast to previous theoretical studies that focused on unisensory direction preference, our analysis showed that synaptic coupling induced cooperation and competition in the multisensory circuit and caused single MST-d neurons to switch between sensory integration or separation modes based on the fixed-criterion causal strategy, which is determined by the synaptic coupling strength. Furthermore, the prior of sensory reliability was represented by pooling diversified criteria at the MST-d population level, and the Bayesian strategy was achieved in downstream neurons whose causal inference flexibly changed with the prior. The CANN model also showed that synaptic input balance is the dynamic origin of neuronal direction preference formation and further explained the misalignment between direction preference and inference observed in previous studies. This work provides a computational framework for a new brain-inspired algorithm underlying multisensory computation.
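The fixed-criterion causal strategy described here can be caricatured at the single-neuron level: integrate two cues by inverse-variance weighting when their discrepancy falls below a criterion, otherwise keep them separate. This sketch is illustrative only (the function name, the criterion value, and the choice to fall back on the visual cue are assumptions, not the authors' model code); in the paper the criterion's role is played by synaptic coupling strength:

```python
def fixed_criterion_estimate(x_vis, x_vest, var_vis, var_vest, criterion):
    # Integrate by inverse-variance weighting if the cue conflict is small,
    # otherwise segregate (here: report the visual estimate alone).
    if abs(x_vis - x_vest) <= criterion:
        w = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
        return "integrate", w * x_vis + (1.0 - w) * x_vest
    return "segregate", x_vis
```

Pooling many such units with diverse criteria, as the abstract describes, is what lets a downstream population approximate Bayesian causal inference.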
Affiliation(s)
- Jiawei Zhang: State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, Human Phenome Institute, Shanghai 200433, China
- Yong Gu: Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Aihua Chen: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Yuguo Yu: State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, Human Phenome Institute, Shanghai 200433, China
29
Layton OW, Fajen BR. Distributed encoding of curvilinear self-motion across spiral optic flow patterns. Sci Rep 2022; 12:13393. [PMID: 35927277] [PMCID: PMC9352735] [DOI: 10.1038/s41598-022-16371-4]
Abstract
Self-motion along linear paths without eye movements creates optic flow that radiates from the direction of travel (heading). Optic flow-sensitive neurons in primate brain area MSTd have been linked to linear heading perception, but the neural basis of more general curvilinear self-motion perception is unknown. The optic flow in this case is more complex and depends on the gaze direction and curvature of the path. We investigated the extent to which signals decoded from a neural model of MSTd predict the observer's curvilinear self-motion. Specifically, we considered the contributions of MSTd-like units that were tuned to radial, spiral, and concentric optic flow patterns in "spiral space". Self-motion estimates decoded from units tuned to the full set of spiral space patterns were substantially more accurate and precise than those decoded from units tuned to radial expansion. Decoding only from units tuned to spiral subtypes closely approximated the performance of the full model. Only the full decoding model could account for human judgments when path curvature and gaze covaried in self-motion stimuli. The most predictive units exhibited bias in center-of-motion tuning toward the periphery, consistent with neurophysiology and prior modeling. Together, findings support a distributed encoding of curvilinear self-motion across spiral space.
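Decoding self-motion parameters from a population of units tuned across "spiral space" can be reduced, in its simplest form, to a vector-average readout: each unit votes for its preferred pattern angle, weighted by its response. The sketch below is a toy illustration of that readout under assumed cosine tuning, not the paper's actual decoder:

```python
import math

def vector_average_decode(preferred, responses):
    # Each unit votes for its preferred angle (radians), weighted by its
    # response; the angle of the resultant vector is the decoded estimate.
    x = sum(r * math.cos(p) for p, r in zip(preferred, responses))
    y = sum(r * math.sin(p) for p, r in zip(preferred, responses))
    return math.atan2(y, x)

# Eight hypothetical units tiling spiral space (0 = expansion, pi/2 = rotation)
prefs = [i * 2 * math.pi / 8 for i in range(8)]
# Response hill centered on a spiral pattern at pi/4
resp = [math.cos(p - math.pi / 4) + 1.0 for p in prefs]
decoded = vector_average_decode(prefs, resp)
```

With units tiling the full space, the readout recovers intermediate (spiral) patterns that no single radial-tuned unit prefers, which is the intuition behind the paper's finding that spiral-tuned units improve curvilinear self-motion estimates.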
Affiliation(s)
- Oliver W Layton: Department of Computer Science, Colby College, Waterville, ME, USA; Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA
- Brett R Fajen: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA
30
Chen K, Beyeler M, Krichmar JL. Cortical Motion Perception Emerges from Dimensionality Reduction with Evolved Spike-Timing-Dependent Plasticity Rules. J Neurosci 2022; 42:5882-5898. [PMID: 35732492] [PMCID: PMC9337611] [DOI: 10.1523/jneurosci.0384-22.2022]
Abstract
The nervous system is under tight energy constraints and must represent information efficiently. This is particularly relevant in the dorsal part of the medial superior temporal area (MSTd) in primates, where neurons encode complex motion patterns to support a variety of behaviors. A sparse decomposition model based on a dimensionality reduction principle known as non-negative matrix factorization (NMF) was previously shown to account for a wide range of monkey MSTd visual response properties. This model resulted in sparse, parts-based representations that could be regarded as basis flow fields, a linear superposition of which accurately reconstructed the input stimuli. This model provided evidence that the seemingly complex response properties of MSTd may be a by-product of MSTd neurons performing dimensionality reduction on their input. However, an open question is how a neural circuit could carry out this function. In the current study, we propose a spiking neural network (SNN) model of MSTd based on evolved spike-timing-dependent plasticity and homeostatic synaptic scaling (STDP-H) learning rules. We demonstrate that the SNN model learns compressed and efficient representations of the input patterns similar to the patterns that emerge from NMF, resulting in MSTd-like receptive fields observed in monkeys. This SNN model suggests that STDP-H observed in the nervous system may be performing a similar function as NMF with sparsity constraints, providing a test bed for mechanistic theories of how MSTd may efficiently encode complex patterns of visual motion to support robust self-motion perception.

SIGNIFICANCE STATEMENT: The brain may use dimensionality reduction and sparse coding to efficiently represent stimuli under metabolic constraints. Neurons in monkey area MSTd respond to complex optic flow patterns resulting from self-motion. We developed a spiking neural network model showing that MSTd-like response properties can emerge from evolving the STDP-H parameters of the connections between the middle temporal area and MSTd. Simulated MSTd neurons formed a sparse, reduced population code capable of encoding perceptual variables important for self-motion perception. This model demonstrates that the complex neuronal responses observed in MSTd may emerge from efficient coding, and suggests that neurobiological plasticity like STDP-H may contribute to reducing the dimensionality of input stimuli and allow spiking neurons to learn sparse representations.
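The NMF decomposition the SNN is compared against can be sketched with the classic Lee-Seung multiplicative updates. This is a generic illustration of NMF itself, not the authors' model; the toy matrix below stands in for a stimuli-by-motion-responses data matrix:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    # Factorize non-negative V (m x n) as W @ H with W (m x k), H (k x n)
    # both non-negative, via Lee-Seung multiplicative updates.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy non-negative data matrix of rank 2 (third row = sum of first two)
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
W, H = nmf(V, k=2)
```

Because both factors stay non-negative, the rows of `H` behave like the "basis flow fields" the abstract mentions, and each stimulus is a non-negative (parts-based) combination of them.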
Affiliation(s)
- Michael Beyeler: Departments of Computer Science and Psychological & Brain Sciences, University of California, Santa Barbara, California 93106
- Jeffrey L Krichmar: Departments of Cognitive Sciences and Computer Science, University of California, Irvine, California 92697
31
Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022; 39:125-137. [PMID: 35821337] [PMCID: PMC9849545] [DOI: 10.1007/s12264-022-00916-8]
Abstract
Accurate self-motion perception, which is critical for organisms to survive, involves multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional views of the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future study.
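For two Gaussian cues, the "statistically Bayesian-optimal" combination referred to here reduces to inverse-variance weighting: the combined estimate weights each cue by its reliability, and the combined variance is lower than either single-cue variance. A minimal worked sketch (variable names and values are illustrative):

```python
def optimal_combination(mu_vis, var_vis, mu_vest, var_vest):
    # Reliability (inverse-variance) weighted average of two Gaussian cues;
    # the combined variance is smaller than either single-cue variance.
    w = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    mu = w * mu_vis + (1.0 - w) * mu_vest
    var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return mu, var
```

For example, equally reliable visual and vestibular heading estimates of 10 and 20 degrees combine to 15 degrees with half the variance of either cue alone, which is the behavioral signature these psychophysical studies test for.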
32
Maus N, Layton OW. Estimating heading from optic flow: Comparing deep learning network and human performance. Neural Netw 2022; 154:383-396. [DOI: 10.1016/j.neunet.2022.07.007]
33
Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022; 11:74971. [PMID: 35642599] [PMCID: PMC9159750] [DOI: 10.7554/elife.74971]
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
Affiliation(s)
- HyungGoo R Kim: Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki: Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
34
Steinmetz ST, Layton OW, Powell NV, Fajen BR. A Dynamic Efficient Sensory Encoding Approach to Adaptive Tuning in Neural Models of Optic Flow Processing. Front Comput Neurosci 2022; 16:844289. [PMID: 35431848] [PMCID: PMC9011806] [DOI: 10.3389/fncom.2022.844289]
Abstract
This paper introduces a self-tuning mechanism for capturing rapid adaptation to changing visual stimuli by a population of neurons. Building upon the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We implemented this mechanism in a neural model that produces human-like estimates of self-motion direction (i.e., heading) based on optic flow. The parameters of speed-sensitive units were dynamically tuned in accordance with efficient sensory encoding such that the network remained sensitive as the distribution of optic flow speeds varied. In two simulation experiments, we found that model performance with dynamic tuning yielded more accurate, shorter latency heading estimates compared to the model with static tuning. We conclude that dynamic efficient sensory encoding offers a plausible approach for capturing adaptation to varying visual environments in biological visual systems and neural models alike.
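The core idea of efficient sensory encoding, matching unit density to the recent stimulus distribution, can be caricatured by placing tuning-curve centers at evenly spaced quantiles of a sliding window of recently observed speeds. The function below is a simplified stand-in for the paper's continual update rule (the quantile placement and names are assumptions):

```python
def retune_centers(recent_speeds, n_units):
    # Place n_units tuning-curve centers at evenly spaced quantiles of the
    # recently observed speed distribution, so that units concentrate where
    # stimulus values are dense and stay sparse where values are rare.
    s = sorted(recent_speeds)
    n = len(s)
    return [s[int((i + 0.5) * n / n_units)] for i in range(n_units)]
```

Re-running this on each new window of detected flow speeds keeps the population sensitive as the speed distribution drifts, which is the mechanism the simulations credit for faster, more accurate heading estimates.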
Affiliation(s)
- Scott T. Steinmetz (correspondence): Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
- Oliver W. Layton: Computer Science Department, Colby College, Waterville, ME, United States
- Nathaniel V. Powell: Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
- Brett R. Fajen: Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
35
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated, a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
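At its core, the path integration discussed here is the accumulation of a velocity estimate into a position estimate. A bare-bones dead-reckoning sketch (the Kalman-style weighting of time-varying cue reliability that the review highlights is deliberately omitted):

```python
def path_integrate(velocity_samples, dt):
    # Accumulate 2-D velocity estimates (vx, vy) over time steps of length dt
    # into a running position estimate, starting from the origin.
    x = y = 0.0
    path = []
    for vx, vy in velocity_samples:
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path
```

Because errors in each velocity sample accumulate without bound, any noise in the integrated cue drifts the position estimate, which is why reliability-weighted fusion of visual and vestibular velocity signals matters over prolonged intervals.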
Affiliation(s)
- Jean-Paul Noel: Center for Neural Science, New York University, New York, NY 10003, USA
- Dora E Angelaki: Center for Neural Science, New York University, New York, NY 10003, USA; Tandon School of Engineering, New York University, New York, NY 11201, USA
36
Rapid cross-sensory adaptation of self-motion perception. Cortex 2022; 148:14-30. [DOI: 10.1016/j.cortex.2021.11.018]
37
Vestibular and active self-motion signals drive visual perception in binocular rivalry. iScience 2021; 24:103417. [PMID: 34877486] [PMCID: PMC8632839] [DOI: 10.1016/j.isci.2021.103417]
Abstract
Multisensory integration helps the brain build reliable models of the world and resolve ambiguities. Visual interactions with sound and touch are well established, but vestibular influences on vision are less well studied. Here, we test the vestibular influence on vision using horizontally opposed motions presented one to each eye, so that visual perception is unstable and alternates irregularly. Passive whole-body rotations in the yaw plane stabilized visual alternations, with perceived direction oscillating congruently with rotation (leftward motion during leftward rotation, and vice versa). This demonstrates that a purely vestibular signal can resolve ambiguous visual motion and determine visual perception. Active self-rotation following the same sinusoidal profile also entrained vision to the rotation cycle, more strongly and with a shorter time lag, likely because of efference copy and predictive internal models. Both experiments show that visual ambiguity provides an effective paradigm to reveal how vestibular and motor inputs can shape visual perception.
Highlights:
- Binocular rivalry between left/right motions is stabilized by congruent head movement
- Left/right head rotations entrain rivalry dynamics so the matching direction is perceived
- Active and passive rotations both drive rivalry dominance to match rotation direction
- Resolving ambiguous vision occurs in a broader vestibular and action-based context
38
Zaidel A, Laurens J, DeAngelis GC, Angelaki DE. Supervised Multisensory Calibration Signals Are Evident in VIP But Not MSTd. J Neurosci 2021; 41:10108-10119. [PMID: 34716232] [PMCID: PMC8660052] [DOI: 10.1523/jneurosci.0135-21.2021]
Abstract
Multisensory plasticity enables our senses to dynamically adapt to each other and to the external environment, a fundamental operation that our brain performs continuously. We searched for neural correlates of adult multisensory plasticity in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP) in two male rhesus macaques using a paradigm of supervised calibration. We report little plasticity in neural responses in the relatively low-level multisensory cortical area MSTd. In contrast, neural correlates of plasticity are found in the higher-level multisensory area VIP, an area with strong decision-related activity. Accordingly, we observed systematic shifts of VIP tuning curves, which were reflected in the choice-related component of the population response. This is the first demonstration of neuronal calibration, together with behavioral calibration, in single sessions. These results lay the foundation for understanding multisensory neural plasticity, applicable broadly to maintaining accuracy for sensorimotor tasks.

SIGNIFICANCE STATEMENT: Multisensory plasticity is a fundamental and continual function of the brain that enables our senses to adapt dynamically to each other and to the external environment. Yet, very little is known about the neuronal mechanisms of multisensory plasticity. In this study, we searched for neural correlates of adult multisensory plasticity in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP) using a paradigm of supervised calibration. We found little plasticity in neural responses in the relatively low-level multisensory cortical area MSTd. By contrast, neural correlates of plasticity were found in VIP, a higher-level multisensory area with strong decision-related activity. This is the first demonstration of neuronal calibration, together with behavioral calibration, in single sessions.
Affiliation(s)
- Adam Zaidel: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
- Jean Laurens: Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, Frankfurt 60528, Germany
- Gregory C DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Dora E Angelaki: Center for Neural Science and Tandon School of Engineering, New York University, New York, NY 10003, USA

39
Di Marco S, Sulpizio V, Bellagamba M, Fattori P, Galati G, Galletti C, Lappe M, Maltempo T, Pitzalis S. Multisensory integration in cortical regions responding to locomotion-related visual and somatomotor signals. Neuroimage 2021;244:118581. [PMID: 34543763] [DOI: 10.1016/j.neuroimage.2021.118581]
Abstract
During real-world locomotion, continuous changes in self-motion direction (i.e., heading) are needed to move along a path or avoid an obstacle. Control of heading changes during locomotion requires the integration of multiple signals (i.e., visual, somatomotor, vestibular). Recent fMRI studies have shown that both somatomotor areas (human PEc [hPEc], human PE [hPE], primary somatosensory cortex [S-I]) and egomotion visual regions (cingulate sulcus visual area [CSv], posterior cingulate area [pCi], posterior insular cortex [PIC]) respond to both leg movements and egomotion-compatible visual stimulation, suggesting a role in analyzing both the visual attributes of egomotion and somatomotor signals with the aim of guiding locomotion. However, whether these regions are able to integrate egomotion-related visual signals with somatomotor inputs coming from leg movements during heading changes remains an open question. Here we used a combined fMRI approach of individual functional localizers and task-evoked activity. In thirty subjects we first localized three egomotion areas (CSv, pCi, PIC) and three somatomotor regions (S-I, hPE, hPEc). We then tested their responses in a multisensory integration experiment combining visual and somatomotor signals relevant to locomotion in congruent or incongruent trials, using an fMR-adaptation paradigm to explore sensitivity to the repeated presentation of these bimodal stimuli in the six regions of interest. Results revealed that hPE, S-I and CSv showed an adaptation effect regardless of congruency, while PIC, pCi and hPEc were sensitive to congruency: PIC exhibited a preference for congruent compared with incongruent trials, whereas pCi and hPEc showed an adaptation effect only for congruent and only for incongruent trials, respectively. The sensitivity of PIC, pCi and hPEc to the congruency between visual (locomotion-compatible) cues and (leg-related) somatomotor inputs suggests that these regions are involved in multisensory integration processes, likely in order to guide/adjust leg movements during heading changes.
Affiliation(s)
- Sara Di Marco: Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Valentina Sulpizio: Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Martina Bellagamba: Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Patrizia Fattori: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Gaspare Galati: Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Claudio Galletti: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Markus Lappe: Institute for Psychology, University of Muenster, Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Teresa Maltempo: Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Sabrina Pitzalis: Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy

40
Zheng Q, Zhou L, Gu Y. Temporal synchrony effects of optic flow and vestibular inputs on multisensory heading perception. Cell Rep 2021;37:109999. [PMID: 34788608] [DOI: 10.1016/j.celrep.2021.109999]
Abstract
Precise heading perception requires integration of optic flow and vestibular cues, yet the two cues often carry distinct temporal dynamics that may confound the benefit of cue integration. Here, we varied the temporal offset between the two sensory inputs while macaques discriminated headings around straight ahead. We find that the best heading performance occurs not under the natural condition of synchronous inputs with zero offset, but rather when visual stimuli are artificially adjusted to lead vestibular stimuli by a few hundred milliseconds. This amount exactly matches the lag between the vestibular acceleration and visual speed signals as measured from single-unit activity in frontal and posterior parietal cortices. Artificially aligning the cues in these areas best facilitates integration, with some nonlinear gain modulation effects. These findings are consistent with predictions from a model in which the brain integrates optic flow speed with a faster vestibular acceleration signal to sense instantaneous heading direction during self-motion in the environment.
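The timing relationship at the heart of this model, a vestibular acceleration signal naturally leading a visual speed signal, can be illustrated with a toy simulation. The Gaussian velocity profile and its width below are illustrative assumptions, not the stimuli or analysis used in the study:

```python
import numpy as np

# Illustrative Gaussian velocity profile of a brief forward translation.
t = np.linspace(0.0, 2.0, 2001)          # time (s)
sigma = 0.25                              # profile width (s), assumed
velocity = np.exp(-(t - 1.0) ** 2 / (2 * sigma ** 2))
acceleration = np.gradient(velocity, t)   # what the otoliths sense

# A visual speed signal tracks velocity; a vestibular signal tracks
# acceleration, whose peak occurs one sigma earlier than the velocity peak.
t_visual = t[np.argmax(velocity)]
t_vestibular = t[np.argmax(acceleration)]
lead = t_visual - t_vestibular            # vestibular lead, in seconds
```

With sigma = 0.25 s the acceleration peak leads the velocity peak by about 250 ms, on the order of the few-hundred-millisecond offsets discussed in the abstract.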
Affiliation(s)
- Qihao Zheng: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Luxin Zhou: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yong Gu: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai 201210, China

41
Foster C, Sheng WA, Heed T, Ben Hamed S. The macaque ventral intraparietal area has expanded into three homologue human parietal areas. Prog Neurobiol 2021;209:102185. [PMID: 34775040] [DOI: 10.1016/j.pneurobio.2021.102185]
Abstract
The macaque ventral intraparietal area (VIP) in the fundus of the intraparietal sulcus has been implicated in a diverse range of sensorimotor and cognitive functions such as motion processing, multisensory integration, processing of head peripersonal space, defensive behavior, and numerosity coding. Here, we exhaustively review macaque VIP function, cytoarchitectonics, and anatomical connectivity, and integrate these findings with human studies that have attempted to identify a potential human VIP homologue. We show that human VIP research has consistently identified three, rather than one, bilateral parietal areas that each appear to subsume some, but not all, of the macaque area's functionality. Available evidence suggests that this human "VIP complex" has evolved as an expansion of the macaque area, but that some precursory specialization within macaque VIP has been previously overlooked. The three human areas are dominated, roughly, by coding the head or self in the environment, visual heading direction, and the peripersonal environment around the head, respectively. A unifying functional principle may be best described as prediction in space and time, linking VIP to state estimation as a key parietal sensorimotor function. VIP's expansive differentiation of head and self-related processing may have been key in the emergence of human bodily self-consciousness.
Affiliation(s)
- Celia Foster: Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany; Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Wei-An Sheng: Institut des Sciences Cognitives Marc Jeannerod, UMR5229, CNRS-University of Lyon 1, France
- Tobias Heed: Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany; Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany; Department of Psychology, University of Salzburg, Salzburg, Austria; Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Suliann Ben Hamed: Institut des Sciences Cognitives Marc Jeannerod, UMR5229, CNRS-University of Lyon 1, France

42
Modeling Physiological Sources of Heading Bias from Optic Flow. eNeuro 2021;8:ENEURO.0307-21.2021. [PMID: 34642226] [PMCID: PMC8607907] [DOI: 10.1523/eneuro.0307-21.2021]
Abstract
Human heading perception from optic flow is accurate for directions close to straight ahead, and systematic biases emerge in the periphery (Cuturi and Macneilage, 2013; Sun et al., 2020). In pursuit of the underlying neural mechanisms, the dorsal medial superior temporal (MSTd) area of the primate brain has been a focus because of its causal link with heading perception (Gu et al., 2012). Computational models generally explain heading sensitivity in individual MSTd neurons as a feedforward integration of motion signals from the medial temporal (MT) area that resemble full-field optic flow patterns consistent with the preferred heading direction (Britten, 2008; Mineault et al., 2012). In the present simulation study, we quantified, within the structure of this feedforward model, how physiological properties of MT and MSTd shape heading signals. We found that known physiological tuning characteristics generally supported the accuracy of heading estimation, but not always. A weak-to-moderate overrepresentation of peripheral headings in MSTd garnered the highest accuracy and precision among the models we tested. The model also performed well when noise corrupted high proportions of the optic flow vectors. Such a peripheral MSTd model performed well when units possessed a range of receptive field (RF) sizes and were strongly direction tuned. Physiological biases in MT direction tuning toward the radial direction also supported heading estimation, but the tendency for MT preferred speed and RF size to scale with eccentricity did not. Our findings help elucidate the extent to which different physiological tuning properties influence the accuracy and precision of neural heading signals.
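The feedforward template-matching scheme this abstract describes can be sketched minimally as follows. The radial-flow generator, the grid of candidate headings, the point sample, and the noise level are all illustrative assumptions and do not reproduce the study's model of MT or MSTd tuning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Image-plane sample points (deg) where local MT-like motion is measured.
pts = rng.uniform(-40, 40, size=(200, 2))

def translation_flow(foe, pts):
    """Radial expansion flow for forward translation: vectors point away
    from the focus of expansion (FOE), which marks the heading direction."""
    return pts - foe

def unit(v):
    n = np.linalg.norm(v, axis=1, keepdims=True)
    n[n == 0] = 1.0
    return v / n

# Candidate MSTd-like units: one radial flow template per candidate FOE.
foes = np.stack(np.meshgrid(np.linspace(-30, 30, 13),
                            np.linspace(-30, 30, 13)), -1).reshape(-1, 2)
templates = np.stack([unit(translation_flow(f, pts)) for f in foes])

def decode_heading(flow):
    """MSTd-like response: summed cosine match between observed flow
    directions and each unit's template; the estimated heading is the
    preferred FOE of the most active unit."""
    d = unit(flow)
    resp = np.einsum('kij,ij->k', templates, d)
    return foes[np.argmax(resp)]

true_foe = np.array([10.0, -5.0])
flow = translation_flow(true_foe, pts) + rng.normal(0, 2.0, size=pts.shape)
est = decode_heading(flow)
```

Each "unit" here is a single radial template; the study's questions about RF sizes, speed tuning, and over-representation of peripheral headings correspond to varying how `pts`, `templates`, and the `foes` grid are constructed.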
43
Jia J, Puyang Z, Wang Q, Jin X, Chen A. Dynamic encoding of saccade sequences in primate frontal eye field. J Physiol 2021;599:5061-5084. [PMID: 34555188] [DOI: 10.1113/jp282094]
Abstract
The frontal eye field (FEF) is a key part of the oculomotor system, with dominant responses to the direction of single saccades. However, whether and how FEF contributes to sequential saccades remains largely unknown. By training rhesus monkeys to perform saccade sequences, we found sequence-related activity in FEF neurons, whose selectivity to saccade direction undergoes dynamic changes during sequential vs. single saccades. These sequence-related activities are context-dependent, exhibiting different firing during memory- vs. visually guided sequences. When the monkey was performing the sequential saccade task, the thresholds of microstimulation to evoke saccades in FEF were increased, and the percentage of successfully induced saccades was significantly reduced compared with the fixation condition. Pharmacological inactivation of FEF impaired the monkeys' performance of previously learned sequential saccades, with different effects on the same action depending on its position within the sequence. These results reveal the context-dependent, sequence-specific dynamic encoding of saccades in FEF, and underscore the crucial role of FEF in the planning and execution of sequential saccades.

KEY POINTS:
- FEF neurons respond differently during sequential vs. single saccades.
- Sequence-related FEF activity is context-dependent.
- The microstimulation threshold in FEF was increased during the sequential task, but the evoked saccades did not alter the sequence structure.
- FEF inactivation severely impaired the performance of sequential saccades.
Affiliation(s)
- Jing Jia: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China
- Zhen Puyang: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China
- Qingjun Wang: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China
- Xin Jin: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China; Molecular Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA; Center for Motor Control and Disease, East China Normal University, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
- Aihua Chen: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China

44
Orban GA, Sepe A, Bonini L. Parietal maps of visual signals for bodily action planning. Brain Struct Funct 2021;226:2967-2988. [PMID: 34508272] [PMCID: PMC8541987] [DOI: 10.1007/s00429-021-02378-6]
Abstract
The posterior parietal cortex (PPC) has long been understood as a high-level integrative station for computing motor commands for the body based on sensory (i.e., mostly tactile and visual) input from the outside world. In the last decade, accumulating evidence has shown that the parietal areas not only extract the pragmatic features of manipulable objects, but also subserve sensorimotor processing of others’ actions. A paradigmatic case is that of the anterior intraparietal area (AIP), which encodes the identity of observed manipulative actions that afford potential motor actions the observer could perform in response to them. On these bases, we propose an AIP manipulative action-based template of the general planning functions of the PPC and review existing evidence supporting the extension of this model to other PPC regions and to a wider set of actions: defensive and locomotor actions. In our model, a hallmark of PPC functioning is the processing of information about the physical and social world to encode potential bodily actions appropriate for the current context. We further extend the model to actions performed with man-made objects (e.g., tools) and artifacts, because they become integral parts of the subject’s body schema and motor repertoire. Finally, we conclude that existing evidence supports a generally conserved neural circuitry that transforms integrated sensory signals into the variety of bodily actions that primates are capable of preparing and performing to interact with their physical and social world.
Affiliation(s)
- Guy A Orban: Department of Medicine and Surgery, University of Parma, via Volturno 39/E, 43125 Parma, Italy
- Alessia Sepe: Department of Medicine and Surgery, University of Parma, via Volturno 39/E, 43125 Parma, Italy
- Luca Bonini: Department of Medicine and Surgery, University of Parma, via Volturno 39/E, 43125 Parma, Italy

45
Abstract
We perceive our environment through multiple independent sources of sensory input. The brain is tasked with deciding whether multiple signals are produced by the same or different events (i.e., solve the problem of causal inference). Here, we train a neural network to solve causal inference by either combining or separating visual and vestibular inputs in order to estimate self- and scene motion. We find that the network recapitulates key neurophysiological (i.e., congruent and opposite neurons) and behavioral (e.g., reliability-based cue weighting) properties of biological systems. We show how congruent and opposite neurons support motion estimation and how the balance in activity between these subpopulations determines whether to combine or separate multisensory signals.

Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction (“congruent” neurons), while others prefer opposing directions (“opposite” neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
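The integrate-versus-separate computation that the network learns can also be written down explicitly as Bayesian causal inference, in the spirit of standard causal-inference models of cue combination. The noise levels, prior width, and prior probability of a common cause below are illustrative assumptions:

```python
import numpy as np

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def p_common(x_vis, x_vest, sig_vis=2.0, sig_vest=3.0, sig_prior=10.0, pc=0.5):
    """Posterior probability that the visual and vestibular measurements
    share one cause, by numerically marginalizing over source location."""
    s = np.linspace(-60, 60, 6001)
    ds = s[1] - s[0]
    prior = gauss(s, 0.0, sig_prior ** 2)
    # One cause: both measurements generated by the same source s.
    l1 = np.sum(gauss(x_vis, s, sig_vis ** 2) *
                gauss(x_vest, s, sig_vest ** 2) * prior) * ds
    # Two causes: each measurement generated by its own independent source.
    l2 = (np.sum(gauss(x_vis, s, sig_vis ** 2) * prior) * ds *
          np.sum(gauss(x_vest, s, sig_vest ** 2) * prior) * ds)
    return l1 * pc / (l1 * pc + l2 * (1 - pc))

def fused_estimate(x_vis, x_vest, sig_vis=2.0, sig_vest=3.0):
    """Reliability-weighted (inverse-variance) combination, used when the
    cues are attributed to a common cause."""
    w = (1 / sig_vis ** 2) / (1 / sig_vis ** 2 + 1 / sig_vest ** 2)
    return w * x_vis + (1 - w) * x_vest

# Consistent cues favor integration; discrepant cues favor separation.
pc_close = p_common(1.0, 2.0)
pc_far = p_common(1.0, 20.0)
```

Consistent cues yield a high posterior probability of a common cause, favoring the reliability-weighted fused estimate; discrepant cues drive that probability toward zero, in which case the cues are kept separate.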
46
Zhao B, Zhang Y, Chen A. Encoding of vestibular and optic flow cues to self-motion in the posterior superior temporal polysensory area. J Physiol 2021;599:3937-3954. [PMID: 34192812] [DOI: 10.1113/jp281913]
Abstract
KEY POINTS:
- Neurons in the posterior superior temporal polysensory area (STPp) showed significant directional selectivity in response to vestibular, optic flow and combined visual-vestibular stimuli.
- Compared with the dorsal medial superior temporal area, the visual latency was slower in STPp but the vestibular latency was faster.
- Heading preferences under combined stimulation in STPp were usually dominated by visual signals.
- Cross-modal enhancement was observed in STPp when both vestibular and visual cues were presented together at their heading preferences.

ABSTRACT: Human neuroimaging data suggest that the superior temporal polysensory area (STP) might be involved in vestibular-visual interaction during heading computations, but heading selectivity has not been examined in the macaque. Here, we investigated the convergence of optic flow and vestibular signals in macaque STP using a virtual-reality system and found that 6.3% of STP neurons showed multisensory responses, with visual and vestibular direction preferences either congruent or opposite in roughly equal proportion. The percentage of vestibular-tuned cells (18.3%) was much smaller than that of visual-tuned cells (30.4%) in STP, and vestibular tuning strength was usually weaker than visual tuning. The visual latency was significantly slower in STPp than in the dorsal medial superior temporal area (MSTd), but the vestibular latency was significantly faster than in MSTd. In the bimodal condition, the response of STP cells was dominated by visual signals: the visual heading preference was not affected by vestibular signals, but response amplitudes were modulated by vestibular signals in a subadditive way.
Affiliation(s)
- Bin Zhao: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China
- Yi Zhang: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China
- Aihua Chen: Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China

47
Cornelio P, Velasco C, Obrist M. Multisensory Integration as per Technological Advances: A Review. Front Neurosci 2021;15:652611. [PMID: 34239410] [PMCID: PMC8257956] [DOI: 10.3389/fnins.2021.652611]
Abstract
Multisensory integration research has allowed us to better understand how humans integrate sensory information to produce a unitary experience of the external world. However, this field is often challenged by the limited ability to deliver and control sensory stimuli, especially when going beyond audio-visual events and outside laboratory settings. In this review, we examine the scope and challenges of new technology in the study of multisensory integration in a world that is increasingly characterized as a fusion of physical and digital/virtual events. We discuss multisensory integration research through the lens of novel multisensory technologies and, thus, bring research in human-computer interaction, experimental psychology, and neuroscience closer together. Today, for instance, displays have become volumetric so that visual content is no longer limited to 2D screens, new haptic devices enable tactile stimulation without physical contact, olfactory interfaces provide users with smells precisely synchronized with events in virtual environments, and novel gustatory interfaces enable taste perception through levitating stimuli. These technological advances offer new ways to control and deliver sensory stimulation for multisensory integration research beyond traditional laboratory settings and open up new experimentations in naturally occurring events in everyday life experiences. Our review then summarizes these multisensory technologies and discusses initial insights to introduce a bridge between the disciplines in order to advance the study of multisensory integration.
Affiliation(s)
- Patricia Cornelio: Department of Computer Science, University College London, London, United Kingdom
- Carlos Velasco: Centre for Multisensory Marketing, Department of Marketing, BI Norwegian Business School, Oslo, Norway
- Marianna Obrist: Department of Computer Science, University College London, London, United Kingdom

48
Churan J, Kaminiarz A, Schwenk JCB, Bremmer F. Action-dependent processing of self-motion in parietal cortex of macaque monkeys. J Neurophysiol 2021;125:2432-2443. [PMID: 34010579] [DOI: 10.1152/jn.00049.2021]
Abstract
Successful interaction with the environment requires the dissociation of self-induced from externally induced sensory stimulation. Temporal proximity of action and effect is often used as an indicator of whether an observed event should be interpreted as a result of one's own actions. We tested how the delay between an action (press of a touch bar) and an effect (onset of simulated self-motion) influences the processing of visually simulated self-motion in the ventral intraparietal area (VIP) of macaque monkeys. We found that a delay between the action and the start of the self-motion stimulus led to a rise of activity above baseline before motion onset in a subpopulation of 21% of the investigated neurons. In the responses to the stimulus, we found significantly lower sustained activity when the press of the touch bar and the motion onset were contiguous than when the motion onset was delayed. We speculate that this weak inhibitory effect might be part of a mechanism that sharpens the tuning of VIP neurons during self-induced motion and thus has the potential to increase the precision of the heading information required to adjust the direction of self-motion in everyday navigational tasks.

NEW & NOTEWORTHY: Neurons in macaque ventral intraparietal area (VIP) respond to sensory stimulation related to self-motion, e.g. visual optic flow. Here, we found that self-motion-induced activation depends on the sense of agency, i.e., it differed when optic flow was perceived as self- or externally induced. This demonstrates that area VIP is well suited for the study of the interplay between active behavior and sensory processing during self-motion.
Affiliation(s)
- Jan Churan: Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Marburg, Germany
- Andre Kaminiarz: Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Marburg, Germany
- Jakob C B Schwenk: Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Marburg, Germany
- Frank Bremmer: Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Marburg, Germany

49
Liu B, Tian Q, Gu Y. Robust vestibular self-motion signals in macaque posterior cingulate region. eLife 2021;10:e64569. [PMID: 33827753] [PMCID: PMC8032402] [DOI: 10.7554/elife.64569]
Abstract
Self-motion signals, distributed ubiquitously across the parietal-temporal lobes, propagate to the limbic hippocampal system for vector-based navigation via hubs including the posterior cingulate cortex (PCC) and retrosplenial cortex (RSC). Although numerous studies have indicated that posterior cingulate areas are involved in spatial tasks, it is unclear how their neurons represent self-motion signals. Providing translation and rotation stimuli to macaques on a 6-degree-of-freedom motion platform, we discovered robust vestibular responses in PCC. A combined three-dimensional spatiotemporal model captured the data well and revealed multiple temporal components, including velocity, acceleration, jerk, and position. Compared with PCC, RSC contained moderate vestibular temporal modulations and lacked significant spatial tuning. Visual self-motion signals were much weaker than vestibular signals in both regions. We conclude that the macaque posterior cingulate region carries vestibular-dominant self-motion signals with plentiful temporal components that could be useful for path integration.
Affiliation(s)
- Bingyu Liu: CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Qingyang Tian: CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu: CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China

50
Dynamics of Heading and Choice-Related Signals in the Parieto-Insular Vestibular Cortex of Macaque Monkeys. J Neurosci 2021;41:3254-3265. [PMID: 33622780] [DOI: 10.1523/jneurosci.2275-20.2021]
Abstract
Perceptual decision-making is increasingly understood to involve an interaction between bottom-up sensory-driven signals and top-down choice-driven signals, but how these signals interact to mediate perception is not well understood. The parieto-insular vestibular cortex (PIVC) is an area with prominent vestibular responsiveness, and previous work has shown that inactivating PIVC impairs vestibular heading judgments. To investigate the nature of PIVC's contribution to heading perception, we recorded extracellularly from PIVC neurons in two male rhesus macaques during a heading discrimination task, and compared findings with data from previous studies of the dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas using identical stimuli. By computing partial correlations between neural responses, heading, and choice, we find that PIVC activity reflects a dynamically changing combination of sensory and choice signals. In addition, the sensory and choice signals are more balanced in PIVC, in contrast to the sensory dominance in MSTd and choice dominance in VIP. Interestingly, heading and choice signals in PIVC are negatively correlated during the middle portion of the stimulus epoch, reflecting a mismatch in the polarity of heading and choice signals. We anticipate that these results will help unravel the mechanisms of interaction between bottom-up sensory signals and top-down choice signals in perceptual decision-making, leading to more comprehensive models of self-motion perception.

SIGNIFICANCE STATEMENT: Vestibular information is important for our perception of self-motion, and various cortical regions in primates show vestibular heading selectivity. Inactivation of the macaque vestibular cortex substantially impairs the precision of vestibular heading discrimination, more so than inactivation of other multisensory areas. Here, we record for the first time from the vestibular cortex while monkeys perform a forced-choice heading discrimination task, and we compare results with data collected previously from other multisensory cortical areas. We find that vestibular cortex activity reflects a dynamically changing combination of sensory and choice signals, with both similarities and notable differences relative to other multisensory areas.
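The partial-correlation logic used here to disentangle sensory from choice signals can be sketched on synthetic trials. The generative weights, noise levels, and trial count below are invented for illustration and do not reproduce the study's recordings or analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic trials: a signed heading, a binary choice driven by that
# heading plus decision noise, and a firing rate reflecting both.
heading = rng.normal(0.0, 4.0, n)                    # deg from straight ahead
choice = np.sign(heading + rng.normal(0.0, 4.0, n))  # -1 / +1
rate = 0.5 * heading + 1.0 * choice + rng.normal(0.0, 2.0, n)

def partial_corr(x, y, z):
    """Correlation between x and y after linearly regressing out z."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

r_heading = partial_corr(rate, heading, choice)  # sensory signal, choice held fixed
r_choice = partial_corr(rate, choice, heading)   # choice signal, heading held fixed
```

Because heading and choice are themselves correlated, raw correlations mix the two contributions; partialling out one variable isolates the unique relation of the other to the firing rate.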