1. Polat L, Harpaz T, Zaidel A. Rats rely on airflow cues for self-motion perception. Curr Biol 2024; 34:4248-4260.e5. PMID: 39214088. DOI: 10.1016/j.cub.2024.08.001.
Abstract
Self-motion perception is a vital skill for all species. It is an inherently multisensory process that combines inertial (body-based) and relative (with respect to the environment) motion cues. Although self-motion perception has been studied extensively in human and non-human primates, there has been no paradigm to test it in rodents using both inertial and relative self-motion cues. We developed a novel rodent motion simulator using two synchronized robotic arms to generate inertial, relative, or combined (inertial and relative) cues of self-motion. Eight rats were trained to perform a heading discrimination task similar to the popular primate paradigm. Strikingly, the rats relied heavily on airflow for relative self-motion perception, with little contribution from the (limited) optic flow cues provided; performance in the dark was almost as good. Relative self-motion (airflow) was perceived with greater reliability than inertial self-motion. Disrupting airflow, using a fan or windshield, impaired relative, but not inertial, self-motion perception; the whiskers, however, were not needed for this function. Lastly, the rats integrated relative and inertial self-motion cues in a reliability-based (Bayesian-like) manner. These results implicate airflow as an important cue for self-motion perception in rats and provide a new domain to investigate the neural bases of self-motion perception and multisensory processing in awake behaving rodents.
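The reliability-based (Bayesian-like) integration described here follows the standard inverse-variance weighting rule for two independent Gaussian cue estimates. A minimal sketch, with hypothetical cue means and SDs (the numbers are illustrative, not the paper's data):

```python
def integrate_cues(mu_a, sigma_a, mu_b, sigma_b):
    # Inverse-variance (reliability) weights for two independent Gaussian cues
    w_a = 1.0 / sigma_a ** 2
    w_b = 1.0 / sigma_b ** 2
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    sigma = (w_a + w_b) ** -0.5  # combined SD: never larger than either cue alone
    return mu, sigma

# Hypothetical numbers: a reliable relative (airflow) cue and a noisier inertial cue
mu, sigma = integrate_cues(mu_a=2.0, sigma_a=1.0, mu_b=6.0, sigma_b=2.0)
```

Because the combined variance is below that of either cue alone, this scheme predicts better bimodal than unimodal discrimination thresholds, which is the behavioral signature tested in such paradigms.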
Affiliation(s)
- Lior Polat
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
- Tamar Harpaz
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
2. Rubinstein JF, Singh M, Kowler E. Bayesian approaches to smooth pursuit of random dot kinematograms: effects of varying RDK noise and the predictability of RDK direction. J Neurophysiol 2024; 131:394-416. PMID: 38149327. DOI: 10.1152/jn.00116.2023.
Abstract
Smooth pursuit eye movements respond on the basis of both immediate and anticipated target motion, where anticipations may be derived from either memory or perceptual cues. To study the combined influence of immediate sensory motion and anticipation, subjects pursued clear or noisy random dot kinematograms (RDKs) whose mean directions were chosen from Gaussian distributions with SD = 10° (narrow prior) or 45° (wide prior). Pursuit directions were consistent with Bayesian theory in that transitions over time from dependence on the prior to near-total dependence on immediate sensory motion (the likelihood) took longer with the noisier RDKs and with the narrower, more reliable prior. Results were fit to Bayesian models in which parameters representing the variability of the likelihood either were or were not constrained to be the same for both priors. The unconstrained model provided a statistically better fit, with the influence of the prior in the constrained model smaller than predicted from strict reliability-based weighting of prior and likelihood. Factors that may have contributed to this outcome include prior variability different from nominal values, low-level sensorimotor learning with the narrow prior, or departures of pursuit from strict adherence to reliability-based weighting. Although modifications of, or alternatives to, the normative Bayesian model will be required, these results, along with previous studies, suggest that Bayesian approaches are a promising framework for understanding how pursuit combines immediate sensory motion, past history, and informative perceptual cues to accurately track the target motion that is most likely to occur in the immediate future.

NEW & NOTEWORTHY: Smooth pursuit eye movements respond on the basis of anticipated, as well as immediate, target motions. Bayesian models using reliability-based weighting of previous (prior) and immediate target motions (likelihood) accounted for many, but not all, aspects of pursuit of clear and noisy random dot kinematograms with different levels of predictability. Bayesian approaches may solve the long-standing problem of how pursuit combines immediate sensory motion and anticipation of future motion to configure an effective response.
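The transition from prior-driven to sensory-driven pursuit can be sketched with conjugate Gaussian updating: averaging more motion frames shrinks the likelihood variance, so weight shifts from the prior to the immediate sensory estimate, and a noisier RDK slows that transition. All parameter values below are illustrative, not fitted values from the study.

```python
def weight_on_prior(n_frames, sigma_sensory, sigma_prior):
    # After averaging n noisy motion frames, the likelihood variance is
    # sigma_sensory^2 / n; the posterior weight on the prior is the prior's
    # share of total precision (inverse variance).
    var_like = sigma_sensory ** 2 / n_frames
    w_prior = (1.0 / sigma_prior ** 2) / (1.0 / sigma_prior ** 2 + 1.0 / var_like)
    return w_prior

early = weight_on_prior(1, 20.0, 10.0)        # pursuit onset: prior dominates
late = weight_on_prior(50, 20.0, 10.0)        # later: sensory motion dominates
late_noisy = weight_on_prior(50, 40.0, 10.0)  # noisier RDK: transition takes longer
```

Under these hypothetical numbers the prior's weight falls from 0.8 at onset toward zero as frames accumulate, and at any fixed time the noisier RDK retains more prior influence, matching the qualitative pattern reported.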
Affiliation(s)
- Jason F Rubinstein
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Manish Singh
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
3. Zhang WH. Decentralized Neural Circuits of Multisensory Information Integration in the Brain. Adv Exp Med Biol 2024; 1437:1-21. PMID: 38270850. DOI: 10.1007/978-981-99-7611-9_1.
Abstract
The brain combines multisensory inputs to obtain a complete and reliable description of the world. Recent experiments suggest that several interconnected multisensory brain areas are simultaneously involved in integrating multisensory information, but how these mutually connected areas achieve integration has been unknown. To answer this question, we used biologically plausible neural circuit models to develop a decentralized system for information integration comprising multiple interconnected multisensory brain areas. Through an example of integrating visual and vestibular cues to infer heading direction, we show that such a decentralized system is consistent with experimental observations. In particular, we demonstrate that the decentralized system can optimally integrate information by implementing sampling-based Bayesian inference. The Poisson variability of spike generation provides appropriate variability to drive sampling, and the interconnections between multisensory areas store the correlation prior between multisensory stimuli. The decentralized system predicts that optimally integrated information emerges locally from the dynamics of communication between brain areas, shedding new light on the interpretation of the connectivity between multisensory brain areas.
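A crude, self-contained illustration of the sampling idea (not the author's circuit model): drawing candidate headings and weighting them by the product of visual and vestibular likelihoods recovers the analytic reliability-weighted estimate, assuming a flat prior and hypothetical cue parameters.

```python
import math
import random

random.seed(0)

def sampled_heading(mu_vis, s_vis, mu_ves, s_ves, n=200_000):
    # Importance-sampling sketch: weight headings drawn from a broad proposal
    # by the product of the two Gaussian cue likelihoods (the posterior under
    # a flat prior), then return the weighted mean.
    total_w = total_wx = 0.0
    for _ in range(n):
        x = random.uniform(-30.0, 30.0)  # candidate heading, degrees
        w = math.exp(-0.5 * ((x - mu_vis) / s_vis) ** 2) \
            * math.exp(-0.5 * ((x - mu_ves) / s_ves) ** 2)
        total_w += w
        total_wx += w * x
    return total_wx / total_w

est = sampled_heading(2.0, 1.0, 6.0, 2.0)
# for these numbers, the analytic reliability-weighted mean is 2.8 degrees
```

The point of the sketch is only that stochastic sampling, given enough samples, converges on the same optimal estimate as explicit inverse-variance weighting; the chapter's claim is that neural variability can play the role of the sampler.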
Affiliation(s)
- Wen-Hao Zhang
- Lyda Hill Department of Bioinformatics and O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, TX, USA
4. Zheng Q, Gu Y. From Multisensory Integration to Multisensory Decision-Making. Adv Exp Med Biol 2024; 1437:23-35. PMID: 38270851. DOI: 10.1007/978-981-99-7611-9_2.
Abstract
Organisms live in a dynamic environment in which sensory information from multiple sources is ever changing. A conceptually complex task for organisms is to accumulate evidence across sensory modalities and over time, a process known as multisensory decision-making. This concept is relatively new, in that previous research was largely conducted in two parallel disciplines: much effort went either into sensory integration across modalities, using activity summed over a duration of time, or into decision-making with only one sensory modality evolving over time. Recently, a few neurophysiological studies have emerged that examine how information from different sensory modalities is processed, accumulated, and integrated over time in decision-related areas such as the parietal or frontal lobes of mammals. In this review, we summarize and comment on these studies, which combine the two long-standing parallel fields of multisensory integration and decision-making, and show how the new findings provide a more complete understanding of the neural mechanisms mediating multisensory information processing.
Affiliation(s)
- Qihao Zheng
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Yong Gu
- Systems Neuroscience, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
5. Lin R, Zeng F, Wang Q, Chen A. Cross-Modal Plasticity during Self-Motion Perception. Brain Sci 2023; 13:1504. PMID: 38002465. PMCID: PMC10669852. DOI: 10.3390/brainsci13111504.
Abstract
To maintain stable and coherent perception in an ever-changing environment, the brain needs to continuously and dynamically calibrate information from multiple sensory sources, using sensory and non-sensory information in a flexible manner. Here, we review how vestibular and visual signals are recalibrated during self-motion perception. We illustrate two different types of recalibration: a long-term cross-modal (visual-vestibular) recalibration, concerning how multisensory cues recalibrate over time in response to a constant cue discrepancy, and a rapid cross-modal (visual-vestibular) recalibration, concerning how recent prior stimuli and choices differentially affect subsequent self-motion decisions. In addition, we highlight the neural substrates of long-term visual-vestibular recalibration, with profound differences observed in neuronal recalibration across multisensory cortical areas. We suggest that multisensory recalibration is a complex process in the brain, is modulated by many factors, and requires the coordination of many distinct cortical areas. We hope this review will shed some light on research into the neural circuits of visual-vestibular recalibration and help develop a more generalized theory for cross-modal plasticity.
Affiliation(s)
- Rushi Lin
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Fu Zeng
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Qingjun Wang
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai 200122, China
6. Zeng Z, Zhang C, Gu Y. Visuo-vestibular heading perception: a model system to study multi-sensory decision making. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220334. PMID: 37545303. PMCID: PMC10404926. DOI: 10.1098/rstb.2022.0334.
Abstract
Integrating noisy signals across time as well as across sensory modalities, a process termed multi-sensory decision making (MSDM), is an essential strategy for making more accurate and sensitive decisions in complex environments. Although this field is just emerging, recent work from different perspectives, including computational theory, psychophysical behaviour and neurophysiology, has begun to shed new light on MSDM. In this review, we focus on MSDM using the model system of visuo-vestibular heading. Combining well-controlled behavioural paradigms on virtual-reality systems, single-unit recordings, causal manipulations and computational theory based on spiking activity, recent progress reveals that vestibular signals contain complex temporal dynamics in many brain regions, including unisensory, multi-sensory and sensory-motor association areas. This poses a challenge for the brain when integrating cues across time and with other sensory modalities, such as optic flow, which mainly carries a motion velocity signal. In addition, new evidence from higher-level decision-related areas, mostly in the posterior and frontal/prefrontal regions, helps revise conventional views of how signals from different sensory modalities are processed, converged, and accumulated moment by moment through neural circuits to form a unified, optimal perceptual decision. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Zhao Zeng
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Ce Zhang
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
7. Liu B, Shan J, Gu Y. Temporal and spatial properties of vestibular signals for perception of self-motion. Front Neurol 2023; 14:1266513. PMID: 37780704. PMCID: PMC10534010. DOI: 10.3389/fneur.2023.1266513.
Abstract
It is well recognized that the vestibular system is involved in numerous important cognitive functions, including self-motion perception, spatial orientation, locomotion, and vector-based navigation, in addition to basic reflexes such as oculomotor or body postural control. Consistent with this rationale, vestibular signals exist broadly in the brain, including several regions of the cerebral cortex, potentially allowing tight coordination with other sensory systems to improve the accuracy and precision of perception or action during self-motion. Recent neurophysiological studies in animal models at single-cell resolution indicate that vestibular signals exhibit complex spatiotemporal dynamics, producing challenges in identifying their exact functions and how they are integrated with signals from other modalities. For example, vestibular and optic flow signals could be congruent or incongruent in their spatial tuning functions, reference frames, and temporal dynamics. Comprehensive studies, including behavioral tasks, neural recording across sensory and sensory-motor association areas, and causal link manipulations, have provided some insights into the neural mechanisms underlying multisensory self-motion perception.
Affiliation(s)
- Bingyu Liu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Jiayu Shan
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
8. Coen P, Sit TPH, Wells MJ, Carandini M, Harris KD. Mouse frontal cortex mediates additive multisensory decisions. Neuron 2023; 111:2432-2447.e13. PMID: 37295419. PMCID: PMC10957398. DOI: 10.1016/j.neuron.2023.05.008.
Abstract
The brain can combine auditory and visual information to localize objects. However, the cortical substrates underlying audiovisual integration remain uncertain. Here, we show that mouse frontal cortex combines auditory and visual evidence; that this combination is additive, mirroring behavior; and that it evolves with learning. We trained mice in an audiovisual localization task. Inactivating frontal cortex impaired responses to either sensory modality, while inactivating visual or parietal cortex affected only visual stimuli. Recordings from >14,000 neurons indicated that after task learning, activity in the anterior part of frontal area MOs (secondary motor cortex) additively encodes visual and auditory signals, consistent with the mice's behavioral strategy. An accumulator model applied to these sensory representations reproduced the observed choices and reaction times. These results suggest that frontal cortex adapts through learning to combine evidence across sensory cortices, providing a signal that is transformed into a binary decision by a downstream accumulator.
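The accumulator idea can be sketched as a one-dimensional drift-diffusion process in which visual and auditory evidence add linearly into a single decision variable, as the additive coding reported here suggests. Drift rates, noise level, and bound below are hypothetical, not fitted model parameters.

```python
import random

random.seed(1)

def accumulate(visual_drift, auditory_drift, noise_sd=1.0, bound=30.0,
               max_steps=10_000):
    # Additive evidence: the two modality drifts sum into one decision
    # variable that diffuses until it hits a positive or negative bound.
    # Returns (choice, reaction time in steps); choice 0 means no decision.
    dv = 0.0
    for t in range(1, max_steps + 1):
        dv += visual_drift + auditory_drift + random.gauss(0.0, noise_sd)
        if abs(dv) >= bound:
            return (1 if dv > 0 else -1), t
    return 0, max_steps

choice, rt = accumulate(0.3, 0.2)  # congruent cues: strong combined drift
```

With congruent cues the combined drift is larger, so the bound is reached sooner, which is the qualitative reaction-time benefit such accumulator models reproduce.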
Affiliation(s)
- Philip Coen
- UCL Queen Square Institute of Neurology, University College London, London, UK; UCL Institute of Ophthalmology, University College London, London, UK
- Timothy P H Sit
- Sainsbury-Wellcome Center, University College London, London, UK
- Miles J Wells
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Matteo Carandini
- UCL Institute of Ophthalmology, University College London, London, UK
- Kenneth D Harris
- UCL Queen Square Institute of Neurology, University College London, London, UK
9. Zhao B, Wang R, Zhu Z, Yang Q, Chen A. The computational rules of cross-modality suppression in the visual posterior sylvian area. iScience 2023; 26:106973. PMID: 37378331. PMCID: PMC10291470. DOI: 10.1016/j.isci.2023.106973.
Abstract
The macaque visual posterior sylvian area (VPS) contains neurons that respond selectively to heading direction in both visual and vestibular modalities, but how VPS neurons combine these two sensory signals has been unknown. In contrast to the subadditive characteristics of the medial superior temporal area (MSTd), responses in VPS were dominated by vestibular signals, in an approximately winner-take-all competition. Conditional Fisher information analysis shows that the VPS neural population encodes information from distinct sensory modalities under large and small offset conditions, which differs from MSTd, whose neural population contains more information about visual stimuli in both conditions. However, the combined responses of single neurons in both areas can be well fit by weighted linear sums of unimodal responses. Furthermore, a normalization model captured most vestibular-visual interaction characteristics for both VPS and MSTd, indicating that the divisive normalization mechanism is widespread in cortex.
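A toy version of the normalization idea (a simplification of the divisive normalization model, with hypothetical drives and weights, not the paper's fitted model) reproduces both regimes: balanced modality weights give MSTd-like subadditive combination, while a vestibular-dominant weight gives VPS-like cross-modality suppression, where adding a visual stimulus pushes the response below the vestibular-alone level.

```python
def combined_response(r_ves, r_vis, w_ves, w_vis, sigma=1.0):
    # Weighted linear sum of unimodal drives, divided by the summed
    # drive plus a semi-saturation constant -- a minimal divisive-
    # normalization sketch.
    return (w_ves * r_ves + w_vis * r_vis) / (sigma + r_ves + r_vis)

# MSTd-like balanced weights: bimodal response is subadditive
mstd_bi = combined_response(10.0, 10.0, w_ves=1.0, w_vis=1.0)
mstd_ves = combined_response(10.0, 0.0, w_ves=1.0, w_vis=1.0)
mstd_vis = combined_response(0.0, 10.0, w_ves=1.0, w_vis=1.0)

# VPS-like vestibular dominance: the visual drive still feeds the
# normalization pool, so it suppresses the bimodal response below
# the vestibular-alone response (cross-modality suppression)
vps_bi = combined_response(10.0, 10.0, w_ves=1.0, w_vis=0.1)
vps_ves = combined_response(10.0, 0.0, w_ves=1.0, w_vis=0.1)
```

The design point is that one mechanism covers both areas: only the modality weights differ, while the shared normalization pool produces either subadditivity or outright suppression.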
Affiliation(s)
- Bin Zhao
- Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
- Rong Wang
- Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
- Zhihua Zhu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Qianli Yang
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
10. Page WK, Sulon DW, Duffy CJ. Neural activity during monkey vehicular wayfinding. J Neurol Sci 2023; 446:120593. PMID: 36827811. DOI: 10.1016/j.jns.2023.120593.
Abstract
Navigation gets us from place to place, creating a path to arrive at a goal. We trained a monkey to steer a motorized cart in a large room, beginning at its trial-by-trial start location and ending at a trial-by-trial cued goal location. While the monkey steered its autonomously chosen path to its goal, we recorded neural activity simultaneously in both the hippocampus (HPC) and medial superior temporal (MST) cortex. Local field potentials (LFPs) in these sites show similar patterns of activity, with the 15-30 Hz band highlighting specific room locations. In contrast, 30-100 Hz LFPs support a unified map of the behaviorally relevant start and goal locations. The single neuron responses (SNRs) do not substantially contribute to room or start-goal maps. Rather, the SNRs form a continuum, from neurons most active when the monkey is moving on a path toward the goal to neurons most active when it deviates from such paths. Granger analyses suggest that HPC firing precedes MST firing during cueing at the trial start location, mainly mediated by off-path neurons. In contrast, MST precedes HPC firing during steering, mainly mediated by on-path neurons. Interactions between MST and HPC are mediated by the parallel activation of on-path and off-path neurons, selectively activated across stages of this wayfinding task.
Affiliation(s)
- William K Page
- Dept. of Neurology, University of Rochester Medical Ctr., Rochester, NY 14642, USA
- David W Sulon
- Dept. of Neurology, Penn State Health Medical Ctr., Hershey, PA 17036, USA
- Charles J Duffy
- Dept. of Neurology, University of Rochester Medical Ctr., Rochester, NY 14642, USA; Dept. of Neurology, Penn State Health Medical Ctr., Hershey, PA 17036, USA; Dept. of Neurology, University Hospitals and Case Western Reserve University, Cleveland, OH 44122, USA
11. Jiang C, Liu J, Ni Y, Qu S, Liu L, Li Y, Yang L, Xu W. Mammalian-brain-inspired neuromorphic motion-cognition nerve achieves cross-modal perceptual enhancement. Nat Commun 2023; 14:1344. PMID: 36906637. PMCID: PMC10008641. DOI: 10.1038/s41467-023-36935-w.
Abstract
Perceptual enhancement of neural and behavioral responses due to combinations of multisensory stimuli is found in many animal species across different sensory modalities. By mimicking the multisensory integration of ocular-vestibular cues for enhanced spatial perception in macaques, a bioinspired motion-cognition nerve based on a flexible multisensory neuromorphic device is demonstrated. A fast, scalable and solution-processed fabrication strategy is developed to prepare a nanoparticle-doped two-dimensional (2D)-nanoflake thin film exhibiting superior electrostatic gating capability and charge-carrier mobility. The multi-input neuromorphic device fabricated using this thin film shows history-dependent plasticity, stable linear modulation, and spatiotemporal integration capability. These characteristics ensure parallel, efficient processing of bimodal motion signals encoded as spikes and assigned different perceptual weights. Motion-cognition function is realized by classifying motion types using the mean firing rates of encoded spikes and the postsynaptic current of the device. Demonstrations of recognizing human activity types and drone flight modes reveal that the motion-cognition performance matches the bio-plausible principles of perceptual enhancement by multisensory integration. Our system can potentially be applied in sensory robotics and smart wearables.
Affiliation(s)
- Chengpeng Jiang
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China; Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China; Research Center for Intelligent Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jiaqi Liu
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China; Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Yao Ni
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China; Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Shangda Qu
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China; Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Lu Liu
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China; Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Yue Li
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China; Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Lu Yang
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China; Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Wentao Xu
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China; Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
12. Sengsoon P, Siriworakunsak K. A comparison of muscle activity, posture and body discomfort during the use of different computer screen sizes. Int J Occup Saf Ergon 2023; 29:424-430. PMID: 35296229. DOI: 10.1080/10803548.2022.2054543.
Abstract
This study compares changes in neck angles, muscle activity, ergonomic risk and body discomfort caused by the use of two different computer screen sizes. Thirty-six female users worked with displays of 46.99- and 58.42-cm screen size and were assessed for craniocervical angle (CCA), craniovertebral angle (CVA), upper trapezius (UT) and sternocleidomastoid (SCM) muscle activity, ergonomic risk and body discomfort over a duration of 1 h. There were no significant differences between the two screen sizes (p > 0.05). However, there were significant differences in CCA, UT muscle activity and body discomfort before versus after usage for both screen sizes (p < 0.05). The results indicate that computer users can select different screen sizes for work but should pay attention to neck angle, muscle activity and body discomfort during prolonged use.
13. Xu LH, Sun Q, Zhang B, Li X. Attractive serial dependence in heading perception from optic flow occurs at the perceptual and postperceptual stages. J Vis 2022; 22:11. DOI: 10.1167/jov.22.12.11.
Affiliation(s)
- Ling-Hao Xu
- Department of Systems & Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Qi Sun
- Department of Psychology, Zhejiang Normal University, Jinhua, People's Republic of China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, People's Republic of China
- Baoyuan Zhang
- Department of Psychology, Zhejiang Normal University, Jinhua, People's Republic of China
- Xinyu Li
- Department of Psychology, Zhejiang Normal University, Jinhua, People's Republic of China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, People's Republic of China

14
Causal contribution of optic flow signal in Macaque extrastriate visual cortex for roll perception. Nat Commun 2022; 13:5479. [PMID: 36123363] [PMCID: PMC9485245] [DOI: 10.1038/s41467-022-33245-5] [Received: 01/17/2022] [Accepted: 09/08/2022] [Indexed: 11/08/2022]
Abstract
Optic flow is a powerful cue for inferring self-motion status, which is critical for postural control, spatial orientation, locomotion and navigation. In primates, neurons in extrastriate visual cortex (MSTd) are predominantly modulated by high-order optic flow patterns (e.g., spiral), yet a functional link to direct perception has been lacking. Here, we applied electrical microstimulation to selectively manipulate populations of MSTd neurons while macaques discriminated the direction of rotation around the line of sight (roll) or the direction of linear translation (heading), two tasks that were orthogonal in 3D spiral coordinates, using a four-alternative forced-choice paradigm. Microstimulation frequently biased the animals' roll perception towards the labeled lines coded by the stimulated neurons in contexts with either spiral or pure-rotation stimuli. Choice frequency was also altered between roll and translation flow patterns. Our results provide direct causal evidence that roll signals in MSTd, although often mixed with translation signals, can be extracted by downstream areas for the perception of rotation relative to the gravity vertical.
15
Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022; 39:125-137. [PMID: 35821337] [PMCID: PMC9849545] [DOI: 10.1007/s12264-022-00916-8] [Received: 01/14/2022] [Accepted: 04/29/2022] [Indexed: 01/22/2023]
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional views about the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future studies.
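The statistically Bayesian-optimal cue combination described in this abstract has a simple closed form for independent Gaussian cues: each cue is weighted by its reliability (inverse variance), and the combined estimate is never less reliable than either cue alone. A minimal sketch for two heading cues (illustrative function and variable names, not from any cited study):

```python
def integrate_cues(mu_vis, var_vis, mu_ves, var_ves):
    """Reliability-weighted (maximum-likelihood) combination of two Gaussian cues.

    mu_*  : each cue's heading estimate (e.g., degrees from straight ahead)
    var_* : each cue's variance (lower variance = higher reliability)
    """
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_ves)   # weight proportional to inverse variance
    mu_comb = w_vis * mu_vis + (1 - w_vis) * mu_ves       # combined heading estimate
    var_comb = (var_vis * var_ves) / (var_vis + var_ves)  # smaller than either cue's variance
    return mu_comb, var_comb

# Example: visual heading 2 deg (var 1), vestibular heading -2 deg (var 4).
# The more reliable visual cue gets weight 0.8, pulling the estimate toward it.
mu, var = integrate_cues(2.0, 1.0, -2.0, 4.0)  # mu is about 1.2, var = 0.8
```

The prediction tested behaviorally in these studies is exactly the variance line: combined-cue discrimination thresholds should fall below the better single-cue threshold.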
16
Vestibular and active self-motion signals drive visual perception in binocular rivalry. iScience 2021; 24:103417. [PMID: 34877486] [PMCID: PMC8632839] [DOI: 10.1016/j.isci.2021.103417] [Received: 07/20/2021] [Revised: 09/24/2021] [Accepted: 11/04/2021] [Indexed: 11/24/2022]
Abstract
Multisensory integration helps the brain build reliable models of the world and resolve ambiguities. Visual interactions with sound and touch are well established, but vestibular influences on vision are less well studied. Here, we test the vestibular influence on vision using horizontally opposed motions presented one to each eye, so that visual perception is unstable and alternates irregularly. Passive, whole-body rotations in the yaw plane stabilized visual alternations, with perceived direction oscillating congruently with rotation (leftward motion during leftward rotation, and vice versa). This demonstrates that a purely vestibular signal can resolve ambiguous visual motion and determine visual perception. Active self-rotation following the same sinusoidal profile also entrained vision to the rotation cycle, more strongly and with a shorter time lag, likely because of efference copy and predictive internal models. Both experiments show that visual ambiguity provides an effective paradigm to reveal how vestibular and motor inputs can shape visual perception.
- Binocular rivalry between left/right motions is stabilized by congruent head movement
- Left/right head rotations entrain rivalry dynamics so the matching direction is perceived
- Active and passive rotations both drive rivalry dominance to match rotation direction
- Resolving ambiguous vision occurs in a broader vestibular and action-based context
17
Smith AT. Cortical visual area CSv as a cingulate motor area: a sensorimotor interface for the control of locomotion. Brain Struct Funct 2021; 226:2931-2950. [PMID: 34240236] [PMCID: PMC8541968] [DOI: 10.1007/s00429-021-02325-5] [Received: 03/18/2021] [Accepted: 06/17/2021] [Indexed: 12/26/2022]
Abstract
The response properties, connectivity and function of the cingulate sulcus visual area (CSv) are reviewed. Cortical area CSv has been identified in both human and macaque brains and has similar response properties and connectivity in the two species. It is situated bilaterally in the cingulate sulcus, close to an established group of medial motor/premotor areas, and has strong connectivity with these areas, particularly the cingulate motor areas and the supplementary motor area, suggesting that it is involved in motor control. CSv is active during visual stimulation, but only if that stimulation is indicative of self-motion. It is also active during vestibular stimulation, and connectivity data suggest that it receives proprioceptive input. Connectivity with topographically organized somatosensory and motor regions strongly emphasizes the legs over the arms. Together, these properties suggest that CSv provides a key interface between the sensory and motor systems in the control of locomotion. Its role likely involves online control and adjustment of ongoing locomotor movements, including obstacle avoidance and maintaining the intended trajectory. It is proposed that CSv is best seen as part of the cingulate motor complex; in the human case, a modification of the influential scheme of Picard and Strick (Cereb Cortex 6:342-353, 1996) is proposed to reflect this.
Affiliation(s)
- Andrew T Smith
- Department of Psychology, Royal Holloway, University of London, Egham, TW20 0EX, UK.

18
Zheng Q, Zhou L, Gu Y. Temporal synchrony effects of optic flow and vestibular inputs on multisensory heading perception. Cell Rep 2021; 37:109999. [PMID: 34788608] [DOI: 10.1016/j.celrep.2021.109999] [Received: 10/14/2020] [Revised: 08/21/2021] [Accepted: 10/21/2021] [Indexed: 11/25/2022]
Abstract
Precise heading perception requires integration of optic flow and vestibular cues, yet the two cues often carry distinct temporal dynamics that may confound the benefit of cue integration. Here, we varied the temporal offset between the two sensory inputs while macaques discriminated headings around straight ahead. We find that the best heading performance occurs not under the natural condition of synchronous inputs with zero offset, but rather when visual stimuli are artificially adjusted to lead the vestibular stimuli by a few hundred milliseconds. This amount exactly matches the lag between the vestibular acceleration and visual speed signals as measured from single-unit activity in frontal and posterior parietal cortices. Aligning cues in these areas best facilitates integration, with some nonlinear gain-modulation effects. These findings are consistent with predictions from a model in which the brain integrates optic flow speed with a faster vestibular acceleration signal to sense instantaneous heading direction during self-motion in the environment.
Affiliation(s)
- Qihao Zheng
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, China; University of Chinese Academy of Sciences, 100049 Beijing, China
- Luxin Zhou
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, China; University of Chinese Academy of Sciences, 100049 Beijing, China
- Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, China; University of Chinese Academy of Sciences, 100049 Beijing, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, 201210 Shanghai, China.

19
Foster C, Sheng WA, Heed T, Ben Hamed S. The macaque ventral intraparietal area has expanded into three homologue human parietal areas. Prog Neurobiol 2021; 209:102185. [PMID: 34775040] [DOI: 10.1016/j.pneurobio.2021.102185] [Received: 06/21/2021] [Revised: 10/27/2021] [Accepted: 11/05/2021] [Indexed: 10/19/2022]
Abstract
The macaque ventral intraparietal area (VIP) in the fundus of the intraparietal sulcus has been implicated in a diverse range of sensorimotor and cognitive functions such as motion processing, multisensory integration, processing of head peripersonal space, defensive behavior, and numerosity coding. Here, we exhaustively review macaque VIP function, cytoarchitectonics, and anatomical connectivity and integrate it with human studies that have attempted to identify a potential human VIP homologue. We show that human VIP research has consistently identified three, rather than one, bilateral parietal areas that each appear to subsume some, but not all, of the macaque area's functionality. Available evidence suggests that this human "VIP complex" has evolved as an expansion of the macaque area, but that some precursory specialization within macaque VIP has been previously overlooked. The three human areas are dominated, roughly, by coding the head or self in the environment, visual heading direction, and the peripersonal environment around the head, respectively. A unifying functional principle may be best described as prediction in space and time, linking VIP to state estimation as a key parietal sensorimotor function. VIP's expansive differentiation of head and self-related processing may have been key in the emergence of human bodily self-consciousness.
Affiliation(s)
- Celia Foster
- Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany; Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Wei-An Sheng
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229, CNRS-University of Lyon 1, France
- Tobias Heed
- Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany; Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany; Department of Psychology, University of Salzburg, Salzburg, Austria; Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria.
- Suliann Ben Hamed
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229, CNRS-University of Lyon 1, France.

20
Orban GA, Sepe A, Bonini L. Parietal maps of visual signals for bodily action planning. Brain Struct Funct 2021; 226:2967-2988. [PMID: 34508272] [PMCID: PMC8541987] [DOI: 10.1007/s00429-021-02378-6] [Received: 05/26/2021] [Accepted: 09/01/2021] [Indexed: 12/24/2022]
Abstract
The posterior parietal cortex (PPC) has long been understood as a high-level integrative station for computing motor commands for the body based on sensory (i.e., mostly tactile and visual) input from the outside world. In the last decade, accumulating evidence has shown that the parietal areas not only extract the pragmatic features of manipulable objects, but also subserve sensorimotor processing of others’ actions. A paradigmatic case is that of the anterior intraparietal area (AIP), which encodes the identity of observed manipulative actions that afford potential motor actions the observer could perform in response to them. On these bases, we propose an AIP manipulative action-based template of the general planning functions of the PPC and review existing evidence supporting the extension of this model to other PPC regions and to a wider set of actions: defensive and locomotor actions. In our model, a hallmark of PPC functioning is the processing of information about the physical and social world to encode potential bodily actions appropriate for the current context. We further extend the model to actions performed with man-made objects (e.g., tools) and artifacts, because they become integral parts of the subject’s body schema and motor repertoire. Finally, we conclude that existing evidence supports a generally conserved neural circuitry that transforms integrated sensory signals into the variety of bodily actions that primates are capable of preparing and performing to interact with their physical and social world.
Affiliation(s)
- Guy A Orban
- Department of Medicine and Surgery, University of Parma, via Volturno 39/E, 43125, Parma, Italy.
- Alessia Sepe
- Department of Medicine and Surgery, University of Parma, via Volturno 39/E, 43125, Parma, Italy
- Luca Bonini
- Department of Medicine and Surgery, University of Parma, via Volturno 39/E, 43125, Parma, Italy.

21
Zhao B, Zhang Y, Chen A. Encoding of vestibular and optic flow cues to self-motion in the posterior superior temporal polysensory area. J Physiol 2021; 599:3937-3954. [PMID: 34192812] [DOI: 10.1113/jp281913] [Received: 05/15/2021] [Accepted: 06/28/2021] [Indexed: 11/08/2022]
Abstract
KEY POINTS: Neurons in the posterior superior temporal polysensory area (STPp) showed significant directional selectivity in response to vestibular, optic flow and combined visual-vestibular stimuli. Compared with the dorsal medial superior temporal area (MSTd), the visual latency was slower in STPp but the vestibular latency was faster. Heading preferences under combined stimulation in STPp were usually dominated by visual signals. Cross-modal enhancement was observed in STPp when vestibular and visual cues were presented together at their preferred headings. ABSTRACT: Human neuroimaging data have suggested that the superior temporal polysensory area (STP) may be involved in vestibular-visual interaction during heading computations, but heading selectivity has not been examined in the macaque. Here, we investigated the convergence of optic flow and vestibular signals in macaque STP using a virtual-reality system and found that 6.3% of STP neurons showed multisensory responses, with visual and vestibular direction preferences either congruent or opposite in roughly equal proportion. The percentage of vestibular-tuned cells (18.3%) was much smaller than that of visual-tuned cells (30.4%) in STP, and vestibular tuning strength was usually weaker than in the visual condition. The visual latency was significantly slower in STPp than in MSTd, but the vestibular latency was significantly faster. In the bimodal condition, STP cells' responses were dominated by visual signals: the visual heading preference was not affected by vestibular signals, but response amplitudes were modulated by vestibular signals in a subadditive way.
Affiliation(s)
- Bin Zhao
- Ministry of Education, Key Laboratory of Brain Functional Genomics (East China Normal University), Shanghai, 200062, China
- Yi Zhang
- Ministry of Education, Key Laboratory of Brain Functional Genomics (East China Normal University), Shanghai, 200062, China
- Aihua Chen
- Ministry of Education, Key Laboratory of Brain Functional Genomics (East China Normal University), Shanghai, 200062, China

22
Liu B, Tian Q, Gu Y. Robust vestibular self-motion signals in macaque posterior cingulate region. eLife 2021; 10:e64569. [PMID: 33827753] [PMCID: PMC8032402] [DOI: 10.7554/elife.64569] [Received: 11/03/2020] [Accepted: 03/29/2021] [Indexed: 11/13/2022]
Abstract
Self-motion signals, distributed ubiquitously across the parietal-temporal lobes, propagate to the limbic hippocampal system for vector-based navigation via hubs including the posterior cingulate cortex (PCC) and retrosplenial cortex (RSC). Although numerous studies have indicated that posterior cingulate areas are involved in spatial tasks, it is unclear how their neurons represent self-motion signals. By providing translation and rotation stimuli to macaques on a 6-degree-of-freedom motion platform, we discovered robust vestibular responses in PCC. A combined three-dimensional spatiotemporal model captured the data well and revealed multiple temporal components including velocity, acceleration, jerk, and position. Compared to PCC, RSC contained moderate vestibular temporal modulations and lacked significant spatial tuning. Visual self-motion signals were much weaker than vestibular signals in both regions. We conclude that the macaque posterior cingulate region carries vestibular-dominant self-motion signals with plentiful temporal components that could be useful for path integration.
Affiliation(s)
- Bingyu Liu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Qingyang Tian
- CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China

23
The role of cognitive factors and personality traits in the perception of illusory self-motion (vection). Atten Percept Psychophys 2021; 83:1804-1817. [PMID: 33409903] [PMCID: PMC8084801] [DOI: 10.3758/s13414-020-02228-3] [Accepted: 11/30/2020] [Indexed: 01/22/2023]
Abstract
Vection is a perceptual phenomenon that describes the visually induced subjective sensation of self-motion in the absence of physical motion. Previous research has discussed the potential involvement of top-down cognitive mechanisms on vection. Here, we quantified how cognitive manipulations such as contextual information (i.e., expectation) and plausibility (i.e., chair configuration) alter vection. We also explored how individual traits such as field dependence, depersonalization, anxiety, and social desirability might be related to vection. Fifty-one healthy adults were exposed to an optic flow stimulus that consisted of horizontally moving black-and-white bars presented on three adjacent monitors to generate circular vection. Participants were divided into three groups and given experimental instructions designed to induce either strong, weak, or no expectation with regard to the intensity of vection. In addition, the configuration of the chair (rotatable or fixed) was modified during the experiment. Vection onset time, duration, and intensity were recorded. Results showed that expectation altered vection intensity, but only when the chair was in the rotatable configuration. Positive correlations for vection measures with field dependence and depersonalization, but no sex-related effects were found. Our results show that vection can be altered by cognitive factors and that individual traits can affect the perception of vection, suggesting that vection is not a purely perceptual phenomenon, but can also be affected by top-down mechanisms.
24
De Castro V, Smith AT, Beer AL, Leguen C, Vayssière N, Héjja-Brichard Y, Audurier P, Cottereau BR, Durand JB. Connectivity of the Cingulate Sulcus Visual Area (CSv) in Macaque Monkeys. Cereb Cortex 2021; 31:1347-1364. [PMID: 33067998] [PMCID: PMC7786354] [DOI: 10.1093/cercor/bhaa301] [Received: 04/07/2020] [Revised: 08/12/2020] [Accepted: 09/11/2020] [Indexed: 12/27/2022]
Abstract
In humans, the posterior cingulate cortex contains an area sensitive to visual cues to self-motion. This cingulate sulcus visual area (CSv) is structurally and functionally connected with several (multi)sensory and (pre)motor areas recruited during locomotion. In nonhuman primates, electrophysiology has shown that the cingulate cortex is also related to spatial navigation. Recently, functional MRI in macaque monkeys identified a cingulate area with similar visual properties to human CSv. In order to bridge the gap between human and nonhuman primate research, we examined the structural and functional connectivity of putative CSv in three macaque monkeys adopting the same approach as in humans based on diffusion MRI and resting-state functional MRI. The results showed that putative monkey CSv connects with several visuo-vestibular areas (e.g., VIP/FEFsem/VPS/MSTd) as well as somatosensory cortex (e.g., dorsal aspects of areas 3/1/2), all known to process sensory signals that can be triggered by self-motion. Additionally, strong connections are observed with (pre)motor areas located in the dorsal prefrontal cortex (e.g., F3/F2/F1) and within the anterior cingulate cortex (e.g., area 24). This connectivity pattern is strikingly reminiscent of that described for human CSv, suggesting that the sensorimotor control of locomotion relies on similar organizational principles in human and nonhuman primates.
Affiliation(s)
- V De Castro
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse Cedex, France
- A T Smith
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- A L Beer
- Institut für Psychologie, Universität Regensburg, 93053 Regensburg, Germany
- C Leguen
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse Cedex, France
- N Vayssière
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse Cedex, France
- Y Héjja-Brichard
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse Cedex, France
- P Audurier
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse Cedex, France
- B R Cottereau
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse Cedex, France
- J B Durand
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse Cedex, France

25
Abstract
Previous work shows that observers can use information from optic flow to perceive the direction of self-motion (i.e. heading) and that perceived heading exhibits a bias towards the center of the display (center bias). More recent work shows that the brain is sensitive to serial correlations and the perception of current stimuli can be affected by recently seen stimuli, a phenomenon known as serial dependence. In the current study, we examined whether, apart from center bias, serial dependence could be independently observed in heading judgments and how adding noise to optic flow affected center bias and serial dependence. We found a repulsive serial dependence effect in heading judgments after factoring out center bias in heading responses. The serial effect expands heading estimates away from the previously seen heading to increase overall sensitivity to changes in heading directions. Both the center bias and repulsive serial dependence effects increased with increasing noise in optic flow, and the noise-dependent changes in the serial effect were consistent with an ideal observer model. Our results suggest that the center bias effect is due to a prior of the straight-ahead direction in the Bayesian inference account for heading perception, whereas the repulsive serial dependence is an effect that reduces response errors and has the added utility of counteracting the center bias in heading judgments.
Affiliation(s)
- Qi Sun
- Department of Psychology, The University of Hong Kong, Hong Kong SAR
- Huihui Zhang
- School of Psychology, The University of Sydney, Sydney, Australia
- David Alais
- School of Psychology, The University of Sydney, Sydney, Australia
- Li Li
- Department of Psychology, The University of Hong Kong, Hong Kong SAR; Faculty of Arts and Science, New York University Shanghai, Shanghai, People's Republic of China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, People's Republic of China

26
Field DT, Biagi N, Inman LA. The role of the ventral intraparietal area (VIP/pVIP) in the perception of object-motion and self-motion. Neuroimage 2020; 213:116679. [DOI: 10.1016/j.neuroimage.2020.116679] [Received: 08/09/2019] [Revised: 01/15/2020] [Accepted: 02/23/2020] [Indexed: 10/24/2022]
27
Ertl M, Boegle R. Investigating the vestibular system using modern imaging techniques: a review on the available stimulation and imaging methods. J Neurosci Methods 2019; 326:108363. [PMID: 31351972] [DOI: 10.1016/j.jneumeth.2019.108363] [Received: 05/07/2019] [Revised: 07/12/2019] [Accepted: 07/12/2019] [Indexed: 02/06/2023]
Abstract
The vestibular organs, located in the inner ear, sense linear and rotational acceleration of the head and its position relative to the gravitational field of the earth. These signals are essential for many fundamental skills, such as the coordination of eye and head movements in three-dimensional space or bipedal locomotion in humans. Furthermore, vestibular signals have been shown to contribute to higher cognitive functions such as navigation. Because the main role of the vestibular system is the sensation of motion, it is a challenging system to study in combination with modern imaging methods. Over recent years, a variety of methods have been used to stimulate the vestibular system, ranging from artificial approaches such as galvanic or caloric vestibular stimulation to passive whole-body accelerations using hexapod motion platforms or rotary chairs. In the first section of this review we provide an overview of all stimulation methods that have been used in combination with imaging methods (fMRI, PET, E/MEG, fNIRS). The advantages and disadvantages of each method are discussed, and we summarize typical settings and parameters used in previous studies. In the second section, the role of the four imaging techniques is discussed in the context of vestibular research, and their potential strengths and interactions with the presented stimulation methods are outlined.
Affiliation(s)
- Matthias Ertl
- Department of Psychology, University of Bern, Switzerland; Sleep-Wake-Epilepsy Center, Department of Neurology, University Hospital (Inselspital) Bern, Switzerland.
- Rainer Boegle
- Department of Neurology, Ludwig-Maximilians-Universität München, Germany; German Center for Vertigo and Balance Disorders, IFB-LMU, Ludwig-Maximilians-Universität, Munich, Germany

28
Zhang WH, Wang H, Chen A, Gu Y, Lee TS, Wong KM, Wu S. Complementary congruent and opposite neurons achieve concurrent multisensory integration and segregation. eLife 2019; 8:e43753. [PMID: 31120416] [PMCID: PMC6565362] [DOI: 10.7554/elife.43753] [Received: 11/19/2018] [Accepted: 05/22/2019] [Indexed: 11/13/2022]
Abstract
Our brain perceives the world by exploiting multisensory cues to extract information about various aspects of external stimuli. The sensory cues from the same stimulus should be integrated to improve perception, and otherwise segregated to distinguish different stimuli. In reality, however, the brain faces the challenge of recognizing stimuli without knowing in advance the sources of sensory cues. To address this challenge, we propose that the brain conducts integration and segregation concurrently with complementary neurons. Studying the inference of heading-direction via visual and vestibular cues, we develop a network model with two reciprocally connected modules modeling interacting visual-vestibular areas. In each module, there are two groups of neurons whose tunings under each sensory cue are either congruent or opposite. We show that congruent neurons implement integration, while opposite neurons compute cue disparity information for segregation, and the interplay between two groups of neurons achieves efficient multisensory information processing.
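The division of labor described in this abstract can be caricatured in a few lines: congruent neurons effectively pool the two cues (integration), while opposite neurons carry their disparity (segregation), which can then gate whether pooling is appropriate. A toy linear sketch under that reading (illustrative names and threshold, not the paper's network model):

```python
def congruent_opposite_readout(vis, ves, conflict_threshold=5.0):
    """Toy readout of two heading cues (degrees) via congruent- and opposite-like channels.

    integrated : congruent-channel estimate, here a simple equal-weight average
    disparity  : opposite-channel cue-conflict signal (difference of the cues)
    same_source: heuristic segregation decision based on the disparity magnitude
    """
    integrated = (vis + ves) / 2          # integration: pooled heading estimate
    disparity = vis - ves                 # segregation: cue-disparity signal
    same_source = abs(disparity) < conflict_threshold  # large conflict -> likely separate sources
    return integrated, disparity, same_source

# Small conflict: cues pooled; large conflict: flagged for segregation.
est1, d1, ok1 = congruent_opposite_readout(3.0, 1.0)    # est1 = 2.0, d1 = 2.0, ok1 = True
est2, d2, ok2 = congruent_opposite_readout(10.0, -8.0)  # est2 = 1.0, d2 = 18.0, ok2 = False
```

The equal-weight average stands in for the reliability-weighted combination the congruent population supports; the point of the sketch is only that integration and a disparity signal can be computed concurrently from the same inputs.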
Affiliation(s)
- Wen-Hao Zhang
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong; Center of the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States
- He Wang
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong
- Aihua Chen
- Key Laboratory of Brain Functional Genomics, Primate Research Center, East China Normal University, Shanghai, China
- Yong Gu
- Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Tai Sing Lee
- Center of the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States
- Ky Michael Wong
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong
- Si Wu
- School of Electronics Engineering and Computer Science, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
29
Abstract
Detection of the state of self-motion, such as the instantaneous heading direction, the traveled trajectory, and the traveled distance or time, is critical for efficient spatial navigation. Numerous psychophysical studies have indicated that the vestibular system, originating from the otolith organs and semicircular canals in our inner ears, provides robust signals for different aspects of self-motion perception. In addition, vestibular signals interact with other sensory signals, such as visual optic flow, to facilitate natural navigation. These behavioral results are consistent with recent findings in neurophysiological studies. In particular, vestibular activity in response to translation or rotation of the head/body in darkness has been revealed in a growing number of cortical regions, many of which are also sensitive to visual motion stimuli. The temporal dynamics of vestibular activity in the central nervous system can vary widely, ranging from acceleration-dominant to velocity-dominant. Different temporal dynamic signals may be decoded by higher-level areas for different functions. For example, the acceleration signals during translation of the body in the horizontal plane may be used by the brain to estimate heading direction. Although translation and rotation signals arise from independent peripheral organs, that is, the otolith organs and the canals, respectively, they frequently converge onto single neurons in the central nervous system, including both the brainstem and the cerebral cortex. The convergent neurons typically exhibit stronger responses during a combined curved motion trajectory, which may serve as the neural correlate for complex path perception. During spatial navigation, traveled distance or time may be encoded by different populations of neurons in multiple regions, including the hippocampal-entorhinal system, the posterior parietal cortex, and the frontal cortex.
Affiliation(s)
- Zhixian Cheng
- Department of Neuroscience, Yale School of Medicine, New Haven, CT, United States
- Yong Gu
- Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
30
Gu Y. Vestibular signals in primate cortex for self-motion perception. Curr Opin Neurobiol 2018; 52:10-17. [DOI: 10.1016/j.conb.2018.04.004] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2018] [Revised: 03/12/2018] [Accepted: 04/07/2018] [Indexed: 10/17/2022]
31
Noel JP, Blanke O, Serino A. From multisensory integration in peripersonal space to bodily self-consciousness: from statistical regularities to statistical inference. Ann N Y Acad Sci 2018; 1426:146-165. [PMID: 29876922 DOI: 10.1111/nyas.13867] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2017] [Revised: 04/24/2018] [Accepted: 05/02/2018] [Indexed: 01/09/2023]
Abstract
Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body, and, as illustrated by numerous illusions, it scaffolds subjective experience of the world and self. In recent years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have suggested multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, although still very much a minority, have recently been cast in the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions regarding the neural implementation of inference computations outlined by the multisensory field. This computational approach, leveraging the general understanding of multisensory processes, promises to advance scientific comprehension of one of the most mysterious questions puzzling humankind: how our brain creates the experience of a self in interaction with the environment.
Affiliation(s)
- Jean-Paul Noel
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee
- Olaf Blanke
- Laboratory of Cognitive Neuroscience (LNCO), Center for Neuroprosthetics (CNP), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
- Department of Neurology, University of Geneva, Geneva, Switzerland
- Andrea Serino
- MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Lausanne, Switzerland
32
Flexible egocentric and allocentric representations of heading signals in parietal cortex. Proc Natl Acad Sci U S A 2018; 115:E3305-E3312. [PMID: 29555744 DOI: 10.1073/pnas.1715625115] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
By systematically manipulating head position relative to the body and eye position relative to the head, previous studies have shown that vestibular tuning curves of neurons in the ventral intraparietal (VIP) area remain invariant when expressed in body-/world-centered coordinates. However, body orientation relative to the world was not manipulated; thus, an egocentric, body-centered representation could not be distinguished from an allocentric, world-centered reference frame. We manipulated the orientation of the body relative to the world such that we could distinguish whether vestibular heading signals in VIP are organized in body- or world-centered reference frames. We found a hybrid representation, depending on gaze direction. When gaze remained fixed relative to the body, the vestibular heading tuning of VIP neurons shifted systematically with body orientation, indicating an egocentric, body-centered reference frame. In contrast, when gaze remained fixed relative to the world, this representation changed to be intermediate between body- and world-centered. We conclude that the neural representation of heading in posterior parietal cortex is flexible, depending on gaze and possibly attentional demands.
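The body- versus world-centered distinction above is typically quantified by how far a neuron's tuning curve shifts when body orientation changes: a full shift indicates an egocentric (body-centered) frame, no shift a world-centered one. A simple cross-correlation-based shift estimate can illustrate the logic; the tuning curves below are synthetic, not data from the study.

```python
import numpy as np

def tuning_shift(curve_a, curve_b, headings):
    """Circular cross-correlation: the lag maximizing the correlation
    between two tuning curves estimates how far curve_b is shifted
    relative to curve_a (in the units of `headings`)."""
    lags = np.arange(len(headings))
    corrs = [np.dot(curve_a, np.roll(curve_b, -k)) for k in lags]
    best = lags[int(np.argmax(corrs))]
    step = headings[1] - headings[0]
    shift = best * step
    return shift if shift <= 180 else shift - 360  # wrap to (-180, 180]

headings = np.arange(0, 360, 10)
# Synthetic neuron tuned to 90 deg; after a 40-deg body rotation the
# curve shifts fully with the body (an egocentric, body-centered frame).
base = np.exp(np.cos(np.radians(headings - 90)))
rotated = np.exp(np.cos(np.radians(headings - 130)))
shift = tuning_shift(base, rotated, headings)
```

A world-centered neuron would yield a shift near zero under the same manipulation; intermediate frames give partial shifts.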
33
Cottereau BR, Smith AT, Rima S, Fize D, Héjja-Brichard Y, Renaud L, Lejards C, Vayssière N, Trotter Y, Durand JB. Processing of Egomotion-Consistent Optic Flow in the Rhesus Macaque Cortex. Cereb Cortex 2018; 27:330-343. [PMID: 28108489 PMCID: PMC5939222 DOI: 10.1093/cercor/bhw412] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2016] [Indexed: 11/12/2022] Open
Abstract
The cortical network that processes visual cues to self-motion was characterized with functional magnetic resonance imaging in 3 awake behaving macaques. The experimental protocol was similar to previous human studies in which the responses to a single large optic flow patch were contrasted with responses to an array of 9 similar flow patches. This distinguishes cortical regions where neurons respond to flow in their receptive fields regardless of surrounding motion from those that are sensitive to whether the overall image arises from self-motion. In all 3 animals, significant selectivity for egomotion-consistent flow was found in several areas previously associated with optic flow processing, notably the dorsal middle superior temporal area (MSTd), the ventral intraparietal area (VIP), and VPS. It was also seen in areas 7a (Opt), STPm, FEFsem, and FEFsac, and in a region of the cingulate sulcus that may be homologous with human area CSv. Selectivity for egomotion-compatible flow was never total but was particularly strong in VPS and putative macaque CSv. Direct comparison of the results with the equivalent human studies reveals several commonalities but also some differences.
Affiliation(s)
- Benoit R Cottereau
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Andrew T Smith
- Department of Psychology, Royal Holloway, University of London, Egham, UK
- Samy Rima
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Denis Fize
- Laboratoire d'Anthropologie Moléculaire et Imagerie de Synthèse, CNRS-Université de Toulouse, Toulouse, France
- Yseult Héjja-Brichard
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Luc Renaud
- CNRS, CE2F PRIM UMS3537, Marseille, France; Aix Marseille Université, Centre d'Exploration Fonctionnelle et de Formation, Marseille, France
- Camille Lejards
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Nathalie Vayssière
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Yves Trotter
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Jean-Baptiste Durand
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
34
Yang L, Gu Y. Distinct spatial coordinate of visual and vestibular heading signals in macaque FEFsem and MSTd. eLife 2017; 6. [PMID: 29134944 PMCID: PMC5685470 DOI: 10.7554/elife.29809] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2017] [Accepted: 11/03/2017] [Indexed: 11/17/2022] Open
Abstract
Precise heading estimation requires integration of visual optic flow and vestibular inertial motion, which originate in distinct spatial coordinates (eye- and head-centered, respectively). To explore whether the two heading signals may share a common reference frame along the hierarchy of cortical stages, we explored two multisensory areas in macaques: the smooth pursuit area of the frontal eye field (FEFsem), closer to the motor side, and the dorsal portion of the medial superior temporal area (MSTd), closer to the sensory side. In both areas, vestibular signals are head-centered, whereas visual signals are mainly eye-centered. However, visual signals in FEFsem are more shifted towards the head coordinate compared to MSTd. These results are robust, being largely independent of: (1) smooth pursuit eye movement, (2) motion parallax cues, and (3) behavioral context for active heading estimation, indicating that visual and vestibular heading signals may be represented in distinct spatial coordinates in sensory cortices.
Affiliation(s)
- Lihua Yang
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
35
Dissociation of Self-Motion and Object Motion by Linear Population Decoding That Approximates Marginalization. J Neurosci 2017; 37:11204-11219. [PMID: 29030435 DOI: 10.1523/jneurosci.1177-17.2017] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2017] [Revised: 10/02/2017] [Accepted: 10/06/2017] [Indexed: 11/21/2022] Open
Abstract
We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both single-neuron and population levels, that vestibular signals help to dissociate self-motion and object motion.

SIGNIFICANCE STATEMENT The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. The brain can solve this problem by marginalizing over irrelevant properties to estimate the property of interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd.
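The linear-decoding idea can be illustrated with a toy population in which heading and object motion are confounded in every unit's response; a regularized linear readout trained across many heading/object combinations then reports heading while approximately marginalizing out object motion. This is a schematic sketch under invented tuning parameters, not the decoder or data of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MSTd-like population with mixed selectivity: each unit responds to
# both heading (s) and object motion (o), so the two are confounded.
n_units = 200
c_s = rng.uniform(-30.0, 30.0, n_units)   # heading tuning centers (deg)
c_o = rng.uniform(-30.0, 30.0, n_units)   # object-motion tuning centers
g_s = rng.uniform(0.5, 1.5, n_units)      # per-unit gains for each variable
g_o = rng.uniform(0.5, 1.5, n_units)

def population_response(s, o):
    """Sum of Gaussian tuning to heading and to object motion, plus noise."""
    rate = (g_s * np.exp(-(s - c_s) ** 2 / 200.0) +
            g_o * np.exp(-(o - c_o) ** 2 / 200.0))
    return rate + 0.05 * rng.standard_normal(n_units)

# Train a ridge-regularized linear readout of heading while object motion
# varies randomly; minimizing squared error across these trials pushes the
# weights to cancel the object-motion contributions.
S = rng.uniform(-25.0, 25.0, 1000)
O = rng.uniform(-25.0, 25.0, 1000)
R = np.stack([population_response(s, o) for s, o in zip(S, O)])
X = np.column_stack([R, np.ones(len(S))])          # responses + bias term
lam = 10.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ S)

# Probe: heading fixed at 5 deg while object motion sweeps the range;
# a marginalizing decoder should report roughly 5 deg throughout.
probe = [float(np.append(population_response(5.0, o), 1.0) @ w)
         for o in np.linspace(-25.0, 25.0, 11)]
```

The ridge term keeps the weights bounded despite the near-collinear Gaussian regressors; without it, least squares can amplify trial noise.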
36
Garzorz IT, MacNeilage PR. Visual-Vestibular Conflict Detection Depends on Fixation. Curr Biol 2017; 27:2856-2861.e4. [DOI: 10.1016/j.cub.2017.08.011] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2017] [Revised: 06/19/2017] [Accepted: 08/04/2017] [Indexed: 10/18/2022]
37
Neuronal Encoding of Self and Others' Head Rotation in the Macaque Dorsal Prefrontal Cortex. Sci Rep 2017; 7:8571. [PMID: 28819117 PMCID: PMC5561028 DOI: 10.1038/s41598-017-08936-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Accepted: 07/17/2017] [Indexed: 12/25/2022] Open
Abstract
Following gaze is a crucial skill in primates for understanding where and at what others are looking, and it often requires head rotation. The neural bases underlying head rotation are deemed to overlap with the parieto-frontal attention/gaze-shift network. Here, we show that a set of neurons in the monkey's Brodmann area 9/46dr (BA 9/46dr), which is involved in orienting processes and joint attention, becomes active during self head rotation and that the activity of these neurons cannot be accounted for by saccade-related activity (head-rotation neurons). Another set of BA 9/46dr neurons encodes head rotation performed by an observed agent facing the monkey (visually triggered neurons). Among these latter neurons, almost half exhibit the intriguing property of encoding both execution and observation of head rotation (mirror-like neurons). Finally, by means of neuronal tracing techniques, we showed that BA 9/46dr takes part in two distinct networks: a dorso/mesial network, playing a role in spatial head/gaze orientation, and a ventrolateral network, likely involved in processing social stimuli and mirroring others' head rotation. The overall results of this study provide a new, comprehensive picture of the role of BA 9/46dr in encoding self and others' head rotation, likely playing a role in head-following behaviors.
38
Smith AT, Greenlee MW, DeAngelis GC, Angelaki DE. Distributed Visual–Vestibular Processing in the Cerebral Cortex of Man and Macaque. Multisens Res 2017. [DOI: 10.1163/22134808-00002568] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Recent advances in understanding the neurobiological underpinnings of visual–vestibular interactions underlying self-motion perception are reviewed with an emphasis on comparisons between the macaque and human brains. In both species, several distinct cortical regions have been identified that are active during both visual and vestibular stimulation and in some of these there is clear evidence for sensory integration. Several possible cross-species homologies between cortical regions are identified. A key feature of cortical organization is that the same information is apparently represented in multiple, anatomically diverse cortical regions, suggesting that information about self-motion is used for different purposes in different brain regions.
Affiliation(s)
- Andrew T. Smith
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Mark W. Greenlee
- Institute of Experimental Psychology, University of Regensburg, 93053 Regensburg, Germany
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
- Dora E. Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA
39
Laurens J, Liu S, Yu XJ, Chan R, Dickman D, DeAngelis GC, Angelaki DE. Transformation of spatiotemporal dynamics in the macaque vestibular system from otolith afferents to cortex. eLife 2017; 6:e20787. [PMID: 28075326 PMCID: PMC5226653 DOI: 10.7554/elife.20787] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2016] [Accepted: 12/22/2016] [Indexed: 01/27/2023] Open
Abstract
Sensory signals undergo substantial recoding when neural activity is relayed from sensors through pre-thalamic and thalamic nuclei to cortex. To explore how temporal dynamics and directional tuning are sculpted in hierarchical vestibular circuits, we compared responses of macaque otolith afferents with neurons in the vestibular and cerebellar nuclei, as well as five cortical areas, to identical three-dimensional translational motion. We demonstrate a remarkable spatio-temporal transformation: otolith afferents carry spatially aligned cosine-tuned translational acceleration and jerk signals. In contrast, brainstem and cerebellar neurons exhibit non-linear, mixed selectivity for translational velocity, acceleration, jerk and position. Furthermore, these components often show dissimilar spatial tuning. Moderate further transformation of translation signals occurs in the cortex, such that similar spatio-temporal properties are found in multiple cortical areas. These results suggest that the first synapse represents a key processing element in vestibular pathways, robustly shaping how self-motion is represented in central vestibular circuits and cortical areas.
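The kind of decomposition described above, expressing a response as a mixture of position, velocity, acceleration, and jerk components of the motion profile, can be sketched with an ordinary least-squares regression on a synthetic response. The motion profile and the "afferent-like" unit below are entirely made up for illustration; they are not the study's stimuli or fitting procedure.

```python
import numpy as np

# Smooth 2-s translation with a Gaussian velocity profile.
t = np.linspace(0.0, 2.0, 400)
vel = np.exp(-((t - 1.0) ** 2) / (2 * 0.05))       # peak speed at t = 1 s
pos = np.cumsum(vel) * (t[1] - t[0])               # integrate for position
acc = np.gradient(vel, t)
jerk = np.gradient(acc, t)

# Synthetic unit built to carry mostly acceleration plus some jerk.
rng = np.random.default_rng(1)
rate = 1.0 * acc + 0.3 * jerk + 0.01 * rng.standard_normal(t.size)

# Regress the firing profile on the four kinematic components; the fitted
# coefficients recover the mixture the unit was built from.
X = np.column_stack([pos, vel, acc, jerk])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
```

Applied to real responses, the relative sizes of such coefficients are one way to summarize where a neuron sits on the acceleration-dominant to velocity-dominant continuum.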
Affiliation(s)
- Jean Laurens
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Sheng Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiong-Jie Yu
- Department of Neuroscience, Baylor College of Medicine, Houston, United States; Zhejiang University Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University, Hangzhou, China; Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, China
- Raymond Chan
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- David Dickman
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, United States
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
40
Chandrasekaran C. Computational principles and models of multisensory integration. Curr Opin Neurobiol 2016; 43:25-34. [PMID: 27918886 DOI: 10.1016/j.conb.2016.11.002] [Citation(s) in RCA: 51] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2016] [Revised: 10/27/2016] [Accepted: 11/09/2016] [Indexed: 12/22/2022]
Abstract
Combining information from multiple senses creates robust percepts, speeds up responses, enhances learning, and improves detection, discrimination, and recognition. In this review, I discuss computational models and principles that provide insight into how this process of multisensory integration occurs at the behavioral and neural level. My initial focus is on drift-diffusion and Bayesian models that can predict behavior in multisensory contexts. I then highlight how recent neurophysiological and perturbation experiments provide evidence for a distributed redundant network for multisensory integration. I also emphasize studies which show that task-relevant variables in multisensory contexts are distributed in heterogeneous neural populations. Finally, I describe dimensionality reduction methods and recurrent neural network models that may help decipher heterogeneous neural populations involved in multisensory integration.
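As a concrete instance of the drift-diffusion account mentioned above, the classic closed-form mean decision time of a symmetric-bound diffusion shows why pooling evidence across modalities predicts faster responses. The parameter values below are arbitrary illustrations, not fits to any dataset discussed in the review.

```python
import math

def mean_decision_time(drift, bound, sigma=1.0):
    """Mean first-passage time of a drift-diffusion process that starts
    midway between symmetric bounds at +/- bound (standard DDM result):
    E[T] = (bound / drift) * tanh(bound * drift / sigma**2)."""
    return (bound / drift) * math.tanh(bound * drift / sigma ** 2)

# Coactivation account of the multisensory speed-up: if drift rates add
# across modalities, the combined condition reaches the bound sooner on
# average than either unisensory condition.
t_aud = mean_decision_time(0.8, 1.0)
t_vis = mean_decision_time(1.0, 1.0)
t_av = mean_decision_time(0.8 + 1.0, 1.0)
```

Summing drifts is only one of the combination rules considered in this literature; race models make different quantitative predictions from the same ingredients.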
41
Abstract
How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas, that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas.

SIGNIFICANCE STATEMENT To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is at the center of the network topology, and information integration is achieved in a distributed manner through reciprocally connected local processors. Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain.
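The decentralized idea, with no central integrator and optimal integration emerging from reciprocal communication, can be caricatured with two coupled estimators descending a shared quadratic cost. Each module sees only its own cue plus its neighbor's current estimate; with strong enough coupling both converge near the precision-weighted (optimal) combination. This is a toy consensus sketch, not the paper's biologically realistic network model; all names and parameters are invented.

```python
def decentralized_estimates(x1, p1, x2, p2, coupling=50.0,
                            lr=0.01, steps=4000):
    """Two reciprocally coupled modules: module i sees cue xi with
    precision pi and is also pulled toward its neighbor's estimate.
    Gradient steps on p1*(e1-x1)^2 + p2*(e2-x2)^2 + coupling*(e1-e2)^2."""
    e1, e2 = x1, x2                     # each module starts at its own cue
    for _ in range(steps):
        e1 -= lr * (p1 * (e1 - x1) + coupling * (e1 - e2))
        e2 -= lr * (p2 * (e2 - x2) + coupling * (e2 - e1))
    return e1, e2

# Equal-precision cues at 10 and 14 deg: both local estimates settle
# near the optimal integrated value of 12 deg, without any central node.
e1, e2 = decentralized_estimates(10.0, 1.0, 14.0, 1.0)
```

The residual gap between the two estimates shrinks as the coupling grows, mirroring the paper's point that reciprocal connection strength sets the extent of cue integration.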