1
Magrou L, Joyce MKP, Froudist-Walsh S, Datta D, Wang XJ, Martinez-Trujillo J, Arnsten AFT. The meso-connectomes of mouse, marmoset, and macaque: network organization and the emergence of higher cognition. Cereb Cortex 2024; 34:bhae174. PMID: 38771244; PMCID: PMC11107384; DOI: 10.1093/cercor/bhae174.
Abstract
The recent publications of the inter-areal connectomes for mouse, marmoset, and macaque cortex have allowed deeper comparisons across rodent vs. primate cortical organization. In general, these show that the mouse has very widespread, "all-to-all" inter-areal connectivity (i.e. a "highly dense" connectome in a graph theoretical framework), while primates have a more modular organization. In this review, we highlight the relevance of these differences to function, including the example of primary visual cortex (V1) which, in the mouse, is interconnected with all other areas, therefore including other primary sensory and frontal areas. We argue that this dense inter-areal connectivity benefits multimodal associations, at the cost of reduced functional segregation. Conversely, primates have expanded cortices with a modular connectivity structure, where V1 is almost exclusively interconnected with other visual cortices, themselves organized in relatively segregated streams, and hierarchically higher cortical areas such as prefrontal cortex provide top-down regulation for specifying precise information for working memory storage and manipulation. Increased complexity in cytoarchitecture, connectivity, dendritic spine density, and receptor expression additionally reveal a sharper hierarchical organization in primate cortex. Together, we argue that these primate specializations permit separable deconstruction and selective reconstruction of representations, which is essential to higher cognition.
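The "highly dense" vs. modular contrast drawn here is graph-theoretic: density is the fraction of possible inter-areal connections that actually exist. A minimal sketch (all numbers illustrative, not taken from the connectome papers):

```python
def density(n_areas, n_connections):
    """Density of a directed inter-areal graph: observed connections
    divided by the n*(n-1) possible ordered area pairs."""
    return n_connections / (n_areas * (n_areas - 1))

# Illustrative only: a near "all-to-all" mouse-like connectome vs. a
# sparser, modular primate-like one of the same size (50 areas have
# 50*49 = 2450 possible directed connections).
dense_graph = density(50, 2400)    # ~0.98, nearly all-to-all
modular_graph = density(50, 1500)  # ~0.61, room for segregated streams
```

Under this measure, a network where V1 links to every other area sits near density 1, while a modular network leaves most cross-stream pairs unconnected.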
Affiliation(s)
- Loïc Magrou
- Department of Neural Science, New York University, New York, NY 10003, United States
- Mary Kate P Joyce
- Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06510, United States
- Sean Froudist-Walsh
- School of Engineering Mathematics and Technology, University of Bristol, Bristol, BS8 1QU, United Kingdom
- Dibyadeep Datta
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, United States
- Xiao-Jing Wang
- Department of Neural Science, New York University, New York, NY 10003, United States
- Julio Martinez-Trujillo
- Departments of Physiology and Pharmacology, and Psychiatry, Schulich School of Medicine and Dentistry, Western University, London, ON, N6A 3K7, Canada
- Amy F T Arnsten
- Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06510, United States
2
Rubinstein JF, Singh M, Kowler E. Bayesian approaches to smooth pursuit of random dot kinematograms: effects of varying RDK noise and the predictability of RDK direction. J Neurophysiol 2024; 131:394-416. PMID: 38149327; DOI: 10.1152/jn.00116.2023.
Abstract
Smooth pursuit eye movements respond on the basis of both immediate and anticipated target motion, where anticipations may be derived from either memory or perceptual cues. To study the combined influence of both immediate sensory motion and anticipation, subjects pursued clear or noisy random dot kinematograms (RDKs) whose mean directions were chosen from Gaussian distributions with SDs = 10° (narrow prior) or 45° (wide prior). Pursuit directions were consistent with Bayesian theory in that transitions over time from dependence on the prior to near total dependence on immediate sensory motion (likelihood) took longer with the noisier RDKs and with the narrower, more reliable, prior. Results were fit to Bayesian models in which parameters representing the variability of the likelihood either were or were not constrained to be the same for both priors. The unconstrained model provided a statistically better fit, with the influence of the prior in the constrained model smaller than predicted from strict reliability-based weighting of prior and likelihood. Factors that may have contributed to this outcome include prior variability different from nominal values, low-level sensorimotor learning with the narrow prior, or departures of pursuit from strict adherence to reliability-based weighting. Although modifications of, or alternatives to, the normative Bayesian model will be required, these results, along with previous studies, suggest that Bayesian approaches are a promising framework to understand how pursuit combines immediate sensory motion, past history, and informative perceptual cues to accurately track the target motion that is most likely to occur in the immediate future.NEW & NOTEWORTHY Smooth pursuit eye movements respond on the basis of anticipated, as well as immediate, target motions. 
Bayesian models using reliability-based weighting of previous (prior) and immediate target motions (likelihood) accounted for many, but not all, aspects of pursuit of clear and noisy random dot kinematograms with different levels of predictability. Bayesian approaches may solve the long-standing problem of how pursuit combines immediate sensory motion and anticipation of future motion to configure an effective response.
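The reliability-based weighting these models use is the standard inverse-variance combination of prior and likelihood. A minimal sketch, with direction values and SDs chosen for illustration (only the 10°/45° prior SDs come from the abstract):

```python
def combine_prior_likelihood(mu_prior, sd_prior, mu_like, sd_like):
    """Reliability-weighted (inverse-variance) combination of a prior
    over target direction with the immediate sensory (likelihood)
    estimate; returns the posterior mean and SD."""
    w_prior = sd_prior ** -2
    w_like = sd_like ** -2
    mu_post = (w_prior * mu_prior + w_like * mu_like) / (w_prior + w_like)
    sd_post = (w_prior + w_like) ** -0.5
    return mu_post, sd_post

# Narrow prior (SD = 10 deg) vs. wide prior (SD = 45 deg), with a noisy
# RDK likelihood (SD = 30 deg, illustrative) centered 40 deg away:
mu_narrow, _ = combine_prior_likelihood(0.0, 10.0, 40.0, 30.0)
mu_wide, _ = combine_prior_likelihood(0.0, 45.0, 40.0, 30.0)
# The narrow, more reliable prior pulls the estimate much closer to the
# prior direction than the wide prior does (4 deg vs. ~28 deg here).
```

This is why, in the study, pursuit took longer to transition to sensory dominance with the narrower prior: a more reliable prior retains more weight until the likelihood sharpens.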
Affiliation(s)
- Jason F Rubinstein
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Manish Singh
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
3
Zheng Q, Gu Y. From Multisensory Integration to Multisensory Decision-Making. Adv Exp Med Biol 2024; 1437:23-35. PMID: 38270851; DOI: 10.1007/978-981-99-7611-9_2.
Abstract
Organisms live in a dynamic environment in which sensory information from multiple sources is ever changing. A conceptually complex task for the organism is to accumulate evidence across sensory modalities and over time, a process known as multisensory decision-making. This is a relatively new concept, in that previous research has largely been conducted in two parallel disciplines: much effort has been devoted either to sensory integration across modalities using activity summed over a duration of time, or to decision-making with only one sensory modality evolving over time. Recently, a few neurophysiological studies have emerged that examine how information from different sensory modalities is processed, accumulated, and integrated over time in decision-related areas such as the parietal and frontal lobes of mammals. In this review, we summarize and comment on these studies, which combine the two long-standing parallel fields of multisensory integration and decision-making, and we show how the new findings provide a more complete understanding of the neural mechanisms mediating multisensory information processing.
Affiliation(s)
- Qihao Zheng
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Yong Gu
- Systems Neuroscience, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
4
Newman PM, Qi Y, Mou W, McNamara TP. Statistically Optimal Cue Integration During Human Spatial Navigation. Psychon Bull Rev 2023; 30:1621-1642. PMID: 37038031; DOI: 10.3758/s13423-023-02254-w.
Abstract
In 2007, Cheng and colleagues published their influential review wherein they analyzed the literature on spatial cue interaction during navigation through a Bayesian lens, and concluded that models of optimal cue integration often applied in psychophysical studies could explain cue interaction during navigation. Since then, numerous empirical investigations have been conducted to assess the degree to which human navigators are optimal when integrating multiple spatial cues during a variety of navigation-related tasks. In the current review, we discuss the literature on human cue integration during navigation that has been published since Cheng et al.'s original review. Evidence from most studies demonstrate optimal navigation behavior when humans are presented with multiple spatial cues. However, applications of optimal cue integration models vary in their underlying assumptions (e.g., uninformative priors and decision rules). Furthermore, cue integration behavior depends in part on the nature of the cues being integrated and the navigational task (e.g., homing versus non-home goal localization). We discuss the implications of these models and suggest directions for future research.
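The "optimal cue integration" these studies test is maximum-likelihood combination: each cue is weighted by its relative reliability, and the combined estimate is more reliable than any single cue. A minimal sketch (the cue names and numbers are illustrative, not from any particular study in the review):

```python
import numpy as np

def optimal_integration(estimates, sds):
    """Maximum-likelihood integration of independent spatial cues:
    weights proportional to inverse variance; the combined SD is
    1/sqrt(sum of inverse variances)."""
    inv_var = np.asarray(sds, dtype=float) ** -2
    weights = inv_var / inv_var.sum()
    combined = float(weights @ np.asarray(estimates, dtype=float))
    combined_sd = float(inv_var.sum() ** -0.5)
    return combined, combined_sd, weights

# Hypothetical homing task: a landmark cue (SD = 2) vs. a
# path-integration cue (SD = 4), in arbitrary units.
est, sd, w = optimal_integration(estimates=[10.0, 20.0], sds=[2.0, 4.0])
# The landmark gets 4x the weight of path integration (0.8 vs. 0.2),
# and the combined SD falls below 2, the better single-cue SD.
```

Departures from these predicted weights or from the predicted variance reduction are exactly what the reviewed studies use to diagnose sub-optimal integration.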
Affiliation(s)
- Phillip M Newman
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA
- Yafei Qi
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
- Weimin Mou
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
- Timothy P McNamara
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA
5
Kondo T, Hirao Y, Narumi T, Amemiya T. Effects of bone-conducted vibration stimulation of various frequencies on the vertical vection. Sci Rep 2023; 13:15759. PMID: 37735202; PMCID: PMC10514326; DOI: 10.1038/s41598-023-42589-x.
Abstract
Illusory self-motion ("vection") has been used to convey a sense of movement in virtual reality (VR) and similar applications, and inducing a stronger sense of movement is a central goal of vection research. Bone-conducted vibration (BCV) is a compact and generally well-tolerated method for enhancing the sense of movement in VR, but its effects on vection have not been studied extensively. Here, we conducted two experiments to investigate the effect of BCV on upward vection, under the hypothesis that BCV stimulation of the mastoid processes introduces noise into the vestibular system and thereby enhances visually induced self-motion perception. The experiments focused on the effects of BCV stimuli of different frequencies on the vection experience. The results suggested that 500 Hz BCV acted as more effective vestibular noise than BCV at other frequencies and improved the self-motion sensation. This study thus characterizes the frequency dependence of BCV's effect on vection and informs the design of BCV-based self-motion presentation in VR.
Affiliation(s)
- Tetsuta Kondo
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, 1138656, Japan
- Yutaro Hirao
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, 1138656, Japan
- Takuji Narumi
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, 1138656, Japan
- Tomohiro Amemiya
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, 1138656, Japan
- Information Technology Center, The University of Tokyo, Tokyo, 1138658, Japan
- Virtual Reality Educational Research Center, The University of Tokyo, Tokyo, 1138656, Japan
6
Rosenberg A, Thompson LW, Doudlah R, Chang TY. Neuronal Representations Supporting Three-Dimensional Vision in Nonhuman Primates. Annu Rev Vis Sci 2023; 9:337-359. PMID: 36944312; DOI: 10.1146/annurev-vision-111022-123857.
Abstract
The visual system must reconstruct the dynamic, three-dimensional (3D) world from ambiguous two-dimensional (2D) retinal images. In this review, we synthesize current literature on how the visual system of nonhuman primates performs this transformation through multiple channels within the classically defined dorsal (where) and ventral (what) pathways. Each of these channels is specialized for processing different 3D features (e.g., the shape, orientation, or motion of objects, or the larger scene structure). Despite the common goal of 3D reconstruction, neurocomputational differences between the channels impose distinct information-limiting constraints on perception. Convergent evidence further points to the little-studied area V3A as a potential branchpoint from which multiple 3D-fugal processing channels diverge. We speculate that the expansion of V3A in humans may have supported the emergence of advanced 3D spatial reasoning skills. Lastly, we discuss future directions for exploring 3D information transmission across brain areas and experimental approaches that can further advance the understanding of 3D vision.
Affiliation(s)
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Lowell W Thompson
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Raymond Doudlah
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Ting-Yu Chang
- School of Medicine, National Defense Medical Center, Taipei, Taiwan
7
Arshad I, Gallagher M, Ferrè ER. Visuo-vestibular conflicts within the roll plane modulate multisensory verticality perception. Neurosci Lett 2023; 792:136963. PMID: 36375625; DOI: 10.1016/j.neulet.2022.136963.
Abstract
The integration of visuo-vestibular information is crucial when interacting with the external environment. Under normal circumstances, vision and vestibular signals provide corroborating information, for example regarding the direction and speed of self-motion. However, conflicts in visuo-vestibular signalling, such as optic flow presented to a stationary observer, can change subsequent processing in either modality. While previous studies have demonstrated the impact of sensory conflict on unisensory visual or vestibular percepts, here we investigated whether visuo-vestibular conflicts impact sensitivity to multisensory percepts, specifically verticality. Participants were exposed to a visuo-vestibular conflicting or non-conflicting motion adaptor before completing a Vertical Detection Task. Sensitivity to vertical stimuli was reduced following visuo-vestibular conflict. No significant differences in criterion were found. Our findings suggest that visuo-vestibular conflicts not only modulate processing in unimodal channels, but also broader multisensory percepts, which may have implications for higher-level processing dependent on the integration of visual and vestibular signals.
Affiliation(s)
- I Arshad
- Department of Psychology, Royal Holloway University of London, United Kingdom; Department of Psychological Sciences, Birkbeck University of London, United Kingdom
- M Gallagher
- School of Psychology, Cardiff University, United Kingdom; School of Psychology, University of Kent, United Kingdom
- E R Ferrè
- Department of Psychological Sciences, Birkbeck University of London, United Kingdom
8
Abekawa N, Doya K, Gomi H. Body and visual instabilities functionally modulate implicit reaching corrections. iScience 2022; 26:105751. PMID: 36590158; PMCID: PMC9800534; DOI: 10.1016/j.isci.2022.105751.
Abstract
Hierarchical schemes of brain information processing have often assumed that flexible but slow voluntary action modulates a direct sensorimotor process that can quickly generate reactions during dynamic interaction. Here we show that the quick visuomotor process driving manual movement is modulated by postural- and visual-instability contexts, which are related but remote states preceding the manual movement. A preceding unstable postural context significantly enhanced the reflexive manual response induced by large-field visual motion during hand reaching, whereas the response was clearly weakened by a preceding random-visual-motion context. These modulations are successfully explained by a Bayesian optimal formulation in which the manual response elicited by visual motion is a compensatory response to the estimated self-motion, as shaped by the preceding contexts. Our findings suggest an implicit and functional mechanism that links the variability and uncertainty of remote states to quick sensorimotor transformations.
Affiliation(s)
- Naotoshi Abekawa
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa, 243-0198, Japan
- Kenji Doya
- Okinawa Institute of Science and Technology Graduate University, Okinawa 904-0495, Japan
- Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa, 243-0198, Japan (corresponding author)
9
Bill J, Gershman SJ, Drugowitsch J. Visual motion perception as online hierarchical inference. Nat Commun 2022; 13:7403. PMID: 36456546; PMCID: PMC9715570; DOI: 10.1038/s41467-022-34805-5.
Abstract
Identifying the structure of motion relations in the environment is critical for navigation, tracking, prediction, and pursuit. Yet, little is known about the mental and neural computations that allow the visual system to infer this structure online from a volatile stream of visual information. We propose online hierarchical Bayesian inference as a principled solution for how the brain might solve this complex perceptual task. We derive an online Expectation-Maximization algorithm that explains human percepts qualitatively and quantitatively for a diverse set of stimuli, covering classical psychophysics experiments, ambiguous motion scenes, and illusory motion displays. We thereby identify normative explanations for the origin of human motion structure perception and make testable predictions for future psychophysics experiments. The proposed online hierarchical inference model furthermore affords a neural network implementation which shares properties with motion-sensitive cortical areas and motivates targeted experiments to reveal the neural representations of latent structure.
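The online inference the paper formalizes (via an online EM algorithm) updates latent motion estimates sample by sample rather than from a stored batch. As a much simpler illustration of the same principle, not the paper's model, a scalar Kalman filter tracks a latent velocity from a volatile observation stream, weighting each new sample by its reliability relative to the current belief (all parameter values illustrative):

```python
import numpy as np

def kalman_track(observations, q=0.5, r=2.0):
    """Minimal online inference on a noisy stream: a scalar Kalman
    filter tracking a latent velocity modeled as a random walk with
    process variance q, observed with noise variance r."""
    mu, var = 0.0, 10.0           # initial belief (mean, variance)
    trace = []
    for y in observations:
        var += q                  # predict: the latent state may drift
        k = var / (var + r)       # gain = relative reliability of y
        mu += k * (y - mu)        # update belief toward the sample
        var *= 1.0 - k
        trace.append(mu)
    return np.array(trace)

rng = np.random.default_rng(0)
obs = 3.0 + rng.normal(0.0, 2.0 ** 0.5, size=200)  # true velocity = 3
est = kalman_track(obs)
# After the initial transient, estimates hover near the true velocity.
```

The hierarchical model in the paper does this jointly over a tree of shared and individual motion components; the one-dimensional case above only shows the online, reliability-weighted update at its core.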
Affiliation(s)
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA; Department of Psychology, Harvard University, Cambridge, MA, USA
- Samuel J Gershman
- Department of Psychology, Harvard University, Cambridge, MA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA; Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA
10
Gabriel GA, Harris LR, Henriques DYP, Pandi M, Campos JL. Multisensory visual-vestibular training improves visual heading estimation in younger and older adults. Front Aging Neurosci 2022; 14:816512. PMID: 36092809; PMCID: PMC9452741; DOI: 10.3389/fnagi.2022.816512.
Abstract
Self-motion perception (e.g., when walking/driving) relies on the integration of multiple sensory cues including visual, vestibular, and proprioceptive signals. Changes in the efficacy of multisensory integration have been observed in older adults (OA), which can sometimes lead to errors in perceptual judgments and have been associated with functional declines such as increased falls risk. The objectives of this study were to determine whether passive, visual-vestibular self-motion heading perception could be improved by providing feedback during multisensory training, and whether training-related effects might be more apparent in OAs vs. younger adults (YA). We also investigated the extent to which training might transfer to improved standing-balance. OAs and YAs were passively translated and asked to judge their direction of heading relative to straight-ahead (left/right). Each participant completed three conditions: (1) vestibular-only (passive physical motion in the dark), (2) visual-only (cloud-of-dots display), and (3) bimodal (congruent vestibular and visual stimulation). Measures of heading precision and bias were obtained for each condition. Over the course of 3 days, participants were asked to make bimodal heading judgments and were provided with feedback (“correct”/“incorrect”) on 900 training trials. Post-training, participants’ biases, and precision in all three sensory conditions (vestibular, visual, bimodal), and their standing-balance performance, were assessed. Results demonstrated improved overall precision (i.e., reduced JNDs) in heading perception after training. Pre- vs. post-training difference scores showed that improvements in JNDs were only found in the visual-only condition. 
Particularly notable is that 27% of OAs initially could not discriminate their heading at all in the visual-only condition pre-training, but subsequently obtained thresholds in the visual-only condition post-training that were similar to those of the other participants. While OAs seemed to show optimal integration pre- and post-training (i.e., did not show significant differences between predicted and observed JNDs), YAs only showed optimal integration post-training. There were no significant effects of training for bimodal or vestibular-only heading estimates, nor standing-balance performance. These results indicate that it may be possible to improve unimodal (visual) heading perception using a multisensory (visual-vestibular) training paradigm. The results may also help to inform interventions targeting tasks for which effective self-motion perception is important.
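The "predicted vs. observed JNDs" comparison used to diagnose optimal integration follows from the inverse-variance rule: the predicted bimodal JND is computed from the two unimodal JNDs. A one-line sketch (the threshold values are illustrative, not the study's data):

```python
def predicted_bimodal_jnd(jnd_visual, jnd_vestibular):
    """Optimal-integration prediction: bimodal JND from the two
    unimodal JNDs (treating JNDs as proportional to estimator SDs)."""
    return (jnd_visual ** -2 + jnd_vestibular ** -2) ** -0.5

# Hypothetical unimodal heading thresholds of 4 and 3 degrees predict a
# bimodal threshold of 2.4 degrees, below either cue alone.
pred = predicted_bimodal_jnd(4.0, 3.0)
```

Observing a bimodal JND statistically indistinguishable from this prediction is the criterion by which the study classified participants as integrating optimally.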
Affiliation(s)
- Grace A. Gabriel
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Laurence R. Harris
- Department of Psychology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Denise Y. P. Henriques
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Kinesiology, York University, Toronto, ON, Canada
- Maryam Pandi
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Jennifer L. Campos (corresponding author)
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
11
Mao Y, Pan L, Li W, Xiao S, Qi R, Zhao L, Wang J, Cai Y. Stroboscopic lighting with intensity synchronized to rotation velocity alleviates motion sickness gastrointestinal symptoms and motor disorders in rats. Front Integr Neurosci 2022; 16:941947. PMID: 35965602; PMCID: PMC9366139; DOI: 10.3389/fnint.2022.941947.
Abstract
Motion sickness (MS) is caused by a mismatch between the conflicting motion percepts produced by motion challenges and the expected "internal model" of the integrated motion sensory pattern formed in the brain under normal conditions. Stroboscopic light can reduce MS nausea by increasing fixation ability for gaze stabilization, thereby reducing the visuo-vestibular conflict triggered by distorted vision during locomotion. This study examined whether MS induced by passive motion could be alleviated by stroboscopic light whose emission rate and intensity were synchronized to the acceleration-deceleration phase of the motion. We assessed the effects of synchronized and unsynchronized stroboscopic light (SSL: 6 cycle/min; uSSL: 2, 4, and 8 cycle/min) on MS-related gastrointestinal symptoms (conditioned gaping and defecation responses), motor disorders (hypoactivity and balance disturbance), and central Fos protein expression in rats receiving Ferris wheel-like rotation (6 cycle/min). The effects of color temperature and peak light intensity were also examined. We found that SSL (6 cycle/min) significantly reduced rotation-induced conditioned gaping and defecation responses and alleviated the rotation-induced decline in spontaneous locomotor activity and disruption of balance-beam performance. The efficacy of SSL against MS behavioral responses was affected by peak light intensity but not color temperature. The uSSL (4 and 8 cycle/min) alleviated only the defecation response, and less efficiently than SSL, while uSSL (2 cycle/min) showed no beneficial effect in MS animals. SSL, but not uSSL, inhibited Fos protein expression in the caudal vestibular nucleus, the nucleus of the solitary tract, the parabrachial nucleus, the central nucleus of the amygdala, and the paraventricular nucleus of the hypothalamus, whereas uSSL (4 and 8 cycle/min) decreased Fos expression only in the paraventricular nucleus of the hypothalamus. These results suggest that stroboscopic light synchronized to the motion pattern can alleviate MS gastrointestinal symptoms and motor disorders and inhibit vestibular-autonomic pathways. Our study supports motion-synchronized stroboscopic light as a potential countermeasure against MS under abnormal motion conditions.
12
Bruschetta M, de Winkel KN, Mion E, Pretto P, Beghi A, Bülthoff HH. Assessing the contribution of active somatosensory stimulation to self-acceleration perception in dynamic driving simulators. PLoS One 2021; 16:e0259015. PMID: 34793458; PMCID: PMC8601569; DOI: 10.1371/journal.pone.0259015.
Abstract
In dynamic driving simulators, the experience of operating a vehicle is reproduced by combining visual stimuli generated by graphical rendering with inertial stimuli generated by platform motion. Due to inherent limitations of the platform workspace, inertial stimulation is subject to shortcomings in the form of missing cues, false cues, and/or scaling errors, which negatively affect simulation fidelity. In the present study, we aim at quantifying the relative contribution of an active somatosensory stimulation to the perceived intensity of self-motion, relative to other sensory systems. Participants judged the intensity of longitudinal and lateral driving maneuvers in a dynamic driving simulator in passive driving conditions, with and without additional active somatosensory stimulation, as provided by an Active Seat (AS) and Active Belts (AB) integrated system (ASB). The results show that ASB enhances the perceived intensity of sustained decelerations, and increases the precision of acceleration perception overall. Our findings are consistent with models of perception, and indicate that active somatosensory stimulation can indeed be used to improve simulation fidelity.
Affiliation(s)
- Mattia Bruschetta
- Department of Information Engineering, University of Padova, Padova, Italy
- Ksander N. de Winkel
- TU Delft, Cognitive Robotics Delft, Delft, Netherlands
- Department of Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Enrico Mion (corresponding author)
- Department of Information Engineering, University of Padova, Padova, Italy
- Department of Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Alessandro Beghi
- Department of Information Engineering, University of Padova, Padova, Italy
- Heinrich H. Bülthoff
- Department of Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
13
Abstract
We perceive our environment through multiple independent sources of sensory input. The brain is tasked with deciding whether multiple signals are produced by the same or different events (i.e., solve the problem of causal inference). Here, we train a neural network to solve causal inference by either combining or separating visual and vestibular inputs in order to estimate self- and scene motion. We find that the network recapitulates key neurophysiological (i.e., congruent and opposite neurons) and behavioral (e.g., reliability-based cue weighting) properties of biological systems. We show how congruent and opposite neurons support motion estimation and how the balance in activity between these subpopulations determines whether to combine or separate multisensory signals.

Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction ("congruent" neurons), while others prefer opposing directions ("opposite" neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. 
Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
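The integrate-or-separate computation described in this abstract can be sketched as a minimal Bayesian causal-inference rule. This is an illustrative sketch under simplifying Gaussian assumptions, not the authors' trained network; the headings, variances, and the uniform separate-cause likelihood below are hypothetical choices for the example.

```python
import math

def fuse(s_vis, s_vest, var_vis, var_vest):
    """Reliability-weighted (inverse-variance) fusion of two heading cues."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    return w_vis * s_vis + (1 - w_vis) * s_vest

def p_common(s_vis, s_vest, var_vis, var_vest, prior_common=0.5):
    """Posterior probability that both cues share one cause,
    assuming independent Gaussian noise on each cue."""
    # Likelihood of the observed cue discrepancy under a common cause
    var_sum = var_vis + var_vest
    like_c = math.exp(-(s_vis - s_vest) ** 2 / (2 * var_sum)) / math.sqrt(2 * math.pi * var_sum)
    # Under separate causes the discrepancy is unconstrained; approximate
    # its likelihood as uniform over a hypothetical +/- 90 deg heading range.
    like_s = 1 / 180.0
    return like_c * prior_common / (like_c * prior_common + like_s * (1 - prior_common))

def estimate_heading(s_vis, s_vest, var_vis, var_vest):
    """Model-averaged heading estimate: fuse when a common cause is likely,
    otherwise fall back on the vestibular cue alone."""
    pc = p_common(s_vis, s_vest, var_vis, var_vest)
    return pc * fuse(s_vis, s_vest, var_vis, var_vest) + (1 - pc) * s_vest

# Small discrepancy: cues are integrated; large discrepancy: vision is discounted.
print(estimate_heading(2.0, 0.0, 4.0, 4.0))   # ≈ 0.95, near the fused estimate of 1.0
print(estimate_heading(40.0, 0.0, 4.0, 4.0))  # ≈ 0.0, near the vestibular cue alone
```

The "balance between congruent and opposite activity" described in the paper plays the role of `p_common` here: it decides how much of the fused estimate survives in the final percept.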
14
Qi RR, Xiao SF, Pan LL, Mao YQ, Su Y, Wang LJ, Cai YL. Profiling of cybersickness and balance disturbance induced by virtual ship motion immersion combined with galvanic vestibular stimulation. APPLIED ERGONOMICS 2021; 92:103312. [PMID: 33338973 DOI: 10.1016/j.apergo.2020.103312] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Revised: 11/10/2020] [Accepted: 11/16/2020] [Indexed: 06/12/2023]
Abstract
The profiles of cybersickness and balance disturbance induced by virtual ship motion, alone and in combination with galvanic vestibular stimulation (GVS), remain unclear. Subjects were exposed to a ship deck visual scene under a simulated Degree 5 or Degree 3 sea condition using a head-mounted virtual reality display, with or without GVS. Virtual ship motion at Degree 5 induced significant cybersickness with the symptom profile: nausea syndrome > central (headache and dizziness) > peripheral (cold sweating) > increased salivation. During a single session of virtual ship motion exposure, GVS aggravated balance disturbance but did not affect most cybersickness symptoms except cold sweating. Repeated exposure induced cybersickness habituation, which was delayed by GVS, while the temporal change in balance disturbance was unaffected. These results suggest that vestibular inputs play different roles in cybersickness and balance disturbance during virtual reality exposure. GVS might not serve as a potential countermeasure against cybersickness induced by virtual ship motion.
Affiliation(s)
- Rui-Rui Qi
- Department of Nautical Injury Prevention, Faculty of Navy Medicine, Naval Medical University, Shanghai, China
- Shui-Feng Xiao
- Department of Nautical Injury Prevention, Faculty of Navy Medicine, Naval Medical University, Shanghai, China
- Lei-Lei Pan
- Department of Nautical Injury Prevention, Faculty of Navy Medicine, Naval Medical University, Shanghai, China
- Yu-Qi Mao
- Department of Nautical Injury Prevention, Faculty of Navy Medicine, Naval Medical University, Shanghai, China
- Yang Su
- Department of Nautical Injury Prevention, Faculty of Navy Medicine, Naval Medical University, Shanghai, China
- Lin-Jie Wang
- Department of Nautical Injury Prevention, Faculty of Navy Medicine, Naval Medical University, Shanghai, China
- Yi-Ling Cai
- Department of Nautical Injury Prevention, Faculty of Navy Medicine, Naval Medical University, Shanghai, China
15
Keshner EA, Lamontagne A. The Untapped Potential of Virtual Reality in Rehabilitation of Balance and Gait in Neurological Disorders. FRONTIERS IN VIRTUAL REALITY 2021; 2:641650. [PMID: 33860281 PMCID: PMC8046008 DOI: 10.3389/frvir.2021.641650] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Dynamic systems theory transformed our understanding of motor control by recognizing the continual interaction between the organism and the environment. Movement could no longer be visualized simply as a response to a pattern of stimuli or as a demonstration of prior intent; movement is context dependent and is continuously reshaped by the ongoing dynamics of the world around us. Virtual reality is one methodological variable that allows us to control and manipulate that environmental context. A large body of literature exists to support the impact of visual flow, visual conditions, and visual perception on the planning and execution of movement. In rehabilitative practice, however, this technology has been employed mostly as a tool for motivation and enjoyment of physical exercise. The opportunity to modulate motor behavior through the parameters of the virtual world is often ignored in practice. In this article we present the results of experiments from our laboratories and from others demonstrating that presenting particular characteristics of the virtual world through different sensory modalities will modify balance and locomotor behavior. We will discuss how movement in the virtual world opens a window into the motor planning processes and informs us about the relative weighting of visual and somatosensory signals. Finally, we discuss how these findings should influence future treatment design.
Affiliation(s)
- Emily A. Keshner
- Department of Health and Rehabilitation Sciences, Temple University, Philadelphia, PA, United States
- Correspondence: Emily A. Keshner,
- Anouk Lamontagne
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada
- Virtual Reality and Mobility Laboratory, CISSS Laval—Jewish Rehabilitation Hospital Site of the Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Laval, QC, Canada
16
Tootell RBH, Nasr S. Scotopic Vision Is Selectively Processed in Thick-Type Columns in Human Extrastriate Cortex. Cereb Cortex 2021; 31:1163-1181. [PMID: 33073288 PMCID: PMC7786355 DOI: 10.1093/cercor/bhaa284] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 07/25/2020] [Accepted: 08/17/2020] [Indexed: 11/26/2022] Open
Abstract
In humans, visual stimuli can be perceived across an enormous range of light levels. Evidence suggests that different neural mechanisms process different subdivisions of this range. For instance, in the retina, stimuli presented at very low (scotopic) light levels activate rod photoreceptors, whereas cone photoreceptors are activated relatively more at higher (photopic) light levels. Similarly, different retinal ganglion cells are activated by scotopic versus photopic stimuli. However, in the brain, it remains unknown whether scotopic versus photopic information is: 1) processed in distinct channels, or 2) neurally merged. Using high-resolution functional magnetic resonance imaging at 7 T, we confirmed the first hypothesis. We first localized thick versus thin-type columns within areas V2, V3, and V4, based on photopic selectivity to motion versus color, respectively. Next, we found that scotopic stimuli selectively activated thick- (compared to thin-) type columns in V2 and V3 (in measurements of both overlap and amplitude) and V4 (based on overlap). Finally, we found stronger resting-state functional connections between scotopically dominated area MT with thick- (compared to thin-) type columns in areas V2, V3, and V4. We conclude that scotopic stimuli are processed in partially segregated parallel streams, emphasizing magnocellular influence, from retina through middle stages of visual cortex.
Affiliation(s)
- Roger B H Tootell
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA 02114, USA; Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Shahin Nasr
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA 02114, USA; Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
17
The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. [PMID: 33127626 PMCID: PMC7688306 DOI: 10.1523/eneuro.0259-20.2020] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2020] [Revised: 10/17/2020] [Accepted: 10/22/2020] [Indexed: 12/03/2022] Open
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also found that neurons with more rotation-tolerant heading tuning are typically less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
18
Abstract
We recently showed that motion dynamics greatly enhance the magnitude of certain size contrast illusions, such as the Ebbinghaus and Delboeuf illusions. Here, we extend our study of the effect of motion dynamics on size illusions through a novel dynamic corridor illusion, in which a single target translates along a corridor background. Across three psychophysical experiments, we quantify the effects of stimulus dynamics on the Ebbinghaus and corridor illusions across different viewing conditions. The results revealed that stimulus dynamics had opposite effects on these different classes of size illusions. Whereas dynamic motion enhanced the magnitude of the Ebbinghaus illusion, it attenuated the magnitude of the corridor illusion. Our results highlight precision-driven weighting of visual cues by neural circuits computing perceived object size. This hypothesis is consistent with observations beyond size perception and may represent a more general principle of cue integration in the visual system.
19
Gallagher M, Choi R, Ferrè ER. Multisensory Interactions in Virtual Reality: Optic Flow Reduces Vestibular Sensitivity, but Only for Congruent Planes of Motion. Multisens Res 2020; 33:625-644. [PMID: 31972542 DOI: 10.1163/22134808-20201487] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2019] [Accepted: 12/02/2019] [Indexed: 11/19/2022]
Abstract
During exposure to Virtual Reality (VR) a sensory conflict may be present, whereby the visual system signals that the user is moving in a certain direction with a certain acceleration, while the vestibular system signals that the user is stationary. In order to reduce this conflict, the brain may down-weight vestibular signals, which may in turn affect vestibular contributions to self-motion perception. Here we investigated whether vestibular perceptual sensitivity is affected by VR exposure. Participants' ability to detect artificial vestibular inputs was measured during optic flow or random motion stimuli on a VR head-mounted display. Sensitivity to vestibular signals was significantly reduced when optic flow stimuli were presented, but importantly this was only the case when both visual and vestibular cues conveyed information on the same plane of self-motion. Our results suggest that the brain dynamically adjusts the weight given to incoming sensory cues for self-motion in VR; however, this is dependent on the congruency of visual and vestibular cues.
Affiliation(s)
- Reno Choi
- Royal Holloway, University of London, Egham, UK
20
Schmitt C, Baltaretu BR, Crawford JD, Bremmer F. A Causal Role of Area hMST for Self-Motion Perception in Humans. Cereb Cortex Commun 2020; 1:tgaa042. [PMID: 34296111 PMCID: PMC8152865 DOI: 10.1093/texcom/tgaa042] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Revised: 07/12/2020] [Accepted: 07/22/2020] [Indexed: 02/04/2023] Open
Abstract
Previous studies in the macaque monkey have provided clear causal evidence for an involvement of the medial-superior-temporal area (MST) in the perception of self-motion. These studies also revealed an overrepresentation of contraversive heading. Human imaging studies have identified a functional equivalent (hMST) of macaque area MST. Yet, causal evidence of hMST in heading perception is lacking. We employed neuronavigated transcranial magnetic stimulation (TMS) to test for such a causal relationship. We expected TMS over hMST to induce increased perceptual variance (i.e., impaired precision), while leaving mean heading perception (accuracy) unaffected. We presented 8 human participants with an optic flow stimulus simulating forward self-motion across a ground plane in one of 3 directions. Participants indicated perceived heading. In 57% of the trials, TMS pulses were applied, temporally centered on self-motion onset. The TMS stimulation site was either right-hemisphere hMST, identified by a functional magnetic resonance imaging (fMRI) localizer, or a control area just outside the fMRI localizer activation. As predicted, TMS over area hMST, but not over the control area, increased response variance of perceived heading as compared with no-TMS stimulation trials. As hypothesized, this effect was strongest for contraversive self-motion. These data provide the first causal evidence for a critical role of hMST in visually guided navigation.
Affiliation(s)
- Constanze Schmitt
- Department of Neurophysics, University of Marburg, Marburg, Germany; Center for Mind, Brain and Behavior-CMBB, University of Marburg and Justus-Liebig-University Giessen, Germany; International Research Training Group 1901: The Brain in Action
- Bianca R Baltaretu
- International Research Training Group 1901: The Brain in Action; Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- International Research Training Group 1901: The Brain in Action; Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada; Departments of Psychology, Biology, Kinesiology and Health Science, York University, Toronto, Ontario, Canada
- Frank Bremmer
- Department of Neurophysics, University of Marburg, Marburg, Germany; Center for Mind, Brain and Behavior-CMBB, University of Marburg and Justus-Liebig-University Giessen, Germany; International Research Training Group 1901: The Brain in Action
21
Nguyen NT, Takakura H, Nishijo H, Ueda N, Ito S, Fujisaka M, Akaogi K, Shojaku H. Cerebral Hemodynamic Responses to the Sensory Conflict Between Visual and Rotary Vestibular Stimuli: An Analysis With a Multichannel Near-Infrared Spectroscopy (NIRS) System. Front Hum Neurosci 2020; 14:125. [PMID: 32372931 PMCID: PMC7187689 DOI: 10.3389/fnhum.2020.00125] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2019] [Accepted: 03/19/2020] [Indexed: 12/11/2022] Open
Abstract
Sensory conflict among visual, vestibular, and somatosensory information induces vertiginous sensation and postural instability. To elucidate the cognitive mechanisms of the integration between the visual and vestibular cues in humans, we analyzed the cortical hemodynamic responses during sensory conflict between visual and horizontal rotatory vestibular stimulation using a multichannel near-infrared spectroscopy (NIRS) system. The subjects sat on a rotatory chair that was accelerated at 3°/s² for 20 s to the right or left, kept rotating at 60°/s for 80 s, and then decelerated at 3°/s² for 20 s. The subjects were instructed to watch white stripes projected on a screen surrounding the chair during the acceleration and deceleration periods. The white stripes moved in two ways; in the "congruent" condition, the stripes moved in the opposite direction of chair rotation at 3°/s² (i.e., natural visual stimulation), whereas in the "incongruent" condition, the stripes moved in the same direction of chair rotation at 3°/s² (i.e., conflicted visual stimulation). The cortical hemodynamic activity was recorded from the bilateral temporoparietal regions. Statistical analyses using NIRS-SPM software indicated that hemodynamic activity increased in the bilateral temporoparietal junctions (TPJs) and human MT+ complex, including the middle temporal (MT) area and medial superior temporal (MST) area, in the incongruent condition. Furthermore, the subjective strength of the vertiginous sensation was negatively correlated with hemodynamic activity in the dorsal part of the supramarginal gyrus (SMG) in and around the intraparietal sulcus (IPS). These results suggest that sensory conflict between the visual and vestibular stimuli promotes cortical cognitive processes in the cortical network consisting of the TPJ, the middle temporal gyrus (MTG), and IPS, which might contribute to self-motion perception to maintain a sense of balance or equilibrioception during sensory conflict.
Affiliation(s)
- Nghia Trong Nguyen
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Hiromasa Takakura
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Hisao Nishijo
- System Emotional Science Laboratory, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Naoko Ueda
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Shinsuke Ito
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Michiro Fujisaka
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Katsuichi Akaogi
- Department of Otorhinolaryngology, Toyama Red Cross Hospital, Toyama, Japan
- Hideo Shojaku
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
22
Yakubovich S, Israeli-Korn S, Halperin O, Yahalom G, Hassin-Baer S, Zaidel A. Visual self-motion cues are impaired yet overweighted during visual-vestibular integration in Parkinson's disease. Brain Commun 2020; 2:fcaa035. [PMID: 32954293 PMCID: PMC7425426 DOI: 10.1093/braincomms/fcaa035] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2019] [Revised: 02/17/2020] [Accepted: 03/11/2020] [Indexed: 11/25/2022] Open
Abstract
Parkinson's disease is prototypically a movement disorder. Although perceptual and motor functions are highly interdependent, much less is known about perceptual deficits in Parkinson's disease, which are less observable by nature, and might go unnoticed if not tested directly. It is therefore imperative to seek and identify these, to fully understand the challenges facing patients with Parkinson's disease. Also, perceptual deficits may be related to motor symptoms. Posture, gait and balance, affected in Parkinson's disease, rely on veridical perception of one's own motion (self-motion) in space. Yet it is not known whether self-motion perception is impaired in Parkinson's disease. Using a well-established multisensory paradigm of heading discrimination (that has not been previously applied to Parkinson's disease), we tested unisensory visual and vestibular self-motion perception, as well as multisensory integration of visual and vestibular cues, in 19 Parkinson's disease, 23 healthy age-matched and 20 healthy young-adult participants. After experiencing vestibular (on a motion platform), visual (optic flow) or multisensory (combined visual-vestibular) self-motion stimuli at various headings, participants reported whether their perceived heading was to the right or left of straight ahead. Parkinson's disease participants and age-matched controls were tested twice (Parkinson's disease participants on and off medication). Parkinson's disease participants demonstrated significantly impaired visual self-motion perception compared with age-matched controls on both visits, irrespective of medication status. Young controls performed slightly (but not significantly) better than age-matched controls and significantly better than the Parkinson's disease group. The visual self-motion perception impairment in Parkinson's disease correlated significantly with clinical disease severity. By contrast, vestibular performance was unimpaired in Parkinson's disease. 
Remarkably, despite impaired visual self-motion perception, Parkinson's disease participants significantly overweighted the visual cues during multisensory (visual-vestibular) integration (compared with Bayesian predictions of optimal integration) and significantly more than controls. These findings indicate that self-motion perception in Parkinson's disease is affected by impaired visual cues and by suboptimal visual-vestibular integration (overweighting of visual cues). Notably, vestibular self-motion perception was unimpaired. Thus, visual self-motion perception is specifically impaired in early-stage Parkinson's disease. This can impact Parkinson's disease diagnosis and subtyping. Overweighting of visual cues could reflect a general multisensory integration deficit in Parkinson's disease, or specific overestimation of visual cue reliability. Finally, impaired self-motion perception in Parkinson's disease may contribute to impaired balance and gait control. Future investigation into this connection might open up new avenues of alternative therapies to better treat these difficult symptoms.
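The "Bayesian predictions of optimal integration" benchmark used in this abstract can be made concrete with a short sketch. This is illustrative only; the threshold values below are hypothetical numbers, not the study's data.

```python
def optimal_visual_weight(sigma_vis, sigma_vest):
    """Bayes-optimal visual weight from unisensory thresholds (sigmas):
    the noisier cue receives the smaller weight."""
    return sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)

def predicted_combined_sigma(sigma_vis, sigma_vest):
    """Predicted combined threshold under optimal integration; always
    at or below the better unisensory threshold."""
    return (sigma_vis**2 * sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)) ** 0.5

# Hypothetical unisensory heading thresholds (deg): vision noisier than vestibular.
sigma_vis, sigma_vest = 6.0, 3.0
w_vis = optimal_visual_weight(sigma_vis, sigma_vest)
print(round(w_vis, 2))  # 0.2: the optimal prediction
# An empirical visual weight well above this prediction is what
# "overweighting of visual cues" means in the abstract above.
```

Fitting `w_vis` from combined-cue psychometric curves and comparing it against this prediction is the standard analysis in such paradigms; impaired vision (larger `sigma_vis`) should lower the optimal weight, so an elevated empirical weight is doubly anomalous.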
Affiliation(s)
- Sol Yakubovich
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Simon Israeli-Korn
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- The Neurology and Neurosurgery Department, The Sackler School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Orly Halperin
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Gilad Yahalom
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- Department of Neurology, Movement Disorders Clinic, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Sharon Hassin-Baer
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- The Neurology and Neurosurgery Department, The Sackler School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
23
Gallagher M, Dowsett R, Ferrè ER. Vection in virtual reality modulates vestibular-evoked myogenic potentials. Eur J Neurosci 2019; 50:3557-3565. [PMID: 31233640 DOI: 10.1111/ejn.14499] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Revised: 06/05/2019] [Accepted: 06/17/2019] [Indexed: 11/28/2022]
Abstract
The popularity of virtual reality (VR) has increased rapidly in recent years. While significant technological advancements are apparent, a troublesome problem with VR is that between 20% and 80% of users will experience unpleasant side effects such as nausea, disorientation, blurred vision, and headaches, a malady known as cybersickness. Cybersickness may be caused by a conflict between sensory signals for self-motion: while vision signals that the user is moving in a certain direction with certain acceleration, the vestibular organs provide no corroborating information. To resolve the sensory conflict, vestibular cues may be down-weighted, leading to an alteration of how the brain interprets actual vestibular information. This may account for the frequently reported after-effects of VR exposure. Here, we investigated whether exposure to vection in VR modulates vestibular processing. We measured vestibular-evoked myogenic potentials (VEMPs) during brief immersion in a vection-inducing VR environment presented via head-mounted display. We found changes in VEMP asymmetry ratio, with a substantial increase in VEMP amplitude recorded on the left sternocleidomastoid muscle following just one minute of exposure to vection in VR. Our results suggest that exposure to vection in VR modulates vestibular processing, which may explain common after-effects of VR.
Affiliation(s)
- Maria Gallagher
- Department of Psychology, Royal Holloway University of London, Egham, UK
- Ross Dowsett
- Department of Psychology, Royal Holloway University of London, Egham, UK
24
Cullen KE. Vestibular processing during natural self-motion: implications for perception and action. Nat Rev Neurosci 2019; 20:346-363. [PMID: 30914780 PMCID: PMC6611162 DOI: 10.1038/s41583-019-0153-1] [Citation(s) in RCA: 116] [Impact Index Per Article: 23.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
How the brain computes accurate estimates of our self-motion relative to the world and our orientation relative to gravity in order to ensure accurate perception and motor control is a fundamental neuroscientific question. Recent experiments have revealed that the vestibular system encodes this information during everyday activities using pathway-specific neural representations. Furthermore, new findings have established that vestibular signals are selectively combined with extravestibular information at the earliest stages of central vestibular processing in a manner that depends on the current behavioural goal. These findings have important implications for our understanding of the brain mechanisms that ensure accurate perception and behaviour during everyday activities and for our understanding of disorders of vestibular processing.
Affiliation(s)
- Kathleen E Cullen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
25
Rodriguez R, Crane BT. Effect of range of heading differences on human visual-inertial heading estimation. Exp Brain Res 2019; 237:1227-1237. [PMID: 30847539 DOI: 10.1007/s00221-019-05506-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2018] [Accepted: 03/01/2019] [Indexed: 11/29/2022]
Abstract
Both visual and inertial cues are salient in heading determination. However, optic flow can ambiguously represent self-motion or environmental motion. It is unclear how visual and inertial heading cues are determined to have a common cause and integrated versus perceived independently. In four experiments, visual and inertial headings were presented simultaneously, with ten subjects reporting visual or inertial headings in separate trial blocks. Experiment 1 examined inertial headings within 30° of straight ahead and visual headings that were offset by up to 60°. Perception of the inertial heading was shifted in the direction of the visual stimulus by as much as 35° by the 60° offset, while perception of the visual stimulus remained largely uninfluenced. Experiment 2 used a ±140° range of inertial headings with up to 120° of visual offset. This experiment found variable behavior between subjects, with most perceiving the sensory stimuli to be shifted towards an intermediate heading but a few perceiving the headings independently. The visual and inertial headings influenced each other even at the largest offsets. Experiments 3 and 4 had similar inertial headings to Experiments 1 and 2, respectively, except subjects reported the direction of environmental motion. Experiment 4 displayed similar perceptual influences as Experiment 2, but in Experiment 3 percepts were independent. The results suggest that visual and inertial stimuli tend to be perceived as having a common cause in most subjects at offsets up to 90°, although with significant variation between individuals. Limiting the range of inertial headings caused the visual heading to dominate perception.
Affiliation(s)
- Raul Rodriguez
- Department of Bioengineering, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA
- Benjamin T Crane
- Department of Bioengineering, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA; Department of Otolaryngology, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA; Department of Neuroscience, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA
26
Britton Z, Arshad Q. Vestibular and Multi-Sensory Influences Upon Self-Motion Perception and the Consequences for Human Behavior. Front Neurol 2019; 10:63. [PMID: 30899238 PMCID: PMC6416181 DOI: 10.3389/fneur.2019.00063] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2018] [Accepted: 01/17/2019] [Indexed: 11/16/2022] Open
Abstract
In this manuscript, we comprehensively review both the human and animal literature regarding vestibular and multi-sensory contributions to self-motion perception. This covers the anatomical basis and how and where the signals are processed at all levels from the peripheral vestibular system to the brainstem and cerebellum and finally to the cortex. Further, we consider how and where these vestibular signals are integrated with other sensory cues to facilitate self-motion perception. We conclude by demonstrating the wide-ranging influences of the vestibular system and self-motion perception upon behavior, namely eye movement, postural control, and spatial awareness as well as new discoveries that such perception can impact upon numerical cognition, human affect, and bodily self-consciousness.
Affiliation(s)
- Zelie Britton, Qadeer Arshad
- Department of Neuro-Otology, Charing Cross Hospital, Imperial College London, London, United Kingdom

27
Zhang Y, Li S, Jiang D, Chen A. Response Properties of Interneurons and Pyramidal Neurons in Macaque MSTd and VPS Areas During Self-Motion. Front Neural Circuits 2018; 12:105. [PMID: 30532695 PMCID: PMC6265351 DOI: 10.3389/fncir.2018.00105]
Abstract
To perceive self-motion, the brain needs to integrate multi-modal sensory signals such as visual, vestibular and proprioceptive cues. Self-motion perception is complex and involves multiple candidate areas. Previous studies of self-motion perception during passive motion have revealed that some of these areas respond selectively to different directions for both visual (optic flow) and vestibular stimuli, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the visual posterior sylvian area (VPS), although MSTd is dominated by visual signals and VPS is dominated by vestibular signals. However, no study of self-motion perception has distinguished the different neuron types with distinct properties in cortical microcircuitry, which has limited our understanding of the local circuits for self-motion perception. In the current study, we classified the recorded MSTd and VPS neurons into putative pyramidal neurons and putative interneurons based on their extracellular action potential waveforms and spontaneous firing rates. We found that: (1) the putative interneurons exhibited clearly broader direction tuning than putative pyramidal neurons in response to their dominant (visual for MSTd; vestibular for VPS) stimulation type; (2) in both the visual and vestibular conditions, the putative interneurons were more responsive but more variable than the putative pyramidal neurons in both MSTd and VPS; and (3) vestibular and visual peak directional tuning occurred earlier in the putative interneurons than in the putative pyramidal neurons in both areas.
Based on these findings, we speculate that, within the microcircuitry, several adjacent putative interneurons with broad direction tuning receive early, strong but variable signals, which might act as feedforward input shaping the direction tuning of a target putative pyramidal neuron, while each interneuron may participate in several microcircuits targeting different output neurons.
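The waveform-based classification step can be sketched as a simple two-feature rule (a toy illustration with assumed cutoff values; the study's actual criteria and thresholds are not reproduced here):

```python
def classify_unit(spike_width_ms, spont_rate_hz,
                  width_cutoff=0.35, rate_cutoff=10.0):
    """Toy cell-type classifier: narrow extracellular spikes with high
    spontaneous rates suggest a putative interneuron; broad spikes with
    low rates suggest a putative pyramidal neuron. Both cutoff values
    are illustrative assumptions, not the study's fitted thresholds."""
    if spike_width_ms < width_cutoff and spont_rate_hz > rate_cutoff:
        return "putative interneuron"
    if spike_width_ms >= width_cutoff and spont_rate_hz <= rate_cutoff:
        return "putative pyramidal neuron"
    return "unclassified"
```

Units that satisfy neither conjunction are left unclassified, mirroring the common practice of excluding ambiguous waveforms from cell-type analyses.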
Affiliation(s)
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China

28
Milleret C, Bui Quoc E. Beyond Rehabilitation of Acuity, Ocular Alignment, and Binocularity in Infantile Strabismus. Front Syst Neurosci 2018; 12:29. [PMID: 30072876 PMCID: PMC6058758 DOI: 10.3389/fnsys.2018.00029]
Abstract
Infantile strabismus impairs the perception of all attributes of the visual scene. High spatial frequency components are no longer visible, leading to amblyopia. Binocularity is altered, leading to the loss of stereopsis. Spatial perception is impaired, as are detection of vertical orientation, the fastest movements, directions of movement, the highest contrasts, and colors. Infantile strabismus also affects other vision-dependent processes such as control of postural stability. But at present, rehabilitative therapies for infantile strabismus by ophthalmologists, orthoptists and optometrists are restricted to preventing or curing amblyopia of the deviated eye, aligning the eyes and, whenever possible, preserving or restoring binocular vision during the critical period of development, i.e., before ~10 years of age. All the other impairments are thus ignored; whether they may recover after strabismus treatment remains unknown. We argue here that medical and paramedical professionals may extend their present treatments to the other perceptual losses associated with infantile strabismus. This hypothesis is based on findings from fundamental research on visual system organization in higher mammals, in particular at the cortical level. In strabismic subjects (as in normal-seeing ones), information about all of the visual attributes converges, interacts and is thus inter-dependent at multiple levels of encoding, ranging from the single neuron to neuronal assemblies in visual cortex. Thus, if the perception of one attribute is restored, this may help to rehabilitate the perception of other attributes. Concomitantly, vision-dependent processes may also improve. This could occur spontaneously, but still should be assessed and validated. If not, medical and paramedical staff, in collaboration with neuroscientists, will have to break new ground in the field of therapies to help reorganize brain circuitry and promote more comprehensive functional recovery.
Findings from fundamental research in both young and adult patients already support our hypothesis and are reviewed here. For example, presenting different contrasts to each eye of a strabismic patient during training sessions facilitates recovery of acuity in the amblyopic eye as well as of 3D perception. Recent data also demonstrate that visual recoveries in strabismic subjects improve postural stability. These findings form the basis for a roadmap for future research and clinical development to extend presently applied rehabilitative therapies for infantile strabismus.
Affiliation(s)
- Chantal Milleret
- Center for Interdisciplinary Research in Biology, Centre National de la Recherche Scientifique, College de France, INSERM, PSL Research University, Paris, France
- Emmanuel Bui Quoc
- Department of Ophthalmology, Robert Debré University Hospital, Assistance Publique - Hôpitaux de Paris, Paris, France

29
Noel JP, Blanke O, Serino A. From multisensory integration in peripersonal space to bodily self-consciousness: from statistical regularities to statistical inference. Ann N Y Acad Sci 2018; 1426:146-165. [PMID: 29876922 DOI: 10.1111/nyas.13867]
Abstract
Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body and, as illustrated by numerous illusions, scaffolds subjective experience of the world and self. In recent years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have suggested multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, although still very much a minority, have recently been cast in the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions of the neural implementation of inference computations outlined by the multisensory field. This computational approach, leveraging the general understanding of multisensory processes, promises to advance scientific comprehension of one of the most mysterious questions puzzling humankind: how our brain creates the experience of a self in interaction with the environment.
Affiliation(s)
- Jean-Paul Noel
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee
- Olaf Blanke
- Laboratory of Cognitive Neuroscience (LNCO), Center for Neuroprosthetics (CNP), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland; Department of Neurology, University of Geneva, Geneva, Switzerland
- Andrea Serino
- MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Lausanne, Switzerland

30
Vélez-Fort M, Bracey EF, Keshavarzi S, Rousseau CV, Cossell L, Lenzi SC, Strom M, Margrie TW. A Circuit for Integration of Head- and Visual-Motion Signals in Layer 6 of Mouse Primary Visual Cortex. Neuron 2018; 98:179-191.e6. [PMID: 29551490 PMCID: PMC5896233 DOI: 10.1016/j.neuron.2018.02.023]
Abstract
To interpret visual-motion events, the underlying computation must involve internal reference to the motion status of the observer's head. We show here that layer 6 (L6) principal neurons in mouse primary visual cortex (V1) receive a diffuse, vestibular-mediated synaptic input that signals the angular velocity of horizontal rotation. Behavioral and theoretical experiments indicate that these inputs, distributed over a network of 100 L6 neurons, provide both a reliable estimate and, therefore, physiological separation of head-velocity signals. During head rotation in the presence of visual stimuli, L6 neurons exhibit postsynaptic responses that approximate the arithmetic sum of the vestibular and visual-motion response. Functional input mapping reveals that these internal motion signals arrive into L6 via a direct projection from the retrosplenial cortex. We therefore propose that visual-motion processing in V1 L6 is multisensory and contextually dependent on the motion status of the animal's head.
Affiliation(s)
- Mateo Vélez-Fort, Edward F Bracey, Sepiedeh Keshavarzi, Charly V Rousseau, Lee Cossell, Stephen C Lenzi, Molly Strom, Troy W Margrie
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, 25 Howland Street, London W1T 4JG, UK

31
O'Hare L, Sharp A, Dickinson P, Richardson G, Shearer J. Investigating Head Movements Induced by 'Riloid' Patterns in Migraine and Control Groups Using a Virtual Reality Display. Multisens Res 2018; 31:753-777. [PMID: 31264621 DOI: 10.1163/22134808-20181310]
Abstract
Certain striped patterns, such as those used in op-art, can induce illusory motion. The visual system and the vestibular system work together closely, so illusory motion from a visual stimulus can create uncertainty in the vestibular system. This increased uncertainty may be measurable as larger head movements. Head movements were measured using a head-mounted visual display. Results showed that stimuli associated with illusory motion also induced greater head movements than otherwise similar stimuli. Individuals with migraine are more susceptible to visual discomfort, including illusory motion from striped stimuli. However, there was no evidence of an increased effect of illusory motion in those with migraine compared to those without, suggesting that while motion illusions may affect discomfort judgements, this is not limited to only those with migraine.
Affiliation(s)
- Louise O'Hare, Alex Sharp
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Patrick Dickinson
- School of Computer Science, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Graham Richardson
- School of Life Sciences, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- John Shearer
- School of Computer Science, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK

32
Gallagher M, Ferrè ER. Cybersickness: a Multisensory Integration Perspective. Multisens Res 2018; 31:645-674. [PMID: 31264611 DOI: 10.1163/22134808-20181293]
Abstract
In the past decade, there has been a rapid advance in Virtual Reality (VR) technology. Key to the user's VR experience are multimodal interactions involving all senses. The human brain must integrate real-time vision, hearing, vestibular and proprioceptive inputs to produce the compelling and captivating feeling of immersion in a VR environment. A serious problem with VR is that users may develop symptoms similar to motion sickness, a malady called cybersickness. The underlying cause of cybersickness is not yet fully understood. It may be due to a discrepancy between the sensory signals that provide information about the body's orientation and motion: in many VR applications, optic flow elicits an illusory sensation of self-motion, telling users that they are moving in a certain direction with a certain acceleration. However, since users are not actually moving, their proprioceptive and vestibular organs provide no cues of self-motion. These conflicting signals may lead to sensory discrepancies and eventually cybersickness. Here we review the current literature to develop a conceptual scheme for understanding the neural mechanisms of cybersickness. We discuss an approach to cybersickness based on sensory cue integration, focusing on the dynamic re-weighting of visual and vestibular signals for self-motion.
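The dynamic re-weighting idea can be sketched as standard inverse-variance cue fusion, in which each signal is weighted by its reliability; the function and values below are illustrative, not taken from the review:

```python
def fuse_self_motion(v_visual, v_vestibular, sigma_visual, sigma_vestibular):
    """Reliability-weighted (inverse-variance) fusion of visual and
    vestibular self-motion estimates. Each cue is weighted by the
    inverse of its noise variance, so the noisier cue contributes less.
    Returns the fused estimate and the weight given to vision."""
    w_visual = (1.0 / sigma_visual**2) / (1.0 / sigma_visual**2
                                          + 1.0 / sigma_vestibular**2)
    fused = w_visual * v_visual + (1.0 - w_visual) * v_vestibular
    return fused, w_visual
```

With equal noise the two cues are simply averaged; as the vestibular estimate becomes noisier, the visual weight grows, which is the kind of re-weighting invoked to explain why visually simulated motion can dominate in VR.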
Affiliation(s)
- Maria Gallagher
- Department of Psychology, Royal Holloway University of London, Egham, UK

33
Rogers C, Rushton SK, Warren PA. Peripheral Visual Cues Contribute to the Perception of Object Movement During Self-Movement. Iperception 2017; 8:2041669517736072. [PMID: 29201335 PMCID: PMC5700793 DOI: 10.1177/2041669517736072]
Abstract
Safe movement through the environment requires us to monitor our surroundings for moving objects or people. However, identification of moving objects in the scene is complicated by self-movement, which adds motion across the retina. To identify world-relative object movement, the brain thus has to ‘compensate for’ or ‘parse out’ the components of retinal motion that are due to self-movement. We have previously demonstrated that retinal cues arising from central vision contribute to solving this problem. Here, we investigate the contribution of peripheral vision, commonly thought to provide strong cues to self-movement. Stationary participants viewed a large field of view display, with radial flow patterns presented in the periphery, and judged the trajectory of a centrally presented probe. Across two experiments, we demonstrate and quantify the contribution of peripheral optic flow to flow parsing during forward and backward movement.
Affiliation(s)
- Paul A Warren
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK

34
Page WK, Duffy CJ. Path perturbation detection tasks reduce MSTd neuronal self-movement heading responses. J Neurophysiol 2017; 119:124-133. [PMID: 29046430 DOI: 10.1152/jn.00958.2016]
Abstract
We presented optic flow and real-movement heading stimuli while recording MSTd neuronal activity. Monkeys were alternately engaged in three tasks: visual detection of optic flow heading perturbations, vestibular detection of real-movement heading perturbations, and auditory detection of brief tones. Push-button reaction times were fastest for tones and slower for visual and vestibular heading perturbations, suggesting that the tone detection task was easier. Neuronal heading selectivity was strongest during the tone detection task and weaker during the visual and vestibular heading perturbation detection tasks, even though heading cues were presented only in the visual and vestibular modalities. We conclude that focusing on the self-movement transients of path perturbation distracted the monkeys from their heading and reduced neuronal responsiveness to heading direction. NEW & NOTEWORTHY Heading analysis is critical for steering and navigation. We recorded the activity of monkey cortical heading neurons during naturalistic self-movement. When the monkeys were required to respond to transient changes in their path, neuronal responses to heading direction were diminished. This suggests that the need to respond to momentary path perturbations reduces the ability to process heading direction.
Affiliation(s)
- William K Page, Charles J Duffy
- Departments of Neurology, Neurobiology and Anatomy, Ophthalmology, Brain and Cognitive Sciences, and The Center for Visual Science, The University of Rochester Medical Center, Rochester, New York

35
Greenlee MW. Self-Motion Perception: Ups and Downs of Multisensory Integration and Conflict Detection. Curr Biol 2017; 27:R1006-R1007. [PMID: 28950080 DOI: 10.1016/j.cub.2017.07.050]
Abstract
A new study indicates that, in humans, eye movements play an important role in self-motion perception, in particular in integrating information from the visual and vestibular systems and detecting possible conflicts between them.
Affiliation(s)
- Mark W Greenlee
- Institute for Experimental Psychology, University of Regensburg, 93053 Regensburg, Germany

36
Garzorz IT, MacNeilage PR. Visual-Vestibular Conflict Detection Depends on Fixation. Curr Biol 2017; 27:2856-2861.e4. [DOI: 10.1016/j.cub.2017.08.011]
37
Happee R, de Bruijn E, Forbes PA, van der Helm FCT. Dynamic head-neck stabilization and modulation with perturbation bandwidth investigated using a multisegment neuromuscular model. J Biomech 2017; 58:203-211. [PMID: 28577906 DOI: 10.1016/j.jbiomech.2017.05.005]
Abstract
The human head-neck system requires continuous stabilization in the presence of gravity and trunk motion. We investigated contributions of the vestibulocollic reflex (VCR), the cervicocollic reflex (CCR), and neck muscle co-contraction to head-in-space and head-on-trunk stabilization, and investigated modulation of the stabilization strategy with the frequency content of trunk perturbations and the presence of visual feedback. We developed a multisegment cervical spine model where reflex gains (VCR and CCR) and neck muscle co-contraction were estimated by fitting the model to the response of young healthy subjects, seated and exposed to anterior-posterior trunk motion, with frequency content from 0.3 up to 1, 2, 4 and 8 Hz, with and without visual feedback. The VCR contributed to head-in-space stabilization with a strong reduction of head rotation (<8 Hz) and a moderate reduction of head translation (>1 Hz). The CCR contributed to head-on-trunk stabilization with a reduction of head rotation and head translation relative to the trunk (<2 Hz). The CCR also proved essential to stabilize the individual intervertebral joints and prevent neck buckling. Co-contraction was estimated to be of minor relevance. Control strategies employed during low-bandwidth perturbations most effectively reduced head rotation and head relative displacement up to 3 Hz, while control strategies employed during high-bandwidth perturbations reduced head global translation between 1 and 4 Hz. This indicates a shift from minimizing head-on-trunk rotation and translation during low-bandwidth perturbations to minimizing head-in-space translation during high-bandwidth perturbations. The presence of visual feedback had limited effects, suggesting increased usage of vestibular feedback.
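The division of labor between the two reflexes can be caricatured with a single-segment simulation (not the paper's multisegment neuromuscular model; the gains, damping, inertia, and time step below are arbitrary illustrative values):

```python
def simulate_head(trunk, k_vcr=0.0, k_ccr=2.0, b_ccr=0.5, inertia=0.02, dt=0.01):
    """Explicit-Euler simulation of head angle in space for a given
    trunk-angle trajectory. The CCR-like term pulls the head toward the
    trunk; the VCR-like term pulls it toward a space-fixed reference.
    All parameter values are illustrative assumptions."""
    theta, omega = 0.0, 0.0   # head angle (rad) and angular velocity in space
    prev_phi = trunk[0]
    trace = []
    for phi in trunk:
        phi_dot = (phi - prev_phi) / dt
        prev_phi = phi
        torque = (-k_ccr * (theta - phi)        # CCR: head-on-trunk stiffness
                  - b_ccr * (omega - phi_dot)   # neck damping
                  - k_vcr * theta)              # VCR: head-in-space stiffness
        omega += torque / inertia * dt
        theta += omega * dt
        trace.append(theta)
    return trace

# Step displacement of the trunk: with the CCR alone the head ends up
# following the trunk; adding a VCR-like gain keeps the head closer to
# its original orientation in space.
step = [0.0] * 10 + [1.0] * 290
ccr_only = simulate_head(step, k_vcr=0.0)
with_vcr = simulate_head(step, k_vcr=10.0)
```

In this caricature the steady-state head angle is k_ccr/(k_ccr + k_vcr) times the trunk angle, so raising the VCR-like gain trades head-on-trunk alignment for head-in-space stability, qualitatively matching the modulation the study reports across perturbation bandwidths.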
Affiliation(s)
- Riender Happee, Edo de Bruijn
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands
- Patrick A Forbes
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands; Department of Neuroscience, Erasmus Medical Centre, Rotterdam, The Netherlands
- Frans C T van der Helm
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands; Laboratory of Biomechanical Engineering, University of Twente, Enschede, The Netherlands

38
de Winkel KN, Katliar M, Bülthoff HH. Causal Inference in Multisensory Heading Estimation. PLoS One 2017; 12:e0169676. [PMID: 28060957 PMCID: PMC5218471 DOI: 10.1371/journal.pone.0169676]
Abstract
A large body of research shows that the Central Nervous System (CNS) integrates multisensory information. However, this strategy should only apply to multisensory signals that have a common cause; independent signals should be segregated. Causal Inference (CI) models account for this notion. Surprisingly, previous findings suggested that visual and inertial cues on heading of self-motion are integrated regardless of discrepancy. We hypothesized that CI does occur, but that characteristics of the motion profiles affect multisensory processing. Participants estimated heading of visual-inertial motion stimuli with several different motion profiles and a range of intersensory discrepancies. The results support the hypothesis that judgments of signal causality are included in the heading estimation process. Moreover, the data suggest a decreasing tolerance for discrepancies and an increasing reliance on visual cues for longer duration motions.
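The CI model class tested here can be sketched in its generic textbook form (illustrative parameters; this is not the authors' fitted implementation, which additionally lets tolerance for discrepancy vary with the motion profile):

```python
import math

def ci_heading(x_vis, x_ine, sigma_vis, sigma_ine,
               p_common=0.5, sigma_prior=90.0):
    """Generic Bayesian causal-inference estimate of inertial heading
    from a visual cue x_vis and an inertial cue x_ine (degrees).
    Under a common cause the cues are fused by inverse-variance
    weighting; the two causal models are then averaged by the posterior
    probability of a common cause. All parameters are illustrative."""
    # Inverse-variance fusion (common-cause estimate)
    w_vis = (1.0 / sigma_vis**2) / (1.0 / sigma_vis**2 + 1.0 / sigma_ine**2)
    fused = w_vis * x_vis + (1.0 - w_vis) * x_ine

    # Likelihood of the observed cue discrepancy under each causal model
    d = x_vis - x_ine
    var_c1 = sigma_vis**2 + sigma_ine**2           # one shared heading
    var_c2 = var_c1 + 2.0 * sigma_prior**2         # two independent headings
    like_c1 = math.exp(-d * d / (2.0 * var_c1)) / math.sqrt(2.0 * math.pi * var_c1)
    like_c2 = math.exp(-d * d / (2.0 * var_c2)) / math.sqrt(2.0 * math.pi * var_c2)

    # Posterior probability of a common cause, then model averaging:
    # fuse when the cues likely share a cause, otherwise keep the inertial cue
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1.0 - p_common))
    return post_c1 * fused + (1.0 - post_c1) * x_ine, post_c1
```

A small discrepancy yields a high common-cause posterior and near-complete fusion of the cues; a large discrepancy yields segregation, so the inertial estimate is barely shifted by vision.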
Affiliation(s)
- Ksander N. de Winkel, Mikhail Katliar, Heinrich H. Bülthoff
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Baden-Württemberg, Germany

39
Weech S, Troje NF. Vection Latency Is Reduced by Bone-Conducted Vibration and Noisy Galvanic Vestibular Stimulation. Multisens Res 2017. [DOI: 10.1163/22134808-00002545]
Abstract
Studies of the illusory sense of self-motion elicited by a moving visual surround (‘vection’) have revealed key insights about how sensory information is integrated. Vection usually occurs after a delay of several seconds following visual motion onset, whereas self-motion in the natural environment is perceived immediately. It has been suggested that this latency relates to the sensory mismatch between visual and vestibular signals at motion onset. Here, we tested three techniques with the potential to reduce sensory mismatch in order to shorten vection onset latency: noisy galvanic vestibular stimulation (GVS) and bone conducted vibration (BCV) at the mastoid processes, and body vibration applied to the lower back. In Experiment 1, we examined vection latency for wide field visual rotations about the roll axis and applied a burst of stimulation at the start of visual motion. Both GVS and BCV reduced vection latency by two seconds compared to the control condition, whereas body vibration had no effect on latency. In Experiment 2, the visual stimulus rotated about the pitch, roll, or yaw axis and we found a similar facilitation of vection by both BCV and GVS in each case. In a control experiment, we confirmed that air-conducted sound administered through headphones was not sufficient to reduce vection onset latency. Together the results suggest that noisy vestibular stimulation facilitates vection, likely due to an upweighting of visual information caused by a reduction in vestibular sensory reliability.
Affiliation(s)
- Séamas Weech
- Department of Psychology, Queen’s University, Kingston, ON, Canada
- Nikolaus F. Troje
- Department of Psychology; Department of Biology; School of Computing, Queen’s University, Kingston, ON, Canada

40
Evidence for a Causal Contribution of Macaque Vestibular, But Not Intraparietal, Cortex to Heading Perception. J Neurosci 2016; 36:3789-98. [PMID: 27030763 DOI: 10.1523/jneurosci.2485-15.2016]
Abstract
Multisensory convergence of visual and vestibular signals has been observed within a network of cortical areas involved in representing heading. Vestibular-dominant heading tuning has been found in the macaque parietoinsular vestibular cortex (PIVC) and the adjacent visual posterior sylvian (VPS) area, whereas relatively balanced visual/vestibular tuning was encountered in the ventral intraparietal (VIP) area and visual-dominant tuning was found in the dorsal medial superior temporal (MSTd) area. Although the respective functional roles of these areas remain unclear, perceptual deficits in heading discrimination following reversible chemical inactivation of area MSTd suggested that areas with vestibular-dominant heading tuning also contribute to behavior. To explore the roles of other areas in heading perception, muscimol injections were used to reversibly inactivate either the PIVC or the VIP area bilaterally in macaques. Inactivation of the anterior PIVC increased psychophysical thresholds when heading judgments were based on either optic flow or vestibular cues, although effects were stronger for vestibular stimuli. All behavioral deficits recovered within 36 h. Visual deficits were larger following inactivation of the posterior portion of the PIVC, likely because these injections encroached upon the VPS area, which (unlike the PIVC) contains neurons with optic flow tuning. In contrast, VIP inactivation led to no behavioral deficits, despite the fact that VIP neurons show much stronger choice-related activity than MSTd neurons. These results suggest that the VIP area either provides a parallel and partially redundant pathway for this task, or does not participate in heading discrimination. In contrast, the PIVC/VPS area, along with the MSTd area, makes causal contributions to heading perception based on either vestibular or visual signals.
SIGNIFICANCE STATEMENT Multisensory vestibular and visual signals are found in multiple cortical areas, but their causal contribution to self-motion perception has been previously tested only in the dorsal medial superior temporal (MSTd) area. In these experiments, we show that inactivation of the parietoinsular vestibular cortex (PIVC) also results in causal deficits during heading discrimination for both visual and vestibular cues. In contrast, ventral intraparietal (VIP) area inactivation led to no behavioral deficits, despite the fact that VIP neurons show much stronger choice-related activity than MSTd or PIVC neurons. These results demonstrate that choice-related activity does not always imply a causal role in sensory perception.
41
Cheng Z, Gu Y. Distributed Representation of Curvilinear Self-Motion in the Macaque Parietal Cortex. Cell Rep 2016; 15:1013-1023. [PMID: 27117412 DOI: 10.1016/j.celrep.2016.03.089]
Abstract
Information about translations and rotations of the body is critical for complex self-motion perception during spatial navigation. However, little is known about the nature and function of their convergence in the cortex. We measured neural activity in multiple areas of the macaque parietal cortex in response to three types of body motion delivered by a motion platform: translation, rotation, and combined stimuli, i.e., curvilinear motion. We found a continuous representation of motion types in each area. In contrast to single-modality cells preferring translation-only or rotation-only stimuli, convergent cells tend to be optimally tuned to curvilinear motion. A weighted summation model captured the data well, suggesting that translation and rotation signals are integrated subadditively in the cortex. Interestingly, variation in the activity of convergent cells parallels behavioral outputs reported in human psychophysical experiments. We conclude that the representation of curvilinear self-motion is widely distributed in the primate sensory cortex.
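The weighted summation account can be illustrated by fitting weights in r_comb ≈ w_t·r_trans + w_r·r_rot by ordinary least squares; the cell responses below are synthetic and the subadditive weights are assumed for illustration, not taken from the paper:

```python
def fit_weighted_summation(r_trans, r_rot, r_comb):
    """Closed-form least-squares fit (no intercept) of the combined
    response as a weighted sum of translation-only and rotation-only
    responses, via the 2x2 normal equations."""
    a11 = sum(t * t for t in r_trans)
    a12 = sum(t * r for t, r in zip(r_trans, r_rot))
    a22 = sum(r * r for r in r_rot)
    b1 = sum(t * c for t, c in zip(r_trans, r_comb))
    b2 = sum(r * c for r, c in zip(r_rot, r_comb))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det

# Synthetic subadditive cell: curvilinear response built from assumed
# weights 0.6 (translation) and 0.3 (rotation)
r_t = [5.0, 12.0, 20.0, 8.0, 15.0]
r_r = [10.0, 4.0, 7.0, 18.0, 6.0]
r_c = [0.6 * t + 0.3 * r for t, r in zip(r_t, r_r)]
w_t, w_r = fit_weighted_summation(r_t, r_r, r_c)
```

Subadditive integration corresponds to the fitted weights summing to less than one, as with the synthetic cell here.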
Affiliation(s)
- Zhixian Cheng
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yong Gu
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China.
42
Straka H, Zwergal A, Cullen KE. Vestibular animal models: contributions to understanding physiology and disease. J Neurol 2016; 263 Suppl 1:S10-23. [PMID: 27083880] [PMCID: PMC4833800] [DOI: 10.1007/s00415-015-7909-y]
Abstract
Our knowledge of the vestibular sensory system, its functional significance for gaze and posture stabilization, and its capability to ensure accurate spatial orientation perception and spatial navigation has greatly benefitted from experimental approaches using a variety of vertebrate species. This review summarizes the attempts to establish the roles of semicircular canal and otolith endorgans in these functions, followed by an overview of the most relevant fields of vestibular research, including major findings that have advanced our understanding of how this system exerts its influence on reflexive and cognitive challenges encountered during daily life. In particular, we highlight the contributions of different animal models and the advantage of using a comparative research approach. Cross-species comparisons have established that the morpho-physiological properties underlying vestibular signal processing are evolutionarily inherent, thereby disclosing general principles. Based on the documented success of this approach, we suggest that future research employing a balanced spectrum of standard animal models such as fish/frog, mouse, and primate will optimize our progress in understanding vestibular processing in health and disease. Moreover, we propose that this should be further supplemented by research employing more “exotic” species that offer unique experimental access and/or have specific vestibular adaptations due to unusual locomotor capabilities or lifestyles. Taken together, this strategy will expedite our understanding of the basic principles underlying vestibular computations and reveal relevant translational aspects. Accordingly, studies employing animal models are indispensable and even mandatory for the development of new treatments, medication, and technical aids (implants) for patients with vestibular pathologies.
Affiliation(s)
- Hans Straka
- Department Biology II, Ludwig-Maximilians-University Munich, Grosshaderner Str. 2, 82152, Planegg, Germany
- German Center for Vertigo and Balance Disorders, DSGZ, Ludwig-Maximilians-University of Munich, Munich, Germany
- Andreas Zwergal
- German Center for Vertigo and Balance Disorders, DSGZ, Ludwig-Maximilians-University of Munich, Munich, Germany
- Department of Neurology, Ludwig-Maximilians-University of Munich, Munich, Germany
- Kathleen E Cullen
- Department of Physiology, McGill University, Montreal, QC, H3A 0G4, Canada
43
Abstract
Low-level perception results from neural-based computations, which build a multimodal skeleton of unconscious or self-generated inferences about our environment. This review identifies bottleneck issues concerning the role of early primary sensory cortical areas, mostly in rodents and higher mammals (cats and non-human primates), where the substrates of perception can be sought at multiple scales of neural integration. We discuss the limitations of purely bottom-up approaches for providing realistic models of early sensory processing and the need to identify fast adaptive processes operating within the time of a percept. Future progress will depend on the careful use of comparative neuroscience (guiding the choice of experimental models and species adapted to the questions under study), on the definition of agreed-upon benchmarks for sensory stimulation, on the simultaneous acquisition of neural data at multiple spatio-temporal scales, and on the in vivo identification of key generic integration and plasticity algorithms validated experimentally and in simulations.
44
Abstract
The relative simplicity of the neural circuits that mediate vestibular reflexes is well suited for linking systems and cellular levels of analysis. Notably, a distinctive feature of the vestibular system is that neurons at the first central stage of sensory processing, in the vestibular nuclei, are premotor neurons; the same neurons that receive vestibular-nerve input also send direct projections to motor pathways. For example, the simplicity of the three-neuron pathway that mediates the vestibulo-ocular reflex allows the generation of compensatory eye movements within ~5 ms of a head movement. Similarly, relatively direct pathways between the labyrinth and spinal cord control vestibulospinal reflexes. A second distinctive feature of the vestibular system is that the first stage of central processing is strongly multimodal, because the vestibular nuclei receive inputs from a wide range of cortical, cerebellar, and other brainstem structures in addition to direct inputs from the vestibular nerve. Recent studies in alert animals have established how extravestibular signals shape these "simple" reflexes to meet the needs of the current behavioral goal. Moreover, multimodal interactions at higher levels, such as the vestibular cerebellum, thalamus, and cortex, play a vital role in ensuring accurate self-motion and spatial orientation perception.
Affiliation(s)
- K E Cullen
- Department of Physiology, McGill University, Montreal, Quebec, Canada.
45
Greenlee M, Frank S, Kaliuzhna M, Blanke O, Bremmer F, Churan J, Cuturi LF, MacNeilage P, Smith A. Multisensory Integration in Self Motion Perception. Multisens Res 2016. [DOI: 10.1163/22134808-00002527]
Abstract
Self motion perception involves the integration of visual, vestibular, somatosensory and motor signals. This article reviews the findings from single unit electrophysiology, functional and structural magnetic resonance imaging and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one’s position and motion in space. The results indicate that there is a network of regions in the non-human primate and human brain that processes self motion cues from the different sense modalities.
Affiliation(s)
- Mark W. Greenlee
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Sebastian M. Frank
- Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Mariia Kaliuzhna
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne, EPFL, Switzerland
- Olaf Blanke
- Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne, EPFL, Switzerland
- Frank Bremmer
- Department of Neurophysics, University of Marburg, Marburg, Germany
- Jan Churan
- Department of Neurophysics, University of Marburg, Marburg, Germany
- Luigi F. Cuturi
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
- Paul R. MacNeilage
- German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
- Andrew T. Smith
- Department of Psychology, Royal Holloway, University of London, UK
46
Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation. PLoS One 2015; 10:e0145015. [PMID: 26658990] [PMCID: PMC4687653] [DOI: 10.1371/journal.pone.0145015]
Abstract
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
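The reliability-proportional weighting reported here is the standard maximum-likelihood cue-combination scheme. A minimal sketch, with illustrative noise values rather than the study's measured thresholds:

```python
def combine_cues(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Maximum-likelihood combination of two displacement estimates:
    each cue is weighted in proportion to its reliability (inverse
    variance), and the combined estimate is less variable than
    either cue alone."""
    r_vis, r_vest = 1.0 / sigma_vis ** 2, 1.0 / sigma_vest ** 2
    w_vis = r_vis / (r_vis + r_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    sigma = (1.0 / (r_vis + r_vest)) ** 0.5
    return mu, sigma

# Low visual coherence (noisy optic flow) shifts weight toward the vestibular cue:
mu, sigma = combine_cues(mu_vis=1.2, sigma_vis=0.4, mu_vest=1.0, sigma_vest=0.2)
```

Varying visual coherence changes sigma_vis from trial to trial, which is how the experiment can test whether the observed weights track cue reliability.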
47
de Winkel KN, Katliar M, Bülthoff HH. Forced fusion in multisensory heading estimation. PLoS One 2015; 10:e0127104. [PMID: 25938235] [PMCID: PMC4418840] [DOI: 10.1371/journal.pone.0127104]
Abstract
It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging between 0-90° were introduced for the combined stimuli. In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, where sensitivity to detect discrepancies differs between people.
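The causal-inference alternative mentioned in the abstract can be sketched along the lines of the Körding et al. (2007) model; the sensory-noise and prior parameters below are illustrative assumptions, not fits to these data:

```python
import math

def p_common_given_cues(x_vis, x_in, sv=5.0, si=10.0, sp=30.0, prior_c=0.5):
    """Posterior probability that visual and inertial heading measurements
    (in degrees) arose from one common heading, under Gaussian sensory
    noise (sv, si) and a Gaussian prior over headings (sp)."""
    vv, vi, vp = sv ** 2, si ** 2, sp ** 2
    # Likelihood of the pair under a common cause (heading integrated out):
    den1 = vv * vi + vv * vp + vi * vp
    like1 = math.exp(-0.5 * ((x_vis - x_in) ** 2 * vp
                             + x_vis ** 2 * vi + x_in ** 2 * vv) / den1) \
        / (2 * math.pi * math.sqrt(den1))
    # Likelihood under two independent causes:
    like2 = math.exp(-0.5 * (x_vis ** 2 / (vv + vp) + x_in ** 2 / (vi + vp))) \
        / (2 * math.pi * math.sqrt((vv + vp) * (vi + vp)))
    return like1 * prior_c / (like1 * prior_c + like2 * (1 - prior_c))

# Small discrepancies favor a common cause; large ones favor separate causes:
p_small = p_common_given_cues(10.0, 10.0)
p_large = p_common_given_cues(10.0, 70.0)
```

Forced fusion corresponds to behaving as if this posterior were 1 regardless of the discrepancy, which is what most participants in this study appeared to do.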
Affiliation(s)
- Ksander N. de Winkel
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemanstrasse 38, 72076 Tübingen, Germany
- Mikhail Katliar
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemanstrasse 38, 72076 Tübingen, Germany
- Heinrich H. Bülthoff
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemanstrasse 38, 72076 Tübingen, Germany
- Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713, Korea
48
Arnoldussen DM, Goossens J, van Den Berg AV. Dissociation of retinal and headcentric disparity signals in dorsal human cortex. Front Syst Neurosci 2015; 9:16. [PMID: 25759642] [PMCID: PMC4338660] [DOI: 10.3389/fnsys.2015.00016]
Abstract
Recent fMRI studies have shown fusion of visual motion and disparity signals for shape perception (Ban et al., 2012), and unmasking camouflaged surfaces (Rokers et al., 2009), but no such interaction is known for typical dorsal motion pathway tasks, like grasping and navigation. Here, we investigate human speed perception of forward motion and its representation in the human motion network. We observe strong interaction in medial (V3ab, V6) and lateral motion areas (MT+), which differ significantly. Whereas the retinal disparity dominates the binocular contribution to the BOLD activity in the anterior part of area MT+, headcentric disparity modulation of the BOLD response dominates in area V3ab and V6. This suggests that medial motion areas not only represent rotational speed of the head (Arnoldussen et al., 2011), but also translational speed of the head relative to the scene. Interestingly, a strong response to vergence eye movements was found in area V1, which showed a dependency on visual direction, just like vertical-size disparity. This is the first report of a vertical-size disparity correlate in human striate cortex.
Affiliation(s)
- David M Arnoldussen
- Section Biophysics, Department of Cognitive Neuroscience, Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, Netherlands
- School of Psychology, University of Nottingham, Nottingham, UK
- Jeroen Goossens
- Section Biophysics, Department of Cognitive Neuroscience, Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, Netherlands
- Albert V van Den Berg
- Section Biophysics, Department of Cognitive Neuroscience, Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, Netherlands
49
Sharp emergence of feature-selective sustained activity along the dorsal visual pathway. Nat Neurosci 2014; 17:1255-62. [PMID: 25108910] [PMCID: PMC4978542] [DOI: 10.1038/nn.3785]
Abstract
Sustained activity encoding visual working memory representations has been observed in several cortical areas of primates. Where along the visual pathways this activity emerges remains unknown. Here we show in macaques that sustained spiking activity encoding memorized visual motion directions is absent in direction-selective neurons in early visual area middle temporal (MT). However, it is robustly present immediately downstream, in multimodal association area medial superior temporal (MST), and in the lateral prefrontal cortex (LPFC). This sharp emergence of sustained activity along the dorsal pathway suggests a functional boundary between early visual areas, encoding sensory inputs, and downstream association areas, additionally encoding mnemonic representations. Moreover, local field potential oscillations in MT encoded the memorized directions and, in the low frequencies, were phase-coherent with LPFC spikes. This suggests that LPFC sustained activity modulates synaptic activity in MT, a putative top-down mechanism by which memory signals influence stimulus processing in early visual cortex.
50
Identifying and quantifying multisensory integration: a tutorial review. Brain Topogr 2014; 27:707-30. [PMID: 24722880] [DOI: 10.1007/s10548-014-0365-7]
Abstract
We process information from the world through multiple senses, and the brain must decide what information belongs together and what should be segregated. One challenge in studying multisensory integration is how to quantify multisensory interactions, a challenge amplified by the host of methods now used to measure neural, behavioral, and perceptual responses. Many of the measures developed to quantify multisensory integration, often derived from single-unit analyses, have been applied to these different measures without much consideration for the nature of the process being studied. Here, we provide a review focused on the means by which experimenters quantify multisensory processes and integration across a range of commonly used experimental methodologies. We emphasize the most commonly employed measures, including single- and multiunit responses, local field potentials, functional magnetic resonance imaging, and electroencephalography, along with behavioral measures of detection, accuracy, and response times. In each section, we discuss the different metrics commonly used to quantify multisensory interactions, including the rationale for their use, their advantages, and the drawbacks and caveats associated with them. Also discussed are possible alternatives to the most commonly used metrics.
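Two of the classic single-unit metrics covered in tutorials of this kind can be computed in a few lines; the firing rates below are hypothetical:

```python
def enhancement_index(multi, best_uni):
    """Multisensory enhancement (Meredith & Stein style): percentage
    change of the multisensory response relative to the best
    unisensory response."""
    return 100.0 * (multi - best_uni) / best_uni

def additivity_index(multi, uni_a, uni_b):
    """Multisensory response as a percentage of the summed unisensory
    responses; values above 100 indicate superadditivity."""
    return 100.0 * multi / (uni_a + uni_b)

# Hypothetical evoked rates (spikes/s): audiovisual, auditory, visual
av, a, v = 30.0, 12.0, 18.0
enh = enhancement_index(av, max(a, v))  # ~66.7%: enhanced over best unisensory
add = additivity_index(av, a, v)        # 100.0%: exactly additive
```

As the review stresses, such spike-count metrics do not transfer directly to BOLD or EEG amplitudes, where baseline and summation assumptions differ.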