1. Zaidel A. Multisensory Calibration: A Variety of Slow and Fast Brain Processes Throughout the Lifespan. Adv Exp Med Biol 2024; 1437:139-152. [PMID: 38270858] [DOI: 10.1007/978-981-99-7611-9_9]
Abstract
From before we are born, throughout development, adulthood, and aging, we are immersed in a multisensory world. At each of these stages, our sensory cues are constantly changing due to body, brain, and environmental changes. While integration of information from our different sensory cues improves precision, it only improves accuracy if the underlying cues are unbiased. Thus, multisensory calibration is a vital and ongoing process. To meet this grand challenge, our brains have evolved a variety of mechanisms. First, in response to a systematic discrepancy between sensory cues (without external feedback), the cues calibrate one another (unsupervised calibration). Second, multisensory function is calibrated to external feedback (supervised calibration). These two mechanisms superimpose. While the former likely reflects a lower-level mechanism, the latter likely reflects a higher-level cognitive mechanism. Indeed, neural correlates of supervised multisensory calibration in monkeys were found in the higher-level multisensory cortical area VIP, but not in the relatively lower-level multisensory area MSTd. In addition, even without a cue discrepancy (e.g., when experiencing stimuli from different sensory cues in series), the brain monitors supra-modal statistics of events in the environment and adapts perception cross-modally. This, too, comprises a variety of mechanisms, including a confirmation bias toward prior choices and lower-level cross-sensory adaptation. Further research into the neuronal underpinnings of the broad and diverse functions of multisensory calibration, together with improved synthesis of theories, is needed to attain a more comprehensive understanding of multisensory brain function.
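The precision-versus-accuracy point above follows from standard reliability-weighted (maximum-likelihood) cue integration. The sketch below is illustrative (the Gaussian cue model is textbook; the particular numbers are made up): fused estimates are more precise than either cue alone, yet inherit any bias in a miscalibrated cue, which is why ongoing calibration matters.

```python
import math

def fuse(mu_a, sigma_a, mu_b, sigma_b):
    """Reliability-weighted average of two Gaussian cue estimates."""
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_b**2)
    mu = w_a * mu_a + (1 - w_a) * mu_b
    # Fused SD is always smaller than either single-cue SD.
    sigma = math.sqrt(1 / (1 / sigma_a**2 + 1 / sigma_b**2))
    return mu, sigma

# Two unbiased cues about a true heading of 10 deg: precision improves.
mu, sigma = fuse(10.0, 2.0, 10.0, 3.0)

# If one cue is miscalibrated (biased toward 16 deg), the fused estimate
# is pulled off the true value: precision improves, accuracy does not.
mu_biased, _ = fuse(10.0, 2.0, 16.0, 3.0)
```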
Affiliation(s)
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel.
2. Sun Q, Gong XM, Zhan LZ, Wang SY, Dong LL. Serial dependence bias can predict the overall estimation error in visual perception. J Vis 2023; 23:2. [PMID: 37917052] [PMCID: PMC10627302] [DOI: 10.1167/jov.23.13.2]
Abstract
Although visual feature estimations are accurate and precise, overall estimation errors (i.e., the difference between estimates and actual values) tend to show systematic patterns. For example, estimates of orientation are systematically biased away from the horizontal and vertical orientations, showing an oblique illusion. Additionally, many recent studies have demonstrated that estimates of current visual features are systematically biased toward previously seen features, showing a serial dependence. However, no study has examined whether overall estimation errors are correlated with the serial dependence bias. To address this question, we enrolled three groups of participants to estimate orientation, motion speed, and point-light-walker direction. The results showed that the serial dependence bias explained over 20% of the overall estimation errors in the three tasks, indicating that the serial dependence bias can be used to predict overall estimation errors. This study is the first to demonstrate that the serial dependence bias is not independent of the overall estimation errors. The finding could inspire researchers to investigate the neural bases underlying visual feature estimation and serial dependence.
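The predictive relationship reported here can be illustrated with a least-squares regression of overall error on the serial-dependence bias. The data below are synthetic (an illustrative assumption, not the study's data), constructed so that the serial component carries roughly the variance fraction the abstract reports:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-trial errors: a serial-dependence component plus other
# error sources (e.g., oblique-type biases and random noise), in degrees.
serial_bias = rng.normal(0.0, 2.0, size=500)
other_error = rng.normal(0.0, 3.0, size=500)
overall_error = serial_bias + other_error

# Regress overall error on the serial-dependence bias and compute the
# fraction of variance explained (R^2).
slope, intercept = np.polyfit(serial_bias, overall_error, 1)
pred = slope * serial_bias + intercept
ss_res = np.sum((overall_error - pred) ** 2)
ss_tot = np.sum((overall_error - overall_error.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

With these (assumed) variances the serial component explains roughly 30% of the overall error variance, in the same spirit as the >20% reported above.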
Affiliation(s)
- Qi Sun
- School of Psychology, Zhejiang Normal University, Jinhua, China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China
- Xiu-Mei Gong
- School of Psychology, Zhejiang Normal University, Jinhua, China
- Lin-Zhe Zhan
- School of Psychology, Zhejiang Normal University, Jinhua, China
- Si-Yu Wang
- School of Psychology, Zhejiang Normal University, Jinhua, China
3. Lin R, Zeng F, Wang Q, Chen A. Cross-Modal Plasticity during Self-Motion Perception. Brain Sci 2023; 13:1504. [PMID: 38002465] [PMCID: PMC10669852] [DOI: 10.3390/brainsci13111504]
Abstract
To maintain stable and coherent perception in an ever-changing environment, the brain needs to continuously and dynamically calibrate information from multiple sensory sources, using sensory and non-sensory information in a flexible manner. Here, we review how vestibular and visual signals are recalibrated during self-motion perception. We illustrate two different types of recalibration: long-term cross-modal (visual-vestibular) recalibration, concerning how multisensory cues recalibrate over time in response to a constant cue discrepancy, and rapid cross-modal (visual-vestibular) recalibration, concerning how recent prior stimuli and choices differentially affect subsequent self-motion decisions. In addition, we highlight the neural substrates of long-term visual-vestibular recalibration, with profound differences observed in neuronal recalibration across multisensory cortical areas. We suggest that multisensory recalibration is a complex process in the brain, is modulated by many factors, and requires the coordination of many distinct cortical areas. We hope this review will shed some light on research into the neural circuits of visual-vestibular recalibration and help develop a more generalized theory of cross-modal plasticity.
Affiliation(s)
- Rushi Lin
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Fu Zeng
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Qingjun Wang
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai 200122, China
4. Xu LH, Sun Q, Zhang B, Li X. Attractive serial dependence in heading perception from optic flow occurs at the perceptual and postperceptual stages. J Vis 2022; 22:11. [DOI: 10.1167/jov.22.12.11]
Affiliation(s)
- Ling-Hao Xu
- Department of Systems & Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Qi Sun
- Department of Psychology, Zhejiang Normal University, Jinhua, China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China
- Baoyuan Zhang
- Department of Psychology, Zhejiang Normal University, Jinhua, China
- Xinyu Li
- Department of Psychology, Zhejiang Normal University, Jinhua, China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China
5. Perceptual Biases as the Side Effect of a Multisensory Adaptive System: Insights from Verticality and Self-Motion Perception. Vision (Basel) 2022; 6:53. [PMID: 36136746] [PMCID: PMC9502132] [DOI: 10.3390/vision6030053]
Abstract
Perceptual biases can be interpreted as adverse consequences of optimal processes that otherwise improve system performance. This review investigates inaccuracies in multisensory perception, focusing on the perception of verticality and self-motion, where the vestibular modality plays a prominent role. Perception of verticality indicates how the system processes gravity and thus represents an indirect measurement of vestibular perception. Head tilts can lead to biases in perceived verticality, interpreted as the influence of a vestibular prior set at the most common orientation relative to gravity (i.e., upright), which improves precision when upright (e.g., for fall avoidance). Studies on the perception of verticality across development and in the presence of blindness show that acquisition of this prior is mediated by visual experience, unveiling the fundamental role of visuo-vestibular interconnections across development. Such multisensory interactions can be tested behaviorally with cross-modal aftereffect paradigms, which test whether adaptation in one sensory modality induces biases in another, thereby revealing an interconnection between the tested modalities. Such phenomena indicate the presence of multisensory neural mechanisms that constantly function to calibrate the self-motion-dedicated sensory modalities with each other and with the environment. Thus, biases in vestibular perception reveal how the brain optimally adapts to environmental demands, such as spatial navigation and steady changes in the surroundings.
6. Gabriel GA, Harris LR, Gnanasegaram JJ, Cushing SL, Gordon KA, Haycock BC, Campos JL. Age-related changes to vestibular heave and pitch perception and associations with postural control. Sci Rep 2022; 12:6426. [PMID: 35440744] [PMCID: PMC9018785] [DOI: 10.1038/s41598-022-09807-4]
Abstract
Falls are a common cause of injury in older adults (OAs), and age-related declines across the sensory systems are associated with increased falls risk. The vestibular system is particularly important for maintaining balance and supporting safe mobility, and aging has been associated with declines in vestibular end-organ functioning. However, few studies have examined potential age-related differences in vestibular perceptual sensitivities or their association with postural stability. Here we used an adaptive-staircase procedure to measure detection and discrimination thresholds in 19 healthy OAs and 18 healthy younger adults (YAs), by presenting participants with passive heave (linear up-and-down translations) and pitch (forward-backward tilt rotations) movements on a motion-platform in the dark. We also examined participants' postural stability under various standing-balance conditions. Associations among these postural measures and vestibular perceptual thresholds were further examined. Ultimately, OAs showed larger heave and pitch detection thresholds compared to YAs, and larger perceptual thresholds were associated with greater postural sway, but only in OAs. Overall, these results suggest that vestibular perceptual sensitivity declines with older age and that such declines are associated with poorer postural stability. Future studies could consider the potential applicability of these results in the development of screening tools for falls prevention in OAs.
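An adaptive-staircase procedure of this general kind can be sketched as a 2-down/1-up rule run against a simulated observer. The observer model, step size, and starting level below are illustrative assumptions, not the study's parameters:

```python
import math
import random

def observe(level, threshold, rng):
    """Simulated observer: detection probability rises with stimulus level."""
    p = 1.0 / (1.0 + math.exp(-(level - threshold)))
    return rng.random() < p

def staircase_threshold(threshold, start=8.0, step=0.5, n_trials=400, seed=7):
    """2-down/1-up staircase; converges near the ~70.7%-correct level."""
    rng = random.Random(seed)
    level, streak, direction = start, 0, 0
    reversals = []
    for _ in range(n_trials):
        if observe(level, threshold, rng):
            streak += 1
            if streak == 2:                # two detections -> make it harder
                streak = 0
                if direction == +1:        # changed direction: a reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, 0.0)
        else:                              # one miss -> make it easier
            streak = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    # Threshold estimate: mean of the last reversal levels.
    tail = reversals[-10:] or [level]
    return sum(tail) / len(tail)

# Typically lands near the ~70.7%-correct point of the simulated observer.
estimate = staircase_threshold(3.0)
```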
Affiliation(s)
- Grace A Gabriel
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, 500 University Avenue, Toronto, ON, M5G 2A2, Canada
- Laurence R Harris
- Department of Psychology and Centre for Vision Research, York University, Toronto, ON, Canada
- Joshua J Gnanasegaram
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Sharon L Cushing
- Department of Otolaryngology-Head and Neck Surgery, Hospital for Sick Children, Toronto, ON, Canada
- Department of Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Archie's Cochlear Implant Laboratory, Hospital for Sick Children, Toronto, ON, Canada
- Karen A Gordon
- Department of Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Archie's Cochlear Implant Laboratory, Hospital for Sick Children, Toronto, ON, Canada
- Bruce C Haycock
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- University of Toronto Institute for Aerospace Studies, Toronto, ON, Canada
- Jennifer L Campos
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, 500 University Avenue, Toronto, ON, M5G 2A2, Canada
7. Mooti R, Park H. Contribution of Cervical Proprioception, Vision, and Vestibular Feedback on Reducing Dynamic Head–Trunk Orientation Error in the Yaw Direction. Front Neurosci 2022; 15:774448. [PMID: 35140583] [PMCID: PMC8818861] [DOI: 10.3389/fnins.2021.774448]
Abstract
The contribution of cervical proprioception, vision, and vestibular feedback to dynamic head–trunk orientation error in the yaw direction was investigated to further the understanding of how different sensory modalities are coordinated for dynamic head–trunk orientation. To test the contribution of each sensory modality, individually and together, 10 healthy human subjects participated in an extended cervical joint position error test, which measures the ability to reposition the head back to a reference orientation after a 45° yaw rotation of the head or trunk. The error between the initial and returned angles was measured. The test was repeated under eight different conditions of sensory feedback, with or without each of the three sensory modalities. Each subject completed 64 trials (8 per condition) in a random order for fair comparison. No change in bias was found when one of the three modalities was missing, while variance was largest in the absence of dynamic cervical proprioception. When two of the three modalities were missing (i.e., only one modality was present), both bias and variance were smallest when cervical proprioception was present. Additionally, visual and vestibular feedback were mutually redundant (i.e., produced no further improvement in bias or variance) when the other was present together with dynamic cervical proprioception. In sum, the experimental results suggest that dynamic cervical proprioception is the most significant sensory modality for reducing dynamic head–trunk orientation error in the yaw direction.
8. Rapid cross-sensory adaptation of self-motion perception. Cortex 2022; 148:14-30. [DOI: 10.1016/j.cortex.2021.11.018]
9. Modeling Physiological Sources of Heading Bias from Optic Flow. eNeuro 2021; 8:ENEURO.0307-21.2021. [PMID: 34642226] [PMCID: PMC8607907] [DOI: 10.1523/eneuro.0307-21.2021]
Abstract
Human heading perception from optic flow is accurate for directions close to straight ahead, while systematic biases emerge in the periphery (Cuturi and Macneilage, 2013; Sun et al., 2020). In pursuit of the underlying neural mechanisms, the dorsal medial superior temporal (MSTd) area of the primate brain has been a focus because of its causal link with heading perception (Gu et al., 2012). Computational models generally explain heading sensitivity in individual MSTd neurons as a feedforward integration of motion signals from the medial temporal (MT) area that resemble full-field optic flow patterns consistent with the preferred heading direction (Britten, 2008; Mineault et al., 2012). In the present simulation study, we quantified, within the structure of this feedforward model, how physiological properties of MT and MSTd shape heading signals. We found that known physiological tuning characteristics generally supported the accuracy of heading estimation, but not always. A weak-to-moderate overrepresentation of peripheral headings in MSTd garnered the highest accuracy and precision of the models that we tested. The model also performed well when noise corrupted a high proportion of the optic flow vectors. Such a peripheral MSTd model performed well when units possessed a range of receptive field (RF) sizes and were strongly direction tuned. Physiological biases in MT direction tuning toward the radial direction also supported heading estimation, but the tendency for MT preferred speed and RF size to scale with eccentricity did not. Our findings help elucidate the extent to which different physiological tuning properties influence the accuracy and precision of neural heading signals.
10. Rodriguez R, Crane BT. Effect of timing delay between visual and vestibular stimuli on heading perception. J Neurophysiol 2021; 126:304-312. [PMID: 34191637] [DOI: 10.1152/jn.00351.2020]
Abstract
Heading direction is perceived based on visual and inertial cues. The current study examined the effect of their relative timing on the ability of offset visual headings to influence inertial perception. Seven healthy human subjects experienced 2 s of translation along a heading of 0°, ±35°, ±70°, ±105°, or ±140°. These inertial headings were paired with 2-s duration visual headings that were presented at relative offsets of 0°, ±30°, ±60°, ±90°, or ±120°. The visual stimuli were also presented at 17 temporal delays ranging from -500 ms (visual lead) to 2,000 ms (visual delay) relative to the inertial stimulus. After each stimulus, subjects reported the direction of the inertial stimulus using a dial. The bias of the inertial heading toward the visual heading was robust at ±250 ms: 8.0° ± 0.5° with a 30° offset, 12.2° ± 0.5° with a 60° offset, 11.7° ± 0.6° with a 90° offset, and 9.8° ± 0.7° with a 120° offset (mean bias toward visual ± SE, across subjects). The mean bias was much diminished at temporal misalignments of ±500 ms, and there was no longer any visual influence on the inertial heading when the visual stimulus was delayed by 1,000 ms or more. Although the amount of bias varied between subjects, the effect of delay was similar. NEW & NOTEWORTHY The effect of timing on visual-inertial integration in heading perception has not previously been examined. This study finds that visual direction influences inertial heading perception when timing differences are within 250 ms. This suggests that visual-inertial stimuli can be integrated over a wider range of temporal offsets than reported for visual-auditory integration, which may be due to the unique nature of inertial sensation: it can only sense acceleration, whereas the visual system senses position but encodes velocity.
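The bias metric reported above (how far the inertial report is pulled toward the offset visual heading) can be computed per trial roughly as follows; the wrapping convention and the example numbers are illustrative assumptions:

```python
def wrap(deg):
    """Wrap an angle in degrees to the interval [-180, 180)."""
    return (deg + 180.0) % 360.0 - 180.0

def bias_toward_visual(inertial, visual, report):
    """Signed shift of the reported inertial heading toward the visual
    heading, in degrees (positive = pulled toward the visual stimulus)."""
    error = wrap(report - inertial)    # signed report error
    offset = wrap(visual - inertial)   # visual-inertial discrepancy
    if offset == 0.0:
        return 0.0
    # Positive when the error is in the direction of the visual offset.
    return error if offset > 0 else -error

# Example: inertial heading 0 deg, visual offset +60 deg, report 12 deg
# -> 12 deg of pull toward the visual stimulus.
b = bias_toward_visual(0.0, 60.0, 12.0)
```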
Affiliation(s)
- Raul Rodriguez
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Benjamin T Crane
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Department of Otolaryngology, University of Rochester, Rochester, New York
- Department of Neuroscience, University of Rochester, Rochester, New York
11. Diaz-Artiles A, Karmali F. Vestibular Precision at the Level of Perception, Eye Movements, Posture, and Neurons. Neuroscience 2021; 468:282-320. [PMID: 34087393] [DOI: 10.1016/j.neuroscience.2021.05.028]
Abstract
Precision and accuracy are two fundamental properties of any system, including the nervous system. Reduced precision (i.e., imprecision) results from the presence of neural noise at each level of sensory, motor, and perceptual processing. This review has three objectives: (1) to show the importance of studying vestibular precision, and specifically that studying accuracy without studying precision ignores fundamental aspects of the vestibular system; (2) to synthesize key hypotheses about precision in vestibular perception, the vestibulo-ocular reflex, posture, and neurons; and (3) to show that groups of studies thought to be distinct (e.g., perceptual thresholds, subjective visual vertical variability, neuronal variability) are actually "two sides of the same coin", because the methods used allow results to be related to the standard deviation of a Gaussian distribution describing the underlying neural noise. Vestibular precision varies with age, stimulus amplitude, stimulus frequency, body orientation, motion direction, pathology, medication, and electrical/mechanical vestibular stimulation, but does not vary with sex. The brain optimizes precision during integration of vestibular cues with visual, auditory, and/or somatosensory cues. Since a common concern with precision metrics is the time required for testing, we describe approaches to optimize data collection and provide evidence that fatigue and session effects are minimal. Finally, we summarize how precision is an individual trait that is correlated with clinical outcomes in patients as well as with performance in functional tasks like balance. These findings highlight the importance of studying vestibular precision and accuracy, and show that knowledge gaps remain.
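The "two sides of the same coin" point can be made concrete: for an unbiased observer whose decisions are corrupted by Gaussian noise, a forced-choice threshold and the SD of that noise are related through the inverse cumulative Gaussian. A minimal sketch (the 84%-correct criterion is a common convention; the numbers are illustrative):

```python
from statistics import NormalDist

def threshold_from_sigma(sigma, p_correct=0.84):
    """Stimulus level at which an unbiased cumulative-Gaussian observer
    with noise SD `sigma` reaches `p_correct` in direction discrimination."""
    return sigma * NormalDist().inv_cdf(p_correct)

def sigma_from_threshold(threshold, p_correct=0.84):
    """Invert the relation: recover the underlying noise SD from a
    measured threshold at the given criterion."""
    return threshold / NormalDist().inv_cdf(p_correct)

# At the 84%-correct criterion, the threshold is ~1 SD of the noise,
# which is how thresholds, SVV variability, and neuronal variability
# can all be expressed on a common scale.
t = threshold_from_sigma(2.0)
s = sigma_from_threshold(t)
```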
Affiliation(s)
- Ana Diaz-Artiles
- Bioastronautics and Human Performance Laboratory, Department of Aerospace Engineering, Department of Health and Kinesiology, Texas A&M University, College Station, TX 77843-3141, USA. https://bhp.engr.tamu.edu
- Faisal Karmali
- Jenks Vestibular Physiology Laboratory, Massachusetts Eye and Ear Infirmary, Boston, MA, USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
12. Schmitt C, Baltaretu BR, Crawford JD, Bremmer F. A Causal Role of Area hMST for Self-Motion Perception in Humans. Cereb Cortex Commun 2020; 1:tgaa042. [PMID: 34296111] [PMCID: PMC8152865] [DOI: 10.1093/texcom/tgaa042]
Abstract
Previous studies in the macaque monkey have provided clear causal evidence for an involvement of the medial-superior-temporal area (MST) in the perception of self-motion. These studies also revealed an overrepresentation of contraversive heading. Human imaging studies have identified a functional equivalent (hMST) of macaque area MST, yet causal evidence for a role of hMST in heading perception is lacking. We employed neuronavigated transcranial magnetic stimulation (TMS) to test for such a causal relationship. We expected TMS over hMST to increase perceptual variance (i.e., impair precision) while leaving mean heading perception (accuracy) unaffected. We presented 8 human participants with an optic flow stimulus simulating forward self-motion across a ground plane in one of 3 directions. Participants indicated their perceived heading. In 57% of the trials, TMS pulses were applied, temporally centered on self-motion onset. The TMS stimulation site was either right-hemisphere hMST, identified by a functional magnetic resonance imaging (fMRI) localizer, or a control area just outside the fMRI localizer activation. As predicted, TMS over area hMST, but not over the control area, increased the response variance of perceived heading compared with no-TMS trials. As hypothesized, this effect was strongest for contraversive self-motion. These data provide the first causal evidence for a critical role of hMST in visually guided navigation.
Affiliation(s)
- Constanze Schmitt
- Department of Neurophysics, University of Marburg, Marburg, Germany
- Center for Mind, Brain and Behavior-CMBB, University of Marburg and Justus-Liebig-University Giessen, Germany
- International Research Training Group 1901: The Brain in Action
- Bianca R Baltaretu
- International Research Training Group 1901: The Brain in Action
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- International Research Training Group 1901: The Brain in Action
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada
- Departments of Psychology, Biology, Kinesiology and Health Science, York University, Toronto, Ontario, Canada
- Frank Bremmer
- Department of Neurophysics, University of Marburg, Marburg, Germany
- Center for Mind, Brain and Behavior-CMBB, University of Marburg and Justus-Liebig-University Giessen, Germany
- International Research Training Group 1901: The Brain in Action
13. Cuturi LF, Torazza D, Campus C, Merello A, Lorini C, Crepaldi M, Sandini G, Gori M. The RT-Chair: a Novel Motion Simulator to Measure Vestibular Perception. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:3318-3322. [PMID: 33018714] [DOI: 10.1109/embc44109.2020.9176295]
Abstract
Vestibular perception is useful for maintaining heading direction and successful spatial navigation. In this study, we present a novel piece of equipment capable of delivering both rotational and translational movements, the RT-Chair. The system comprises two motors and is controlled by the user via MATLAB. To validate that vestibular perception can be measured with the RT-Chair, we ran a threshold-measurement experiment with healthy participants. Our results show thresholds comparable to previous literature, confirming the validity of the system for measuring vestibular perception.
14. Gibson ME, Kim JJJ, McManus M, Harris LR. The effect of training on the perceived approach angle in visual vertical heading judgements in a virtual environment. Exp Brain Res 2020; 238:1861-1869. [PMID: 32514713] [PMCID: PMC7438363] [DOI: 10.1007/s00221-020-05841-8]
Abstract
Past studies have found poorer accuracy for vertical heading judgements than for horizontal heading judgements. In everyday life, precise vertical heading judgements are needed less often than horizontal ones, as we cannot usually control our vertical direction. However, pilots judging a landing approach need to consistently discriminate vertical heading angles to land safely. This study addresses the impact of training on participants' ability to judge their touchdown point relative to a target in a virtual environment with a clearly defined ground plane and horizon. Thirty-one participants completed a touchdown-point estimation task twice, using three angles of descent (3°, 6°, and 9°). Between the two testing sessions, half of the participants completed a flight-simulator landing training task that provided feedback on their vertical heading performance, while the other half completed a two-dimensional puzzle game as a control. Overall, participants were more precise in their responses in the second test than in the first (from a SD of ± 0.91° to ± 0.67°), but only the experimental group showed improvement in accuracy (from a mean error of - 2.1° to - 0.6°). Our results suggest that, with training, vertical heading judgements can be as accurate as horizontal heading judgements. This study is the first to show the effectiveness of training on vertical heading judgement in naïve individuals. The results are applicable in the field of aviation, informing possible strategies for pilot training.
Affiliation(s)
- Molly E Gibson
- Centre for Vision Research, York University, Toronto, ON, Canada
- John J-J Kim
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychology, York University, 4700 Keele St, Toronto, ON, M3J 1P3, Canada
- Meaghan McManus
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychology, York University, 4700 Keele St, Toronto, ON, M3J 1P3, Canada
- Laurence R Harris
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychology, York University, 4700 Keele St, Toronto, ON, M3J 1P3, Canada
15. Rodriguez R, Crane BT. Common causation and offset effects in human visual-inertial heading direction integration. J Neurophysiol 2020; 123:1369-1379. [PMID: 32130052] [DOI: 10.1152/jn.00019.2020]
Abstract
Movement direction can be determined from a combination of visual and inertial cues. Visual motion (optic flow) can represent self-motion through a fixed environment or environmental motion relative to an observer. Simultaneous visual and inertial heading cues present the question of whether the cues have a common cause (i.e., should be integrated) or should be considered independent. This was studied in eight healthy human subjects who experienced 12 visual and inertial headings in the horizontal plane, divided in 30° increments. The headings were estimated in two unisensory and six multisensory trial blocks. Each unisensory block included 72 stimulus presentations, while each multisensory block included 144 stimulus presentations, covering every possible combination of visual and inertial headings in random order. After each multisensory stimulus, subjects reported whether they perceived the visual and inertial headings as congruous (i.e., having common causation) or not. In the multisensory trial blocks, subjects also reported the visual or inertial heading direction (3 trial blocks for each). For aligned visual-inertial headings, the rate of reported common causation was higher for alignment in cardinal than in noncardinal directions. When visual and inertial stimuli were separated by 30°, the rate of reported common causation remained >50%, but it decreased to 15% or less for separations of ≥90°. The inertial heading was biased toward the visual heading by 11-20° for separations of 30-120°; thus there was sensory integration even in conditions without reported common causation. The visual heading was minimally influenced by the inertial direction. When trials with perceived common causation were compared with those without, inertial heading perception showed a stronger bias toward the visual stimulus direction. NEW & NOTEWORTHY Optic flow ambiguously represents self-motion or environmental motion. When these are in different directions, it is uncertain whether they are integrated into a common percept. This study addresses that question by determining whether the two modalities are perceived as consistent and by measuring their perceived directions to quantify the degree of visual influence. The visual stimulus can significantly influence the perceived inertial heading even when the two are perceived as inconsistent.
Affiliation(s)
- Raul Rodriguez
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Benjamin T Crane
- Department of Biomedical Engineering, University of Rochester, Rochester, New York; Department of Otolaryngology, University of Rochester, Rochester, New York; Department of Neuroscience, University of Rochester, Rochester, New York
16
de Winkel KN, Kurtz M, Bülthoff HH. Effects of visual stimulus characteristics and individual differences in heading estimation. J Vis 2019; 18:9. [PMID: 30347100 DOI: 10.1167/18.11.9] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Visual heading estimation is subject to periodic patterns of constant (bias) and variable (noise) error. The nature of the errors, however, appears to differ between studies, showing underestimation in some, but overestimation in others. We investigated whether field of view (FOV), the availability of binocular disparity cues, motion profile, and visual scene layout can account for error characteristics, with a potential mediating effect of vection. Twenty participants (12 females) reported heading and rated vection for visual horizontal motion stimuli with headings ranging the full circle, while we systematically varied the above factors. Overall, the results show constant errors away from the fore-aft axis. Error magnitude was affected by FOV, disparity, and scene layout. Variable errors varied with heading angle, and depended on scene layout. Higher vection ratings were associated with smaller variable errors. Vection ratings depended on FOV, motion profile, and scene layout, with the highest ratings for a large FOV, cosine-bell velocity profile, and a ground plane scene rather than a dot cloud scene. Although the factors did affect error magnitude, differences in its direction were observed only between participants. We show that the observations are consistent with prior beliefs that headings align with the cardinal axes, where the attraction of each axis is an idiosyncratic property.
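The cardinal-axis attraction described in this abstract can be illustrated with a minimal Bayesian sketch. The Gaussian-prior form, the linear MAP shortcut, and all parameter values below are assumptions for illustration, not the authors' fitted model:

```python
import numpy as np

def map_heading(x_obs, sigma_obs, sigma_prior):
    """MAP heading estimate under a prior concentrated at the nearest cardinal axis.

    With a Gaussian likelihood (width sigma_obs) and a Gaussian prior at the
    nearest cardinal direction (width sigma_prior), the MAP estimate is a
    reliability-weighted average: a narrower prior pulls the report harder
    toward 0/90/180/270 deg.
    """
    cardinal = 90.0 * np.round(np.asarray(x_obs) / 90.0)  # nearest cardinal axis
    w = sigma_prior**2 / (sigma_obs**2 + sigma_prior**2)  # weight on the observation
    return w * x_obs + (1 - w) * cardinal

# A 70-deg observation with a moderately strong prior is attracted toward 90 deg.
est = map_heading(x_obs=70.0, sigma_obs=10.0, sigma_prior=20.0)
```

Making `sigma_prior` an individual free parameter captures the paper's point that the strength of each axis's attraction is an idiosyncratic, per-participant property.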
Affiliation(s)
- Ksander N de Winkel
- Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Max Kurtz
- Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Human Factors and Engineering Psychology, University of Twente, Enschede, The Netherlands
- Heinrich H Bülthoff
- Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
17
Rodriguez R, Crane BT. Effect of range of heading differences on human visual-inertial heading estimation. Exp Brain Res 2019; 237:1227-1237. [PMID: 30847539 DOI: 10.1007/s00221-019-05506-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2018] [Accepted: 03/01/2019] [Indexed: 11/29/2022]
Abstract
Both visual and inertial cues are salient in heading determination. However, optic flow can ambiguously represent self-motion or environmental motion. It is unclear how visual and inertial heading cues are determined to have common cause and integrated vs perceived independently. In four experiments, visual and inertial headings were presented simultaneously, with ten subjects reporting visual or inertial headings in separate trial blocks. Experiment 1 examined inertial headings within 30° of straight-ahead and visual headings that were offset by up to 60°. Perception of the inertial heading was shifted in the direction of the visual stimulus by as much as 35° by the 60° offset, while perception of the visual stimulus remained largely uninfluenced. Experiment 2 used a ±140° range of inertial headings with up to 120° visual offset. This experiment found variable behavior between subjects, with most perceiving the sensory stimuli to be shifted towards an intermediate heading but a few perceiving the headings independently. The visual and inertial headings influenced each other even at the largest offsets. Experiments 3 and 4 had similar inertial headings to experiments 1 and 2, respectively, except subjects reported environmental motion direction. Experiment 4 displayed similar perceptual influences as experiment 2, but in experiment 3 percepts were independent. Results suggested that visual and inertial stimuli tend to be perceived as having common causation in most subjects with offsets up to 90°, although with significant variation in perception between individuals. Limiting the range of inertial headings caused the visual heading to dominate the perception.
Affiliation(s)
- Raul Rodriguez
- Department of Bioengineering, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA
- Benjamin T Crane
- Department of Bioengineering, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA; Department of Otolaryngology, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA; Department of Neuroscience, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA
18
The influence of target distance on perceptual self-motion thresholds and the vestibulo-ocular reflex during interaural translation. PROGRESS IN BRAIN RESEARCH 2019; 248:197-208. [DOI: 10.1016/bs.pbr.2019.04.037] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/11/2023]
19
Abstract
Detection of the state of self-motion, such as the instantaneous heading direction, the traveled trajectory and traveled distance or time, is critical for efficient spatial navigation. Numerous psychophysical studies have indicated that the vestibular system, originating from the otolith and semicircular canals in our inner ears, provides robust signals for different aspects of self-motion perception. In addition, vestibular signals interact with other sensory signals such as visual optic flow to facilitate natural navigation. These behavioral results are consistent with recent findings in neurophysiological studies. In particular, vestibular activity in response to the translation or rotation of the head/body in darkness is revealed in a growing number of cortical regions, many of which are also sensitive to visual motion stimuli. The temporal dynamics of the vestibular activity in the central nervous system can vary widely, ranging from acceleration-dominant to velocity-dominant. Different temporal dynamic signals may be decoded by higher level areas for different functions. For example, the acceleration signals during translation of the body in the horizontal plane may be used by the brain to estimate the heading directions. Although translation and rotation signals arise from independent peripheral organs, that is, otolith and canals, respectively, they frequently converge onto single neurons in the central nervous system including both the brainstem and the cerebral cortex. The convergent neurons typically exhibit stronger responses during a combined curved motion trajectory, which may serve as the neural correlate for complex path perception. During spatial navigation, traveled distance or time may be encoded by different populations of neurons in multiple regions including the hippocampal-entorhinal system, posterior parietal cortex, or frontal cortex.
Affiliation(s)
- Zhixian Cheng
- Department of Neuroscience, Yale School of Medicine, New Haven, CT, United States
- Yong Gu
- Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
20
Rosenberg MJ, Galvan-Garza RC, Clark TK, Sherwood DP, Young LR, Karmali F. Human manual control precision depends on vestibular sensory precision and gravitational magnitude. J Neurophysiol 2018; 120:3187-3197. [PMID: 30379610 DOI: 10.1152/jn.00565.2018] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023] Open
Abstract
Precise motion control is critical to human survival on Earth and in space. Motion sensation is inherently imprecise, and the functional implications of this imprecision are not well understood. We studied a "vestibular" manual control task in which subjects attempted to keep themselves upright with a rotational hand controller (i.e., joystick) to null out pseudorandom, roll-tilt motion disturbances of their chair in the dark. Our first objective was to study the relationship between intersubject differences in manual control performance and sensory precision, determined by measuring vestibular perceptual thresholds. Our second objective was to examine the influence of altered gravity on manual control performance. Subjects performed the manual control task while supine during short-radius centrifugation, with roll tilts occurring relative to centripetal accelerations of 0.5, 1.0, and 1.33 GC (1 GC = 9.81 m/s2). Roll-tilt vestibular precision was quantified with roll-tilt vestibular direction-recognition perceptual thresholds, the minimum movement that one can reliably distinguish as leftward vs. rightward. A significant intersubject correlation was found between manual control performance (defined as the standard deviation of chair tilt) and thresholds, consistent with sensory imprecision negatively affecting functional precision. Furthermore, compared with 1.0 GC manual control was more precise in 1.33 GC (-18.3%, P = 0.005) and less precise in 0.5 GC (+39.6%, P < 0.001). The decrement in manual control performance observed in 0.5 GC and in subjects with high thresholds suggests potential risk factors for piloting and locomotion, both on Earth and during human exploration missions to the moon (0.16 G) and Mars (0.38 G). NEW & NOTEWORTHY The functional implications of imprecise motion sensation are not well understood. 
We found a significant correlation between subjects' vestibular perceptual thresholds and performance in a manual control task (using a joystick to keep their chair upright), consistent with sensory imprecision negatively affecting functional precision. Furthermore, using an altered-gravity centrifuge configuration, we found that manual control precision was improved in "hypergravity" and degraded in "hypogravity." These results have potential relevance for postural control, aviation, and spaceflight.
Affiliation(s)
- Marissa J Rosenberg
- Jenks Vestibular Physiology Lab, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts; KBRwyle Science, Technology and Engineering, NASA Johnson Space Center, Houston, Texas; Center for Space Medicine, Baylor College of Medicine, Houston, Texas
- Raquel C Galvan-Garza
- Jenks Vestibular Physiology Lab, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts; Massachusetts Institute of Technology, Cambridge, Massachusetts
- Torin K Clark
- Jenks Vestibular Physiology Lab, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts; Massachusetts Institute of Technology, Cambridge, Massachusetts; Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts; University of Colorado at Boulder, Boulder, Colorado
- David P Sherwood
- Massachusetts Institute of Technology, Cambridge, Massachusetts
- Laurence R Young
- Massachusetts Institute of Technology, Cambridge, Massachusetts
- Faisal Karmali
- Jenks Vestibular Physiology Lab, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts; Massachusetts Institute of Technology, Cambridge, Massachusetts; Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts
21
Effect of vibration during visual-inertial integration on human heading perception during eccentric gaze. PLoS One 2018; 13:e0199097. [PMID: 29902253 PMCID: PMC6002115 DOI: 10.1371/journal.pone.0199097] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2017] [Accepted: 05/31/2018] [Indexed: 11/21/2022] Open
Abstract
Heading direction is determined from visual and inertial cues. Visual headings use retinal coordinates while inertial headings use body coordinates. Thus, during eccentric gaze, the same heading may be perceived differently by visual and inertial modalities. Stimulus weights depend on the relative reliability of these stimuli, but previous work suggests that the inertial heading may be given more weight than predicted. Those experiments varied only the visual stimulus reliability, and it is unclear what occurs with variation in inertial reliability. Five human subjects completed a heading discrimination task using 2 s of translation with a peak velocity of 16 cm/s. Eye position was ±25° left/right with visual, inertial, or combined motion. The visual motion coherence was 50%. Inertial stimuli included 6 Hz vertical vibration with 0, 0.10, 0.15, or 0.20 cm amplitude. Subjects reported perceived heading relative to the midline. With an inertial heading, perception was biased 3.6° towards the gaze direction. Visual headings biased perception 9.6° opposite gaze. The inertial threshold without vibration was 4.8°, which increased significantly to 8.8° with vibration, but the amplitude of vibration did not influence reliability. With visual-inertial headings, empirical stimulus weights were calculated from the bias and compared with the optimal weight calculated from the threshold. In 2 subjects empirical weights were near optimal, while in the remaining 3 subjects the inertial stimuli were weighted greater than optimal predictions. On average the inertial stimulus was weighted greater than predicted. These results indicate multisensory integration may not be a function of stimulus reliability when inertial stimulus reliability is varied.
22
Bremmer F, Churan J, Lappe M. Heading representations in primates are compressed by saccades. Nat Commun 2017; 8:920. [PMID: 29030557 PMCID: PMC5640607 DOI: 10.1038/s41467-017-01021-5] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2016] [Accepted: 08/13/2017] [Indexed: 01/06/2023] Open
Abstract
Perceptual illusions help to understand how sensory signals are decoded in the brain. Here we report that the opposite approach is also applicable, i.e., results from decoding neural activity from monkey extrastriate visual cortex correctly predict a hitherto unknown perceptual illusion in humans. We record neural activity from monkey medial superior temporal (MST) and ventral intraparietal (VIP) area during presentation of self-motion stimuli and concurrent reflexive eye movements. A heading decoder performs veridically during slow eye movements. During fast eye movements (saccades), however, the decoder erroneously reports compression of heading toward straight ahead. Functional equivalents of macaque areas MST and VIP have been identified in humans, implying a perceptual correlate (illusion) of this perisaccadic decoding error. Indeed, a behavioral experiment in humans shows that perceived heading is perisaccadically compressed toward the direction of gaze. Response properties of primate areas MST and VIP are consistent with being the substrate of the newly described visual illusion.

Macaque higher visual areas MST and VIP encode heading direction based on self-motion stimuli. Here the authors show that, while making saccades, the heading direction decoded from the neural responses is compressed toward straight-ahead, and independently demonstrate a perceptual illusion in humans based on this perisaccadic decoding error.
Affiliation(s)
- Frank Bremmer
- Department of Neurophysics & Marburg Center for Mind, Brain and Behavior - MCMBB, Philipps-Universität Marburg, Karl-von-Frisch Straße 8a, 35043, Marburg, Germany.
- Jan Churan
- Department of Neurophysics & Marburg Center for Mind, Brain and Behavior - MCMBB, Philipps-Universität Marburg, Karl-von-Frisch Straße 8a, 35043, Marburg, Germany
- Markus Lappe
- Department of Psychology & Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Fliednerstraße 21, 48149, Münster, Germany
23
Abstract
Perception of a stimulus can be characterized by two fundamental psychophysical measures: how well the stimulus can be discriminated from similar ones (discrimination threshold) and how strongly the perceived stimulus value deviates on average from the true stimulus value (perceptual bias). We demonstrate that perceptual bias and discriminability, as functions of the stimulus value, follow a surprisingly simple mathematical relation. The relation, which is derived from a theory combining optimal encoding and decoding, is well supported by a wide range of reported psychophysical data including perceptual changes induced by contextual modulation. The large empirical support indicates that the proposed relation may represent a psychophysical law in human perception. Our results imply that the computational processes of sensory encoding and perceptual decoding are matched and optimized based on identical assumptions about the statistical structure of the sensory environment.
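The "surprisingly simple mathematical relation" referred to here links perceptual bias to the slope of the squared discrimination threshold, b(θ) ∝ d/dθ[D(θ)²] (my reading of the reported law). A minimal numerical sketch, assuming a purely hypothetical threshold profile that dips at the cardinal orientations:

```python
import numpy as np

def predicted_bias(theta, D, c=0.5):
    """Predicted perceptual bias from a discrimination-threshold profile.

    Implements, up to the proportionality constant c, the relation
    b(theta) = c * d/dtheta [D(theta)^2] between bias and discriminability
    described in the abstract (the value of c here is arbitrary).
    """
    return c * np.gradient(D**2, theta)

# Hypothetical threshold profile over orientation: best discrimination
# (lowest D) at 0 and pi/2, worst at the obliques.
theta = np.linspace(0.0, np.pi, 181)
D = 1.0 + 0.5 * np.cos(4 * theta)
bias = predicted_bias(theta, D)

# Bias crosses zero where the threshold profile is flat (its extrema),
# so percepts are repelled from the well-discriminated orientations.
```

The sign structure is the interesting part: wherever discrimination is locally best, the predicted bias points away from that stimulus value, which is how the framework accounts for "anti-Bayesian" repulsive biases.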
24
Gimmon Y, Millar J, Pak R, Liu E, Schubert MC. Central not peripheral vestibular processing impairs gait coordination. Exp Brain Res 2017; 235:3345-3355. [PMID: 28819687 DOI: 10.1007/s00221-017-5061-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2017] [Accepted: 08/08/2017] [Indexed: 11/29/2022]
Abstract
Gait coordination is generated by neuronal inter-connections between central pattern generators in the spinal cord governed by cortical areas. Malfunction of central vestibular processing areas generates vestibular symptoms in the absence of an identifiable peripheral vestibular system lesion. Walking in the dark enforces a coordinated afference primarily from the vestibular and somatosensory systems. We hypothesized that patients with aberrant central vestibular processing would demonstrate unique gait characteristics and have impaired gait coordination compared with those patients with abnormal peripheral vestibular function and healthy controls. One hundred eighteen subjects were recruited. Peripheral vestibular function was determined based on laboratory and clinical examinations. Patients with abnormal central vestibular processing had normal peripheral vestibular function. Subjects were instructed to walk at a comfortable pace during three visual conditions: eyes open, eyes open and closed intermittently, and eyes closed. Both patient groups showed a similar spatiotemporal gait pattern, significantly different from the pattern of the healthy controls. However, only the central vestibular patient group had an abnormal coordination of gait as measured by the phase coordination index (PCI). There were no significant interactions between the groups and walking conditions. Peripheral vestibular deficits impair gait, though our data suggest that it is the central processing of such peripheral vestibular information that has a greater influence. This impairment may be related to a neural un-coupling between the brain and central pattern generator of the spinal cord based on the abnormal PCI, which seems to be a good indicator of the integrity of this linkage.
Affiliation(s)
- Yoav Gimmon
- Laboratory of Vestibular NeuroAdaptation, Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University School of Medicine, 601 N. Caroline Street, 6th Floor, Baltimore, MD, 21287-0910, USA
- Jennifer Millar
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Rebecca Pak
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Elizabeth Liu
- Laboratory of Vestibular NeuroAdaptation, Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University School of Medicine, 601 N. Caroline Street, 6th Floor, Baltimore, MD, 21287-0910, USA
- Michael C Schubert
- Laboratory of Vestibular NeuroAdaptation, Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University School of Medicine, 601 N. Caroline Street, 6th Floor, Baltimore, MD, 21287-0910, USA; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA
25
Crane BT. Effect of eye position during human visual-vestibular integration of heading perception. J Neurophysiol 2017; 118:1609-1621. [PMID: 28615328 DOI: 10.1152/jn.00037.2017] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2017] [Revised: 06/13/2017] [Accepted: 06/13/2017] [Indexed: 11/22/2022] Open
Abstract
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems.NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. 
The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability.
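The "Bayesian ideal" weighting tested in this study is standard inverse-variance cue combination. A minimal sketch with illustrative numbers loosely inspired by the abstract (the opposite-sign visual and inertial PSE shifts under eccentric gaze); the specific σ values are assumptions, not the paper's data:

```python
import numpy as np

def fuse_headings(mu_vis, sigma_vis, mu_inert, sigma_inert):
    """Reliability-weighted (Bayesian ideal) fusion of two heading cues.

    Each cue is weighted by its inverse variance, so the noisier cue is
    down-weighted; the fused variance is smaller than either cue's alone.
    """
    w_vis = sigma_inert**2 / (sigma_vis**2 + sigma_inert**2)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_inert
    sigma = np.sqrt((sigma_vis**2 * sigma_inert**2) /
                    (sigma_vis**2 + sigma_inert**2))
    return mu, sigma

# Eccentric-gaze scenario: visual heading biased -10.2 deg, inertial +4.6 deg
# (~15 deg apart, as in the abstract). At low visual coherence (noisy vision)
# the fused estimate sits near the inertial heading; at high coherence it
# moves toward the visual heading.
low_coh = fuse_headings(mu_vis=-10.2, sigma_vis=12.0, mu_inert=4.6, sigma_inert=4.0)
high_coh = fuse_headings(mu_vis=-10.2, sigma_vis=2.0, mu_inert=4.6, sigma_inert=4.0)
```

Comparing the empirically measured weight (from the PSE shift) against `w_vis` computed from unisensory thresholds is the standard test of ideal-observer behavior that the abstract describes.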
Affiliation(s)
- Benjamin T Crane
- Department of Otolaryngology, University of Rochester, Rochester, New York
26
de Winkel KN, Katliar M, Bülthoff HH. Causal Inference in Multisensory Heading Estimation. PLoS One 2017; 12:e0169676. [PMID: 28060957 PMCID: PMC5218471 DOI: 10.1371/journal.pone.0169676] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2016] [Accepted: 12/20/2016] [Indexed: 11/30/2022] Open
Abstract
A large body of research shows that the Central Nervous System (CNS) integrates multisensory information. However, this strategy should only apply to multisensory signals that have a common cause; independent signals should be segregated. Causal Inference (CI) models account for this notion. Surprisingly, previous findings suggested that visual and inertial cues on heading of self-motion are integrated regardless of discrepancy. We hypothesized that CI does occur, but that characteristics of the motion profiles affect multisensory processing. Participants estimated heading of visual-inertial motion stimuli with several different motion profiles and a range of intersensory discrepancies. The results support the hypothesis that judgments of signal causality are included in the heading estimation process. Moreover, the data suggest a decreasing tolerance for discrepancies and an increasing reliance on visual cues for longer duration motions.
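Causal-inference models of the kind invoked here compute the posterior probability that two cues share a cause from their discrepancy, then mix the fused and segregated estimates. A simplified model-averaging sketch; the flat heading prior, the `p_common` value, and all numbers are assumptions for illustration, not the authors' fitted parameters:

```python
import numpy as np

def ci_heading_estimate(x_vis, x_inert, sigma_vis, sigma_inert,
                        p_common=0.5, prior_range=np.pi):
    """Causal-inference estimate of inertial heading via model averaging.

    Infers the posterior probability of a common cause (C=1) from the cue
    discrepancy, then mixes the fused estimate with the segregated
    inertial-only estimate in proportion to that posterior.
    """
    var_v, var_i = sigma_vis**2, sigma_inert**2
    # Likelihood of the observed discrepancy under a common cause (C=1):
    like_c1 = (np.exp(-(x_vis - x_inert)**2 / (2 * (var_v + var_i)))
               / np.sqrt(2 * np.pi * (var_v + var_i)))
    # Under independent causes (C=2), the discrepancy is uninformative
    # given a flat heading prior of width prior_range:
    like_c2 = 1.0 / prior_range
    post_c1 = (like_c1 * p_common) / (like_c1 * p_common
                                      + like_c2 * (1 - p_common))
    # Inverse-variance fusion, used only to the extent a common cause is likely:
    fused = (x_vis / var_v + x_inert / var_i) / (1 / var_v + 1 / var_i)
    return post_c1 * fused + (1 - post_c1) * x_inert

# Small discrepancy -> cues bound, estimate pulled toward vision;
# large discrepancy -> cues segregated, estimate stays at the inertial cue.
near = ci_heading_estimate(x_vis=0.2, x_inert=0.0, sigma_vis=0.1, sigma_inert=0.1)
far = ci_heading_estimate(x_vis=2.0, x_inert=0.0, sigma_vis=0.1, sigma_inert=0.1)
```

The abstract's finding that tolerance for discrepancies shrinks with motion duration would correspond, in this framing, to `p_common` or the effective cue variances changing with the motion profile.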
Affiliation(s)
- Ksander N. de Winkel
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Baden-Württemberg, Germany
- Mikhail Katliar
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Baden-Württemberg, Germany
- Heinrich H. Bülthoff
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Baden-Württemberg, Germany
27
Crane BT. Perception of combined translation and rotation in the horizontal plane in humans. J Neurophysiol 2016; 116:1275-85. [PMID: 27334952 DOI: 10.1152/jn.00322.2016] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2016] [Accepted: 06/20/2016] [Indexed: 11/22/2022] Open
Abstract
Thresholds and biases of human motion perception were determined for yaw rotation and sway (left-right) and surge (fore-aft) translation, independently and in combination. Stimuli were 1 Hz sinusoid in acceleration with a peak velocity of 14°/s or cm/s. Test stimuli were adjusted based on prior responses, whereas the distracting stimulus was constant. Seventeen human subjects between the ages of 20 and 83 completed the experiments and were divided into 2 groups: younger and older than 50. Both sway and surge translation thresholds significantly increased when combined with yaw rotation. Rotation thresholds were not significantly increased by the presence of translation. The presence of a yaw distractor significantly biased perception of sway translation, such that during 14°/s leftward rotation, the point of subjective equality (PSE) occurred with sway of 3.2 ± 0.7 (mean ± SE) cm/s to the right. Likewise, during 14°/s rightward motion, the PSE was with sway of 2.9 ± 0.7 cm/s to the left. A sway distractor did not bias rotation perception. When subjects were asked to report the direction of translation while varying the axis of yaw rotation, the PSE at which translation was equally likely to be perceived in either direction was 29 ± 11 cm anterior to the midline. These results demonstrated that rotation biased translation perception, such that it is minimized when rotating about an axis anterior to the head. Since the combination of translation and rotation during ambulation is consistent with an axis anterior to the head, this may reflect a mechanism by which movements outside the pattern that occurs during ambulation are perceived.
Affiliation(s)
- Benjamin T Crane
- Department of Otolaryngology, University of Rochester, Rochester, New York; Department of Neurobiology and Anatomy, University of Rochester, Rochester, New York; and Department of Bioengineering, University of Rochester, Rochester, New York
28
Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object. J Neurosci 2016; 35:13599-607. [PMID: 26446214 DOI: 10.1523/jneurosci.2267-15.2015] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision.
Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion.
29
Rosenblatt SD, Crane BT. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane. PLoS One 2015; 10:e0142109. [PMID: 26536235 PMCID: PMC4633239 DOI: 10.1371/journal.pone.0142109] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2015] [Accepted: 10/16/2015] [Indexed: 12/02/2022] Open
Abstract
A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear whether such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing preset visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into five blocks by stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition: the illusory-motion controls were altered versions of the experimental image with the illusory motion effect removed; for the moving visual stimuli, controls were carried out in a dark room; and for the arrow stimuli, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval, with the inertial stimulus occurring over the final 1s; this allowed the perception of the visual illusion to be measured objectively. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus velocity (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields in both translation (p = 0.001) and rotation (p < 0.001), and for arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception, driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p > 0.1 for both). Thus, although a true moving visual field can induce self-motion, the results of this study show that illusory motion does not.
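The shift measure used here is the point of subjective equality (PSE) of a psychometric function fit to the binary direction reports. A minimal sketch of that kind of analysis on synthetic data (the velocities, trial counts, and cumulative-Gaussian form are illustrative assumptions, not the authors' analysis code):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(0)

def psychometric(v, pse, sigma):
    """P(report 'rightward') for an inertial stimulus of velocity v (cm/s)."""
    return norm.cdf(v, loc=pse, scale=sigma)

# Hypothetical data: 9 velocities, 20 trials each, and a visual stimulus
# that shifts the true PSE to +0.5 cm/s.
velocities = np.linspace(-4.0, 4.0, 9)
true_pse, true_sigma, n_trials = 0.5, 1.5, 20
n_right = rng.binomial(n_trials, psychometric(velocities, true_pse, true_sigma))

(pse_hat, sigma_hat), _ = curve_fit(psychometric, velocities,
                                    n_right / n_trials, p0=[0.0, 1.0],
                                    bounds=([-4.0, 0.1], [4.0, 10.0]))
print(f"estimated PSE shift: {pse_hat:.2f} cm/s (slope sigma {sigma_hat:.2f})")
```

A visual stimulus that biases perception shows up as a nonzero fitted PSE, which is how the p-values above would be obtained across subjects.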
Affiliation(s)
- Steven David Rosenblatt
- Department of Otolaryngology, University of Rochester, Rochester, New York, United States of America
- Benjamin Thomas Crane
- Department of Otolaryngology, University of Rochester, Rochester, New York, United States of America
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, New York, United States of America
- Department of Bioengineering, University of Rochester, Rochester, New York, United States of America
30
Wei XX, Stocker AA. A Bayesian observer model constrained by efficient coding can explain 'anti-Bayesian' percepts. Nat Neurosci 2015; 18:1509-17. [PMID: 26343249] [DOI: 10.1038/nn.4105]
Abstract
Bayesian observer models provide a principled account of the fact that our perception of the world rarely matches physical reality. The standard explanation is that our percepts are biased toward our prior beliefs. However, reported psychophysical data suggest that this view may be simplistic. We propose a new model formulation based on efficient coding that is fully specified for any given natural stimulus distribution. The model makes two new and seemingly anti-Bayesian predictions. First, it predicts that perception is often biased away from an observer's prior beliefs. Second, it predicts that stimulus uncertainty differentially affects perceptual bias depending on whether the uncertainty is induced by internal or external noise. We found that both model predictions match reported perceptual biases in perceived visual orientation and spatial frequency, and were able to explain data that have not been explained before. The model is general and should prove applicable to other perceptual variables and tasks.
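The core mechanism, repulsion away from the prior induced by an efficient-coding sensory mapping, can be reproduced numerically. The sketch below is my own construction under simplifying assumptions (a Gaussian prior, the prior CDF as the encoding, Gaussian noise in the encoded space, and a posterior-mean readout); it is not the authors' model code:

```python
import numpy as np

theta = np.linspace(-4.0, 4.0, 2001)
dtheta = theta[1] - theta[0]

# Prior over the stimulus, peaked at theta = 0.
prior = np.exp(-0.5 * theta**2)
prior /= prior.sum() * dtheta

# Efficient coding: the sensory variable is m = F(theta), the prior CDF,
# so coding resources follow the prior; noise is Gaussian in m-space.
F = np.cumsum(prior) * dtheta
sigma_m = 0.1

def mean_estimate(theta_true):
    """Posterior-mean estimate, averaged over the sensory noise."""
    m_true = np.interp(theta_true, theta, F)
    ms = np.linspace(m_true - 3 * sigma_m, m_true + 3 * sigma_m, 61)
    ests = np.empty_like(ms)
    for i, m in enumerate(ms):
        post = prior * np.exp(-0.5 * ((m - F) / sigma_m) ** 2)
        post /= post.sum() * dtheta
        ests[i] = (post * theta).sum() * dtheta
    w = np.exp(-0.5 * ((ms - m_true) / sigma_m) ** 2)   # p(m | theta_true)
    return np.average(ests, weights=w)

bias = mean_estimate(1.0) - 1.0
print(f"bias at theta = 1.0: {bias:+.3f}  (positive = away from the prior peak)")
```

The asymmetric likelihood produced by the nonuniform encoding skews estimates away from the prior peak, which is the "anti-Bayesian" repulsion the abstract describes.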
Affiliation(s)
- Xue-Xin Wei
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Alan A Stocker
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, Pennsylvania, USA
31
Crane BT. Coordinates of Human Visual and Inertial Heading Perception. PLoS One 2015; 10:e0135539. [PMID: 26267865] [PMCID: PMC4534459] [DOI: 10.1371/journal.pone.0135539]
Abstract
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing, but has not addressed the resulting perception. Seven human subjects experienced a 2s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition, 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model, which considered the relative sensitivity to lateral motion and the coordinate-system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates that visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction; thus, inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
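The direction and size of the reported shifts are consistent with a simple weighted mixture of body-centered and retina-centered coordinates. The sketch below is a hypothetical illustration (the weight w_retina = 0.55 is chosen only to land in the reported 13-17° range; it is not a parameter from the paper's PVD fit):

```python
def perceived_visual_heading(theta_deg, gaze_deg, w_retina=0.55):
    """Visual heading as a weighted mix of body-centered (theta) and
    retina-centered (theta - gaze) coordinates. w_retina = 0.55 is a
    hypothetical weight, not the paper's fitted value."""
    return (1.0 - w_retina) * theta_deg + w_retina * (theta_deg - gaze_deg)

def perceived_inertial_heading(theta_deg, gaze_deg):
    """Inertial heading stays in body-centered coordinates: gaze-independent."""
    return theta_deg

gaze = -28.0   # 28 deg leftward gaze shift
print(f"visual shift:   {perceived_visual_heading(0.0, gaze):+.1f} deg")
print(f"inertial shift: {perceived_inertial_heading(0.0, gaze):+.1f} deg")
```

A leftward gaze shift displaces the visual estimate in the opposite direction, while the inertial estimate is unchanged, matching the pattern the abstract reports.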
Affiliation(s)
- Benjamin Thomas Crane
- Department of Otolaryngology, University of Rochester, Rochester, NY, United States of America
- Department of Bioengineering, University of Rochester, Rochester, NY, United States of America
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
32
de Winkel KN, Katliar M, Bülthoff HH. Forced fusion in multisensory heading estimation. PLoS One 2015; 10:e0127104. [PMID: 25938235] [PMCID: PMC4418840] [DOI: 10.1371/journal.pone.0127104]
Abstract
It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging from 0° to 90° were introduced for the combined stimuli. In the unisensory conditions, visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are encountered only in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, with sensitivity to detect discrepancies differing between individuals.
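The two model classes compared here can be sketched as reliability-weighted (forced) fusion versus model-averaging causal inference. The implementation below is a generic illustration with hypothetical cue reliabilities, not the authors' fitted models:

```python
import math

def fused(mu_v, sig_v, mu_i, sig_i):
    """Forced fusion: reliability-weighted average of the two cues."""
    w_v = sig_i**2 / (sig_v**2 + sig_i**2)
    return w_v * mu_v + (1.0 - w_v) * mu_i

def causal_inference(mu_v, sig_v, mu_i, sig_i, p_common=0.5, prior_range=180.0):
    """Model-averaging causal inference (in the spirit of Kording et al. 2007):
    weight the fused estimate by the posterior probability of a common cause."""
    var_sum = sig_v**2 + sig_i**2
    # Likelihood of the observed cue discrepancy if there is one cause...
    like_c1 = math.exp(-0.5 * (mu_v - mu_i)**2 / var_sum) / math.sqrt(2 * math.pi * var_sum)
    # ...and if there are two independent causes (flat over the heading range).
    like_c2 = 1.0 / prior_range
    p_c1 = p_common * like_c1 / (p_common * like_c1 + (1.0 - p_common) * like_c2)
    return p_c1 * fused(mu_v, sig_v, mu_i, sig_i) + (1.0 - p_c1) * mu_i

# Hypothetical reliabilities: visual sigma 5 deg, inertial sigma 10 deg.
for d in (10.0, 45.0, 90.0):
    print(f"discrepancy {d:5.1f}:  fused {fused(0, 5, d, 10):6.2f}   "
          f"causal inference {causal_inference(0, 5, d, 10):6.2f}")
```

A forced-fusion observer keeps averaging the cues even at a 90° discrepancy, whereas a causal-inference observer discounts large conflicts; the majority of participants here behaved like the former.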
Affiliation(s)
- Ksander N. de Winkel
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
- Mikhail Katliar
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
- Heinrich H. Bülthoff
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
- Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713, Korea
33
Butler JS, Campos JL, Bülthoff HH. Optimal visual–vestibular integration under conditions of conflicting intersensory motion profiles. Exp Brain Res 2014; 233:587-97. [DOI: 10.1007/s00221-014-4136-1]
34
Lich M, Bremmer F. Self-motion perception in the elderly. Front Hum Neurosci 2014; 8:681. [PMID: 25309379] [PMCID: PMC4163979] [DOI: 10.3389/fnhum.2014.00681]
Abstract
Self-motion through space generates a visual pattern called optic flow, which can be used to determine one's direction of self-motion (heading). Previous studies have already shown that this perceptual ability, which is of critical importance in everyday life, changes with age. In most of these studies subjects were asked to judge whether they appeared to be heading to the left or right of a target, and thresholds were found to increase continuously with age. In our current study, we were interested in absolute rather than relative heading judgments, and in the question of a potential neural correlate of the age-related deterioration of heading perception. Two groups, older test subjects and younger controls, were shown optic flow stimuli in a virtual-reality setup. Visual stimuli simulated self-motion through a 3-D cloud of dots, and subjects had to indicate their perceived heading direction after each trial. In different subsets of experiments we individually varied relevant stimulus parameters: presentation time, number of dots in the display, stereoscopic vs. non-stereoscopic stimulation, and motion coherence. We found decrements in heading performance with age for each stimulus parameter. In a final step, we aimed to determine a putative neural basis of this behavioral decline. To this end we modified a neural network model which previously had proven capable of reproducing and predicting certain aspects of heading perception. We show that the observed data can be modeled by implementing an age-related neuronal cell loss in this neural network. We conclude that a continuous decline of certain aspects of motion perception, among them heading, might be based on an age-related progressive loss of groups of neurons activated by visual motion.
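The modeling step, decoding heading from a population of direction-tuned neurons and degrading it by removing cells, can be illustrated with a simple population-vector decoder. All numbers below (tuning shape, noise level, survival fraction) are illustrative assumptions, not the parameters of the authors' network:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 360
pref = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)  # preferred headings

def heading_error_sd(heading, survival=1.0, noise=0.2, trials=500):
    """SD of population-vector heading estimates (degrees), with a random
    fraction (1 - survival) of the neurons silenced to mimic cell loss."""
    alive = rng.random(n_neurons) < survival
    tuning = np.maximum(np.cos(pref - heading), 0.0)   # half-wave cosine tuning
    errs = []
    for _ in range(trials):
        rate = (tuning + noise * rng.standard_normal(n_neurons)) * alive
        est = np.arctan2((rate * np.sin(pref)).sum(), (rate * np.cos(pref)).sum())
        errs.append(np.degrees(np.angle(np.exp(1j * (est - heading)))))
    return float(np.std(errs))

young = heading_error_sd(np.radians(30.0), survival=1.0)
old = heading_error_sd(np.radians(30.0), survival=0.3)  # hypothetical 70% loss
print(f"heading error SD: intact {young:.1f} deg, 70% cell loss {old:.1f} deg")
```

Silencing a fraction of the population both shrinks the signal vector and leaves relatively more noise, so decoded heading becomes less precise, qualitatively reproducing the age-related decline described above.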
Affiliation(s)
- Matthias Lich
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
35
Human visual and vestibular heading perception in the vertical planes. J Assoc Res Otolaryngol 2013; 15:87-102. [PMID: 24249574] [DOI: 10.1007/s10162-013-0423-y]
Abstract
Heading estimation has not previously been reported in the vertical planes. This is a potentially interesting issue because, although the distribution of neuronal direction sensitivities is near uniform for vertical headings, there is an overrepresentation of otolith organs sensitive to motion in the horizontal relative to the vertical plane. Furthermore, thresholds of horizontal motion perception are considerably lower than those of vertical motion, which has the potential to bias heading perception. The current study measured heading estimation in 14 human subjects (ages 19 to 67) in response to vestibular motion of 14 cm (28 cm/s) over 360° of headings at 5° intervals. An analogous visual motion was tested in separate trials. In this study, earth and head vertical/horizontal were always aligned. Results demonstrated that the horizontal component of heading was overestimated relative to the vertical component for vestibular heading stimuli in the coronal (skew) and sagittal (elevation) planes. For visual headings, the bias was much smaller and in the opposite direction, such that the vertical component of heading was overestimated. Subjects older than 50 had significantly worse precision and larger biases than younger subjects in the vestibular conditions, although visual heading estimates were similar. A vector addition model was fit to the data, which explains the observed heading biases by the known distribution of otolith organs in humans. The greatly decreased precision with age is explained by the model through decreases in end-organ numbers, with a relatively greater loss of otoliths sensitive to vertical motion.
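The vector addition model's qualitative prediction, overestimation of the horizontal component when horizontally sensitive otoliths are overrepresented, can be sketched in a few lines (the horizontal gain of 1.6 is a hypothetical value, not the paper's fitted parameter):

```python
import math

def perceived_vertical_plane_heading(theta_deg, horiz_gain=1.6):
    """Vector-addition sketch: weight the horizontal component of an
    inertial heading more heavily than the vertical one, mimicking the
    overrepresentation of horizontally sensitive otolith afferents.
    theta_deg: true heading in a vertical plane (0 = horizontal, 90 = up).
    horiz_gain = 1.6 is a hypothetical value, not the fitted parameter."""
    t = math.radians(theta_deg)
    return math.degrees(math.atan2(math.sin(t), horiz_gain * math.cos(t)))

# A 45 deg upward heading is perceived as flatter, i.e. the horizontal
# component is overestimated relative to the vertical component.
print(f"{perceived_vertical_plane_heading(45.0):.1f} deg")
```

Purely horizontal and purely vertical headings are unchanged, while oblique vertical-plane headings are pulled toward horizontal, matching the reported vestibular bias.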