1. Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion. PLoS One 2024;19:e0295110. PMID: 38483949; PMCID: PMC10939277; DOI: 10.1371/journal.pone.0295110.
Abstract
To interact successfully with moving objects in our environment, we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete (that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion), object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate into biases and lowered precision in motion extrapolation. We investigated this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. First, participants were shown a ball moving laterally that disappeared after a certain time; they then indicated by button press when they thought the ball would have hit a target rectangle positioned in the environment. While the ball was visible, participants sometimes experienced simultaneous visual lateral self-motion in either the same or the opposite direction to the ball. The second task was a two-interval forced-choice task in which participants judged which of two motions was faster: in one interval they saw the same ball they had observed in the first task, while in the other they saw a ball cloud whose speed was controlled by a PEST staircase. While observing the single ball, they were again moved visually either in the same or the opposite direction as the ball, or they remained static. We found the expected biases in estimated time-to-contact, whereas for the speed estimation task this was only the case when the ball and observer moved in opposite directions. Our hypotheses regarding precision were largely unsupported by the data.
Overall, we draw several conclusions from this experiment. First, incomplete flow parsing can affect motion prediction. Further, our results suggest that time-to-contact estimation and speed judgements are determined by partially different mechanisms. Finally, and perhaps most strikingly, there appear to be compensatory mechanisms at play that allow for much higher-than-expected precision when observers experience self-motion, even when that self-motion is simulated only visually.
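The PEST staircase used in the speed-matching task above is an adaptive procedure that drives the comparison (ball-cloud) speed toward the point where the two intervals are judged equally fast. As a rough illustration only (a simplified sketch of the classic PEST rules, not the authors' implementation; all parameter values are assumptions):

```python
# Simplified PEST-style staircase (illustrative parameters, not the paper's code).
# The comparison speed is driven toward the speed at which the observer judges
# it faster than the single ball on about half of the trials.

def run_pest(respond, start=10.0, step=2.0, min_step=0.125, n_trials=40):
    """respond(speed) -> True if the comparison is judged faster."""
    speed = start
    last_direction = 0        # +1 = last change increased speed, -1 = decreased
    same_direction_steps = 0
    for _ in range(n_trials):
        judged_faster = respond(speed)
        direction = -1 if judged_faster else +1   # judged faster -> slow down
        if direction == last_direction:
            same_direction_steps += 1
            if same_direction_steps >= 3:         # PEST rule: double the step
                step *= 2.0                       # after repeated same-way moves
        else:
            step = max(step / 2.0, min_step)      # PEST rule: halve on reversal
            same_direction_steps = 0
        speed = max(speed + direction * step, 0.0)
        last_direction = direction
    return speed   # converges near the point of subjective equality

# Deterministic toy observer whose true matching speed is 6.0:
estimate = run_pest(lambda s: s > 6.0)
```

A real experiment would terminate on a minimum step size or a fixed number of reversals rather than a fixed trial count, and observer responses would of course be stochastic.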
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, Toronto, Ontario, Canada
2. Kuldavletova O, Denise P, Normand H, Quarck G, Etard O. Both whole-body rotation and visual flow induce cardiovascular autonomic response in human, but visual response is overridden by vestibular stimulation. Sci Rep 2023;13:4191. PMID: 36918631; PMCID: PMC10015060; DOI: 10.1038/s41598-023-31431-z.
Abstract
While the influence of vestibular and extra-vestibular gravity signals on the cardiovascular system has been demonstrated, there is little evidence that visual stimuli can trigger cardiovascular responses. Furthermore, there is no evidence of an interaction between visual and vestibular signals in autonomic control, as would be expected given how highly integrated the two systems are. The present study explored cardiovascular responses to vestibular and visual stimuli in normal subjects. We hypothesized that visual stimuli would modify the cardiovascular response to vestibular stimulation, especially when the latter is ambiguous with respect to gravity. Off-Vertical-Axis Rotation (OVAR) was used to stimulate vestibular and extra-vestibular receptors of gravity in 36 healthy young adults, while virtual reality was used for visual stimulation. Arterial pressure (AP), respiratory rate and ECG were measured, and the analysis accounted for the respiratory modulation of AP and heart rate (HR). Vestibular stimulation by OVAR modulated both mean arterial pressure (MAP) and HR, whereas visual stimulation significantly affected HR modulation but not MAP. Moreover, the specific visual effect was present only when the subjects were not in rotation. Therefore, visual stimulation can modulate heart rate but is overridden by vestibular stimulation during real movement.
Affiliation(s)
- O Kuldavletova, P Denise, H Normand, G Quarck, O Etard
- Université de Caen Normandie, Inserm, COMETE U1075, CYCERON, CHU de Caen, Normandie Univ, 14000 Caen, France
3. Zeng F, Zaidel A, Chen A. Contrary neuronal recalibration in different multisensory cortical areas. eLife 2023;12:82895. PMID: 36877555; PMCID: PMC9988259; DOI: 10.7554/elife.82895.
Abstract
The adult brain demonstrates remarkable multisensory plasticity by dynamically recalibrating itself based on information from multiple sensory sources. After a systematic visual-vestibular heading offset is experienced, the unisensory perceptual estimates for subsequently presented stimuli shift toward each other (in opposite directions) to reduce the conflict. The neural substrate of this recalibration is unknown. Here, we recorded single-neuron activity from the dorsal medial superior temporal (MSTd), parietoinsular vestibular cortex (PIVC), and ventral intraparietal (VIP) areas in three male rhesus macaques during this visual-vestibular recalibration. Both visual and vestibular neuronal tuning curves in MSTd shifted, each according to its respective cue's perceptual shift. Tuning of vestibular neurons in PIVC also shifted in the same direction as the vestibular perceptual shifts (these cells were not robustly tuned to the visual stimuli). By contrast, VIP neurons demonstrated a unique phenomenon: both vestibular and visual tuning shifted in accordance with the vestibular perceptual shifts, such that visual tuning shifted, surprisingly, contrary to the visual perceptual shifts. Therefore, while unsupervised recalibration (to reduce cue conflict) occurs in early multisensory cortices, higher-level VIP reflects only a global shift in vestibular space.
Affiliation(s)
- Fu Zeng
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China
4. Kirollos R, Herdman CM. Caloric vestibular stimulation induces vestibular circular vection even with a conflicting visual display presented in a virtual reality headset. i-Perception 2023;14:20416695231168093. PMID: 37113619; PMCID: PMC10126621; DOI: 10.1177/20416695231168093.
Abstract
This study explored visual-vestibular sensory integration when the vestibular system receives self-motion information through caloric irrigation. The objectives were to (1) determine whether measurable vestibular circular vection can be induced in healthy participants using caloric vestibular stimulation, and (2) determine whether a conflicting visual display can affect vestibular vection. In Experiment 1 (E1), participants kept their eyes closed. Air caloric vestibular stimulation cooled the endolymph of the horizontal semicircular canal, inducing vestibular circular vection. Participants reported the vection with a potentiometer knob that measured its direction, speed, and duration. In Experiment 2 (E2), participants viewed a stationary display in a virtual reality headset that did not signal self-motion while receiving caloric vestibular stimulation, producing a visual-vestibular conflict. Participants indicated clockwise vection for left-ear stimulation and counter-clockwise vection for right-ear stimulation in a significant proportion of trials in both E1 and E2. Vection was significantly slower and shorter in E2 than in E1. The E2 results demonstrate that during visual-vestibular conflict, visual and vestibular cues are jointly used to determine self-motion rather than one system overriding the other. These results are consistent with the optimal cue integration hypothesis.
Affiliation(s)
- Ramy Kirollos
- Defence Research and Development Canada, Toronto Research Center, 1133 Sheppard Ave. W., Toronto, Ontario, M3K 2C9, Canada; Visualization and Simulation Center, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario, K1S 5B6, Canada
5. Pimentel BN, dos Santos VAV. Influence of visual symptoms on posturographic performance after stroke. CoDAS 2023;35:e20200262. PMID: 36629550; PMCID: PMC10010430; DOI: 10.1590/2317-1782/20212020262.
Abstract
PURPOSE: To verify the occurrence of visual symptoms in subjects with dizziness after stroke, to compare posturographic results, and to correlate clinical aspects with the characteristics of the stroke. METHODS: This was an observational, cross-sectional study with quantitative analysis. Inclusion criteria were reporting dizziness after ischemic or hemorrhagic stroke and being at least 18 years old. We evaluated 50 patients through clinical anamnesis and Dynamic Foam-Laser Posturography. Anteroposterior deviations were calculated from the measures of each Sensory Organization Test (SOT) condition, and functional preferences were analyzed according to the SOT means. RESULTS: Twenty-eight subjects had stroke-related visual symptoms. The most prevalent kind of dizziness was imbalance, and the most frequent stroke type was ischemic, mainly in the carotid territory. Test values were below the reference standard; there was a relationship between older age and the proprioceptive system, and between visual preference and both the presence of visual symptoms and a posterior stroke location. CONCLUSION: There was a high frequency of visual symptoms among subjects with stroke sequelae, and these showed a significant relationship with the worst values for visual preference.
Affiliation(s)
- Bianca Nunes Pimentel
- Programa de Pós-graduação em Distúrbios da Comunicação Humana - UFSM - Santa Maria (RS), Brasil.
6. Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion: a registered report protocol. PLoS One 2023;18:e0267983. PMID: 36716328; PMCID: PMC9886253; DOI: 10.1371/journal.pone.0267983.
Abstract
To interact successfully with moving objects in our environment we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete (that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion), object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate into biases and lowered precision in motion extrapolation. We investigate this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. First, participants are shown a ball moving laterally that disappears after a certain time; they then indicate by button press when they think the ball would have hit a target rectangle positioned in the environment. While the ball is visible, participants sometimes experience simultaneous visual lateral self-motion in either the same or the opposite direction to the ball. The second task is a two-interval forced-choice task in which participants judge which of two motions is faster: in one interval they see the same ball they observed in the first task, while in the other they see a ball cloud whose speed is controlled by a PEST staircase. While observing the single ball, they are again moved visually either in the same or the opposite direction as the ball, or they remain static. We expect participants to overestimate the speed of a ball that moves opposite to their simulated self-motion (speed estimation task), which should then lead them to underestimate the time it takes the ball to reach the target rectangle (prediction task). Seeing the ball during visually simulated self-motion should increase variability in both tasks.
We further expect performance in the two tasks to be correlated, both in accuracy and in precision.
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, Toronto, Canada
7. How much I moved: Robust biases in self-rotation perception. Atten Percept Psychophys 2022;84:2670-2683. PMID: 36261764; PMCID: PMC9630243; DOI: 10.3758/s13414-022-02589-x.
Abstract
Vestibular cues are crucial for sensing the linear and angular acceleration of our head in three-dimensional space. Previous literature showed that vestibular information combines early with other sensory modalities, such as proprioception and vision, to facilitate spatial navigation. Recent studies suggest that auditory cues may improve self-motion perception as well. The present study investigated the ability to estimate passive rotational displacements with and without virtual acoustic landmarks, to determine how vestibular and auditory information interact in processing self-motion. We performed two experiments. In both, healthy participants sat on a Rotational-Translational Chair. They experienced yaw rotations about the earth-vertical axis and performed a self-motion discrimination task: with no visual information available, they estimated the amplitude of both clockwise and counterclockwise rotations, reporting whether they felt they had been rotated more or less than 45°. Depending on the condition, either vestibular-only or audio-vestibular information was available. Between the two experiments, we manipulated how the auditory cues were presented (passive vs. active production of sounds). We computed the point of subjective equality (PSE) as a measure of accuracy and the just noticeable difference (JND) as a measure of precision for each condition and rotation direction. Results in both experiments show a strong overestimation bias for the rotations, regardless of condition, direction, and sound-generation procedure. Similar to previously reported heading biases, this bias in rotation estimation may facilitate the perception of substantial deviations from the most relevant directions in daily navigation activities.
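The PSE and JND reported above are standard reads off a psychometric function fitted to the binary "more/less than 45°" responses. A minimal sketch of that computation (hypothetical response proportions and a plain grid-search maximum-likelihood fit, not the study's analysis pipeline):

```python
import math

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian: probability of a 'rotated more' response at angle x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_pse_jnd(angles, p_more, n_per_level=20):
    """Grid-search maximum-likelihood fit of a cumulative Gaussian.
    Returns (PSE, JND); JND is taken as 0.6745 * sigma, i.e. half the
    25%-75% interquartile spread of the fitted function."""
    best_mu, best_sigma, best_ll = None, None, -float("inf")
    for mu10 in range(300, 601, 2):        # mu: 30.0 to 60.0 deg, step 0.2
        mu = mu10 / 10.0
        for sig10 in range(10, 201, 2):    # sigma: 1.0 to 20.0 deg, step 0.2
            sigma = sig10 / 10.0
            ll = 0.0
            for x, p in zip(angles, p_more):
                q = min(max(cum_gauss(x, mu, sigma), 1e-9), 1.0 - 1e-9)
                k = round(p * n_per_level)  # "more" responses at this level
                ll += k * math.log(q) + (n_per_level - k) * math.log(1.0 - q)
            if ll > best_ll:
                best_mu, best_sigma, best_ll = mu, sigma, ll
    return best_mu, 0.6745 * best_sigma

# Hypothetical proportions of "rotated more than 45 deg" responses; an
# overestimation bias shows up as a PSE below the 45-deg standard.
angles = [25, 30, 35, 40, 45, 50, 55]
p_more = [0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.99]
pse, jnd = fit_pse_jnd(angles, p_more)
```

In practice one would use a proper optimizer and add lapse-rate parameters, but the PSE/JND definitions are the same.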
8. Gabriel GA, Harris LR, Henriques DYP, Pandi M, Campos JL. Multisensory visual-vestibular training improves visual heading estimation in younger and older adults. Front Aging Neurosci 2022;14:816512. PMID: 36092809; PMCID: PMC9452741; DOI: 10.3389/fnagi.2022.816512.
Abstract
Self-motion perception (e.g., when walking or driving) relies on the integration of multiple sensory cues, including visual, vestibular, and proprioceptive signals. Changes in the efficacy of multisensory integration have been observed in older adults (OAs), which can sometimes lead to errors in perceptual judgments and have been associated with functional declines such as increased falls risk. The objectives of this study were to determine whether passive, visual-vestibular self-motion heading perception could be improved by providing feedback during multisensory training, and whether training-related effects might be more apparent in OAs than in younger adults (YAs). We also investigated the extent to which training might transfer to improved standing balance. OAs and YAs were passively translated and asked to judge their direction of heading relative to straight ahead (left/right). Each participant completed three conditions: (1) vestibular-only (passive physical motion in the dark), (2) visual-only (cloud-of-dots display), and (3) bimodal (congruent vestibular and visual stimulation). Measures of heading precision and bias were obtained for each condition. Over the course of 3 days, participants made bimodal heading judgments and were provided with feedback ("correct"/"incorrect") on 900 training trials. Post-training, participants' biases and precision in all three sensory conditions (vestibular, visual, bimodal), and their standing-balance performance, were assessed. Results demonstrated improved overall precision (i.e., reduced JNDs) in heading perception after training; pre- vs. post-training difference scores showed that these improvements were found only in the visual-only condition.
Particularly notable, 27% of OAs initially could not discriminate their heading at all in the visual-only condition pre-training, but post-training obtained visual-only thresholds similar to those of the other participants. While OAs seemed to show optimal integration both pre- and post-training (i.e., no significant differences between predicted and observed JNDs), YAs showed optimal integration only post-training. There were no significant effects of training on bimodal or vestibular-only heading estimates, nor on standing-balance performance. These results indicate that it may be possible to improve unimodal (visual) heading perception using a multisensory (visual-vestibular) training paradigm, and they may help to inform interventions targeting tasks for which effective self-motion perception is important.
Affiliation(s)
- Grace A. Gabriel
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Laurence R. Harris
- Department of Psychology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Denise Y. P. Henriques
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Kinesiology, York University, Toronto, ON, Canada
- Maryam Pandi
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Jennifer L. Campos
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
9. Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022;39:125-137. PMID: 35821337; PMCID: PMC9849545; DOI: 10.1007/s12264-022-00916-8.
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of the new data, conventional views of the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires testing in future studies.
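The "statistically Bayesian-optimal" integration referred to above has a concrete quantitative form: each cue is weighted by its reliability (inverse variance), and the predicted variance of the combined estimate is lower than that of either cue alone. A small sketch with illustrative numbers (the headings and sigmas are assumptions, not data from the reviewed studies):

```python
import math

def optimal_combination(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Maximum-likelihood cue combination: inverse-variance weighting.
    The combined estimate is pulled toward the more reliable cue, and the
    predicted combined sigma is below both unisensory sigmas."""
    w_vis = 1.0 / sigma_vis ** 2
    w_vest = 1.0 / sigma_vest ** 2
    mu = (w_vis * mu_vis + w_vest * mu_vest) / (w_vis + w_vest)
    sigma = math.sqrt(1.0 / (w_vis + w_vest))
    return mu, sigma

# Illustrative heading estimates (deg); vision assumed more reliable here.
mu_comb, sigma_comb = optimal_combination(mu_vis=2.0, sigma_vis=3.0,
                                          mu_vest=6.0, sigma_vest=6.0)
```

With these numbers the combined heading is 2.8 deg, closer to the more reliable visual estimate, and the predicted sigma is sqrt(36/5) ≈ 2.68 deg, below the better cue's 3.0 deg. Bimodal JNDs smaller than either unisensory JND are the behavioral signature of near-optimal integration that the psychophysical studies test for.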
10. Gabriel GA, Harris LR, Gnanasegaram JJ, Cushing SL, Gordon KA, Haycock BC, Campos JL. Age-related changes to vestibular heave and pitch perception and associations with postural control. Sci Rep 2022;12:6426. PMID: 35440744; PMCID: PMC9018785; DOI: 10.1038/s41598-022-09807-4.
Abstract
Falls are a common cause of injury in older adults (OAs), and age-related declines across the sensory systems are associated with increased falls risk. The vestibular system is particularly important for maintaining balance and supporting safe mobility, and aging has been associated with declines in vestibular end-organ functioning. However, few studies have examined potential age-related differences in vestibular perceptual sensitivities or their association with postural stability. Here we used an adaptive-staircase procedure to measure detection and discrimination thresholds in 19 healthy OAs and 18 healthy younger adults (YAs) by presenting passive heave (linear up-and-down translations) and pitch (forward-backward tilt rotations) movements on a motion platform in the dark. We also examined participants' postural stability under various standing-balance conditions, and then examined associations between these postural measures and the vestibular perceptual thresholds. Ultimately, OAs showed larger heave and pitch detection thresholds than YAs, and larger perceptual thresholds were associated with greater postural sway, but only in OAs. Overall, these results suggest that vestibular perceptual sensitivity declines with older age and that such declines are associated with poorer postural stability. Future studies could consider the potential applicability of these results to the development of screening tools for falls prevention in OAs.
Affiliation(s)
- Grace A Gabriel
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, 500 University Avenue, Toronto, ON, M5G 2A2, Canada
- Laurence R Harris
- Department of Psychology and Centre for Vision Research, York University, Toronto, ON, Canada
- Joshua J Gnanasegaram
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Sharon L Cushing
- Department of Otolaryngology-Head and Neck Surgery, Hospital for Sick Children, Toronto, ON, Canada
- Department of Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Archie's Cochlear Implant Laboratory, Hospital for Sick Children, Toronto, ON, Canada
- Karen A Gordon
- Department of Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Archie's Cochlear Implant Laboratory, Hospital for Sick Children, Toronto, ON, Canada
- Bruce C Haycock
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- University of Toronto Institute for Aerospace Studies, Toronto, ON, Canada
- Jennifer L Campos
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, 500 University Avenue, Toronto, ON, M5G 2A2, Canada
11. Chung W, Barnett-Cowan M. Influence of Sensory Conflict on Perceived Timing of Passive Rotation in Virtual Reality. Multisens Res 2022;35:1-23. PMID: 35477696; DOI: 10.1163/22134808-bja10074.
Abstract
Integration of incoming sensory signals from multiple modalities is central to the determination of self-motion perception. With the emergence of consumer virtual reality (VR), it is becoming increasingly common to experience a mismatch in sensory feedback regarding motion when using immersive displays. In this study, we explored whether introducing discrepancies between vestibular and visual motion would influence the perceived timing of self-motion. Participants performed a series of temporal-order judgements between an auditory tone and a passive whole-body rotation on a motion platform, accompanied by visual feedback in a virtual environment presented through a head-mounted display. Sensory conflict was induced by altering the speed and direction with which the visual scene updated relative to the observer's physical rotation. There were no differences in the perceived timing of the rotation without vision, with congruent visual feedback, or when the visual motion updated more slowly than the physical rotation. However, the perceived timing was significantly further from zero when the direction of the visual motion was incongruent with the rotation. These findings demonstrate a potential interaction between visual and vestibular signals in the temporal perception of self-motion. Additionally, we recorded cybersickness ratings and found that sickness severity was significantly greater when visual motion was present and incongruent with the physical motion. This supports previous research on cybersickness and the sensory conflict theory, in which a mismatch between visual and vestibular signals increases the likelihood of sickness symptoms.
Affiliation(s)
- William Chung
- Department of Kinesiology, University of Waterloo, Waterloo, Ontario, Canada
12. Rodriguez R, Crane BT. Effect of timing delay between visual and vestibular stimuli on heading perception. J Neurophysiol 2021;126:304-312. PMID: 34191637; DOI: 10.1152/jn.00351.2020.
Abstract
Heading direction is perceived based on visual and inertial cues. The current study examined the effect of their relative timing on the ability of offset visual headings to influence inertial perception. Seven healthy human subjects experienced 2 s of translation along a heading of 0°, ±35°, ±70°, ±105°, or ±140°. These inertial headings were paired with 2-s visual headings presented at relative offsets of 0°, ±30°, ±60°, ±90°, or ±120°. The visual stimuli were also presented at 17 temporal delays ranging from -500 ms (visual lead) to 2,000 ms (visual delay) relative to the inertial stimulus. After each stimulus, subjects reported the direction of the inertial stimulus using a dial. The bias of the inertial heading toward the visual heading was robust at delays within ±250 ms; across subjects it was 8.0° ± 0.5° with a 30° offset, 12.2° ± 0.5° with a 60° offset, 11.7° ± 0.6° with a 90° offset, and 9.8° ± 0.7° with a 120° offset (mean bias toward visual ± SE). The mean bias was much diminished at temporal misalignments of ±500 ms, and there was no longer any visual influence on the inertial heading when the visual stimulus was delayed by 1,000 ms or more. Although the amount of bias varied between subjects, the effect of delay was similar.
NEW & NOTEWORTHY The effect of timing on visual-inertial integration in heading perception had not been previously examined. This study finds that visual direction influences inertial heading perception when timing differences are within 250 ms. This suggests that visual-inertial stimuli can be integrated over a wider temporal range than reported for visual-auditory integration, which may be due to the unique nature of inertial sensation: it can only sense acceleration, whereas the visual system senses position but encodes velocity.
Affiliation(s)
- Raul Rodriguez
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Benjamin T Crane
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Department of Otolaryngology, University of Rochester, Rochester, New York
- Department of Neuroscience, University of Rochester, Rochester, New York
13. Pöhlmann KMT, Föcker J, Dickinson P, Parke A, O'Hare L. The Effect of Motion Direction and Eccentricity on Vection, VR Sickness and Head Movements in Virtual Reality. Multisens Res 2021;34:1-40. PMID: 33882451; DOI: 10.1163/22134808-bja10049.
Abstract
Virtual Reality (VR) experienced through head-mounted displays often leads to vection, discomfort and sway in the user. This study investigated the effect of motion direction and eccentricity on these three phenomena, using optic flow patterns displayed on the Valve Index. Visual motion stimuli were presented in the centre, periphery or far periphery and moved either in depth (back and forth) or laterally (left and right). Overall, vection was stronger for motion in depth than for lateral motion. Additionally, eccentricity primarily affected stimuli moving in depth, with stronger vection for more peripherally presented motion patterns than for more central ones. Motion direction affected the various aspects of VR sickness differently and modulated the effect of eccentricity on VR sickness: for stimuli moving in depth, far-peripheral presentation caused more discomfort, whereas for lateral motion, central stimuli caused more discomfort. Stimuli moving in depth led to more head movements in the anterior-posterior direction when the entire visual field was stimulated. Observers made more head movements in the anterior-posterior direction than in the medio-lateral direction throughout the experiment, independent of the motion direction or eccentricity of the presented stimulus. Head movements were elicited on the same plane as the moving stimulus only for stimuli moving in depth and covering the entire visual field. Correlational analyses showed positive relationships between dizziness and vection duration, and between general discomfort and sway. Identifying where in the visual field presented motion causes the least VR sickness without sacrificing vection and presence can guide the development of Virtual Reality games, training and treatment programmes.
Affiliation(s)
- Julia Föcker: School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Patrick Dickinson: School of Computer Science, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Adrian Parke: School of Media, Culture and Society, University of the West of Scotland, Paisley Campus, Paisley PA1 2BE, UK
- Louise O'Hare: Division of Psychology, Nottingham Trent University, 50 Shakespeare Street, Nottingham, NG1 4FQ, UK
14
Kenney DM, Jabbari Y, von Mohrenschildt M, Shedden JM. Visual-vestibular integration is preserved with healthy aging in a simple acceleration detection task. Neurobiol Aging 2021; 104:71-81. [PMID: 33975121] [DOI: 10.1016/j.neurobiolaging.2021.03.017]
Abstract
Aging is associated with a gradual decline in the sensory systems and noisier sensory information. Some research has found that older adults compensate for this with enhanced multisensory integration. However, less is known about how aging influences visual-vestibular integration, an ability that underlies self-motion perception. We examined how visual-vestibular integration changes in participants from across the lifespan (18-79 years old) with a simple reaction time task. Participants were instructed to respond to visual (optic flow) and vestibular (inertial motion) acceleration cues, presented either alone or at a stimulus onset asynchrony. We measured reaction times and computed the violation area relative to the race model inequality as a measure of visual-vestibular integration. Across all ages, the greatest visual-vestibular integration occurred when the vestibular cue was presented first. Age was associated with longer reaction times and a significantly lower detection rate in the vestibular-only condition, a finding that is consistent with an age-related increase in vestibular noise. Although the relationship between age and visual-vestibular integration was positive, the effect size was very small and did not reach statistical significance. Our results suggest that although age is associated with a significant increase in vestibular perceptual threshold, the relative amount of visual-vestibular integration remains largely intact.
Affiliation(s)
- Darren M Kenney: Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Yasaman Jabbari: Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Judith M Shedden: Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
15
The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. [PMID: 33127626] [PMCID: PMC7688306] [DOI: 10.1523/eneuro.0259-20.2020]
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
16
Velocity influences the relative contributions of visual and vestibular cues to self-acceleration. Exp Brain Res 2020; 238:1423-1432. [DOI: 10.1007/s00221-020-05824-9]
17
de Winkel KN, Kurtz M, Bülthoff HH. Effects of visual stimulus characteristics and individual differences in heading estimation. J Vis 2019; 18:9. [PMID: 30347100] [DOI: 10.1167/18.11.9]
Abstract
Visual heading estimation is subject to periodic patterns of constant (bias) and variable (noise) error. The nature of these errors, however, appears to differ between studies, showing underestimation in some but overestimation in others. We investigated whether field of view (FOV), the availability of binocular disparity cues, motion profile, and visual scene layout can account for error characteristics, with a potential mediating effect of vection. Twenty participants (12 females) reported heading and rated vection for visual horizontal motion stimuli with headings spanning the full circle, while we systematically varied the above factors. Overall, the results show constant errors away from the fore-aft axis. Error magnitude was affected by FOV, disparity, and scene layout. Variable errors varied with heading angle and depended on scene layout. Higher vection ratings were associated with smaller variable errors. Vection ratings depended on FOV, motion profile, and scene layout, with the highest ratings for a large FOV, a cosine-bell velocity profile, and a ground-plane scene rather than a dot-cloud scene. Although these factors affected error magnitude, differences in error direction were observed only between participants. We show that the observations are consistent with prior beliefs that headings align with the cardinal axes, where the attraction of each axis is an idiosyncratic property.
Affiliation(s)
- Ksander N de Winkel: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Max Kurtz: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Human Factors and Engineering Psychology, University of Twente, Enschede, The Netherlands
- Heinrich H Bülthoff: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
18
Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019; 116:9060-9065. [PMID: 30996126] [DOI: 10.1073/pnas.1820373116]
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
19
Burkitt JJ, Campos JL, Lyons JL. Iterative Spatial Updating During Forward Linear Walking Revealed Using a Continuous Pointing Task. J Mot Behav 2019; 52:145-166. [PMID: 30982465] [DOI: 10.1080/00222895.2019.1599807]
Abstract
The continuous pointing task uses target-directed pointing responses to determine how perceived distance traveled is estimated during forward linear walking movements. To more precisely examine the regulation of this online process, the current study measured upper extremity joint angles and step-cycle kinematics in full vision and no-vision continuous pointing movements. Results show perceptual under-estimation of traveled distance in no-vision trials compared to full vision trials. Additionally, parsing of the shoulder plane of elevation trajectories revealed discontinuities that reflected this perceptual under-estimation and that were most frequently coupled with the early portion of the right foot swing phase of the step-cycle. This suggests that spatial updating may be composed of discrete iterations that are associated with gait parameters.
Affiliation(s)
- James J Burkitt: Department of Kinesiology, McMaster University, Hamilton, ON, Canada; Faculty of Health Sciences, University of Ontario Institute of Technology, Oshawa, ON, Canada
- Jennifer L Campos: Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- James L Lyons: Department of Kinesiology, McMaster University, Hamilton, ON, Canada
20
Britton Z, Arshad Q. Vestibular and Multi-Sensory Influences Upon Self-Motion Perception and the Consequences for Human Behavior. Front Neurol 2019; 10:63. [PMID: 30899238] [PMCID: PMC6416181] [DOI: 10.3389/fneur.2019.00063]
Abstract
In this manuscript, we comprehensively review both the human and animal literature regarding vestibular and multi-sensory contributions to self-motion perception. This covers the anatomical basis and how and where the signals are processed at all levels from the peripheral vestibular system to the brainstem and cerebellum and finally to the cortex. Further, we consider how and where these vestibular signals are integrated with other sensory cues to facilitate self-motion perception. We conclude by demonstrating the wide-ranging influences of the vestibular system and self-motion perception upon behavior, namely eye movement, postural control, and spatial awareness as well as new discoveries that such perception can impact upon numerical cognition, human affect, and bodily self-consciousness.
Affiliation(s)
- Zelie Britton: Department of Neuro-Otology, Charing Cross Hospital, Imperial College London, London, United Kingdom
- Qadeer Arshad: Department of Neuro-Otology, Charing Cross Hospital, Imperial College London, London, United Kingdom
21
Abstract
Detection of the state of self-motion, such as the instantaneous heading direction, the traveled trajectory and the traveled distance or time, is critical for efficient spatial navigation. Numerous psychophysical studies have indicated that the vestibular system, originating from the otolith organs and semicircular canals in our inner ears, provides robust signals for different aspects of self-motion perception. In addition, vestibular signals interact with other sensory signals such as visual optic flow to facilitate natural navigation. These behavioral results are consistent with recent findings in neurophysiological studies. In particular, vestibular activity in response to the translation or rotation of the head/body in darkness is revealed in a growing number of cortical regions, many of which are also sensitive to visual motion stimuli. The temporal dynamics of the vestibular activity in the central nervous system can vary widely, ranging from acceleration-dominant to velocity-dominant. Different temporal dynamic signals may be decoded by higher-level areas for different functions. For example, the acceleration signals during translation of the body in the horizontal plane may be used by the brain to estimate heading directions. Although translation and rotation signals arise from independent peripheral organs, that is, the otoliths and canals, respectively, they frequently converge onto single neurons in the central nervous system, including both the brainstem and the cerebral cortex. The convergent neurons typically exhibit stronger responses during a combined curved motion trajectory, which may serve as the neural correlate for complex path perception. During spatial navigation, traveled distance or time may be encoded by different populations of neurons in multiple regions including the hippocampal-entorhinal system, posterior parietal cortex, and frontal cortex.
Affiliation(s)
- Zhixian Cheng: Department of Neuroscience, Yale School of Medicine, New Haven, CT, United States
- Yong Gu: Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
22
Gu Y. Vestibular signals in primate cortex for self-motion perception. Curr Opin Neurobiol 2018; 52:10-17. [DOI: 10.1016/j.conb.2018.04.004]
23
Effect of vibration during visual-inertial integration on human heading perception during eccentric gaze. PLoS One 2018; 13:e0199097. [PMID: 29902253] [PMCID: PMC6002115] [DOI: 10.1371/journal.pone.0199097]
Abstract
Heading direction is determined from visual and inertial cues. Visual headings use retinal coordinates while inertial headings use body coordinates. Thus, during eccentric gaze the same heading may be perceived differently by the visual and inertial modalities. Stimulus weights depend on the relative reliability of these stimuli, but previous work suggests that the inertial heading may be given more weight than predicted. Those experiments only varied the visual stimulus reliability, and it is unclear what occurs with variation in inertial reliability. Five human subjects completed a heading discrimination task using 2 s of translation with a peak velocity of 16 cm/s. Eye position was ±25° left/right with visual, inertial, or combined motion. The visual motion coherence was 50%. Inertial stimuli included 6 Hz vertical vibration with 0, 0.10, 0.15, or 0.20 cm amplitude. Subjects reported perceived heading relative to the midline. With an inertial heading, perception was biased 3.6° towards the gaze direction. Visual headings biased perception 9.6° opposite gaze. The inertial threshold without vibration was 4.8°, which increased significantly to 8.8° with vibration, but the amplitude of vibration did not influence reliability. With visual-inertial headings, empirical stimulus weights were calculated from the bias and compared with the optimal weight calculated from the threshold. In 2 subjects empirical weights were near optimal, while in the remaining 3 subjects the inertial stimuli were weighted more than optimal predictions. On average the inertial stimulus was weighted more than predicted. These results indicate that multisensory integration may not be a function of stimulus reliability when inertial stimulus reliability is varied.
24
Yang L, Gu Y. Distinct spatial coordinate of visual and vestibular heading signals in macaque FEFsem and MSTd. eLife 2017; 6. [PMID: 29134944] [PMCID: PMC5685470] [DOI: 10.7554/elife.29809]
Abstract
Precise heading estimation requires integration of visual optic flow and vestibular inertial motion signals, which originate in distinct spatial coordinates (eye- and head-centered, respectively). To explore whether the two heading signals may share a common reference frame along the hierarchy of cortical stages, we examined two multisensory areas in macaques: the smooth pursuit area of the frontal eye field (FEFsem), closer to the motor side, and the dorsal portion of the medial superior temporal area (MSTd), closer to the sensory side. In both areas, vestibular signals are head-centered, whereas visual signals are mainly eye-centered. However, visual signals in FEFsem are shifted more towards head coordinates than in MSTd. These results are robust, being largely independent of: (1) smooth pursuit eye movements, (2) motion parallax cues, and (3) behavioral context for active heading estimation, indicating that visual and vestibular heading signals may be represented in distinct spatial coordinates in sensory cortices.
Affiliation(s)
- Lihua Yang: Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu: Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
25
Nesti A, de Winkel K, Bülthoff HH. Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation. PLoS One 2017; 12:e0170497. [PMID: 28125681] [PMCID: PMC5268484] [DOI: 10.1371/journal.pone.0170497]
Abstract
While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing us to complete crucial tasks such as maintaining balance. However, little is known about how the duration of motion stimuli influences performance in a self-motion discrimination task. Here we study the human ability to discriminate intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3 and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported which one they perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performance. The observed trends in differential thresholds are consistent with predictions based on a drift diffusion model with leaky integration of sensory evidence.
Affiliation(s)
- Alessandro Nesti: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Ksander de Winkel: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Heinrich H Bülthoff: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
26
de Winkel KN, Katliar M, Bülthoff HH. Causal Inference in Multisensory Heading Estimation. PLoS One 2017; 12:e0169676. [PMID: 28060957] [PMCID: PMC5218471] [DOI: 10.1371/journal.pone.0169676]
Abstract
A large body of research shows that the Central Nervous System (CNS) integrates multisensory information. However, this strategy should only apply to multisensory signals that have a common cause; independent signals should be segregated. Causal Inference (CI) models account for this notion. Surprisingly, previous findings suggested that visual and inertial cues on heading of self-motion are integrated regardless of discrepancy. We hypothesized that CI does occur, but that characteristics of the motion profiles affect multisensory processing. Participants estimated heading of visual-inertial motion stimuli with several different motion profiles and a range of intersensory discrepancies. The results support the hypothesis that judgments of signal causality are included in the heading estimation process. Moreover, the data suggest a decreasing tolerance for discrepancies and an increasing reliance on visual cues for longer duration motions.
Affiliation(s)
- Ksander N. de Winkel: Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Baden-Württemberg, Germany
- Mikhail Katliar: Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Baden-Württemberg, Germany
- Heinrich H. Bülthoff: Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Baden-Württemberg, Germany
27
Ramkhalawansingh R, Keshavarz B, Haycock B, Shahab S, Campos JL. Examining the Effect of Age on Visual–Vestibular Self-Motion Perception Using a Driving Paradigm. Perception 2016; 46:566-585. [DOI: 10.1177/0301006616675883]
Abstract
Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual–vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual–vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.
Affiliation(s)
- Robert Ramkhalawansingh: Department of Psychology, University of Toronto, Canada; Toronto Rehabilitation Institute, University Health Network, Canada
- Behrang Keshavarz: Toronto Rehabilitation Institute, University Health Network, Canada; Department of Psychology, Ryerson University
- Bruce Haycock: Toronto Rehabilitation Institute, University Health Network, Canada; Institute for Aerospace Studies, University of Toronto, Canada
- Saba Shahab: Faculty of Medicine, University of Toronto, Canada
- Jennifer L. Campos: Toronto Rehabilitation Institute, University Health Network, Canada; Department of Psychology, University of Toronto, Canada
28
Nooij SAE, Nesti A, Bülthoff HH, Pretto P. Perception of rotation, path, and heading in circular trajectories. Exp Brain Res 2016; 234:2323-2337. [PMID: 27056085] [PMCID: PMC4923114] [DOI: 10.1007/s00221-016-4638-0]
Abstract
When in darkness, humans can perceive the direction and magnitude of rotations and of linear translations in the horizontal plane. The current paper addresses the integrated perception of combined translational and rotational motion, as it occurs when moving along a curved trajectory. We questioned whether the perceived motion through the environment follows the predictions of self-motion perception models (e.g., Merfeld et al. in J Vestib Res 3:141-161, 1993; Newman in A multisensory observer model for human spatial orientation perception, 2009), which assume linear addition of rotational and translational components. For curved motion in darkness, such models predict a non-veridical motion percept, consisting of an underestimation of the perceived rotation, a distortion of the perceived travelled path, and a bias in the perceived heading (i.e., the perceived instantaneous direction of motion with respect to the body). These model predictions were evaluated in two experiments. In Experiment 1, seven participants were moved along a circular trajectory in darkness while facing the motion direction. They indicated perceived yaw rotation using an online tracking task, and perceived travelled path by drawings. In Experiment 2, the heading was systematically varied, and six participants indicated, in a 2-alternative forced-choice task, whether they perceived themselves as facing inward or outward of the circular path. Overall, we found no evidence for the heading bias predicted by the models. This suggests that the sum of the perceived rotational and translational components alone cannot adequately explain the overall perceived motion through the environment. Possibly, knowledge about motion dynamics and familiar stimulus combinations plays an important additional role in shaping the percept.
Affiliation(s)
- Suzanne A E Nooij: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Alessandro Nesti: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Heinrich H Bülthoff: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Paolo Pretto: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
29
Nash CJ, Cole DJ, Bigler RS. A review of human sensory dynamics for application to models of driver steering and speed control. Biol Cybern 2016; 110:91-116. [PMID: 27086133] [PMCID: PMC4903114] [DOI: 10.1007/s00422-016-0682-x]
Abstract
In comparison with today's detailed knowledge of vehicle dynamics, the role of the driver in the driver-vehicle system is still relatively poorly understood. A large variety of driver models exist for various applications; however, few of them take account of the driver's sensory dynamics, and those that do are limited in scope and accuracy. A review of the literature was carried out to consolidate information from previous studies that may be useful when incorporating human sensory systems into the design of a driver model. This includes information on sensory dynamics, delays, thresholds and the integration of multiple sensory stimuli. The review should provide a basis for further study of sensory perception during driving.
Affiliation(s)
- Christopher J. Nash: Cambridge University Engineering Department, Trumpington Street, Cambridge, CB2 1PZ, UK
- David J. Cole: Cambridge University Engineering Department, Trumpington Street, Cambridge, CB2 1PZ, UK
- Robert S. Bigler: Cambridge University Engineering Department, Trumpington Street, Cambridge, CB2 1PZ, UK
30
Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction. Sci Rep 2016; 6:26301. [PMID: 27198907] [PMCID: PMC4873743] [DOI: 10.1038/srep26301]
Abstract
Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion.
31
Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object. J Neurosci 2016; 35:13599-607. [PMID: 26446214; DOI: 10.1523/jneurosci.2267-15.2015] [Citation(s) in RCA: 36; Impact Index Per Article: 4.5] [Indexed: 11/21/2022]
Abstract
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight ahead) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work suggesting that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion.
Significance statement: Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion.
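The optimal cue integration prediction referenced in this abstract can be illustrated with a minimal sketch: under maximum-likelihood integration of two independent Gaussian cues, the predicted combined discrimination threshold falls below either single-cue threshold. The threshold values below are hypothetical, not figures from the study.

```python
import math

def combined_threshold(sigma_vis: float, sigma_vest: float) -> float:
    """Maximum-likelihood prediction for the combined discrimination
    threshold of two independent Gaussian cues: the combined variance
    is the product of the single-cue variances divided by their sum."""
    return math.sqrt((sigma_vis ** 2 * sigma_vest ** 2) /
                     (sigma_vis ** 2 + sigma_vest ** 2))

# Hypothetical single-cue heading thresholds (degrees):
print(round(combined_threshold(2.0, 3.0), 3))  # 1.664, smaller than either cue alone
```

A measured multisensory threshold matching this prediction, rather than simply matching the better single cue, is the usual signature of optimal integration.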
32
Indovina I, Mazzarella E, Maffei V, Cesqui B, Passamonti L, Lacquaniti F. Sound-evoked vestibular stimulation affects the anticipation of gravity effects during visual self-motion. Exp Brain Res 2015; 233:2365-71. [DOI: 10.1007/s00221-015-4306-9] [Citation(s) in RCA: 12; Impact Index Per Article: 1.3] [Received: 04/10/2015; Accepted: 04/29/2015; Indexed: 11/29/2022]
33
de Winkel KN, Katliar M, Bülthoff HH. Forced fusion in multisensory heading estimation. PLoS One 2015; 10:e0127104. [PMID: 25938235; PMCID: PMC4418840; DOI: 10.1371/journal.pone.0127104] [Citation(s) in RCA: 29; Impact Index Per Article: 3.2] [Received: 01/21/2015; Accepted: 04/10/2015; Indexed: 11/18/2022]
Abstract
It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2-s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging from 0° to 90° were introduced for the combined stimuli. In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, where sensitivity to detect discrepancies differs between people.
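The forced-fusion model this abstract describes can be sketched as a reliability-weighted average that is applied regardless of how large the cue discrepancy is; the headings and noise levels below are hypothetical illustrations, not values from the study.

```python
def fused_heading(theta_vis: float, theta_inert: float,
                  sigma_vis: float, sigma_inert: float) -> float:
    """Forced-fusion (maximum-likelihood) heading estimate: each cue is
    weighted by its relative reliability (inverse variance), and the
    weighted average is taken even for large cue discrepancies."""
    w_vis = sigma_inert ** 2 / (sigma_vis ** 2 + sigma_inert ** 2)
    return w_vis * theta_vis + (1.0 - w_vis) * theta_inert

# Hypothetical 60-degree discrepancy between equally reliable cues:
print(fused_heading(0.0, 60.0, 4.0, 4.0))  # 30.0: the fused estimate splits the difference
```

A causal-inference model, by contrast, would down-weight or discard the less plausible cue as the discrepancy grows, rather than always averaging as above.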
Affiliation(s)
- Ksander N. de Winkel
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
- Mikhail Katliar
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
- Heinrich H. Bülthoff
- Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany
- Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713, Korea
34
Keshavarz B, Riecke BE, Hettinger LJ, Campos JL. Vection and visually induced motion sickness: how are they related? Front Psychol 2015; 6:472. [PMID: 25941509; PMCID: PMC4403286; DOI: 10.3389/fpsyg.2015.00472] [Citation(s) in RCA: 142; Impact Index Per Article: 15.8] [Received: 12/09/2014; Accepted: 04/01/2015; Indexed: 11/13/2022]
Abstract
The occurrence of visually induced motion sickness has frequently been linked to the sensation of illusory self-motion (vection); however, the precise nature of this relationship is still not fully understood. To date, it remains a matter of debate whether vection is a necessary prerequisite for visually induced motion sickness (VIMS): that is, can there be VIMS without any sensation of self-motion? In this paper, we describe the possible nature of this relationship, review the literature that addresses it (including theoretical accounts of vection and VIMS), and offer suggestions for operationally defining and reporting these phenomena in the future.
Affiliation(s)
- Behrang Keshavarz
- Intelligent Design for Adaptation, Participation and Technology (iDAPT), Research Department, Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Bernhard E. Riecke
- School of Interactive Arts and Technology, Simon Fraser University, Surrey, BC, Canada
- Lawrence J. Hettinger
- Center for Behavioral Sciences, Liberty Mutual Research Institute for Safety, Hopkinton, MA, USA
- Jennifer L. Campos
- Intelligent Design for Adaptation, Participation and Technology (iDAPT), Research Department, Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada