1
Augière T, Simoneau M, Mercier C. Visuotactile integration in individuals with fibromyalgia. Front Hum Neurosci 2024; 18:1390609. PMID: 38826615; PMCID: PMC11140151; DOI: 10.3389/fnhum.2024.1390609. Received 23 Feb 2024; accepted 29 Apr 2024. Open access.
Abstract
Our brain constantly integrates afferent information, such as visual and tactile signals, to perceive the world around us. According to the maximum-likelihood estimation (MLE) model, imprecise information is weighted less than precise information, making the multisensory percept as precise as possible. Individuals with fibromyalgia (FM), a chronic pain syndrome, show alterations in the integration of tactile information. This could decrease the weight given to tactile information in a multisensory percept, or reflect a general disruption of multisensory integration that makes it less beneficial. To assess multisensory integration, 15 participants with FM and 18 pain-free controls performed a temporal-order judgment task in which they received pairs of sequential visual, tactile (unisensory conditions), or visuotactile (multisensory condition) stimulations on the index finger and the thumb of the non-dominant hand and had to determine which finger was stimulated first. The task enabled us to measure the precision and accuracy of the percept in each condition. Results indicate an increase in precision in the visuotactile condition compared to the unimodal conditions in controls only, although we found no intergroup differences. The observed visuotactile precision was correlated with the precision predicted by the MLE model in both groups, suggesting optimal integration. Finally, the weights of the sensory information did not differ between the groups; however, in the group with FM, higher pain intensity was associated with smaller tactile weight. This study shows no alteration of visuotactile integration in individuals with FM, though pain may influence tactile weight in these participants.
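The MLE rule this abstract relies on has a simple closed form: each cue is weighted by its inverse variance, and the combined variance is never worse than the best single cue. A minimal sketch of that computation (the numbers are hypothetical illustrations, not values from the study):

```python
# Maximum-likelihood estimation (MLE) cue integration: weight each cue by
# its reliability (inverse variance); the combined percept is at least as
# precise as the most precise single cue.

def mle_integrate(estimates, variances):
    """Combine unisensory estimates into one multisensory percept."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    weights = [w / total for w in weights]          # normalized weights
    combined_estimate = sum(w * x for w, x in zip(weights, estimates))
    combined_variance = 1.0 / total                 # <= min(variances)
    return combined_estimate, combined_variance, weights

# Hypothetical example: a visual cue with variance 4 and a tactile cue with
# variance 1 -> the tactile cue gets 80% of the weight.
est, var, w = mle_integrate([10.0, 12.0], [4.0, 1.0])
```

A drop in tactile precision (larger tactile variance) would shift weight toward vision, which is the alteration the study tested for in FM.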
Affiliation(s)
- Tania Augière
- Center for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), CIUSSS de la Capitale-Nationale, Quebec, QC, Canada
- School of Rehabilitation Sciences, Faculty of Medicine, Laval University, Quebec, QC, Canada
- Martin Simoneau
- Center for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), CIUSSS de la Capitale-Nationale, Quebec, QC, Canada
- Department of Kinesiology, Faculty of Medicine, Laval University, Quebec, QC, Canada
- Catherine Mercier
- Center for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), CIUSSS de la Capitale-Nationale, Quebec, QC, Canada
- School of Rehabilitation Sciences, Faculty of Medicine, Laval University, Quebec, QC, Canada
2
Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion. PLoS One 2024; 19:e0295110. PMID: 38483949; PMCID: PMC10939277; DOI: 10.1371/journal.pone.0295110. Received 13 Nov 2023; accepted 5 Feb 2024. Open access.
Abstract
To interact successfully with moving objects in our environment we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete, that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion, object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigated this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. In the first task, participants were shown a ball moving laterally which disappeared after a certain time; they then indicated by button press when they thought the ball would have hit a target rectangle positioned in the environment. While the ball was visible, participants sometimes experienced simultaneous visual lateral self-motion in either the same or the opposite direction as the ball. The second task was a two-interval forced-choice task in which participants judged which of two motions was faster: in one interval they saw the same ball they observed in the first task, while in the other they saw a ball cloud whose speed was controlled by a PEST staircase. While observing the single ball, they were again moved visually either in the same or the opposite direction as the ball, or they remained static. We found the expected biases in estimated time-to-contact, while for the speed estimation task this was only the case when the ball and observer were moving in opposite directions. Our hypotheses regarding precision were largely unsupported by the data.
Overall, we draw several conclusions from this experiment. First, incomplete flow parsing can affect motion prediction. Further, our results suggest that time-to-contact estimation and speed judgements are determined by partially different mechanisms. Finally, and perhaps most strikingly, there appear to be certain compensatory mechanisms at play that allow for much higher-than-expected precision when observers are experiencing self-motion, even when self-motion is simulated only visually.
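The flow-parsing logic behind the predicted biases can be sketched numerically. All values below (parsing gain, speeds, distance) are hypothetical illustrations, not parameters from the study:

```python
# Incomplete flow parsing: a fraction (1 - gain) of the retinal motion caused
# by self-motion is misattributed to the object, biasing perceived velocity
# and hence the extrapolated time-to-contact (TTC).

def perceived_velocity(object_vel, self_motion_vel, parsing_gain):
    # Self-motion adds retinal motion opposite to the observer's movement;
    # incomplete parsing (gain < 1) leaves a residual on the object.
    residual = (1.0 - parsing_gain) * (-self_motion_vel)
    return object_vel + residual

def time_to_contact(distance, velocity):
    return distance / velocity

ball_vel = 2.0   # m/s, ball moving rightward (hypothetical)
self_vel = 1.0   # m/s, observer motion speed (hypothetical)
gain = 0.8       # 80% of self-motion flow correctly parsed out (hypothetical)

v_same = perceived_velocity(ball_vel, self_vel, gain)   # 1.8 m/s: slowed
v_opp = perceived_velocity(ball_vel, -self_vel, gain)   # 2.2 m/s: sped up

ttc_same = time_to_contact(4.0, v_same)  # longer than the veridical 2.0 s
ttc_opp = time_to_contact(4.0, v_opp)    # shorter than the veridical 2.0 s
```

Under this sketch, self-motion in the same direction as the ball yields late button presses and opposite-direction self-motion yields early ones, matching the direction of the biases the abstract reports for time-to-contact.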
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, Toronto, Ontario, Canada
3
Geno O, Critelli K, Arduino C, Crane BT, Anson ER. Psychometrics of inertial heading perception. J Vestib Res 2024; 34:83-92. PMID: 38640182; DOI: 10.3233/ves-230077.
Abstract
BACKGROUND Inertial self-motion perception is thought to depend primarily on otolith cues. Recent evidence demonstrated that vestibular perceptual thresholds (including inertial heading) are adaptable, suggesting novel clinical approaches for treating perceptual impairments resulting from vestibular disease. OBJECTIVE Little is known about the psychometric properties of perceptual estimates of inertial heading, such as test-retest reliability. Here we investigate the psychometric properties of a passive inertial heading perceptual test. METHODS Forty-seven healthy subjects participated across two visits, performing an inertial heading discrimination task. The point of subjective equality (PSE) and thresholds for heading discrimination were identified for same-day and across-day tests. Paired t-tests determined whether the PSE or thresholds changed significantly, and a mixed intraclass correlation coefficient (ICC) model examined test-retest reliability. The minimum detectable change (MDC) was calculated for the PSE and threshold for heading discrimination. RESULTS Within a testing session, the heading discrimination PSE score test-retest reliability was good (ICC = 0.80) and did not change (t(1,36) = -1.23, p = 0.23). Heading discrimination thresholds were moderately reliable (ICC = 0.67) and also stable (t(1,36) = 0.10, p = 0.92). Across testing sessions, heading direction PSE scores were moderately correlated (ICC = 0.59) and stable (t(1,46) = -0.44, p = 0.66). Heading direction thresholds had poor reliability (ICC = 0.03) and were significantly smaller at the second visit (t(1,46) = 2.8, p = 0.008). The MDC for heading direction PSE ranged from 6 to 9 degrees across tests. CONCLUSION The current results indicate moderate reliability for heading perception PSE and provide clinical context for interpreting change in inertial vestibular self-motion perception over time or after an intervention.
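The reliability statistics reported here are linked by standard formulas: the standard error of measurement is SEM = SD·√(1 − ICC), and the minimum detectable change at 95% confidence is MDC95 = 1.96·√2·SEM. A small sketch with hypothetical numbers (the between-subject SD is not reported in the abstract):

```python
import math

# Standard test-retest reliability formulas (not specific to this study).

def sem(sd, icc):
    """Standard error of measurement from between-subject SD and ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    # 1.96 for 95% confidence; sqrt(2) because two measurements (test and
    # retest) each contribute measurement noise to the difference score.
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

# Hypothetical: a PSE SD of 8 degrees at the reported within-session
# ICC = 0.80 gives an MDC of roughly 9.9 degrees.
change = mdc95(8.0, 0.80)
```

This illustrates why the reported MDC of 6-9 degrees is the smallest PSE change that can be read as real improvement rather than measurement noise.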
Affiliation(s)
- Olivia Geno
- Department of Neuroscience, University of Rochester, Rochester, NY, USA
- Kyle Critelli
- Department of Otolaryngology, University of Rochester, Rochester, NY, USA
- Cesar Arduino
- Department of Otolaryngology, University of Rochester, Rochester, NY, USA
- Benjamin T Crane
- Department of Neuroscience, University of Rochester, Rochester, NY, USA
- Department of Otolaryngology, University of Rochester, Rochester, NY, USA
- Eric R Anson
- Department of Neuroscience, University of Rochester, Rochester, NY, USA
- Department of Otolaryngology, University of Rochester, Rochester, NY, USA
4
Schenberg L, Palou A, Simon F, Bonnard T, Barton CE, Fricker D, Tagliabue M, Llorens J, Beraneck M. Multisensory gaze stabilization in response to subchronic alteration of vestibular type I hair cells. eLife 2023; 12:RP88819. PMID: 38019267; PMCID: PMC10686621; DOI: 10.7554/elife.88819. Open access.
Abstract
The functional complementarity of the vestibulo-ocular reflex (VOR) and optokinetic reflex (OKR) allows for optimal combined gaze stabilization responses (CGR) in light. While sensory substitution has been reported following complete vestibular loss, the capacity of the central vestibular system to compensate for partial peripheral vestibular loss remains to be determined. Here, we first demonstrate the efficacy of a 6-week subchronic ototoxic protocol in inducing transient and partial vestibular loss that equally affects the canal- and otolith-dependent VORs. Immunostaining of hair cells in the vestibular sensory epithelia revealed that organ-specific alteration of type I, but not type II, hair cells correlates with functional impairments. The decrease in VOR performance is paralleled by an increase in the gain of the OKR in a specific range of frequencies where the VOR normally dominates gaze stabilization, compatible with a sensory substitution process. Comparison of unimodal OKR or VOR versus bimodal CGR revealed that visuo-vestibular interactions remain reduced despite a significant recovery in the VOR. Modeling and sweep-based analysis revealed that the differential capacity to optimally combine OKR and VOR correlates with the reproducibility of the VOR responses. Overall, these results shed light on the multisensory reweighting occurring in pathologies with fluctuating peripheral vestibular malfunction.
Affiliation(s)
- Louise Schenberg
- Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition CenterParisFrance
| | - Aïda Palou
- Departament de Ciències Fisiològiques, Universitat de BarcelonaBarcelonaSpain
- Institut de Neurociènces, Universitat de BarcelonaBarcelonaSpain
- Institut d'Investigació Biomèdica de Bellvitge (IDIBELL)l’Hospitalet de LlobregatSpain
| | - François Simon
- Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition CenterParisFrance
- Department of Paediatric Otolaryngology, Hôpital Necker-Enfants MaladesParisFrance
| | - Tess Bonnard
- Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition CenterParisFrance
| | - Charles-Elliot Barton
- Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition CenterParisFrance
| | - Desdemona Fricker
- Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition CenterParisFrance
| | - Michele Tagliabue
- Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition CenterParisFrance
| | - Jordi Llorens
- Departament de Ciències Fisiològiques, Universitat de BarcelonaBarcelonaSpain
- Institut de Neurociènces, Universitat de BarcelonaBarcelonaSpain
- Institut d'Investigació Biomèdica de Bellvitge (IDIBELL)l’Hospitalet de LlobregatSpain
| | - Mathieu Beraneck
- Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition CenterParisFrance
| |
5
Newman PM, Qi Y, Mou W, McNamara TP. Statistically Optimal Cue Integration During Human Spatial Navigation. Psychon Bull Rev 2023; 30:1621-1642. PMID: 37038031; DOI: 10.3758/s13423-023-02254-w. Accepted 8 Feb 2023.
Abstract
In 2007, Cheng and colleagues published their influential review wherein they analyzed the literature on spatial cue interaction during navigation through a Bayesian lens and concluded that the models of optimal cue integration often applied in psychophysical studies could explain cue interaction during navigation. Since then, numerous empirical investigations have been conducted to assess the degree to which human navigators are optimal when integrating multiple spatial cues during a variety of navigation-related tasks. In the current review, we discuss the literature on human cue integration during navigation that has been published since Cheng et al.'s original review. Evidence from most studies demonstrates optimal navigation behavior when humans are presented with multiple spatial cues. However, applications of optimal cue integration models vary in their underlying assumptions (e.g., uninformative priors and decision rules). Furthermore, cue integration behavior depends in part on the nature of the cues being integrated and on the navigational task (e.g., homing versus non-home goal localization). We discuss the implications of these models and suggest directions for future research.
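The review's point about prior assumptions can be made concrete: with Gaussian cues, Bayesian combination reduces to the familiar reliability-weighted average when the prior is uninformative, while an informative prior pulls the estimate toward its mean. A sketch with made-up numbers (cue names and values are illustrative, not from any study in the review):

```python
# Gaussian cue combination with an optional Gaussian prior. With an
# uninformative prior (infinite variance) this is standard MLE weighting.

def posterior(cue_means, cue_vars, prior_mean=0.0, prior_var=float("inf")):
    precisions = [1.0 / v for v in cue_vars]
    prior_precision = 0.0 if prior_var == float("inf") else 1.0 / prior_var
    total = sum(precisions) + prior_precision
    mean = (sum(p * m for p, m in zip(precisions, cue_means))
            + prior_precision * prior_mean) / total
    return mean, 1.0 / total  # posterior mean and variance

# Uninformative prior: a "landmark" cue at 10 deg and a "path integration"
# cue at 14 deg, equally reliable -> estimate lands halfway, at 12 deg.
m_flat, v_flat = posterior([10.0, 14.0], [1.0, 1.0])

# Informative prior centred on 0 deg pulls the estimate toward it and
# further shrinks the posterior variance.
m_prior, v_prior = posterior([10.0, 14.0], [1.0, 1.0], 0.0, 2.0)
```

Two models that fit the same data can thus disagree on "optimality" simply because one assumes a flat prior and the other does not, which is the kind of assumption difference the abstract flags.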
Affiliation(s)
- Phillip M Newman
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA
- Yafei Qi
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
- Weimin Mou
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
- Timothy P McNamara
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA
6
Zeng Z, Zhang C, Gu Y. Visuo-vestibular heading perception: a model system to study multi-sensory decision making. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220334. PMID: 37545303; PMCID: PMC10404926; DOI: 10.1098/rstb.2022.0334. Received 19 Dec 2022; accepted 15 May 2023. Open access.
Abstract
Integrating noisy signals across time as well as across sensory modalities, a process named multi-sensory decision making (MSDM), is an essential strategy for making more accurate and sensitive decisions in complex environments. Although this field is just emerging, recent extraordinary work from different perspectives, including computational theory, psychophysical behaviour and neurophysiology, has begun to shed new light on MSDM. In the current review, we focus on MSDM using the model system of visuo-vestibular heading. Combining well-controlled behavioural paradigms on virtual-reality systems, single-unit recordings, causal manipulations and computational theory based on spiking activity, recent progress reveals that vestibular signals contain complex temporal dynamics in many brain regions, including unisensory, multi-sensory and sensory-motor association areas. This challenges the brain to integrate cues across time and across sensory modalities, such as optic flow, which mainly contains a motion velocity signal. In addition, new evidence from higher-level decision-related areas, mostly in the posterior and frontal/prefrontal regions, helps revise our conventional view of how signals from different sensory modalities may be processed, converged, and accumulated moment by moment through neural circuits to form a unified, optimal perceptual decision. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
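The moment-by-moment accumulation the abstract describes is often modeled as a bounded evidence accumulator fed by both modalities. A purely schematic toy (not a model from the review; all parameters are invented):

```python
import random

# Toy two-modality evidence accumulator: visual and vestibular samples are
# summed each step and accumulated until a decision bound is reached.

def accumulate(drift_vis, drift_vest, noise_sd=1.0, bound=30.0,
               max_steps=10_000, seed=1):
    rng = random.Random(seed)
    evidence, steps = 0.0, 0
    while abs(evidence) < bound and steps < max_steps:
        evidence += drift_vis + rng.gauss(0.0, noise_sd)   # visual sample
        evidence += drift_vest + rng.gauss(0.0, noise_sd)  # vestibular sample
        steps += 1
    return ("right" if evidence > 0 else "left"), steps

# Both cues weakly favor a rightward heading; pooling their momentary
# evidence drives the accumulator to the bound.
choice, n_steps = accumulate(0.5, 0.5)
```

Temporal dynamics enter naturally here: making one drift time-varying (e.g. a velocity-like optic-flow signal versus an acceleration-like vestibular signal) is one way such models capture the cross-modality timing problem the review discusses.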
Affiliation(s)
- Zhao Zeng
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Ce Zhang
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
7
Sharif M, Saman Y, Burling R, Rea O, Patel R, Barrett DJK, Rea P, Kheradmand A, Arshad Q. Altered visual conscious awareness in patients with vestibular dysfunctions; a cross-sectional observation study. J Neurol Sci 2023; 448:120617. PMID: 36989587; PMCID: PMC10112837; DOI: 10.1016/j.jns.2023.120617. Received 1 Jan 2023; revised 10 Mar 2023; accepted 20 Mar 2023.
Abstract
BACKGROUND Patients with vestibular dysfunctions often experience visually induced symptoms. Here we asked whether such visual dependence can be related to alterations in visual conscious awareness in these patients. METHODS To measure visual conscious awareness, we used the effect of motion-induced blindness (MIB), in which perceptual awareness of a visual stimulus alternates despite its unchanged physical characteristics. In this phenomenon, a salient visual target spontaneously disappears from and subsequently reappears in visual perception when presented against a moving visual background. The number of perceptual switches during the experience of the MIB stimulus was measured for 120 s in 15 healthy controls, 15 patients with vestibular migraine (VM), 15 patients with benign paroxysmal positional vertigo (BPPV) and 15 with migraine without vestibular symptoms. RESULTS Patients with vestibular dysfunctions (i.e., both VM and BPPV) exhibited increased perceptual fluctuations during MIB compared to healthy controls and migraine patients without vertigo. In VM patients, those with more severe symptoms exhibited higher fluctuations of visual awareness (i.e., a positive correlation), whereas in BPPV patients, those with more severe symptoms had lower fluctuations of visual awareness (i.e., a negative correlation). IMPLICATIONS Taken together, these findings show that fluctuations of visual awareness are linked to the severity of visually induced symptoms in patients with vestibular dysfunctions, and that distinct pathophysiological mechanisms may mediate visual vertigo in peripheral versus central vestibular dysfunctions.
Affiliation(s)
- Mishaal Sharif
- inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK
- Yougan Saman
- inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK; Balance Clinic, E.N.T Department, Leicester Royal Infirmary, Leicester LE1 5WW, UK
- Rose Burling
- inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK
- Oliver Rea
- inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK
- Rakesh Patel
- Faculty of Health and Life Sciences, De Montfort University, The Gateway, Leicester LE1 9BH, UK
- Douglas J K Barrett
- inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK
- Peter Rea
- Balance Clinic, E.N.T Department, Leicester Royal Infirmary, Leicester LE1 5WW, UK
- Amir Kheradmand
- Department of Neurology, The Johns Hopkins University, Baltimore, MD, USA; Department of Neuroscience, The Johns Hopkins University, Baltimore, MD, USA
- Qadeer Arshad
- inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK; Neuro-Otology Unit, Division of Brain Sciences, Charing Cross Hospital Campus, Imperial College London, Fulham Palace Road, London W6 8RF, UK
8
Lacquaniti F, La Scaleia B, Zago M. Noise and vestibular perception of passive self-motion. Front Neurol 2023; 14:1159242. PMID: 37181550; PMCID: PMC10169592; DOI: 10.3389/fneur.2023.1159242. Received 5 Feb 2023; accepted 29 Mar 2023. Open access.
Abstract
Noise, defined as random disturbance, is ubiquitous in both the external environment and the nervous system. Depending on the context, noise can degrade or improve information processing and performance. In all cases, it contributes to neural systems dynamics. We review some effects of various sources of noise on the neural processing of self-motion signals at different stages of the vestibular pathways and the resulting perceptual responses. Hair cells in the inner ear reduce the impact of noise by means of mechanical and neural filtering. Hair cells synapse on regular and irregular afferents. Variability of discharge (noise) is low in regular afferents and high in irregular units. The high variability of irregular units provides information about the envelope of naturalistic head motion stimuli. A subset of neurons in the vestibular nuclei and thalamus are optimally tuned to noisy motion stimuli that reproduce the statistics of naturalistic head movements. In the thalamus, variability of neural discharge increases with increasing motion amplitude but saturates at high amplitudes, accounting for the behavioral violation of Weber's law. In general, the precision of individual vestibular neurons in encoding head motion is worse than the perceptual precision measured behaviorally; however, the global precision predicted by neural population codes matches the high behavioral precision. The latter is estimated by means of psychometric functions for detection or discrimination of whole-body displacements. Vestibular motion thresholds (the inverse of precision) reflect the contribution of intrinsic and extrinsic noise to perception. Vestibular motion thresholds tend to deteriorate progressively after the age of 40 years, possibly due to oxidative stress resulting from the high discharge rates and metabolic loads of vestibular afferents. In the elderly, vestibular thresholds correlate with postural stability: the higher the threshold, the greater the postural imbalance and risk of falling. Experimental application of optimal levels of either galvanic noise or whole-body oscillations can ameliorate vestibular function through a mechanism reminiscent of stochastic resonance. Assessment of vestibular thresholds is diagnostic in several types of vestibulopathies, and vestibular stimulation might be useful in vestibular rehabilitation.
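The stochastic-resonance idea mentioned above can be illustrated with a toy threshold detector (purely schematic; no relation to actual galvanic-stimulation protocols): a subthreshold signal never triggers the detector on its own, but adding moderate noise lets the signal's peaks cross threshold.

```python
import math
import random

# A periodic signal whose peak (0.8) stays below the detection threshold (1.0).
signal = [0.8 * math.sin(2.0 * math.pi * t / 50.0) for t in range(500)]
THRESHOLD = 1.0

def detections(sig, noise_sd, seed=0):
    """Count samples where signal + Gaussian noise exceeds the threshold."""
    rng = random.Random(seed)
    return sum(1 for s in sig if s + rng.gauss(0.0, noise_sd) > THRESHOLD)

silent = detections(signal, 0.0)  # no noise: the signal never crosses
noisy = detections(signal, 0.3)   # moderate noise: peaks now cross
```

In a full stochastic-resonance curve, detection quality rises with noise up to an optimum and then degrades as noise swamps the signal; the sketch above shows only the rising portion.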
Affiliation(s)
- Francesco Lacquaniti
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Department of Systems Medicine, Centre of Space Bio-medicine, University of Rome Tor Vergata, Rome, Italy
- Barbara La Scaleia
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Myrka Zago
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Department of Civil Engineering and Computer Science Engineering, Centre of Space Bio-medicine, University of Rome Tor Vergata, Rome, Italy
9
Factors influencing clinical outcome in vestibular neuritis - A focussed review and reanalysis of prospective data. J Neurol Sci 2023; 446:120579. PMID: 36807973; DOI: 10.1016/j.jns.2023.120579. Received 28 Sep 2022; revised 22 Dec 2022; accepted 31 Jan 2023.
Abstract
Following vestibular neuritis (VN), long-term prognosis is not dependent on the magnitude of the residual peripheral function as measured with either caloric or the video head-impulse test. Rather, recovery is determined by a combination of visuo-vestibular (visual dependence), psychological (anxiety) and vestibular perceptual factors. Our recent research in healthy individuals has also revealed a strong association between the degree of lateralisation of vestibulo-cortical processing and gating of vestibular signals, anxiety and visual dependence. In the context of several functional brain changes occurring in the interaction between visual, vestibular and emotional cortices, which underpin the aforementioned psycho-physiological features in patients with VN, we re-examined our previously published findings, focusing on additional factors impacting long-term clinical outcome and function. These included: (i) the role of concomitant neuro-otological dysfunction (i.e. migraine and benign paroxysmal positional vertigo (BPPV)) and (ii) the degree to which brain lateralisation of vestibulo-cortical processing influences gating of vestibular function in the acute stage. We found that migraine and BPPV interfere with symptomatic recovery following VN. That is, dizziness handicap at the short-term recovery stage was significantly predicted by migraine (r = 0.523, n = 28, p = .002), BPPV (r = 0.658, n = 31, p < .001) and acute visual dependency (r = 0.504, n = 28, p = .003). Moreover, dizziness handicap at the long-term recovery stage continued to be predicted by migraine (r = 0.640, n = 22, p = .001), BPPV (r = 0.626, n = 24, p = .001) and acute visual dependency (r = 0.667, n = 22, p < .001). Furthermore, surrogate measures of vestibulo-cortical lateralisation were predictive of the amount of cortical suppression exerted over vestibular thresholds. That is, in right-sided VN patients we observed a positive correlation between visual dependence and acute ipsilesional oculomotor thresholds (R² = 0.497, p < .001) but not contralateral thresholds (R² = 0.017, p > .05); in left-sided VN patients we observed a negative correlation between visual dependence and ipsilesional oculomotor thresholds (R² = 0.459, p < .001) but not contralateral thresholds (R² = 0.013, p > .05). To summarise, our findings illustrate that in VN, neuro-otological co-morbidities delay recovery, and that measures of the peripheral vestibular system are an aggregate of residual function and cortically mediated gating of vestibular input.
10
Kirsch W, Kunde W. On the Role of Interoception in Body and Object Perception: A Multisensory-Integration Account. Perspect Psychol Sci 2023; 18:321-339. PMID: 35994810; PMCID: PMC10018064; DOI: 10.1177/17456916221096138.
Abstract
Various "embodied perception" phenomena suggest that what people sense of their body shapes what they perceive of the environment, and that what they perceive of the environment shapes what they perceive of their bodies. For example, an observer's own hand can be felt where a fake hand is seen, events produced by one's own body movements seem to occur earlier than they did, and feeling a heavy weight on an observer's back may prompt hills to look steeper. Here we argue that these and various other phenomena are instances of multisensory integration of interoceptive signals from the body and exteroceptive signals from the environment. This overarching view provides a mechanistic description of what embodiment in perception means and how it works. It suggests new research questions while questioning a special role of the body itself and various phenomenon-specific explanations in terms of ownership, agency, or action-related scaling of visual information.
Affiliation(s)
- Wladimir Kirsch
- Department of Psychology, University of Würzburg
11
Rineau AL, Bringoux L, Sarrazin JC, Berberian B. Being active over one's own motion: Considering predictive mechanisms in self-motion perception. Neurosci Biobehav Rev 2023; 146:105051. PMID: 36669748; DOI: 10.1016/j.neubiorev.2023.105051. Received 3 Nov 2022; revised 16 Jan 2023; accepted 16 Jan 2023.
Abstract
Self-motion perception is a key element guiding pilots' behavior. Its importance is most apparent when it is impaired, leading in most cases to spatial disorientation, which remains a major factor in accident occurrence today. Self-motion perception is known to be based mainly on visuo-vestibular integration and can be modulated by the physical properties of the environment with which humans interact. For instance, several studies have shown that the respective weights of visual and vestibular information depend on their reliability. More recently, it has been suggested that the internal state of an operator can also modulate multisensory integration. Interestingly, system automation can interfere with this internal state through the loss of the intentional nature of movements (i.e., loss of agency) and the modulation of associated predictive mechanisms. In this context, one of the new challenges is to better understand the relationship between automation and self-motion perception. The present review explains how linking the concepts of agency and self-motion is a first approach to addressing this issue.
Affiliation(s)
- Anne-Laure Rineau
- Information Processing and Systems, ONERA, Salon de Provence, Base Aérienne 701, France
- Bruno Berberian
- Information Processing and Systems, ONERA, Salon de Provence, Base Aérienne 701, France
12
Kirollos R, Herdman CM. Caloric vestibular stimulation induces vestibular circular vection even with a conflicting visual display presented in a virtual reality headset. Iperception 2023; 14:20416695231168093. PMID: 37113619; PMCID: PMC10126621; DOI: 10.1177/20416695231168093. Received 7 Oct 2022; accepted 6 Mar 2023. Open access.
Abstract
This study explored visual-vestibular sensory integration when the vestibular system receives self-motion information via caloric irrigation. The objectives were to (1) determine whether measurable vestibular circular vection can be induced in healthy participants using caloric vestibular stimulation and (2) determine whether a conflicting visual display could impact vestibular vection. In Experiment 1 (E1), participants had their eyes closed. Air caloric vestibular stimulation cooled the endolymph fluid of the horizontal semicircular canal, inducing vestibular circular vection. Participants reported vestibular circular vection with a potentiometer knob that measured vection direction, speed, and duration. In Experiment 2 (E2), participants viewed a stationary display in a virtual reality headset that did not signal self-motion while receiving caloric vestibular stimulation, producing a visual-vestibular conflict. Participants indicated clockwise vection with left-ear stimulation and counter-clockwise vection with right-ear stimulation in a significant proportion of trials in both E1 and E2. Vection was significantly slower and shorter in E2 than in E1. The E2 results demonstrate that during visual-vestibular conflict, visual and vestibular cues are both used to determine self-motion rather than one system overriding the other. These results are consistent with the optimal cue integration hypothesis.
Affiliation(s)
- Ramy Kirollos
- Defence Research and Development Canada, Toronto Research Center, 1133 Sheppard Ave. W., Toronto, Ontario, M3K 2C9, Canada; Visualization and Simulation Center, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario, K1S 5B6, Canada.
13
Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion: A registered report protocol. PLoS One 2023; 18:e0267983. PMID: 36716328; PMCID: PMC9886253; DOI: 10.1371/journal.pone.0267983.
Abstract
To interact successfully with moving objects in our environment, we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete, that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion, object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgments being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigate this relationship between self-motion, velocity estimation, and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. First, participants are shown a ball moving laterally which disappears after a certain time; they then indicate by button press when they think the ball would have hit a target rectangle positioned in the environment. While the ball is visible, participants sometimes experience simultaneous visual lateral self-motion in either the same or the opposite direction of the ball. The second task is a two-interval forced-choice task in which participants judge which of two motions is faster: in one interval they see the same ball they observed in the first task, while in the other they see a ball cloud whose speed is controlled by a PEST staircase. While observing the single ball, they are again moved visually either in the same or opposite direction as the ball, or they remain static. We expect participants to overestimate the speed of a ball that moves opposite to their simulated self-motion (speed estimation task), which should then lead them to underestimate the time it takes the ball to reach the target rectangle (prediction task). Seeing the ball during visually simulated self-motion should increase variability in both tasks. We expect performance in the two tasks to be correlated, in both accuracy and precision.
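The flow-parsing logic underlying these predictions, retinal motion minus an (incomplete) estimate of the self-motion component, can be sketched as follows; the gain value is a hypothetical illustration, not a parameter from the registered report:

```python
# Flow parsing sketch: perceived object velocity is total retinal motion
# minus the parsed-out self-motion component. A parsing gain below 1
# models incomplete flow parsing. The gain here is hypothetical.

def perceived_object_velocity(retinal_v, self_motion_v, parsing_gain=0.8):
    # retinal_v: total retinal velocity of the object (deg/s)
    # self_motion_v: retinal velocity attributable to self-motion (deg/s)
    return retinal_v - parsing_gain * self_motion_v

# Ball moving opposite to the observer: self-motion adds 3 deg/s of
# retinal motion, so the retinal total is 8 deg/s for a 5 deg/s ball.
print(perceived_object_velocity(8.0, 3.0))  # above 5: speed overestimated
```

With a gain below 1, part of the self-motion component is left attributed to the object, producing exactly the overestimation (opposite directions) and underestimation (same direction) the protocol predicts.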
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, Toronto, Canada
14
Arshad I, Gallagher M, Ferrè ER. Visuo-vestibular conflicts within the roll plane modulate multisensory verticality perception. Neurosci Lett 2023; 792:136963. PMID: 36375625; DOI: 10.1016/j.neulet.2022.136963.
Abstract
The integration of visuo-vestibular information is crucial when interacting with the external environment. Under normal circumstances, visual and vestibular signals provide corroborating information, for example regarding the direction and speed of self-motion. However, conflicts in visuo-vestibular signalling, such as optic flow presented to a stationary observer, can change subsequent processing in either modality. While previous studies have demonstrated the impact of sensory conflict on unisensory visual or vestibular percepts, here we investigated whether visuo-vestibular conflicts impact sensitivity to multisensory percepts, specifically verticality. Participants were exposed to a visuo-vestibular conflicting or non-conflicting motion adaptor before completing a Vertical Detection Task. Sensitivity to vertical stimuli was reduced following visuo-vestibular conflict. No significant differences in criterion were found. Our findings suggest that visuo-vestibular conflicts modulate not only processing in unimodal channels but also broader multisensory percepts, which may have implications for higher-level processing dependent on the integration of visual and vestibular signals.
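Sensitivity and criterion here are the standard signal-detection measures; a minimal sketch of how the two dissociate, using made-up hit and false-alarm rates rather than values from the study:

```python
# Signal-detection sketch of a detection-task analysis: sensitivity (d')
# can fall after a conflict adaptor while the criterion (c) stays put.
# The hit and false-alarm rates below are hypothetical.
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (z) transform

def dprime_and_criterion(hit_rate, fa_rate):
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

d_pre, c_pre = dprime_and_criterion(0.85, 0.15)    # non-conflicting adaptor
d_post, c_post = dprime_and_criterion(0.75, 0.25)  # conflicting adaptor
print(d_pre > d_post)       # sensitivity reduced by conflict
print(abs(c_pre - c_post))  # criterion essentially unchanged
```

A drop in d' with a stable c is the signature reported here: conflict degrades the verticality signal itself rather than shifting response bias.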
Affiliation(s)
- I Arshad
- Department of Psychology, Royal Holloway University of London, United Kingdom; Department of Psychological Sciences, Birkbeck University of London, United Kingdom
- M Gallagher
- School of Psychology, Cardiff University, United Kingdom; School of Psychology, University of Kent, United Kingdom.
- E R Ferrè
- Department of Psychological Sciences, Birkbeck University of London, United Kingdom
15
Perceptual biases as the side effect of a multisensory adaptive system: Insights from verticality and self-motion perception. Vision (Basel) 2022; 6:vision6030053. PMID: 36136746; PMCID: PMC9502132; DOI: 10.3390/vision6030053.
Abstract
Perceptual biases can be interpreted as adverse consequences of optimal processes which otherwise improve system performance. The review presented here investigates inaccuracies in multisensory perception, focusing on the perception of verticality and self-motion, where the vestibular sensory modality has a prominent role. Perception of verticality indicates how the system processes gravity and thus represents an indirect measurement of vestibular perception. Head tilts can lead to biases in perceived verticality, interpreted as the influence of a vestibular prior set at the most common orientation relative to gravity (i.e., upright), useful for improving precision when upright (e.g., for fall avoidance). Studies on the perception of verticality across development and in the presence of blindness show that prior acquisition is mediated by visual experience, unveiling the fundamental role of visuo-vestibular interconnections across development. Such multisensory interactions can be behaviorally tested with cross-modal aftereffect paradigms, which test whether adaptation in one sensory modality induces biases in another, thereby revealing an interconnection between the tested sensory modalities. Such phenomena indicate the presence of multisensory neural mechanisms that constantly function to calibrate self-motion-dedicated sensory modalities with each other and with the environment. Thus, biases in vestibular perception reveal how the brain optimally adapts to environmental demands, such as spatial navigation and steady changes in the surroundings.
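The "vestibular prior" account described here amounts to Bayesian shrinkage of the sensed head tilt toward upright; a minimal sketch in which all variances are hypothetical illustration values:

```python
# Upright-prior sketch: the perceived tilt is a precision-weighted
# average of the sensory likelihood and a prior centered at upright
# (0 degrees), biasing verticality judgments when the head is tilted.
# Variances are hypothetical, not fitted to any dataset.

def posterior_tilt(tilt_deg, sigma_sensory, sigma_prior, prior_deg=0.0):
    w_s = 1 / sigma_sensory**2
    w_p = 1 / sigma_prior**2
    return (w_s * tilt_deg + w_p * prior_deg) / (w_s + w_p)

# A 60-degree head tilt is perceived as less than 60 degrees:
print(posterior_tilt(60.0, sigma_sensory=10.0, sigma_prior=20.0))
```

The bias (underestimated tilt) is the side effect; the benefit is reduced variance near upright, where the prior and the data agree.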
16
Gabriel GA, Harris LR, Henriques DYP, Pandi M, Campos JL. Multisensory visual-vestibular training improves visual heading estimation in younger and older adults. Front Aging Neurosci 2022; 14:816512. PMID: 36092809; PMCID: PMC9452741; DOI: 10.3389/fnagi.2022.816512.
Abstract
Self-motion perception (e.g., when walking or driving) relies on the integration of multiple sensory cues, including visual, vestibular, and proprioceptive signals. Changes in the efficacy of multisensory integration have been observed in older adults (OAs), which can sometimes lead to errors in perceptual judgments and have been associated with functional declines such as increased fall risk. The objectives of this study were to determine whether passive, visual-vestibular self-motion heading perception could be improved by providing feedback during multisensory training, and whether training-related effects might be more apparent in OAs than in younger adults (YAs). We also investigated the extent to which training might transfer to improved standing balance. OAs and YAs were passively translated and asked to judge their direction of heading relative to straight ahead (left/right). Each participant completed three conditions: (1) vestibular-only (passive physical motion in the dark), (2) visual-only (cloud-of-dots display), and (3) bimodal (congruent vestibular and visual stimulation). Measures of heading precision and bias were obtained for each condition. Over the course of 3 days, participants made bimodal heading judgments and were provided with feedback ("correct"/"incorrect") on 900 training trials. Post-training, participants' biases and precision in all three sensory conditions (vestibular, visual, bimodal), and their standing-balance performance, were assessed. Results demonstrated improved overall precision (i.e., reduced just-noticeable differences, JNDs) in heading perception after training; pre- vs. post-training difference scores showed that improvements in JNDs were found only in the visual-only condition. Particularly notable is that 27% of OAs initially could not discriminate their heading at all in the visual-only condition pre-training, but subsequently obtained post-training thresholds in that condition similar to those of the other participants. While OAs appeared to integrate optimally both pre- and post-training (i.e., showed no significant differences between predicted and observed JNDs), YAs showed optimal integration only post-training. There were no significant effects of training on bimodal or vestibular-only heading estimates, nor on standing-balance performance. These results indicate that it may be possible to improve unimodal (visual) heading perception using a multisensory (visual-vestibular) training paradigm, and they may help to inform interventions targeting tasks for which effective self-motion perception is important.
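The "optimal integration" test used here compares the observed bimodal JND against the maximum-likelihood prediction computed from the two unimodal JNDs; a sketch with hypothetical thresholds (not values from the study):

```python
# MLE prediction for the bimodal JND from two unimodal JNDs:
# sigma_bi^2 = sigma_vis^2 * sigma_vest^2 / (sigma_vis^2 + sigma_vest^2).
# Observed bimodal JNDs at or near this prediction are taken as
# evidence of optimal integration. Input JNDs are hypothetical.

def predicted_bimodal_jnd(jnd_visual, jnd_vestibular):
    v, w = jnd_visual**2, jnd_vestibular**2
    return (v * w / (v + w)) ** 0.5

print(predicted_bimodal_jnd(4.0, 3.0))  # below either unimodal JND
```

Because the predicted bimodal JND is always at or below the better unimodal JND, an observed bimodal threshold matching it implies the two cues are being combined rather than one being vetoed.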
Affiliation(s)
- Grace A. Gabriel
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Laurence R. Harris
- Department of Psychology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Denise Y. P. Henriques
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Kinesiology, York University, Toronto, ON, Canada
- Maryam Pandi
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Jennifer L. Campos
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Correspondence: Jennifer L. Campos.
17
Mao Y, Pan L, Li W, Xiao S, Qi R, Zhao L, Wang J, Cai Y. Stroboscopic lighting with intensity synchronized to rotation velocity alleviates motion sickness gastrointestinal symptoms and motor disorders in rats. Front Integr Neurosci 2022; 16:941947. PMID: 35965602; PMCID: PMC9366139; DOI: 10.3389/fnint.2022.941947.
Abstract
Motion sickness (MS) is caused by a mismatch between the conflicting motion perception produced by motion challenges and the expected "internal model" of integrated motion sensory patterns formed under normal conditions in the brain. Stroboscopic light can reduce MS nausea by increasing fixation ability for gaze stabilization, thereby reducing the visuo-vestibular conflict triggered by distorted vision during locomotion. This study examined whether MS induced by passive motion could be alleviated by stroboscopic light with emission rate and intensity synchronized to the acceleration-deceleration phase of motion. We examined the effects of synchronized and unsynchronized stroboscopic light (SSL: 6 cycle/min; uSSL: 2, 4, and 8 cycle/min) on MS-related gastrointestinal symptoms (conditioned gaping and defecation responses), motor disorders (hypoactivity and balance disturbance), and central Fos protein expression in rats receiving Ferris wheel-like rotation (6 cycle/min). The effects of color temperature and peak light intensity were also examined. We found that SSL (6 cycle/min) significantly reduced rotation-induced conditioned gaping and defecation responses and alleviated the rotation-induced decline in spontaneous locomotion activity and disruption of balance-beam performance. The efficacy of SSL against MS behavioral responses was affected by peak light intensity but not color temperature. The uSSL (4 and 8 cycle/min) relieved only the defecation response, and less efficiently than SSL, while uSSL (2 cycle/min) showed no beneficial effect in MS animals. SSL, but not uSSL, inhibited Fos protein expression in the caudal vestibular nucleus, the nucleus of the solitary tract, the parabrachial nucleus, the central nucleus of the amygdala, and the paraventricular nucleus of the hypothalamus, while uSSL (4 and 8 cycle/min) decreased Fos expression only in the paraventricular nucleus of the hypothalamus. These results suggest that stroboscopic light synchronized to the motion pattern may alleviate MS gastrointestinal symptoms and motor disorders and inhibit vestibular-autonomic pathways. Our study supports motion-synchronous stroboscopic light as a potential countermeasure against MS under abnormal motion conditions.
18
Jörges B, Harris LR. Object speed perception during lateral visual self-motion. Atten Percept Psychophys 2022; 84:25-46. PMID: 34704212; PMCID: PMC8547725; DOI: 10.3758/s13414-021-02372-4.
Abstract
Judging object speed during observer self-motion requires disambiguating retinal stimulation from two sources: self-motion and object motion. According to the Flow Parsing hypothesis, observers estimate their own motion, subtract the corresponding retinal motion from the total retinal stimulation, and interpret the remaining stimulation as pertaining to object motion. Subtracting noisier self-motion information from the retinal input should lead to a decrease in precision. Furthermore, when self-motion is only simulated visually, it is likely to be underestimated, yielding an overestimation of target speed when target and observer move in opposite directions and an underestimation when they move in the same direction. We tested this hypothesis with a two-alternative forced-choice task in which participants judged which of two motions, presented in an immersive 3D environment, was faster. One motion interval contained a ball cloud whose speed was selected dynamically according to a PEST staircase, while the other contained one big target travelling laterally at a fixed speed. While viewing the big target, participants were either static or experienced visually simulated lateral self-motion in the same or opposite direction of the target. Participants were not significantly biased in either motion profile, and precision was only significantly lower when participants moved visually in the direction opposite to the target. We conclude that, when immersed in an ecologically valid 3D environment with rich self-motion cues, participants perceive an object's speed accurately at a small precision cost, even when self-motion is simulated only visually.
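The comparison stimulus here is driven by an adaptive staircase. A deliberately simplified fixed-step staircase in the spirit of PEST (real PEST also adapts its step size, and real observers are noisy) illustrates how the procedure homes in on the point of subjective equality; all parameters are illustrative:

```python
# Simplified adaptive staircase: raise the comparison speed after a
# "slower" response and lower it after a "faster" response, converging
# on the point of subjective equality (PSE). The simulated observer is
# noiseless and the parameters are hypothetical, for illustration only.

def run_staircase(true_pse, start=10.0, step=1.0, trials=40):
    level = start
    for _ in range(trials):
        judged_faster = level > true_pse   # deterministic simulated observer
        level += -step if judged_faster else step
    return level

print(run_staircase(true_pse=6.0))  # ends within one step of the PSE
```

Once the staircase reaches the PSE it oscillates around it, so the final level (or an average of the last reversals, as in practice) estimates the PSE.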
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON M3J 1P3 Canada
- Laurence R. Harris
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON M3J 1P3 Canada
19
Hong F, Badde S, Landy MS. Causal inference regulates audiovisual spatial recalibration via its influence on audiovisual perception. PLoS Comput Biol 2021; 17:e1008877. PMID: 34780469; PMCID: PMC8629398; DOI: 10.1371/journal.pcbi.1008877.
Abstract
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.

Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy, and the brain must determine to what extent each modality should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this aim, we conducted a classical audiovisual recalibration experiment in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants' unimodal localization responses before and after the audiovisual recalibration. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration; only a causal-inference model captured the diverse influences of cue reliability on recalibration found in our study, and this model also replicates contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a sensory measurement and the perceptual estimate for the same sensory modality. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.
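The first two candidate models, reliability-based and fixed-ratio splitting of the audiovisual discrepancy, can be sketched directly (the winning causal-inference model additionally weights recalibration by the inferred probability of a common source); all numbers below are illustrative, not fitted values:

```python
# Two candidate recalibration rules for splitting a fixed audiovisual
# spatial discrepancy between the senses. Values are hypothetical.

def reliability_based(discrepancy, sigma_a, sigma_v):
    # The less reliable (higher-variance) cue is recalibrated more.
    w_a = sigma_a**2 / (sigma_a**2 + sigma_v**2)
    return w_a * discrepancy, (1 - w_a) * discrepancy  # (auditory, visual) shifts

def fixed_ratio(discrepancy, ratio_a=0.9):
    # Each modality always takes a fixed share of the discrepancy.
    return ratio_a * discrepancy, (1 - ratio_a) * discrepancy

print(reliability_based(10.0, sigma_a=4.0, sigma_v=1.0))  # audition shifts most
print(fixed_ratio(10.0))
```

Both rules predict monotonic effects of visual reliability on auditory recalibration, which is why the non-monotonic, idiosyncratic patterns observed here ruled them out.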
Affiliation(s)
- Fangfang Hong
- Department of Psychology, New York University, New York City, New York, United States of America
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
- Michael S. Landy
- Department of Psychology, New York University, New York City, New York, United States of America
- Center for Neural Science, New York University, New York City, New York, United States of America
20
Denquin F, Foucher J, Pla S, Sarrazin JC, Bardy BG. Optical and gravito-inertial contributions to the perception and control of height in a simulated Low-Altitude Flight context. Ergonomics 2021; 64:1297-1309. PMID: 33863267; DOI: 10.1080/00140139.2021.1914352.
Abstract
Low-Altitude Flight (LAF) is a flight manoeuvre consisting of rapid flight close to the ground. Perception and control of self-motion, allowing for optimal information collection and rapid adaptation, are of fundamental importance during LAF, but remain largely unexplored. This study aimed to analyse the impact of visuo-vestibular stimuli on the monitoring of height in a motion-based simulated LAF context. Thirteen non-pilots were tested in different environmental conditions in which optical and gravito-inertial (GI) information were manipulated. The visual environment, displayed with a VR headset, was a low-textured landscape with identical and equally spaced trees throughout the trials. The GI environment was generated by a motion-based simulator. Results showed that participants performed better in a visuo-vestibular environment than in a visual-only setting, indicating that multi-sensory information was picked up faster than mono-sensory information. Additionally, we found differences in the contribution of vestibular inputs depending on the type of task. Practitioner summary: Low-Altitude Flight (LAF) manoeuvres require delicate aircraft control. Two experiments using a large flight simulator investigated how visual and vestibular stimulation contribute to LAF perception and control. Results suggest that both sources of stimulation need to be combined for accurate performance, with consequences for simulator-based training scenarios. Abbreviations: LAF: low-altitude flight; GI: gravito-inertial; 1/2/3D: 1/2/3 dimensions; VR: virtual reality; Mvt: movement; GVE: good visual environment; DVE: degraded visual environment; SSQ: simulator sickness questionnaire; RT: reaction time; DIMSS: dynamic interface modelling and simulation system metric; corrAcf: maximum correlation coefficient; corrLag: maximum correlation lag; DFT: deviation from target; StdJ: standard deviation of the joystick value; NCR: number of control reversal.
Affiliation(s)
- Francois Denquin
- EuroMov Digital Health in Motion, University of Montpellier, IMT Mines Ales, Montpellier, France
- Information Processing and Systems Department, ONERA, Salon-de-Provence, France
- Jamilah Foucher
- EuroMov Digital Health in Motion, University of Montpellier, IMT Mines Ales, Montpellier, France
- Simon Pla
- EuroMov Digital Health in Motion, University of Montpellier, IMT Mines Ales, Montpellier, France
- Benoit G Bardy
- EuroMov Digital Health in Motion, University of Montpellier, IMT Mines Ales, Montpellier, France
21
Newman PM, McNamara TP. Integration of visual landmark cues in spatial memory. Psychological Research 2021; 86:1636-1654. PMID: 34420070; PMCID: PMC8380114; DOI: 10.1007/s00426-021-01581-8.
Abstract
Over the past two decades, much research has been conducted to investigate whether humans are optimal when integrating sensory cues during spatial memory and navigational tasks. Although this work has consistently demonstrated optimal integration of visual cues (e.g., landmarks) with body-based cues (e.g., path integration) during human navigation, little work has investigated how cues of the same sensory type are integrated in spatial memory. A few recent studies have reported mixed results, with some showing very little benefit to having access to more than one landmark, and others showing that multiple landmarks can be optimally integrated in spatial memory. In the current study, we employed a combination of immersive and non-immersive virtual reality spatial memory tasks to test adult humans' ability to integrate multiple landmark cues across six experiments. Our results showed that optimal integration of multiple landmark cues depends on the difficulty of the task, and that the presence of multiple landmarks can elicit an additional latent cue when estimating locations from a ground-level perspective, but not an aerial perspective.
Affiliation(s)
- Phillip M Newman
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37212, USA.
- Timothy P McNamara
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37212, USA
22
Skerritt-Davis B, Elhilali M. Neural encoding of auditory statistics. J Neurosci 2021; 41:6726-6739. PMID: 34193552; PMCID: PMC8336711; DOI: 10.1523/jneurosci.1887-20.2021.
Abstract
The human brain extracts statistical regularities embedded in real-world scenes to sift through the complexity stemming from changing dynamics and entwined uncertainty along multiple perceptual dimensions (e.g., pitch, timbre, location). While there is evidence that sensory dynamics along different auditory dimensions are tracked independently by separate cortical networks, how these statistics are integrated to give rise to unified objects remains unknown, particularly in dynamic scenes that lack conspicuous coupling between features. Using tone sequences with stochastic regularities along spectral and spatial dimensions, this study examines behavioral and electrophysiological responses from human listeners (male and female) to changing statistics in auditory sequences and uses a computational model of predictive Bayesian inference to formulate multiple hypotheses for statistical integration across features. Neural responses reveal multiplexed brain responses reflecting both local statistics along individual features in frontocentral networks and global (object-level) processing in centroparietal networks. Independent tracking of local surprisal along each acoustic feature reveals linear modulation of neural responses, while global melody-level statistics follow a nonlinear integration of statistical beliefs across features to guide perception. Near-identical results are obtained in separate experiments along spectral and spatial acoustic dimensions, suggesting a common mechanism for statistical inference in the brain. Potential variations in statistical integration strategies and memory deployment shed light on individual variability between listeners in terms of behavioral efficacy and fidelity of neural encoding of stochastic change in acoustic sequences.

SIGNIFICANCE STATEMENT: The world around us is complex and ever changing: in everyday listening, sound sources evolve along multiple dimensions, such as pitch, timbre, and spatial location, and they exhibit emergent statistical properties that change over time. In the face of this complexity, the brain builds an internal representation of the external world by collecting statistics from the sensory input along multiple dimensions. Using a Bayesian predictive inference model, this work considers alternative hypotheses for how statistics are combined across sensory dimensions. Behavioral and neural responses from human listeners show that the brain multiplexes two representations, where local statistics along each feature linearly affect neural responses, and global statistics nonlinearly combine statistical beliefs across dimensions to shape perception of stochastic auditory sequences.
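The "local surprisal" tracked along each feature is standardly the negative log probability of each tone under the listener's running estimate of the sequence statistics; a minimal Gaussian sketch with hypothetical values:

```python
# Surprisal sketch: -log p(x) of a tone under a Gaussian belief about
# the sequence statistics. A running mean/SD would normally be updated
# trial by trial; the fixed values here are hypothetical.
import math

def surprisal(x, mu, sigma):
    # negative log density of x under N(mu, sigma^2), in nats
    return 0.5 * math.log(2 * math.pi * sigma**2) + (x - mu) ** 2 / (2 * sigma**2)

# A tone far from the running mean is more surprising than one near it:
print(surprisal(8.0, mu=5.0, sigma=1.0))
print(surprisal(5.2, mu=5.0, sigma=1.0))
```

In the study's framework, such per-feature surprisal traces modulate neural responses linearly, while the object-level signal combines the belief distributions across features nonlinearly.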
23
Rodriguez R, Crane BT. Effect of timing delay between visual and vestibular stimuli on heading perception. J Neurophysiol 2021; 126:304-312. PMID: 34191637; DOI: 10.1152/jn.00351.2020.
Abstract
Heading direction is perceived based on visual and inertial cues. The current study examined the effect of their relative timing on the ability of offset visual headings to influence inertial perception. Seven healthy human subjects experienced 2 s of translation along a heading of 0°, ±35°, ±70°, ±105°, or ±140°. These inertial headings were paired with 2-s duration visual headings presented at relative offsets of 0°, ±30°, ±60°, ±90°, or ±120°. The visual stimuli were also presented at 17 temporal delays ranging from -500 ms (visual lead) to 2,000 ms (visual delay) relative to the inertial stimulus. After each stimulus, subjects reported the direction of the inertial stimulus using a dial. The bias of the inertial heading toward the visual heading was robust at ±250 ms when examined across subjects during this period: 8.0° ± 0.5° with a 30° offset, 12.2° ± 0.5° with a 60° offset, 11.7° ± 0.6° with a 90° offset, and 9.8° ± 0.7° with a 120° offset (mean bias toward visual ± SE). The mean bias was much diminished with temporal misalignments of ±500 ms, and there was no longer any visual influence on the inertial heading when the visual stimulus was delayed by 1,000 ms or more. Although the amount of bias varied between subjects, the effect of delay was similar.

NEW & NOTEWORTHY: The effect of timing on visual-inertial integration in heading perception has not been previously examined. This study finds that visual direction influences inertial heading perception when timing differences are within 250 ms. This suggests that visual-inertial stimuli can be integrated over a wider temporal range than reported for visual-auditory integration, which may be due to the unique nature of inertial sensation: the vestibular system can only sense acceleration, while the visual system senses position but encodes velocity.
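The delay dependence reported here, strong visual capture near synchrony that vanishes by about 1,000 ms, is often summarized as a temporal binding window. A sketch with a Gaussian window whose width and peak bias are hypothetical choices, not values fitted to this study:

```python
# Temporal-binding-window sketch: visual influence on inertial heading
# falls off with visual-inertial asynchrony. The window width (sigma)
# and peak bias are hypothetical illustration values.
import math

def visual_bias(peak_bias_deg, delay_ms, window_ms=400.0):
    return peak_bias_deg * math.exp(-0.5 * (delay_ms / window_ms) ** 2)

print(visual_bias(12.0, 0))     # full bias when synchronous
print(visual_bias(12.0, 250))   # still robust at 250 ms
print(visual_bias(12.0, 1000))  # small at a 1,000 ms delay
```

Any monotonically decaying window would capture the qualitative pattern; the Gaussian is just a convenient, symmetric choice.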
Affiliation(s)
- Raul Rodriguez
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Benjamin T Crane
- Department of Biomedical Engineering, University of Rochester, Rochester, New York; Department of Otolaryngology, University of Rochester, Rochester, New York; Department of Neuroscience, University of Rochester, Rochester, New York
24
Billock VA, Kinney MJ, Schnupp JW, Meredith MA. A simple vector-like law for perceptual information combination is also followed by a class of cortical multisensory bimodal neurons. iScience 2021; 24:102527. [PMID: 34142039 PMCID: PMC8188495 DOI: 10.1016/j.isci.2021.102527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2020] [Revised: 01/10/2021] [Accepted: 05/05/2021] [Indexed: 11/25/2022] Open
Abstract
An interdisciplinary approach to sensory information combination shows a correspondence between perceptual and neural measures of nonlinear multisensory integration. In psychophysics, sensory information combinations are often characterized by the Minkowski formula, but the neural substrates of many psychophysical multisensory interactions are unknown. We show that audiovisual interactions - for both psychophysical detection threshold data and cortical bimodal neurons - obey similar vector-like Minkowski models, suggesting that cortical bimodal neurons could underlie multisensory perceptual sensitivity. An alternative Bayesian model is not a good predictor of cortical bimodal response. In contrast to cortex, audiovisual data from superior colliculus resembles the 'City-Block' combination rule used in perceptual similarity metrics. Previous work found a simple power law amplification rule is followed for perceptual appearance measures and by cortical subthreshold multisensory neurons. The two most studied neural cell classes in cortical multisensory interactions may provide neural substrates for two important perceptual modes: appearance-based and performance-based perception.
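The Minkowski combination rule discussed in this abstract can be sketched in a few lines. This is a toy illustration of the general formula, not the authors' code; the sensitivity values are made up:

```python
def minkowski_combination(s_a, s_v, m=2.0):
    """Combine auditory and visual sensitivities with a Minkowski rule.

    m = 2 gives the 'vector-like' (Euclidean) combination the abstract
    reports for cortical bimodal neurons; m = 1 gives the 'City-Block'
    (linear-sum) rule it reports for superior colliculus.
    """
    return (s_a ** m + s_v ** m) ** (1.0 / m)

# Hypothetical unisensory sensitivities (arbitrary units)
vector_like = minkowski_combination(3.0, 4.0, m=2.0)  # Euclidean: 5.0
city_block = minkowski_combination(3.0, 4.0, m=1.0)   # linear sum: 7.0
```

Note that for any m > 1 the combined sensitivity exceeds either unisensory input but falls short of their simple sum, which is one way nonlinear multisensory enhancement can be parameterized.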
Affiliation(s)
- Vincent A. Billock
- Naval Aerospace Medical Research Laboratory, NAMRU-D, Wright-Patterson Air Force Base, OH 45433, USA
- Micah J. Kinney
- Naval Aerospace Medical Research Laboratory, NAMRU-D, Wright-Patterson Air Force Base, OH 45433, USA
- Naval Air Warfare Center, NAWCAD, Patuxent River, MD 20670, USA
- Jan W.H. Schnupp
- Department of Neuroscience, City University of Hong Kong, Kowloon Tong, Hong Kong, China
- M. Alex Meredith
- Department of Anatomy and Neurobiology, Virginia Commonwealth University, Richmond, VA 23298, USA
25
Dynamics of Heading and Choice-Related Signals in the Parieto-Insular Vestibular Cortex of Macaque Monkeys. J Neurosci 2021; 41:3254-3265. [PMID: 33622780 DOI: 10.1523/jneurosci.2275-20.2021] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2020] [Revised: 01/20/2021] [Accepted: 02/17/2021] [Indexed: 02/06/2023] Open
Abstract
Perceptual decision-making is increasingly being understood to involve an interaction between bottom-up sensory-driven signals and top-down choice-driven signals, but how these signals interact to mediate perception is not well understood. The parieto-insular vestibular cortex (PIVC) is an area with prominent vestibular responsiveness, and previous work has shown that inactivating PIVC impairs vestibular heading judgments. To investigate the nature of PIVC's contribution to heading perception, we recorded extracellularly from PIVC neurons in two male rhesus macaques during a heading discrimination task, and compared findings with data from previous studies of dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas using identical stimuli. By computing partial correlations between neural responses, heading, and choice, we find that PIVC activity reflects a dynamically changing combination of sensory and choice signals. In addition, the sensory and choice signals are more balanced in PIVC, in contrast to the sensory dominance in MSTd and choice dominance in VIP. Interestingly, heading and choice signals in PIVC are negatively correlated during the middle portion of the stimulus epoch, reflecting a mismatch in the polarity of heading and choice signals. We anticipate that these results will help unravel the mechanisms of interaction between bottom-up sensory signals and top-down choice signals in perceptual decision-making, leading to more comprehensive models of self-motion perception. SIGNIFICANCE STATEMENT Vestibular information is important for our perception of self-motion, and various cortical regions in primates show vestibular heading selectivity. Inactivation of the macaque vestibular cortex substantially impairs the precision of vestibular heading discrimination, more so than inactivation of other multisensory areas.
Here, we record for the first time from the vestibular cortex while monkeys perform a forced-choice heading discrimination task, and we compare results with data collected previously from other multisensory cortical areas. We find that vestibular cortex activity reflects a dynamically changing combination of sensory and choice signals, with both similarities and notable differences with other multisensory areas.
26
Wild B, Treue S. Primate extrastriate cortical area MST: a gateway between sensation and cognition. J Neurophysiol 2021; 125:1851-1882. [PMID: 33656951 DOI: 10.1152/jn.00384.2020] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023] Open
Abstract
Primate visual cortex consists of dozens of distinct brain areas, each providing a highly specialized component to the sophisticated task of encoding the incoming sensory information and creating a representation of our visual environment that underlies our perception and action. One such area is the medial superior temporal cortex (MST), a motion-sensitive, direction-selective part of the primate visual cortex. It receives most of its input from the middle temporal (MT) area, but MST cells have larger receptive fields and respond to more complex motion patterns. The finding that MST cells are tuned for optic flow patterns has led to the suggestion that the area plays an important role in the perception of self-motion. This hypothesis has received further support from studies showing that some MST cells also respond selectively to vestibular cues. Furthermore, the area is part of a network that controls the planning and execution of smooth pursuit eye movements and its activity is modulated by cognitive factors, such as attention and working memory. This review of more than 90 studies focuses on providing clarity of the heterogeneous findings on MST in the macaque cortex and its putative homolog in the human cortex. From this analysis of the unique anatomical and functional position in the hierarchy of areas and processing steps in primate visual cortex, MST emerges as a gateway between perception, cognition, and action planning. Given this pivotal role, this area represents an ideal model system for the transition from sensation to cognition.
Affiliation(s)
- Benedict Wild
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Goettingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Goettingen, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany
27
Mavrogiorgou P, Peitzmeier N, Enzi B, Flasbeck V, Juckel G. Pareidolias and Creativity in Patients with Mental Disorders. Psychopathology 2021; 54:59-69. [PMID: 33657568 DOI: 10.1159/000512129] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Accepted: 09/29/2020] [Indexed: 11/19/2022]
Abstract
OBJECTIVE Pareidolias are illusionary misjudgments and are seen as the result of deliberately or unconsciously caused misinterpretations by the human brain, which tends to complete diffuse and apparently incomplete perceptual images. The psychopathological value of pareidolia in the context of neuropsychiatric diseases has, however, been little researched so far. METHODS In this pilot study, a total of 25 patients (mean age 43.3 years, SD 16.2) with an affective disorder or schizophrenic disease (ICD-10: F3.X or F2.X) and 25 healthy volunteers (mean age 46.1 years, SD 15.4) were compared for sociodemographic factors and psychometric findings, as well as pareidolias and creativity. RESULTS We found that the patients identified significantly fewer pareidolias than healthy controls (p = 0.002) and that patients with schizophrenia, in particular, had a significantly lower hit rate (p = 0.005). Across the whole group, there were clear positive correlations between pareidolia and high creativity, as well as personality traits such as impulsiveness/spontaneity, extraversion, and conscientiousness. CONCLUSIONS Unexpectedly, recognition of pareidolia as a specific form of illusionary misperception in patients with affective disorder or schizophrenia was promoted less by nosology-specific features than by individual properties such as creativity and extraversion, and especially openness and verbal intelligence.
Affiliation(s)
- Paraskevi Mavrogiorgou
- Department of Psychiatry, Ruhr University Bochum, LWL-Universitätsklinikum, Bochum, Germany
- Nils Peitzmeier
- Department of Psychiatry, Ruhr University Bochum, LWL-Universitätsklinikum, Bochum, Germany
- Björn Enzi
- Department of Psychiatry, Ruhr University Bochum, LWL-Universitätsklinikum, Bochum, Germany
- Vera Flasbeck
- Department of Psychiatry, Ruhr University Bochum, LWL-Universitätsklinikum, Bochum, Germany
- Georg Juckel
- Department of Psychiatry, Ruhr University Bochum, LWL-Universitätsklinikum, Bochum, Germany
28
Burlingham CS, Heeger DJ. Heading perception depends on time-varying evolution of optic flow. Proc Natl Acad Sci U S A 2020; 117:33161-33169. [PMID: 33328275 PMCID: PMC7776640 DOI: 10.1073/pnas.2022984117] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
There is considerable support for the hypothesis that perception of heading in the presence of rotation is mediated by instantaneous optic flow. This hypothesis, however, has never been tested. We introduce a method, termed "nonvarying phase motion," for generating a stimulus that conveys a single instantaneous optic flow field, even though the stimulus is presented for an extended period of time. In this experiment, observers viewed stimulus videos and performed a forced-choice heading discrimination task. For nonvarying phase motion, observers made large errors in heading judgments. This suggests that instantaneous optic flow is insufficient for heading perception in the presence of rotation. These errors were mostly eliminated when the velocity of phase motion was varied over time to convey the evolving sequence of optic flow fields corresponding to a particular heading. This demonstrates that heading perception in the presence of rotation relies on the time-varying evolution of optic flow. We hypothesize that the visual system accurately computes heading, despite rotation, based on optic acceleration, the temporal derivative of optic flow.
Affiliation(s)
- David J Heeger
- Department of Psychology, New York University, New York, NY 10003
- Center for Neural Science, New York University, New York, NY 10003
29
Emara AK, Ng MK, Cruickshank JA, Kampert MW, Piuzzi NS, Schaffer JL, King D. Gamer's Health Guide: Optimizing Performance, Recognizing Hazards, and Promoting Wellness in Esports. Curr Sports Med Rep 2020; 19:537-545. [PMID: 33306517 DOI: 10.1249/jsr.0000000000000787] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
Electronic sports (esports), or competitive video gaming, is a rapidly growing industry and phenomenon. While around 90% of American children play video games recreationally, the average professional esports athlete spends 5.5 to 10 h gaming daily. These times and efforts parallel those of traditional sports activities where individuals can participate at the casual to the professional level with the respective time commitments. Given the rapid growth in esports, greater emphasis has been placed on identification, management, and prevention of common health hazards that are associated with esports participation while also focusing on the importance of health promotion for this group of athletes. This review outlines a three-point framework for sports medicine providers, trainers, and coaches to provide a holistic approach for the care of the esports athlete. This esports framework includes awareness and management of common musculoskeletal and health hazards, opportunities for health promotion, and recommendations for performance optimization.
Affiliation(s)
- Ahmed K Emara
- Esports Medicine Program Cleveland Clinic Foundation, Cleveland, OH
30
Abstract
Research has shown that consistent stereoscopic information improves the vection (i.e. illusions of self-motion) induced in stationary observers. This study investigates the effects of placing stereoscopic information into direct conflict with monocular motion signals by swapping the observer's left and right eye views to reverse disparity. Experiments compared the vection induced by stereo-consistent, stereo-reversed and flat-stereo patterns of: (1) same-size optic flow, which contained monocular motion perspective information about self-motion, and (2) changing-size optic flow, which provided additional monocular information about motion-in-depth based on local changes in object image sizes. As expected, consistent stereoscopic information improved the vection-in-depth induced by both changing-size and same-size patterns of optic flow. Unexpectedly, stereo-reversed patterns of same-size optic flow also induced stronger vection-in-depth than flat-stereo patterns of same-size optic flow. The effects of stereo-consistent and stereo-reversed information on vection strength were found to correlate reliably with their effects on perceived motion-in-depth and motion after-effect durations, but not with their effects on perceived scene depth. This suggests that stereo-consistent and stereo-reversed advantages for vection were both due to effects on perceived motion-in-depth. The current findings clearly demonstrate that stereoscopic information does not need to be consistent with monocular motion signals in order to improve vection. When taken together with past findings, they suggest that stereoscopic information only needs to be dynamic (as opposed to static) in order to improve vection-in-depth.
31
Nguyen NT, Takakura H, Nishijo H, Ueda N, Ito S, Fujisaka M, Akaogi K, Shojaku H. Cerebral Hemodynamic Responses to the Sensory Conflict Between Visual and Rotary Vestibular Stimuli: An Analysis With a Multichannel Near-Infrared Spectroscopy (NIRS) System. Front Hum Neurosci 2020; 14:125. [PMID: 32372931 PMCID: PMC7187689 DOI: 10.3389/fnhum.2020.00125] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2019] [Accepted: 03/19/2020] [Indexed: 12/11/2022] Open
Abstract
Sensory conflict among visual, vestibular, and somatosensory information induces vertiginous sensation and postural instability. To elucidate the cognitive mechanisms of the integration between the visual and vestibular cues in humans, we analyzed the cortical hemodynamic responses during sensory conflict between visual and horizontal rotatory vestibular stimulation using a multichannel near-infrared spectroscopy (NIRS) system. The subjects sat on a rotatory chair that was accelerated at 3°/s2 for 20 s to the right or left, kept rotating at 60°/s for 80 s, and then decelerated at 3°/s2 for 20 s. The subjects were instructed to watch white stripes projected on a screen surrounding the chair during the acceleration and deceleration periods. The white stripes moved in two ways; in the "congruent" condition, the stripes moved in the opposite direction of chair rotation at 3°/s2 (i.e., natural visual stimulation), whereas in the "incongruent" condition, the stripes moved in the same direction of chair rotation at 3°/s2 (i.e., conflicted visual stimulation). The cortical hemodynamic activity was recorded from the bilateral temporoparietal regions. Statistical analyses using NIRS-SPM software indicated that hemodynamic activity increased in the bilateral temporoparietal junctions (TPJs) and human MT+ complex, including the middle temporal (MT) area and medial superior temporal (MST) area in the incongruent condition. Furthermore, the subjective strength of the vertiginous sensation was negatively correlated with hemodynamic activity in the dorsal part of the supramarginal gyrus (SMG) in and around the intraparietal sulcus (IPS). These results suggest that sensory conflict between the visual and vestibular stimuli promotes cortical cognitive processes in the cortical network consisting of the TPJ, the middle temporal gyrus (MTG), and IPS, which might contribute to self-motion perception to maintain a sense of balance or equilibrioception during sensory conflict.
Affiliation(s)
- Nghia Trong Nguyen
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Hiromasa Takakura
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Hisao Nishijo
- System Emotional Science Laboratory, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Naoko Ueda
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Shinsuke Ito
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Michiro Fujisaka
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Katsuichi Akaogi
- Department of Otorhinolaryngology, Toyama Red Cross Hospital, Toyama, Japan
- Hideo Shojaku
- Department of Otorhinolaryngology, Head and Neck Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
32
Yakubovich S, Israeli-Korn S, Halperin O, Yahalom G, Hassin-Baer S, Zaidel A. Visual self-motion cues are impaired yet overweighted during visual-vestibular integration in Parkinson's disease. Brain Commun 2020; 2:fcaa035. [PMID: 32954293 PMCID: PMC7425426 DOI: 10.1093/braincomms/fcaa035] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2019] [Revised: 02/17/2020] [Accepted: 03/11/2020] [Indexed: 11/25/2022] Open
Abstract
Parkinson's disease is prototypically a movement disorder. Although perceptual and motor functions are highly interdependent, much less is known about perceptual deficits in Parkinson's disease, which are less observable by nature, and might go unnoticed if not tested directly. It is therefore imperative to seek and identify these, to fully understand the challenges facing patients with Parkinson's disease. Also, perceptual deficits may be related to motor symptoms. Posture, gait and balance, affected in Parkinson's disease, rely on veridical perception of one's own motion (self-motion) in space. Yet it is not known whether self-motion perception is impaired in Parkinson's disease. Using a well-established multisensory paradigm of heading discrimination (that has not been previously applied to Parkinson's disease), we tested unisensory visual and vestibular self-motion perception, as well as multisensory integration of visual and vestibular cues, in 19 Parkinson's disease, 23 healthy age-matched and 20 healthy young-adult participants. After experiencing vestibular (on a motion platform), visual (optic flow) or multisensory (combined visual-vestibular) self-motion stimuli at various headings, participants reported whether their perceived heading was to the right or left of straight ahead. Parkinson's disease participants and age-matched controls were tested twice (Parkinson's disease participants on and off medication). Parkinson's disease participants demonstrated significantly impaired visual self-motion perception compared with age-matched controls on both visits, irrespective of medication status. Young controls performed slightly (but not significantly) better than age-matched controls and significantly better than the Parkinson's disease group. The visual self-motion perception impairment in Parkinson's disease correlated significantly with clinical disease severity. By contrast, vestibular performance was unimpaired in Parkinson's disease. 
Remarkably, despite impaired visual self-motion perception, Parkinson's disease participants significantly overweighted the visual cues during multisensory (visual-vestibular) integration (compared with Bayesian predictions of optimal integration) and significantly more than controls. These findings indicate that self-motion perception in Parkinson's disease is affected by impaired visual cues and by suboptimal visual-vestibular integration (overweighting of visual cues). Notably, vestibular self-motion perception was unimpaired. Thus, visual self-motion perception is specifically impaired in early-stage Parkinson's disease. This can impact Parkinson's disease diagnosis and subtyping. Overweighting of visual cues could reflect a general multisensory integration deficit in Parkinson's disease, or specific overestimation of visual cue reliability. Finally, impaired self-motion perception in Parkinson's disease may contribute to impaired balance and gait control. Future investigation into this connection might open up new avenues of alternative therapies to better treat these difficult symptoms.
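The Bayesian-optimal (maximum-likelihood) benchmark against which this study compares visual weighting can be sketched generically. This is an illustration of the standard inverse-variance model with made-up noise values, not the study's data or analysis code:

```python
def mle_weights(sigma_vis, sigma_vest):
    """Optimal cue weights are proportional to reliability (1/variance)."""
    r_vis, r_vest = 1.0 / sigma_vis ** 2, 1.0 / sigma_vest ** 2
    w_vis = r_vis / (r_vis + r_vest)
    return w_vis, 1.0 - w_vis

def mle_combined_sigma(sigma_vis, sigma_vest):
    """Predicted multisensory SD: never worse than the better single cue."""
    return (sigma_vis ** 2 * sigma_vest ** 2
            / (sigma_vis ** 2 + sigma_vest ** 2)) ** 0.5

# A noisier visual cue should optimally receive the *smaller* weight
w_vis, w_vest = mle_weights(sigma_vis=4.0, sigma_vest=3.0)  # 0.36, 0.64
sigma_comb = mle_combined_sigma(4.0, 3.0)                   # 2.4
```

"Overweighting," as reported here for Parkinson's disease, means the empirically measured visual weight exceeds the w_vis this rule predicts from the unisensory noise levels.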
Affiliation(s)
- Sol Yakubovich
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Simon Israeli-Korn
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- The Neurology and Neurosurgery Department, The Sackler School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Orly Halperin
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Gilad Yahalom
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- Department of Neurology, Movement Disorders Clinic, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Sharon Hassin-Baer
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- The Neurology and Neurosurgery Department, The Sackler School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
33
Rodriguez R, Crane BT. Common causation and offset effects in human visual-inertial heading direction integration. J Neurophysiol 2020; 123:1369-1379. [PMID: 32130052 DOI: 10.1152/jn.00019.2020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Movement direction can be determined from a combination of visual and inertial cues. Visual motion (optic flow) can represent self-motion through a fixed environment or environmental motion relative to an observer. Simultaneous visual and inertial heading cues present the question of whether the cues have a common cause (i.e., should be integrated) or whether they should be considered independent. This was studied in eight healthy human subjects who experienced 12 visual and inertial headings in the horizontal plane divided in 30° increments. The headings were estimated in two unisensory and six multisensory trial blocks. Each unisensory block included 72 stimulus presentations, while each multisensory block included 144 stimulus presentations, including every possible combination of visual and inertial headings in random order. After each multisensory stimulus, subjects reported their perception of visual and inertial headings as congruous (i.e., having common causation) or not. In the multisensory trial blocks, subjects also reported visual or inertial heading direction (3 trial blocks for each). For aligned visual-inertial headings, the rate of common causation was higher during alignment in cardinal than noncardinal directions. When visual and inertial stimuli were separated by 30°, the rate of reported common causation remained >50%, but it decreased to 15% or less for separation of ≥90°. The inertial heading was biased toward the visual heading by 11-20° for separations of 30-120°. Thus there was sensory integration even in conditions without reported common causation. The visual heading was minimally influenced by inertial direction. When trials with common causation perception were compared with those without, inertial heading perception had a stronger bias toward visual stimulus direction. NEW & NOTEWORTHY Optic flow ambiguously represents self-motion or environmental motion.
When these are in different directions, it is uncertain whether these are integrated into a common perception or not. This study looks at that issue by determining whether the two modalities are consistent and by measuring their perceived directions to get a degree of influence. The visual stimulus can have significant influence on the inertial stimulus even when they are perceived as inconsistent.
Affiliation(s)
- Raul Rodriguez
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Benjamin T Crane
- Department of Biomedical Engineering, University of Rochester, Rochester, New York; Department of Otolaryngology, University of Rochester, Rochester, New York; Department of Neuroscience, University of Rochester, Rochester, New York
34
Shayman CS, Peterka RJ, Gallun FJ, Oh Y, Chang NYN, Hullar TE. Frequency-dependent integration of auditory and vestibular cues for self-motion perception. J Neurophysiol 2020; 123:936-944. [PMID: 31940239 DOI: 10.1152/jn.00307.2019] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
Recent evidence has shown that auditory information may be used to improve postural stability, spatial orientation, navigation, and gait, suggesting an auditory component of self-motion perception. To determine how auditory and other sensory cues integrate for self-motion perception, we measured motion perception during yaw rotations of the body and the auditory environment. Psychophysical thresholds in humans were measured over a range of frequencies (0.1-1.0 Hz) during self-rotation without spatial auditory stimuli, rotation of a sound source around a stationary listener, and self-rotation in the presence of an earth-fixed sound source. Unisensory perceptual thresholds and the combined multisensory thresholds were found to be frequency dependent. Auditory thresholds were better at lower frequencies, and vestibular thresholds were better at higher frequencies. Expressed in terms of peak angular velocity, multisensory vestibular and auditory thresholds ranged from 0.39°/s at 0.1 Hz to 0.95°/s at 1.0 Hz and were significantly better over low frequencies than either the auditory-only (0.54°/s to 2.42°/s at 0.1 and 1.0 Hz, respectively) or vestibular-only (2.00°/s to 0.75°/s at 0.1 and 1.0 Hz, respectively) unisensory conditions. Monaurally presented auditory cues were less effective than binaural cues in lowering multisensory thresholds. Frequency-independent thresholds were derived, assuming that vestibular thresholds depended on a weighted combination of velocity and acceleration cues, whereas auditory thresholds depended on displacement and velocity cues. These results elucidate fundamental mechanisms for the contribution of audition to balance and help explain previous findings, indicating its significance in tasks requiring self-orientation. NEW & NOTEWORTHY Auditory information can be integrated with visual, proprioceptive, and vestibular signals to improve balance, orientation, and gait, but this process is poorly understood.
Here, we show that auditory cues significantly improve sensitivity to self-motion perception below 0.5 Hz, whereas vestibular cues contribute more at higher frequencies. Motion thresholds are determined by a weighted combination of displacement, velocity, and acceleration information. These findings may help understand and treat imbalance, particularly in people with sensory deficits.
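Under the standard inverse-variance (MLE) model, a multisensory threshold predicted from two unisensory thresholds is T = Ta*Tv / sqrt(Ta^2 + Tv^2), since thresholds scale with the noise SD. A quick sketch applying that standard prediction to the 0.1-Hz values reported in the abstract above; this illustrates the generic model only, not the authors' weighted velocity/acceleration analysis:

```python
import math

def predicted_multisensory_threshold(t_a, t_v):
    """MLE prediction: thresholds scale with SD, so they combine as
    inverse variances -> T = Ta * Tv / sqrt(Ta^2 + Tv^2)."""
    return t_a * t_v / math.sqrt(t_a ** 2 + t_v ** 2)

# 0.1 Hz thresholds from the abstract: auditory 0.54°/s, vestibular 2.00°/s
pred = predicted_multisensory_threshold(0.54, 2.00)  # ≈ 0.52°/s
```

The observed multisensory threshold at 0.1 Hz (0.39°/s) is lower still, which is one reason the authors turn to a frequency-dependent model with separate displacement, velocity, and acceleration cues rather than this simple rule.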
Affiliation(s)
- Corey S Shayman
- Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon; School of Medicine, University of Utah, Salt Lake City, Utah
- Robert J Peterka
- Department of Neurology, Oregon Health and Science University, Portland, Oregon; National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon
- Frederick J Gallun
- National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon; Oregon Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- Yonghee Oh
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida
- Nai-Yuan N Chang
- Department of Preventive and Restorative Dental Sciences-Division of Bioengineering and Biomaterials, University of California, San Francisco, San Francisco, California
- Timothy E Hullar
- Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon; Department of Neurology, Oregon Health and Science University, Portland, Oregon; National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon
35
Miwa T, Hisakata R, Kaneko H. Effects of the gravity direction in the environment and the visual polarity and body direction on the perception of object motion. Vision Res 2019; 164:12-23. [DOI: 10.1016/j.visres.2019.08.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2018] [Revised: 07/26/2019] [Accepted: 08/10/2019] [Indexed: 10/26/2022]
36. Bronstein AM. A conceptual model of the visual control of posture. Prog Brain Res 2019; 248:285-302. [PMID: 31239139] [DOI: 10.1016/bs.pbr.2019.04.023]
Abstract
In order to isolate the visual contribution to the control of postural balance, experiments in which subjects are exposed to large-field visual motion (optokinetic) stimuli are reviewed. In these situations, at motion onset, the visual stimulus signals subject self-motion but inertial (vestibulo-proprioceptive) cues do not. The visually evoked postural responses (VEPR) thus induced can be quickly suppressed by cognitive status or simple repetition of the stimulus, if the inertial self-motion cues available to the subject are reliable. In the conceptual model presented here, the process of assessing the reliability, and degree of matching, of visual and inertial signals is carried out by a General comparator, which in turn accesses the Gain control mechanism of the visuo-postural system. Complexity and congruency in the visual stimulus itself are assessed by a Visual comparator; for example, the presence of motion parallax in the visual stimulus can reverse the direction of the sway response. VEPR can also be re-oriented according to the position of the eyes in the head and of the head on the trunk. This indicates that ocular and cervical proprioceptors must also access the gain control mechanism so that visual stimuli can recruit and silence different postural muscles appropriately. The overall gain of the visuo-postural system is also influenced by less easily defined idiosyncratic factors, such as visual dependence and psychological traits; interestingly, both of these factors have been found to be associated with poor long-term outcome in vestibular disorders. The experimental results and model presented illustrate that the visuo-postural system is a wonderful example of interaction between physics (e.g., stimulus geometry, body dynamics), neuroscience, and the border zone between neurology and psychosomatic medicine.
Affiliation(s)
- Adolfo M Bronstein: Neuro-Otology Unit, Division of Brain Sciences, Imperial College London, Charing Cross Hospital, London, United Kingdom

37. Arshad Q, Ortega MC, Goga U, Lobo R, Siddiqui S, Mediratta S, Bednarczuk NF, Kaski D, Bronstein AM. Interhemispheric control of sensory cue integration and self-motion perception. Neuroscience 2019; 408:378-387. [PMID: 31026563] [DOI: 10.1016/j.neuroscience.2019.04.027]
Abstract
Spatial orientation necessitates the integration of visual and vestibular sensory cues, in turn facilitating self-motion perception. However, the neural mechanisms underpinning sensory integration remain unknown. Recently we have illustrated that spatial orientation and vestibular thresholds are influenced by interhemispheric asymmetries associated with the posterior parietal cortices (PPC), which predominantly house the vestibulo-cortical network. Given that sensory integration is a prerequisite to both spatial orientation and motion perception, we hypothesized that sensory integration is similarly subject to interhemispheric influences. Accordingly, we explored the relationship between vestibulo-cortical dominance (assessed using a biomarker: the degree of vestibular-nystagmus suppression following transcranial direct current stimulation over the PPC) and visual dependence measures obtained during performance of a sensory integration task (the rod-and-disk task). We observed that the degree of visual dependence was correlated with vestibulo-cortical dominance. Specifically, individuals with greater right-hemispheric vestibulo-cortical dominance had reduced visual dependence. We proceeded to assess the behavioral significance of such dominance by correlating measures of visual dependence with self-motion perception in healthy subjects. We observed that right-handed individuals experienced illusory self-motion (vection) more quickly than left-handers, and that the degree of vestibulo-cortical dominance was correlated with the time taken to experience vection, only during conditions that induced interhemispheric conflict. To conclude, we demonstrate that interhemispheric asymmetries associated with vestibulo-cortical processing in the PPC functionally and mechanistically link sensory integration and self-motion perception, facilitating spatial orientation. Our findings highlight the importance of dynamic interhemispheric competition in the control of vestibular behavior in humans.
Affiliation(s)
- Qadeer Arshad: Division of Brain Sciences, Charing Cross Hospital Campus, Imperial College London, Fulham Palace Road, London, W6 8RF, UK; Department of Neuroscience, Psychology and Behaviour, University of Leicester, University Road, Leicester, LE1 7RH, UK
- Marta Casanovas Ortega, Usman Goga, Rhannon Lobo, Shuaib Siddiqui, Saniya Mediratta, Nadja F Bednarczuk, Adolfo M Bronstein: Division of Brain Sciences, Charing Cross Hospital Campus, Imperial College London, Fulham Palace Road, London, W6 8RF, UK
- Diego Kaski: Department of Neuro-otology, Royal National Throat Nose and Ear Hospital, University College London, London, WC1X 8DA, UK
38. Serino A. Peripersonal space (PPS) as a multisensory interface between the individual and the environment, defining the space of the self. Neurosci Biobehav Rev 2019; 99:138-159. [DOI: 10.1016/j.neubiorev.2019.01.016]
39. Rodriguez R, Crane BT. Effect of range of heading differences on human visual-inertial heading estimation. Exp Brain Res 2019; 237:1227-1237. [PMID: 30847539] [DOI: 10.1007/s00221-019-05506-1]
Abstract
Both visual and inertial cues are salient in heading determination. However, optic flow can ambiguously represent self-motion or environmental motion. It is unclear when visual and inertial heading cues are attributed to a common cause and integrated, versus perceived independently. In four experiments, visual and inertial headings were presented simultaneously, with ten subjects reporting visual or inertial headings in separate trial blocks. Experiment 1 examined inertial headings within 30° of straight-ahead and visual headings that were offset by up to 60°. Perception of the inertial heading was shifted in the direction of the visual stimulus by as much as 35° at the 60° offset, while perception of the visual stimulus remained largely uninfluenced. Experiment 2 used a ±140° range of inertial headings with up to 120° of visual offset. This experiment found variable behavior between subjects: most perceived the sensory stimuli as shifted towards an intermediate heading, but a few perceived the headings independently. The visual and inertial headings influenced each other even at the largest offsets. Experiments 3 and 4 had similar inertial headings to experiments 1 and 2, respectively, except that subjects reported the direction of environmental motion. Experiment 4 displayed similar perceptual influences to experiment 2, but in experiment 3 the percepts were independent. Results suggested that visual and inertial stimuli tend to be perceived as having a common cause in most subjects at offsets up to 90°, although with significant variation in perception between individuals. Limiting the range of inertial headings caused the visual heading to dominate perception.
Affiliation(s)
- Raul Rodriguez: Department of Bioengineering, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA
- Benjamin T Crane: Department of Bioengineering, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA; Department of Otolaryngology, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA; Department of Neuroscience, University of Rochester, 601 Elmwood Avenue, Box 629, Rochester, NY, 14642, USA
40. Britton Z, Arshad Q. Vestibular and Multi-Sensory Influences Upon Self-Motion Perception and the Consequences for Human Behavior. Front Neurol 2019; 10:63. [PMID: 30899238] [PMCID: PMC6416181] [DOI: 10.3389/fneur.2019.00063]
Abstract
In this manuscript, we comprehensively review both the human and animal literature regarding vestibular and multi-sensory contributions to self-motion perception. This covers the anatomical basis and how and where the signals are processed at all levels from the peripheral vestibular system to the brainstem and cerebellum and finally to the cortex. Further, we consider how and where these vestibular signals are integrated with other sensory cues to facilitate self-motion perception. We conclude by demonstrating the wide-ranging influences of the vestibular system and self-motion perception upon behavior, namely eye movement, postural control, and spatial awareness as well as new discoveries that such perception can impact upon numerical cognition, human affect, and bodily self-consciousness.
Affiliation(s)
- Zelie Britton: Department of Neuro-Otology, Charing Cross Hospital, Imperial College London, London, United Kingdom
- Qadeer Arshad: Department of Neuro-Otology, Charing Cross Hospital, Imperial College London, London, United Kingdom
41. Macuga KL. Multisensory Influences on Driver Steering During Curve Navigation. Hum Factors 2019; 61:337-347. [PMID: 30320509] [DOI: 10.1177/0018720818805898]
Abstract
OBJECTIVE: The effects of inertial (vestibular and somatosensory) information on driver steering during curve navigation were investigated, using an electric four-wheel mobility vehicle outfitted with a steering wheel and a portable virtual reality system.
BACKGROUND: When driving, multiple sources of perceptual information are available. Researchers have focused on visual information, which plays a critical role in steering control. However, it is not yet well established how inertial information might contribute.
METHODS: I biased inertial cues by varying visual/inertial gains (doubled, halved, reversed) as drivers negotiated curving paths, and measured steering accuracy and efficiency. I also assessed whether exposure to inertial biases had an impact on postbias steering by comparing pre- and posttest session performance measures.
RESULTS: Doubling or halving inertial cues had little effect on steering performance. Inertial information only disrupted steering when it was reversed with respect to visual information. Over time, the influence of this extreme inertial bias was reduced though not eliminated. Postbias curve navigation performance was not impacted, likely because participants had learned to disregard, rather than integrate, biased inertial cues.
CONCLUSION: Results suggest that biased inertial information has little influence on curve navigation performance when visual information is available.
APPLICATION: Though inertial cues may be important for open-loop steering when visual cues are unavailable, their role in closed-loop steering seems less influential. This has implications for driving simulation and suggests that inertial discrepancies due to limitations in motion-cuing capabilities may not be all that problematic for the simulation of closed-loop curve steering tasks.
42. Genetically eliminating Purkinje neuron GABAergic neurotransmission increases their response gain to vestibular motion. Proc Natl Acad Sci U S A 2019; 116:3245-3250. [PMID: 30723151] [DOI: 10.1073/pnas.1818819116]
Abstract
Purkinje neurons in the caudal cerebellar vermis combine semicircular canal and otolith signals to segregate linear and gravitational acceleration, evidence for how the cerebellum creates internal models of body motion. However, it is not known which cerebellar circuit connections are necessary to perform this computation. We first showed that this computation is evolutionarily conserved and represented across multiple lobules of the rodent vermis. Then we tested whether Purkinje neuron GABAergic output is required for accurately differentiating linear and gravitational movements through a conditional genetic silencing approach. By using extracellular recordings from lobules VI through X in awake mice, we show that silencing Purkinje neuron output significantly alters their baseline simple spike variability. Moreover, the cerebellum of genetically manipulated mice continues to distinguish linear from gravitational acceleration, suggesting that the underlying computations remain intact. However, response gain is significantly increased in the mutant mice over littermate controls. Altogether, these data argue that Purkinje neuron feedback regulates gain control within the cerebellar circuit.
43.
Abstract
Research has previously shown that adding consistent stereoscopic information to self-motion displays can improve the vection in depth induced in physically stationary observers. In some past studies, the simulated eye-separation was always close to the observer's actual eye-separation, as the aim was to examine vection under ecological viewing conditions that provided consistent binocular and monocular self-motion information. The present study investigated whether large discrepancies between the observer's simulated and physical eye-separations would alter the vection-inducing potential of stereoscopic optic flow (either helping, hindering, or preventing the induction of vection). Our self-motion displays simulated eye-separations of 0 cm (the non-stereoscopic control), 3.25 cm (reduced from normal), 6.5 cm (approximately normal), and 13 cm (exaggerated relative to normal). The rated strength of vection in depth was found to increase systematically with the simulated eye-separation. While vection was the strongest in the 13-cm condition (stronger than even the 6.5-cm condition), the 3.25-cm condition still produced superior vection to the 0-cm control (i.e., it had significantly stronger vection ratings and shorter onset latencies). Perceptions of scene depth and object motion-in-depth speed were also found to increase with the simulated eye-separation. As expected based on the findings of previous studies, correlational analyses suggested that the stereoscopic advantage for vection (found for all of our non-zero eye-separation conditions) was due to the increase in perceived motion-in-depth.
44. Intra-auditory integration between pitch and loudness in humans: Evidence of super-optimal integration at moderate uncertainty in auditory signals. Sci Rep 2018; 8:13708. [PMID: 30209342] [PMCID: PMC6135783] [DOI: 10.1038/s41598-018-31792-w]
Abstract
When a person plays a musical instrument, sound is produced, and its frequency and intensity are perceived aurally as an integrated whole. The central nervous system (CNS) receives imperfect afferent signals from the auditory system and delivers imperfect efferent signals to the motor system due to noise in both systems. However, little is known about the auditory-motor interactions required for successful performance. Here, we investigated auditory-motor interactions as a multi-sensory input and multi-motor output system. Subjects performed a constant force production task using four fingers under three different auditory feedback conditions, in which either the frequency (F), intensity (I), or both frequency and intensity (FI) of an auditory tone changed with the sum of the finger forces. Four levels of uncertainty (high, moderate-high, moderate-low, and low) were created by manipulating the feedback gain of the produced force. We observed performance enhancement under the FI condition compared to either F or I alone at moderate-high uncertainty. Interestingly, the enhancement was greater than the prediction of the Bayesian model, suggesting super-optimality. We also observed deteriorated synergistic multi-finger interactions as the level of uncertainty increased, suggesting that the CNS responded to increased uncertainty by changing the control strategy of multi-finger actions.
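The super-optimality test described here amounts to comparing the observed variability under combined FI feedback against the Bayesian (MLE) prediction derived from the unimodal conditions. A minimal sketch with hypothetical numbers (the study's actual values are not reproduced here):

```python
import math

def mle_predicted_sd(sd_f, sd_i):
    """SD the Bayesian/MLE model predicts for combined FI feedback,
    given force-variability SDs under F-only and I-only feedback:
    inverse variances add, so the combined SD is below either alone."""
    return math.sqrt(1.0 / (1.0 / sd_f ** 2 + 1.0 / sd_i ** 2))

# Hypothetical unimodal SDs (arbitrary force units)
sd_f, sd_i = 2.0, 2.0
predicted = mle_predicted_sd(sd_f, sd_i)   # sqrt(2), about 1.414
observed = 1.2                             # hypothetical observed FI SD
super_optimal = observed < predicted       # better than the model allows
```

If observed variability falls below `predicted`, integration outperforms the optimal-fusion bound, which is what the abstract labels super-optimality.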
45.
Abstract
Visual motion processing can be conceptually divided into two levels. In the lower level, local motion signals are detected by spatiotemporal-frequency-selective sensors and then integrated into a motion vector flow. Although the model based on V1-MT physiology provides a good computational framework for this level of processing, it needs to be updated to fully explain psychophysical findings about motion perception, including complex motion signal interactions in the spatiotemporal-frequency and space domains. In the higher level, the velocity map is interpreted. Although there are many motion interpretation processes, we highlight the recent progress in research on the perception of material (e.g., specular reflection, liquid viscosity) and on animacy perception. We then consider possible linking mechanisms of the two levels and propose intrinsic flow decomposition as the key problem. To provide insights into computational mechanisms of motion perception, in addition to psychophysics and neurosciences, we review machine vision studies seeking to solve similar problems.
Affiliation(s)
- Shin'ya Nishida, Takahiro Kawabe, Masataka Sawayama, Taiki Fukiage: NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa 243-0198, Japan
46. Shayman CS, Seo JH, Oh Y, Lewis RF, Peterka RJ, Hullar TE. Relationship between vestibular sensitivity and multisensory temporal integration. J Neurophysiol 2018; 120:1572-1577. [PMID: 30020839] [DOI: 10.1152/jn.00379.2018]
Abstract
A single event can generate asynchronous sensory cues due to variable encoding, transmission, and processing delays. To be interpreted as being associated in time, these cues must occur within a limited time window, referred to as a "temporal binding window" (TBW). We investigated the hypothesis that vestibular deficits could disrupt temporal visual-vestibular integration by determining the relationships between vestibular threshold and TBW in participants with normal vestibular function and with vestibular hypofunction. Vestibular perceptual thresholds to yaw rotation were characterized and compared with the TBWs obtained from participants who judged whether a suprathreshold rotation occurred before or after a brief visual stimulus. Vestibular thresholds ranged from 0.7 to 16.5 deg/s and TBWs ranged from 13.8 to 395 ms. Among all participants, TBW and vestibular thresholds were well correlated (R² = 0.674, P < 0.001), with vestibular-deficient patients having higher thresholds and wider TBWs. Participants reported that the rotation onset needed to lead the light flash by an average of 80 ms for the visual and vestibular cues to be perceived as occurring simultaneously. The wide TBWs in vestibular-deficient participants compared with normally functioning participants indicate that peripheral sensory loss can lead to abnormal multisensory integration. A reduced ability to temporally combine sensory cues appropriately may provide a novel explanation for some symptoms reported by patients with vestibular deficits. Even among normally functioning participants, a high correlation between TBW and vestibular thresholds was observed, suggesting that these perceptual measurements are sensitive to small differences in vestibular function. NEW & NOTEWORTHY While spatial visual-vestibular integration has been well characterized, the temporal integration of these cues is not well understood. The relationship between sensitivity to whole-body rotation and the duration of the temporal window of visual-vestibular integration was examined using psychophysical techniques. These parameters were highly correlated both for those with normal vestibular function and for patients with vestibular hypofunction. Reduced temporal integration performance in patients with vestibular hypofunction may explain some symptoms associated with vestibular loss.
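The quantities measured here can be illustrated with a toy decision rule (the PSS and TBW values below are hypothetical defaults, not the paper's per-subject fits): a rotation-flash pair is judged simultaneous when the stimulus-onset asynchrony falls within the binding window centred on the point of subjective simultaneity.

```python
def perceived_simultaneous(soa_ms, pss_ms=80.0, tbw_ms=200.0):
    """Judge a rotation-flash pair as subjectively simultaneous.

    soa_ms: rotation onset minus flash onset (positive = rotation leads).
    pss_ms: point of subjective simultaneity (here, rotation leading by 80 ms).
    tbw_ms: full width of the temporal binding window.
    """
    return abs(soa_ms - pss_ms) <= tbw_ms / 2.0

perceived_simultaneous(80.0)    # physically offset, yet subjectively simultaneous
perceived_simultaneous(-150.0)  # flash leads by far more than the window allows
```

A wider `tbw_ms`, as observed in vestibular-deficient participants, makes physically asynchronous pairs more likely to be bound.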
Affiliation(s)
- Corey S Shayman: Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- Jae-Hyun Seo: Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon; Department of Otolaryngology-Head and Neck Surgery, The Catholic University of Korea, Seoul, Republic of Korea
- Yonghee Oh: Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- Richard F Lewis: Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts; Department of Neurology, Harvard Medical School, Boston, Massachusetts; Jenks Vestibular Physiology Laboratory, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts
- Robert J Peterka: National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon; Department of Neurology, Oregon Health and Science University, Portland, Oregon
- Timothy E Hullar: Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
47. Effect of vibration during visual-inertial integration on human heading perception during eccentric gaze. PLoS One 2018; 13:e0199097. [PMID: 29902253] [PMCID: PMC6002115] [DOI: 10.1371/journal.pone.0199097]
Abstract
Heading direction is determined from visual and inertial cues. Visual headings use retinal coordinates while inertial headings use body coordinates. Thus, during eccentric gaze the same heading may be perceived differently by the visual and inertial modalities. Stimulus weights depend on the relative reliability of these stimuli, but previous work suggests that the inertial heading may be given more weight than predicted. Those experiments only varied the visual stimulus reliability, and it is unclear what occurs with variation in inertial reliability. Five human subjects completed a heading discrimination task using 2 s of translation with a peak velocity of 16 cm/s. Eye position was ±25° left/right with visual, inertial, or combined motion. The visual motion coherence was 50%. Inertial stimuli included 6 Hz vertical vibration with 0, 0.10, 0.15, or 0.20 cm amplitude. Subjects reported the perceived heading relative to the midline. With an inertial heading, perception was biased 3.6° towards the gaze direction. Visual headings biased perception 9.6° opposite gaze. The inertial threshold without vibration was 4.8°, which increased significantly to 8.8° with vibration, but the amplitude of vibration did not influence reliability. With visual-inertial headings, empirical stimulus weights were calculated from the bias and compared with the optimal weights calculated from the thresholds. In 2 subjects empirical weights were near optimal, while in the remaining 3 subjects the inertial stimuli were weighted greater than optimal predictions. On average the inertial stimulus was weighted more heavily than predicted. These results indicate that multisensory integration may not be a function of stimulus reliability when inertial stimulus reliability is varied.
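The comparison made here between empirical and optimal weights can be sketched as follows (all numbers hypothetical, chosen only to mirror the reported pattern of inertial over-weighting): the optimal visual weight follows from the unimodal thresholds, and the empirical weight from where the combined percept's bias falls between the unimodal biases.

```python
def optimal_visual_weight(visual_threshold, inertial_threshold):
    """MLE-optimal visual weight, treating each unimodal threshold
    as proportional to that cue's noise sigma."""
    var_v, var_i = visual_threshold ** 2, inertial_threshold ** 2
    return var_i / (var_v + var_i)

def empirical_visual_weight(combined_bias, visual_bias, inertial_bias):
    """Visual weight implied by where the combined heading bias
    falls between the two unimodal biases."""
    return (combined_bias - inertial_bias) / (visual_bias - inertial_bias)

# Hypothetical values: thresholds 8 deg (visual) and 4.8 deg (inertial);
# unimodal biases -9.6 and +3.6 deg; combined percept biased +1.0 deg
w_opt = optimal_visual_weight(8.0, 4.8)            # about 0.26
w_emp = empirical_visual_weight(1.0, -9.6, 3.6)    # about 0.20
inertial_overweighted = w_emp < w_opt              # mirrors the reported pattern
```

When the empirical visual weight falls below the optimal one, the inertial cue is carrying more weight than its measured reliability warrants.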
48. Noel JP, Blanke O, Serino A. From multisensory integration in peripersonal space to bodily self-consciousness: from statistical regularities to statistical inference. Ann N Y Acad Sci 2018; 1426:146-165. [PMID: 29876922] [DOI: 10.1111/nyas.13867]
Abstract
Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body and, as illustrated by numerous illusions, scaffolds subjective experience of the world and self. In recent years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have suggested multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, although still very much a minority, have recently been cast in the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions regarding the neural implementation of inference computations outlined by the multisensory field. This computational approach, leveraging the understanding of multisensory processes generally, promises to advance scientific comprehension of one of the most mysterious questions puzzling humankind: how our brain creates the experience of a self in interaction with the environment.
Affiliation(s)
- Jean-Paul Noel: Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee
- Olaf Blanke: Laboratory of Cognitive Neuroscience (LNCO), Center for Neuroprosthetics (CNP), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland; Department of Neurology, University of Geneva, Geneva, Switzerland
- Andrea Serino: MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Lausanne, Switzerland
49. Rohde M, van Dam LCJ, Ernst M. Statistically Optimal Multisensory Cue Integration: A Practical Tutorial. Multisens Res 2018; 29:279-317. [PMID: 29384605] [DOI: 10.1163/22134808-00002510]
Abstract
Humans combine redundant multisensory estimates into a coherent multimodal percept. Experiments in cue integration have shown for many modality pairs and perceptual tasks that multisensory information is fused in a statistically optimal manner: observers take the unimodal sensory reliability into consideration when performing perceptual judgments. They combine the senses according to the rules of Maximum Likelihood Estimation to maximize overall perceptual precision. This tutorial explains in an accessible manner how to design optimal cue integration experiments and how to analyse the results from these experiments to test whether humans follow the predictions of the optimal cue integration model. The tutorial is meant for novices in multisensory integration and requires very little training in formal models and psychophysical methods. For each step in the experimental design and analysis, rules of thumb and practical examples are provided. We also publish Matlab code for an example experiment on cue integration and a Matlab toolbox for data analysis that accompanies the tutorial online. This way, readers can learn about the techniques by trying them out themselves. We hope to provide readers with the tools necessary to design their own experiments on optimal cue integration and enable them to take part in explaining when, why and how humans combine multisensory information optimally.
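The MLE combination rule this tutorial teaches can be sketched in a few lines (a hedged illustration in Python, not the authors' Matlab toolbox): each cue is weighted by its inverse variance, and the fused estimate is predicted to be more precise than either cue alone.

```python
def mle_fuse(estimates, sigmas):
    """Fuse unimodal estimates by inverse-variance (MLE) weighting.

    estimates, sigmas: per-modality means and noise SDs.
    Returns the fused estimate and its predicted SD.
    """
    weights = [1.0 / s ** 2 for s in sigmas]    # reliability = 1 / variance
    total = sum(weights)
    weights = [w / total for w in weights]       # normalized cue weights
    fused = sum(w * e for w, e in zip(weights, estimates))
    # Predicted precision of the fused percept: inverse variances add
    sigma_fused = (1.0 / sum(1.0 / s ** 2 for s in sigmas)) ** 0.5
    return fused, sigma_fused

# Illustrative numbers: a visual cue at 10 deg (sigma 1 deg) and a
# haptic cue at 14 deg (sigma 2 deg) receive weights 0.8 and 0.2
fused, sigma = mle_fuse([10.0, 14.0], [1.0, 2.0])
# fused = 10.8 deg; sigma = sqrt(0.8) deg, below both unimodal sigmas
```

Testing whether observers are optimal then reduces to comparing their measured bimodal precision against `sigma_fused` predicted from the unimodal conditions.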
50. Gallagher M, Ferrè ER. Cybersickness: a Multisensory Integration Perspective. Multisens Res 2018; 31:645-674. [PMID: 31264611] [DOI: 10.1163/22134808-20181293]
Abstract
In the past decade, there has been a rapid advance in Virtual Reality (VR) technology. Key to the user's VR experience are multimodal interactions involving all senses. The human brain must integrate real-time vision, hearing, vestibular and proprioceptive inputs to produce the compelling and captivating feeling of immersion in a VR environment. A serious problem with VR is that users may develop symptoms similar to motion sickness, a malady called cybersickness. At present the underlying cause of cybersickness is not yet fully understood. Cybersickness may be due to a discrepancy between the sensory signals which provide information about the body's orientation and motion: in many VR applications, optic flow elicits an illusory sensation of motion which tells users that they are moving in a certain direction with certain acceleration. However, since users are not actually moving, their proprioceptive and vestibular organs provide no cues of self-motion. These conflicting signals may lead to sensory discrepancies and eventually cybersickness. Here we review the current literature to develop a conceptual scheme for understanding the neural mechanisms of cybersickness. We discuss an approach to cybersickness based on sensory cue integration, focusing on the dynamic re-weighting of visual and vestibular signals for self-motion.
Affiliation(s)
- Maria Gallagher: Department of Psychology, Royal Holloway University of London, Egham, UK