1. Shivkumar S, DeAngelis GC, Haefner RM. Hierarchical motion perception as causal inference. bioRxiv 2024:2023.11.18.567582. [PMID: 38014023; PMCID: PMC10680834; DOI: 10.1101/2023.11.18.567582]
Abstract
Since motion can only be defined relative to a reference frame, which reference frame guides perception? A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure of the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, which we formalize with an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups, groups into supergroups, etc., and "perceives" motion in the appropriate reference frame. Critical model predictions are supported by two new experiments, and fitting our model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles and offers inspiration for building better models of visual processing in general.
Affiliation(s)
- Sabyasachi Shivkumar
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, NY 10027, USA
- Gregory C DeAngelis
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Ralf M Haefner
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
2. Polat L, Harpaz T, Zaidel A. Rats rely on airflow cues for self-motion perception. Curr Biol 2024; 34:4248-4260.e5. [PMID: 39214088; DOI: 10.1016/j.cub.2024.08.001]
Abstract
Self-motion perception is a vital skill for all species. It is an inherently multisensory process that combines inertial (body-based) and relative (with respect to the environment) motion cues. Although extensively studied in human and non-human primates, there is currently no paradigm to test self-motion perception in rodents using both inertial and relative self-motion cues. We developed a novel rodent motion simulator using two synchronized robotic arms to generate inertial, relative, or combined (inertial and relative) cues of self-motion. Eight rats were trained to perform a heading-discrimination task similar to the popular primate paradigm. Strikingly, the rats relied heavily on airflow for relative self-motion perception, with little contribution from the (limited) optic flow cues provided; performance in the dark was almost as good. Relative self-motion (airflow) was perceived more reliably than inertial self-motion. Disrupting airflow with a fan or windshield impaired relative, but not inertial, self-motion perception, whereas the whiskers were not needed for this function. Lastly, the rats integrated relative and inertial self-motion cues in a reliability-based (Bayesian-like) manner. These results implicate airflow as an important cue for self-motion perception in rats and provide a new domain in which to investigate the neural bases of self-motion perception and multisensory processing in awake, behaving rodents.
Affiliation(s)
- Lior Polat
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
- Tamar Harpaz
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
3. Rohe T, Hesse K, Ehlis AC, Noppeney U. Multisensory perceptual and causal inference is largely preserved in medicated post-acute individuals with schizophrenia. PLoS Biol 2024; 22:e3002790. [PMID: 39255328; PMCID: PMC11466413; DOI: 10.1371/journal.pbio.3002790]
Abstract
Hallucinations and perceptual abnormalities in psychosis are thought to arise from imbalanced integration of prior information and sensory inputs. We combined psychophysics, Bayesian modeling, and electroencephalography (EEG) to investigate potential changes in perceptual and causal inference in response to audiovisual flash-beep sequences in medicated individuals with schizophrenia who exhibited limited psychotic symptoms. Seventeen participants with schizophrenia and 23 healthy controls reported either the number of flashes or the number of beeps of audiovisual sequences that varied in their audiovisual numeric disparity across trials. Both groups balanced sensory integration and segregation in line with Bayesian causal inference rather than resorting to simpler heuristics. Both also showed comparable weighting of prior information regarding the signals' causal structure, although the schizophrenia group slightly overweighted prior information about the number of flashes or beeps. At the neural level, both groups computed Bayesian causal inference through dynamic encoding of independent estimates of the flash and beep counts, followed by estimates that flexibly combine audiovisual inputs. Our results demonstrate that the core neurocomputational mechanisms for audiovisual perceptual and causal inference in number estimation tasks are largely preserved in our limited sample of medicated post-acute individuals with schizophrenia. Future research should explore whether these findings generalize to unmedicated patients with acute psychotic symptoms.
Affiliation(s)
- Tim Rohe
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Klaus Hesse
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Ann-Christine Ehlis
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Tübingen Center for Mental Health (TüCMH), Tübingen, Germany
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
4. Uemura M, Katagiri Y, Imai E, Kawahara Y, Otani Y, Ichinose T, Kondo K, Kowa H. Dorsal Anterior Cingulate Cortex Coordinates Contextual Mental Imagery for Single-Beat Manipulation during Rhythmic Sensorimotor Synchronization. Brain Sci 2024; 14:757. [PMID: 39199452; PMCID: PMC11352649; DOI: 10.3390/brainsci14080757]
Abstract
Flexible pulse-by-pulse regulation of sensorimotor synchronization is crucial for voluntarily producing rhythmic behaviors in synchrony with external cueing; however, the underpinning neurophysiological mechanisms remain unclear. We hypothesized that the dorsal anterior cingulate cortex (dACC) plays a key role by coordinating both proactive and reactive motor outcomes based on contextual mental imagery. To test our hypothesis, a missing-oddball task in a finger-tapping paradigm was conducted in 33 healthy young volunteers. The dynamic properties of the dACC were evaluated by event-related deep-brain activity (ER-DBA), supported by event-related potential (ERP) analysis and behavioral evaluation based on signal detection theory. We found that ER-DBA activation/deactivation reflected a strategic choice of motor-control modality in accordance with mental imagery. Reversed ERP traces, observed as omission responses, confirmed that the imagery was contextual. We found that mental imagery was updated only by environmental changes, via perceptual evidence and response-based abductive reasoning. Moreover, stable on-pulse tapping was achievable by maintaining proactive control while creating imagery of syncopated rhythms from simple beat trains, whereas accuracy was degraded, with frequent erroneous tapping, for missing pulses. We conclude that the dACC voluntarily regulates rhythmic sensorimotor synchronization by utilizing contextual mental imagery based on experience and by creating novel rhythms.
Affiliation(s)
- Maho Uemura
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- School of Music, Mukogawa Women’s University, Nishinomiya 663-8558, Japan
- Yoshitada Katagiri
- Department of Bioengineering, School of Engineering, The University of Tokyo, Tokyo 113-8655, Japan
- Emiko Imai
- Department of Biophysics, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- Yasuhiro Kawahara
- Department of Human Life and Health Sciences, Division of Arts and Sciences, The Open University of Japan, Chiba 261-8586, Japan
- Yoshitaka Otani
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- Faculty of Rehabilitation, Kobe International University, Kobe 658-0032, Japan
- Tomoko Ichinose
- School of Music, Mukogawa Women’s University, Nishinomiya 663-8558, Japan
- Hisatomo Kowa
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
5. Rohe T. Complex multisensory causal inference in multi-signal scenarios (commentary on Kayser, Debats & Heuer, 2024). Eur J Neurosci 2024; 59:2890-2893. [PMID: 38706126; DOI: 10.1111/ejn.16388]
Affiliation(s)
- Tim Rohe
- Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
6. Zaidel A. Multisensory Calibration: A Variety of Slow and Fast Brain Processes Throughout the Lifespan. Adv Exp Med Biol 2024; 1437:139-152. [PMID: 38270858; DOI: 10.1007/978-981-99-7611-9_9]
Abstract
From before we are born, throughout development, adulthood, and aging, we are immersed in a multisensory world. At each of these stages, our sensory cues are constantly changing, due to body, brain, and environmental changes. While integration of information from our different sensory cues improves precision, this only improves accuracy if the underlying cues are unbiased. Thus, multisensory calibration is a vital and ongoing process. To meet this grand challenge, our brains have evolved a variety of mechanisms. First, in response to a systematic discrepancy between sensory cues (without external feedback), the cues calibrate one another (unsupervised calibration). Second, multisensory function is calibrated to external feedback (supervised calibration). These two mechanisms superimpose. While the former likely reflects a lower level mechanism, the latter likely reflects a higher level cognitive mechanism. Indeed, neural correlates of supervised multisensory calibration in monkeys were found in the higher level multisensory cortical area VIP, but not in the relatively lower level multisensory area MSTd. In addition, even without a cue discrepancy (e.g., when experiencing stimuli from different sensory cues in series), the brain monitors supra-modal statistics of events in the environment and adapts perception cross-modally. This, too, comprises a variety of mechanisms, including confirmation bias toward prior choices and lower level cross-sensory adaptation. Further research into the neuronal underpinnings of the broad and diverse functions of multisensory calibration, with improved synthesis of theories, is needed to attain a more comprehensive understanding of multisensory brain function.
Affiliation(s)
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
7. Jones SA, Noppeney U. Multisensory Integration and Causal Inference in Typical and Atypical Populations. Adv Exp Med Biol 2024; 1437:59-76. [PMID: 38270853; DOI: 10.1007/978-981-99-7611-9_4]
Abstract
Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with that of various other human populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
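For reference, the reliability-weighted cue integration discussed in this chapter is standardly formalized as inverse-variance weighting of the single-cue estimates (a textbook formulation in illustrative notation, not equations reproduced from the chapter):

```latex
% Inverse-variance (reliability-weighted) combination of two Gaussian cues
% with measurements x_1, x_2 and noise variances \sigma_1^2, \sigma_2^2
\begin{equation}
  \hat{s} = w_1 x_1 + w_2 x_2, \qquad
  w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad
  \sigma_{\hat{s}}^2 = \Bigl(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\Bigr)^{-1}.
\end{equation}
```

Because the combined variance is smaller than either single-cue variance, integration always improves precision; the causal-inference step then decides whether this weighting should be applied at all.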
Affiliation(s)
- Samuel A Jones
- Department of Psychology, Nottingham Trent University, Nottingham, UK
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
8. Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220344. [PMID: 37545300; PMCID: PMC10404925; DOI: 10.1098/rstb.2022.0344]
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood in the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA
- Johannes Bill
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Department of Psychology, Harvard University, Boston, MA 02115, USA
- Haoran Ding
- Center for Neural Science, New York University, New York, NY 10003, USA
- John Vastola
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14611, USA
- Dora E. Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tandon School of Engineering, New York University, New York, NY 10003, USA
- Jan Drugowitsch
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Center for Brain Science, Harvard University, Boston, MA 02115, USA
9. Huo H, Liu X, Tang Z, Dong Y, Zhao D, Chen D, Tang M, Qiao X, Du X, Guo J, Wang J, Fan Y. Interhemispheric multisensory perception and Bayesian causal inference. iScience 2023; 26:106706. [PMID: 37250338; PMCID: PMC10214730; DOI: 10.1016/j.isci.2023.106706]
Abstract
In daily life, our brain needs to eliminate irrelevant signals and integrate relevant signals to facilitate natural interactions with our surroundings. Previous studies focused on paradigms without effects of hemispheric dominance and found that human observers process multisensory signals consistent with Bayesian causal inference (BCI). However, most human activities involve bilateral interaction and thus the processing of interhemispheric sensory signals. It remains unclear whether the BCI framework also fits such activities. Here, we presented a bilateral hand-matching task to probe the causal structure of interhemispheric sensory signals. In this task, participants were asked to match ipsilateral visual or proprioceptive cues with the contralateral hand. Our results suggest that interhemispheric causal inference is best described by the BCI framework. Interhemispheric perceptual bias may alter the strategy models used to estimate contralateral multisensory signals. These findings help clarify how the brain processes the uncertainty of interhemispheric sensory signals.
Affiliation(s)
- Hongqiang Huo
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Xiaoyu Liu
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100083, China
- Zhili Tang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Ying Dong
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Di Zhao
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Duo Chen
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Min Tang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Xiaofeng Qiao
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Xin Du
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Jieyi Guo
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Jinghui Wang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Yubo Fan
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- School of Medical Science and Engineering Medicine, Beihang University, Beijing 100083, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100083, China
10. Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. bioRxiv 2023:2023.01.27.525974. [PMID: 36778376; PMCID: PMC9915492; DOI: 10.1101/2023.01.27.525974]
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of Bayesian Causal Inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood in the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief over (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modeling results, we show that humans report targets as stationary and steer toward their initial rather than final position more often when they are themselves moving, suggesting a misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results confirm both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets are largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, United States
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Haoran Ding
- Center for Neural Science, New York University, New York City, NY, United States
- John Vastola
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, United States
- Dora E. Angelaki
- Center for Neural Science, New York University, New York City, NY, United States
- Tandon School of Engineering, New York University, New York City, NY, United States
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Center for Brain Science, Harvard University, Boston, MA, United States
11. Preuss Mattsson N, Coppi S, Chancel M, Ehrsson HH. Combination of visuo-tactile and visuo-vestibular correlations in illusory body ownership and self-motion sensations. PLoS One 2022; 17:e0277080. [PMID: 36378668; PMCID: PMC9665377; DOI: 10.1371/journal.pone.0277080]
Abstract
Previous studies have shown that illusory ownership over a mannequin's body can be induced through synchronous visuo-tactile stimulation as well as through synchronous visuo-vestibular stimulation. The current study aimed to elucidate how three-way combinations of correlated visual, tactile and vestibular signals contribute to the senses of body ownership and self-motion. Visuo-tactile temporal congruence was manipulated by touching the mannequin's body and the participant's unseen real body on the trunk with a small object either synchronously or asynchronously. Visuo-vestibular temporal congruence was manipulated by synchronous or asynchronous presentation of a visual motion cue (the background rotating around the mannequin in one direction) and galvanic stimulation of the vestibular nerve generating a rotation sensation (in the same direction). The illusory experiences were quantified using a questionnaire; threat-evoked skin-conductance responses (SCRs) provided complementary indirect physiological evidence for the illusion. Ratings on the illusion questionnaire statements showed significant main effects of synchronous visuo-vestibular and synchronous visuo-tactile stimulation, suggesting that both of these bimodal correlations contribute to the ownership illusion. Interestingly, visuo-tactile synchrony dominated, because synchronous visuo-tactile stimulation combined with asynchronous visuo-vestibular stimulation elicited a body ownership illusion of similar strength to that when both bimodal combinations were synchronous. Moreover, both visuo-tactile and visuo-vestibular synchrony were associated with enhanced self-motion perception; self-motion sensations were even triggered when visuo-tactile synchrony was combined with visuo-vestibular asynchrony, suggesting that ownership enhanced the relevance of visual information as a self-motion cue. Finally, the SCR results suggest that synchronous stimulation of either modality pair led to a stronger illusion compared to the asynchronous conditions. Collectively, the results suggest that visuo-tactile temporal correlations have a stronger influence on body ownership than visuo-vestibular correlations and that ownership boosts self-motion perception. We present a Bayesian causal inference model that can explain how visuo-vestibular and visuo-tactile information are combined in multisensory own-body perception.
Affiliation(s)
- Sara Coppi
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Marie Chancel
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- University Grenoble Alpes, CNRS, LPNC, Grenoble, France
- H. Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
12. Müller-Pinzler L, Czekalla N, Mayer AV, Schröder A, Stolz DS, Paulus FM, Krach S. Neurocomputational mechanisms of affected beliefs. Commun Biol 2022; 5:1241. [PMCID: PMC9663730; DOI: 10.1038/s42003-022-04165-3]
Abstract
The feedback people receive on their behavior shapes the process of belief formation and self-efficacy in mastering a particular task. However, the neural and computational mechanisms of how the subjective value of self-efficacy beliefs, and the corresponding affect, influence the learning process remain unclear. We investigated these mechanisms during self-efficacy belief formation using fMRI, pupillometry, and computational modeling, and by analyzing individual differences in affective experience. Biases in the formation of self-efficacy beliefs were associated with affect, pupil dilation, and neural activity within the anterior insula, amygdala, ventral tegmental area/substantia nigra, and mPFC. Specifically, neural and pupil responses mapped the valence of the prediction errors in correspondence with individuals’ experienced affective states and learning biases during self-efficacy belief formation. Together with the functional connectivity dynamics of the anterior insula within this network, our results provide evidence for neural and computational mechanisms of how we arrive at affected beliefs.
13. Zhang J, Huang M, Gu Y, Chen A, Yu Y. Visual-Based Spatial Coordinate Dominates Probabilistic Multisensory Inference in Macaque MST-d Disparity Encoding. Brain Sci 2022; 12:1387. [PMID: 36291320; PMCID: PMC9599195; DOI: 10.3390/brainsci12101387]
Abstract
Numerous studies have demonstrated that animal brains accurately infer whether multisensory stimuli are from a common source or separate sources. Previous work proposed that multisensory neurons in the dorsal medial superior temporal area (MST-d) serve as integration or separation encoders determined by the tuning-response ratio. However, it remains unclear whether MST-d neurons mainly take one sensory input as the spatial coordinate reference for carrying out multisensory integration or separation. Our analysis of Macaque MST-d neuronal recordings shows that the preferred tuning response to visual input is generally larger than that to vestibular input. This may be crucial for establishing the base coordinate reference when the subject perceives motion-direction information from two senses. By constructing a flexible Monte-Carlo probabilistic sampling (fMCS) model, we validate the hypothesis that visual and vestibular cues are more likely to be integrated into a visual-based coordinate rather than a vestibular-based one. Furthermore, the property of the tuning gradient also affects decision-making regarding whether the cues should be integrated or not. For a dominant modality, an effective decision is produced by a steep response-tuning gradient of the corresponding neurons, whereas for a subordinate modality a steep tuning gradient produces a rigid decision with a significant bias toward either integration or separation. This work proposes that the tuning-response amplitude and tuning gradient jointly modulate which modality serves as the base coordinate for the reference frame and how effectively direction changes in each modality are decoded.
Affiliation(s)
- Jiawei Zhang
- Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Human Phenome Institute, Shanghai 200433, China
- Mingyi Huang
- Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Human Phenome Institute, Shanghai 200433, China
- Yong Gu
- Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Yuguo Yu
- Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Human Phenome Institute, Shanghai 200433, China
14. Wang G, Yang Y, Wang J, Hao Z, Luo X, Liu J. Dynamic changes of brain networks during standing balance control under visual conflict. Front Neurosci 2022; 16:1003996. [PMID: 36278015; PMCID: PMC9581155; DOI: 10.3389/fnins.2022.1003996]
Abstract
Stance balance control requires very accurate tuning and combination of visual, vestibular, and proprioceptive inputs, and conflict among these sensory systems may induce postural instability and even falls. Although there are many human-mechanics and psychophysical studies of this phenomenon, the effects of sensory conflict on brain networks and its underlying neural mechanisms are still unclear. Here, we combined a rotating platform and a virtual reality (VR) headset to control the participants’ physical and visual motion states, presenting them with incongruous (sensory conflict) or congruous (normal control) physical-visual stimuli. Further, to investigate the effects of sensory conflict on stance stability and brain networks, we recorded and calculated the effective connectivity of source-level electroencephalogram (EEG) and the average velocity of the plantar center of pressure (COP) in healthy subjects (18 subjects: 10 males, 8 females). First, our results showed that sensory conflict did have a detrimental effect on stance posture control [sensor F(1, 17) = 13.34, P = 0.0019], but this effect decreased over time [window*sensor F(2, 34) = 6.72, P = 0.0035]; humans show a marked adaptation to sensory conflict. In addition, we found that human adaptation to sensory conflict was associated with changes in the cortical network. At stimulus onset, congruent and incongruent stimuli had similar effects on brain networks: in both cases, there was a significant increase in information interaction centered on the frontal cortices (p < 0.05). Then, after a time window, synchronized with the restoration of stance stability under conflict, the connectivity of large brain regions, including posterior parietal, visual, somatosensory, and motor cortices, was generally lower under sensory conflict than in controls (p < 0.05), but the influence of the superior temporal lobe on other cortices was significantly increased. Overall, we speculate that a posterior parietal-centered cortical network may play a key role in integrating congruous sensory information. Furthermore, the dissociation of this network may reflect a flexible multisensory interaction strategy that is critical for human postural balance control in complex and changing environments. In addition, the superior temporal lobe may play a key role in processing conflicting sensory information.
Affiliation(s)
- Guozheng Wang
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Yi Yang
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, China
- Jian Wang
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, China
- Zengming Hao
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Xin Luo
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, China
- Jun Liu
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
15. Hong F, Badde S, Landy MS. Repeated exposure to either consistently spatiotemporally congruent or consistently incongruent audiovisual stimuli modulates the audiovisual common-cause prior. Sci Rep 2022; 12:15532. [PMID: 36109544; PMCID: PMC9478143; DOI: 10.1038/s41598-022-19041-7]
Abstract
To estimate an environmental property such as object location from multiple sensory signals, the brain must infer their causal relationship. Only information originating from the same source should be integrated. This inference relies on the characteristics of the measurements, the information the sensory modalities provide on a given trial, as well as on a cross-modal common-cause prior: accumulated knowledge about the probability that cross-modal measurements originate from the same source. We examined the plasticity of this cross-modal common-cause prior. In a learning phase, participants were exposed to a series of audiovisual stimuli that were either consistently spatiotemporally congruent or consistently incongruent; participants’ audiovisual spatial integration was measured before and after this exposure. We fitted several Bayesian causal-inference models to the data; the models differed in the plasticity of the common-source prior. Model comparison revealed that, for the majority of the participants, the common-cause prior changed during the learning phase. Our findings reveal that short periods of exposure to audiovisual stimuli with a consistent causal relationship can modify the common-cause prior. In accordance with previous studies, both exposure conditions could either strengthen or weaken the common-cause prior at the participant level. Simulations imply that the direction of the prior update might be mediated by the degree of sensory noise, that is, the variability of the measurements of the same signal across trials during the learning phase.
16. Zhang J, Gu Y, Chen A, Yu Y. Unveiling Dynamic System Strategies for Multisensory Processing: From Neuronal Fixed-Criterion Integration to Population Bayesian Inference. Research (Wash D C) 2022; 2022:9787040. [PMID: 36072271; PMCID: PMC9422331; DOI: 10.34133/2022/9787040]
Abstract
Multisensory processing is of vital importance for survival in the external world. Brain circuits can both integrate and separate visual and vestibular senses to infer self-motion and the motion of other objects. However, it is largely debated how multisensory brain regions process such multisensory information and whether they follow the Bayesian strategy in this process. Here, we combined macaque physiological recordings in the dorsal medial superior temporal area (MST-d) with modeling of synaptically coupled multilayer continuous attractor neural networks (CANNs) to study the underlying neuronal circuit mechanisms. In contrast to previous theoretical studies that focused on unisensory direction preference, our analysis showed that synaptic coupling induced cooperation and competition in the multisensory circuit and caused single MST-d neurons to switch between sensory integration and separation modes based on a fixed-criterion causal strategy, which is determined by the synaptic coupling strength. Furthermore, the prior of sensory reliability was represented by pooling diversified criteria at the MST-d population level, and the Bayesian strategy was achieved in downstream neurons whose causal inference flexibly changed with the prior. The CANN model also showed that synaptic-input balance is the dynamic origin of neuronal direction-preference formation and further explained the misalignment between direction preference and inference observed in previous studies. This work provides a computational framework for a new brain-inspired algorithm underlying multisensory computation.
Affiliation(s)
- Jiawei Zhang
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, Human Phenome Institute, Shanghai 200433, China
- Yong Gu
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
- Yuguo Yu
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, Human Phenome Institute, Shanghai 200433, China
17. Barnett SA, Griffiths TL, Hawkins RD. A Pragmatic Account of the Weak Evidence Effect. Open Mind 2022; 6:169-182. [DOI: 10.1162/opmi_a_00061]
Abstract
Language is not used only to convey neutral information; we often seek to persuade by arguing in favor of a particular view. Persuasion raises a number of challenges for classical accounts of belief updating, as information cannot be taken at face value. How should listeners account for a speaker’s “hidden agenda” when incorporating new information? Here, we extend recent probabilistic models of recursive social reasoning to allow for persuasive goals and show that our model provides a pragmatic account of why weakly favorable arguments may backfire, a phenomenon known as the weak evidence effect. Critically, this model predicts a systematic relationship between belief updates and expectations about the information source: weak evidence should backfire only when speakers are expected to act under persuasive goals and to prefer the strongest evidence. We introduce a simple experimental paradigm called the Stick Contest to measure the extent to which the weak evidence effect depends on speaker expectations, and show that a pragmatic listener model accounts for the empirical data better than alternative models. Our findings suggest further avenues for rational models of social reasoning to illuminate classical decision-making phenomena.
Affiliation(s)
- Samuel A. Barnett
- Department of Computer Science, Princeton University, Princeton, New Jersey
- Thomas L. Griffiths
- Department of Computer Science, Princeton University, Princeton, New Jersey
- Department of Psychology, Princeton University, Princeton, New Jersey
- Robert D. Hawkins
- Department of Psychology, Princeton University, Princeton, New Jersey
18. Shams L, Beierholm U. Bayesian causal inference: A unifying neuroscience theory. Neurosci Biobehav Rev 2022; 137:104619. [PMID: 35331819; DOI: 10.1016/j.neubiorev.2022.104619]
Abstract
Understanding the brain and the principles governing neural processing requires theories that are parsimonious, can account for a diverse set of phenomena, and can make testable predictions. Here, we review the theory of Bayesian causal inference, which has been tested, refined, and extended in a variety of tasks in humans and other primates by several research groups. Bayesian causal inference is normative and has explained human behavior in a vast number of tasks, including unisensory and multisensory perceptual tasks, sensorimotor tasks, and motor tasks, and has accounted for counter-intuitive findings. The theory has made novel predictions that have been tested and confirmed empirically, and recent studies have started to map its algorithms and neural implementation in the human brain. The parsimony of the theory, the diversity of the phenomena it has explained, and its ability to illuminate brain function at all three of Marr's levels of analysis make Bayesian causal inference a strong neuroscience theory. This also highlights the importance of collaborative and multi-disciplinary research for the development of new theories in neuroscience.
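For readers new to the framework, the core computation reviewed here is typically written as a posterior over causal structures combined with model averaging (standard notation from the causal-inference literature, not equations reproduced from the review):

```latex
% Posterior probability that two measurements x_1, x_2 share a common cause
% (C = 1), given a prior p_common, and the resulting model-averaged estimate
\begin{align}
  p(C{=}1 \mid x_1, x_2) &=
    \frac{p(x_1, x_2 \mid C{=}1)\, p_{\mathrm{common}}}
         {p(x_1, x_2 \mid C{=}1)\, p_{\mathrm{common}}
          + p(x_1, x_2 \mid C{=}2)\,\bigl(1 - p_{\mathrm{common}}\bigr)}, \\
  \hat{s} &= p(C{=}1 \mid x_1, x_2)\,\hat{s}_{C=1}
           + \bigl(1 - p(C{=}1 \mid x_1, x_2)\bigr)\,\hat{s}_{C=2}.
\end{align}
```

Here $\hat{s}_{C=1}$ is the fused (reliability-weighted) estimate and $\hat{s}_{C=2}$ the segregated one; model averaging is only one possible read-out rule, with model selection and probability matching as common alternatives.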
Affiliation(s)
- Ladan Shams
- Departments of Psychology, BioEngineering, and Neuroscience Interdepartmental Program, University of California, Los Angeles, USA
19. Noel JP, Shivkumar S, Dokka K, Haefner RM, Angelaki DE. Aberrant causal inference and presence of a compensatory mechanism in autism spectrum disorder. eLife 2022; 11:e71866. [PMID: 35579424; PMCID: PMC9170250; DOI: 10.7554/eLife.71866]
Abstract
Autism spectrum disorder (ASD) is characterized by a panoply of social, communicative, and sensory anomalies. As such, a central goal of computational psychiatry is to ascribe the heterogenous phenotypes observed in ASD to a limited set of canonical computations that may have gone awry in the disorder. Here, we posit causal inference - the process of inferring a causal structure linking sensory signals to hidden world causes - as one such computation. We show that audio-visual integration is intact in ASD and in line with optimal models of cue combination, yet multisensory behavior is anomalous in ASD because this group operates under an internal model favoring integration (vs. segregation). Paradoxically, during explicit reports of common cause across spatial or temporal disparities, individuals with ASD were less and not more likely to report common cause, particularly at small cue disparities. Formal model fitting revealed differences in both the prior probability for common cause (p-common) and choice biases, which are dissociable in implicit but not explicit causal inference tasks. Together, this pattern of results suggests (i) different internal models in attributing world causes to sensory signals in ASD relative to neurotypical individuals given identical sensory cues, and (ii) the presence of an explicit compensatory mechanism in ASD, with these individuals putatively having learned to compensate for their bias to integrate in explicit reports.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, United States
- Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Ralf M Haefner
- Brain and Cognitive Sciences, University of Rochester, Rochester, United States
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, United States
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
20. Pesnot Lerousseau J, Parise CV, Ernst MO, van Wassenhove V. Multisensory correlation computations in the human brain identified by a time-resolved encoding model. Nat Commun 2022; 13:2489. [PMID: 35513362; PMCID: PMC9072402; DOI: 10.1038/s41467-022-29687-6]
Abstract
Neural mechanisms that arbitrate between integrating and segregating multisensory information are essential for complex scene analysis and for the resolution of the multisensory correspondence problem. However, these mechanisms and their dynamics remain largely unknown, partly because classical models of multisensory integration are static. Here, we used the Multisensory Correlation Detector, a model that provides good explanatory power for human behavior while incorporating dynamic computations. Participants judged whether sequences of auditory and visual signals originated from the same source (causal inference) or whether one modality was leading the other (temporal order), while being recorded with magnetoencephalography. First, we confirm that the Multisensory Correlation Detector explains causal inference and temporal order behavioral judgments well. Second, we found strong fits of brain activity to the two outputs of the Multisensory Correlation Detector in temporo-parietal cortices. Finally, we report an asymmetry in the goodness of the fits, which were more reliable during the causal inference task than during the temporal order judgment task. Overall, our results suggest the existence of multisensory correlation detectors in the human brain, which explain why and how causal inference is strongly driven by the temporal correlation of multisensory signals.
Affiliation(s)
- Jacques Pesnot Lerousseau
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Applied Cognitive Psychology, Ulm University, Ulm, Germany
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, CNRS, Université Paris-Saclay, NeuroSpin, 91191, Gif/Yvette, France
- Marc O Ernst
- Applied Cognitive Psychology, Ulm University, Ulm, Germany
- Virginie van Wassenhove
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, CNRS, Université Paris-Saclay, NeuroSpin, 91191, Gif/Yvette, France
21. Verhaar E, Medendorp WP, Hunnius S, Stapel JC. Bayesian causal inference in visuotactile integration in children and adults. Dev Sci 2022; 25:e13184. [PMID: 34698430; PMCID: PMC9285718; DOI: 10.1111/desc.13184]
Abstract
If cues from different sensory modalities share the same cause, their information can be integrated to improve perceptual precision. While it is well established that adults exploit sensory redundancy by integrating cues in a Bayes optimal fashion, whether children under 8 years of age combine sensory information in a similar way is still under debate. If children differ from adults in the way they infer causality between cues, this may explain mixed findings on the development of cue integration in earlier studies. Here we investigated the role of causal inference in the development of cue integration, by means of a visuotactile localization task. Young children (6-8 years), older children (9.5-12.5 years) and adults had to localize a tactile stimulus, which was presented to the forearm simultaneously with a visual stimulus at either the same or a different location. In all age groups, responses were systematically biased toward the position of the visual stimulus, but relatively more so when the distance between the visual and tactile stimulus was small rather than large. This pattern of results was better captured by a Bayesian causal inference model than by alternative models of forced fusion or full segregation of the two stimuli. Our results suggest that already from a young age the brain implicitly infers the probability that a tactile and a visual cue share the same cause and uses this probability as a weighting factor in visuotactile localization.
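To make the model comparison concrete, the sketch below implements the standard Bayesian causal-inference observer for a single visuotactile localization trial (after Körding et al., 2007). The parameter values and the zero-mean Gaussian spatial prior are illustrative assumptions, not the authors' fitted values; forced fusion and full segregation fall out as the special cases p_common = 1 and p_common = 0.

```python
import numpy as np

def bci_tactile_estimate(x_v, x_t, sig_v=1.0, sig_t=2.0, sig_p=10.0, p_common=0.5):
    """Model-averaged causal-inference estimate of tactile location.

    x_v, x_t : noisy visual and tactile measurements (e.g., degrees)
    sig_*    : measurement noise s.d.; sig_p is the s.d. of a zero-mean
               Gaussian prior over stimulus location (illustrative values).
    """
    vv, vt, vp = sig_v**2, sig_t**2, sig_p**2

    # Likelihood of the measurement pair under one cause (C = 1) vs. two (C = 2)
    var1 = vv * vt + vv * vp + vt * vp
    like_c1 = np.exp(-0.5 * ((x_v - x_t) ** 2 * vp + x_v**2 * vt + x_t**2 * vv)
                     / var1) / (2 * np.pi * np.sqrt(var1))
    like_c2 = (np.exp(-0.5 * (x_v**2 / (vv + vp) + x_t**2 / (vt + vp)))
               / (2 * np.pi * np.sqrt((vv + vp) * (vt + vp))))

    # Posterior probability that both measurements share a common cause
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Optimal location estimates under each causal structure
    s_fused = (x_v / vv + x_t / vt) / (1 / vv + 1 / vt + 1 / vp)  # C = 1
    s_tact = (x_t / vt) / (1 / vt + 1 / vp)                       # C = 2

    # Model averaging: weight the two estimates by the posterior over causes
    return post_c1 * s_fused + (1 - post_c1) * s_tact

# Small visual-tactile discrepancy -> strong visual bias; large -> little bias
print(bci_tactile_estimate(x_v=2.0, x_t=0.0))   # pulled toward the visual cue
print(bci_tactile_estimate(x_v=20.0, x_t=0.0))  # stays close to the tactile cue
```

This reproduces the qualitative pattern reported above: visual bias is strong at small cue separations and falls off at large ones, because the posterior probability of a common cause collapses with increasing discrepancy.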
Affiliation(s)
- Erik Verhaar
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Sabine Hunnius
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
22. Noel JP, Angelaki DE. Cognitive, Systems, and Computational Neurosciences of the Self in Motion. Annu Rev Psychol 2022.
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated, a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA;
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA;
- Tandon School of Engineering, New York University, New York, NY 11201, USA
23
Scarfe P. Experimentally disambiguating models of sensory cue integration. J Vis 2022; 22:5. [PMID: 35019955 PMCID: PMC8762719 DOI: 10.1167/jov.22.1.5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Sensory cue integration is one of the primary areas in which a normative mathematical framework has been used to define the “optimal” way in which to make decisions based upon ambiguous sensory information and compare these predictions to behavior. The conclusion from such studies is that sensory cues are integrated in a statistically optimal fashion. However, numerous alternative computational frameworks exist by which sensory cues could be integrated, many of which could be described as “optimal” based on different criteria. Existing studies rarely assess the evidence relative to different candidate models, resulting in an inability to conclude that sensory cues are integrated according to the experimenter's preferred framework. The aims of the present paper are to summarize and highlight the implicit assumptions rarely acknowledged in testing models of sensory cue integration, as well as to introduce an unbiased and principled method by which to determine, for a given experimental design, the probability with which a population of observers behaving in accordance with one model of sensory integration can be distinguished from the predictions of a set of alternative models.
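The general recipe proposed, simulating a population of observers under one candidate model and asking how often their data can be distinguished from an alternative model's predictions, might be sketched as follows. This is a deliberately simplified stand-in (Monte Carlo simulation with an RMSE-based model comparison, my choice), not the paper's own principled method, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_discrimination(sigma_a, sigma_b, deltas, n_trials, integrate):
    """Proportion of 'test > standard' responses per delta for an integrating
    observer vs. a single-cue (cue-a only) observer."""
    sigma = (1 / (1 / sigma_a**2 + 1 / sigma_b**2)) ** 0.5 if integrate else sigma_a
    noise = rng.normal(0, sigma, (n_trials, len(deltas)))
    return (deltas + noise > 0).mean(axis=0)

def rmse(p, q):
    return np.sqrt(((p - q) ** 2).mean())

deltas = np.linspace(-3, 3, 9)
n_correct = 0
for _ in range(200):                       # population of simulated observers
    data = simulate_discrimination(1.0, 1.0, deltas, 100, integrate=True)
    pred_int = simulate_discrimination(1.0, 1.0, deltas, 10000, integrate=True)
    pred_sng = simulate_discrimination(1.0, 1.0, deltas, 10000, integrate=False)
    n_correct += rmse(data, pred_int) < rmse(data, pred_sng)
print(f"P(data attributed to the true model) ≈ {n_correct / 200:.2f}")
```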
Affiliation(s)
- Peter Scarfe
- Vision and Haptics Laboratory, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
24
Noppeney U. The influence of early audiovisual experience on multisensory integration and causal inference (commentary on Smyre et al., 2021). Eur J Neurosci 2021; 55:637-639. [PMID: 34939247 PMCID: PMC9302648 DOI: 10.1111/ejn.15576] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Revised: 12/20/2021] [Accepted: 12/21/2021] [Indexed: 12/01/2022]
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
25
Neurocomputational mechanisms underlying cross-modal associations and their influence on perceptual decisions. Neuroimage 2021; 247:118841. [PMID: 34952232 PMCID: PMC9127393 DOI: 10.1016/j.neuroimage.2021.118841] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 12/07/2021] [Accepted: 12/19/2021] [Indexed: 12/02/2022] Open
Abstract
When exposed to complementary features of information across sensory modalities, our brains formulate cross-modal associations between features of stimuli presented separately to multiple modalities. For example, auditory pitch-visual size associations map high-pitch tones with small-size visual objects, and low-pitch tones with large-size visual objects. Preferential, or congruent, cross-modal associations have been shown to affect behavioural performance, i.e., choice accuracy and reaction time (RT), across multisensory decision-making paradigms. However, the neural mechanisms underpinning such influences in perceptual decision formation remain unclear. Here, we sought to identify when perceptual improvements from associative congruency emerge in the brain during decision formation. In particular, we asked whether such improvements represent 'early' sensory processing benefits, or 'late' post-sensory changes in decision dynamics. Using a modified version of the Implicit Association Test (IAT), coupled with electroencephalography (EEG), we measured the neural activity underlying the effect of auditory stimulus-driven pitch-size associations on perceptual decision formation. Behavioural results showed that participants responded significantly faster during trials when auditory pitch was congruent, rather than incongruent, with its associative visual size counterpart. We used multivariate Linear Discriminant Analysis (LDA) to characterise the spatiotemporal dynamics of EEG activity underpinning IAT performance. We found an 'Early' component (∼100-110 ms post-stimulus onset) coinciding with the time of maximal discrimination of the auditory stimuli, and a 'Late' component (∼330-340 ms post-stimulus onset) underlying IAT performance. To characterise the functional role of these components in decision formation, we incorporated a neurally-informed Hierarchical Drift Diffusion Model (HDDM), revealing that the Late component decreases response caution, requiring less sensory evidence to be accumulated, whereas the Early component increases the duration of sensory-encoding processes for incongruent trials. Overall, our results provide a mechanistic insight into the contribution of 'early' sensory processing, as well as 'late' post-sensory neural representations of associative congruency, to perceptual decision formation.
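The direction of the two reported effects can be illustrated with a bare-bones drift-diffusion simulation. This is not the authors' neurally-informed HDDM; the mapping from the 'Late' amplitude to boundary height, the mapping from incongruency to non-decision time, and every parameter value are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def ddm_rt(drift, boundary, ndt, dt=0.001, noise=1.0):
    """Return one reaction time (s) from a simple diffusion-to-bound process."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + rng.normal(0, noise * np.sqrt(dt))
        t += dt
    return ndt + t

def trial_rt(congruent, late_amp):
    boundary = 1.5 - 0.4 * late_amp       # 'Late' component -> less caution
    ndt = 0.30 if congruent else 0.34     # 'Early' effect -> slower encoding
    return ddm_rt(drift=1.2, boundary=boundary, ndt=ndt)

rts_c = [trial_rt(True, rng.uniform(0, 1)) for _ in range(500)]
rts_i = [trial_rt(False, rng.uniform(0, 1)) for _ in range(500)]
print(f"mean RT congruent {np.mean(rts_c):.3f} s vs incongruent {np.mean(rts_i):.3f} s")
```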
26
Janeh O, Steinicke F. A Review of the Potential of Virtual Walking Techniques for Gait Rehabilitation. Front Hum Neurosci 2021; 15:717291. [PMID: 34803632 PMCID: PMC8595292 DOI: 10.3389/fnhum.2021.717291] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2021] [Accepted: 10/06/2021] [Indexed: 12/04/2022] Open
Abstract
Virtual reality (VR) technology has emerged as a promising tool for studying and rehabilitating gait disturbances in different cohorts of patients (such as those with Parkinson's disease, post-stroke deficits, or other neurological disorders), as it allows patients to be engaged in an immersive, artificial environment that can be designed to address the particular needs of each individual. This review surveys the state of the art in applications of virtual walking techniques and related technologies for gait therapy and rehabilitation of people with movement disorders, makes recommendations for future research, and discusses the use of VR in the clinic. The potential of these techniques for gait rehabilitation lies in providing a more personalized approach by simulating the experience of natural walking while patients with neurological disorders remain localized in the real world. The goal of our work is to investigate how the human nervous system controls movement in health and neurodegenerative disease.
Affiliation(s)
- Omar Janeh
- Department of Computer Engineering, University of Technology, Baghdad, Iraq
- Frank Steinicke
- Human-Computer Interaction, Department of Informatics, Universität Hamburg, Hamburg, Germany
27
Bruschetta M, de Winkel KN, Mion E, Pretto P, Beghi A, Bülthoff HH. Assessing the contribution of active somatosensory stimulation to self-acceleration perception in dynamic driving simulators. PLoS One 2021; 16:e0259015. [PMID: 34793458 PMCID: PMC8601569 DOI: 10.1371/journal.pone.0259015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Accepted: 10/11/2021] [Indexed: 11/18/2022] Open
Abstract
In dynamic driving simulators, the experience of operating a vehicle is reproduced by combining visual stimuli generated by graphical rendering with inertial stimuli generated by platform motion. Due to inherent limitations of the platform workspace, inertial stimulation is subject to shortcomings in the form of missing cues, false cues, and/or scaling errors, which negatively affect simulation fidelity. In the present study, we aim to quantify the contribution of active somatosensory stimulation to the perceived intensity of self-motion, relative to that of other sensory systems. Participants judged the intensity of longitudinal and lateral driving maneuvers in a dynamic driving simulator in passive driving conditions, with and without additional active somatosensory stimulation, as provided by an Active Seat (AS) and Active Belts (AB) integrated system (ASB). The results show that ASB enhances the perceived intensity of sustained decelerations, and increases the precision of acceleration perception overall. Our findings are consistent with models of perception, and indicate that active somatosensory stimulation can indeed be used to improve simulation fidelity.
Affiliation(s)
- Mattia Bruschetta
- Department of Information Engineering, University of Padova, Padova, Italy
- Ksander N. de Winkel
- TU Delft, Cognitive Robotics Delft, Delft, Netherlands
- Department of Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Enrico Mion
- Department of Information Engineering, University of Padova, Padova, Italy
- Department of Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Alessandro Beghi
- Department of Information Engineering, University of Padova, Padova, Italy
- Heinrich H. Bülthoff
- Department of Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
28
Hong F, Badde S, Landy MS. Causal inference regulates audiovisual spatial recalibration via its influence on audiovisual perception. PLoS Comput Biol 2021; 17:e1008877. [PMID: 34780469 PMCID: PMC8629398 DOI: 10.1371/journal.pcbi.1008877] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 11/29/2021] [Accepted: 10/26/2021] [Indexed: 11/23/2022] Open
Abstract
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants' behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.

Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to which extent both modalities should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this aim, we conducted a classical audiovisual recalibration experiment in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants' unimodal localization responses before and after the audiovisual recalibration. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study; this model is also able to replicate contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a sensory measurement and the perceptual estimate for the same sensory modality. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.
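The winning model's core idea, that the recalibration step follows the discrepancy between a cue and its causal-inference percept, can be sketched as follows; the reliabilities, learning rate, and prior parameters are assumptions of mine, not fitted values.

```python
import numpy as np

def norm_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def percept_auditory(x_a, x_v, sig_a, sig_v, p_common, sig_prior=15.0):
    """Model-averaged causal-inference estimate of the auditory location."""
    w_v = (1 / sig_v**2) / (1 / sig_v**2 + 1 / sig_a**2)
    fused = w_v * x_v + (1 - w_v) * x_a
    like_c1 = norm_pdf(x_v - x_a, np.sqrt(sig_a**2 + sig_v**2))
    like_c2 = norm_pdf(x_v - x_a, np.sqrt(sig_a**2 + sig_v**2 + sig_prior**2))
    p_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    return p_c1 * fused + (1 - p_c1) * x_a

shift, lr = 0.0, 0.1
for _ in range(60):                 # adaptation trials: V fixed 10 deg right of A
    x_a, x_v = 0.0 + shift, 10.0    # auditory measurement carries the shift
    est = percept_auditory(x_a, x_v, sig_a=6.0, sig_v=2.0, p_common=0.7)
    shift += lr * (est - x_a)       # recalibrate toward the percept
print(f"cumulative auditory shift after adaptation: {shift:.1f} deg")
```

Because the percept, and hence the recalibration step, depends on both cue reliabilities and the inferred probability of a common source, this update rule can produce the non-monotonic reliability effects described above.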
Affiliation(s)
- Fangfang Hong
- Department of Psychology, New York University, New York City, New York, United States of America
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
- Michael S. Landy
- Department of Psychology, New York University, New York City, New York, United States of America
- Center for Neural Science, New York University, New York City, New York, United States of America
29
Precision control for a flexible body representation. Neurosci Biobehav Rev 2021; 134:104401. [PMID: 34736884 DOI: 10.1016/j.neubiorev.2021.10.023] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Revised: 10/20/2021] [Accepted: 10/21/2021] [Indexed: 11/24/2022]
Abstract
Adaptive body representation requires the continuous integration of multisensory inputs within a flexible 'body model' in the brain. The present review evaluates the idea that this flexibility is augmented by the contextual modulation of sensory processing 'top-down', which can be described as precision control within predictive coding formulations of Bayesian inference. Specifically, I focus on the proposal that an attenuation of proprioception may facilitate the integration of conflicting visual and proprioceptive bodily cues. Firstly, I review empirical work suggesting that the processing of visual vs proprioceptive body position information can be contextualised 'top-down'; for instance, by adopting specific attentional task sets. Building on this, I review research showing a similar contextualisation of visual vs proprioceptive information processing in the rubber hand illusion and in visuomotor adaptation. Together, the reviewed literature suggests that proprioception, despite its indisputable importance for body perception and action control, can be attenuated top-down (through precision control) to facilitate the contextual adaptation of the brain's body model to novel visual feedback.
30
Ferrari A, Noppeney U. Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biol 2021; 19:e3001465. [PMID: 34793436 PMCID: PMC8639080 DOI: 10.1371/journal.pbio.3001465] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Revised: 12/02/2021] [Accepted: 11/01/2021] [Indexed: 11/22/2022] Open
Abstract
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals' causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
Affiliation(s)
- Ambra Ferrari
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
31
Noppeney U. Solving the causal inference problem. Trends Cogn Sci 2021; 25:1013-1014. [PMID: 34561193 DOI: 10.1016/j.tics.2021.09.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Accepted: 09/07/2021] [Indexed: 11/26/2022]
Abstract
Perception requires the brain to infer whether signals arise from common causes and should hence be integrated or else be treated independently. Rideaux et al. show that a feedforward network can perform causal inference in visuovestibular motion estimation by reading out activity from neurons tuned to congruent and opposite directions.
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands.
32
Abstract
We perceive our environment through multiple independent sources of sensory input. The brain is tasked with deciding whether multiple signals are produced by the same or different events (i.e., solve the problem of causal inference). Here, we train a neural network to solve causal inference by either combining or separating visual and vestibular inputs in order to estimate self- and scene motion. We find that the network recapitulates key neurophysiological (i.e., congruent and opposite neurons) and behavioral (e.g., reliability-based cue weighting) properties of biological systems. We show how congruent and opposite neurons support motion estimation and how the balance in activity between these subpopulations determines whether to combine or separate multisensory signals.

Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction ("congruent" neurons), while others prefer opposing directions ("opposite" neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
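A toy readout captures the proposed role of the two subpopulations: congruent units dominate when the cues agree, opposite units dominate when they conflict, and the balance between them gates integration. The sketch below is my own drastic simplification of the trained network, with motion directions coded as +1 or -1.

```python
def population_response(v_vis, v_vest, gain=1.0):
    """Congruent units respond to same-direction motion, opposite units to
    opposing motion (directions coded as +1 / -1)."""
    congruent = gain * max(0.0, v_vis * v_vest)   # high when directions match
    opposite = gain * max(0.0, -v_vis * v_vest)   # high when they conflict
    return congruent, opposite

for v_vis, v_vest in [(+1, +1), (+1, -1)]:
    c, o = population_response(v_vis, v_vest)
    decision = "integrate" if c > o else "segregate"
    print(f"visual {v_vis:+d}, vestibular {v_vest:+d} -> {decision}")
```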
33
Rodriguez R, Crane BT. Effect of timing delay between visual and vestibular stimuli on heading perception. J Neurophysiol 2021; 126:304-312. [PMID: 34191637 DOI: 10.1152/jn.00351.2020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Heading direction is perceived based on visual and inertial cues. The current study examined the effect of their relative timing on the ability of offset visual headings to influence inertial perception. Seven healthy human subjects experienced 2 s of translation along a heading of 0°, ±35°, ±70°, ±105°, or ±140°. These inertial headings were paired with 2-s duration visual headings that were presented at relative offsets of 0°, ±30°, ±60°, ±90°, or ±120°. The visual stimuli were also presented at 17 temporal delays ranging from -500 ms (visual lead) to 2,000 ms (visual delay) relative to the inertial stimulus. After each stimulus, subjects reported the direction of the inertial stimulus using a dial. The bias of the inertial heading toward the visual heading was robust at ±250 ms when examined across subjects during this period: 8.0° ± 0.5° with a 30° offset, 12.2° ± 0.5° with a 60° offset, 11.7° ± 0.6° with a 90° offset, and 9.8° ± 0.7° with a 120° offset (mean bias toward visual ± SE). The mean bias was much diminished with temporal misalignments of ±500 ms, and there was no longer any visual influence on the inertial heading when the visual stimulus was delayed by 1,000 ms or more. Although the amount of bias varied between subjects, the effect of delay was similar.

NEW & NOTEWORTHY The effect of timing on visual-inertial integration in heading perception has not been previously examined. This study finds that visual direction influences inertial heading perception when timing differences are within 250 ms. This suggests visual-inertial stimuli can be integrated over a wider range than reported for visual-auditory integration; this wider window may be due to the unique nature of inertial sensation, which senses only acceleration, whereas the visual system senses position but encodes velocity.
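The reported temporal profile can be summarized by a simple descriptive function; the sketch below is not a fitted model, and the Gaussian window width and peak bias are rough values of mine chosen to mimic the numbers in the abstract.

```python
import numpy as np

def visual_bias(offset_deg, delay_ms, max_bias=12.0, window_ms=400.0):
    """Hypothetical bias (deg) of the inertial heading toward an offset visual
    heading as a function of visual delay; all parameters are illustrative."""
    gain = np.exp(-0.5 * (delay_ms / window_ms) ** 2)   # temporal binding window
    return max_bias * np.sign(offset_deg) * gain

for delay in (-500, -250, 0, 250, 500, 1000, 2000):
    print(f"delay {delay:+5d} ms -> bias {visual_bias(60, delay):+5.1f} deg")
```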
Affiliation(s)
- Raul Rodriguez
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Benjamin T Crane
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Department of Otolaryngology, University of Rochester, Rochester, New York
- Department of Neuroscience, University of Rochester, Rochester, New York
34
Abstract
Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem-deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands;
35
Keshner EA, Lamontagne A. The Untapped Potential of Virtual Reality in Rehabilitation of Balance and Gait in Neurological Disorders. FRONTIERS IN VIRTUAL REALITY 2021; 2:641650. [PMID: 33860281 PMCID: PMC8046008 DOI: 10.3389/frvir.2021.641650] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Dynamic systems theory transformed our understanding of motor control by recognizing the continual interaction between the organism and the environment. Movement could no longer be visualized simply as a response to a pattern of stimuli or as a demonstration of prior intent; movement is context dependent and is continuously reshaped by the ongoing dynamics of the world around us. Virtual reality is one methodological variable that allows us to control and manipulate that environmental context. A large body of literature exists to support the impact of visual flow, visual conditions, and visual perception on the planning and execution of movement. In rehabilitative practice, however, this technology has been employed mostly as a tool for motivation and enjoyment of physical exercise. The opportunity to modulate motor behavior through the parameters of the virtual world is often ignored in practice. In this article we present the results of experiments from our laboratories and from others demonstrating that presenting particular characteristics of the virtual world through different sensory modalities will modify balance and locomotor behavior. We will discuss how movement in the virtual world opens a window into the motor planning processes and informs us about the relative weighting of visual and somatosensory signals. Finally, we discuss how these findings should influence future treatment design.
Affiliation(s)
- Emily A. Keshner
- Department of Health and Rehabilitation Sciences, Temple University, Philadelphia, PA, United States
- Anouk Lamontagne
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada
- Virtual Reality and Mobility Laboratory, CISSS Laval—Jewish Rehabilitation Hospital Site of the Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Laval, QC, Canada
36
Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021; 138:1-23. [PMID: 33676086 DOI: 10.1016/j.cortex.2021.02.001] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Revised: 01/23/2021] [Accepted: 02/02/2021] [Indexed: 11/29/2022]
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
Affiliation(s)
- Samuel A Jones
- The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK.
- Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands.
37
De Winkel KN, Edel E, Happee R, Bülthoff HH. Multisensory Interactions in Head and Body Centered Perception of Verticality. Front Neurosci 2021; 14:599226. [PMID: 33510611 PMCID: PMC7835726 DOI: 10.3389/fnins.2020.599226] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Accepted: 12/08/2020] [Indexed: 11/25/2022] Open
Abstract
Percepts of verticality are thought to be constructed as a weighted average of multisensory inputs, but the observed weights differ considerably between studies. In the present study, we evaluate whether this can be explained by differences in how visual, somatosensory and proprioceptive cues contribute to representations of the Head In Space (HIS) and Body In Space (BIS). Participants (10) were standing on a force plate on top of a motion platform while wearing a visualization device that allowed us to artificially tilt their visual surroundings. They were presented with (in)congruent combinations of visual, platform, and head tilt, and performed Rod & Frame Test (RFT) and Subjective Postural Vertical (SPV) tasks. We also recorded postural responses to evaluate the relation between perception and balance. The perception data show that body tilt, head tilt, and visual tilt affect the HIS and BIS in both experimental tasks. For the RFT task, visual tilt induced considerable biases (≈ 10° for 36° visual tilt) in the direction of the vertical expressed in the visual scene; for the SPV task, participants also adjusted platform tilt to correct for illusory body tilt induced by the visual stimuli, but effects were much smaller (≈ 0.25°). Likewise, postural data from the SPV task indicate participants slightly shifted their weight to counteract visual tilt (0.3° for 36° visual tilt). The data reveal a striking dissociation of visual effects between the two tasks. We find that the data can be explained well using a model where percepts of the HIS and BIS are constructed from direct signals from head and body sensors, respectively, and indirect signals based on body and head signals but corrected for perceived neck tilt. These findings show that perception of the HIS and BIS derive from the same sensory signals, but with profoundly different weighting factors. We conclude that observations of different weightings between studies likely result from querying of distinct latent constructs referenced to the body or head in space.
Affiliation(s)
- Ksander N. De Winkel
- Intelligent Vehicles Research Group, Faculty 3mE, Cognitive Robotics Department, Delft University of Technology, Delft, Netherlands
- Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Ellen Edel
- Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Riender Happee
- Intelligent Vehicles Research Group, Faculty 3mE, Cognitive Robotics Department, Delft University of Technology, Delft, Netherlands
- Heinrich H. Bülthoff
- Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
38
Beierholm U, Rohe T, Ferrari A, Stegle O, Noppeney U. Using the past to estimate sensory uncertainty. eLife 2020; 9:e54172. [PMID: 33319749 PMCID: PMC7806269 DOI: 10.7554/elife.54172] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Accepted: 12/13/2020] [Indexed: 01/14/2023] Open
Abstract
To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments, in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
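The approximately optimal learner described here reduces to an exponentially discounted running estimate of sensory variance. The sketch below illustrates the lag such an estimate shows after an abrupt jump in visual noise; the decay constant and noise schedule are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(3)

def discounted_variance(noise_samples, decay=0.8):
    """Exponentially discounted running estimate of sensory variance."""
    var_hat = noise_samples[0] ** 2
    history = []
    for s in noise_samples[1:]:
        var_hat = decay * var_hat + (1 - decay) * s ** 2   # leaky update
        history.append(var_hat)
    return np.array(history)

# Visual noise jumps abruptly mid-block; the estimate follows with a lag.
true_sd = np.r_[np.full(50, 2.0), np.full(50, 8.0)]
samples = rng.normal(0, true_sd)
est = discounted_variance(samples)
print(f"SD estimate just after the jump: {np.sqrt(est[50]):.1f}; "
      f"by the end of the block: {np.sqrt(est[-1]):.1f} (true 8.0)")
```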
Affiliation(s)
- Ulrik Beierholm
- Psychology Department, Durham University, Durham, United Kingdom
- Tim Rohe
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Department of Psychology, Friedrich-Alexander University Erlangen-Nuernberg, Erlangen, Germany
- Ambra Ferrari
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, United Kingdom
- Oliver Stegle
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- European Molecular Biology Laboratory, Genome Biology Unit, Heidelberg, Germany
- Division of Computational Genomics and Systems Genetics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Uta Noppeney
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
39
Abstract
Visual search, the task of detecting or locating target items among distractor items in a visual scene, is an important function for animals and humans. Different theoretical accounts make differing predictions for the effects of distractor statistics. Here we use a task in which we parametrically vary distractor items, allowing for a simultaneously fine-grained and comprehensive study of distractor statistics. We found effects of target-distractor similarity, distractor variability, and an interaction between the two, although the effect of the interaction on performance differed from the one expected. To explain these findings, we constructed computational process models that make trial-by-trial predictions for behavior based on the stimulus presented. These models, including a Bayesian observer model, provided excellent accounts of both the qualitative and quantitative effects of distractor statistics, as well as of the effect of changing the statistics of the environment (in the form of distractors being drawn from a different distribution). We conclude with a broader discussion of the role of computational process models in the understanding of visual search.
Affiliation(s)
- Joshua Calder-Travis
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Department of Psychology, New York University, New York, NY, USA
- Wei Ji Ma
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
40
The role of sensory uncertainty in simple contour integration. PLoS Comput Biol 2020; 16:e1006308. [PMID: 33253195 PMCID: PMC7728286 DOI: 10.1371/journal.pcbi.1006308] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2018] [Revised: 12/10/2020] [Accepted: 10/22/2020] [Indexed: 11/29/2022] Open
Abstract
Perceptual organization is the process of grouping scene elements into whole entities. A classic example is contour integration, in which separate line segments are perceived as continuous contours. Uncertainty in such grouping arises from scene ambiguity and sensory noise. Some classic Gestalt principles of contour integration, and more broadly, of perceptual organization, have been re-framed in terms of Bayesian inference, whereby the observer computes the probability that the whole entity is present. Previous studies that proposed a Bayesian interpretation of perceptual organization, however, have ignored sensory uncertainty, despite the fact that accounting for the current level of perceptual uncertainty is one of the main signatures of Bayesian decision making. Crucially, trial-by-trial manipulation of sensory uncertainty is a key test to whether humans perform near-optimal Bayesian inference in contour integration, as opposed to using some manifestly non-Bayesian heuristic. We distinguish between these hypotheses in a simplified form of contour integration, namely judging whether two line segments separated by an occluder are collinear. We manipulate sensory uncertainty by varying retinal eccentricity. A Bayes-optimal observer would take the level of sensory uncertainty into account—in a very specific way—in deciding whether a measured offset between the line segments is due to non-collinearity or to sensory noise. We find that people deviate slightly but systematically from Bayesian optimality, while still performing "probabilistic computation" in the sense that they take into account sensory uncertainty via a heuristic rule. Our work contributes to an understanding of the role of sensory uncertainty in higher-order perception.

Our percept of the world is governed not only by the sensory information we have access to, but also by the way we interpret this information. When presented with a visual scene, our visual system undergoes a process of grouping visual elements together to form coherent entities so that we can interpret the scene more readily and meaningfully. For example, when looking at a pile of autumn leaves, one can still perceive and identify a whole leaf even when it is partially covered by another leaf. While Gestalt psychologists have long described perceptual organization with a set of qualitative laws, recent studies offered a statistically-optimal—Bayesian, in statistical jargon—interpretation of this process, whereby the observer chooses the scene configuration with the highest probability given the available sensory inputs. However, these studies drew their conclusions without considering a key actor in this kind of statistically-optimal computations, that is, the role of sensory uncertainty. One can easily imagine that our decision on whether two contours belong to the same leaf or different leaves is likely going to change when we move from viewing the pile of leaves at a great distance (high sensory uncertainty), to viewing very closely (low sensory uncertainty). Our study examines whether and how people incorporate uncertainty into contour integration, an elementary form of perceptual organization, by varying sensory uncertainty from trial to trial in a simple contour integration task. We found that people indeed take into account sensory uncertainty, however in a way that subtly deviates from optimal behavior.
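The Bayes-optimal decision rule for this task can be written as a likelihood comparison whose effective criterion widens with sensory noise. The sketch below is my own minimal version (the distribution of true offsets is assumed to be a zero-mean Gaussian of width sig_offset); it shows only the qualitative signature the study tests, not the authors' full model.

```python
import numpy as np

def norm_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def report_collinear(measured_offset, sigma, prior_collinear=0.5, sig_offset=10.0):
    """Posterior comparison: 'collinear' (true offset = 0) vs 'offset' (true
    offset drawn from a zero-mean Gaussian with SD sig_offset)."""
    like_col = norm_pdf(measured_offset, sigma)
    like_off = norm_pdf(measured_offset, np.sqrt(sigma**2 + sig_offset**2))
    return like_col * prior_collinear > like_off * (1 - prior_collinear)

for sigma in (1.0, 3.0, 6.0):        # sensory noise grows with eccentricity
    offsets = np.linspace(0.0, 20.0, 2001)
    mask = np.array([report_collinear(o, sigma) for o in offsets])
    # Largest measured offset still judged collinear at this noise level:
    print(f"sigma = {sigma:.0f} -> collinearity criterion ≈ ±{offsets[mask].max():.1f}")
```

The criterion grows with sigma: a Bayes-optimal observer tolerates larger measured offsets at higher eccentricity, which is exactly the uncertainty-dependence the experiment probes.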
41
Mohl JT, Pearson JM, Groh JM. Monkeys and humans implement causal inference to simultaneously localize auditory and visual stimuli. J Neurophysiol 2020; 124:715-727. [PMID: 32727263 DOI: 10.1152/jn.00046.2020] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
Abstract
The environment is sampled by multiple senses, which are woven together to produce a unified perceptual state. However, optimally unifying such signals requires assigning particular signals to the same or different underlying objects or events. Many prior studies (especially in animals) have assumed fusion of cross-modal information, whereas recent work in humans has begun to probe the appropriateness of this assumption. Here we present results from a novel behavioral task in which both monkeys (Macaca mulatta) and humans localized visual and auditory stimuli and reported their perceived sources through saccadic eye movements. When the locations of visual and auditory stimuli were widely separated, subjects made two saccades, while when the two stimuli were presented at the same location they made only a single saccade. Intermediate levels of separation produced mixed response patterns: a single saccade to an intermediate position on some trials or separate saccades to both locations on others. The distribution of responses was well described by a hierarchical causal inference model that accurately predicted both the explicit "same vs. different" source judgments as well as biases in localization of the source(s) under each of these conditions. The results from this task are broadly consistent with prior work in humans across a wide variety of analogous tasks, extending the study of multisensory causal inference to nonhuman primates and to a natural behavioral task with both a categorical assay of the number of perceived sources and a continuous report of the perceived position of the stimuli.

NEW & NOTEWORTHY We developed a novel behavioral paradigm for the study of multisensory causal inference in both humans and monkeys and found that both species make causal judgments in the same Bayes-optimal fashion. To our knowledge, this is the first demonstration of behavioral causal inference in animals, and this cross-species comparison lays the groundwork for future experiments using neuronal recording techniques that are impractical or impossible in human subjects.
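The hierarchical logic, a judgment of common versus separate causes followed by one fused saccade or two segregated ones, can be sketched compactly. The parameters below (cue noise, prior, assumed source spread) are illustrative, not fitted to the monkey or human data.

```python
import numpy as np

def norm_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def plan_saccades(x_a, x_v, sig_a=8.0, sig_v=2.0, p_common=0.5, sig_src=20.0):
    """Return one saccade target (fused) or two (segregated), depending on the
    posterior probability that the two stimuli share a source."""
    like_c1 = norm_pdf(x_v - x_a, np.sqrt(sig_a**2 + sig_v**2))
    like_c2 = norm_pdf(x_v - x_a, np.sqrt(sig_a**2 + sig_v**2 + sig_src**2))
    p_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    if p_c1 > 0.5:                       # 'same source': one fused saccade
        w_v = (1 / sig_v**2) / (1 / sig_v**2 + 1 / sig_a**2)
        return [w_v * x_v + (1 - w_v) * x_a]
    return [x_v, x_a]                    # 'different sources': two saccades

for sep in (0, 6, 12, 24):
    targets = plan_saccades(x_a=float(sep), x_v=0.0)
    print(f"separation {sep:2d} deg -> {len(targets)} saccade(s) at {targets}")
```

With these placeholder values, small and intermediate separations yield a single fused saccade and wide separations yield two, mirroring the mixed response patterns described above.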
Affiliation(s)
- Jeff T Mohl
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- John M Pearson
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Biostatistics and Bioinformatics, Duke University Medical School, Durham, North Carolina
- Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
42
Causal Inference in Audiovisual Perception. J Neurosci 2020; 40:6600-6612. [PMID: 32669354 DOI: 10.1523/jneurosci.0051-20.2020] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 06/26/2020] [Accepted: 07/01/2020] [Indexed: 11/21/2022] Open
Abstract
In our natural environment the senses are continuously flooded with a myriad of signals. To form a coherent representation of the world, the brain needs to integrate sensory signals arising from a common cause and segregate signals coming from separate causes. An unresolved question is how the brain solves this binding or causal inference problem and determines the causal structure of the sensory signals. In this functional magnetic resonance imaging (fMRI) study human observers (female and male) were presented with synchronous auditory and visual signals at the same location (i.e., common cause) or different locations (i.e., separate causes). On each trial, observers decided whether signals came from common or separate sources (i.e., "causal decisions"). To dissociate participants' causal inference from the spatial correspondence cues we adjusted the audiovisual disparity of the signals individually for each participant to threshold accuracy. Multivariate fMRI pattern analysis revealed the lateral prefrontal cortex as the only region that encodes predominantly the outcome of observers' causal inference (i.e., common vs separate causes). By contrast, the frontal eye field (FEF) and the intraparietal sulcus (IPS0-4) form a circuitry that concurrently encodes spatial (auditory and visual stimulus locations), decisional (causal inference), and motor response dimensions. These results suggest that the lateral prefrontal cortex plays a key role in inferring and making explicit decisions about the causal structure that generates sensory signals in our environment. By contrast, informed by observers' inferred causal structure, the FEF-IPS circuitry integrates auditory and visual spatial signals into representations that guide motor responses.

SIGNIFICANCE STATEMENT In our natural environment, our senses are continuously flooded with a myriad of signals. Transforming this barrage of sensory signals into a coherent percept of the world relies inherently on solving the causal inference problem, deciding whether sensory signals arise from a common cause and should hence be integrated or else be segregated. This functional magnetic resonance imaging study shows that the lateral prefrontal cortex plays a key role in inferring the causal structure of the environment. Crucially, informed by the spatial correspondence cues and the inferred causal structure, the frontal eye field and the intraparietal sulcus form a circuitry that integrates auditory and visual spatial signals into representations that guide motor responses.
43
Lorenc ES, Vandenbroucke ARE, Nee DE, de Lange FP, D'Esposito M. Dissociable neural mechanisms underlie currently-relevant, future-relevant, and discarded working memory representations. Sci Rep 2020; 10:11195. [PMID: 32641712 PMCID: PMC7343803 DOI: 10.1038/s41598-020-67634-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2018] [Accepted: 06/09/2020] [Indexed: 11/19/2022] Open
Abstract
In daily life, we use visual working memory (WM) to guide our actions. While attending to currently-relevant information, we must simultaneously maintain future-relevant information, and discard information that is no longer relevant. However, the neural mechanisms by which unattended, but future-relevant, information is maintained in working memory, and future-irrelevant information is discarded, are not well understood. Here, we investigated representations of these different information types, using functional magnetic resonance imaging in combination with multivoxel pattern analysis and computational modeling based on inverted encoding model simulations. We found that currently-relevant WM information in the focus of attention was maintained through representations in visual, parietal and posterior frontal brain regions, whereas deliberate forgetting led to suppression of the discarded representations in early visual cortex. In contrast, future-relevant information was neither inhibited nor actively maintained in these areas. These findings suggest that different neural mechanisms underlie the WM representation of currently- and future-relevant information, as compared to information that is discarded from WM.
Affiliation(s)
- Elizabeth S Lorenc
- University of California, Berkeley, CA, USA.
- University of Texas at Austin, Austin, TX, USA.
- Annelinde R E Vandenbroucke
- University of California, Berkeley, CA, USA
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
- Derek E Nee
- Florida State University, Tallahassee, FL, USA
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
44
French RL, DeAngelis GC. Multisensory neural processing: from cue integration to causal inference. CURRENT OPINION IN PHYSIOLOGY 2020; 16:8-13. [PMID: 32968701 DOI: 10.1016/j.cophys.2020.04.004] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Neurophysiological studies of multisensory processing have largely focused on how the brain integrates information from different sensory modalities to form a coherent percept. However, in the natural environment, an important extra step is needed: the brain faces the problem of causal inference, which involves determining whether different sources of sensory information arise from the same environmental cause, such that integrating them is advantageous. Behavioral and computational studies have provided a strong foundation for studying causal inference, but studies of its neural basis have only recently been undertaken. This review focuses on recent advances regarding how the brain infers the causes of sensory inputs and uses this information to make robust perceptual estimates.
Affiliation(s)
- Ranran L French
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY
| | - Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY
45
Yakubovich S, Israeli-Korn S, Halperin O, Yahalom G, Hassin-Baer S, Zaidel A. Visual self-motion cues are impaired yet overweighted during visual-vestibular integration in Parkinson's disease. Brain Commun 2020; 2:fcaa035. [PMID: 32954293 PMCID: PMC7425426 DOI: 10.1093/braincomms/fcaa035] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2019] [Revised: 02/17/2020] [Accepted: 03/11/2020] [Indexed: 11/25/2022] Open
Abstract
Parkinson's disease is prototypically a movement disorder. Although perceptual and motor functions are highly interdependent, much less is known about perceptual deficits in Parkinson's disease, which are less observable by nature, and might go unnoticed if not tested directly. It is therefore imperative to seek and identify these, to fully understand the challenges facing patients with Parkinson's disease. Also, perceptual deficits may be related to motor symptoms. Posture, gait and balance, affected in Parkinson's disease, rely on veridical perception of one's own motion (self-motion) in space. Yet it is not known whether self-motion perception is impaired in Parkinson's disease. Using a well-established multisensory paradigm of heading discrimination (that has not been previously applied to Parkinson's disease), we tested unisensory visual and vestibular self-motion perception, as well as multisensory integration of visual and vestibular cues, in 19 Parkinson's disease, 23 healthy age-matched and 20 healthy young-adult participants. After experiencing vestibular (on a motion platform), visual (optic flow) or multisensory (combined visual-vestibular) self-motion stimuli at various headings, participants reported whether their perceived heading was to the right or left of straight ahead. Parkinson's disease participants and age-matched controls were tested twice (Parkinson's disease participants on and off medication). Parkinson's disease participants demonstrated significantly impaired visual self-motion perception compared with age-matched controls on both visits, irrespective of medication status. Young controls performed slightly (but not significantly) better than age-matched controls and significantly better than the Parkinson's disease group. The visual self-motion perception impairment in Parkinson's disease correlated significantly with clinical disease severity. By contrast, vestibular performance was unimpaired in Parkinson's disease. Remarkably, despite impaired visual self-motion perception, Parkinson's disease participants significantly overweighted the visual cues during multisensory (visual-vestibular) integration (compared with Bayesian predictions of optimal integration) and significantly more than controls. These findings indicate that self-motion perception in Parkinson's disease is affected by impaired visual cues and by suboptimal visual-vestibular integration (overweighting of visual cues). Notably, vestibular self-motion perception was unimpaired. Thus, visual self-motion perception is specifically impaired in early-stage Parkinson's disease. This can impact Parkinson's disease diagnosis and subtyping. Overweighting of visual cues could reflect a general multisensory integration deficit in Parkinson's disease, or specific overestimation of visual cue reliability. Finally, impaired self-motion perception in Parkinson's disease may contribute to impaired balance and gait control. Future investigation into this connection might open up new avenues of alternative therapies to better treat these difficult symptoms.
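The standard analysis in this paradigm compares empirically measured cue weights against Bayesian predictions derived from unisensory thresholds; overweighting means the empirical visual weight exceeds the prediction. The numbers in this sketch are invented for illustration and are not the study's data.

```python
import numpy as np

def predicted_weights(sig_vis, sig_vest):
    """Optimal visual weight and combined threshold from unisensory sigmas."""
    w_vis = (1 / sig_vis**2) / (1 / sig_vis**2 + 1 / sig_vest**2)
    sig_comb = np.sqrt(1 / (1 / sig_vis**2 + 1 / sig_vest**2))
    return w_vis, sig_comb

# Hypothetical thresholds (deg): impaired vision means a large visual sigma.
sig_vis, sig_vest = 6.0, 3.0
w_pred, sig_pred = predicted_weights(sig_vis, sig_vest)
w_emp = 0.45   # hypothetical empirical visual weight from cue-conflict trials
print(f"predicted visual weight {w_pred:.2f}, empirical {w_emp:.2f} "
      f"-> visual cues {'over' if w_emp > w_pred else 'under'}weighted")
```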
Affiliation(s)
- Sol Yakubovich
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Simon Israeli-Korn
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- The Neurology and Neurosurgery Department, The Sackler School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Orly Halperin
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Gilad Yahalom
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- Department of Neurology, Movement Disorders Clinic, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Sharon Hassin-Baer
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- The Neurology and Neurosurgery Department, The Sackler School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
46
Ma WJ. Bayesian Decision Models: A Primer. Neuron 2019; 104:164-175. [PMID: 31600512 DOI: 10.1016/j.neuron.2019.09.037] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2019] [Revised: 09/20/2019] [Accepted: 09/20/2019] [Indexed: 11/26/2022]
Abstract
To understand decision-making behavior in simple, controlled environments, Bayesian models are often useful. First, optimal behavior is always Bayesian. Second, even when behavior deviates from optimality, the Bayesian approach offers candidate models to account for suboptimalities. Third, a realist interpretation of Bayesian models opens the door to studying the neural representation of uncertainty. In this tutorial, we review the principles of Bayesian models of decision making and then focus on five case studies with exercises. We conclude with reflections and future directions.
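To make the primer's recipe concrete, here is a minimal Python sketch of the simplest case such tutorials treat: a two-category decision from a noisy Gaussian measurement, combining prior and likelihood into a posterior and reporting the maximum a posteriori category. All numbers are illustrative, not taken from the paper:

from scipy.stats import norm

# Generative model: category C in {-1, +1} sets the stimulus mean; the
# measurement x is the stimulus corrupted by Gaussian sensory noise.
mu = {-1: -2.0, +1: 2.0}
sigma = 3.0
prior = {-1: 0.7, +1: 0.3}           # unequal category prior

def posterior_plus(x):
    like = {c: norm.pdf(x, mu[c], sigma) for c in (-1, +1)}
    evidence = sum(prior[c] * like[c] for c in (-1, +1))
    return prior[+1] * like[+1] / evidence

x = 0.5                              # a single noisy measurement
p = posterior_plus(x)
choice = +1 if p > 0.5 else -1       # MAP rule, optimal under 0/1 loss
print(f"P(C = +1 | x = {x}) = {p:.3f} -> report {choice:+d}")

Suboptimal behavior can then be modeled by perturbing components of this computation, for example by giving the observer mismatched priors or an incorrect noise level.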
Collapse
Affiliation(s)
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA.
| |
Collapse
|
47
|
Rodriguez R, Crane BT. Common causation and offset effects in human visual-inertial heading direction integration. J Neurophysiol 2020; 123:1369-1379. [PMID: 32130052 DOI: 10.1152/jn.00019.2020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Movement direction can be determined from a combination of visual and inertial cues. Visual motion (optic flow) can represent self-motion through a fixed environment or environmental motion relative to an observer. Simultaneous visual and inertial heading cues present the question of whether the cues have a common cause (i.e., should be integrated) or whether they should be considered independent. This was studied in eight healthy human subjects who experienced 12 visual and inertial headings in the horizontal plane, spaced in 30° increments. The headings were estimated in two unisensory and six multisensory trial blocks. Each unisensory block included 72 stimulus presentations, while each multisensory block included 144 stimulus presentations, covering every possible combination of visual and inertial headings in random order. After each multisensory stimulus, subjects reported their perception of visual and inertial headings as congruous (i.e., having common causation) or not. In the multisensory trial blocks, subjects also reported visual or inertial heading direction (three trial blocks for each). For aligned visual-inertial headings, the rate of reported common causation was higher in cardinal than in noncardinal directions. When visual and inertial stimuli were separated by 30°, the rate of reported common causation remained >50%, but it decreased to 15% or less for separations of ≥90°. The inertial heading was biased toward the visual heading by 11-20° for separations of 30-120°. Thus there was sensory integration even in conditions without reported common causation. The visual heading was minimally influenced by inertial direction. In trials in which common causation was perceived, inertial heading perception was more strongly biased toward the visual stimulus direction than in trials in which it was not. NEW & NOTEWORTHY: Optic flow ambiguously represents self-motion or environmental motion. When the two are in different directions, it is uncertain whether they are integrated into a common percept or kept separate. This study addresses that issue by asking subjects whether the two modalities were consistent and by measuring perceived directions to quantify the influence of each cue on the other. The visual stimulus can significantly influence the perceived inertial heading even when the two are judged inconsistent.
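These findings (partial integration even without reported common causation, and bias that falls off at large separations) are the signature of Bayesian causal inference with model averaging. A schematic Python sketch, using made-up noise parameters and an ad hoc 90° spread for the independent-causes hypothesis, neither of which is fit to these data:

import numpy as np
from scipy.stats import norm

sigma_vis, sigma_inert = 5.0, 15.0   # heading noise (deg): vision more reliable
p_common = 0.6                       # prior probability of a common cause

def inertial_percept(x_vis, x_inert):
    # P(common cause | measurements): likelihood of the cue discrepancy under
    # one shared cause vs. two independent causes (flat heading prior assumed).
    like_c1 = norm.pdf(x_vis - x_inert, 0, np.hypot(sigma_vis, sigma_inert))
    like_c2 = norm.pdf(x_vis - x_inert, 0, 90.0)
    p_c1 = p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
    # Model averaging: reliability-weighted fusion if common cause,
    # the unisensory inertial estimate otherwise.
    w = sigma_inert**2 / (sigma_vis**2 + sigma_inert**2)
    fused = x_inert + w * (x_vis - x_inert)
    return p_c1, p_c1 * fused + (1 - p_c1) * x_inert

for gap in (30, 90, 150):            # visual-inertial separation (deg)
    p_c1, est = inertial_percept(float(gap), 0.0)
    print(f"gap {gap:3d} deg: P(common) = {p_c1:.2f}, inertial bias = {est:.1f} deg")

Because the final report averages over both causal hypotheses, some visual pull on the inertial percept survives even when the inferred probability of a common cause is low, consistent with the residual integration observed here.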
Collapse
Affiliation(s)
- Raul Rodriguez
- Department of Biomedical Engineering, University of Rochester, Rochester, New York
| | - Benjamin T Crane
- Department of Biomedical Engineering, University of Rochester, Rochester, New York.,Department of Otolaryngology, University of Rochester, Rochester, New York.,Department of Neuroscience, University of Rochester, Rochester, New York
| |
Collapse
|
48
|
Badde S, Navarro KT, Landy MS. Modality-specific attention attenuates visual-tactile integration and recalibration effects by reducing prior expectations of a common source for vision and touch. Cognition 2020; 197:104170. [PMID: 32036027 DOI: 10.1016/j.cognition.2019.104170] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Revised: 12/19/2019] [Accepted: 12/20/2019] [Indexed: 10/25/2022]
Abstract
At any moment in time, streams of information reach the brain through the different senses. Given this wealth of noisy information, it is essential that we select information of relevance - a function fulfilled by attention - and infer its causal structure to eventually take advantage of redundancies across the senses. Yet, the role of selective attention during causal inference in cross-modal perception is unknown. We tested experimentally whether the distribution of attention across vision and touch enhances cross-modal spatial integration (visual-tactile ventriloquism effect, Expt. 1) and recalibration (visual-tactile ventriloquism aftereffect, Expt. 2) compared to modality-specific attention, and then used causal-inference modeling to isolate the mechanisms behind the attentional modulation. In both experiments, we found stronger effects of vision on touch under distributed than under modality-specific attention. Model comparison confirmed that participants used Bayes-optimal causal inference to localize visual and tactile stimuli presented as part of a visual-tactile stimulus pair, whereas simultaneously collected unity judgments - indicating whether the visual-tactile pair was perceived as spatially-aligned - relied on a sub-optimal heuristic. The best-fitting model revealed that attention modulated sensory and cognitive components of causal inference. First, distributed attention led to an increase of sensory noise compared to selective attention toward one modality. Second, attending to both modalities strengthened the stimulus-independent expectation that the two signals belong together, the prior probability of a common source for vision and touch. Yet, only the increase in the expectation of vision and touch sharing a common source was able to explain the observed enhancement of visual-tactile integration and recalibration effects with distributed attention. In contrast, the change in sensory noise explained only a fraction of the observed enhancements, as its consequences vary with the overall level of noise and stimulus congruency. Increased sensory noise leads to enhanced integration effects for visual-tactile pairs with a large spatial discrepancy, but reduced integration effects for stimuli with a small or no cross-modal discrepancy. In sum, our study indicates a weak a priori association between visual and tactile spatial signals that can be strengthened by distributing attention across both modalities.
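The mechanism the model comparison isolates here, an attention-dependent change in the common-source prior, can be shown directly: holding all sensory parameters fixed and raising p_common strengthens the pull of vision on the tactile location estimate. An illustrative Python sketch with hypothetical parameter values, not the study's fits:

import numpy as np
from scipy.stats import norm

def tactile_estimate(x_vis, x_tac, sigma_vis=2.0, sigma_tac=6.0,
                     p_common=0.5, sigma_indep=30.0):
    # Bayesian causal inference, averaging over common vs. independent sources
    like_c1 = norm.pdf(x_vis - x_tac, 0, np.hypot(sigma_vis, sigma_tac))
    like_c2 = norm.pdf(x_vis - x_tac, 0, sigma_indep)
    p_c1 = p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
    w = sigma_tac**2 / (sigma_vis**2 + sigma_tac**2)   # reliability-based fusion weight
    fused = x_tac + w * (x_vis - x_tac)
    return p_c1 * fused + (1 - p_c1) * x_tac

# Same 10-unit visual-tactile offset; only the common-source prior differs.
for label, pc in [("modality-specific attention", 0.3), ("distributed attention", 0.7)]:
    est = tactile_estimate(10.0, 0.0, p_common=pc)
    print(f"{label}: tactile estimate pulled to {est:.1f}")

In this toy version the distributed-attention condition yields roughly twice the visual pull, qualitatively matching the stronger integration effect reported; the study's actual parameter values were recovered by model fitting.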
Collapse
Affiliation(s)
- Stephanie Badde
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA.
| | - Karen T Navarro
- Department of Psychology, University of Minnesota, 75 E River Rd., Minneapolis, MN, 55455, USA
| | - Michael S Landy
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA
| |
Collapse
|
49
|
Müller-Pinzler L, Czekalla N, Mayer AV, Stolz DS, Gazzola V, Keysers C, Paulus FM, Krach S. Negativity-bias in forming beliefs about own abilities. Sci Rep 2019; 9:14416. [PMID: 31594967 PMCID: PMC6783436 DOI: 10.1038/s41598-019-50821-w] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2019] [Accepted: 09/19/2019] [Indexed: 01/06/2023] Open
Abstract
During everyday interactions people constantly receive feedback on their behavior, which shapes their beliefs about themselves. While classic studies in the field of social learning suggest that people tend to learn better from good news (positivity bias) when they perceive little opportunity to immediately improve their own performance, we show that belief updating is biased towards negative information when participants perceive the opportunity to adapt their performance during learning. In three consecutive experiments we applied a computational modeling approach to the subjects' learning behavior and reveal that the negativity bias was specific to learning about one's own, compared with others', performance and was modulated by prior beliefs about the self, i.e., the negativity bias was stronger in individuals lower in self-esteem. Social anxiety affected self-related negativity biases only when individuals were exposed to a judging audience, potentially explaining why the negative self-images of socially anxious individuals persist and commonly surface in social settings. Self-related belief formation is therefore surprisingly negatively biased in situations that suggest opportunities to improve, and this bias is shaped by trait differences in self-esteem and social anxiety.
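The computational modeling referred to here is typically a prediction-error (delta-rule) update with separate learning rates for better- and worse-than-expected feedback; a negativity bias then corresponds to a larger learning rate for negative prediction errors. A minimal Python sketch with invented parameter values, not the authors' fitted ones:

def update(belief, feedback, alpha_pos=0.10, alpha_neg=0.25):
    # alpha_neg > alpha_pos implements a negativity bias in belief updating
    pe = feedback - belief                    # prediction error
    return belief + (alpha_pos if pe > 0 else alpha_neg) * pe

belief = 0.5                                  # prior belief in own success probability
for feedback in [1, 0, 0, 1, 0]:              # success/failure stream (40% success)
    belief = update(belief, feedback)
print(f"final self-belief: {belief:.2f}")     # ~0.28, below the 0.40 success rate

Fitting alpha_pos and alpha_neg per participant, separately for self- and other-related learning, is how such studies quantify whose beliefs are biased and by how much.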
Collapse
Affiliation(s)
- Laura Müller-Pinzler
- Department of Psychiatry and Psychotherapy, Social Neuroscience Lab, University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany.
- Department of Psychiatry and Psychotherapy, Translational Psychiatry Unit (TPU), University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany.
- Social Brain Lab, Netherlands Institute for Neuroscience, KNAW, Meibergdreef 47, NL-1105BA, Amsterdam, The Netherlands.
| | - Nora Czekalla
- Department of Psychiatry and Psychotherapy, Social Neuroscience Lab, University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany
- Department of Psychiatry and Psychotherapy, Translational Psychiatry Unit (TPU), University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany
| | - Annalina V Mayer
- Department of Psychiatry and Psychotherapy, Social Neuroscience Lab, University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany
- Department of Psychiatry and Psychotherapy, Translational Psychiatry Unit (TPU), University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany
| | - David S Stolz
- Department of Psychiatry and Psychotherapy, Social Neuroscience Lab, University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany
- Department of Psychiatry and Psychotherapy, Translational Psychiatry Unit (TPU), University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany
| | - Valeria Gazzola
- Social Brain Lab, Netherlands Institute for Neuroscience, KNAW, Meibergdreef 47, NL-1105BA, Amsterdam, The Netherlands
- Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 116, NL-1018 WV, Amsterdam, The Netherlands
| | - Christian Keysers
- Social Brain Lab, Netherlands Institute for Neuroscience, KNAW, Meibergdreef 47, NL-1105BA, Amsterdam, The Netherlands
- Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 116, NL-1018 WV, Amsterdam, The Netherlands
| | - Frieder M Paulus
- Department of Psychiatry and Psychotherapy, Social Neuroscience Lab, University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany
- Department of Psychiatry and Psychotherapy, Translational Psychiatry Unit (TPU), University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany
| | - Sören Krach
- Department of Psychiatry and Psychotherapy, Social Neuroscience Lab, University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany
- Department of Psychiatry and Psychotherapy, Translational Psychiatry Unit (TPU), University of Lübeck, Ratzeburger Allee 160, D-23538, Lübeck, Germany
| |
Collapse
|
50
|
Norton EH, Acerbi L, Ma WJ, Landy MS. Human online adaptation to changes in prior probability. PLoS Comput Biol 2019; 15:e1006681. [PMID: 31283765 PMCID: PMC6638982 DOI: 10.1371/journal.pcbi.1006681] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2018] [Revised: 07/18/2019] [Accepted: 06/20/2019] [Indexed: 12/01/2022] Open
Abstract
Optimal sensory decision-making requires the combination of uncertain sensory signals with prior expectations. The effect of prior probability is often described as a shift in the decision criterion. Can observers track sudden changes in probability? To answer this question, we used a change-point detection paradigm that is frequently used to examine behavior in changing environments. In a pair of orientation-categorization tasks, we investigated the effects of changing probabilities on decision-making. In both tasks, category probability was updated using a sample-and-hold procedure: probability was held constant for a period of time before jumping to another probability state that was randomly selected from a predetermined set of probability states. We developed an ideal Bayesian change-point detection model in which the observer marginalizes over both the current run length (i.e., time since the last change) and the current category probability. We compared this model to various alternative models that correspond to different strategies (from approximately Bayesian to simple heuristics) that the observers may have adopted to update their beliefs about probabilities. While a number of models provided decent fits to the data, model comparison favored two: an exponential-averaging model in which the probability estimate is biased towards equal priors (consistent with a conservative bias), and a flexible variant of the Bayesian change-point detection model with incorrect beliefs. We interpret the former as the simpler, more biologically plausible explanation, suggesting that the mechanism underlying the change of decision criterion is a combination of on-line estimation of prior probability and a stable, long-term equal-probability prior, thus operating at two very different timescales.
In summary, we demonstrate how people learn and adapt to changes in the probability of occurrence of one of two categories in decision-making under uncertainty. The study combined psychophysical behavioral tasks with computational modeling. We used two behavioral tasks: a typical forced-choice categorization task as well as one in which the observer specified the decision criterion to use on each trial before the stimulus was displayed. We formulated an ideal Bayesian change-point detection model and compared it to several alternative models. We found that the data are explained best by a model that estimates category probability based on recently observed exemplars, with a bias towards equal probability. Our results suggest that the brain takes multiple relevant timescales into account when setting category expectations.
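The favored exponential-averaging account is compact enough to state in code: a delta-rule running average of recent category outcomes, shrunk toward the equal-probability prior. A Python sketch in a sample-and-hold environment like the one described above, with illustrative smoothing and shrinkage parameters:

import numpy as np

def exponential_average_prob(outcomes, alpha=0.1, w_equal=0.3):
    # Track P(category A) by exponentially averaging binary outcomes, then
    # shrink the running estimate toward the equal-probability prior of 0.5.
    p, estimates = 0.5, []
    for o in outcomes:                  # o = 1 if category A occurred, else 0
        p += alpha * (o - p)            # delta-rule form of an exponential average
        estimates.append((1 - w_equal) * p + w_equal * 0.5)
    return np.array(estimates)

rng = np.random.default_rng(0)
# Sample-and-hold environment: P(A) = 0.8 for 100 trials, then jumps to 0.2.
outcomes = np.concatenate([rng.random(100) < 0.8, rng.random(100) < 0.2])
est = exponential_average_prob(outcomes.astype(float))
print(f"estimate before the change: {est[99]:.2f}; 30 trials after: {est[129]:.2f}")

The shrinkage term keeps the estimate from ever reaching the true extremes (0.8 or 0.2), which is exactly the conservative bias the model comparison favored.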
Collapse
Affiliation(s)
- Elyse H. Norton
- Psychology Department, New York University, New York, New York, United States of America
| | - Luigi Acerbi
- Psychology Department, New York University, New York, New York, United States of America
- Center for Neural Science, New York University, New York, New York, United States of America
| | - Wei Ji Ma
- Psychology Department, New York University, New York, New York, United States of America
- Center for Neural Science, New York University, New York, New York, United States of America
| | - Michael S. Landy
- Psychology Department, New York University, New York, New York, United States of America
- Center for Neural Science, New York University, New York, New York, United States of America
- * E-mail:
| |
Collapse
|