1
Zhu H, Beierholm U, Shams L. BCI Toolbox: An open-source python package for the Bayesian causal inference model. PLoS Comput Biol 2024; 20:e1011791. PMID: 38976678; PMCID: PMC11257388; DOI: 10.1371/journal.pcbi.1011791
Abstract
Psychological and neuroscientific research over the past two decades has shown that Bayesian causal inference (BCI) is a potential unifying theory that can account for a wide range of perceptual and sensorimotor processes in humans. We therefore introduce the BCI Toolbox, a statistical and analytical tool in Python that enables researchers to conveniently perform quantitative modeling and analysis of behavioral data. Additionally, we describe the algorithm of the BCI model and test its stability and reliability via parameter recovery. The BCI Toolbox offers a robust platform for implementing the BCI model as well as a hands-on tool for learning and understanding it, facilitating widespread use and enabling researchers to delve into their data to uncover underlying cognitive mechanisms.
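The abstract describes but does not spell out the model's core computation. As a minimal, hedged sketch (this is not the BCI Toolbox's own API; the function name and parameters are illustrative), the posterior probability that two cues share a common cause, for Gaussian likelihoods and a zero-mean Gaussian spatial prior, follows the standard formulation of Körding et al. (2007):

```python
import numpy as np

def bci_posterior_common_cause(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common):
    """Posterior probability p(C=1 | x_v, x_a) that a visual and an auditory
    measurement share one cause, under Gaussian likelihoods and a zero-mean
    Gaussian spatial prior with width sigma_p (illustrative sketch)."""
    var_v, var_a, var_p = sigma_v**2, sigma_a**2, sigma_p**2
    # Likelihood of both measurements under a single common cause (C=1)
    var_c1 = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = np.exp(-0.5 * ((x_v - x_a)**2 * var_p
                             + x_v**2 * var_a + x_a**2 * var_v) / var_c1) \
              / (2 * np.pi * np.sqrt(var_c1))
    # Likelihood under two independent causes (C=2): product of two Gaussians
    like_c2 = np.exp(-0.5 * (x_v**2 / (var_v + var_p)
                             + x_a**2 / (var_a + var_p))) \
              / (2 * np.pi * np.sqrt((var_v + var_p) * (var_a + var_p)))
    # Bayes' rule over the two causal structures
    return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
```

For coincident cues the posterior favors a common cause; as the cue discrepancy grows relative to the sensory noise, it falls toward zero.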
Affiliation(s)
- Haocheng Zhu
- Department of Psychology, University of California, Los Angeles, California, United States of America
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Ulrik Beierholm
- Department of Psychology, University of Durham, Durham, United Kingdom
- Ladan Shams
- Department of Psychology, University of California, Los Angeles, California, United States of America
- Department of Bioengineering, and Neuroscience Interdepartmental Program, University of California, Los Angeles, California, United States of America
2
Rohe T. Complex multisensory causal inference in multi-signal scenarios (commentary on Kayser, Debats & Heuer, 2024). Eur J Neurosci 2024; 59:2890-2893. PMID: 38706126; DOI: 10.1111/ejn.16388
Affiliation(s)
- Tim Rohe
- Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
3
Hacohen-Brown S, Gilboa-Schechtman E, Zaidel A. Modality-specific effects of threat on self-motion perception. BMC Biol 2024; 22:120. PMID: 38783286; PMCID: PMC11119305; DOI: 10.1186/s12915-024-01911-3
Abstract
BACKGROUND: Threat and individual differences in threat-processing bias perception of stimuli in the environment. Yet, their effect on perception of one's own (body-based) self-motion in space is unknown. Here, we tested the effects of threat on self-motion perception using a multisensory motion simulator with concurrent threatening or neutral auditory stimuli.
RESULTS: Strikingly, threat had opposite effects on vestibular and visual self-motion perception, leading to overestimation of vestibular, but underestimation of visual, self-motions. Trait anxiety tended to be associated with an enhanced effect of threat on estimates of self-motion for both modalities.
CONCLUSIONS: Enhanced vestibular perception under threat might stem from shared neural substrates with emotional processing, whereas diminished visual self-motion perception may indicate that a threatening stimulus diverts attention away from optic flow integration. Thus, threat induces modality-specific biases in everyday experiences of self-motion.
Affiliation(s)
- Shira Hacohen-Brown
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, 5290002, Ramat Gan, Israel
- Eva Gilboa-Schechtman
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, 5290002, Ramat Gan, Israel
- Department of Psychology, Bar-Ilan University, 5290002, Ramat Gan, Israel
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, 5290002, Ramat Gan, Israel.
4
Zaidel A. Multisensory Calibration: A Variety of Slow and Fast Brain Processes Throughout the Lifespan. Adv Exp Med Biol 2024; 1437:139-152. PMID: 38270858; DOI: 10.1007/978-981-99-7611-9_9
Abstract
From before we are born, throughout development, adulthood, and aging, we are immersed in a multisensory world. At each of these stages, our sensory cues are constantly changing, due to body, brain, and environmental changes. While integration of information from our different sensory cues improves precision, it only improves accuracy if the underlying cues are unbiased. Thus, multisensory calibration is a vital and ongoing process. To meet this grand challenge, our brains have evolved a variety of mechanisms. First, in response to a systematic discrepancy between sensory cues (without external feedback), the cues calibrate one another (unsupervised calibration). Second, multisensory function is calibrated to external feedback (supervised calibration). These two mechanisms superimpose. While the former likely reflects a lower-level mechanism, the latter likely reflects a higher-level cognitive mechanism. Indeed, neural correlates of supervised multisensory calibration in monkeys were found in the higher-level multisensory cortical area VIP, but not in the relatively lower-level multisensory area MSTd. In addition, even without a cue discrepancy (e.g., when experiencing stimuli from different sensory cues in series), the brain monitors supra-modal statistics of events in the environment and adapts perception cross-modally. This, too, comprises a variety of mechanisms, including a confirmation bias to prior choices and lower-level cross-sensory adaptation. Further research into the neuronal underpinnings of the broad and diverse functions of multisensory calibration, with improved synthesis of theories, is needed to attain a more comprehensive understanding of multisensory brain function.
Affiliation(s)
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel.
5
Wang G, Yang Y, Dong K, Hua A, Wang J, Liu J. Multisensory Conflict Impairs Cortico-Muscular Network Connectivity and Postural Stability: Insights from Partial Directed Coherence Analysis. Neurosci Bull 2024; 40:79-89. PMID: 37989834; PMCID: PMC10774487; DOI: 10.1007/s12264-023-01143-5
Abstract
Sensory conflict impacts postural control, yet its effect on cortico-muscular interaction remains underexplored. We aimed to investigate the influence of sensory conflict on the cortico-muscular network and postural stability. We used a rotating platform and virtual reality to present subjects with congruent and incongruent sensory input, recorded EEG (electroencephalogram) and EMG (electromyogram) data, and constructed a directed connectivity network. The results suggest that, compared to sensory congruence, during sensory conflict: (1) connectivity among the sensorimotor, visual, and posterior parietal cortices generally decreases, (2) cortical control over the muscles is weakened, (3) feedback from muscles to the cortex is strengthened, and (4) the range of body sway increases while its complexity decreases. These results underline the intricate effects of sensory conflict on cortico-muscular networks. During sensory conflict, the brain adaptively decreases its integration of the conflicting information. Without this integrated information, cortical control over muscles may be lessened, whereas muscle feedback may be enhanced in compensation.
Affiliation(s)
- Guozheng Wang
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, 310058, China
- Taizhou Key Laboratory of Medical Devices and Advanced Materials, Research Institute of Zhejiang University-Taizhou, Taizhou, 318000, China
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, 310058, China
- Yi Yang
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, 310058, China
- Kangli Dong
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, 310058, China
- Anke Hua
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, 310058, China
- Jian Wang
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, 310058, China.
- Center for Psychological Science, Zhejiang University, Hangzhou, 310058, China.
- Jun Liu
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, 310058, China.
- Taizhou Key Laboratory of Medical Devices and Advanced Materials, Research Institute of Zhejiang University-Taizhou, Taizhou, 318000, China.
6
Halow SJ, Hamilton A, Folmer E, MacNeilage PR. Impaired stationarity perception is associated with increased virtual reality sickness. J Vis 2023; 23:7. PMID: 38127329; PMCID: PMC10750839; DOI: 10.1167/jov.23.14.7
Abstract
Stationarity perception refers to the ability to accurately perceive the surrounding visual environment as world-fixed during self-motion. Perception of stationarity depends on mechanisms that evaluate the congruence between retinal/oculomotor signals and head movement signals. In a series of psychophysical experiments, we systematically varied the congruence between retinal/oculomotor and head movement signals to find the range of visual gains that is compatible with perception of a stationary environment. On each trial, human subjects wearing a head-mounted display execute a yaw head movement and report whether the visual gain was perceived to be too slow or too fast. A psychometric fit to the data across trials reveals the visual gain most compatible with stationarity (a measure of accuracy) and the sensitivity to visual gain manipulation (a measure of precision). Across experiments, we varied (1) the spatial frequency of the visual stimulus, (2) the retinal location of the visual stimulus (central vs. peripheral), and (3) fixation behavior (scene-fixed vs. head-fixed). Stationarity perception is most precise and accurate during scene-fixed fixation. Effects of spatial frequency and retinal stimulus location become evident during head-fixed fixation, when retinal image motion is increased. Virtual reality sickness, assessed using the Simulator Sickness Questionnaire, covaries with perceptual performance. Decreased accuracy is associated with an increase in the nausea subscore, while decreased precision is associated with an increase in the oculomotor and disorientation subscores.
Affiliation(s)
- Allie Hamilton
- University of Nevada, Reno, Psychology, Reno, Nevada, USA
- Eelke Folmer
- University of Nevada, Reno, Computer Science, Reno, Nevada, USA
7
Shivkumar S, DeAngelis GC, Haefner RM. Hierarchical motion perception as causal inference. bioRxiv 2023:2023.11.18.567582. PMID: 38014023; PMCID: PMC10680834; DOI: 10.1101/2023.11.18.567582
Abstract
Since motion can only be defined relative to a reference frame, which reference frame guides perception? A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure of the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, formalized by an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups, groups into supergroups, etc., and "perceives" motion in the appropriate reference frame. Critical model predictions are supported by two new experiments, and fitting our model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles, offering inspiration for building better models of visual processing in general.
Affiliation(s)
- Sabyasachi Shivkumar
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, NY 10027, USA
- Gregory C DeAngelis
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Ralf M Haefner
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
8
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220344. PMID: 37545300; PMCID: PMC10404925; DOI: 10.1098/rstb.2022.0344
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the cadre of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA
- Johannes Bill
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Department of Psychology, Harvard University, Boston, MA 02115, USA
- Haoran Ding
- Center for Neural Science, New York University, New York, NY 10003, USA
- John Vastola
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14611, USA
- Dora E. Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tandon School of Engineering, New York University, New York, NY 10003, USA
- Jan Drugowitsch
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Center for Brain Science, Harvard University, Boston, MA 02115, USA
9
Johnston WJ, Freedman DJ. Redundant representations are required to disambiguate simultaneously presented complex stimuli. PLoS Comput Biol 2023; 19:e1011327. PMID: 37556470; PMCID: PMC10442167; DOI: 10.1371/journal.pcbi.1011327
Abstract
A pedestrian crossing a street during rush hour often looks and listens for potential danger. When they hear several different horns, they localize the cars that are honking and decide whether or not they need to modify their motor plan. How does the pedestrian use this auditory information to pick out the corresponding cars in visual space? The integration of distributed representations like these is called the assignment problem, and it must be solved to integrate distinct representations both across and within sensory modalities. Here, we identify and analyze a solution to the assignment problem: the representation of one or more common stimulus features in pairs of relevant brain regions; for example, estimates of the spatial position of cars are represented in both the visual and auditory systems. We characterize how the reliability of this solution depends on different features of the stimulus set (e.g., the size of the set and the complexity of the stimuli) and the details of the split representations (e.g., the precision of each stimulus representation and the amount of overlapping information). Next, we implement this solution in a biologically plausible receptive field code and show how constraints on the number of neurons and spikes used by the code force the brain to navigate a tradeoff between local and catastrophic errors. We show that, when many spikes and neurons are available, representing stimuli from a single sensory modality can be done more reliably across multiple brain regions, despite the risk of assignment errors. Finally, we show that a feedforward neural network can learn the optimal solution to the assignment problem, even when it receives inputs in two distinct representational formats. We also discuss relevant results on assignment errors from the human working memory literature and show that several key predictions of our theory already have support.
Affiliation(s)
- W. Jeffrey Johnston
- Graduate Program in Computational Neuroscience and the Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America
- Center for Theoretical Neuroscience and Mortimer B. Zuckerman Mind, Brain and Behavior Institute, Columbia University, New York, New York, United States of America
- David J. Freedman
- Graduate Program in Computational Neuroscience and the Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America
- Neuroscience Institute, The University of Chicago, Chicago, Illinois, United States of America
10
Chancel M, Ehrsson HH. Proprioceptive uncertainty promotes the rubber hand illusion. Cortex 2023; 165:70-85. PMID: 37269634; PMCID: PMC10284257; DOI: 10.1016/j.cortex.2023.04.005
Abstract
Body ownership is the multisensory perception of a body as one's own. Recently, the emergence of body ownership illusions like the visuotactile rubber hand illusion has been described by Bayesian causal inference models in which the observer computes the probability that visual and tactile signals come from a common source. Given the importance of proprioception for the perception of one's body, proprioceptive information and its relative reliability should impact this inferential process. We used a detection task based on the rubber hand illusion where participants had to report whether the rubber hand felt like their own or not. We manipulated the degree of asynchrony of visual and tactile stimuli delivered to the rubber hand and the real hand under two levels of proprioceptive noise using tendon vibration applied to the lower arm's antagonist extensor and flexor muscles. As hypothesized, the probability of the emergence of the rubber hand illusion increased with proprioceptive noise. Moreover, this result, well fitted by a Bayesian causal inference model, was best described by a change in the a priori probability of a common cause for vision and touch. These results offer new insights into how proprioceptive uncertainty shapes the multisensory perception of one's own body.
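The causal-inference account summarized above can be illustrated with a toy computation (a hedged sketch, not the authors' fitted model; the function name, the Gaussian forms, and the `slack` parameter are assumptions made here for illustration). Under a common cause the true visuotactile asynchrony is zero, so the measured asynchrony is pure noise; under separate causes asynchronies are spread more broadly. The a priori probability of a common cause then enters directly via Bayes' rule:

```python
import math

def p_common_given_asynchrony(delta, sigma, p_prior, slack=0.5):
    """Toy posterior that visual and tactile events share one cause, given a
    measured asynchrony `delta` (s) with sensory noise `sigma`. C=1: the
    measurement is N(0, sigma^2); C=2: approximated as N(0, sigma^2 + slack^2).
    All parameter values are illustrative, not fitted."""
    like_c1 = math.exp(-0.5 * (delta / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    s2 = math.sqrt(sigma ** 2 + slack ** 2)
    like_c2 = math.exp(-0.5 * (delta / s2) ** 2) / (s2 * math.sqrt(2 * math.pi))
    return like_c1 * p_prior / (like_c1 * p_prior + like_c2 * (1 - p_prior))
```

Raising `p_prior`, the move that best described the tendon-vibration data in this study, increases the posterior for a common cause at every asynchrony, i.e., it makes the illusion more likely to emerge.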
Affiliation(s)
- Marie Chancel
- Department of Neuroscience, Brain, Body and Self Laboratory, Karolinska Institutet, Sweden
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- H Henrik Ehrsson
- Department of Neuroscience, Brain, Body and Self Laboratory, Karolinska Institutet, Sweden
11
Noel JP, Angelaki DE. A theory of autism bridging across levels of description. Trends Cogn Sci 2023; 27:631-641. PMID: 37183143; PMCID: PMC10330321; DOI: 10.1016/j.tics.2023.04.010
Abstract
Autism impacts a wide range of behaviors and neural functions. As such, theories of autism spectrum disorder (ASD) are numerous and span different levels of description, from neurocognitive to molecular. We propose how existent behavioral, computational, algorithmic, and neural accounts of ASD may relate to one another. Specifically, we argue that ASD may be cast as a disorder of causal inference (computational level). This computation relies on marginalization, which is thought to be subserved by divisive normalization (algorithmic level). In turn, divisive normalization may be impaired by excitatory-to-inhibitory imbalances (neural implementation level). We also discuss ASD within similar frameworks, those of predictive coding and circular inference. Together, we hope to motivate work unifying the different accounts of ASD.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY, USA.
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY, USA
- Tandon School of Engineering, New York University, New York, NY, USA
12
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. bioRxiv 2023:2023.01.27.525974. PMID: 36778376; PMCID: PMC9915492; DOI: 10.1101/2023.01.27.525974
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of Bayesian Causal Inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the cadre of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief over (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modeling results, we show that humans report targets as stationary and steer toward their initial rather than final position more often when they are themselves moving, suggesting a misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results confirm both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets are largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, United States
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Haoran Ding
- Center for Neural Science, New York University, New York City, NY, United States
- John Vastola
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, United States
- Dora E. Angelaki
- Center for Neural Science, New York University, New York City, NY, United States
- Tandon School of Engineering, New York University, New York City, NY, United States
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Center for Brain Science, Harvard University, Boston, MA, United States
13
Bill J, Gershman SJ, Drugowitsch J. Visual motion perception as online hierarchical inference. Nat Commun 2022; 13:7403. PMID: 36456546; PMCID: PMC9715570; DOI: 10.1038/s41467-022-34805-5
Abstract
Identifying the structure of motion relations in the environment is critical for navigation, tracking, prediction, and pursuit. Yet, little is known about the mental and neural computations that allow the visual system to infer this structure online from a volatile stream of visual information. We propose online hierarchical Bayesian inference as a principled solution for how the brain might solve this complex perceptual task. We derive an online Expectation-Maximization algorithm that explains human percepts qualitatively and quantitatively for a diverse set of stimuli, covering classical psychophysics experiments, ambiguous motion scenes, and illusory motion displays. We thereby identify normative explanations for the origin of human motion structure perception and make testable predictions for future psychophysics experiments. The proposed online hierarchical inference model furthermore affords a neural network implementation which shares properties with motion-sensitive cortical areas and motivates targeted experiments to reveal the neural representations of latent structure.
Affiliation(s)
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Samuel J Gershman
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
14
French RL, DeAngelis GC. Scene-relative object motion biases depth percepts. Sci Rep 2022; 12:18480. PMID: 36323845; PMCID: PMC9630409; DOI: 10.1038/s41598-022-23219-4
Abstract
An important function of the visual system is to represent 3D scene structure from a sequence of 2D images projected onto the retinae. During observer translation, the relative image motion of stationary objects at different distances (motion parallax) provides potent depth information. However, if an object moves relative to the scene, this complicates the computation of depth from motion parallax since there will be an additional component of image motion related to scene-relative object motion. To correctly compute depth from motion parallax, only the component of image motion caused by self-motion should be used by the brain. Previous experimental and theoretical work on perception of depth from motion parallax has assumed that objects are stationary in the world. Thus, it is unknown whether perceived depth based on motion parallax is biased by object motion relative to the scene. Naïve human subjects viewed a virtual 3D scene consisting of a ground plane and stationary background objects, while lateral self-motion was simulated by optic flow. A target object could be either stationary or moving laterally at different velocities, and subjects were asked to judge the depth of the object relative to the plane of fixation. Subjects showed a far bias when object and observer moved in the same direction, and a near bias when object and observer moved in opposite directions. This pattern of biases is expected if subjects confound image motion due to self-motion with that due to scene-relative object motion. These biases were large when the object was viewed monocularly, and were greatly reduced, but not eliminated, when binocular disparity cues were provided. Our findings establish that scene-relative object motion can confound perceptual judgements of depth during self-motion.
Collapse
Affiliation(s)
- Ranran L. French
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, USA
| | - Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, USA
| |
Collapse
|
15
|
Wang G, Yang Y, Wang J, Hao Z, Luo X, Liu J. Dynamic changes of brain networks during standing balance control under visual conflict. Front Neurosci 2022; 16:1003996. [PMID: 36278015 PMCID: PMC9581155 DOI: 10.3389/fnins.2022.1003996] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Accepted: 09/20/2022] [Indexed: 11/13/2022] Open
Abstract
Stance balance control requires a very accurate tuning and combination of visual, vestibular, and proprioceptive inputs, and conflict among these sensory systems may induce posture instability and even falls. Although there are many human mechanics and psychophysical studies for this phenomenon, the effects of sensory conflict on brain networks and its underlying neural mechanisms are still unclear. Here, we combined a rotating platform and a virtual reality (VR) headset to control the participants’ physical and visual motion states, presenting them with incongruous (sensory conflict) or congruous (normal control) physical-visual stimuli. Further, to investigate the effects of sensory conflict on stance stability and brain networks, we recorded and calculated the effective connectivity of source-level electroencephalogram (EEG) and the average velocity of the plantar center of pressure (COP) in healthy subjects (18 subjects: 10 males, 8 females). First, our results showed that sensory conflict did have a detrimental effect on stance posture control [sensor F(1, 17) = 13.34, P = 0.0019], but this effect decreases over time [window*sensor F(2, 34) = 6.72, P = 0.0035]. Humans show a marked adaptation to sensory conflict. In addition, we found that human adaptation to the sensory conflict was associated with changes in the cortical network. At the stimulus onset, congruent and incongruent stimuli had similar effects on brain networks. In both cases, there was a significant increase in information interaction centered on the frontal cortices (p < 0.05). Then, after a time window, synchronized with the restoration of stance stability under conflict, the connectivity of large brain regions, including posterior parietal, visual, somatosensory, and motor cortices, was generally lower in sensory conflict than in controls (p < 0.05). But the influence of the superior temporal lobe on other cortices was significantly increased. 
Overall, we speculate that a posterior parietal-centered cortical network may play a key role in integrating congruous sensory information. Furthermore, the dissociation of this network may reflect a flexible multisensory interaction strategy that is critical for human posture balance control in complex and changing environments. In addition, the superior temporal lobe may play a key role in processing conflicting sensory information.
Collapse
Affiliation(s)
- Guozheng Wang
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
| | - Yi Yang
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, China
| | - Jian Wang
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, China
| | - Zengming Hao
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Xin Luo
- Department of Sports Science, College of Education, Zhejiang University, Hangzhou, China
| | - Jun Liu
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- *Correspondence: Jun Liu,
| |
Collapse
|
16
|
Xing X, Saunders JA. Perception of object motion during self-motion: Correlated biases in judgments of heading direction and object motion. J Vis 2022; 22:8. [PMID: 36223109 PMCID: PMC9583749 DOI: 10.1167/jov.22.11.8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
This study investigated the relationship between perceived heading direction and perceived motion of an independently moving object during self-motion. Using a dual task paradigm, we tested whether object motion judgments showed biases consistent with heading perception, both across conditions and from trial to trial. Subjects viewed simulated self-motion and estimated their heading direction (Experiment 1), or walked toward a target in virtual reality with conflicting physical and visual cues (Experiment 2). During self-motion, an independently moving object briefly appeared, with varied horizontal velocity, and observers judged whether the object was moving leftward or rightward. In Experiment 1, heading estimates showed an expected center bias, and object motion judgments showed corresponding biases. Trial-to-trial variations were also correlated: on trials with a more rightward heading bias, object motion judgments were consistent with a more rightward heading, and vice versa. In Experiment 2, we estimated the relative weighting of visual and physical cues in control of walking and object motion judgments. Both were strongly influenced by nonvisual cues, with less weighting for object motion (86% vs. 63%). There were also trial-to-trial correlations between biases in walking direction and object motion judgments. The results provide evidence that shared mechanisms contribute to heading perception and perception of object motion.
Collapse
Affiliation(s)
- Xing Xing
- Department of Psychology, University of Hong Kong, Hong Kong
| | | |
Collapse
|
17
|
Causal contribution of optic flow signal in Macaque extrastriate visual cortex for roll perception. Nat Commun 2022; 13:5479. [PMID: 36123363 PMCID: PMC9485245 DOI: 10.1038/s41467-022-33245-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Accepted: 09/08/2022] [Indexed: 11/08/2022] Open
Abstract
Optic flow is a powerful cue for inferring self-motion, which is critical for postural control, spatial orientation, locomotion, and navigation. In primates, neurons in extrastriate visual cortex (area MSTd) are predominantly modulated by high-order optic flow patterns (e.g., spiral), yet a functional link to direct perception has been lacking. Here, we applied electrical microstimulation to selectively manipulate populations of MSTd neurons while macaques discriminated the direction of rotation around the line of sight (roll) or the direction of linear translation (heading), two tasks that were orthogonal in a 3D spiral coordinate frame, using a four-alternative forced-choice paradigm. Microstimulation frequently biased the animals' roll perception toward the labeled lines encoded by the stimulated neurons, in contexts with either spiral or pure-rotation stimuli. Choice frequency was also altered between roll and translation flow patterns. Our results provide direct causal evidence that roll signals in MSTd, although often mixed with translation signals, can be extracted by downstream areas for the perception of rotation relative to the gravity vertical.
Collapse
|
18
|
Hong F, Badde S, Landy MS. Repeated exposure to either consistently spatiotemporally congruent or consistently incongruent audiovisual stimuli modulates the audiovisual common-cause prior. Sci Rep 2022; 12:15532. [PMID: 36109544 PMCID: PMC9478143 DOI: 10.1038/s41598-022-19041-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Accepted: 08/23/2022] [Indexed: 11/09/2022] Open
Abstract
To estimate an environmental property such as object location from multiple sensory signals, the brain must infer their causal relationship. Only information originating from the same source should be integrated. This inference relies on the characteristics of the measurements, the information the sensory modalities provide on a given trial, as well as on a cross-modal common-cause prior: accumulated knowledge about the probability that cross-modal measurements originate from the same source. We examined the plasticity of this cross-modal common-cause prior. In a learning phase, participants were exposed to a series of audiovisual stimuli that were either consistently spatiotemporally congruent or consistently incongruent; participants' audiovisual spatial integration was measured before and after this exposure. We fitted several Bayesian causal-inference models to the data; the models differed in the plasticity of the common-source prior. Model comparison revealed that, for the majority of the participants, the common-cause prior changed during the learning phase. Our findings reveal that short periods of exposure to audiovisual stimuli with a consistent causal relationship can modify the common-cause prior. In accordance with previous studies, both exposure conditions could either strengthen or weaken the common-cause prior at the participant level. Simulations imply that the direction of the prior update might be mediated by the degree of sensory noise (the variability of the measurements of the same signal across trials) during the learning phase.
Collapse
|
19
|
Riem L, Beardsley SA, Obeidat AZ, Schmit BD. Visual oscillation effects on dynamic balance control in people with multiple sclerosis. J Neuroeng Rehabil 2022; 19:90. [PMID: 35978431 PMCID: PMC9382748 DOI: 10.1186/s12984-022-01060-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Accepted: 06/15/2022] [Indexed: 12/03/2022] Open
Abstract
Background People with multiple sclerosis (PwMS) have balance deficits while ambulating through environments that contain moving objects or visual manipulations to perceived self-motion. However, their ability to parse object from self-movement has not been explored. The purpose of this research was to examine the effect of medial–lateral oscillations of the visual field and of objects within the scene on gait in PwMS and healthy age-matched controls using virtual reality (VR). Methods Fourteen PwMS (mean age 49 ± 11 years, functional gait assessment score of 27.8 ± 1.8, and Berg Balance scale score 54.7 ± 1.5) and eleven healthy controls (mean age: 53 ± 12 years) participated in this study. Dynamic balance control was assessed while participants walked on a treadmill at a self-selected speed while wearing a VR headset that projected an immersive forest scene. Visual conditions consisted of (1) no visual manipulations (speed-matched anterior/posterior optical flow), (2) 0.175 m mediolateral translational oscillations of the scene that consisted of low pairing (0.1 and 0.31 Hz) or (3) high pairing (0.15 and 0.465 Hz) frequencies, (4) 5 degree medial–lateral rotational oscillations of virtual trees at a low frequency pairing (0.1 and 0.31 Hz), and (5) a combination of the tree and scene movements in (3) and (4). Results We found that both PwMS and controls exhibited greater instability and visuomotor entrainment to simulated mediolateral translation of the visual field (scene) during treadmill walking. This was demonstrated by significant (p < 0.05) increases in mean step width and variability and center of mass sway. Visuomotor entrainment was demonstrated by high coherence between center of mass sway and visual motion (magnitude square coherence = ~ 0.5 to 0.8). Only PwMS exhibited significantly greater instability (higher step width variability and center of mass sway) when objects moved within the scene (i.e., swaying trees). 
Conclusion Results suggest the presence of visual motion processing errors in PwMS that reduced dynamic stability. Specifically, object motion (via tree sway) was not effectively parsed from the observer’s self-motion. Identifying this distinction between visual object motion and self-motion detection in MS provides insight regarding stability control in environments with excessive external movement, such as those encountered in daily life.
Collapse
Affiliation(s)
- Lara Riem
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, P.O. Box 1881, Milwaukee, WI, 53201-1881, USA
| | - Scott A Beardsley
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, P.O. Box 1881, Milwaukee, WI, 53201-1881, USA
| | - Ahmed Z Obeidat
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
| | - Brian D Schmit
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, P.O. Box 1881, Milwaukee, WI, 53201-1881, USA.
| |
Collapse
|
20
|
Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022; 39:125-137. [PMID: 35821337 PMCID: PMC9849545 DOI: 10.1007/s12264-022-00916-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 04/29/2022] [Indexed: 01/22/2023] Open
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with a high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional thoughts about the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future studies.
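The "statistically Bayesian-optimal" integration this review describes reduces, for Gaussian cues, to inverse-variance (reliability-weighted) averaging; a minimal sketch (function name and parameter values are ours, for illustration):

```python
def integrate(s_vis, sigma_vis, s_vest, sigma_vest):
    """Reliability-weighted fusion of visual and vestibular heading estimates.

    Each cue is weighted by its reliability (inverse variance); the fused
    estimate is more precise than either cue alone.
    """
    w_vis = sigma_vis ** -2
    w_vest = sigma_vest ** -2
    s_hat = (w_vis * s_vis + w_vest * s_vest) / (w_vis + w_vest)
    var_hat = 1.0 / (w_vis + w_vest)   # always below each single-cue variance
    return s_hat, var_hat
```

With a noisy visual cue at 10 deg (sigma 2) and a reliable vestibular cue at 0 deg (sigma 1), the fused heading lands much closer to the vestibular estimate, mirroring the reliability-based weighting reported behaviorally.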
Collapse
|
21
|
Shams L, Beierholm U. Bayesian causal inference: A unifying neuroscience theory. Neurosci Biobehav Rev 2022; 137:104619. [PMID: 35331819 DOI: 10.1016/j.neubiorev.2022.104619] [Citation(s) in RCA: 37] [Impact Index Per Article: 18.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Revised: 02/21/2022] [Accepted: 03/10/2022] [Indexed: 01/08/2023]
Abstract
Understanding the brain and the principles governing neural processing requires theories that are parsimonious, can account for a diverse set of phenomena, and can make testable predictions. Here, we review the theory of Bayesian causal inference, which has been tested, refined, and extended in a variety of tasks in humans and other primates by several research groups. Bayesian causal inference is normative and has explained human behavior in a vast number of tasks, including unisensory and multisensory perceptual tasks as well as sensorimotor and motor tasks, and has accounted for counter-intuitive findings. The theory has made novel predictions that have been tested and confirmed empirically, and recent studies have started to map its algorithms and neural implementation in the human brain. The parsimony of the theory, the diversity of the phenomena it has explained, and its ability to illuminate brain function at all three of Marr's levels of analysis make Bayesian causal inference a strong neuroscience theory. This also highlights the importance of collaborative and multi-disciplinary research for the development of new theories in neuroscience.
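For readers new to the model reviewed here, its core computation can be sketched for two spatial cues (a hedged illustration in the spirit of the standard formulation; variable names, the zero-mean prior, and parameter values are our assumptions):

```python
import math

def normpdf(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def posterior_common(x_v, x_a, var_v, var_a, var_p, p_common):
    """P(C=1 | x_v, x_a): probability that visual and auditory measurements
    share one cause, for Gaussian noise and a zero-mean Gaussian spatial prior."""
    # C=1: both measurements arise from a single source s ~ N(0, var_p)
    denom = var_v * var_a + var_v * var_p + var_a * var_p
    like1 = math.exp(-0.5 * ((x_v - x_a) ** 2 * var_p
                             + x_v ** 2 * var_a + x_a ** 2 * var_v) / denom) \
            / (2 * math.pi * math.sqrt(denom))
    # C=2: independent sources, each ~ N(0, var_p)
    like2 = normpdf(x_v, 0.0, var_v + var_p) * normpdf(x_a, 0.0, var_a + var_p)
    return p_common * like1 / (p_common * like1 + (1 - p_common) * like2)

def estimate_aud(x_v, x_a, var_v, var_a, var_p, p_common):
    """Model-averaged auditory location estimate."""
    pc = posterior_common(x_v, x_a, var_v, var_a, var_p, p_common)
    s_c1 = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
    s_c2 = (x_a / var_a) / (1 / var_a + 1 / var_p)
    return pc * s_c1 + (1 - pc) * s_c2
```

Small audiovisual discrepancies yield a high common-cause posterior and near-complete fusion; large discrepancies push the posterior toward segregation, so the auditory estimate stays close to the auditory measurement.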
Collapse
Affiliation(s)
- Ladan Shams
- Departments of Psychology, BioEngineering, and Neuroscience Interdepartmental Program, University of California, Los Angeles, USA.
| | | |
Collapse
|
22
|
Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022; 11:74971. [PMID: 35642599 PMCID: PMC9159750 DOI: 10.7554/elife.74971] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 05/17/2022] [Indexed: 11/17/2022] Open
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
Collapse
Affiliation(s)
- HyungGoo R Kim
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
| | - Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
| |
Collapse
|
23
|
Noel JP, Shivkumar S, Dokka K, Haefner RM, Angelaki DE. Aberrant causal inference and presence of a compensatory mechanism in autism spectrum disorder. eLife 2022; 11:71866. [PMID: 35579424 PMCID: PMC9170250 DOI: 10.7554/elife.71866] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Accepted: 05/15/2022] [Indexed: 12/02/2022] Open
Abstract
Autism spectrum disorder (ASD) is characterized by a panoply of social, communicative, and sensory anomalies. As such, a central goal of computational psychiatry is to ascribe the heterogeneous phenotypes observed in ASD to a limited set of canonical computations that may have gone awry in the disorder. Here, we posit causal inference - the process of inferring a causal structure linking sensory signals to hidden world causes - as one such computation. We show that audio-visual integration is intact in ASD and in line with optimal models of cue combination, yet multisensory behavior is anomalous in ASD because this group operates under an internal model favoring integration (vs. segregation). Paradoxically, during explicit reports of common cause across spatial or temporal disparities, individuals with ASD were less and not more likely to report common cause, particularly at small cue disparities. Formal model fitting revealed differences in both the prior probability for common cause (p-common) and choice biases, which are dissociable in implicit but not explicit causal inference tasks. Together, this pattern of results suggests (i) different internal models in attributing world causes to sensory signals in ASD relative to neurotypical individuals given identical sensory cues, and (ii) the presence of an explicit compensatory mechanism in ASD, with these individuals putatively having learned to compensate for their bias to integrate in explicit reports.
Collapse
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, United States
| | | | - Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
| | - Ralf M Haefner
- Brain and Cognitive Sciences, University of Rochester, Rochester, United States
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York City, United States
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
| |
Collapse
|
24
|
Hong F, Badde S, Landy MS. Causal inference regulates audiovisual spatial recalibration via its influence on audiovisual perception. PLoS Comput Biol 2021; 17:e1008877. [PMID: 34780469 PMCID: PMC8629398 DOI: 10.1371/journal.pcbi.1008877] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 11/29/2021] [Accepted: 10/26/2021] [Indexed: 11/23/2022] Open
Abstract
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.

Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to what extent both modalities should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this aim, we conducted a classical audiovisual recalibration experiment in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants' unimodal localization responses before and after the audiovisual recalibration. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study; this model is also able to replicate contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a sensory measurement and the perceptual estimate for the same sensory modality. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.
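The three candidate update rules compared in this abstract can be caricatured in a few lines (a hedged sketch with made-up learning rates and signatures, not the paper's fitted models):

```python
def reliability_based(x_v, x_a, var_v, var_a, rate=0.1):
    """(a) Split the audiovisual discrepancy so the LESS reliable cue shifts more.
    Returns (auditory shift, visual shift)."""
    d = x_v - x_a
    w_a = var_a / (var_v + var_a)
    return rate * w_a * d, -rate * (1 - w_a) * d

def fixed_ratio(x_v, x_a, ratio_a=0.9, rate=0.1):
    """(b) Recalibrate each modality by a fixed fraction of the discrepancy."""
    d = x_v - x_a
    return rate * ratio_a * d, -rate * (1 - ratio_a) * d

def causal_inference(x_v, x_a, s_hat_a, s_hat_v, rate=0.1):
    """(c) Shift each measurement toward that modality's perceptual estimate,
    which already reflects cue reliabilities and the inferred common cause."""
    return rate * (s_hat_a - x_a), rate * (s_hat_v - x_v)
```

In rule (c) the percepts `s_hat_a` and `s_hat_v` would come from a Bayesian causal-inference model, which is how reliability and the common-cause inference enter recalibration only indirectly.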
Collapse
Affiliation(s)
- Fangfang Hong
- Department of Psychology, New York University, New York City, New York, United States of America
- * E-mail:
| | - Stephanie Badde
- Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
| | - Michael S. Landy
- Department of Psychology, New York University, New York City, New York, United States of America
- Center for Neural Science, New York University, New York City, New York, United States of America
| |
Collapse
|
25
|
Ferrari A, Noppeney U. Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biol 2021; 19:e3001465. [PMID: 34793436 PMCID: PMC8639080 DOI: 10.1371/journal.pbio.3001465] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Revised: 12/02/2021] [Accepted: 11/01/2021] [Indexed: 11/22/2022] Open
Abstract
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals' causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
Collapse
Affiliation(s)
- Ambra Ferrari
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| |
Collapse
|
26
|
Noppeney U. Solving the causal inference problem. Trends Cogn Sci 2021; 25:1013-1014. [PMID: 34561193 DOI: 10.1016/j.tics.2021.09.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Accepted: 09/07/2021] [Indexed: 11/26/2022]
Abstract
Perception requires the brain to infer whether signals arise from common causes and should hence be integrated or else be treated independently. Rideaux et al. show that a feedforward network can perform causal inference in visuovestibular motion estimation by reading out activity from neurons tuned to congruent and opposite directions.
Collapse
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands.
| |
Collapse
|
27
|
Noel JP, Angelaki DE. Cognitive, Systems, and Computational Neurosciences of the Self in Motion. Annu Rev Psychol 2021; 73:103-129. [PMID: 34546803 DOI: 10.1146/annurev-psych-021021-103038] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated, a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
Collapse
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA;
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tandon School of Engineering, New York University, New York, NY 11201, USA
| |
Collapse
|
28
|
|
29
|
Abstract
We perceive our environment through multiple independent sources of sensory input. The brain is tasked with deciding whether multiple signals are produced by the same or different events (i.e., with solving the problem of causal inference). Here, we train a neural network to solve causal inference by either combining or separating visual and vestibular inputs in order to estimate self- and scene motion. We find that the network recapitulates key neurophysiological (i.e., congruent and opposite neurons) and behavioral (e.g., reliability-based cue weighting) properties of biological systems. We show how congruent and opposite neurons support motion estimation and how the balance in activity between these subpopulations determines whether to combine or separate multisensory signals.

Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If there is a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction (“congruent” neurons), while others prefer opposing directions (“opposite” neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation.
Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
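The "balance between subpopulations" idea described above can be caricatured in a few lines of Python. This is an illustrative sketch of the decision principle only, not the authors' trained network: the cosine tuning, unit counts, and function names are our own assumptions.

```python
import numpy as np

def combine_or_separate(visual_dir, vestibular_dir, n_units=8):
    """Toy readout of congruent vs. opposite unit pools.

    Congruent units respond most when visual and vestibular motion
    directions agree; opposite units respond most when they oppose.
    Integration is chosen when the congruent pool dominates.
    (A caricature of the balance described in the abstract, not the
    trained network itself.)
    """
    delta = np.deg2rad(visual_dir - vestibular_dir)
    congruent = n_units * max(np.cos(delta), 0.0)   # peaks at 0 deg offset
    opposite = n_units * max(-np.cos(delta), 0.0)   # peaks at 180 deg offset
    return "combine" if congruent >= opposite else "separate"

print(combine_or_separate(10.0, 15.0))    # small conflict
print(combine_or_separate(0.0, 180.0))    # opposing cues
```

A small cue conflict leaves the congruent pool dominant (integrate), while opposing directions hand the decision to the opposite pool (segregate).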
Collapse
|
30
|
Basu H, Schwarz TL. QuoVadoPro, an Autonomous Tool for Measuring Intracellular Dynamics using Temporal Variance. Curr Protoc Cell Biol 2021; 87:e108. [PMID: 32569415 DOI: 10.1002/cpcb.108] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Trafficking of intracellular cargo is essential to cellular function and can be defective in pathological states including cancer and neurodegeneration. Tools to quantify intracellular traffic are thus necessary for understanding this fundamental cellular process, studying disease mechanisms, and testing the effects of therapeutic pharmaceuticals. In this article we introduce an algorithm called QuoVadoPro that autonomously quantifies the movement of fluorescently tagged intracellular cargo. QuoVadoPro infers the extent of intracellular motility based on the variance of pixel illumination in a series of time-lapse images. The algorithm is an unconventional approach to the automatic measurement of intracellular traffic and is suitable for quantifying movements of intracellular cargo under diverse experimental paradigms. QuoVadoPro is particularly useful to measure intracellular cargo movement in non-neuronal cells, where cargo trafficking occurs as short movements in mixed directions. The algorithm can be applied to images with low temporal or spatial resolutions and to intracellular cargo with varying shapes or sizes, like mitochondria or endoplasmic reticulum: situations in which conventional methods such as kymography and particle tracking cannot be applied. In this article we present a stepwise protocol for using the QuoVadoPro software, illustrate its methodology with common examples, discuss critical parameters for reliable data analysis, and demonstrate its use with a previously published example. © 2020 Wiley Periodicals LLC. Basic Protocol: QuoVadoPro, an autonomous tool for measuring intracellular dynamics using temporal variance.
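The core idea — inferring motility from the variance of pixel intensities across a time-lapse stack — can be sketched in NumPy. This is a minimal illustration of the temporal-variance principle, not the QuoVadoPro implementation; the threshold rule and all names are our own assumptions.

```python
import numpy as np

def motility_index(stack, threshold_sd=2.0):
    """Estimate intracellular motility from a time-lapse stack.

    stack: array of shape (T, H, W) of fluorescence intensities.
    Pixels whose intensity varies strongly over time are treated as
    covering moving cargo; static structures contribute low variance.
    """
    var_map = np.var(stack.astype(float), axis=0)   # per-pixel temporal variance
    cutoff = var_map.mean() + threshold_sd * var_map.std()
    moving = var_map > cutoff                       # high-variance pixels
    return moving.mean()                            # fraction of image flagged as moving

# Synthetic demo: a static background plus one strongly fluctuating patch.
rng = np.random.default_rng(0)
stack = np.full((20, 32, 32), 100.0) + rng.normal(0, 1, (20, 32, 32))
stack[:, 10:14, 10:14] += rng.normal(0, 25, (20, 4, 4))  # "moving" cargo region
print(round(motility_index(stack), 3))
```

In the demo, only pixels inside the fluctuating patch exceed the variance cutoff, so the index approximates the patch's share of the image.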
Collapse
Affiliation(s)
- Himanish Basu
- Kirby Neurobiology Center, Boston Children's Hospital, Boston, Massachusetts
- Division of Medical Sciences, Harvard Medical School, Boston, Massachusetts
| | - Thomas L Schwarz
- Kirby Neurobiology Center, Boston Children's Hospital, Boston, Massachusetts
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts
| |
Collapse
|
31
|
Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021; 138:1-23. [PMID: 33676086 DOI: 10.1016/j.cortex.2021.02.001] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Revised: 01/23/2021] [Accepted: 02/02/2021] [Indexed: 11/29/2022]
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
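The precision benefit from congruent stimuli that the review describes follows from reliability-weighted (inverse-variance) cue fusion, which can be sketched directly. This is the standard normative formula, not the authors' analysis; the example numbers and function name are ours.

```python
def fuse(x_a, var_a, x_v, var_v):
    """Reliability-weighted fusion of two Gaussian cue estimates.

    Each cue is weighted by its reliability (1/variance); the fused
    variance is smaller than either cue's alone, which is the accuracy
    and response-time benefit congruent multisensory stimuli provide.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    x_fused = w_a * x_a + (1 - w_a) * x_v
    var_fused = 1 / (1 / var_a + 1 / var_v)
    return x_fused, var_fused

# A noisy auditory cue at 10 deg and a precise visual cue at 0 deg:
x, v = fuse(x_a=10.0, var_a=8.0, x_v=0.0, var_v=2.0)
print(x, v)  # -> 2.0 1.6
```

The fused estimate is pulled toward the more reliable (visual) cue, and its variance (1.6) beats the better single cue (2.0) — the quantity that age-related changes in sensory reliability would shift.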
Collapse
Affiliation(s)
- Samuel A Jones
- The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK.
| | - Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands.
| |
Collapse
|
32
|
Guterstam A, Larsson DEO, Szczotka J, Ehrsson HH. Duplication of the bodily self: a perceptual illusion of dual full-body ownership and dual self-location. ROYAL SOCIETY OPEN SCIENCE 2020; 7:201911. [PMID: 33489299 PMCID: PMC7813251 DOI: 10.1098/rsos.201911] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/09/2020] [Accepted: 11/02/2020] [Indexed: 06/12/2023]
Abstract
Previous research has shown that it is possible to use multisensory stimulation to induce the perceptual illusion of owning supernumerary limbs, such as two right arms. However, it remains unclear whether the coherent feeling of owning a full-body may be duplicated in the same manner and whether such a dual full-body illusion could be used to split the unitary sense of self-location into two. Here, we examined whether healthy human participants can experience simultaneous ownership of two full-bodies, located either close in parallel or in two separate spatial locations. A previously described full-body illusion, based on visuo-tactile stimulation of an artificial body viewed from the first-person perspective (1PP) via head-mounted displays, was adapted to a dual-body setting and quantified in five experiments using questionnaires, a behavioural self-location task and threat-evoked skin conductance responses. The results of experiments 1-3 showed that synchronous visuo-tactile stimulation of two bodies viewed from the 1PP lying in parallel next to each other induced a significant illusion of dual full-body ownership. In experiment 4, we failed to find support for our working hypothesis that splitting the visual scene into two, so that each of the two illusory bodies was placed in distinct spatial environments, would lead to dual self-location. In a final exploratory experiment (no. 5), we found preliminary support for an illusion of dual self-location and dual body ownership by using dynamic changes between the 1PPs of two artificial bodies and/or a common third-person perspective in the ceiling of the testing room. These findings suggest that healthy people, under certain conditions of multisensory perceptual ambiguity, may experience dual body ownership and dual self-location. These findings suggest that the coherent sense of the bodily self located at a single place in space is the result of an active and dynamic perceptual integration process.
Collapse
Affiliation(s)
- Arvid Guterstam
- Department of Psychology, Princeton University, Princeton, NJ, USA
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | | | - Joanna Szczotka
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - H. Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
| |
Collapse
|
33
|
The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. [PMID: 33127626 PMCID: PMC7688306 DOI: 10.1523/eneuro.0259-20.2020] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2020] [Revised: 10/17/2020] [Accepted: 10/22/2020] [Indexed: 12/03/2022] Open
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
Collapse
|
34
|
Abstract
During self-motion, an independently moving object generates retinal motion that is the vector sum of its world-relative motion and the optic flow caused by the observer's self-motion. A hypothesized mechanism for the computation of an object's world-relative motion is flow parsing, in which the optic flow field due to self-motion is globally subtracted from the retinal flow field. This subtraction generates a bias in perceived object direction (in retinal coordinates) away from the optic flow vector at the object's location. Despite psychophysical evidence for flow parsing in humans, the neural mechanisms underlying the process are unknown. To build the framework for investigation of the neural basis of flow parsing, we trained macaque monkeys to discriminate the direction of a moving object in the presence of optic flow simulating self-motion. Like humans, monkeys showed biases in object direction perception consistent with subtraction of background optic flow attributable to self-motion. The size of perceptual biases generally depended on the magnitude of the expected optic flow vector at the location of the object, which was contingent on object position and self-motion velocity. There was a modest effect of an object's depth on flow-parsing biases, which reached significance in only one of two subjects. Adding vestibular self-motion signals to optic flow facilitated flow parsing, increasing biases in direction perception. Our findings indicate that monkeys exhibit perceptual hallmarks of flow parsing, setting the stage for the examination of the neural mechanisms underlying this phenomenon.
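The flow-parsing computation described above is, at its core, a vector subtraction, which a short sketch makes concrete. This illustrates the hypothesized subtraction only, under the assumption that the self-motion flow at the object's location is known; the function name and example vectors are ours.

```python
import numpy as np

def flow_parse(retinal_motion, predicted_self_flow):
    """Recover world-relative object motion by flow parsing.

    The object's retinal motion is modelled as the vector sum of its
    world-relative motion and the optic flow generated at its location
    by the observer's self-motion; subtracting the predicted self-motion
    flow leaves the world-relative component.
    """
    return np.asarray(retinal_motion) - np.asarray(predicted_self_flow)

# Object moves straight up in the world (0, 1) while self-motion adds a
# rightward flow component (2, 0) at the object's location:
retinal = np.array([2.0, 1.0])          # vector sum seen on the retina
world = flow_parse(retinal, [2.0, 0.0])
print(world)                            # -> [0. 1.]
```

Incomplete subtraction of the predicted flow would leave a residual bias in the direction of the flow vector — the perceptual bias measured in the experiments.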
Collapse
Affiliation(s)
- Nicole E Peltier
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York, NY, USA
| | - Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, USA
| |
Collapse
|
35
|
Mohl JT, Pearson JM, Groh JM. Monkeys and humans implement causal inference to simultaneously localize auditory and visual stimuli. J Neurophysiol 2020; 124:715-727. [PMID: 32727263 DOI: 10.1152/jn.00046.2020] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
Abstract
The environment is sampled by multiple senses, which are woven together to produce a unified perceptual state. However, optimally unifying such signals requires assigning particular signals to the same or different underlying objects or events. Many prior studies (especially in animals) have assumed fusion of cross-modal information, whereas recent work in humans has begun to probe the appropriateness of this assumption. Here we present results from a novel behavioral task in which both monkeys (Macaca mulatta) and humans localized visual and auditory stimuli and reported their perceived sources through saccadic eye movements. When the locations of visual and auditory stimuli were widely separated, subjects made two saccades, while when the two stimuli were presented at the same location they made only a single saccade. Intermediate levels of separation produced mixed response patterns: a single saccade to an intermediate position on some trials or separate saccades to both locations on others. The distribution of responses was well described by a hierarchical causal inference model that accurately predicted both the explicit "same vs. different" source judgments as well as biases in localization of the source(s) under each of these conditions. The results from this task are broadly consistent with prior work in humans across a wide variety of analogous tasks, extending the study of multisensory causal inference to nonhuman primates and to a natural behavioral task with both a categorical assay of the number of perceived sources and a continuous report of the perceived position of the stimuli.
NEW & NOTEWORTHY We developed a novel behavioral paradigm for the study of multisensory causal inference in both humans and monkeys and found that both species make causal judgments in the same Bayes-optimal fashion. To our knowledge, this is the first demonstration of behavioral causal inference in animals, and this cross-species comparison lays the groundwork for future experiments using neuronal recording techniques that are impractical or impossible in human subjects.
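The "same vs. different source" judgment in such hierarchical models reduces to a posterior probability over a common cause. A minimal sketch of the standard Bayesian causal inference likelihood ratio (with Gaussian cues and a zero-mean spatial prior) follows; this is the textbook model form, not the authors' fitted model, and all parameter values are illustrative.

```python
import math

def p_common(x_v, x_a, sigma_v, sigma_a, sigma_p, prior_common=0.5):
    """Posterior probability that visual and auditory cues share one cause.

    Generative model: a source s ~ N(0, sigma_p^2) produces noisy
    measurements x_v and x_a. Under a common cause both measurements
    come from the same s; otherwise each comes from an independent source.
    """
    sv2, sa2, sp2 = sigma_v**2, sigma_a**2, sigma_p**2
    # Likelihood of (x_v, x_a) given one common source, s integrated out.
    var1 = sv2 * sa2 + sv2 * sp2 + sa2 * sp2
    like1 = math.exp(-0.5 * ((x_v - x_a)**2 * sp2 + x_v**2 * sa2 + x_a**2 * sv2)
                     / var1) / (2 * math.pi * math.sqrt(var1))
    # Likelihood of (x_v, x_a) given two independent sources.
    like2 = math.exp(-0.5 * (x_v**2 / (sv2 + sp2) + x_a**2 / (sa2 + sp2))) \
            / (2 * math.pi * math.sqrt((sv2 + sp2) * (sa2 + sp2)))
    return prior_common * like1 / (prior_common * like1 + (1 - prior_common) * like2)

print(p_common(1.0, 1.5, 1.0, 2.0, 10.0))   # nearby cues: high p(common)
print(p_common(-8.0, 8.0, 1.0, 2.0, 10.0))  # widely separated: low p(common)
```

A high posterior predicts a single saccade to a fused location; a low posterior predicts separate saccades to each source, matching the behavioral pattern across separations.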
Collapse
Affiliation(s)
- Jeff T Mohl
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
| | - John M Pearson
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Biostatistics and Bioinformatics, Duke University Medical School, Durham, North Carolina
| | - Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
| |
Collapse
|
36
|
Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 2020; 23:1004-1015. [PMID: 32541964 PMCID: PMC7474851 DOI: 10.1038/s41593-020-0656-0] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Accepted: 05/14/2020] [Indexed: 12/28/2022]
Abstract
Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. We examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates, but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.
Collapse
|
37
|
French RL, DeAngelis GC. Multisensory neural processing: from cue integration to causal inference. CURRENT OPINION IN PHYSIOLOGY 2020; 16:8-13. [PMID: 32968701 DOI: 10.1016/j.cophys.2020.04.004] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Neurophysiological studies of multisensory processing have largely focused on how the brain integrates information from different sensory modalities to form a coherent percept. However, in the natural environment, an important extra step is needed: the brain faces the problem of causal inference, which involves determining whether different sources of sensory information arise from the same environmental cause, such that integrating them is advantageous. Behavioral and computational studies have provided a strong foundation for studying causal inference, but studies of its neural basis have only recently been undertaken. This review focuses on recent advances regarding how the brain infers the causes of sensory inputs and uses this information to make robust perceptual estimates.
Collapse
Affiliation(s)
- Ranran L French
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY
| | - Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY
| |
Collapse
|
38
|
Premotor cortex implements causal inference in multisensory own-body perception. Proc Natl Acad Sci U S A 2019; 116:19771-19773. [PMID: 31506356 PMCID: PMC6778256 DOI: 10.1073/pnas.1914000116] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023] Open
|