1. Rohe T, Hesse K, Ehlis AC, Noppeney U. Multisensory perceptual and causal inference is largely preserved in medicated post-acute individuals with schizophrenia. PLoS Biol 2024; 22:e3002790. PMID: 39255328. DOI: 10.1371/journal.pbio.3002790.
Abstract
Hallucinations and perceptual abnormalities in psychosis are thought to arise from imbalanced integration of prior information and sensory inputs. We combined psychophysics, Bayesian modeling, and electroencephalography (EEG) to investigate potential changes in perceptual and causal inference in response to audiovisual flash-beep sequences in medicated individuals with schizophrenia who exhibited limited psychotic symptoms. Seventeen participants with schizophrenia and 23 healthy controls reported either the number of flashes or the number of beeps of audiovisual sequences that varied in their audiovisual numeric disparity across trials. Both groups balanced sensory integration and segregation in line with Bayesian causal inference rather than resorting to simpler heuristics. Both also showed comparable weighting of prior information regarding the signals' causal structure, although the schizophrenia group slightly overweighted prior information about the number of flashes or beeps. At the neural level, both groups computed Bayesian causal inference through dynamic encoding of independent estimates of the flash and beep counts, followed by estimates that flexibly combine audiovisual inputs. Our results demonstrate that the core neurocomputational mechanisms for audiovisual perceptual and causal inference in number estimation tasks are largely preserved in our limited sample of medicated post-acute individuals with schizophrenia. Future research should explore whether these findings generalize to unmedicated patients with acute psychotic symptoms.
Affiliations
- Tim Rohe: Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany; Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Klaus Hesse: Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Ann-Christine Ehlis: Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany; Tübingen Center for Mental Health (TüCMH), Tübingen, Germany
- Uta Noppeney: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
2. Scheller M, Fang H, Sui J. Self as a prior: The malleability of Bayesian multisensory integration to social salience. Br J Psychol 2024; 115:185-205. PMID: 37747452. DOI: 10.1111/bjop.12683.
Abstract
Our everyday perceptual experiences are grounded in the integration of information within and across our senses. Because of this direct behavioural relevance, cross-modal integration retains a degree of contextual flexibility, extending even to social relevance. However, how social relevance modulates cross-modal integration remains unclear. To investigate possible mechanisms, Experiment 1 tested the principles of audio-visual integration for numerosity estimation by deriving a Bayesian optimal observer model, with a perceptual prior estimated from empirical data, to explain perceptual biases. Such perceptual priors may shift towards locations of high salience in the stimulus space. Our results showed that the tendency to over- or underestimate numerosity, expressed in the frequency and strength of fission and fusion illusions, depended on the actual event numerosity. Experiment 2 replicated the effects of social relevance on multisensory integration from Scheller & Sui (2022, JEP:HPP) using a smaller number of events, thereby favouring the opposite illusion through enhanced influence of the prior. In line with the idea that the self acts like a prior, the more frequently observed illusion (the one more malleable to prior influences) was modulated by self-relevance. Our findings suggest that the self can influence perception by acting like a prior in cue integration, biasing perceptual estimates towards areas of high self-relevance.
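The role of the perceptual prior in Experiment 1 follows the textbook Gaussian prior-likelihood combination. Below is a minimal, hypothetical sketch of that general principle, not the authors' fitted observer model; all numbers are illustrative:

```python
def map_estimate(x, sigma_sensory, mu_prior, sigma_prior):
    """Posterior mean when a Gaussian likelihood (noisy sensory count x)
    is combined with a Gaussian perceptual prior: a precision-weighted
    average of sensory evidence and prior expectation."""
    w = (1 / sigma_sensory**2) / (1 / sigma_sensory**2 + 1 / sigma_prior**2)
    return w * x + (1 - w) * mu_prior
```

With these illustrative values, a noisy count of three events combined with a prior centred on two is pulled to 2.8 under a broad prior (sigma_prior = 2.0) and to 2.2 under a sharp one (sigma_prior = 0.5): a prior that is sharpened, or shifted toward self-relevant regions of the stimulus space, biases estimates accordingly.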
Affiliations
- Meike Scheller: Department of Psychology, University of Aberdeen, Aberdeen, UK; Department of Psychology, Durham University, Durham, UK
- Huilin Fang: Department of Psychology, University of Aberdeen, Aberdeen, UK
- Jie Sui: Department of Psychology, University of Aberdeen, Aberdeen, UK
3. Zhu H, Beierholm U, Shams L. The overlooked role of unisensory precision in multisensory research. Curr Biol 2024; 34:R229-R231. PMID: 38531310. DOI: 10.1016/j.cub.2024.01.057.
Abstract
Zhu et al. present an alternative explanation for the weaker multisensory illusions in football goalkeepers compared with outfielders and non-athletes, showing that better unisensory precision in goalkeepers can also account for this effect.
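Their alternative explanation rests on standard reliability-weighted cue fusion: how strongly one modality biases another depends on the relative unisensory precisions. A minimal sketch of that general principle (an illustration, not the authors' analysis):

```python
def fused_bias(x_v, x_a, sigma_v, sigma_a):
    """Crossmodal bias on the visual estimate under forced fusion:
    the fused estimate is a precision-weighted average of the cues,
    so the pull toward the auditory cue grows with visual noise.
    Returns the fused estimate minus the visual cue."""
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    return w_a * (x_a - x_v)
```

With illustrative numbers, one flash (x_v = 1) paired with two beeps (x_a = 2) yields an auditory pull of 0.8 of an event when visual noise is 1.0, but only 0.5 when visual noise is reduced to 0.5: sharper unisensory precision alone predicts a weaker illusion, without any change in integration tendency.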
Affiliations
- Haocheng Zhu: Department of Psychology, Soochow University, Suzhou 215031, China
- Ulrik Beierholm: Department of Psychology, University of Durham, Durham DH1 3LE, UK
- Ladan Shams: Department of Psychology, Bioengineering, and Neuroscience Interdepartmental Program, University of California, Los Angeles, Los Angeles, CA 90095, USA
4. Jones SA, Noppeney U. Older adults preserve audiovisual integration through enhanced cortical activations, not by recruiting new regions. PLoS Biol 2024; 22:e3002494. PMID: 38319934; PMCID: PMC10871488. DOI: 10.1371/journal.pbio.3002494.
Abstract
Effective interactions with the environment rely on the integration of multisensory signals: Our brains must efficiently combine signals that share a common source, and segregate those that do not. Healthy ageing can change or impair this process. This functional magnetic resonance imaging study assessed the neural mechanisms underlying age differences in the integration of auditory and visual spatial cues. Participants were presented with synchronous audiovisual signals at various degrees of spatial disparity and indicated their perceived sound location. Behaviourally, older adults were able to maintain localisation accuracy. At the neural level, they integrated auditory and visual cues into spatial representations along dorsal auditory and visual processing pathways similarly to their younger counterparts but showed greater activations in a widespread system of frontal, temporal, and parietal areas. According to multivariate Bayesian decoding, these areas encoded critical stimulus information beyond that which was encoded in the brain areas commonly activated by both groups. Surprisingly, however, the boost in information provided by these areas with age-related activation increases was comparable across the two age groups. This dissociation (comparable information encoded in brain activation patterns across the two age groups, but age-related increases in regional blood-oxygen-level-dependent responses) contradicts the widespread notion that older adults recruit new regions as a compensatory mechanism to encode task-relevant information. Instead, our findings suggest that activation increases in older adults reflect nonspecific or modulatory mechanisms related to less efficient or slower processing, or greater demands on attentional resources.
Affiliations
- Samuel A. Jones: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom; Department of Psychology, Nottingham Trent University, Nottingham, United Kingdom
- Uta Noppeney: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom; Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
5. Yang X, Ying C, Zhu L, Wenjing W. The neural oscillations in delta- and theta-bands contribute to divided attention in audiovisual integration. Perception 2024; 53:44-60. PMID: 37899595. DOI: 10.1177/03010066231208539.
Abstract
A key mechanism implicated in multisensory processing is neural oscillation in distinct frequency bands. Many studies have explored the modulation of attention by recording electroencephalography signals while subjects attended to one modality and ignored input from the other. However, when attention is directed toward one modality, it may not always be possible to completely shut out inputs from a different modality. Since many situations require dividing attention between audition and vision, it is imperative to investigate the neural mechanisms underlying the processing of concurrent auditory and visual sensory streams. In the present study, we designed an audiovisual semantic discrimination task in which subjects were asked to attend to both auditory and visual stimuli. We explored the contribution of lower-frequency neural oscillations to the modulation of audiovisual integration by divided attention. Our results imply that theta-band activity contributes to the early modulation, and delta-band activity to the late modulation, of audiovisual integration under divided attention. Moreover, fronto-central delta- and theta-band activity is likely a marker of divided attention in audiovisual integration, and oscillations in these bands are conducive to allocating attentional resources during dual-tasking that demands task-coordination abilities.
Affiliations
- Xi Yang: Northeast Electric Power University, P. R. China
- Chen Ying: Northeast Electric Power University, P. R. China
- Lan Zhu: Northeast Electric Power University, P. R. China
- Wang Wenjing: Northeast Electric Power University, P. R. China
6. O'Donohue M, Lacherez P, Yamamoto N. Audiovisual spatial ventriloquism is reduced in musicians. Hear Res 2023; 440:108918. PMID: 37992516. DOI: 10.1016/j.heares.2023.108918.
Abstract
There is great scientific and public interest in claims that musical training improves general cognitive and perceptual abilities. While this is controversial, recent and rather convincing evidence suggests that musical training refines the temporal integration of auditory and visual stimuli at a general level. We investigated whether musical training also affects integration in the spatial domain, via an auditory localisation experiment that measured ventriloquism (where localisation is biased towards visual stimuli on audiovisual trials) and recalibration (a unimodal localisation aftereffect). While musicians (n = 22) and non-musicians (n = 22) did not have significantly different unimodal precision or accuracy, musicians were significantly less susceptible than non-musicians to ventriloquism, with large effect sizes. We replicated these results in another experiment with an independent sample of 24 musicians and 21 non-musicians. Across both experiments, spatial recalibration did not significantly differ between the groups even though musicians resisted ventriloquism. Our results suggest that the multisensory expertise afforded by musical training refines spatial integration, a process that underpins multisensory perception.
Affiliations
- Matthew O'Donohue: Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia
- Philippe Lacherez: Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia
- Naohide Yamamoto: Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia; Queensland University of Technology (QUT), Centre for Vision and Eye Research, Kelvin Grove, QLD 4059, Australia
7. Monti M, Molholm S, Cuppini C. Atypical development of causal inference in autism inferred through a neurocomputational model. Front Comput Neurosci 2023; 17:1258590. PMID: 37927544; PMCID: PMC10620690. DOI: 10.3389/fncom.2023.1258590.
Abstract
In everyday life, the brain processes a multitude of stimuli from the surrounding environment, requiring the integration of information from different sensory modalities to form a coherent perception. This process, known as multisensory integration, enhances the brain's response to redundant congruent sensory cues. However, it is equally important for the brain to segregate sensory inputs from distinct events, to interact with and correctly perceive the multisensory environment. This problem the brain must face, known as the causal inference problem, is closely related to multisensory integration. It is widely recognized that the ability to integrate information from different senses emerges during development, as a function of our experience with multisensory stimuli. Consequently, multisensory integrative abilities are altered in individuals who have atypical experiences with cross-modal cues, such as those on the autism spectrum. However, no research has yet been conducted on the developmental trajectories of causal inference and its relationship with experience. Here, we used a neurocomputational model to simulate and investigate the development of causal inference in both typically developing children and those on the autism spectrum. Our results indicate that higher exposure to cross-modal cues accelerates the acquisition of causal inference abilities, and that a minimum level of experience with multisensory stimuli is required to develop fully mature behavior. We then simulated the altered developmental trajectory of causal inference in individuals with autism by assuming reduced multisensory experience during training. The results suggest that causal inference reaches complete maturity much later in these individuals than in neurotypical individuals. Furthermore, we discuss the underlying neural mechanisms and network architecture involved in these processes, highlighting that the development of causal inference follows the evolution of the mechanisms subserving multisensory integration. Overall, this study provides a computational framework unifying causal inference and multisensory integration, which allows us to suggest neural mechanisms and provide testable predictions about the development of such abilities in typically developing and autistic children.
Affiliations
- Melissa Monti: Department of Electrical, Electronic, and Information Engineering Guglielmo Marconi, University of Bologna, Bologna, Italy
- Sophie Molholm: Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- Cristiano Cuppini: Department of Electrical, Electronic, and Information Engineering Guglielmo Marconi, University of Bologna, Bologna, Italy
8. Wang X, Wu Y, Xing Z, Cui X, Gao M, Tang X. Modal-based attention modulates the redundant-signals effect: Role of unimodal target probability. Perception 2023; 52:97-115. PMID: 36415087. DOI: 10.1177/03010066221136675.
Abstract
Multisensory integration includes two behavioral manifestations: the modality dominance effect and the redundant-signals effect (RSE). RSE is a multisensory improvement effect in which individuals respond more quickly and accurately to bimodal audiovisual (AV) targets than to unimodal auditory (A) or visual (V) targets. Previous studies have confirmed that RSE is the product of modality interactions between different modalities. The goal of this study was to systematically investigate the effects of the modality dominance manipulated by modal-based attention and unimodal target probability on RSE. The results showed that when paying attention to both the A and V modalities (Exp. 1), RSE was not significantly different between unimodal target probabilities. When selectively paying attention to the A modality (Exp. 2A), RSE was also not significantly different between unimodal target probabilities. However, when selectively paying attention to the V modality (Exp. 2B), the magnitude of RSE showed a significant decreasing trend with the increasing probability of V targets. Our study is the first to reveal that the unimodal target probability significantly modulates RSE in visual selective attention, and this modulatory effect of the unimodal target probability on RSE is opposite to the modulatory effect on the modality dominance effect.
Affiliations
- Min Gao: Liaoning Normal University, China
9. Fisher VL, Dean CL, Nave CS, Parkins EV, Kerkhoff WG, Kwakye LD. Increases in sensory noise predict attentional disruptions to audiovisual speech perception. Front Hum Neurosci 2023; 16:1027335. PMID: 36684833; PMCID: PMC9846366. DOI: 10.3389/fnhum.2022.1027335.
Abstract
We receive information about the world around us from multiple senses, which combine in a process known as multisensory integration. Multisensory integration has been shown to be dependent on attention; however, the neural mechanisms underlying this effect are poorly understood. The current study investigates whether changes in sensory noise explain the effect of attention on multisensory integration and whether attentional modulations to multisensory integration occur via modality-specific mechanisms. A task based on the McGurk illusion was used to measure multisensory integration while attention was manipulated via a concurrent auditory or visual task. Sensory noise was measured within modality based on variability in unisensory performance and was used to predict attentional changes to McGurk perception. Consistent with previous studies, reports of the McGurk illusion decreased when accompanied by a secondary task; however, this effect was stronger for the secondary visual (as opposed to auditory) task. While auditory noise was not influenced by either secondary task, visual noise increased with the addition of the secondary visual task specifically. Interestingly, visual noise accounted for significant variability in attentional disruptions to the McGurk illusion. Overall, these results strongly suggest that sensory noise may underlie attentional alterations to multisensory integration in a modality-specific manner. Future studies are needed to determine whether this finding generalizes to other types of multisensory integration and attentional manipulations. This line of research may inform future studies of attentional alterations to sensory processing in neurological disorders such as schizophrenia, autism, and ADHD.
Affiliations
- Victoria L. Fisher: Department of Neuroscience, Oberlin College, Oberlin, OH, United States; Yale University School of Medicine and the Connecticut Mental Health Center, New Haven, CT, United States
- Cassandra L. Dean: Department of Neuroscience, Oberlin College, Oberlin, OH, United States; Roche/Genentech Neurodevelopment & Psychiatry Teams Product Development, Neuroscience, South San Francisco, CA, United States
- Claire S. Nave: Department of Neuroscience, Oberlin College, Oberlin, OH, United States
- Emma V. Parkins: Department of Neuroscience, Oberlin College, Oberlin, OH, United States; Neuroscience Graduate Program, University of Cincinnati, Cincinnati, OH, United States
- Willa G. Kerkhoff: Department of Neuroscience, Oberlin College, Oberlin, OH, United States; Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, United States
- Leslie D. Kwakye: Department of Neuroscience, Oberlin College, Oberlin, OH, United States
10. Quintero SI, Shams L, Kamal K. Changing the Tendency to Integrate the Senses. Brain Sci 2022; 12:1384. PMID: 36291318; PMCID: PMC9599885. DOI: 10.3390/brainsci12101384.
Abstract
Integration of sensory signals that emanate from the same source, such as the visual of lip articulations and the sound of the voice of a speaking individual, can improve perception of the source signal (e.g., speech). Because momentary sensory inputs are typically corrupted with internal and external noise, there is almost always a discrepancy between the inputs, facing the perceptual system with the problem of determining whether the two signals were caused by the same source or different sources. Thus, whether or not multisensory stimuli are integrated and the degree to which they are bound is influenced by factors such as the prior expectation of a common source. We refer to this factor as the tendency to bind stimuli, or for short, binding tendency. In theory, the tendency to bind sensory stimuli can be learned by experience through the acquisition of the probabilities of the co-occurrence of the stimuli. It can also be influenced by cognitive knowledge of the environment. The binding tendency varies across individuals and can also vary within an individual over time. Here, we review the studies that have investigated the plasticity of binding tendency. We discuss the protocols that have been reported to produce changes in binding tendency, the candidate learning mechanisms involved in this process, the possible neural correlates of binding tendency, and outstanding questions pertaining to binding tendency and its plasticity. We conclude by proposing directions for future research and argue that understanding mechanisms and recipes for increasing binding tendency can have important clinical and translational applications for populations or individuals with a deficiency in multisensory integration.
Affiliations
- Saul I Quintero: Department of Psychology, University of California, Los Angeles, CA 90095, USA
- Ladan Shams: Department of Psychology, University of California, Los Angeles, CA 90095, USA; Department of Bioengineering, University of California, Los Angeles, CA 90089, USA; Neuroscience Interdepartmental Program, University of California, Los Angeles, CA 90089, USA
- Kimia Kamal: Department of Psychology, University of California, Los Angeles, CA 90095, USA
11. Musical training refines audiovisual integration but does not influence temporal recalibration. Sci Rep 2022; 12:15292. PMID: 36097277; PMCID: PMC9468170. DOI: 10.1038/s41598-022-19665-9.
Abstract
When the brain is exposed to a temporal asynchrony between the senses, it will shift its perception of simultaneity towards the previously experienced asynchrony (temporal recalibration). It is unknown whether recalibration depends on how accurately an individual integrates multisensory cues or on experiences they have had over their lifespan. Hence, we assessed whether musical training modulated audiovisual temporal recalibration. Musicians (n = 20) and non-musicians (n = 18) made simultaneity judgements to flash-tone stimuli before and after adaptation to asynchronous (± 200 ms) flash-tone stimuli. We analysed these judgements via an observer model that described the left and right boundaries of the temporal integration window (decisional criteria) and the amount of sensory noise that affected these judgements. Musicians’ boundaries were narrower (closer to true simultaneity) than non-musicians’, indicating stricter criteria for temporal integration, and they also exhibited enhanced sensory precision. However, while both musicians and non-musicians experienced cumulative and rapid recalibration, these recalibration effects did not differ between the groups. Unexpectedly, cumulative recalibration was caused by auditory-leading but not visual-leading adaptation. Overall, these findings suggest that the precision with which observers perceptually integrate audiovisual temporal cues does not predict their susceptibility to recalibration.
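The observer model described above can be sketched in a few lines: the internal measurement of asynchrony is the physical SOA plus Gaussian sensory noise, and the observer responds "simultaneous" when that measurement falls between two decisional criteria. This is a hedged sketch of that general scheme with made-up parameter values, not the paper's fitted model:

```python
from math import erf, sqrt

def p_simultaneous(soa, b_lo, b_hi, sigma):
    """Probability of a 'simultaneous' response at a given audiovisual
    SOA (ms): the measurement is the SOA plus zero-mean Gaussian noise
    (sd sigma), and the observer responds 'simultaneous' when it lands
    between the decisional criteria b_lo and b_hi."""
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF
    return phi((b_hi - soa) / sigma) - phi((b_lo - soa) / sigma)
```

Narrowing the criteria, say from ±150 ms to ±80 ms (the stricter-criterion pattern reported for musicians), lowers the "simultaneous" rate at a 200 ms asynchrony, while lowering sigma sharpens the psychometric function; the two parameters capture distinct routes to "better" temporal integration.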
12. Semantically congruent audiovisual integration with modal-based attention accelerates auditory short-term memory retrieval. Atten Percept Psychophys 2022; 84:1625-1634. PMID: 35641858. DOI: 10.3758/s13414-021-02437-4.
Abstract
Evidence has shown that the benefits of multisensory integration for unisensory perception are asymmetric: auditory perception can receive greater multisensory benefit, especially when the attention focus is directed toward a task-irrelevant visual stimulus. At present, it remains unclear whether the benefits of semantically (in)congruent multisensory integration with modal-based attention for subsequent unisensory short-term memory (STM) retrieval are also asymmetric. Using a delayed matching-to-sample paradigm, the present study investigated this issue by manipulating the attention focus during multisensory memory encoding. The results revealed that both visual and auditory STM retrieval reaction times were faster under semantically congruent multisensory conditions than under unisensory memory encoding conditions. We suggest that coherent multisensory representation formation might be optimized by restricted multisensory encoding and can be rapidly triggered by subsequent unisensory memory retrieval demands. Crucially, auditory STM retrieval was exclusively accelerated by semantically congruent multisensory memory encoding, indicating that the less effective sensory modality of memory retrieval relies more on the prior formation of a coherent multisensory representation optimized by modal-based attention.
13. Shams L, Beierholm U. Bayesian causal inference: A unifying neuroscience theory. Neurosci Biobehav Rev 2022; 137:104619. PMID: 35331819. DOI: 10.1016/j.neubiorev.2022.104619.
Abstract
Understanding of the brain and the principles governing neural processing requires theories that are parsimonious, can account for a diverse set of phenomena, and can make testable predictions. Here, we review the theory of Bayesian causal inference, which has been tested, refined, and extended in a variety of tasks in humans and other primates by several research groups. Bayesian causal inference is normative and has explained human behavior in a vast number of tasks including unisensory and multisensory perceptual tasks, sensorimotor, and motor tasks, and has accounted for counter-intuitive findings. The theory has made novel predictions that have been tested and confirmed empirically, and recent studies have started to map its algorithms and neural implementation in the human brain. The parsimony of the theory, the diversity of the phenomena it has explained, and the fact that it illuminates brain function at all three of Marr's levels of analysis make Bayesian causal inference a strong neuroscience theory. This also highlights the importance of collaborative and multi-disciplinary research for the development of new theories in neuroscience.
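For readers unfamiliar with the formalism, the core computation the review describes can be sketched for spatial localisation: infer the posterior probability that two cues share a cause, then mix the fused and segregated estimates by that posterior (model averaging, one of several decision rules discussed in this literature). A minimal sketch with illustrative, unfitted parameters:

```python
from math import exp, pi, sqrt

def bci_estimate(x_a, x_v, sigma_a=2.0, sigma_v=1.0,
                 mu_p=0.0, sigma_p=10.0, p_common=0.5):
    """Auditory location estimate for one audiovisual trial under
    Bayesian causal inference with model averaging.
    Returns (estimate, posterior probability of a common cause)."""
    # Likelihood of both measurements under one common cause (C = 1)
    var1 = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
            + sigma_v**2 * sigma_p**2)
    like_c1 = exp(-((x_a - x_v)**2 * sigma_p**2
                    + (x_a - mu_p)**2 * sigma_v**2
                    + (x_v - mu_p)**2 * sigma_a**2) / (2 * var1)) \
        / (2 * pi * sqrt(var1))
    # Likelihood under two independent causes (C = 2)
    va, vv = sigma_a**2 + sigma_p**2, sigma_v**2 + sigma_p**2
    like_c2 = (exp(-(x_a - mu_p)**2 / (2 * va)) / sqrt(2 * pi * va)
               * exp(-(x_v - mu_p)**2 / (2 * vv)) / sqrt(2 * pi * vv))
    # Posterior over causal structures (Bayes' rule)
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))
    # Precision-weighted optimal estimates under each causal structure
    s_fused = ((x_a / sigma_a**2 + x_v / sigma_v**2 + mu_p / sigma_p**2)
               / (1 / sigma_a**2 + 1 / sigma_v**2 + 1 / sigma_p**2))
    s_audio = ((x_a / sigma_a**2 + mu_p / sigma_p**2)
               / (1 / sigma_a**2 + 1 / sigma_p**2))
    # Model averaging: mix the estimates by the causal posterior
    return post_c1 * s_fused + (1 - post_c1) * s_audio, post_c1
```

With nearly coincident cues the model infers a likely common cause and the auditory estimate is pulled toward the visual cue (ventriloquism); with widely discrepant cues it infers separate causes and the signals are largely segregated, reproducing the integration-to-segregation transition the review emphasises.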
Affiliations
- Ladan Shams: Departments of Psychology, BioEngineering, and Neuroscience Interdepartmental Program, University of California, Los Angeles, USA
14. Hirst RJ, Cassarino M, Kenny RA, Newell FN, Setti A. Urban and rural environments differentially shape multisensory perception in ageing. Neuropsychology, Development, and Cognition. Section B, Aging, Neuropsychology and Cognition 2022; 29:197-212. PMID: 33427038. DOI: 10.1080/13825585.2020.1859084.
Abstract
Recent studies suggest that the lived environment can affect cognition across the lifespan. We examined, in a large cohort of older adults (n = 3447), whether susceptibility to a multisensory illusion, the Sound-Induced Flash Illusion (SIFI), was influenced by the reported urbanity of current and childhood (at age 14 years) residence. If urban environments help to shape healthy perceptual function, we predicted reduced SIFI susceptibility in urban dwellers. Participants reporting urban, compared with rural, childhood residence were less susceptible to SIFI at longer Stimulus-Onset Asynchronies (SOAs). Those currently residing in urban environments were more susceptible to SIFI at longer SOAs, particularly if they scored low on general cognitive function. These findings held even when controlling for several covariates, such as age, sex, education, social participation and cognitive ability. Exposure to urban environments in childhood may influence individual differences in perception and offer a multisensory perceptual benefit in older age.
Affiliations
- Rebecca J Hirst: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland; The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland
- Marica Cassarino: School of Applied Psychology, University College Cork, Cork, Ireland
- Rose Anne Kenny: The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland; Mercer Institute for Successful Ageing, St. James Hospital, Dublin, Ireland
- Fiona N Newell: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Annalisa Setti: The Irish Longitudinal Study on Ageing, Trinity College Dublin, Dublin, Ireland; School of Applied Psychology, University College Cork, Cork, Ireland
15
|
Yu H, Wang A, Li Q, Liu Y, Yang J, Takahashi S, Ejima Y, Zhang M, Wu J. Semantically Congruent Bimodal Presentation with Divided-Modality Attention Accelerates Unisensory Working Memory Retrieval. Perception 2021; 50:917-932. [PMID: 34841972 DOI: 10.1177/03010066211052943] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Although previous studies have shown that semantic multisensory integration can be differentially modulated by attention focus, it remains unclear whether attentionally mediated multisensory perceptual facilitation could impact further cognitive performance. Using a delayed matching-to-sample paradigm, the present study investigated the effect of semantically congruent bimodal presentation on subsequent unisensory working memory (WM) performance by manipulating attention focus. The results showed that unisensory WM retrieval was faster in the semantically congruent condition than in the incongruent multisensory encoding condition. However, such a result was only found in the divided-modality attention condition. This result indicates that a robust multisensory representation was constructed during semantically congruent multisensory encoding with divided-modality attention; this representation then accelerated unisensory WM performance, especially auditory WM retrieval. Additionally, an overall faster unisensory WM retrieval was observed under the modality-specific selective attention condition compared with the divided-modality condition, indicating that the division of attention to address two modalities demanded more central executive resources to encode and integrate crossmodal information and to maintain a constructed multisensory representation, leaving few resources for WM retrieval. Finally, the present finding may support the amodal view that WM has an amodal central storage component that is used to maintain modal-based attention-optimized multisensory representations.
Affiliation(s)
- Hongtao Yu
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Japan
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Yoshimichi Ejima
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Japan
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Japan
- Jinglong Wu
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Japan
16
Kvamme TL, Sarmanlu M, Bailey C, Overgaard M. Neurofeedback Modulation of the Sound-induced Flash Illusion Using Parietal Cortex Alpha Oscillations Reveals Dependency on Prior Multisensory Congruency. Neuroscience 2021; 482:1-17. [PMID: 34838934 DOI: 10.1016/j.neuroscience.2021.11.028] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Revised: 11/12/2021] [Accepted: 11/19/2021] [Indexed: 01/27/2023]
Abstract
Spontaneous neural oscillations are key predictors of perceptual decisions to bind multisensory signals into a unified percept. Research links decreased alpha power in the posterior cortices to attention and audiovisual binding in the sound-induced flash illusion (SIFI) paradigm. This suggests that controlling alpha oscillations would be a way of controlling audiovisual binding. In the present feasibility study we used MEG-neurofeedback to train one group of subjects to increase left/right and another to increase right/left alpha power ratios in the parietal cortex. We tested for changes in audiovisual binding in a SIFI paradigm where flashes appeared in both hemifields. Results showed that the neurofeedback induced a significant asymmetry in alpha power for the left/right group, not seen for the right/left group. Corresponding asymmetry changes in audiovisual binding in illusion trials (with 2, 3, and 4 beeps paired with 1 flash) were not apparent. Exploratory analyses showed that neurofeedback training effects were present for illusion trials with the lowest numeric disparity (i.e., 2 beeps and 1 flash trials) only if the previous trial had high congruency (2 beeps and 2 flashes). Our data suggest that the relation between parietal alpha power (an index of attention) and its effect on audiovisual binding is dependent on the learned causal structure in the previous stimulus. These results suggest that low alpha power biases observers towards audiovisual binding when they have learned that audiovisual signals originate from a common origin, consistent with a Bayesian causal inference account of multisensory perception.
Affiliation(s)
- Timo L Kvamme
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
- Mesud Sarmanlu
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
- Christopher Bailey
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
- Morten Overgaard
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
17
Precision control for a flexible body representation. Neurosci Biobehav Rev 2021; 134:104401. [PMID: 34736884 DOI: 10.1016/j.neubiorev.2021.10.023] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Revised: 10/20/2021] [Accepted: 10/21/2021] [Indexed: 11/24/2022]
Abstract
Adaptive body representation requires the continuous integration of multisensory inputs within a flexible 'body model' in the brain. The present review evaluates the idea that this flexibility is augmented by the contextual modulation of sensory processing 'top-down', which can be described as precision control within predictive coding formulations of Bayesian inference. Specifically, I focus on the proposal that an attenuation of proprioception may facilitate the integration of conflicting visual and proprioceptive bodily cues. Firstly, I review empirical work suggesting that the processing of visual vs proprioceptive body position information can be contextualised 'top-down'; for instance, by adopting specific attentional task sets. Building on this, I review research showing a similar contextualisation of visual vs proprioceptive information processing in the rubber hand illusion and in visuomotor adaptation. Together, the reviewed literature suggests that proprioception, despite its indisputable importance for body perception and action control, can be attenuated top-down (through precision control) to facilitate the contextual adaptation of the brain's body model to novel visual feedback.
18
Ferrari A, Noppeney U. Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biol 2021; 19:e3001465. [PMID: 34793436 PMCID: PMC8639080 DOI: 10.1371/journal.pbio.3001465] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Revised: 12/02/2021] [Accepted: 11/01/2021] [Indexed: 11/22/2022] Open
Abstract
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals' causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
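The Bayesian causal inference framework invoked in this abstract has a standard computational form (Körding et al., 2007) for audiovisual spatial localization: compute the posterior probability that both signals share one source, form a reliability-weighted fused estimate and a segregated unisensory estimate, and average them by that posterior. The sketch below is an illustrative implementation of that general model, not the authors' fitted code; all parameter values (sensory noise SDs, prior width, prior probability of a common cause) are hypothetical.

```python
import numpy as np

def bayesian_causal_inference(x_a, x_v, sigma_a=4.0, sigma_v=1.0,
                              sigma_p=15.0, p_common=0.5):
    """Model-averaged auditory location estimate for one trial.

    x_a, x_v : noisy auditory and visual measurements (deg)
    sigma_a, sigma_v : sensory noise SDs; sigma_p : SD of a zero-mean spatial prior
    p_common : prior probability that both signals share one source
    """
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the measurements under a common cause (C = 1) ...
    var_sum = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                     / var_sum) / (2 * np.pi * np.sqrt(var_sum))
    # ... and under two independent causes (C = 2).
    like_c2 = (np.exp(-0.5 * x_a**2 / (va + vp)) / np.sqrt(2 * np.pi * (va + vp))
               * np.exp(-0.5 * x_v**2 / (vv + vp)) / np.sqrt(2 * np.pi * (vv + vp)))

    # Posterior probability of a common cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Reliability-weighted fusion (C = 1) vs. auditory-only estimate (C = 2).
    s_fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    s_segregated = (x_a / va) / (1 / va + 1 / vp)

    # Model averaging: combine both estimates weighted by the posterior.
    return post_c1 * s_fused + (1 - post_c1) * s_segregated
```

With these hypothetical parameters, a small audiovisual disparity yields strong visual capture of the auditory estimate, while a large disparity drives the posterior toward independent causes and the estimate back toward the auditory measurement, reproducing the integration-versus-segregation trade-off described above.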
Affiliation(s)
- Ambra Ferrari
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
19
The impact of joint attention on the sound-induced flash illusions. Atten Percept Psychophys 2021; 83:3056-3068. [PMID: 34561815 PMCID: PMC8550716 DOI: 10.3758/s13414-021-02347-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/30/2021] [Indexed: 11/20/2022]
Abstract
Humans coordinate their focus of attention with others, either by gaze following or prior agreement. Though the effects of joint attention on perceptual and cognitive processing tend to be examined in purely visual environments, they should also be evident in multisensory settings. According to a prevalent hypothesis, joint attention enhances visual information encoding and processing, over and above individual attention. If two individuals jointly attend to the visual components of an audiovisual event, this should affect the weighting of visual information during multisensory integration. We tested this prediction in this preregistered study, using the well-documented sound-induced flash illusions, where the integration of an incongruent number of visual flashes and auditory beeps results in a single flash being seen as two (fission illusion) and two flashes as one (fusion illusion). Participants were asked to count flashes either alone or together, and were expected to be less prone to both fission and fusion illusions when they jointly attended to the visual targets. However, illusions were as frequent when people attended to the flashes alone or with someone else, even though they responded faster during joint attention. Our results reveal the limitations of the theory that joint attention enhances visual processing, as it does not affect temporal audiovisual integration.
20
Zulfiqar I, Moerel M, Lage-Castellanos A, Formisano E, De Weerd P. Audiovisual Interactions Among Near-Threshold Oscillating Stimuli in the Far Periphery Are Phase-Dependent. Front Hum Neurosci 2021; 15:642341. [PMID: 34526884 PMCID: PMC8435850 DOI: 10.3389/fnhum.2021.642341] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Accepted: 07/22/2021] [Indexed: 11/30/2022] Open
Abstract
Recent studies have highlighted the possible contributions of direct connectivity between early sensory cortices to audiovisual integration. Anatomical connections between the early auditory and visual cortices are concentrated in visual sites representing the peripheral field of view. Here, we aimed to engage early sensory interactive pathways with simple, far-peripheral audiovisual stimuli (auditory noise and visual gratings). Using a modulation detection task in one modality performed at an 84% correct threshold level, we investigated multisensory interactions by simultaneously presenting weak stimuli from the other modality in which the temporal modulation was barely detectable (at 55 and 65% correct detection performance). Furthermore, we manipulated the temporal congruence between the cross-sensory streams. We found evidence for an influence of barely detectable visual stimuli on the response times for auditory stimuli, but not for the reverse effect. These visual-to-auditory influences only occurred for specific phase-differences (at onset) between the modulated audiovisual stimuli. We discuss our findings in the light of a possible role of direct interactions between early visual and auditory areas, along with contributions from the higher-order association cortex. In sum, our results extend the behavioral evidence of audiovisual processing to the far periphery, and suggest, within this specific experimental setting, an asymmetry between the auditory influence on visual processing and the visual influence on auditory processing.
Affiliation(s)
- Isma Zulfiqar
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
- Michelle Moerel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
- Maastricht Brain Imaging Centre (MBIC), Maastricht, Netherlands
- Agustin Lage-Castellanos
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
- Maastricht Brain Imaging Centre (MBIC), Maastricht, Netherlands
- Peter De Weerd
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
21
The development of visuotactile congruency effects for sequences of events. J Exp Child Psychol 2021; 207:105094. [PMID: 33714049 DOI: 10.1016/j.jecp.2021.105094] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 12/11/2020] [Accepted: 01/07/2021] [Indexed: 11/23/2022]
Abstract
Sensitivity to the temporal coherence of visual and tactile signals increases perceptual reliability and is evident during infancy. However, it is not clear how, or whether, bidirectional visuotactile interactions change across childhood. Furthermore, no study has explored whether viewing a body modulates how children perceive visuotactile sequences of events. Here, children aged 5-7 years (n = 19), 8 and 9 years (n = 21), and 10-12 years (n = 24) and adults (n = 20) discriminated the number of target events (one or two) in a task-relevant modality (touch or vision) and ignored distractors (one or two) in the opposing modality. While participants performed the task, an image of either a hand or an object was presented. Children aged 5-7 years and 8 and 9 years showed larger crossmodal interference from visual distractors when discriminating tactile targets than the converse. Across age groups, this was strongest when two visual distractors were presented with one tactile target, implying a "fission-like" crossmodal effect (perceiving one event as two events). There was no influence of visual context (viewing a hand or non-hand image) on visuotactile interactions for any age group. Our results suggest robust interference from discontinuous visual information on tactile discrimination of sequences of events during early and middle childhood. These findings are discussed with respect to age-related changes in sensory dominance, selective attention, and multisensory processing.
22
Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021; 138:1-23. [PMID: 33676086 DOI: 10.1016/j.cortex.2021.02.001] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Revised: 01/23/2021] [Accepted: 02/02/2021] [Indexed: 11/29/2022]
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
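One key ingredient of the normative Bayesian framework this review outlines is reliability-weighted ("forced fusion") cue combination, in which each cue is weighted by its inverse variance and the fused estimate is more precise than either cue alone. The few lines below are a minimal sketch of that textbook computation with hypothetical noise parameters; they are not code from the review, but they illustrate how an age-related change in one cue's reliability shifts the weights.

```python
def fused_estimate(x_a, x_v, sigma_a, sigma_v):
    """Maximum-likelihood (forced-fusion) combination of two cues.

    x_a, x_v : auditory and visual measurements
    sigma_a, sigma_v : noise SDs of each cue (hypothetical values)
    Returns the fused estimate and its variance; the fused variance
    is never larger than that of the more reliable single cue.
    """
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)      # weight on audition
    estimate = w_a * x_a + (1 - w_a) * x_v            # inverse-variance weighting
    variance = (sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2)
    return estimate, variance
```

For example, degrading the visual cue (raising sigma_v, as might occur with age-related sensory decline) shifts the weight toward audition, moving the fused estimate toward the auditory measurement; this is one route by which changed sensory reliability alone can produce the age differences the review discusses.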
Affiliation(s)
- Samuel A Jones
- The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK
- Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
23
Hirst RJ, McGovern DP, Setti A, Shams L, Newell FN. What you see is what you hear: Twenty years of research using the Sound-Induced Flash Illusion. Neurosci Biobehav Rev 2020; 118:759-774. [DOI: 10.1016/j.neubiorev.2020.09.006] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Revised: 07/06/2020] [Accepted: 09/03/2020] [Indexed: 01/17/2023]
24
Representational momentum in vision and touch: Visual motion information biases tactile spatial localization. Atten Percept Psychophys 2020; 82:2618-2629. [PMID: 32140935 PMCID: PMC7343758 DOI: 10.3758/s13414-020-01989-1] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
After an object disappears, the vanishing point is shifted in the direction of motion, a phenomenon known as representational momentum. The present study focused on the relationship between motion information and spatial location in a crossmodal setting. In two visuotactile experiments, we studied how motion information in one sensory modality affects the perceived final location of a motion signal (congruent vs. incongruent left-right motion direction) in another modality. The results revealed a unidirectional crossmodal influence of motion information on spatial localization performance. While visual motion information influenced the perceived final location of the tactile stimulus, tactile motion information had no influence on visual localization. These results therefore extend the existing literature on crossmodal influences on spatial location and are discussed in relation to current theories of multisensory perception.
25
Badde S, Navarro KT, Landy MS. Modality-specific attention attenuates visual-tactile integration and recalibration effects by reducing prior expectations of a common source for vision and touch. Cognition 2020; 197:104170. [PMID: 32036027 DOI: 10.1016/j.cognition.2019.104170] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Revised: 12/19/2019] [Accepted: 12/20/2019] [Indexed: 10/25/2022]
Abstract
At any moment in time, streams of information reach the brain through the different senses. Given this wealth of noisy information, it is essential that we select information of relevance - a function fulfilled by attention - and infer its causal structure to eventually take advantage of redundancies across the senses. Yet, the role of selective attention during causal inference in cross-modal perception is unknown. We tested experimentally whether the distribution of attention across vision and touch enhances cross-modal spatial integration (visual-tactile ventriloquism effect, Expt. 1) and recalibration (visual-tactile ventriloquism aftereffect, Expt. 2) compared to modality-specific attention, and then used causal-inference modeling to isolate the mechanisms behind the attentional modulation. In both experiments, we found stronger effects of vision on touch under distributed than under modality-specific attention. Model comparison confirmed that participants used Bayes-optimal causal inference to localize visual and tactile stimuli presented as part of a visual-tactile stimulus pair, whereas simultaneously collected unity judgments - indicating whether the visual-tactile pair was perceived as spatially-aligned - relied on a sub-optimal heuristic. The best-fitting model revealed that attention modulated sensory and cognitive components of causal inference. First, distributed attention led to an increase of sensory noise compared to selective attention toward one modality. Second, attending to both modalities strengthened the stimulus-independent expectation that the two signals belong together, the prior probability of a common source for vision and touch. Yet, only the increase in the expectation of vision and touch sharing a common source was able to explain the observed enhancement of visual-tactile integration and recalibration effects with distributed attention. In contrast, the change in sensory noise explained only a fraction of the observed enhancements, as its consequences vary with the overall level of noise and stimulus congruency. Increased sensory noise leads to enhanced integration effects for visual-tactile pairs with a large spatial discrepancy, but reduced integration effects for stimuli with a small or no cross-modal discrepancy. In sum, our study indicates a weak a priori association between visual and tactile spatial signals that can be strengthened by distributing attention across both modalities.
Affiliation(s)
- Stephanie Badde
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA
- Karen T Navarro
- Department of Psychology, University of Minnesota, 75 E River Rd., Minneapolis, MN, 55455, USA
- Michael S Landy
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA
26
Bruns P. The Ventriloquist Illusion as a Tool to Study Multisensory Processing: An Update. Front Integr Neurosci 2019; 13:51. [PMID: 31572136 PMCID: PMC6751356 DOI: 10.3389/fnint.2019.00051] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2019] [Accepted: 08/22/2019] [Indexed: 12/02/2022] Open
Abstract
Ventriloquism, the illusion that a voice appears to come from the moving mouth of a puppet rather than from the actual speaker, is one of the classic examples of multisensory processing. In the laboratory, this illusion can be reliably induced by presenting simple meaningless audiovisual stimuli with a spatial discrepancy between the auditory and visual components. Typically, the perceived location of the sound source is biased toward the location of the visual stimulus (the ventriloquism effect). The strength of the visual bias reflects the relative reliability of the visual and auditory inputs as well as prior expectations that the two stimuli originated from the same source. In addition to the ventriloquist illusion, exposure to spatially discrepant audiovisual stimuli results in a subsequent recalibration of unisensory auditory localization (the ventriloquism aftereffect). In the past years, the ventriloquism effect and aftereffect have seen a resurgence as an experimental tool to elucidate basic mechanisms of multisensory integration and learning. For example, recent studies have: (a) revealed top-down influences from the reward and motor systems on cross-modal binding; (b) dissociated recalibration processes operating at different time scales; and (c) identified brain networks involved in the neuronal computations underlying multisensory integration and learning. This mini review article provides a brief overview of established experimental paradigms to measure the ventriloquism effect and aftereffect before summarizing these pathbreaking new advancements. Finally, it is pointed out how the ventriloquism effect and aftereffect could be utilized to address some of the current open questions in the field of multisensory research.
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
27
Sanders P, Thompson B, Corballis P, Searchfield G. On the Timing of Signals in Multisensory Integration and Crossmodal Interactions: a Scoping Review. Multisens Res 2019; 32:533-573. [PMID: 31137004 DOI: 10.1163/22134808-20191331] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2018] [Accepted: 04/24/2019] [Indexed: 11/19/2022]
Abstract
A scoping review was undertaken to explore research investigating early interactions and integration of auditory and visual stimuli in the human brain. The focus was on methods used to study low-level multisensory temporal processing using simple stimuli in humans, and how this research has informed our understanding of multisensory perception. The study of multisensory temporal processing probes how the relative timing between signals affects perception. Several tasks, illusions, computational models, and neuroimaging techniques were identified in the literature search. Research into early audiovisual temporal processing in special populations was also reviewed. Recent research has continued to provide support for early integration of crossmodal information. These early interactions can influence higher-level factors, and vice versa. Temporal relationships between auditory and visual stimuli influence multisensory perception, and likely play a substantial role in solving the 'correspondence problem' (how the brain determines which sensory signals belong together, and which should be segregated).
Affiliation(s)
- Philip Sanders
- Section of Audiology, University of Auckland, Auckland, New Zealand; Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, New Zealand
- Benjamin Thompson
- Centre for Brain Research, University of Auckland, New Zealand; School of Optometry and Vision Science, University of Auckland, Auckland, New Zealand; School of Optometry and Vision Science, University of Waterloo, Waterloo, Canada
- Paul Corballis
- Centre for Brain Research, University of Auckland, New Zealand; Department of Psychology, University of Auckland, Auckland, New Zealand
- Grant Searchfield
- Section of Audiology, University of Auckland, Auckland, New Zealand; Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, New Zealand
28
Liang P, Jiang J, Ding Q, Tang X, Roy S. Memory Load Influences Taste Sensitivities. Front Psychol 2018; 9:2533. [PMID: 30618955 PMCID: PMC6297800 DOI: 10.3389/fpsyg.2018.02533] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2018] [Accepted: 11/27/2018] [Indexed: 11/23/2022] Open
Abstract
Previous reports have demonstrated that taste perception can be influenced by internal brain states or external environmental stimulation. Although there are different hypotheses about the cross-modal interactive process, it remains unclear how the brain modulates and processes taste perception, particularly under different memory loads. Here we address this question. To do so, we assigned the participants different memory loads, in the form of varying lengths of alphanumerical items, before they tasted different concentrations of sweet or bitter tastants. After tasting, they were asked to recall the alphanumerical items they had been assigned. Our results show that memory load reduces sweet and bitter taste sensitivities, from the sub-threshold level to high concentrations: the higher the memory load, the lower the taste sensitivity. The study extends our previous results and supports our earlier hypothesis that cognitive status, such as the general stress of memory load, influences sensory perception.
Collapse
Affiliation(s)
- Pei Liang
- Department of Psychology/Facuty of Education, Hubei University, Hubei, China.,Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei Univeristy, Hubei, China.,The No. 2 Peoples' Hospital of Changshu, Changshu, China.,Changshu Institute of Technology, Changshu, China
| | - Jiayu Jiang
- Department of Psychology/Facuty of Education, Hubei University, Hubei, China.,Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei Univeristy, Hubei, China
| | - Qingguo Ding
- The No. 2 People's Hospital of Changshu, Changshu, China
| | - Xiaoyan Tang
- Changshu Institute of Technology, Changshu, China
| | - Soumyajit Roy
- Eco-Friendly Applied Materials Laboratory, Materials Science Centre, Department of Chemical Sciences, Indian Institute of Science Education and Research, Kolkata, India
|
29
|
Debats NB, Heuer H. Explicit knowledge of sensory non-redundancy can reduce the strength of multisensory integration. PSYCHOLOGICAL RESEARCH 2018; 84:890-906. [PMID: 30426210 DOI: 10.1007/s00426-018-1116-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2018] [Accepted: 10/30/2018] [Indexed: 11/26/2022]
Abstract
The brain integrates incoming sensory signals to a degree that depends on the signals' redundancy. Redundancy-which is commonly high when signals originate from a common physical object or event-is estimated by the brain from the signals' spatial and/or temporal correspondence. Here we tested whether verbally instructed knowledge of non-redundancy can also be used to reduce the strength of the sensory integration. We used a cursor-control task in which cursor motions in the frontoparallel plane were controlled by hand movements in the horizontal plane, yet with a small and randomly varying visuomotor rotation that created spatial discrepancies between hand and cursor positions. Consistent with previous studies, we found mutual biases in the hand and cursor position judgments, indicating partial sensory integration. The integration was reduced in strength, but not eliminated, after participants were verbally informed about the non-redundancy (i.e., the spatial discrepancies) in the hand and cursor positions. Comparisons with model predictions excluded confounding bottom-up effects of the non-redundancy instruction. Our findings thus show that participants have top-down control over the degree to which they integrate sensory information. Additionally, we found that the magnitude of this top-down modulatory capability is a reliable individual trait. A comparison between participants with and without video-gaming experience tentatively suggested a relation between top-down modulation of integration strength and attentional control.
Affiliation(s)
- Nienke B Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany.
- Cognitive Interaction Technology Center of Excellence (CITEC), Universität Bielefeld, Universitätsstrasse 25, 33615, Bielefeld, Germany.
| | - Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
|
30
|
Barel E. Effects of attention during encoding on sex differences in object location memory. INTERNATIONAL JOURNAL OF PSYCHOLOGY 2018; 54:539-547. [PMID: 29659016 DOI: 10.1002/ijop.12490] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2017] [Accepted: 03/13/2018] [Indexed: 11/09/2022]
Abstract
Attention plays a key role in memory processes and has been widely studied in various memory tasks. The role of attention in sex differences in object location memory is not clearly understood. In the present study, two experiments involving 186 participants and using an object array presented on paper were conducted to examine two encoding conditions: incidental and intentional. In each experiment, the participants were randomly assigned to divided versus full attention conditions. In the first experiment, which involved incidental encoding, women outperformed men in memorising location-exchanged objects in both the full and in the divided attention condition. In the second experiment, which involved intentional encoding, women outperformed men in memorising location-exchanged objects in the full attention condition, but not the divided attention condition. These findings deepen our knowledge regarding the role of attention in object location memory, specifically in terms of the conditions under which females have an advantage for detecting changes in an array of objects.
Affiliation(s)
- Efrat Barel
- Department of Psychology, The Max Stern Academic College of Emek Yezreel, Israel
|
31
|
Hemispheric asymmetry: Looking for a novel signature of the modulation of spatial attention in multisensory processing. Psychon Bull Rev 2018; 24:690-707. [PMID: 27586002 PMCID: PMC5486865 DOI: 10.3758/s13423-016-1154-y] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants’ attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load—that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.
|
32
|
Dissociating explicit and implicit measures of sensed hand position in tool use: Effect of relative frequency of judging different objects. Atten Percept Psychophys 2017; 80:211-221. [PMID: 29075991 DOI: 10.3758/s13414-017-1438-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In a cursor-control task, the sensed positions of cursor and hand are biased toward each other. We previously found different characteristics of implicit and explicit measures of the bias of sensed hand position toward the position of the cursor, suggesting the existence of distinct neural representations. Here we further explored differences between the two types of measure by varying the proportions of trials with explicit hand-position (H) and cursor-position (C) judgments (C20:H80, C50:H50, and C80:H20). In each trial, participants made a reaching movement to a remembered target, with the visual feedback being rotated randomly, and subsequently they judged the hand or the cursor position. Both the explicitly and implicitly measured biases of sensed hand position were stronger with a low proportion (C80:H20) than with a high proportion (C20:H80) of hand-position judgments, suggesting that both measures place more weight on the sensory modality relevant for the more frequent judgment. With balanced proportions of such judgments (C50:H50), the explicitly assessed biases were similar to those observed with a high proportion of cursor-position judgments (C80:H20), whereas the implicitly assessed biases were similar to those observed with a high proportion of hand-position judgments (C20:H80). Because strong weights of cursor-position or hand-position information may be difficult to increase further but are easy to reduce, the findings suggest that the implicit measure of the bias of sensed hand position places a relatively stronger weight on proprioceptive hand-position information, which is increased no further by a high proportion of hand-position judgments. Conversely, the explicit measure places a relatively stronger weight on visual cursor-position information.
|
33
|
The Comparison of Divided, Sustained and Selective Attention in Children with Attention Deficit Hyperactivity Disorder, Children with Specific Learning Disorder and Normal Children. RAZAVI INTERNATIONAL JOURNAL OF MEDICINE 2017. [DOI: 10.5812/rijm.12523] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
|
34
|
Odegaard B, Wozny DR, Shams L. A simple and efficient method to enhance audiovisual binding tendencies. PeerJ 2017; 5:e3143. [PMID: 28462016 PMCID: PMC5407282 DOI: 10.7717/peerj.3143] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2016] [Accepted: 03/04/2017] [Indexed: 11/20/2022] Open
Abstract
Individuals vary in their tendency to bind signals from multiple senses. For the same set of sights and sounds, one individual may frequently integrate multisensory signals and experience a unified percept, whereas another individual may rarely bind them and often experience two distinct sensations. Thus, while this binding/integration tendency is specific to each individual, it is not clear how plastic this tendency is in adulthood, and how sensory experiences may cause it to change. Here, we conducted an exploratory investigation which provides evidence that (1) the brain’s tendency to bind in spatial perception is plastic, (2) that it can change following brief exposure to simple audiovisual stimuli, and (3) that exposure to temporally synchronous, spatially discrepant stimuli provides the most effective method to modify it. These results can inform current theories about how the brain updates its internal model of the surrounding sensory world, as well as future investigations seeking to increase integration tendencies.
Affiliation(s)
- Brian Odegaard
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
| | - David R Wozny
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
| | - Ladan Shams
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States.,Department of Bioengineering, University of California, Los Angeles, Los Angeles, CA, United States.,Neuroscience Interdepartmental Program, University of California-Los Angeles, Los Angeles, CA, United States
|
35
|
Chen YC, Spence C. Assessing the Role of the 'Unity Assumption' on Multisensory Integration: A Review. Front Psychol 2017; 8:445. [PMID: 28408890 PMCID: PMC5374162 DOI: 10.3389/fpsyg.2017.00445] [Citation(s) in RCA: 79] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2016] [Accepted: 03/09/2017] [Indexed: 01/20/2023] Open
Abstract
There has been longstanding interest from both experimental psychologists and cognitive neuroscientists in the potential modulatory role of various top-down factors on multisensory integration/perception in humans. One such top-down influence, often referred to in the literature as the 'unity assumption,' is thought to occur in those situations in which an observer considers that various of the unisensory stimuli that they have been presented with belong to one and the same object or event (Welch and Warren, 1980). Here, we review the possible factors that may lead to the emergence of the unity assumption. We then critically evaluate the evidence concerning the consequences of the unity assumption from studies of the spatial and temporal ventriloquism effects, from the McGurk effect, and from the Colavita visual dominance paradigm. The research that has been published to date using these tasks provides support for the claim that the unity assumption influences multisensory perception under at least a subset of experimental conditions. We then consider whether the notion has been superseded in recent years by the introduction of priors in Bayesian causal inference models of human multisensory perception. We suggest that the prior of common cause (that is, the prior concerning whether multisensory signals originate from the same source or not) offers the most useful way to quantify the unity assumption as a continuous cognitive variable.
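The "prior of common cause" that this review proposes as a quantification of the unity assumption can be made concrete with a toy sketch. The following is our own illustration of a standard Gaussian causal-inference formulation (in the spirit of the Bayesian models the review discusses, not code from any of the reviewed studies; the function name and parameters are ours): given two noisy spatial measurements, it returns the posterior probability that they share a single source.

```python
import math

def posterior_common_cause(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """Posterior probability that auditory (x_a) and visual (x_v) measurements
    share one cause, under Gaussian noise and a zero-mean Gaussian spatial prior.
    p_common is the prior of common cause (the 'unity assumption' as a number)."""
    # Likelihood under a common cause: one source s, integrated out analytically.
    var_c = (sigma_a**2 * sigma_v**2
             + sigma_a**2 * sigma_p**2
             + sigma_v**2 * sigma_p**2)
    like_c1 = math.exp(-((x_a - x_v)**2 * sigma_p**2
                         + x_a**2 * sigma_v**2
                         + x_v**2 * sigma_a**2) / (2 * var_c)) \
              / (2 * math.pi * math.sqrt(var_c))
    # Likelihood under independent causes: each signal gets its own source.
    var_a = sigma_a**2 + sigma_p**2
    var_v = sigma_v**2 + sigma_p**2
    like_c2 = math.exp(-x_a**2 / (2 * var_a) - x_v**2 / (2 * var_v)) \
              / (2 * math.pi * math.sqrt(var_a * var_v))
    # Bayes' rule over the binary causal-structure variable.
    num = like_c1 * p_common
    return num / (num + like_c2 * (1 - p_common))
```

With these (arbitrary) noise settings, nearly coincident signals yield a high posterior of a common cause, while widely discrepant signals drive it toward zero, which is how the prior of common cause turns the unity assumption into a continuous variable.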
Affiliation(s)
| | - Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, UK
|
36
|
Bosen AK, Fleming JT, Brown SE, Allen PD, O'Neill WE, Paige GD. Comparison of congruence judgment and auditory localization tasks for assessing the spatial limits of visual capture. BIOLOGICAL CYBERNETICS 2016; 110:455-471. [PMID: 27815630 PMCID: PMC5115967 DOI: 10.1007/s00422-016-0706-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/23/2015] [Accepted: 10/19/2016] [Indexed: 06/06/2023]
Abstract
Vision typically has better spatial accuracy and precision than audition and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small, visual capture is likely to occur, and when disparity is large, visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audiovisual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner.
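The graded pull of vision on auditory localization described above is often modeled, in the forced-fusion limit, as a reliability-weighted average of the two cues. The sketch below is our own minimal illustration under Gaussian-noise assumptions (not code or parameters from the study): because the weight on each cue is proportional to its precision, a sharper visual cue dominates the fused estimate.

```python
def fused_estimate(x_a, x_v, sigma_a, sigma_v):
    """Precision-weighted fusion of an auditory cue x_a (noise sigma_a)
    and a visual cue x_v (noise sigma_v); precision = 1 / variance."""
    w_v = (1.0 / sigma_v**2) / (1.0 / sigma_v**2 + 1.0 / sigma_a**2)
    return w_v * x_v + (1.0 - w_v) * x_a
```

For example, with sigma_a = 4 and sigma_v = 1, an 8-degree audiovisual disparity yields a fused location within about half a degree of the visual target, consistent with the visual-capture pattern the study probes.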
Affiliation(s)
- Adam K Bosen
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA.
| | - Justin T Fleming
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, USA
| | - Sarah E Brown
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
| | - Paul D Allen
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, USA
| | - William E O'Neill
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, USA
| | - Gary D Paige
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, USA
|
37
|
Accumulation and decay of visual capture and the ventriloquism aftereffect caused by brief audio-visual disparities. Exp Brain Res 2016; 235:585-595. [PMID: 27837258 DOI: 10.1007/s00221-016-4820-4] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2016] [Accepted: 11/03/2016] [Indexed: 10/20/2022]
Abstract
Visual capture and the ventriloquism aftereffect resolve spatial disparities of incongruent auditory visual (AV) objects by shifting auditory spatial perception to align with vision. Here, we demonstrated the distinct temporal characteristics of visual capture and the ventriloquism aftereffect in response to brief AV disparities. In a set of experiments, subjects localized either the auditory component of AV targets (A within AV) or a second sound presented at varying delays (1-20 s) after AV exposure (A2 after AV). AV targets were trains of brief presentations (1 or 20), covering a ±30° azimuthal range, and with ±8° (R or L) disparity. We found that the magnitude of visual capture generally reached its peak within a single AV pair and did not dissipate with time, while the ventriloquism aftereffect accumulated with repetitions of AV pairs and dissipated with time. Additionally, the magnitude of the auditory shift induced by each phenomenon was uncorrelated across listeners and visual capture was unaffected by subsequent auditory targets, indicating that visual capture and the ventriloquism aftereffect are separate mechanisms with distinct effects on auditory spatial perception. Our results indicate that visual capture is a 'sample-and-hold' process that binds related objects and stores the combined percept in memory, whereas the ventriloquism aftereffect is a 'leaky integrator' process that accumulates with experience and decays with time to compensate for cross-modal disparities.
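The contrast the abstract draws between a "sample-and-hold" process and a "leaky integrator" can be caricatured in a few lines of code. This is purely our illustration of the leaky-integrator idea (the gain and decay values are arbitrary, not the study's fitted parameters): the adaptive shift grows with each exposure to a fixed disparity and decays back toward zero once exposure stops.

```python
def leaky_integrator(disparities, gain=0.1, decay=0.9):
    """Adaptive auditory shift that accumulates with repeated disparities
    and leaks away when the disparity input is zero."""
    shift, trace = 0.0, []
    for d in disparities:
        shift = decay * shift + gain * d  # leak, then add new evidence
        trace.append(shift)
    return trace
```

Under a constant disparity d the shift approaches gain * d / (1 - decay) exponentially; with gain = 0.1, decay = 0.9, and d = 8 degrees, that asymptote is 8 degrees. Visual capture, by contrast, would be "sample-and-hold": the full shift on the first audiovisual pair, then held without decay.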
|