1. Senkowski D, Engel AK. Multi-timescale neural dynamics for multisensory integration. Nat Rev Neurosci 2024; 25:625-642. PMID: 39090214. DOI: 10.1038/s41583-024-00845-7.
Abstract
Carrying out any everyday task, be it driving in traffic, conversing with friends or playing basketball, requires rapid selection, integration and segregation of stimuli from different sensory modalities. At present, even the most advanced artificial intelligence-based systems are unable to replicate the multisensory processes that the human brain routinely performs, but how neural circuits in the brain carry out these processes is still not well understood. In this Perspective, we discuss recent findings that shed fresh light on the oscillatory neural mechanisms that mediate multisensory integration (MI), including power modulations, phase resetting, phase-amplitude coupling and dynamic functional connectivity. We then consider studies that also suggest multi-timescale dynamics in intrinsic ongoing neural activity and during stimulus-driven bottom-up and cognitive top-down neural network processing in the context of MI. We propose a new concept of MI that emphasizes the critical role of neural dynamics at multiple timescales within and across brain networks, enabling the simultaneous integration, segregation, hierarchical structuring and selection of information in different time windows. To highlight predictions from our multi-timescale concept of MI, real-world scenarios in which multi-timescale processes may coordinate MI in a flexible and adaptive manner are considered.
Affiliation(s)
- Daniel Senkowski: Department of Psychiatry and Neurosciences, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Andreas K Engel: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
2. Tripathy K, Fogarty M, Svoboda AM, Schroeder ML, Rafferty SM, Richter EJ, Tracy C, Mansfield PK, Booth M, Fishell AK, Sherafati A, Markow ZE, Wheelock MD, Arbeláez AM, Schlaggar BL, Smyser CD, Eggebrecht AT, Culver JP. Mapping brain function in adults and young children during naturalistic viewing with high-density diffuse optical tomography. Hum Brain Mapp 2024; 45:e26684. PMID: 38703090. PMCID: PMC11069306. DOI: 10.1002/hbm.26684.
Abstract
Human studies of early brain development have been limited by extant neuroimaging methods. MRI scanners present logistical challenges for imaging young children, while alternative modalities like functional near-infrared spectroscopy have traditionally been limited in image quality due to sparse sampling. In addition, conventional tasks for brain mapping elicit low task engagement, high head motion, and considerable participant attrition in pediatric populations. As a result, typical and atypical developmental trajectories of processes such as language acquisition remain understudied during sensitive periods over the first years of life. We evaluate high-density diffuse optical tomography (HD-DOT) imaging combined with movie stimuli for high-resolution optical neuroimaging in awake children ranging from 1 to 7 years of age. We built an HD-DOT system with design features geared towards enhancing both image quality and child comfort. Furthermore, we characterized a library of animated movie clips as a stimulus set for brain mapping and optimized the associated data analysis pipelines. Together, these tools enabled mapping of cortical responses to movies, including features such as speech, in both adults and awake young children. This study lays the groundwork for future research to investigate response variability in larger pediatric samples and atypical trajectories of early brain development in clinical populations.
Affiliation(s)
- Kalyan Tripathy: Division of Biological and Biomedical Sciences, Washington University in St. Louis, St. Louis, Missouri, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA; Western Psychiatric Hospital, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
- Morgan Fogarty: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA; Imaging Science Program, Washington University in St. Louis, St. Louis, Missouri, USA
- Alexandra M. Svoboda: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Mariel L. Schroeder: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Sean M. Rafferty: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Edward J. Richter: Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Christopher Tracy: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Patricia K. Mansfield: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Madison Booth: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Andrew K. Fishell: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Arefeh Sherafati: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA; Department of Physics, Washington University in St. Louis, St. Louis, Missouri, USA
- Zachary E. Markow: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Muriah D. Wheelock: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Ana María Arbeláez: Department of Pediatrics, Washington University School of Medicine, St. Louis, Missouri, USA
- Bradley L. Schlaggar: Kennedy Krieger Institute, Baltimore, Maryland, USA; Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA; Department of Pediatrics, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Christopher D. Smyser: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA; Department of Pediatrics, Washington University School of Medicine, St. Louis, Missouri, USA; Department of Neurology, Washington University School of Medicine, St. Louis, Missouri, USA
- Adam T. Eggebrecht: Division of Biological and Biomedical Sciences, Washington University in St. Louis, St. Louis, Missouri, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA; Imaging Science Program, Washington University in St. Louis, St. Louis, Missouri, USA; Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, Missouri, USA; Department of Physics, Washington University in St. Louis, St. Louis, Missouri, USA; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Joseph P. Culver: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA; Imaging Science Program, Washington University in St. Louis, St. Louis, Missouri, USA; Department of Physics, Washington University in St. Louis, St. Louis, Missouri, USA; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
3. Ostrowski J, Rose M. Increases in pre-stimulus theta and alpha oscillations precede successful encoding of crossmodal associations. Sci Rep 2024; 14:7895. PMID: 38570599. PMCID: PMC10991485. DOI: 10.1038/s41598-024-58227-z.
Abstract
A central aspect of episodic memory is the formation of associations between stimuli from different modalities. Current theoretical approaches assume a functional role of ongoing oscillatory power and phase in the theta band (3-7 Hz) for the encoding of crossmodal associations. Furthermore, ongoing activity in the theta range as well as alpha (8-12 Hz) and low beta activity (13-20 Hz) before the presentation of a stimulus is thought to modulate subsequent cognitive processing, including processes that are related to memory. In this study, we tested the hypothesis that pre-stimulus characteristics of low frequency activity are relevant for the successful formation of crossmodal memory. The experimental design that was used specifically allowed for the investigation of associative memory independent from individual item memory. Participants (n = 51) were required to memorize associations between audiovisual stimulus pairs and distinguish them from newly arranged ones consisting of the same single stimuli in the subsequent recognition task. Our results show significant differences in the state of pre-stimulus theta and alpha power between remembered and not remembered crossmodal associations, clearly relating increased power to successful recognition. These differences were positively correlated with memory performance, suggesting functional relevance for behavioral measures of associative memory. Further analysis revealed similar effects in the low beta frequency ranges, indicating the involvement of different pre-stimulus-related cognitive processes. Phase-based connectivity measures in the theta band did not differ between remembered and not remembered stimulus pairs. The findings support the assumed functional relevance of theta band oscillations for the formation of associative memory and demonstrate that an increase of theta as well as alpha band oscillations in the pre-stimulus period is beneficial for the establishment of crossmodal memory.
Affiliation(s)
- Jan Ostrowski: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Michael Rose: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
4. Jones SA, Noppeney U. Older adults preserve audiovisual integration through enhanced cortical activations, not by recruiting new regions. PLoS Biol 2024; 22:e3002494. PMID: 38319934. PMCID: PMC10871488. DOI: 10.1371/journal.pbio.3002494.
Abstract
Effective interactions with the environment rely on the integration of multisensory signals: Our brains must efficiently combine signals that share a common source, and segregate those that do not. Healthy ageing can change or impair this process. This functional magnetic resonance imaging study assessed the neural mechanisms underlying age differences in the integration of auditory and visual spatial cues. Participants were presented with synchronous audiovisual signals at various degrees of spatial disparity and indicated their perceived sound location. Behaviourally, older adults were able to maintain localisation accuracy. At the neural level, they integrated auditory and visual cues into spatial representations along dorsal auditory and visual processing pathways similarly to their younger counterparts but showed greater activations in a widespread system of frontal, temporal, and parietal areas. According to multivariate Bayesian decoding, these areas encoded critical stimulus information beyond that which was encoded in the brain areas commonly activated by both groups. Surprisingly, however, the boost in information provided by these areas with age-related activation increases was comparable across the two age groups. This dissociation (comparable information encoded in brain activation patterns across the two age groups, but age-related increases in regional blood-oxygen-level-dependent responses) contradicts the widespread notion that older adults recruit new regions as a compensatory mechanism to encode task-relevant information. Instead, our findings suggest that activation increases in older adults reflect nonspecific or modulatory mechanisms related to less efficient or slower processing, or greater demands on attentional resources.
Affiliation(s)
- Samuel A. Jones: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom; Department of Psychology, Nottingham Trent University, Nottingham, United Kingdom
- Uta Noppeney: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom; Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
5. Jones SA, Noppeney U. Multisensory Integration and Causal Inference in Typical and Atypical Populations. Adv Exp Med Biol 2024; 1437:59-76. PMID: 38270853. DOI: 10.1007/978-981-99-7611-9_4.
Abstract
Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with various other human populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
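The reliability-weighted cue integration that this chapter evaluates has a simple closed form under independent Gaussian noise: each cue is weighted by its inverse variance (its reliability). A minimal sketch of that computation, with illustrative numbers not taken from the chapter:

```python
def fuse_cues(estimates, variances):
    """Reliability-weighted (inverse-variance) cue fusion.

    Under independent Gaussian noise, the statistically optimal combined
    estimate weights each cue in proportion to its reliability (1/variance);
    the fused variance is lower than that of either cue alone.
    """
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    fused = sum(r * e for r, e in zip(reliabilities, estimates)) / total
    fused_var = 1.0 / total
    return fused, fused_var

# Example: a visual location cue at 2.0 deg (variance 1.0) and an auditory
# cue at 6.0 deg (variance 4.0). The more reliable visual cue dominates.
location, variance = fuse_cues([2.0, 6.0], [1.0, 4.0])
# location = 2.8, variance = 0.8 (weights: 0.8 visual, 0.2 auditory)
```

Group differences of the kind the chapter discusses can then be framed either as changed parameters (e.g. inflated sensory variances) or as departures from this weighting rule itself.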
Affiliation(s)
- Samuel A Jones: Department of Psychology, Nottingham Trent University, Nottingham, UK
- Uta Noppeney: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
6. Karl V, Rohe T. Structural brain changes in emotion recognition across the adult lifespan. Soc Cogn Affect Neurosci 2023; 18:nsad052. PMID: 37769357. PMCID: PMC10627307. DOI: 10.1093/scan/nsad052.
Abstract
Emotion recognition (ER) declines with increasing age, yet little is known about whether this observation is based on structural brain changes conveyed by differential atrophy. To investigate whether age-related ER decline correlates with reduced grey matter (GM) volume in emotion-related brain regions, we conducted a voxel-based morphometry analysis using data from the Human Connectome Project-Aging (N = 238, aged 36-87) in which facial ER was tested. We expected to find brain regions that show an additive or super-additive age-related change in GM volume, indicating atrophic processes that reduce ER in older adults. The data did not support our hypotheses after correction for multiple comparisons. Exploratory analyses with a threshold of P < 0.001 (uncorrected), however, suggested that relationships between GM volume and age-related general ER may be widely distributed across the cortex. Yet, small effect sizes imply that only a small fraction of the decline of ER in older adults can be attributed to local GM volume changes in single voxels or their multivariate patterns.
Affiliation(s)
- Valerie Karl: Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen 91054, Germany; NORMENT, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo 0424, Norway; PROMENTA Research Center, Department of Psychology, University of Oslo, Oslo 0373, Norway
- Tim Rohe: Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen 91054, Germany
7. Choi I, Demir I, Oh S, Lee SH. Multisensory integration in the mammalian brain: diversity and flexibility in health and disease. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220338. PMID: 37545309. PMCID: PMC10404930. DOI: 10.1098/rstb.2022.0338.
Abstract
Multisensory integration (MSI) occurs in a variety of brain areas, spanning cortical and subcortical regions. Traditional studies of sensory processing have treated the sensory cortices as processing sensory information in a modality-specific manner. The sensory cortices, however, send the information to other cortical and subcortical areas, including the higher association cortices and the other sensory cortices, where the multiple modality inputs converge and integrate to generate a meaningful percept. This integration process is neither simple nor fixed because these brain areas interact with each other via complicated circuits, which can be modulated by numerous internal and external conditions. As a result, dynamic MSI makes multisensory decisions flexible and adaptive in behaving animals. Impairments in MSI occur in many psychiatric disorders, which may result in an altered perception of multisensory stimuli and an abnormal reaction to them. This review discusses the diversity and flexibility of MSI in mammals, including humans, primates and rodents, as well as the brain areas involved. It further explains how such flexibility influences perceptual experiences in behaving animals in both health and disease. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Ilsong Choi: Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Ilayda Demir: Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seungmi Oh: Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seung-Hee Lee: Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea; Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
8. Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. PMID: 37545304. PMCID: PMC10404931. DOI: 10.1098/rstb.2022.0342.
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes on object perception, multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell, E. McKenna, M. A. Seveso, I. Devine, F. Alahmad, R. J. Hirst, A. O'Dowd (all authors): School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
9. Aston S, Nardini M, Beierholm U. Different types of uncertainty in multisensory perceptual decision making. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220349. PMID: 37545308. PMCID: PMC10404920. DOI: 10.1098/rstb.2022.0349.
Abstract
Efficient decision-making requires accounting for sources of uncertainty (noise, or variability). Many studies have shown how the nervous system is able to account for perceptual uncertainty that arises from limitations in its own ability to encode perceptual stimuli. However, many other sources of uncertainty exist, reflecting for example variability in the behaviour of other agents or physical processes. Here we review previous studies on decision making under uncertainty as a function of the different types of uncertainty that the nervous system encounters, showing that noise that is intrinsic to the perceptual system can often be accounted for near-optimally (i.e. not statistically different from optimally), whereas accounting for other types of uncertainty can be much more challenging. As an example, we present a study in which participants made decisions about multisensory stimuli with both intrinsic (perceptual) and extrinsic (environmental) uncertainty and show that the nervous system treats these differently when making decisions: it accounts for internal uncertainty but under-accounts for external uncertainty. Human perceptual systems may be well equipped to account for intrinsic (perceptual) uncertainty because, in principle, they have access to it. Accounting for external uncertainty is more challenging because this uncertainty must be learned. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
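The intrinsic/extrinsic distinction drawn here can be made concrete: an ideal observer weights a cue by the inverse of its total variance, sensory plus environmental, whereas the under-accounting pattern described above corresponds to discounting only part of the environmental component. A hypothetical sketch (the function names, the fraction `k`, and all numbers are illustrative, not taken from the study):

```python
def ideal_weight(sens_var_a, env_var_a, sens_var_b, env_var_b):
    """Weight placed on cue A by an ideal observer who accounts for both
    intrinsic (sensory) and extrinsic (environmental) noise: each cue's
    effective variance is the sum of the two sources."""
    total_a = sens_var_a + env_var_a
    total_b = sens_var_b + env_var_b
    return (1.0 / total_a) / (1.0 / total_a + 1.0 / total_b)

def underaccounting_weight(sens_var_a, env_var_a, sens_var_b, env_var_b, k=0.5):
    """Observer who incorporates only a fraction k of the environmental
    variance, i.e. under-accounts for external uncertainty."""
    total_a = sens_var_a + k * env_var_a
    total_b = sens_var_b + k * env_var_b
    return (1.0 / total_a) / (1.0 / total_a + 1.0 / total_b)

# Equal sensory noise, but extra environmental noise on cue A: the ideal
# observer down-weights A more strongly than the under-accounting observer.
w_ideal = ideal_weight(1.0, 3.0, 1.0, 0.0)            # (1/4)/(1/4 + 1/1) = 0.2
w_under = underaccounting_weight(1.0, 3.0, 1.0, 0.0)  # (1/2.5)/(1/2.5 + 1/1) ≈ 0.286
```

Comparing observed weights against these two profiles is one way to separate "wrong parameters" from "wrong rule" accounts of behaviour under external uncertainty.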
Affiliation(s)
- Stacey Aston: Department of Psychology, Durham University, Durham DH1 3LE, UK
- Marko Nardini: Department of Psychology, Durham University, Durham DH1 3LE, UK
- Ulrik Beierholm: Department of Psychology, Durham University, Durham DH1 3LE, UK
10. Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220344. PMID: 37545300. PMCID: PMC10404925. DOI: 10.1098/rstb.2022.0344.
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Jean-Paul Noel: Center for Neural Science, New York University, New York, NY 10003, USA
- Johannes Bill: Department of Neurobiology, Harvard University, Boston, MA 02115, USA; Department of Psychology, Harvard University, Boston, MA 02115, USA
- Haoran Ding: Center for Neural Science, New York University, New York, NY 10003, USA
- John Vastola: Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Gregory C. DeAngelis: Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14611, USA
- Dora E. Angelaki: Center for Neural Science, New York University, New York, NY 10003, USA; Tandon School of Engineering, New York University, New York, NY 10003, USA
- Jan Drugowitsch: Department of Neurobiology, Harvard University, Boston, MA 02115, USA; Center for Brain Science, Harvard University, Boston, MA 02115, USA
11. Meijer D, Noppeney U. Metacognition in the audiovisual McGurk illusion: perceptual and causal confidence. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220348. PMID: 37545307. PMCID: PMC10404922. DOI: 10.1098/rstb.2022.0348.
Abstract
Almost all decisions in everyday life rely on multiple sensory inputs that can come from common or independent causes. These situations invoke perceptual uncertainty about environmental properties and the signals' causal structure. Using the audiovisual McGurk illusion, this study investigated how observers formed perceptual and causal confidence judgements in information integration tasks under causal uncertainty. Observers were presented with spoken syllables, their corresponding articulatory lip movements or their congruent and McGurk combinations (e.g. auditory B/P with visual G/K). Observers reported their perceived auditory syllable, the causal structure and confidence for each judgement. Observers were more accurate and confident on congruent than unisensory trials. Their perceptual and causal confidence were tightly related over trials as predicted by the interactive nature of perceptual and causal inference. Further, observers assigned comparable perceptual and causal confidence to veridical 'G/K' percepts on audiovisual congruent trials and their causal and perceptual metamers on McGurk trials (i.e. illusory 'G/K' percepts). Thus, observers metacognitively evaluate the integrated audiovisual percept with limited access to the conflicting unisensory stimulus components on McGurk trials. Collectively, our results suggest that observers form meaningful perceptual and causal confidence judgements about multisensory scenes that are qualitatively consistent with principles of Bayesian causal inference. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- David Meijer: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK; Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, 1040 Wien, Austria
- Uta Noppeney: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, 6525 EN Nijmegen, The Netherlands
12. Yang J, Ganea N, Kanazawa S, Yamaguchi MK, Bhattacharya J, Bremner AJ. Cortical signatures of visual body representation develop in human infancy. Sci Rep 2023; 13:14696. PMID: 37679386. PMCID: PMC10484977. DOI: 10.1038/s41598-023-41604-5.
Abstract
Human infants cannot report their experiences, limiting what we can learn about their bodily awareness. However, visual cortical responses to the body, linked to visual awareness and selective attention in adults, can be easily measured in infants and provide a promising marker of bodily awareness in early life. We presented 4- and 8-month-old infants with a flickering (7.5 Hz) video of a hand being stroked and recorded steady-state visual evoked potentials (SSVEPs). In half of the trials, the infants also received tactile stroking synchronously with visual stroking. The 8-month-old, but not the 4-month-old infants, showed a significant enhancement of SSVEP responses when they received tactile stimulation concurrent with the visually observed stroking. Follow-up experiments showed that this enhancement did not occur when the visual hand was presented in an incompatible posture with the infant's own body or when the visual stimulus was a body-irrelevant video. Our findings provide a novel insight into the development of bodily self-awareness in the first year of life.
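The frequency-tagging logic behind the SSVEP measure above is simple to sketch: the amplitude of the recorded signal at exactly the stimulation frequency (7.5 Hz here) indexes the response to the flickering stimulus. A minimal single-channel sketch; the sampling rate and windowing choice are our assumptions, not details from the study:

```python
import numpy as np

def ssvep_amplitude(eeg, fs, f_tag=7.5):
    """Amplitude-spectrum value at the tagging frequency.
    eeg: 1-D array (one channel, one trial); fs: sampling rate in Hz."""
    n = len(eeg)
    # Hann window to limit spectral leakage from neighboring frequencies
    spec = np.abs(np.fft.rfft(eeg * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - f_tag))]
```

Comparing this amplitude between visual-only and visuo-tactile trials is the kind of contrast reported above.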
Affiliation(s)
- Jiale Yang
- School of Psychology, Chukyo University, Nagoya, Japan
- Natasa Ganea
- Child Study Center, Yale University, New Haven, CT, USA
- So Kanazawa
- Department of Psychology, Japan Women's University, Tokyo, Japan
- Andrew J Bremner
- Centre for Developmental Science, School of Psychology, University of Birmingham, Birmingham, UK

13
Seydell-Greenwald A, Wang X, Newport EL, Bi Y, Striem-Amit E. Spoken language processing activates the primary visual cortex. PLoS One 2023; 18:e0289671. [PMID: 37566582] [PMCID: PMC10420367] [DOI: 10.1371/journal.pone.0289671] [Citation(s) in RCA: 4]
Abstract
Primary visual cortex (V1) is generally thought of as a low-level sensory area that primarily processes basic visual features. Although there is evidence for multisensory effects on its activity, these are typically found for the processing of simple sounds and their properties, for example spatially or temporally congruent simple sounds. However, in congenitally blind individuals, V1 is involved in language processing, with no evidence of major changes in anatomical connectivity that could explain this seemingly drastic functional change. This is at odds with current accounts of neural plasticity, which emphasize the role of connectivity and conserved function in determining a neural tissue's role even after atypical early experiences. To reconcile what appears to be unprecedented functional reorganization with known accounts of plasticity limitations, we tested whether V1's multisensory roles include responses to spoken language in sighted individuals. Using fMRI, we found that V1 in normally sighted individuals was indeed activated by comprehensible spoken sentences as compared to an incomprehensible reversed speech control condition, and more strongly so in the left compared to the right hemisphere. Activation in V1 for language was also significant and comparable for abstract and concrete words, suggesting it was not driven by visual imagery. Last, this activation did not stem from increased attention to the auditory onset of words, nor was it correlated with attentional arousal ratings, making general attention accounts an unlikely explanation. Together these findings suggest that V1 responds to spoken language even in sighted individuals, reflecting the binding of multisensory high-level signals, potentially to predict visual input. This capability might be the basis for the strong V1 language activation observed in people born blind, re-affirming the notion that plasticity is guided by pre-existing connectivity and abilities in the typically developed brain.
Affiliation(s)
- Anna Seydell-Greenwald
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Elissa L. Newport
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ella Striem-Amit
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America

14
Coen P, Sit TPH, Wells MJ, Carandini M, Harris KD. Mouse frontal cortex mediates additive multisensory decisions. Neuron 2023; 111:2432-2447.e13. [PMID: 37295419] [PMCID: PMC10957398] [DOI: 10.1016/j.neuron.2023.05.008] [Citation(s) in RCA: 12]
Abstract
The brain can combine auditory and visual information to localize objects. However, the cortical substrates underlying audiovisual integration remain uncertain. Here, we show that mouse frontal cortex combines auditory and visual evidence; that this combination is additive, mirroring behavior; and that it evolves with learning. We trained mice in an audiovisual localization task. Inactivating frontal cortex impaired responses to either sensory modality, while inactivating visual or parietal cortex affected only visual stimuli. Recordings from >14,000 neurons indicated that after task learning, activity in the anterior part of frontal area MOs (secondary motor cortex) additively encodes visual and auditory signals, consistent with the mice's behavioral strategy. An accumulator model applied to these sensory representations reproduced the observed choices and reaction times. These results suggest that frontal cortex adapts through learning to combine evidence across sensory cortices, providing a signal that is transformed into a binary decision by a downstream accumulator.
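The additive-then-accumulate account summarized above can be caricatured in a few lines: weighted visual and auditory evidence are summed into a single drifting decision variable that triggers a choice at a bound. This is a toy sketch, not the paper's fitted model; weights, bound, and noise values are invented:

```python
import random

def additive_accumulator(v_evidence, a_evidence, w_v=1.0, w_a=1.0,
                         bound=1.0, noise=0.1, dt=0.01, max_t=2.0, rng=None):
    """Additive audiovisual evidence feeding one bounded accumulator.
    Returns (choice, reaction_time); choice is +1 (right) or -1 (left)."""
    rng = rng or random.Random(0)
    drift = w_v * v_evidence + w_a * a_evidence   # additive combination
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + rng.gauss(0.0, noise) * dt**0.5
        t += dt
    return (1 if x >= 0 else -1), t
```

Stronger congruent evidence drives the variable to the bound sooner, reproducing the faster, more accurate audiovisual choices described above.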
Affiliation(s)
- Philip Coen
- UCL Queen Square Institute of Neurology, University College London, London, UK
- UCL Institute of Ophthalmology, University College London, London, UK
- Timothy P H Sit
- Sainsbury-Wellcome Center, University College London, London, UK
- Miles J Wells
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Matteo Carandini
- UCL Institute of Ophthalmology, University College London, London, UK
- Kenneth D Harris
- UCL Queen Square Institute of Neurology, University College London, London, UK

15
Gilday OD, Mizrahi A. Learning-Induced Odor Modulation of Neuronal Activity in Auditory Cortex. J Neurosci 2023; 43:1375-1386. [PMID: 36650061] [PMCID: PMC9987573] [DOI: 10.1523/jneurosci.1398-22.2022] [Citation(s) in RCA: 2]
Abstract
Sensory cortices, even primary regions, are not purely unisensory. Rather, cortical neurons in sensory cortex show various forms of multisensory interactions. While some multisensory interactions co-occur naturally, others come to co-occur through experience. In real life, learning and experience bring seemingly disparate sensory information into conjunction, and such conjunctions ultimately become behaviorally relevant, impacting perception, cognition, and action. Here we describe a novel auditory discrimination task in mice, designed to manipulate the expectation of upcoming trials using olfactory cues. We show that, after learning, female mice display a transient period of several days during which they exploit odor-mediated expectations to make correct decisions. Using two-photon calcium imaging of single neurons in auditory cortex (ACx) during behavior, we found that the behavioral effects of odor-mediated expectations are accompanied by an odor-induced modulation of neuronal activity. Further, we find that these effects are manifested differentially, based on the response preference of individual cells. A significant portion of the effects, but not all, are consistent with a predictive coding framework. Our data show that learning novel odor-sound associations evokes changes in ACx. We suggest that behaviorally relevant multisensory environments mediate contextual effects as early as ACx.
SIGNIFICANCE STATEMENT: Natural environments are composed of multisensory objects. It remains unclear whether and how animals learn the regularities of congruent multisensory associations, and how these may impact behavior and neural activity. We tested how learned odor-sound associations affected single-neuron responses in auditory cortex. We introduce a novel auditory discrimination task for mice in which odors set different contexts of expectation for upcoming trials. We show that, although the task can be solved purely by sounds, odor-mediated expectation impacts performance. We further show that odors modulate neuronal activity in auditory cortex, and that this modulation is correlated with behavior. These results suggest that learning prompts an interaction of odor and sound information as early as sensory cortex.
Affiliation(s)
- Omri David Gilday
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
- Adi Mizrahi
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
- Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem 91904, Israel

16
Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023; 17:1108354. [PMID: 36816496] [PMCID: PMC9932987] [DOI: 10.3389/fnhum.2023.1108354] [Citation(s) in RCA: 0]
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Ambra Ferrari
- Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy

17
Lankinen K, Ahlfors SP, Mamashli F, Blazejewska AI, Raij T, Turpin T, Polimeni JR, Ahveninen J. Cortical depth profiles of auditory and visual 7 T functional MRI responses in human superior temporal areas. Hum Brain Mapp 2023; 44:362-372. [PMID: 35980015] [PMCID: PMC9842898] [DOI: 10.1002/hbm.26046] [Citation(s) in RCA: 1]
Abstract
Invasive neurophysiological studies in nonhuman primates have shown different laminar activation profiles to auditory vs. visual stimuli in auditory cortices and adjacent polymodal areas. Means to examine the underlying feedforward vs. feedback type influences noninvasively have been limited in humans. Here, using 1-mm isotropic resolution 3D echo-planar imaging at 7 T, we studied the intracortical depth profiles of functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) signals to brief auditory (noise bursts) and visual (checkerboard) stimuli. BOLD percent-signal-changes were estimated at 11 equally spaced intracortical depths, within regions-of-interest encompassing auditory (Heschl's gyrus, Heschl's sulcus, planum temporale, and posterior superior temporal gyrus) and polymodal (middle and posterior superior temporal sulcus) areas. Effects of differing BOLD signal strengths for auditory and visual stimuli were controlled via normalization and statistical modeling. The BOLD depth profile shapes, modeled with quadratic regression, were significantly different for auditory vs. visual stimuli in auditory cortices, but not in polymodal areas. The different depth profiles could reflect sensory-specific feedforward versus cross-sensory feedback influences, previously shown in laminar recordings in nonhuman primates. The results suggest that intracortical BOLD profiles can help distinguish between feedforward and feedback type influences in the human brain. Further experimental studies are still needed to clarify how underlying signal strength influences BOLD depth profiles under different stimulus conditions.
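The shape analysis described above (quadratic regression over 11 equally spaced, normalized cortical depths) can be sketched as follows; variable names and the normalization choice are illustrative, not the paper's exact pipeline:

```python
import numpy as np

def depth_profile_curvature(bold_by_depth):
    """Fit a quadratic across equally spaced cortical depths (0 to 1)
    and return the 2nd-order coefficient; its sign and size summarize
    the profile shape (e.g. middle-depth peak) independently of
    overall signal amplitude."""
    bold = np.asarray(bold_by_depth, dtype=float)
    depths = np.linspace(0.0, 1.0, len(bold))
    bold = bold / np.abs(bold).max()   # normalize away signal strength
    c2, _c1, _c0 = np.polyfit(depths, bold, 2)
    return c2
```

Comparing this curvature term between auditory and visual conditions is the kind of contrast used to distinguish the depth-profile shapes.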
Affiliation(s)
- Kaisu Lankinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Seppo P. Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Fahimeh Mamashli
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Anna I. Blazejewska
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Tommi Raij
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Tori Turpin
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Jonathan R. Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA

18
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. bioRxiv 2023:2023.01.27.525974. [PMID: 36778376] [PMCID: PMC9915492] [DOI: 10.1101/2023.01.27.525974] [Citation(s) in RCA: 0]
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of Bayesian Causal Inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief over (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modeling results, we show that humans report targets as stationary and steer toward their initial rather than final position more often when they are themselves moving, suggesting a misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results confirm both of these predictions. Lastly, analyses of eye movements show that, while initial saccades toward targets are largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops and suggest a protracted temporal unfolding of the computations characterizing CI.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, United States
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Haoran Ding
- Center for Neural Science, New York University, New York City, NY, United States
- John Vastola
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, United States
- Dora E. Angelaki
- Center for Neural Science, New York University, New York City, NY, United States
- Tandon School of Engineering, New York University, New York City, NY, United States
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Center for Brain Science, Harvard University, Boston, MA, United States

19
Magnetoencephalography recordings reveal the neural mechanisms of auditory contributions to improved visual detection. Commun Biol 2023; 6:12. [PMID: 36604455] [PMCID: PMC9816120] [DOI: 10.1038/s42003-022-04335-3] [Citation(s) in RCA: 0]
Abstract
Sounds enhance the detection of visual stimuli while concurrently biasing an observer's decisions. To investigate the neural mechanisms that underlie such multisensory interactions, we decoded time-resolved Signal Detection Theory sensitivity and criterion parameters from magnetoencephalographic recordings of participants who performed a visual detection task. We found that sounds improved visual detection sensitivity by enhancing the accumulation and maintenance of perceptual evidence over time. Meanwhile, criterion decoding analyses revealed that sounds induced brain activity patterns that resembled the patterns evoked by an actual visual stimulus. These two complementary mechanisms of audiovisual interplay differed in terms of their automaticity: whereas the sound-induced enhancement in visual sensitivity depended on participants being actively engaged in a detection task, we found that sounds activated the visual cortex irrespective of task demands, potentially inducing visual illusory percepts. These results challenge the classical assumption that sound-induced increases in false alarms exclusively correspond to decision-level biases.
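The two decoded quantities, Signal Detection Theory sensitivity (d′) and criterion (c), are computed from hit and false-alarm rates. A minimal sketch using a common correction for extreme rates; the correction choice is ours, not necessarily the one used in the study:

```python
from statistics import NormalDist

def sdt_params(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from trial counts, with a
    log-linear correction so rates of exactly 0 or 1 do not produce
    infinite z-scores."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

A sound-induced increase in d′ reflects genuinely improved detection, whereas a more negative c reflects a liberal bias toward reporting "present", the distinction the abstract turns on.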
20
Negen J, Slater H, Bird LA, Nardini M. Internal biases are linked to disrupted cue combination in children and adults. J Vis 2022; 22:14. [DOI: 10.1167/jov.22.12.14] [Citation(s) in RCA: 0]
Affiliation(s)
- James Negen
- School of Psychology, Liverpool John Moores University, Liverpool, UK
- Laura-Ashleigh Bird
- Department of Psychology, Durham University, Durham, UK
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK

21
Park H, Kayser C. The context of experienced sensory discrepancies shapes multisensory integration and recalibration differently. Cognition 2022; 225:105092. [DOI: 10.1016/j.cognition.2022.105092] [Citation(s) in RCA: 2]
22
Noel JP, Shivkumar S, Dokka K, Haefner RM, Angelaki DE. Aberrant causal inference and presence of a compensatory mechanism in autism spectrum disorder. eLife 2022; 11:e71866. [PMID: 35579424] [PMCID: PMC9170250] [DOI: 10.7554/eLife.71866] [Citation(s) in RCA: 9]
Abstract
Autism spectrum disorder (ASD) is characterized by a panoply of social, communicative, and sensory anomalies. As such, a central goal of computational psychiatry is to ascribe the heterogeneous phenotypes observed in ASD to a limited set of canonical computations that may have gone awry in the disorder. Here, we posit causal inference - the process of inferring a causal structure linking sensory signals to hidden world causes - as one such computation. We show that audio-visual integration is intact in ASD and in line with optimal models of cue combination, yet multisensory behavior is anomalous in ASD because this group operates under an internal model favoring integration (vs. segregation). Paradoxically, during explicit reports of common cause across spatial or temporal disparities, individuals with ASD were less, not more, likely to report common cause, particularly at small cue disparities. Formal model fitting revealed differences in both the prior probability for common cause (p-common) and choice biases, which are dissociable in implicit but not explicit causal inference tasks. Together, this pattern of results suggests (i) different internal models in attributing world causes to sensory signals in ASD relative to neurotypical individuals given identical sensory cues, and (ii) the presence of an explicit compensatory mechanism in ASD, with these individuals putatively having learned to compensate for their bias to integrate in explicit reports.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, United States
- Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Ralf M Haefner
- Brain and Cognitive Sciences, University of Rochester, Rochester, United States
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, United States
- Department of Neuroscience, Baylor College of Medicine, Houston, United States

23
Michail G, Senkowski D, Holtkamp M, Wächter B, Keil J. Early beta oscillations in multisensory association areas underlie crossmodal performance enhancement. Neuroimage 2022; 257:119307. [PMID: 35577024] [DOI: 10.1016/j.neuroimage.2022.119307] [Citation(s) in RCA: 1]
Abstract
The combination of signals from different sensory modalities can enhance perception and facilitate behavioral responses. While previous research described crossmodal influences in a wide range of tasks, it remains unclear how such influences drive performance enhancements. In particular, the neural mechanisms underlying performance-relevant crossmodal influences, as well as the latency and spatial profile of such influences are not well understood. Here, we examined data from high-density electroencephalography (N = 30) recordings to characterize the oscillatory signatures of crossmodal facilitation of response speed, as manifested in the speeding of visual responses by concurrent task-irrelevant auditory information. Using a data-driven analysis approach, we found that individual gains in response speed correlated with larger beta power difference (13-25 Hz) between the audiovisual and the visual condition, starting within 80 ms after stimulus onset in the secondary visual cortex and in multisensory association areas in the parietal cortex. In addition, we examined data from electrocorticography (ECoG) recordings in four epileptic patients in a comparable paradigm. These ECoG data revealed reduced beta power in audiovisual compared with visual trials in the superior temporal gyrus (STG). Collectively, our data suggest that the crossmodal facilitation of response speed is associated with reduced early beta power in multisensory association and secondary visual areas. The reduced early beta power may reflect an auditory-driven feedback signal to improve visual processing through attentional gating. These findings improve our understanding of the neural mechanisms underlying crossmodal response speed facilitation and highlight the critical role of beta oscillations in mediating behaviorally relevant multisensory processing.
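The core measure in this analysis, band-limited (13-25 Hz) beta power, can be estimated per trial from the signal spectrum. A minimal periodogram sketch; the study used more elaborate time-frequency methods, and the parameters here are illustrative:

```python
import numpy as np

def bandpower(x, fs, band=(13.0, 25.0)):
    """Mean periodogram power inside a frequency band (a Welch-style
    estimate would additionally average over overlapping segments)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                     # remove DC offset
    psd = np.abs(np.fft.rfft(x))**2 / (fs * len(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[in_band].mean())
```

The per-subject audiovisual-minus-visual difference of a quantity like this is the kind of measure correlated with response-speed gains in the study.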
Affiliation(s)
- Georgios Michail
- Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany
- Daniel Senkowski
- Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany
- Martin Holtkamp
- Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany
- Department of Neurology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charité Campus Mitte (CCM), Charitéplatz 1, Berlin 10117, Germany
- Bettina Wächter
- Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany
- Julian Keil
- Biological Psychology, Christian-Albrechts-University Kiel, Kiel 24118, Germany

24
Pesnot Lerousseau J, Parise CV, Ernst MO, van Wassenhove V. Multisensory correlation computations in the human brain identified by a time-resolved encoding model. Nat Commun 2022; 13:2489. [PMID: 35513362] [PMCID: PMC9072402] [DOI: 10.1038/s41467-022-29687-6] [Citation(s) in RCA: 1]
Abstract
Neural mechanisms that arbitrate between integrating and segregating multisensory information are essential for complex scene analysis and for the resolution of the multisensory correspondence problem. However, these mechanisms and their dynamics remain largely unknown, partly because classical models of multisensory integration are static. Here, we used the Multisensory Correlation Detector, a model that provides good explanatory power for human behavior while incorporating dynamic computations. Participants judged whether sequences of auditory and visual signals originated from the same source (causal inference) or whether one modality was leading the other (temporal order), while being recorded with magnetoencephalography. First, we confirm that the Multisensory Correlation Detector explains causal inference and temporal order behavioral judgments well. Second, we found strong fits of brain activity to the two outputs of the Multisensory Correlation Detector in temporo-parietal cortices. Finally, we report an asymmetry in the goodness of the fits, which were more reliable during the causal inference task than during the temporal order judgment task. Overall, our results suggest the existence of multisensory correlation detectors in the human brain, which explain why and how causal inference is strongly driven by the temporal correlation of multisensory signals.
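The model's two outputs map onto two simple readouts of temporally filtered signals: their correlation (a cue for common cause) and their best-aligning lag (a cue for temporal order). The sketch below is a loose caricature of that idea, not the published Multisensory Correlation Detector; the exponential filter and its time constant are our assumptions:

```python
import numpy as np

def correlation_detector(aud, vis, fs, tau=0.1):
    """Toy correlation detector: low-pass each signal with an
    exponential kernel (time constant tau, seconds), then read out
    (i) their zero-lag correlation and (ii) the lag maximizing their
    cross-correlation. Positive lag: aud trails vis."""
    def lowpass(x):
        alpha = 1.0 / (1.0 + tau * fs)   # one-pole smoothing factor
        y, acc = np.empty(len(x)), 0.0
        for i, value in enumerate(x):
            acc += alpha * (value - acc)
            y[i] = acc
        return y
    a = lowpass(np.asarray(aud, dtype=float))
    v = lowpass(np.asarray(vis, dtype=float))
    a, v = a - a.mean(), v - v.mean()
    corr = float(np.dot(a, v) / (np.linalg.norm(a) * np.linalg.norm(v) + 1e-12))
    xc = np.correlate(a, v, mode="full")
    lag = (int(np.argmax(xc)) - (len(a) - 1)) / fs
    return corr, lag
```

In this caricature, `corr` would drive the same-source judgment and the sign of `lag` the which-came-first judgment.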
Collapse
Affiliation(s)
- Jacques Pesnot Lerousseau
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France.
- Applied Cognitive Psychology, Ulm University, Ulm, Germany.
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, CNRS, Université Paris-Saclay, NeuroSpin, 91191, Gif/Yvette, France.
| | | | - Marc O Ernst
- Applied Cognitive Psychology, Ulm University, Ulm, Germany
| | - Virginie van Wassenhove
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, CNRS, Université Paris-Saclay, NeuroSpin, 91191, Gif/Yvette, France
25
Sou KL, Say A, Xu H. Unity Assumption in Audiovisual Emotion Perception. Front Neurosci 2022; 16:782318. [PMID: 35310087] [PMCID: PMC8931414] [DOI: 10.3389/fnins.2022.782318]
Abstract
We experience various sensory stimuli every day, and the brain integrates information from the different senses into coherent percepts. How does this integration occur? What are its inherent mechanisms? The “unity assumption” proposes that a perceiver's belief that individual pieces of unisensory information belong together modulates the degree of multisensory integration. However, this has yet to be verified or quantified in the context of semantic emotion integration. In the present study, we investigated the ability of subjects to judge the intensities and degrees of similarity in faces and voices of two emotions (angry and happy). We found that more similar stimulus intensities were associated with stronger likelihoods of the face and voice being integrated. More interestingly, multisensory integration in emotion perception followed a Gaussian distribution as a function of the emotion intensity difference between the face and voice, with the optimal cut-off at about a 2.50-point difference on a 7-point Likert scale. This provides a quantitative estimate of the multisensory integration function in audio-visual semantic emotion perception with regard to stimulus intensity. Moreover, to investigate the variation of multisensory integration across the population, we examined the effects of participants' personality and autistic traits. Here, we found no correlation of autistic traits with unisensory processing in a nonclinical population. Our findings shed light on the current understanding of multisensory integration mechanisms.
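The Gaussian relationship reported here can be sketched as a simple integration-likelihood function. The width parameter below is a hypothetical placeholder, not the study's fitted value; only the ~2.50-point cut-off is taken from the abstract.

```python
import math

def integration_likelihood(face_intensity, voice_intensity, sigma=1.5):
    """Likelihood that a face and a voice are perceptually integrated,
    modelled as a Gaussian of their intensity difference on a 7-point
    scale (sigma is a hypothetical width parameter)."""
    d = face_intensity - voice_intensity
    return math.exp(-d ** 2 / (2 * sigma ** 2))

def likely_integrated(face_intensity, voice_intensity, cutoff=2.50):
    # the abstract reports an optimal cut-off near a 2.50-point difference
    return abs(face_intensity - voice_intensity) <= cutoff

p_close = integration_likelihood(5.0, 4.5)   # similar intensities
p_far = integration_likelihood(6.5, 1.5)     # very different intensities
```

Matched intensities give the maximal likelihood, which falls off smoothly with the face-voice difference.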
Affiliation(s)
- Ka Lon Sou
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Humanities, Arts and Social Sciences, Singapore University of Technology and Design, Singapore, Singapore
- Ashley Say
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Hong Xu
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Correspondence: Hong Xu
26
Independent mechanisms of temporal and linguistic cue correspondence benefiting audiovisual speech processing. Atten Percept Psychophys 2022; 84:2016-2026. [PMID: 35211849] [DOI: 10.3758/s13414-022-02440-3]
Abstract
It is well established that in order to comprehend speech in noisy environments, listeners use the face of the talker in conjunction with the auditory speech. Yet how listeners use audiovisual speech correspondences along the multisensory speech processing pathway is not known. We engaged listeners in a pair of experiments using face rotation to partially dissociate linguistic and temporal information, and two tasks to assess both overall integration and early integration specifically. In our first, exploratory experiment, listeners performed a speech-in-noise task to determine which face rotation maximally disrupts speech comprehension and thus overall audiovisual integration. Our second experiment involved a dual pitch-discrimination and visual catch task to test specifically for binding. The results showed that temporal coherence supports early integration, replicating the importance of temporal coherence seen for binding nonspeech stimuli. However, the benefit of temporal coherence was present in both upright and inverted positions, suggesting that binding is minimally affected by face rotation under these conditions. Together, our results suggest that different aspects of audiovisual speech are integrated at different stages of multisensory speech processing.
27
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated, a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
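The normative cue-combination account reviewed here, extended to an evolving position estimate, can be sketched as follows. This is a minimal sketch under simplifying assumptions (Gaussian visual and vestibular velocity cues with known, possibly time-varying variances, and independent noise across steps), not any of the specific models discussed in the review:

```python
def fuse(v_vis, var_vis, v_vest, var_vest):
    """Reliability-weighted (inverse-variance) fusion of visual and
    vestibular velocity cues; the fused variance is lower than either
    unisensory variance."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    v = w_vis * v_vis + (1.0 - w_vis) * v_vest
    var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return v, var

def path_integrate(samples, dt=0.1):
    """Accumulate fused self-motion velocity into a position estimate;
    position uncertainty grows as the per-step variances add up."""
    pos, pos_var = 0.0, 0.0
    for v_vis, var_vis, v_vest, var_vest in samples:
        v, var = fuse(v_vis, var_vis, v_vest, var_vest)
        pos += v * dt
        pos_var += var * dt ** 2
    return pos, pos_var

# ten steps of constant self-motion (velocity 1.0) seen by both senses
pos, pos_var = path_integrate([(1.0, 0.5, 1.0, 0.5)] * 10)
```

With equal reliabilities the cues receive equal weight, and the fused per-step variance (0.25) is half of each cue's variance (0.5), illustrating the multisensory precision benefit.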
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA;
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA;
- Tandon School of Engineering, New York University, New York, NY 11201, USA
28
Neurocomputational mechanisms underlying cross-modal associations and their influence on perceptual decisions. Neuroimage 2021; 247:118841. [PMID: 34952232] [PMCID: PMC9127393] [DOI: 10.1016/j.neuroimage.2021.118841]
Abstract
When exposed to complementary features of information across sensory modalities, our brains formulate cross-modal associations between features of stimuli presented separately to multiple modalities. For example, auditory pitch-visual size associations map high-pitch tones with small-size visual objects, and low-pitch tones with large-size visual objects. Preferential, or congruent, cross-modal associations have been shown to affect behavioural performance, i.e. choice accuracy and reaction time (RT) across multisensory decision-making paradigms. However, the neural mechanisms underpinning such influences in perceptual decision formation remain unclear. Here, we sought to identify when perceptual improvements from associative congruency emerge in the brain during decision formation. In particular, we asked whether such improvements represent ‘early’ sensory processing benefits, or ‘late’ post-sensory changes in decision dynamics. Using a modified version of the Implicit Association Test (IAT), coupled with electroencephalography (EEG), we measured the neural activity underlying the effect of auditory stimulus-driven pitch-size associations on perceptual decision formation. Behavioural results showed that participants responded significantly faster during trials when auditory pitch was congruent, rather than incongruent, with its associative visual size counterpart. We used multivariate Linear Discriminant Analysis (LDA) to characterise the spatiotemporal dynamics of EEG activity underpinning IAT performance. We found an ‘Early’ component (∼100–110 ms post-stimulus onset) coinciding with the time of maximal discrimination of the auditory stimuli, and a ‘Late’ component (∼330–340 ms post-stimulus onset) underlying IAT performance. 
To characterise the functional role of these components in decision formation, we incorporated a neurally-informed Hierarchical Drift Diffusion Model (HDDM), revealing that the Late component decreases response caution, requiring less sensory evidence to be accumulated, whereas the Early component increased the duration of sensory-encoding processes for incongruent trials. Overall, our results provide a mechanistic insight into the contribution of ‘early’ sensory processing, as well as ‘late’ post-sensory neural representations of associative congruency to perceptual decision formation.
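The modelling result can be illustrated with a bare-bones drift diffusion simulation. The parameter values below are hypothetical, chosen only to mimic the reported pattern (congruent trials: lower decision boundary, i.e. less response caution, and a shorter sensory-encoding non-decision stage); they are not the fitted HDDM parameters:

```python
import random

def ddm_trial(drift, boundary, nondecision, rng, dt=0.001, noise=1.0):
    """One drift-diffusion trial: evidence diffuses from 0 until it hits
    +boundary or -boundary; RT adds a non-decision (encoding/motor) time."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return t + nondecision, x > 0

def mean_rt(n, drift, boundary, nondecision, seed=1):
    rng = random.Random(seed)
    rts = [ddm_trial(drift, boundary, nondecision, rng)[0] for _ in range(n)]
    return sum(rts) / n

# congruent: lower boundary (less caution) + shorter non-decision time
rt_congruent = mean_rt(500, drift=1.5, boundary=0.8, nondecision=0.30)
# incongruent: higher boundary + longer sensory-encoding stage
rt_incongruent = mean_rt(500, drift=1.5, boundary=1.0, nondecision=0.34)
```

Both hypothetical effects push in the same direction, reproducing the faster responses on congruent trials reported behaviourally.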
29
Kitada R, Sadato N. Multisensory integration and its plasticity - How do innate and postnatal factors contribute to forming individual differences? Cortex 2021; 145:A1-A4. [PMID: 34844700] [DOI: 10.1016/j.cortex.2021.11.003]
Affiliation(s)
- Ryo Kitada
- School of Social Sciences, Nanyang Technological University, Singapore; Graduate School of Intercultural Studies, Kobe University, Japan.
30
Ferrari A, Noppeney U. Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biol 2021; 19:e3001465. [PMID: 34793436] [PMCID: PMC8639080] [DOI: 10.1371/journal.pbio.3001465]
Abstract
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals' causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
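The Bayesian causal inference scheme underlying this study can be sketched in a few lines. This is a simplified illustration, not the authors' fitted model: the common-cause likelihood is approximated by the likelihood of the audiovisual disparity, and the prior and variance values are made up.

```python
import math

def normpdf(x, var):
    # zero-mean Gaussian density with variance `var`
    return math.exp(-x ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def bci_auditory_estimate(xa, xv, var_a, var_v, var_p=100.0, p_common=0.5):
    """Model-averaged auditory location estimate under Bayesian causal
    inference: fuse the cues if they likely share a source, fall back to
    the auditory cue alone otherwise, weighting the two estimates by the
    posterior probability of a common cause (zero-mean spatial prior)."""
    # fused estimate assuming a common source (reliability-weighted)
    prec = 1.0 / var_a + 1.0 / var_v + 1.0 / var_p
    s_fused = (xa / var_a + xv / var_v) / prec
    # auditory-only estimate assuming independent sources
    s_aud = (xa / var_a) / (1.0 / var_a + 1.0 / var_p)
    # disparity-based approximation of the two causal likelihoods
    like_c1 = normpdf(xa - xv, var_a + var_v)
    like_c2 = normpdf(xa - xv, var_a + var_v + 2.0 * var_p)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1.0 - p_common))
    return post_c1 * s_fused + (1.0 - post_c1) * s_aud, post_c1

est_small, pc_small = bci_auditory_estimate(1.0, 0.0, var_a=4.0, var_v=1.0)
est_large, pc_large = bci_auditory_estimate(20.0, 0.0, var_a=4.0, var_v=1.0)
```

Small disparities yield a high common-cause posterior and an auditory estimate attracted toward the (more reliable) visual signal, as in the ventriloquist effect; large disparities yield segregation, and the report stays near the auditory cue. In this sketch, prestimulus attention to vision would act by lowering `var_v`, and the poststimulus report determines whether the auditory or visual variant of the estimate is read out.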
Affiliation(s)
- Ambra Ferrari
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
31
Pleger A. Eigenwahrnehmung und aktive Inferenz [Self-perception and active inference]. Manuelle Medizin 2021. [DOI: 10.1007/s00337-021-00824-4]
32
Abstract
Our experience of the world seems to unfold seamlessly in a unitary 3D space. For this to be possible, the brain has to merge many disparate cognitive representations and sensory inputs. How does it do so? I discuss work on two key combination problems: coordinating multiple frames of reference (e.g. egocentric and allocentric), and coordinating multiple sensory signals (e.g. visual and proprioceptive). I focus on two populations whose spatial processing we can observe at a crucial stage of being configured and optimised: children, whose spatial abilities are still developing significantly, and naïve adults learning new spatial skills, such as sensing distance using auditory cues. The work uses a model-based approach to compare participants' behaviour with the predictions of alternative information processing models. This lets us see when and how, during development and with experience, the perceptual-cognitive computations underpinning our experiences in space change. I discuss progress on understanding the limits of effective spatial computation for perception and action, and how lessons from the developing spatial cognitive system can inform approaches to augmenting human abilities with new sensory signals provided by technology.
Affiliation(s)
- Marko Nardini
- Department of Psychology, Durham University, Science Site, Durham, DH1 3LE, UK.
33
Ball F, Spuerck I, Noesselt T. Minimal interplay between explicit knowledge, dynamics of learning and temporal expectations in different, complex uni- and multisensory contexts. Atten Percept Psychophys 2021; 83:2551-2573. [PMID: 33977407] [PMCID: PMC8302534] [DOI: 10.3758/s13414-021-02313-1]
Abstract
While temporal expectations (TE) generally improve reactions to temporally predictable events, it remains unknown how the learning of temporal regularities (one time point being more likely than another) and explicit knowledge about those regularities contribute to performance improvements, and whether any contributions generalise across modalities. Here, participants discriminated the frequency of diverging auditory, visual or audio-visual targets embedded in auditory, visual or audio-visual distractor sequences. Temporal regularities were manipulated run-wise (early vs. late target within sequence). Behavioural performance (accuracy, RT) plus measures from a computational learning model all suggest that learning of temporal regularities occurred but did not generalise across modalities, and that the dynamics of learning (size of the TE effect across runs) and explicit knowledge have little to no effect on the strength of TE. Remarkably, explicit knowledge affects performance, if at all, in a context-dependent manner: only under complex task regimes (here, unknown target modality) might it partially help to resolve response conflict, while it lowers performance in less complex environments.
Affiliation(s)
- Felix Ball
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, PO Box 4120, 39106, Magdeburg, Germany.
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany.
- Inga Spuerck
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, PO Box 4120, 39106, Magdeburg, Germany
- Toemme Noesselt
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, PO Box 4120, 39106, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
34
Abstract
Coordination between different sensory systems is a necessary element of sensory processing. Where and how signals from different sense organs converge onto common neural circuitry have become topics of increasing interest in recent years. In this article, we focus specifically on visual-auditory interactions in areas of the mammalian brain that are commonly considered to be auditory in function. The auditory cortex and inferior colliculus are two key points of entry where visual signals reach the auditory pathway, and both contain visual- and/or eye movement-related signals in humans and other animals. The visual signals observed in these auditory structures reflect a mixture of visual modulation of auditory-evoked activity and visually driven responses that are selective for stimulus location or features. These key response attributes also appear in the classic visual pathway but may play a different role in the auditory pathway: to modify auditory rather than visual perception. Finally, while this review focuses on two particular areas of the auditory pathway where this question has been studied, robust descending as well as ascending connections within this pathway suggest that undiscovered visual signals may be present at other stages as well. Published in the Annual Review of Vision Science, Volume 7 (September 2021).
Affiliation(s)
- Meredith N Schmehl
- Department of Neurobiology, Duke University, Durham, North Carolina 27708, USA; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina 27708, USA; Duke Institute for Brain Sciences, Duke University, Durham, North Carolina 27708, USA
- Jennifer M Groh
- Department of Neurobiology, Duke University, Durham, North Carolina 27708, USA; Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708, USA; Department of Computer Science, Duke University, Durham, North Carolina 27708, USA; Department of Biomedical Engineering, Duke University, Durham, North Carolina 27708, USA; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina 27708, USA; Duke Institute for Brain Sciences, Duke University, Durham, North Carolina 27708, USA
35
Abstract
Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem-deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands;
36
Contribution of muscle proprioception to limb movement perception and proprioceptive decline with ageing. Curr Opin Physiol 2021. [DOI: 10.1016/j.cophys.2021.01.016]
37
Direct Structural Connections between Auditory and Visual Motion-Selective Regions in Humans. J Neurosci 2021; 41:2393-2405. [PMID: 33514674] [DOI: 10.1523/jneurosci.1552-20.2021]
Abstract
In humans, the occipital middle-temporal region (hMT+/V5) specializes in the processing of visual motion, while the planum temporale (hPT) specializes in auditory motion processing. It has been hypothesized that these regions might communicate directly to achieve fast and optimal exchange of multisensory motion information. Here we investigated, for the first time in humans (male and female), the presence of direct white matter connections between visual and auditory motion-selective regions using a combined fMRI and diffusion MRI approach. We found evidence supporting the potential existence of direct white matter connections between individually and functionally defined hMT+/V5 and hPT. We show that projections between hMT+/V5 and hPT do not overlap with large white matter bundles, such as the inferior longitudinal fasciculus and the inferior frontal occipital fasciculus. Moreover, we did not find evidence suggesting the presence of projections between the fusiform face area and hPT, supporting the functional specificity of hMT+/V5-hPT connections. Finally, the potential presence of hMT+/V5-hPT connections was corroborated in a large sample of participants (n = 114) from the Human Connectome Project. Together, this study provides a first indication of potential direct occipitotemporal projections between hMT+/V5 and hPT, which may support the exchange of motion information between functionally specialized auditory and visual regions.
SIGNIFICANCE STATEMENT: Perceiving and integrating moving signals across the senses is arguably one of the most important perceptual skills for the survival of living organisms. In order to create a unified representation of movement, the brain must therefore integrate motion information from separate senses. Our study provides support for the potential existence of direct connections between motion-selective regions in the occipital/visual (hMT+/V5) and temporal/auditory (hPT) cortices in humans. This connection could represent the structural scaffolding for the rapid and optimal exchange and integration of multisensory motion information. These findings suggest the existence of computationally specific pathways that allow information flow between areas that share a similar computational goal.
38
Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021; 138:1-23. [PMID: 33676086] [DOI: 10.1016/j.cortex.2021.02.001]
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
Affiliation(s)
- Samuel A Jones
- The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK.
- Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands.
39
Thakkar KN, Mathalon DH, Ford JM. Reconciling competing mechanisms posited to underlie auditory verbal hallucinations. Philos Trans R Soc Lond B Biol Sci 2021; 376:20190702. [PMID: 33308062] [PMCID: PMC7741078] [DOI: 10.1098/rstb.2019.0702]
Abstract
Perception is not the passive registration of incoming sensory data. Rather, it involves some analysis by synthesis, based on past experiences and context. One adaptive consequence of this arrangement is imagination: the ability to richly simulate sensory experiences, and to interrogate and manipulate those simulations in service of action and decision making. In this paper, we will discuss one possible cost of this adaptation, namely hallucinations: perceptions without sensory stimulation, which characterize serious mental illnesses like schizophrenia, but which also occur in neurological illnesses and, crucially for the present piece, are common in the non-treatment-seeking population as well. We will draw upon a framework for imagination that distinguishes voluntary from non-voluntary experiences and explore the extent to which the varieties and features of hallucinations map onto this distinction, with a focus on auditory-verbal hallucinations (AVHs), colloquially, hearing voices. We will propose that sense of agency for the act of imagining is key to meaningfully dissecting different forms and features of AVHs, and we will outline the neural, cognitive and phenomenological sequelae of this sense. We will conclude that a compelling unifying framework for action, perception and belief (predictive processing) can incorporate observations regarding sense of agency, imagination and hallucination. This article is part of the theme issue 'Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation'.
Affiliation(s)
- Katharine N. Thakkar
- Department of Psychology, Michigan State University, East Lansing, MI, USA
- Department of Psychiatry and Behavioral Medicine, Michigan State University, East Lansing, MI, USA
- Daniel H. Mathalon
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco (UCSF), San Francisco, CA, USA
- Mental Health Service, Veterans Affairs San Francisco Healthcare System, San Francisco, CA, USA
- Judith M. Ford
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco (UCSF), San Francisco, CA, USA
- Mental Health Service, Veterans Affairs San Francisco Healthcare System, San Francisco, CA, USA
40
Kaiser M, Senkowski D, Keil J. Mediofrontal theta-band oscillations reflect top-down influence in the ventriloquist illusion. Hum Brain Mapp 2021; 42:452-466. [PMID: 33617132] [PMCID: PMC7775991] [DOI: 10.1002/hbm.25236]
Abstract
In the ventriloquist illusion, spatially disparate visual signals can influence the perceived location of simultaneous sounds. Previous studies have shown asymmetrical responses in auditory cortical regions following perceived peripheral sound shifts. Moreover, higher-order cortical areas perform inferences on the sources of disparate audiovisual signals. Recent studies have also highlighted top-down influence in the ventriloquist illusion and postulated a governing function of neural oscillations for crossmodal processing. In this EEG study, we analyzed source-reconstructed neural oscillations to address the question of whether perceived sound shifts affect the laterality of auditory responses. Moreover, we investigated the modulation of neural oscillations related to the occurrence of the illusion more generally. With respect to the first question, we did not find evidence for significant changes in the laterality of auditory responses due to perceived sound shifts. However, we found a sustained reduction of mediofrontal theta-band power starting prior to stimulus onset when participants perceived the illusion compared to when they did not perceive the illusion. We suggest that this effect reflects a state of diminished cognitive control, leading to reliance on more readily discriminable visual information and increased crossmodal influence. We conclude that mediofrontal theta-band oscillations serve as a neural mechanism underlying top-down modulation of crossmodal processing in the ventriloquist illusion.
Affiliation(s)
- Mathis Kaiser
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Daniel Senkowski
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Julian Keil
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Biological Psychology, Christian-Albrechts-Universität zu Kiel, Kiel, Germany
41
Zheng M, Xu J, Keniston L, Wu J, Chang S, Yu L. Choice-dependent cross-modal interaction in the medial prefrontal cortex of rats. Mol Brain 2021; 14:13. [PMID: 33446258] [PMCID: PMC7809823] [DOI: 10.1186/s13041-021-00732-7]
Abstract
Cross-modal interaction (CMI) can significantly influence perceptual and decision-making processes in many circumstances. However, it remains poorly understood what integrative strategies the brain employs to deal with different task contexts. To explore this, we examined neural activities of the medial prefrontal cortex (mPFC) of rats performing cue-guided two-alternative forced-choice tasks. In a task requiring rats to discriminate stimuli based on an auditory cue, the simultaneous presentation of an uninformative visual cue substantially strengthened mPFC neurons' capability of auditory discrimination, mainly through enhancing the response to the preferred cue. It also increased the number of neurons revealing a cue preference. If the task was changed slightly so that a visual cue, like the auditory one, denoted a specific behavioral direction, mPFC neurons frequently showed a different CMI pattern, with cross-modal enhancement best evoked in information-congruent multisensory trials. In a choice-free task, however, the majority of neurons failed to show a cross-modal enhancement effect and cue preference. These results indicate that CMI at the neuronal level is context-dependent in a way that differs from what has been shown in previous studies.
Affiliation(s)
- Mengyao Zheng
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, 200062 China
| | - Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, 200062 China
| | - Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD 21853 USA
| | - Jing Wu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, 200062 China
| | - Song Chang
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, 200062 China
| | - Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, 200062 China
42
Beierholm U, Rohe T, Ferrari A, Stegle O, Noppeney U. Using the past to estimate sensory uncertainty. eLife 2020; 9:54172. [PMID: 33319749 PMCID: PMC7806269 DOI: 10.7554/elife.54172] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Accepted: 12/13/2020] [Indexed: 01/14/2023] Open
Abstract
To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments, in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time, either continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals, consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference in which sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
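The exponential-discounting account of uncertainty estimation described in this abstract lends itself to a compact numerical sketch. The Python below is an illustrative toy, not the authors' fitted model: the learning rate `alpha`, the prior variance and all cue values are invented for demonstration. It tracks visual uncertainty as an exponentially discounted average of past squared deviations and then feeds that estimate into standard reliability-weighted audiovisual fusion:

```python
import numpy as np

def discounted_variance(deviations, alpha=0.3, prior_var=1.0):
    """Exponentially discounted running estimate of sensory variance.

    Each new squared deviation partially overwrites the previous
    estimate, so older samples decay geometrically (exponential
    discounting). alpha and prior_var are illustrative choices.
    """
    var = prior_var
    for d in deviations:
        var = (1 - alpha) * var + alpha * d**2
    return var

def fuse(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (forced-fusion) audiovisual location estimate."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    return w_v * mu_v + (1 - w_v) * mu_a

rng = np.random.default_rng(0)
past = rng.normal(0.0, 2.0, size=200)   # history of noisy visual deviations
var_v = discounted_variance(past)       # uncertainty learned from the past
# With equal reliabilities, fusion lands exactly midway between the cues:
midpoint = fuse(mu_a=0.0, var_a=1.0, mu_v=10.0, var_v=1.0)
# With the learned visual variance, the weighting shifts accordingly:
est = fuse(mu_a=0.0, var_a=1.0, mu_v=10.0, var_v=var_v)
```

The paper's central point is that such a discounting scheme approximates an optimal Bayesian learner when the environment's noise level drifts over time, which is what the intermittent-jump manipulation probes.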
Affiliation(s)
- Ulrik Beierholm
- Psychology Department, Durham University, Durham, United Kingdom
| | - Tim Rohe
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany.,Department of Psychology, Friedrich-Alexander University Erlangen-Nuernberg, Erlangen, Germany
| | - Ambra Ferrari
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, United Kingdom
| | - Oliver Stegle
- Max Planck Institute for Intelligent Systems, Tübingen, Germany.,European Molecular Biology Laboratory, Genome Biology Unit, Heidelberg, Germany.,Division of Computational Genomics and Systems Genetics, German Cancer Research Center (DKFZ), Heidelberg, Germany, Heidelberg, Germany
| | - Uta Noppeney
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, United Kingdom.,Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
43
Eeles E, Ward S, Teodorczuk A, Dissanayaka N, Burianová H. Consciousness and the rabbit holes of delirium. Med Hypotheses 2020; 144:110260. [PMID: 33254566 DOI: 10.1016/j.mehy.2020.110260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2020] [Revised: 09/02/2020] [Accepted: 09/05/2020] [Indexed: 11/24/2022]
Abstract
Delirium is a common disorder in hospitalized older adults, and its defining characteristic is a disturbance of consciousness. Unfortunately, there are currently no testable measures of consciousness as it pertains to its disruption in delirium. Not surprisingly, rates of recognition of delirium suffer. Arguably, a greater understanding of the quantum of consciousness may improve delirium diagnosis through better diagnostic tools. Candidate dimensions of consciousness derived from the fields of psychology, psychiatry, and philosophy are discussed, and their relevance to delirium is explored. Based upon the existing literature in the field of consciousness, we identify the pre-reflective state, experiential awareness, and functional networks as candidate sites that may be affected in delirium. Opportunities for clinical instrument development, and how these tools can be tested, are discussed. We conclude that the content of consciousness may not be amenable to a unitary measurement, but facets of its integrity that are impacted in delirium are open to further exploration. Disorders of the pre-reflective state, experiential awareness, and functional networks may represent the measurable "rabbit holes" of consciousness disturbance.
Affiliation(s)
- Eamonn Eeles
- Internal Medicine Service, The Prince Charles Hospital, Brisbane, QLD, Australia; School of Medicine, Northside Clinical School, The University of Queensland, The Prince Charles Hospital, QLD, Australia; UQ Centre for Clinical Research, Faculty of Medicine, The University of Queensland, QLD.
| | - S Ward
- UQ Centre for Clinical Research, Faculty of Medicine, The University of Queensland, QLD; Redcliffe Hospital, Redcliffe, QLD, Australia
| | - A Teodorczuk
- School of Medicine, Griffith University, Gold Coast, Australia; Metro North Mental Health, The Prince Charles Hospital, Brisbane, Australia
| | - N Dissanayaka
- UQ Centre for Clinical Research, Faculty of Medicine, The University of Queensland, QLD; Department of Neurology, Royal Brisbane & Women's Hospital, Herston, Brisbane, QLD, Australia; School of Psychology, The University of Queensland, St Lucia, Brisbane, QLD, Australia
| | - H Burianová
- Department of Psychology, Bournemouth University, Fern Barrow, Poole, Dorset UK
44
Mohl JT, Pearson JM, Groh JM. Monkeys and humans implement causal inference to simultaneously localize auditory and visual stimuli. J Neurophysiol 2020; 124:715-727. [PMID: 32727263 DOI: 10.1152/jn.00046.2020] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
Abstract
The environment is sampled by multiple senses, which are woven together to produce a unified perceptual state. However, optimally unifying such signals requires assigning particular signals to the same or different underlying objects or events. Many prior studies (especially in animals) have assumed fusion of cross-modal information, whereas recent work in humans has begun to probe the appropriateness of this assumption. Here we present results from a novel behavioral task in which both monkeys (Macaca mulatta) and humans localized visual and auditory stimuli and reported their perceived sources through saccadic eye movements. When the locations of visual and auditory stimuli were widely separated, subjects made two saccades, whereas when the two stimuli were presented at the same location they made only a single saccade. Intermediate levels of separation produced mixed response patterns: a single saccade to an intermediate position on some trials, or separate saccades to both locations on others. The distribution of responses was well described by a hierarchical causal inference model that accurately predicted both the explicit "same vs. different" source judgments and the biases in localization of the source(s) under each of these conditions. The results from this task are broadly consistent with prior work in humans across a wide variety of analogous tasks, extending the study of multisensory causal inference to nonhuman primates and to a natural behavioral task with both a categorical assay of the number of perceived sources and a continuous report of the perceived position of the stimuli. NEW & NOTEWORTHY We developed a novel behavioral paradigm for the study of multisensory causal inference in both humans and monkeys and found that both species make causal judgments in the same Bayes-optimal fashion. To our knowledge, this is the first demonstration of behavioral causal inference in animals, and this cross-species comparison lays the groundwork for future experiments using neuronal recording techniques that are impractical or impossible in human subjects.
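The hierarchical causal inference model referenced in this abstract is commonly formalized as Bayesian model averaging over a common-cause and an independent-causes hypothesis. A minimal sketch of that standard formulation follows; the noise parameters (`sigma_a`, `sigma_v`, `sigma_p`) and prior are illustrative placeholders, not values fitted to the monkey or human data:

```python
import math

def causal_inference(x_a, x_v, sigma_a=4.0, sigma_v=1.0, sigma_p=10.0, p_common=0.5):
    """Posterior probability of a common cause and the model-averaged
    auditory location estimate. All parameters are illustrative."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of the cue pair under one shared cause (source integrated
    # out against a zero-mean spatial prior):
    var_c = va * vv + va * vp + vv * vp
    L_c = math.exp(-0.5 * ((x_a - x_v) ** 2 * vp + x_a**2 * vv + x_v**2 * va)
                   / var_c) / (2 * math.pi * math.sqrt(var_c))
    # Likelihood under two independent causes:
    L_i = math.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
        / (2 * math.pi * math.sqrt((va + vp) * (vv + vp)))
    post_c = L_c * p_common / (L_c * p_common + L_i * (1 - p_common))
    # Precision-weighted location estimates under each causal structure:
    s_common = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    s_indep = (x_a / va) / (1 / va + 1 / vp)
    # Model averaging: blend the two estimates by the posterior.
    return post_c, post_c * s_common + (1 - post_c) * s_indep

p_same, s_hat = causal_inference(x_a=1.0, x_v=0.5)   # nearly coincident cues
p_diff, _ = causal_inference(x_a=12.0, x_v=-5.0)     # widely separated cues
```

Sweeping the disparity between `x_a` and `x_v` qualitatively mirrors the behavior reported above: high common-cause posterior (one fused saccade target) at small separations, low posterior (segregated estimates) at large ones.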
Affiliation(s)
- Jeff T Mohl
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina.,Center for Cognitive Neuroscience, Duke University, Durham, North Carolina.,Department of Neurobiology, Duke University, Durham, North Carolina
| | - John M Pearson
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina.,Center for Cognitive Neuroscience, Duke University, Durham, North Carolina.,Department of Neurobiology, Duke University, Durham, North Carolina.,Department of Psychology and Neuroscience, Duke University, Durham, North Carolina.,Department of Biostatistics and Bioinformatics, Duke University Medical School, Durham, North Carolina
| | - Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina.,Center for Cognitive Neuroscience, Duke University, Durham, North Carolina.,Department of Neurobiology, Duke University, Durham, North Carolina.,Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
45
Zuanazzi A, Noppeney U. Modality-specific and multisensory mechanisms of spatial attention and expectation. J Vis 2020; 20:1. [PMID: 32744617 PMCID: PMC7438668 DOI: 10.1167/jov.20.8.1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
In our natural environment, the brain needs to combine signals from multiple sensory modalities into a coherent percept. Whereas spatial attention guides perceptual decisions by prioritizing the processing of signals that are task-relevant, spatial expectations encode the probability of signals over space. Previous studies have shown that the behavioral effects of spatial attention generalize across sensory modalities. However, because they manipulated spatial attention as signal probability over space, these studies could not dissociate attention from expectation or assess their interaction. In two experiments, we orthogonally manipulated spatial attention (i.e., task relevance) and expectation (i.e., signal probability) selectively in one sensory modality, the primary modality (experiment 1: audition; experiment 2: vision), and assessed their effects on the primary and a secondary sensory modality in which attention and expectation were held constant. Our results show behavioral effects of spatial attention that are comparable for audition and vision as primary modalities; however, signal probabilities were learned more slowly in audition, so that spatial expectations were formed later in audition than in vision. Critically, when these differences in learning between audition and vision were accounted for, both spatial attention and expectation affected responses more strongly in the primary modality in which they were manipulated and generalized to the secondary modality only in an attenuated fashion. Collectively, our results suggest that both spatial attention and expectation rely on modality-specific and multisensory mechanisms.
46
Zumer JM, White TP, Noppeney U. The neural mechanisms of audiotactile binding depend on asynchrony. Eur J Neurosci 2020; 52:4709-4731. [PMID: 32725895 DOI: 10.1111/ejn.14928] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2020] [Revised: 07/06/2020] [Accepted: 07/24/2020] [Indexed: 11/30/2022]
Abstract
Asynchrony is a critical cue informing the brain whether sensory signals are caused by a common source and should be integrated or segregated. This psychophysics-electroencephalography (EEG) study investigated the influence of asynchrony on how the brain binds audiotactile (AT) signals to enable faster responses in a redundant target paradigm. Human participants actively responded (psychophysics) or passively attended (EEG) to noise bursts, "taps-to-the-face" and their AT combinations at seven AT asynchronies: 0, ±20, ±70 and ±500 ms. Behaviourally, observers were faster at detecting AT than unisensory stimuli within a temporal integration window: the redundant target effect was maximal for synchronous stimuli and declined for AT asynchronies up to ±70 ms. EEG revealed a cascade of AT interactions that relied on different neural mechanisms depending on AT asynchrony. At small (≤20 ms) asynchronies, AT interactions arose for event-related potentials (ERPs) at 110 ms and ~400 ms post-stimulus. Selectively at ±70 ms asynchronies, AT interactions were observed for the P200 ERP, theta-band inter-trial coherence (ITC) and power at ~200 ms post-stimulus. In conclusion, AT binding was mediated by distinct neural mechanisms depending on the asynchrony of the AT signals. Early AT interactions in ERPs and in theta-band ITC and power were critical for the behavioural response facilitation within a ±70 ms temporal integration window.
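Of the EEG measures listed in this abstract, inter-trial coherence has a particularly simple definition: the length of the mean resultant vector of the single-trial phases at one time-frequency point. A minimal sketch on synthetic phases (the distributions and trial counts are arbitrary):

```python
import numpy as np

def inter_trial_coherence(phases):
    """ITC at one time-frequency point: the length of the mean resultant
    vector of single-trial phases (1 = perfect phase locking across
    trials, ~0 = phases scattered at random)."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

rng = np.random.default_rng(1)
locked = rng.normal(0.0, 0.2, size=500)           # phases clustered near 0 rad
scattered = rng.uniform(-np.pi, np.pi, size=500)  # uniformly random phases
itc_locked = inter_trial_coherence(locked)        # close to 1
itc_scattered = inter_trial_coherence(scattered)  # close to 0
```

Because ITC is computed from phase alone, a cross-modal phase reset can raise ITC without any accompanying power change, which is one reason the study reports the two measures separately.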
Affiliation(s)
- Johanna M Zumer
- School of Psychology, University of Birmingham, Birmingham, UK.,Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK.,Centre for Human Brain Health, University of Birmingham, Birmingham, UK.,School of Life and Health Sciences, Aston University, Birmingham, UK
| | - Thomas P White
- School of Psychology, University of Birmingham, Birmingham, UK.,Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK
| | - Uta Noppeney
- School of Psychology, University of Birmingham, Birmingham, UK.,Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK.,Centre for Human Brain Health, University of Birmingham, Birmingham, UK.,Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
47
Causal Inference in Audiovisual Perception. J Neurosci 2020; 40:6600-6612. [PMID: 32669354 DOI: 10.1523/jneurosci.0051-20.2020] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 06/26/2020] [Accepted: 07/01/2020] [Indexed: 11/21/2022] Open
Abstract
In our natural environment the senses are continuously flooded with a myriad of signals. To form a coherent representation of the world, the brain needs to integrate sensory signals arising from a common cause and segregate signals coming from separate causes. An unresolved question is how the brain solves this binding or causal inference problem and determines the causal structure of the sensory signals. In this functional magnetic resonance imaging (fMRI) study, human observers (female and male) were presented with synchronous auditory and visual signals at the same location (i.e., common cause) or different locations (i.e., separate causes). On each trial, observers decided whether the signals came from common or separate sources (i.e., "causal decisions"). To dissociate participants' causal inference from the spatial correspondence cues, we adjusted the audiovisual disparity of the signals individually for each participant to threshold accuracy. Multivariate fMRI pattern analysis revealed the lateral prefrontal cortex as the only region that encodes predominantly the outcome of observers' causal inference (i.e., common vs. separate causes). By contrast, the frontal eye field (FEF) and the intraparietal sulcus (IPS0-4) form a circuitry that concurrently encodes spatial (auditory and visual stimulus locations), decisional (causal inference) and motor response dimensions. These results suggest that the lateral prefrontal cortex plays a key role in inferring and making explicit decisions about the causal structure that generates sensory signals in our environment. By contrast, informed by observers' inferred causal structure, the FEF-IPS circuitry integrates auditory and visual spatial signals into representations that guide motor responses. SIGNIFICANCE STATEMENT In our natural environment, our senses are continuously flooded with a myriad of signals. Transforming this barrage of sensory signals into a coherent percept of the world relies inherently on solving the causal inference problem: deciding whether sensory signals arise from a common cause and should hence be integrated, or else be segregated. This functional magnetic resonance imaging study shows that the lateral prefrontal cortex plays a key role in inferring the causal structure of the environment. Crucially, informed by the spatial correspondence cues and the inferred causal structure, the frontal eye field and the intraparietal sulcus form a circuitry that integrates auditory and visual spatial signals into representations that guide motor responses.
48
The interplay between multisensory integration and perceptual decision making. Neuroimage 2020; 222:116970. [PMID: 32454204 DOI: 10.1016/j.neuroimage.2020.116970] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2019] [Revised: 03/23/2020] [Accepted: 05/15/2020] [Indexed: 01/15/2023] Open
Abstract
Facing perceptual uncertainty, the brain combines information from different senses to make optimal perceptual decisions and to guide behavior. However, decision making has been investigated mostly in unimodal contexts, so how the brain integrates multisensory information during decision making is still unclear. Two opposing, but not mutually exclusive, scenarios are plausible: either the brain thoroughly combines the signals from different modalities before starting to build a supramodal decision, or unimodal signals are integrated during decision formation. To answer this question, we devised a paradigm mimicking naturalistic situations in which human participants were exposed to continuous cacophonous audiovisual inputs containing an unpredictable signal cue in one or two modalities and had to perform a signal detection task or a cue categorization task. First, model-based analyses of behavioral data indicated that multisensory integration takes place alongside perceptual decision making. Next, using supervised machine learning on concurrently recorded EEG, we identified neural signatures of two processing stages: sensory encoding and decision formation. Generalization analyses across experimental conditions and time revealed that multisensory cues were processed faster during both stages. We further established that the acceleration of neural dynamics during sensory encoding and decision formation was directly linked to multisensory integration. Our results were consistent across both the signal detection and categorization tasks. Taken together, the results reveal a continuous, dynamic interplay between multisensory integration and decision-making processes (a mixed scenario), with integration of multimodal information taking place both during sensory encoding and during decision formation.
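The finding that multisensory cues speed decision formation can be caricatured with a one-dimensional evidence accumulator in which the multisensory condition simply carries a larger drift rate. This is a generic drift-diffusion sketch with arbitrary parameters, not the model-based analysis used in the study:

```python
import numpy as np

def decision_time(drift, threshold=1.0, dt=0.001, noise=1.0, rng=None, max_t=10.0):
    """First-passage time of a noisy evidence accumulator: evidence drifts
    toward a bound at `drift` units per second and the decision fires when
    either bound (+/- threshold) is reached. All parameters are arbitrary."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t

rng = np.random.default_rng(2)
# A unisensory drift vs. a stronger drift standing in for combined
# audiovisual evidence; stronger drift means earlier bound crossing on average.
rt_uni = float(np.mean([decision_time(0.8, rng=rng) for _ in range(200)]))
rt_multi = float(np.mean([decision_time(2.0, rng=rng) for _ in range(200)]))
```

In this caricature the speed-up appears only at the decision stage; the study's point is that EEG shows acceleration during sensory encoding as well, which a single-drift model like this does not capture.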
49
French RL, DeAngelis GC. Multisensory neural processing: from cue integration to causal inference. CURRENT OPINION IN PHYSIOLOGY 2020; 16:8-13. [PMID: 32968701 DOI: 10.1016/j.cophys.2020.04.004] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Neurophysiological studies of multisensory processing have largely focused on how the brain integrates information from different sensory modalities to form a coherent percept. However, in the natural environment, an important extra step is needed: the brain faces the problem of causal inference, which involves determining whether different sources of sensory information arise from the same environmental cause, such that integrating them is advantageous. Behavioral and computational studies have provided a strong foundation for studying causal inference, but studies of its neural basis have only recently been undertaken. This review focuses on recent advances regarding how the brain infers the causes of sensory inputs and uses this information to make robust perceptual estimates.
Affiliation(s)
- Ranran L French
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY
| | - Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY
50
Decoding across sensory modalities reveals common supramodal signatures of conscious perception. Proc Natl Acad Sci U S A 2020; 117:7437-7446. [PMID: 32184331 PMCID: PMC7132110 DOI: 10.1073/pnas.1912584117] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
An increasing number of studies highlight common brain regions and processes in mediating conscious sensory experience. While most studies have been performed in the visual modality, it is implicitly assumed that similar processes are involved in other sensory modalities. However, the existence of supramodal neural processes related to conscious perception has not been convincingly shown so far. Here, we aim to directly address this issue by investigating whether neural correlates of conscious perception in one modality can predict conscious perception in a different modality. In two separate experiments, we presented participants with successive blocks of near-threshold tasks involving subjective reports of tactile, visual, or auditory stimuli during the same magnetoencephalography (MEG) acquisition. Using decoding analysis between sensory modalities in the poststimulus period, our first experiment uncovered supramodal spatiotemporal patterns of neural activity predicting conscious perception of the feeble stimulation. Strikingly, these supramodal patterns included activity in primary sensory regions not directly relevant to the task (e.g., neural activity in visual cortex predicting conscious perception of auditory near-threshold stimulation). We carefully replicate our results in a control experiment that furthermore shows that the relevant patterns are independent of the type of report (i.e., whether conscious perception was reported by pressing or withholding a button press). Using standard paradigms for probing neural correlates of conscious perception, our findings reveal a common signature of conscious access across sensory modalities and illustrate the temporally late and widespread broadcasting of neural representations, even into task-unrelated primary sensory processing regions.
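Decoding between modalities, the core analysis in this study, reduces to training a classifier on trials from one modality and testing it on trials from another; above-chance transfer implies a shared (supramodal) pattern. The toy below fabricates such a pattern in synthetic "sensor" data and recovers it with a nearest-centroid decoder; every dimension, offset and effect size is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_sensors = 200, 20
# A single "supramodal" activity pattern separates consciously perceived
# from missed trials in BOTH synthetic modalities:
pattern = rng.normal(size=n_sensors)

def make_block(modality_offset):
    """Synthetic trials: noise + modality-specific offset + perception signal."""
    y = rng.integers(0, 2, size=n_trials)           # 1 = reported as perceived
    X = rng.normal(size=(n_trials, n_sensors)) + modality_offset
    X += np.outer(y - 0.5, pattern) * 2.0           # shared perception pattern
    return X, y

X_tac, y_tac = make_block(0.0)   # "tactile" training block
X_aud, y_aud = make_block(0.5)   # "auditory" test block

# Nearest-centroid decoder fit on tactile trials, tested on auditory trials:
c0 = X_tac[y_tac == 0].mean(axis=0)
c1 = X_tac[y_tac == 1].mean(axis=0)
pred = np.linalg.norm(X_aud - c1, axis=1) < np.linalg.norm(X_aud - c0, axis=1)
cross_acc = float(np.mean(pred == y_aud))   # well above the 0.5 chance level
```

Transfer succeeds here because the discriminative pattern is shared across blocks while the modality-specific offset merely shifts both classes equally; remove the shared pattern and cross-modal accuracy falls to chance.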