1. Huizhen Tang J, Solomon SS, Kohn A, Sussman ES. Distinguishing expectation and attention effects in processing temporal patterns of visual input. Brain Cogn 2024;182:106228. PMID: 39461075; PMCID: PMC11645222; DOI: 10.1016/j.bandc.2024.106228.
Abstract
The current study investigated how the brain sets up expectations from stimulus regularities by evaluating the neural responses to expectations driven implicitly (by the stimuli themselves) and explicitly (by task demands). How the brain uses prior information to create expectations, and what role attention plays in forming or holding predictions to respond efficiently to incoming sensory information, is still debated. We presented temporal patterns of visual input while recording EEG under two different task conditions. When the patterns were task-relevant and pattern recognition was required to perform the button-press task, three different event-related brain potentials (ERPs) were elicited, each reflecting a different aspect of pattern expectation. In contrast, when the patterns were task-irrelevant, none of the neural indicators of pattern recognition or pattern violation detection were observed in response to the same temporally structured sequences. Results thus revealed a clear distinction between expectation and attention that was prompted by task requirements. These findings provide complementary evidence that implicit exposure to a stimulus pattern may not be sufficient to drive the neural expectation effects that lead to predictive error responses. Task-driven attentional control can dissociate from stimulus-driven expectations to effectively minimize distracting information and maximize attentional regulation.
Affiliations
- Joann Huizhen Tang: Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, 1300 Morris Park Avenue, Bronx, NY 10461, USA
- Selina S Solomon: Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, 1300 Morris Park Avenue, Bronx, NY 10461, USA
- Adam Kohn: Dominick P. Purpura Department of Neuroscience; Department of Ophthalmology and Vision Sciences; Department of Systems and Computational Biology, Albert Einstein College of Medicine, 1300 Morris Park Avenue, Bronx, NY 10461, USA
- Elyse S Sussman: Dominick P. Purpura Department of Neuroscience; Department of Otorhinolaryngology - Head & Neck Surgery, Albert Einstein College of Medicine, 1300 Morris Park Avenue, Bronx, NY 10461, USA
2. Kayser C, Debats N, Heuer H. Both stimulus-specific and configurational features of multiple visual stimuli shape the spatial ventriloquism effect. Eur J Neurosci 2024;59:1770-1788. PMID: 38230578; DOI: 10.1111/ejn.16251.
Abstract
Studies on multisensory perception often focus on simplistic conditions in which a single stimulus is presented per modality. Yet in everyday life we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own spatio-temporal alignment to the sound but also on the spatio-temporal alignment of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, highlighting the need to extend established models of multisensory causal inference to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.
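The causal inference models that the authors propose extending rest on a standard ingredient: reliability-weighted fusion gated by the inferred probability of a common source. The Python sketch below illustrates that standard single-visual-stimulus formulation, not the authors' multi-stimulus model; the Gaussian noise, prior spread, and prior common-cause probability are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def ventriloquism_bias(x_a, x_v, sigma_a, sigma_v, sigma_p=20.0, p_common=0.5):
    """Predicted shift of the auditory location estimate toward a visual
    stimulus, under Bayesian causal inference with model averaging.

    Illustrative assumed parameters: sensory noise sigma_a/sigma_v, prior
    spatial spread sigma_p, prior common-cause probability p_common.
    """
    # Forced fusion: weight each cue by its reliability (1 / variance)
    w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)
    fused = w_v * x_v + (1 - w_v) * x_a

    # Likelihood of the audio-visual discrepancy under one vs. two causes
    # (with independent causes, each source location ~ N(0, sigma_p^2))
    like_c1 = norm.pdf(x_v - x_a, 0, np.sqrt(sigma_a**2 + sigma_v**2))
    like_c2 = norm.pdf(x_v - x_a, 0, np.sqrt(sigma_a**2 + sigma_v**2 + 2 * sigma_p**2))
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Model averaging: mix the fused and auditory-only estimates
    estimate = post_c1 * fused + (1 - post_c1) * x_a
    return estimate - x_a  # bias relative to the auditory input

# A nearby visual stimulus pulls the sound estimate strongly; a distant
# one lowers the common-cause posterior and hence the bias.
print(ventriloquism_bias(x_a=0.0, x_v=5.0, sigma_a=8.0, sigma_v=2.0))
print(ventriloquism_bias(x_a=0.0, x_v=40.0, sigma_a=8.0, sigma_v=2.0))
```

A multi-stimulus extension of the kind called for here would additionally have to specify how two visual stimuli jointly enter the causal structure, which is exactly what this two-signal formulation leaves open.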
Affiliations
- Christoph Kayser: Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Nienke Debats: Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer: Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
3. Sciortino P, Kayser C. Steady state visual evoked potentials reveal a signature of the pitch-size crossmodal association in visual cortex. Neuroimage 2023;273:120093. PMID: 37028733; DOI: 10.1016/j.neuroimage.2023.120093.
Abstract
Crossmodal correspondences describe our tendency to associate sensory features from different modalities with each other, such as the pitch of a sound with the size of a visual object. While such crossmodal correspondences (or associations) are described in many behavioural studies, their neurophysiological correlates remain unclear. Under the current working model of multisensory perception, both a low- and a high-level account seem plausible. That is, the neurophysiological processes shaping these associations could commence in low-level sensory regions, or may predominantly emerge in high-level association regions of semantic and object identification networks. We exploited steady-state visual evoked potentials (SSVEPs) to directly probe this question, focusing on the associations between pitch and the visual features of size, hue, or chromatic saturation. We found that SSVEPs over occipital regions are sensitive to the congruency between pitch and size, and a source analysis pointed to an origin around primary visual cortices. We speculate that this signature of the pitch-size association in low-level visual cortices reflects the successful pairing of congruent visual and acoustic object properties and may contribute to establishing causal relations between multisensory objects. Beyond this, our study provides a paradigm that can be exploited to study other crossmodal associations involving visual stimuli in the future.
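SSVEP effects of this kind are conventionally quantified as spectral amplitude at the tagging frequency relative to neighboring frequency bins. The sketch below shows that generic readout; it is not the authors' analysis pipeline, and the 6 Hz tag, 500 Hz sampling rate, and signal-to-noise definition are assumptions for illustration.

```python
import numpy as np

def ssvep_snr(eeg, fs, f_tag, n_neighbors=10):
    """Amplitude at the tagging frequency divided by the mean amplitude of
    surrounding bins (a common SSVEP signal-to-noise measure).

    eeg: 1-D voltage trace from one (e.g., occipital) channel and epoch.
    fs: sampling rate in Hz; f_tag: stimulation frequency in Hz.
    """
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_tag)))  # bin closest to the tag
    # Surrounding bins, skipping the tag bin and its immediate neighbors
    neighbors = np.r_[k - n_neighbors:k - 1, k + 2:k + n_neighbors + 1]
    return spectrum[k] / spectrum[neighbors].mean()

# Synthetic demo: a 6 Hz SSVEP embedded in noise, 2 s epoch at 500 Hz
fs, f_tag = 500, 6.0
t = np.arange(0, 2, 1 / fs)
eeg = 2.0 * np.sin(2 * np.pi * f_tag * t) + np.random.randn(t.size)
print(ssvep_snr(eeg, fs, f_tag))
```

A congruency effect would then appear as a difference in this measure (or in raw tag-frequency amplitude) between congruent and incongruent pitch-size pairings.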
4.
Abstract
Temporal attention is the selection and prioritization of information at a specific moment. Exogenous temporal attention is the automatic, stimulus-driven deployment of attention. The benefits and costs of exogenous temporal attention for performance have not been isolated, because previous experimental designs have precluded distinguishing the effects of attention from expectations about stimulus timing. Here, we manipulated exogenous temporal attention and the uncertainty of stimulus timing independently and investigated visual performance at the attended and unattended moments under different levels of temporal uncertainty. In each trial, two Gabor patches were presented consecutively with variable onset timing. To drive exogenous attention and test performance at attended and unattended moments, a task-irrelevant, brief cue was presented 100 ms before target onset, and an independent response cue was presented at the end of the trial. Exogenous temporal attention slightly improved accuracy, and the effects varied with temporal uncertainty, suggesting a possible interaction of temporal attention and expectations in time.
Affiliations
- Aysun Duyar: Department of Psychology, New York University, New York, NY, USA
- Rachel N Denison: Department of Psychology, New York University, New York, NY, USA; Department of Psychological & Brain Sciences, Boston University, Boston, MA, USA; Center for Neural Science, New York University, New York, NY, USA
- Marisa Carrasco: Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
5. Magnetoencephalography recordings reveal the neural mechanisms of auditory contributions to improved visual detection. Commun Biol 2023;6:12. PMID: 36604455; PMCID: PMC9816120; DOI: 10.1038/s42003-022-04335-3.
Abstract
Sounds enhance the detection of visual stimuli while concurrently biasing an observer's decisions. To investigate the neural mechanisms that underlie such multisensory interactions, we decoded time-resolved Signal Detection Theory sensitivity and criterion parameters from magnetoencephalographic recordings of participants who performed a visual detection task. We found that sounds improved visual detection sensitivity by enhancing the accumulation and maintenance of perceptual evidence over time. Meanwhile, criterion decoding analyses revealed that sounds induced brain activity patterns that resembled the patterns evoked by an actual visual stimulus. These two complementary mechanisms of audiovisual interplay differed in terms of their automaticity: whereas the sound-induced enhancement in visual sensitivity depended on participants being actively engaged in a detection task, sounds activated the visual cortex irrespective of task demands, potentially inducing visual illusory percepts. These results challenge the classical assumption that sound-induced increases in false alarms exclusively reflect decision-level biases.
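The decoded Signal Detection Theory parameters have standard closed forms. The sketch below shows the textbook computation of sensitivity (d') and criterion (c) from hit and false-alarm counts, as would apply at any single time point; it is not the authors' time-resolved decoding pipeline, and the log-linear correction is one conventional choice among several.

```python
from scipy.stats import norm

def sdt_parameters(hits, misses, fas, crs):
    """Sensitivity d' and criterion c from trial counts.

    The log-linear correction (add 0.5 to each cell) keeps the
    z-transform finite when hit or false-alarm rates are 0 or 1.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)  # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1.0)       # corrected false-alarm rate
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, criterion

# Hypothetical example: a sound raises hits but also false alarms,
# i.e., higher sensitivity together with a more liberal criterion.
print(sdt_parameters(hits=35, misses=15, fas=10, crs=40))  # no sound
print(sdt_parameters(hits=42, misses=8, fas=16, crs=34))   # with sound
```

In the decoding setting, the hit and false-alarm rates would be replaced by classifier-derived estimates at each time point rather than raw behavioral counts.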
6. Symptom Perception in Pathological Illness Anxiety: Tactile Sensitivity and Bias. Psychosom Med 2023;85:79-88. PMID: 36516317; DOI: 10.1097/psy.0000000000001154.
Abstract
OBJECTIVE: Symptom perception in pathological illness anxiety (PIA) might be biased so that somatic signals are overreported. In the somatic signal detection task (SSDT), performance in detecting weak tactile stimuli indicates overreporting or underreporting of stimuli. This task has not yet been applied in PIA.
METHODS: Participants with PIA (n = 44) and healthy controls (n = 40) underwent two versions of the SSDT in randomized order. In the original version, tactile and auxiliary light-emitting diode (LED) stimuli were each presented in half of the trials. In the adapted version, illness-related or neutral words were presented alongside tactile stimuli. Participants also completed a heartbeat mental tracking task.
RESULTS: We found significantly higher sensitivity and a more liberal response bias in LED versus no-LED trials, but no significant differences between word types. An interaction effect showed a more pronounced increase in sensitivity from no-LED to LED trials in participants with PIA compared with the adapted SSDT and the control group (F(1,76) = 5.34, p = .024, η² = 0.066). Heartbeat perception scores did not differ between groups (BF01 = 3.63).
CONCLUSIONS: The increase in sensitivity from no-LED to LED trials in participants with PIA suggests stronger multisensory integration. Low sensitivity in the adapted SSDT indicates that attentional resources were exhausted by processing word stimuli. Word effects on response bias might have carried over to the original SSDT when the word version was presented first, compromising group effects regarding bias.
TRIAL REGISTRATION: The study was preregistered on OSF (https://osf.io/sna5v/).
7. Ferrari A, Noppeney U. Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biol 2021;19:e3001465. PMID: 34793436; PMCID: PMC8639080; DOI: 10.1371/journal.pbio.3001465.
Abstract
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals' causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
Affiliations
- Ambra Ferrari: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Uta Noppeney: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
8. Keefe JM, Pokta E, Störmer VS. Cross-modal orienting of exogenous attention results in visual-cortical facilitation, not suppression. Sci Rep 2021;11:10237. PMID: 33986384; PMCID: PMC8119727; DOI: 10.1038/s41598-021-89654-x.
Abstract
Attention may be oriented exogenously (i.e., involuntarily) to the location of salient stimuli, resulting in improved perception. However, it is unknown whether exogenous attention improves perception by facilitating processing of attended information, suppressing processing of unattended information, or both. To address this question, we measured behavioral performance and cue-elicited neural changes in the electroencephalogram as participants (N = 19) performed a task in which a spatially non-predictive auditory cue preceded a visual target. Critically, this cue was presented either at a peripheral target location or from the center of the screen, allowing us to isolate spatially specific attentional activity. We find that both behavior and attention-mediated changes in visual-cortical activity are enhanced at the location of a cue prior to the onset of a target, but that behavior and neural activity at an unattended target location are equivalent to those following a central cue that does not direct attention (i.e., baseline). These results suggest that exogenous attention operates via facilitation of information at an attended location.
Affiliations
- Jonathan M Keefe: Department of Psychology, University of California, San Diego, CA 92092, USA
- Emilia Pokta: Department of Psychology, University of California, San Diego, CA 92092, USA
- Viola S Störmer: Department of Psychology, University of California, San Diego, CA 92092, USA; Department of Brain and Psychological Sciences, Dartmouth College, Hanover, NH, USA
9. Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021;138:1-23. PMID: 33676086; DOI: 10.1016/j.cortex.2021.02.001.
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
Affiliations
- Samuel A Jones: The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK
- Uta Noppeney: Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
10. Auditory information enhances post-sensory visual evidence during rapid multisensory decision-making. Nat Commun 2020;11:5440. PMID: 33116148; PMCID: PMC7595090; DOI: 10.1038/s41467-020-19306-7.
Abstract
Despite recent progress in understanding multisensory decision-making, a conclusive mechanistic account of how the brain translates the relevant evidence into a decision is lacking. Specifically, it remains unclear whether perceptual improvements during rapid multisensory decisions are best explained by sensory (i.e., ‘Early’) processing benefits or post-sensory (i.e., ‘Late’) changes in decision dynamics. Here, we employ a well-established visual object categorisation task in which early sensory and post-sensory decision evidence can be dissociated using multivariate pattern analysis of the electroencephalogram (EEG). We capitalize on these distinct neural components to identify when and how complementary auditory information influences the encoding of decision-relevant visual evidence in a multisensory context. We show that it is primarily the post-sensory, rather than the early sensory, EEG component amplitudes that are amplified during rapid audiovisual decision-making. Using a neurally informed drift diffusion model, we demonstrate that the multisensory improvement in accuracy arises from an enhanced quality of the relevant decision evidence, as captured by the post-sensory EEG component, consistent with the emergence of multisensory evidence in higher-order brain areas.
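At the core of a neurally informed drift diffusion model is a simple accumulation-to-bound process. The generic simulation below, which is not the authors' fitted model, shows how a higher drift rate (standing in for enhanced post-sensory evidence quality) produces faster and more accurate decisions; all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, t_nd=0.3,
                 n_trials=1000, seed=0):
    """Simulate a two-boundary drift diffusion model.

    Evidence starts at 0 and accumulates with mean rate `drift` until it
    reaches +boundary (correct) or -boundary (error); t_nd is the
    non-decision time added to each response time.
    """
    rng = np.random.default_rng(seed)
    correct, rts = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        correct.append(x >= boundary)
        rts.append(t + t_nd)
    return float(np.mean(correct)), float(np.mean(rts))

# Higher drift (better evidence quality) -> more accurate and faster
print("visual only :", simulate_ddm(drift=0.8))
print("audiovisual :", simulate_ddm(drift=1.2))
```

In the neurally informed variant, the drift rate would be constrained trial by trial by the post-sensory EEG component amplitude rather than fixed per condition.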
11. Zuanazzi A, Noppeney U. Modality-specific and multisensory mechanisms of spatial attention and expectation. J Vis 2020;20:1. PMID: 32744617; PMCID: PMC7438668; DOI: 10.1167/jov.20.8.1.
Abstract
In our natural environment, the brain needs to combine signals from multiple sensory modalities into a coherent percept. Whereas spatial attention guides perceptual decisions by prioritizing processing of signals that are task-relevant, spatial expectations encode the probability of signals over space. Previous studies have shown that behavioral effects of spatial attention generalize across sensory modalities. However, because they manipulated spatial attention as signal probability over space, these studies could not dissociate attention and expectation or assess their interaction. In two experiments, we orthogonally manipulated spatial attention (i.e., task relevance) and expectation (i.e., signal probability) selectively in one sensory modality (the primary modality; experiment 1: audition, experiment 2: vision) and assessed their effects on primary and secondary sensory modalities in which attention and expectation were held constant. Our results show behavioral effects of spatial attention that are comparable for audition and vision as primary modalities; however, signal probabilities were learned more slowly in audition, so that spatial expectations were formed later in audition than in vision. Critically, when these differences in learning between audition and vision were accounted for, both spatial attention and expectation affected responses more strongly in the primary modality in which they were manipulated and generalized to the secondary modality only in an attenuated fashion. Collectively, our results suggest that both spatial attention and expectation rely on modality-specific and multisensory mechanisms.
12. Chebat DR. Introduction to the Special Issue on Multisensory Space - Perception, Neural Representation and Navigation. Multisens Res 2020;33:375-382. PMID: 33706284; DOI: 10.1163/22134808-bja10004.
Affiliations
- Daniel-Robert Chebat: Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Israel; Navigation and Accessibility Research Center of Ariel University (NARCA), Israel