1
Gao M, Zhu W, Drewes J. The temporal dynamics of conscious and unconscious audio-visual semantic integration. Heliyon 2024; 10:e33828. [PMID: 39055801] [PMCID: PMC11269866] [DOI: 10.1016/j.heliyon.2024.e33828]
Abstract
We compared the time course of cross-modal semantic effects induced by naturalistic sounds and spoken words on the processing of visual stimuli that were either visible or suppressed from awareness through continuous flash suppression. Under visible conditions, spoken words elicited audio-visual semantic effects over a wider range of SOAs (-1000, -500, -250 ms) than naturalistic sounds (-500, -250 ms). Performance was generally better with auditory primes, and more so with congruent stimuli. Spoken words presented in advance (-1000, -500 ms) outperformed naturalistic sounds; the opposite held for (near-)simultaneous presentations. Congruent spoken words also yielded better categorization performance than congruent naturalistic sounds. The audio-visual semantic congruency effect still occurred with suppressed visual stimuli, although without significant differences in temporal patterns between the auditory types. These findings indicate that: 1. Semantically congruent auditory input can enhance visual processing performance, even when the visual stimulus is imperceptible to conscious awareness. 2. The temporal dynamics are contingent on the auditory type only when the visual stimulus is visible. 3. Audio-visual semantic integration requires sufficient time for processing auditory information.
Affiliation(s)
- Mingjie Gao
- School of Information Science, Yunnan University, Kunming, China
- Weina Zhu
- School of Information Science, Yunnan University, Kunming, China
- Jan Drewes
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
2
Zhao S, Zhou Y, Ma F, Xie J, Feng C, Feng W. The dissociation of semantically congruent and incongruent cross-modal effects on the visual attentional blink. Front Neurosci 2023; 17:1295010. [PMID: 38161792] [PMCID: PMC10755906] [DOI: 10.3389/fnins.2023.1295010]
Abstract
Introduction: Recent studies have found that the sound-induced alleviation of visual attentional blink, a well-known phenomenon exemplifying the beneficial influence of multisensory integration on time-based attention, was larger when that sound was semantically congruent relative to incongruent with the second visual target (T2). Although such an audiovisual congruency effect has been attributed mainly to the semantic conflict carried by the incongruent sound restraining that sound from facilitating T2 processing, it is still unclear whether the integrated semantic information carried by the congruent sound benefits T2 processing.
Methods: To dissociate the congruence-induced benefit and incongruence-induced reduction in the alleviation of visual attentional blink at the behavioral and neural levels, the present study combined behavioral measures and event-related potential (ERP) recordings in a visual attentional blink task wherein the T2-accompanying sound, when delivered, could be semantically neutral in addition to congruent or incongruent with respect to T2.
Results: The behavioral data clearly showed that, compared to the neutral sound, the congruent sound improved T2 discrimination during the blink to a greater degree while the incongruent sound improved it to a lesser degree. The T2-locked ERP data revealed that the early occipital cross-modal N195 component (192-228 ms after T2 onset) was uniquely larger in the congruent-sound condition than in the neutral-sound and incongruent-sound conditions, whereas the late parietal cross-modal N440 component (400-500 ms) was prominent only in the incongruent-sound condition.
Discussion: These findings provide strong evidence that the modulating effect of audiovisual semantic congruency on the sound-induced alleviation of visual attentional blink contains not only a late incongruence-induced cost but also an early congruence-induced benefit, thereby demonstrating for the first time an unequivocal congruent-sound-induced benefit in alleviating the limitation of time-based visual attention.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yuxin Zhou
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Fangfang Ma
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Jimei Xie
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
3
Zhao S, Wang C, Chen M, Zhai M, Leng X, Zhao F, Feng C, Feng W. Cross-modal enhancement of spatially unpredictable visual target discrimination during the attentional blink. Atten Percept Psychophys 2023; 85:2178-2195. [PMID: 37312000] [DOI: 10.3758/s13414-023-02739-9]
Abstract
The attentional blink can be substantially reduced by delivering a task-irrelevant sound synchronously with the second target (T2) embedded in a rapid serial visual presentation stream, an effect that is further modulated by the semantic congruency between the sound and T2. The present study extended the cross-modal boost during the attentional blink, and its modulation by audiovisual semantic congruency, into the spatial domain by showing that a spatially uninformative, semantically congruent (but not incongruent) sound could even improve the discrimination of a spatially unpredictable T2 during the attentional blink. T2-locked event-related potential (ERP) data revealed that the early cross-modal P195 difference component (184-234 ms) over the occipital scalp contralateral to the T2 location was larger preceding accurate than inaccurate discriminations of semantically congruent, but not incongruent, audiovisual T2s. Interestingly, the N2pc component (194-244 ms) associated with visual-spatial attentional allocation was enlarged for incongruent audiovisual T2s relative to congruent audiovisual and unisensory visual T2s only when they were accurately discriminated. These ERP findings suggest that the spatially extended cross-modal boost during the attentional blink involves an early cross-modal interaction strengthening the perceptual processing of T2, without any sound-induced enhancement of visual-spatial attentional allocation toward T2. In contrast, the absence of an accuracy decrease in response to semantically incongruent audiovisual T2s may originate from the semantic mismatch capturing extra visual-spatial attentional resources toward T2.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Chongzhi Wang
- Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Minran Chen
- Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Mengdie Zhai
- Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Xuechen Leng
- Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Fan Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, 215123, Jiangsu, China
4
Teramoto W, Ernst MO. Effects of invisible lip movements on phonetic perception. Sci Rep 2023; 13:6478. [PMID: 37081084] [PMCID: PMC10119180] [DOI: 10.1038/s41598-023-33791-y]
Abstract
We investigated whether 'invisible' visual information, i.e., visual information that is not consciously perceived, could affect auditory speech perception. Repeated exposure to McGurk stimuli (auditory /ba/ with visual [ga]) temporarily changes the perception of the auditory /ba/ into a 'da' or 'ga'. This altered auditory percept persists even after the presentation of the McGurk stimuli when the auditory stimulus is presented alone (McGurk aftereffect). We exploited this aftereffect and presented the auditory /ba/ either without a face (No Face) or with a masked face articulating a visual [ba] (Congruent Invisible) or a visual [ga] (Incongruent Invisible). Thus, we measured the extent to which the invisible faces could undo or prolong the McGurk aftereffects. In a further control condition, the incongruent faces remained unmasked and thus visible, resulting in four conditions in total. Visibility was defined by the participants' subjective dichotomous reports ('visible' or 'invisible'). The results showed that the Congruent Invisible condition reduced the McGurk aftereffects compared with the other conditions, while the Incongruent Invisible condition did not differ from the No Face condition. These results suggest that 'invisible' visual information that is not consciously perceived can affect phonetic perception, but only when the visual information is congruent with the auditory information.
Affiliation(s)
- W Teramoto
- Faculty of Humanities and Cultural Sciences (Psychology), Kumamoto University, 2-40-1 Kurokami, Kumamoto, 860-8555, Japan
- M O Ernst
- Applied Cognitive Psychology, Ulm University, Albert-Einstein-Allee 43, 89081, Ulm, Germany
5
Sun Y, Fu Q. How do irrelevant stimuli from another modality influence responses to the targets in a same-different task. Conscious Cogn 2023; 107:103455. [PMID: 36586291] [DOI: 10.1016/j.concog.2022.103455]
Abstract
It remains unclear whether multisensory interaction can implicitly occur at the abstract level. To address this issue, a same-different task was used to select comparable images and sounds in Experiment 1. Then, stimuli with various levels of discrimination difficulty were adopted in a modified same-different task in Experiments 2, 3, and 4. The results showed that a consistency effect could be observed in the testing phase only when the irrelevant stimuli were easily distinguishable. Moreover, when easily distinguishable irrelevant stimuli were simultaneously presented with difficult target stimuli in the training phase, irrelevant auditory stimuli facilitated responses to visual targets whereas irrelevant visual stimuli interfered with responses to auditory targets, indicating an asymmetry between the visual and auditory modalities in abstract multisensory integration. The results suggest that abstract multisensory information can be implicitly integrated and that the inverse effectiveness principle might not apply to high-level processing in abstract multisensory integration.
Affiliation(s)
- Ying Sun
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Qiufang Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
6
Phasic Alertness and Multisensory Integration Contribute to Visual Awareness of Weak Visual Targets in Audio-Visual Stimulation under Continuous Flash Suppression. Vision (Basel) 2022; 6:vision6020031. [PMID: 35737418] [PMCID: PMC9228768] [DOI: 10.3390/vision6020031]
Abstract
Multisensory stimulation is associated with behavioural benefits, including faster processing speed, higher detection accuracy, and increased subjective awareness. These effects are most likely explained by multisensory integration, alertness, or a combination of the two. To examine changes in subjective awareness under multisensory stimulation, we conducted three experiments in which we used Continuous Flash Suppression to mask subthreshold visual targets for healthy observers. Using the Perceptual Awareness Scale, participants reported their level of awareness of the visual target on a trial-by-trial basis. The first experiment used an audio-visual Redundant Signal Effect paradigm, in which we found faster reaction times in the audio-visual condition than in response to auditory or visual signals alone. In the two subsequent experiments, we separated the auditory and visual signals, first spatially (Experiment 2) and then temporally (Experiment 3), to test whether the behavioural benefits in our multisensory stimulation paradigm could best be explained by multisensory integration or by increased phasic alerting. Based on the findings, we conclude that the largest contributors to increased awareness of visual stimuli accompanied by auditory tones are a rise in phasic alertness and a reduction in temporal uncertainty, with a small but significant contribution of multisensory integration.
7
Zhao S, Wang C, Feng C, Wang Y, Feng W. The interplay between audiovisual temporal synchrony and semantic congruency in the cross-modal boost of the visual target discrimination during the attentional blink. Hum Brain Mapp 2022; 43:2478-2494. [PMID: 35122347] [PMCID: PMC9057096] [DOI: 10.1002/hbm.25797]
Abstract
The visual attentional blink can be substantially reduced by delivering a task-irrelevant sound synchronously with the second visual target (T2), and this effect is further modulated by the semantic congruency between the sound and T2. However, whether the cross-modal benefit originates from audiovisual interactions or from sound-induced alertness remains controversial, and whether the semantic congruency effect is contingent on audiovisual temporal synchrony needs further investigation. The current study investigated these questions by recording event-related potentials (ERPs) in a visual attentional blink task wherein a sound could either synchronize with T2, precede T2 by 200 ms, lag behind T2 by 100 ms, or be absent, and could be either semantically congruent or incongruent with T2 when delivered. The behavioral data showed that both the cross-modal boost of T2 discrimination and the further semantic modulation were largest when the sound synchronized with T2. In parallel, the ERP data revealed that both the early occipital cross-modal P195 component (192-228 ms after T2 onset) and the late parietal cross-modal N440 component (424-448 ms) were prominent only when the sound synchronized with T2, with the former being elicited solely when the sound was also semantically congruent and the latter occurring only when the sound was incongruent. These findings demonstrate not only that the cross-modal boost of T2 discrimination during the attentional blink stems from early audiovisual interactions and that the semantic congruency effect depends on audiovisual temporal synchrony, but also that the semantic modulation can unfold at an early stage of visual discrimination processing.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Department of English, School of Foreign Languages, Soochow University, Suzhou, China
- Chongzhi Wang
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yijun Wang
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
8
Spence C, Wang QJ, Reinoso-Carvalho F, Keller S. Commercializing Sonic Seasoning in Multisensory Offline Experiential Events and Online Tasting Experiences. Front Psychol 2021; 12:740354. [PMID: 34659056] [PMCID: PMC8514999] [DOI: 10.3389/fpsyg.2021.740354]
Abstract
The term "sonic seasoning" refers to the deliberate pairing of sound/music with taste/flavour in order to enhance, or modify, the multisensory tasting experience. Although the recognition that people experience a multitude of crossmodal correspondences between stimuli in the auditory and chemical senses originally emerged from the psychophysics laboratory, the last decade has seen an explosion of interest in the use and application of sonic seasoning research findings, in a range of multisensory experiential events and online offerings. These marketing-led activations have included a variety of different approaches, from curating pre-composed music selections that have the appropriate sonic qualities (such as pitch or timbre), to the composition of bespoke music/soundscapes that match the specific taste/flavour of particular food or beverage products. Moreover, given that our experience of flavour often changes over time and frequently contains multiple distinct elements, there is also scope to more closely match the sonic seasoning to the temporal evolution of the various components (or notes) of the flavour experience. We review a number of case studies of the use of sonic seasoning, highlighting some of the challenges and opportunities associated with the various approaches, and consider the intriguing interplay between physical and digital (online) experiences. Taken together, the various examples reviewed here help to illustrate the growing commercial relevance of sonic seasoning research.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, United Kingdom
- Steve Keller
- Studio Resonate | SXM Media, Oakland, CA, United States
9
Delong P, Noppeney U. Semantic and spatial congruency mould audiovisual integration depending on perceptual awareness. Sci Rep 2021; 11:10832. [PMID: 34035358] [PMCID: PMC8149651] [DOI: 10.1038/s41598-021-90183-w]
Abstract
Information integration is considered a hallmark of human consciousness. Recent research has challenged this tenet by showing multisensory interactions in the absence of awareness. This psychophysics study assessed the impact of spatial and semantic correspondences on audiovisual binding in the presence and absence of visual awareness by combining forward-backward masking with spatial ventriloquism. Observers were presented with object pictures and synchronous sounds that were spatially and/or semantically congruent or incongruent. On each trial, observers located the sound, identified the picture, and rated the picture's visibility. We observed a robust ventriloquist effect for subjectively visible and invisible pictures, indicating that pictures that evade our perceptual awareness influence where we perceive sounds. Critically, semantic congruency enhanced these visual biases on perceived sound location only when the picture entered observers' awareness. Our results demonstrate that crossmodal influences operating from vision to audition, and vice versa, are interactively controlled by spatial and semantic congruency in the presence of awareness. However, when visual processing is disrupted by masking procedures, audiovisual interactions no longer depend on semantic correspondences.
Affiliation(s)
- Patrycja Delong
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK
- Uta Noppeney
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
10
Zhao S, Feng C, Liao Y, Huang X, Feng W. Attentional blink suppresses both stimulus-driven and representation-driven cross-modal spread of attention. Psychophysiology 2021; 58:e13761. [PMID: 33400294] [DOI: 10.1111/psyp.13761]
Abstract
Previous studies have shown that visual attention effects can spread to the task-irrelevant auditory modality automatically, through either a stimulus-driven binding process or a representation-driven priming process. Using an attentional blink paradigm, the present study investigated whether the long-latency stimulus-driven and representation-driven cross-modal spread of attention would be inhibited or facilitated when the attentional resources operating at the post-perceptual stage of processing are inadequate, while ensuring that all visual stimuli were spatially attended and that the representations of visual target object categories were activated, which were previously thought to be the only endogenous prerequisites for triggering cross-modal spread of attention. The results demonstrated that both types of attentional spreading were completely suppressed during the attentional blink interval but were highly prominent outside it, with the stimulus-driven process being independent of, and the representation-driven process being dependent on, audiovisual semantic congruency. These findings provide the first evidence that the occurrence of both stimulus-driven and representation-driven spread of attention is contingent on the amount of post-perceptual attentional resources responsible for the late consolidation processing of visual stimuli, and that the early detection of visual stimuli and the top-down activation of visual representations are not the sole endogenous prerequisites for triggering either type of cross-modal attentional spreading.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yu Liao
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Xinyin Huang
- Department of Psychology, School of Education, Soochow University, Suzhou, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, China
11
Zhao S, Feng C, Huang X, Wang Y, Feng W. Neural Basis of Semantically Dependent and Independent Cross-Modal Boosts on the Attentional Blink. Cereb Cortex 2020; 31:2291-2304. [DOI: 10.1093/cercor/bhaa362]
Abstract
The present study recorded event-related potentials (ERPs) in a visual object-recognition task under the attentional blink paradigm to explore the temporal dynamics of the cross-modal boost on the attentional blink and whether this auditory benefit would be modulated by semantic congruency between T2 and the simultaneous sound. Behaviorally, the present study showed that not only a semantically congruent but also a semantically incongruent sound improved T2 discrimination during the attentional blink interval, with the enhancement being larger for the congruent sound. The ERP results revealed that the behavioral improvements induced by both the semantically congruent and incongruent sounds were closely associated with an early cross-modal interaction on the occipital N195 (192–228 ms). In contrast, the lower T2 accuracy for the incongruent than the congruent condition was accompanied by a larger late-occurring centro-parietal N440 (424–448 ms). These findings suggest that the cross-modal boost on the attentional blink is hierarchical: the task-irrelevant but simultaneous sound, irrespective of its semantic relevance, first enables T2 to escape the attentional blink by cross-modally strengthening the early stage of visual object-recognition processing, whereas the semantic conflict of the sound begins to interfere with visual awareness only at a later stage, when the representation of the visual object is extracted.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Xinyin Huang
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Yijun Wang
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
12
Delong P, Aller M, Giani AS, Rohe T, Conrad V, Watanabe M, Noppeney U. Invisible Flashes Alter Perceived Sound Location. Sci Rep 2018; 8:12376. [PMID: 30120294] [PMCID: PMC6098122] [DOI: 10.1038/s41598-018-30773-3]
Abstract
Information integration across the senses is fundamental for effective interactions with our environment. The extent to which signals from different senses can interact in the absence of awareness is controversial. Combining the spatial ventriloquist illusion and dynamic continuous flash suppression (dCFS), we investigated in a series of two experiments whether visual signals that observers do not consciously perceive can influence the spatial perception of sounds. Importantly, dCFS obliterated visual awareness only on a fraction of trials, allowing us to compare spatial ventriloquism for physically identical flashes that were judged as visible or invisible. Our results show a stronger ventriloquist effect for visible than for invisible flashes. Critically, a robust ventriloquist effect emerged also for invisible flashes, even when participants were at chance when locating the flash. Collectively, our findings demonstrate that signals we are not aware of in one sensory modality can alter the spatial perception of signals in another sensory modality.
Affiliation(s)
- Patrycja Delong
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, B15 2TT, Birmingham, UK
- Máté Aller
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, B15 2TT, Birmingham, UK
- Anette S Giani
- Max Planck Institute for Biological Cybernetics, 72076, Tübingen, Germany
- Tim Rohe
- Max Planck Institute for Biological Cybernetics, 72076, Tübingen, Germany
- Verena Conrad
- Max Planck Institute for Biological Cybernetics, 72076, Tübingen, Germany
- Masataka Watanabe
- Max Planck Institute for Biological Cybernetics, 72076, Tübingen, Germany
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, B15 2TT, Birmingham, UK
- Max Planck Institute for Biological Cybernetics, 72076, Tübingen, Germany
13
Deroy O, Faivre N, Lunghi C, Spence C, Aller M, Noppeney U. The Complex Interplay Between Multisensory Integration and Perceptual Awareness. Multisens Res 2018; 29:585-606. [PMID: 27795942] [DOI: 10.1163/22134808-00002529]
Abstract
The integration of information has been considered a hallmark of human consciousness, as it requires information being globally available via widespread neural interactions. Yet the complex interdependencies between multisensory integration and perceptual awareness, or consciousness, remain to be defined. While perceptual awareness has traditionally been studied in a single sense, in recent years we have witnessed a surge of interest in the role of multisensory integration in perceptual awareness. Based on a recent IMRF symposium on multisensory awareness, this review discusses three key questions from conceptual, methodological and experimental perspectives: (1) What do we study when we study multisensory awareness? (2) What is the relationship between multisensory integration and perceptual awareness? (3) Which experimental approaches are most promising to characterize multisensory awareness? We hope that this review paper will provoke lively discussions, novel experiments, and conceptual considerations to advance our understanding of the multifaceted interplay between multisensory integration and consciousness.
Affiliation(s)
- O Deroy
- Centre for the Study of the Senses, Institute of Philosophy, School of Advanced Study, University of London, London, UK
- N Faivre
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- C Lunghi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- C Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, UK
- M Aller
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- U Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
14
|
Macaluso E, Noppeney U, Talsma D, Vercillo T, Hartcher-O’Brien J, Adam R. The Curious Incident of Attention in Multisensory Integration: Bottom-up vs. Top-down. Multisens Res 2016. [DOI: 10.1163/22134808-00002528] [Citation(s) in RCA: 50] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
The role attention plays in our experience of a coherent, multisensory world is still controversial. On the one hand, a subset of inputs may be selected for detailed processing and multisensory integration in a top-down manner, i.e., guidance of multisensory integration by attention. On the other hand, stimuli may be integrated in a bottom-up fashion according to low-level properties such as spatial coincidence, thereby capturing attention. Moreover, attention itself is multifaceted and can be described via both top-down and bottom-up mechanisms. Thus, the interaction between attention and multisensory integration is complex and situation-dependent. The authors of this opinion paper are researchers who have contributed to this discussion from behavioural, computational and neurophysiological perspectives. We posed a series of questions, the goal of which was to illustrate the interplay between bottom-up and top-down processes in various multisensory scenarios, in order to clarify the standpoint taken by each author and with the hope of reaching a consensus. Although divergence of viewpoint emerges in the current responses, there is also considerable overlap: in general, it can be concluded that the amount of influence that attention exerts on multisensory integration (MSI) depends on the current task as well as the prior knowledge and expectations of the observer. Moreover, stimulus properties such as reliability and salience also determine how open the processing is to influences of attention.
Collapse
Affiliation(s)
| | - Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK
| | - Durk Talsma
- Department of Experimental Psychology, Ghent University, Henri Dunantlaan 2, B-9000 Ghent, Belgium
| | | | | | - Ruth Adam
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany
| |
Collapse
|
15
|
Aller M, Giani A, Conrad V, Watanabe M, Noppeney U. A spatially collocated sound thrusts a flash into awareness. Front Integr Neurosci 2015; 9:16. [PMID: 25774126 PMCID: PMC4343005 DOI: 10.3389/fnint.2015.00016] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2014] [Accepted: 02/05/2015] [Indexed: 11/22/2022] Open
Abstract
To interact effectively with the environment the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals, pointing towards low-level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception.
Collapse
Affiliation(s)
- Máté Aller
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
| | - Anette Giani
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Verena Conrad
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | | | - Uta Noppeney
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| |
Collapse
|