1. Gao M, Zhu W, Drewes J. The temporal dynamics of conscious and unconscious audio-visual semantic integration. Heliyon 2024; 10:e33828. [PMID: 39055801; PMCID: PMC11269866; DOI: 10.1016/j.heliyon.2024.e33828]
Abstract
We compared the time course of cross-modal semantic effects induced by naturalistic sounds and spoken words on the processing of visual stimuli that were either visible or suppressed from awareness through continuous flash suppression. We found that, under visible conditions, spoken words elicited audio-visual semantic effects over a longer range of SOAs (-1000, -500, -250 ms) than naturalistic sounds (-500, -250 ms). Performance was generally better with auditory primes, and more so when they were congruent. Spoken words presented in advance (-1000, -500 ms) outperformed naturalistic sounds; the opposite was true for (near-)simultaneous presentations. Congruent spoken words also yielded better categorization performance than congruent naturalistic sounds. The audio-visual semantic congruency effect still occurred with suppressed visual stimuli, although without significant differences in temporal patterns between the auditory types. These findings indicate that: 1. Semantically congruent auditory input can enhance visual processing performance, even when the visual stimulus is imperceptible to conscious awareness. 2. The temporal dynamics are contingent on the auditory type only when the visual stimulus is visible. 3. Audiovisual semantic integration requires sufficient time for processing auditory information.
Affiliation(s)
- Mingjie Gao
- School of Information Science, Yunnan University, Kunming, China
- Weina Zhu
- School of Information Science, Yunnan University, Kunming, China
- Jan Drewes
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
2. Kim HW, Park M, Lee YS, Kim CY. Prior conscious experience modulates the impact of audiovisual temporal correspondence on unconscious visual processing. Conscious Cogn 2024; 122:103709. [PMID: 38781813; DOI: 10.1016/j.concog.2024.103709]
Abstract
Conscious visual experiences are enriched by concurrent auditory information, implying audiovisual interactions. In the present study, we investigated how prior conscious experience of auditory and visual information influences subsequent audiovisual temporal integration beneath the surface of awareness. We used continuous flash suppression (CFS) to render perceptually invisible a ball-shaped object constantly moving and bouncing inside a square frame window. To examine whether audiovisual temporal correspondence helps the ball stimulus enter awareness, the visual motion was accompanied by click sounds temporally congruent or incongruent with the bounces of the ball. In Experiment 1, where no prior experience of the audiovisual events was given, we found no significant impact of audiovisual correspondence on visual detection time. However, when the temporally congruent or incongruent bounce-sound relations were consciously experienced prior to CFS in Experiment 2, congruent sounds yielded faster detection times than incongruent sounds during CFS. In addition, in Experiment 3, explicit processing of the incongruent bounce-sound relation prior to CFS slowed detection when the ball bounces later became congruent with the sounds during CFS. These findings suggest that audiovisual temporal integration may take place outside of visual awareness, though its potency is modulated by previous conscious experience of the audiovisual events. The results are discussed in light of the framework of multisensory causal inference.
Affiliation(s)
- Hyun-Woong Kim
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, United States; Department of Psychology, The University of Texas at Dallas, Richardson, United States
- Minsun Park
- School of Psychology, Korea University, Seoul, Republic of Korea
- Yune Sang Lee
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, United States; Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, United States
- Chai-Youn Kim
- School of Psychology, Korea University, Seoul, Republic of Korea
3. Park M, Blake R, Kim CY. Audiovisual interactions outside of visual awareness during motion adaptation. Neurosci Conscious 2024; 2024:niad027. [PMID: 38292024; PMCID: PMC10823907; DOI: 10.1093/nc/niad027]
Abstract
Motion aftereffects (MAEs), illusory motion experienced in a direction opposite to that of real motion viewed during prior adaptation, have been used to assess audiovisual interactions. In a previous study from our laboratory, we demonstrated that a congruent direction of auditory motion presented concurrently with visual motion during adaptation strengthened the consequent visual MAE, compared to when the auditory motion was incongruent in direction. Those judgments of MAE strength, however, could have been influenced by expectations or response bias arising from mere knowledge of the state of audiovisual congruity during adaptation. To prevent such knowledge, we employed continuous flash suppression to render the visual motion perceptually invisible during adaptation, ensuring that observers were completely unaware of the visual adapting motion and aware only of the motion direction of the sound they were hearing. We found a small but statistically significant congruence effect of sound on the adaptation strength produced by invisible adapting motion. After considering alternative explanations for this finding, we conclude that auditory motion can impact the strength of visual processing produced by translational visual motion even when that motion transpires outside of awareness.
Affiliation(s)
- Minsun Park
- School of Psychology, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
- Randolph Blake
- Department of Psychology, Vanderbilt University, PMB 407817, 2301 Vanderbilt Place, Nashville, TN 37240-7817, United States
- Chai-Youn Kim
- School of Psychology, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
4. Williams JR, Markov YA, Tiurina NA, Störmer VS. What You See Is What You Hear: Sounds Alter the Contents of Visual Perception. Psychol Sci 2022; 33:2109-2122. [PMID: 36179072; DOI: 10.1177/09567976221121348]
Abstract
Visual object recognition is not performed in isolation but depends on prior knowledge and context. Here, we found that auditory context plays a critical role in visual object perception. Using a psychophysical task in which naturalistic sounds were paired with noisy visual inputs, we demonstrated across two experiments (young adults; ns = 18 and 40 in Experiments 1 and 2, respectively) that the representations of ambiguous visual objects were shifted toward the visual features of the object related to the incidental sound. In a series of control experiments, we found that these effects were not driven by decision or response biases (ns = 40-85), nor were they due to top-down expectations (n = 40). Instead, they were driven by the continuous integration of audiovisual inputs during perception itself. Together, our results demonstrate that the perceptual experience of visual objects is directly shaped by naturalistic auditory context, which provides independent and diagnostic information about the visual world.
Affiliation(s)
- Yuri A Markov
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne (EPFL)
- Natalia A Tiurina
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne (EPFL)
- Viola S Störmer
- Department of Psychology, University of California San Diego; Department of Brain and Psychological Sciences, Dartmouth College
5. Delong P, Noppeney U. Semantic and spatial congruency mould audiovisual integration depending on perceptual awareness. Sci Rep 2021; 11:10832. [PMID: 34035358; PMCID: PMC8149651; DOI: 10.1038/s41598-021-90183-w]
Abstract
Information integration is considered a hallmark of human consciousness. Recent research has challenged this tenet by showing multisensory interactions in the absence of awareness. This psychophysics study assessed the impact of spatial and semantic correspondences on audiovisual binding in the presence and absence of visual awareness by combining forward-backward masking with spatial ventriloquism. Observers were presented with object pictures and synchronous sounds that were spatially and/or semantically congruent or incongruent. On each trial, observers located the sound, identified the picture, and rated the picture's visibility. We observed a robust ventriloquist effect for subjectively visible and invisible pictures, indicating that pictures that evade our perceptual awareness influence where we perceive sounds. Critically, semantic congruency enhanced these visual biases on perceived sound location only when the picture entered observers' awareness. Our results demonstrate that crossmodal influences operating from vision to audition and vice versa are interactively controlled by spatial and semantic congruency in the presence of awareness. However, when visual processing is disrupted by masking procedures, audiovisual interactions no longer depend on semantic correspondences.
Affiliation(s)
- Patrycja Delong
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK
- Uta Noppeney
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
6. Barutchu A, Spence C. An Experimenter's Influence on Motor Enhancements: The Effects of Letter Congruency and Sensory Switch-Costs on Multisensory Integration. Front Psychol 2020; 11:588343. [PMID: 33335500; PMCID: PMC7736551; DOI: 10.3389/fpsyg.2020.588343]
Abstract
Multisensory integration can alter information processing, and previous research has shown that such processes are modulated by sensory switch costs and prior experience (e.g., semantic or letter congruence). Here, we report an incidental finding demonstrating, for the first time, the interplay between these processes and experimental factors, specifically the presence (vs. absence) of the experimenter in the testing room. Experiment 1 demonstrates that multisensory motor facilitation in response to audiovisual stimuli (a circle and a tone with no prior learnt associations) is higher in trials in which the sensory modality switches than when it repeats. Participants who completed the study while alone exhibited increased reaction time (RT) variability. Experiment 2 replicated these findings using the letters "b" and "d" presented as unisensory stimuli or as congruent and incongruent multisensory stimuli (i.e., grapheme-phoneme pairs). Multisensory enhancements were inflated following a sensory switch; that is, congruent and incongruent multisensory stimuli resulted in significant gains following a sensory switch in the monitored condition. However, when the participants were left alone, multisensory enhancements were observed only for repeating incongruent multisensory stimuli. These incidental findings therefore suggest that the effects of letter congruence and sensory switching on multisensory integration are partly modulated by the presence of an experimenter.
Affiliation(s)
- Ayla Barutchu
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Charles Spence
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
7. Barutchu A, Spence C, Humphreys GW. Multisensory enhancement elicited by unconscious visual stimuli. Exp Brain Res 2017; 236:409-417. [PMID: 29197998; PMCID: PMC5809521; DOI: 10.1007/s00221-017-5140-z]
Abstract
The merging of information from different senses (i.e., multisensory integration) can facilitate information processing. Processing enhancements have been observed with signals that are irrelevant to the task at hand, and with cues that are non-predictive. Such findings are consistent with the notion that multiple sensory signals are sometimes integrated automatically. Multisensory enhancement has even been reported with stimuli presented subliminally, though only for meaningful multisensory relations that had already been learned. Whether multisensory effects can occur without either learning or awareness has, however, not been clearly established in the literature to date. Here, we present a case study of a patient with Posterior Cortical Atrophy who was unable to consciously perceive visual stimuli with our task parameters, yet who nevertheless exhibited signs of multisensory enhancement even with unlearned relations between audiovisual stimuli. In a simple speeded detection task, both response speed and the variability of reaction times decreased for multisensory stimuli, in a manner similar to controls. These results are consistent with the view that conscious perception of stimuli and prior learning are not always prerequisites for multisensory integration to enhance human performance.
Affiliation(s)
- Ayla Barutchu
- Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK
- Charles Spence
- Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK
- Glyn W Humphreys
- Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK
8. Kim S, Blake R, Lee M, Kim CY. Audio-visual interactions uniquely contribute to resolution of visual conflict in people possessing absolute pitch. PLoS One 2017; 12:e0175103. [PMID: 28380058; PMCID: PMC5381860; DOI: 10.1371/journal.pone.0175103]
Abstract
Individuals possessing absolute pitch (AP) are able to identify a given musical tone or to reproduce it without reference to another tone. The present study sought to learn whether this exceptional auditory ability impacts visual perception under stimulus conditions that provoke visual competition in the form of binocular rivalry. Nineteen adult participants with 3–19 years of musical training were divided into two groups according to their performance on a task involving identification of the specific note associated with hearing a given musical pitch. During test trials lasting just over half a minute, participants dichoptically viewed a scrolling musical score presented to one eye and a drifting sinusoidal grating presented to the other eye; throughout each trial they pressed buttons to track the alternations in visual awareness produced by these dissimilar monocular stimuli. On “pitch-congruent” trials, participants heard an auditory melody congruent in pitch with the visual score; on “pitch-incongruent” trials, they heard a transposed auditory melody congruent with the score in melody but not in pitch; and on “melody-incongruent” trials, they heard an auditory melody completely different from the visual score. For both groups, the visual musical scores predominated over the gratings when the auditory melody was congruent compared to when it was incongruent. Moreover, the AP participants experienced greater predominance of the visual score when it was accompanied by the pitch-congruent melody compared to the same melody transposed in pitch; for non-AP musicians, pitch-congruent and pitch-incongruent trials yielded equivalent predominance. Analysis of individual durations of dominance revealed differential effects on dominance and suppression durations for AP and non-AP participants. These results reveal that AP is accompanied by a robust form of bisensory interaction between tonal frequencies and musical notation that boosts the salience of a visual score.
Affiliation(s)
- Sujin Kim
- Department of Psychology, Korea University, Seoul, Korea
- Randolph Blake
- Department of Psychological Sciences, Vanderbilt Vision Research Center, Vanderbilt University, Nashville, Tennessee, United States of America
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, Korea
- Minyoung Lee
- Department of Psychology, Korea University, Seoul, Korea
- Chai-Youn Kim
- Department of Psychology, Korea University, Seoul, Korea
9. Juan C, Cappe C, Alric B, Roby B, Gilardeau S, Barone P, Girard P. The variability of multisensory processes of natural stimuli in human and non-human primates in a detection task. PLoS One 2017; 12:e0172480. [PMID: 28212416; PMCID: PMC5315309; DOI: 10.1371/journal.pone.0172480]
Abstract
Background: Behavioral studies in both humans and animals generally converge on the dogma that multisensory integration improves reaction times (RTs) in comparison to unimodal stimulation. These multisensory effects depend on diverse conditions, among which the most studied are spatial and temporal congruence. Further, most studies use relatively simple stimuli, while in everyday life we are confronted with a large variety of complex stimulations that constantly change our attentional focus over time, a modality switch that can affect stimulus detection. In the present study, we examined the potential sources of variability in reaction times and multisensory gains with respect to the intrinsic features of a large set of natural stimuli.
Methodology/Principal findings: Rhesus macaque monkeys and human subjects performed a simple audio-visual stimulus detection task in which a large collection of unimodal and bimodal natural stimuli with semantic specificities was presented at different saliencies. Although we were able to reproduce the well-established redundant signal effect, we failed to reveal a systematic violation of the race model, which is considered to demonstrate multisensory integration. In both monkeys and humans, our study revealed a large range of multisensory gains, with negative and positive values. While modality switching has clear effects on reaction times, one of the main causes of the variability of multisensory gains appeared to be linked to the intrinsic physical parameters of the stimuli.
Conclusion/Significance: Based on the variability of multisensory benefits, our results suggest that the neuronal mechanisms responsible for the redundant signal effect (interactions vs. integration) are highly dependent on stimulus complexity, suggesting different contributions of uni- and multisensory brain regions. Further, in a simple detection task, the semantic values of individual stimuli tend to have no significant impact on task performance, although such an effect is probably present in more cognitive tasks.
Affiliation(s)
- Cécile Juan
- Cerco, CNRS UMR 5549, Toulouse, France
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Céline Cappe
- Cerco, CNRS UMR 5549, Toulouse, France
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Baptiste Alric
- Cerco, CNRS UMR 5549, Toulouse, France
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Benoit Roby
- Cerco, CNRS UMR 5549, Toulouse, France
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Sophie Gilardeau
- Cerco, CNRS UMR 5549, Toulouse, France
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Pascal Barone
- Cerco, CNRS UMR 5549, Toulouse, France
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Pascal Girard
- Cerco, CNRS UMR 5549, Toulouse, France
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- INSERM, Toulouse, France
10. When audiovisual correspondence disturbs visual processing. Exp Brain Res 2016; 234:1325-1332. [PMID: 26884130; DOI: 10.1007/s00221-016-4591-y]
Abstract
Multisensory integration is known to create a more robust and reliable perceptual representation of one's environment. Specifically, a congruent auditory input can make a visual stimulus more salient, consequently enhancing the visibility and detection of the visual target. However, it remains largely unknown whether a congruent auditory input can also impair visual processing. In the current study, we demonstrate that temporally congruent auditory input disrupts visual processing, consequently slowing down visual target detection. More importantly, this cross-modal inhibition occurs only when the contrast of visual targets is high. When the contrast of visual targets is low, enhancement of visual target detection is observed, consistent with the prediction based on the principle of inverse effectiveness (PIE) in cross-modal integration. The switch of the behavioral effect of audiovisual interaction from benefit to cost further extends the PIE to encompass the suppressive cross-modal interaction.