1
Hu J, Badde S, Vetter P. Auditory guidance of eye movements toward threat-related images in the absence of visual awareness. Front Hum Neurosci 2024; 18:1441915. PMID: 39175660; PMCID: PMC11338778; DOI: 10.3389/fnhum.2024.1441915.
Abstract
The human brain is sensitive to threat-related information even when we are not aware of it. For example, fearful faces attract gaze in the absence of visual awareness. Moreover, information in different sensory modalities interacts in the absence of awareness; for example, the detection of suppressed visual stimuli is facilitated by simultaneously presented congruent sounds or tactile stimuli. Here, we combined these two lines of research and investigated whether threat-related sounds could facilitate visual processing of threat-related images suppressed from awareness such that they attract eye gaze. We suppressed threat-related images of cars and neutral images of human hands from visual awareness using continuous flash suppression and tracked observers' eye movements while presenting congruent or incongruent sounds (finger snapping and car engine sounds). Indeed, threat-related car sounds guided the eyes toward suppressed car images: participants looked longer at the hidden car images than at any other part of the display. In contrast, neither congruent nor incongruent sounds had a significant effect on eye responses to suppressed finger images. Overall, our results suggest that only in a danger-related context do semantically congruent sounds modulate eye movements to images suppressed from awareness, highlighting the prioritisation of eye responses to threat-related stimuli in the absence of visual awareness.
Affiliation(s)
- Junchao Hu, Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Stephanie Badde, Department of Psychology, Tufts University, Medford, MA, United States
- Petra Vetter, Department of Psychology, University of Fribourg, Fribourg, Switzerland
2
Kim HW, Park M, Lee YS, Kim CY. Prior conscious experience modulates the impact of audiovisual temporal correspondence on unconscious visual processing. Conscious Cogn 2024; 122:103709. PMID: 38781813; DOI: 10.1016/j.concog.2024.103709.
Abstract
Conscious visual experiences are enriched by concurrent auditory information, implying audiovisual interactions. In the present study, we investigated how prior conscious experience of auditory and visual information influences subsequent audiovisual temporal integration beneath the surface of awareness. We used continuous flash suppression (CFS) to render perceptually invisible a ball-shaped object constantly moving and bouncing inside a square frame window. To examine whether audiovisual temporal correspondence helps the ball stimulus enter awareness, the visual motion was accompanied by click sounds temporally congruent or incongruent with the bounces of the ball. In Experiment 1, where no prior experience of the audiovisual events was given, we found no significant impact of audiovisual correspondence on visual detection time. However, when the temporally congruent or incongruent bounce-sound relations were consciously experienced prior to CFS in Experiment 2, congruent sounds yielded faster detection times than incongruent sounds during CFS. In addition, in Experiment 3, explicit processing of the incongruent bounce-sound relation prior to CFS slowed detection when the ball bounces later became congruent with the sounds during CFS. These findings suggest that audiovisual temporal integration may take place outside of visual awareness, though its potency is modulated by previous conscious experiences of the audiovisual events. The results are discussed in light of the framework of multisensory causal inference.
Affiliation(s)
- Hyun-Woong Kim, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, United States; Department of Psychology, The University of Texas at Dallas, Richardson, United States
- Minsun Park, School of Psychology, Korea University, Seoul, Republic of Korea
- Yune Sang Lee, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, United States; Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, United States
- Chai-Youn Kim, School of Psychology, Korea University, Seoul, Republic of Korea
3
Park M, Blake R, Kim CY. Audiovisual interactions outside of visual awareness during motion adaptation. Neurosci Conscious 2024; 2024:niad027. PMID: 38292024; PMCID: PMC10823907; DOI: 10.1093/nc/niad027.
Abstract
Motion aftereffects (MAEs), illusory motion perceived in the direction opposite to real motion viewed during prior adaptation, have been used to assess audiovisual interactions. In a previous study from our laboratory, we demonstrated that a congruent direction of auditory motion presented concurrently with visual motion during adaptation strengthened the consequent visual MAE, compared to when the auditory motion was incongruent in direction. Those judgments of MAE strength, however, could have been influenced by expectations or response bias arising from mere knowledge of the state of audiovisual congruity during adaptation. To prevent such knowledge, we here employed continuous flash suppression to render visual motion perceptually invisible during adaptation, ensuring that observers were completely unaware of the visual adapting motion and aware only of the motion direction of the sound they were hearing. We found a small but statistically significant congruence effect of sound on the adaptation strength produced by invisible adapting motion. After considering alternative explanations for this finding, we conclude that auditory motion can impact the strength of visual processing produced by translational visual motion even when that motion transpires outside of awareness.
Affiliation(s)
- Minsun Park, School of Psychology, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
- Randolph Blake, Department of Psychology, Vanderbilt University, PMB 407817, 2301 Vanderbilt Place, Nashville, TN 37240-7817, United States
- Chai-Youn Kim, School of Psychology, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
4
Phasic Alertness and Multisensory Integration Contribute to Visual Awareness of Weak Visual Targets in Audio-Visual Stimulation under Continuous Flash Suppression. Vision (Basel) 2022; 6:vision6020031. PMID: 35737418; PMCID: PMC9228768; DOI: 10.3390/vision6020031.
Abstract
Multisensory stimulation is associated with behavioural benefits, including faster processing speed, higher detection accuracy, and increased subjective awareness. These effects are most likely explained by multisensory integration, alertness, or a combination of the two. To examine changes in subjective awareness under multisensory stimulation, we conducted three experiments in which we used Continuous Flash Suppression to mask subthreshold visual targets for healthy observers. Using the Perceptual Awareness Scale, participants reported their level of awareness of the visual target on a trial-by-trial basis. The first experiment used an audio-visual Redundant Signal Effect paradigm, in which we found faster reaction times in the audio-visual condition than in response to auditory or visual signals alone. In the two following experiments, we separated the auditory and visual signals, first spatially (experiment 2) and then temporally (experiment 3), to test whether the behavioural benefits in our multisensory stimulation paradigm are best explained by multisensory integration or by increased phasic alerting. Based on the findings, we conclude that the largest contributors to increased awareness of visual stimuli accompanied by auditory tones are a rise in phasic alertness and a reduction in temporal uncertainty, with a small but significant contribution of multisensory integration.
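The Redundant Signal Effect paradigm used in this study is conventionally tested against Miller's race-model inequality, which bounds how fast redundant-target responses can be if each signal is processed independently. A minimal sketch of the test, using made-up illustrative reaction times rather than data from any of the studies listed here:

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Test Miller's race-model inequality:
    P(RT <= t | AV) <= P(RT <= t | A) + P(RT <= t | V).
    Returns True if the redundant (audio-visual) condition exceeds the
    race-model bound at any time point, i.e. evidence for coactivation."""
    cdf = lambda rts, t: np.mean(np.asarray(rts) <= t)
    return any(
        cdf(rt_av, t) > min(1.0, cdf(rt_a, t) + cdf(rt_v, t))
        for t in t_grid
    )

# Illustrative reaction times (ms); the redundant condition is clearly faster.
rng = np.random.default_rng(0)
rt_a = rng.normal(350, 50, 1000)   # auditory-only
rt_v = rng.normal(350, 50, 1000)   # visual-only
rt_av = rng.normal(290, 40, 1000)  # redundant audio-visual
violated = race_model_violation(rt_a, rt_v, rt_av, np.arange(200, 500, 10))
```

A violation at the early quantiles is the standard evidence that the two signals are integrated (coactivated) rather than merely racing independently.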
5
Imbriotis V, Ranson A, Connelly WM. RPG: A low-cost, open-source, high-performance solution for displaying visual stimuli. J Neurosci Methods 2021; 363:109343. PMID: 34464650; DOI: 10.1016/j.jneumeth.2021.109343.
Abstract
BACKGROUND: The development of new high-throughput approaches for neuroscience, such as high-density silicon probes and 2-photon imaging, has led to a renaissance in visual neuroscience. However, generating the stimuli needed to evoke activity in the visual system still represents a non-negligible difficulty for experimentalists. While several widely used software toolkits exist to deliver such stimuli, they all suffer from shortcomings: first, the hardware needed to effectively display such stimuli comes at a significant financial cost; second, triggering and/or timing the stimuli so that they can be accurately synchronized with other devices requires legacy hardware, additional hardware, or bespoke solutions. RESULTS: Here we present RPG (Raspberry Pi Gratings), a Python package written for the Raspberry Pi, which overcomes these issues. The Raspberry Pi is a low-cost, credit-card-sized computer with general-purpose input/output pins, allowing RPG to be triggered to deliver stimuli and to provide real-time feedback on stimulus timing. RPG delivers stimuli at 60 frames per second, and its feedback of frame timings is accurate to tens of microseconds. COMPARISON WITH EXISTING METHOD(S): With respect to the accuracy of frame timings, RPG performs at least as accurately as commonly used packages; in addition, its inbuilt ability to trigger stimuli and its real-time feedback of frame timings will be extremely useful for certain experiments. CONCLUSIONS: RPG provides a simple-to-use Python interface capable of generating drifting sine-wave gratings and Gabor patches and displaying raw images/video.
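The kind of frame-timing feedback RPG advertises can be illustrated without the package itself; the sketch below is plain Python, not RPG's actual API. It busy-waits each frame of a mock 60 Hz presentation loop and records how far each frame lands past its ideal deadline:

```python
import time

FRAME_S = 1.0 / 60.0  # target frame duration at 60 frames per second

def present_frames(n_frames, render=lambda i: None):
    """Run a fixed-rate presentation loop and return each frame's
    overshoot (seconds) past its ideal 60 Hz deadline."""
    overshoots = []
    start = time.perf_counter()
    for i in range(n_frames):
        render(i)  # stand-in for drawing a grating or Gabor patch
        deadline = start + (i + 1) * FRAME_S
        while time.perf_counter() < deadline:
            pass   # busy-wait: trades CPU time for sub-millisecond precision
        overshoots.append(time.perf_counter() - deadline)
    return overshoots

timing_errors = present_frames(30)  # half a second of mock presentation
```

On a general-purpose OS the overshoot of such a loop is typically microseconds to tens of microseconds per frame; RPG's contribution is providing this feedback for real display flips alongside GPIO triggering.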
Affiliation(s)
- Vivian Imbriotis, School of Medicine, University of Tasmania, Hobart, Australia; Faculty of Medicine and Health Sciences, Universitat Internacional de Catalunya, Barcelona, Spain; Institut de Neurociències, Universitat Autònoma de Barcelona, Bellaterra, Spain
- Adam Ranson, Faculty of Medicine and Health Sciences, Universitat Internacional de Catalunya, Barcelona, Spain; Institut de Neurociències, Universitat Autònoma de Barcelona, Bellaterra, Spain
6
Delong P, Noppeney U. Semantic and spatial congruency mould audiovisual integration depending on perceptual awareness. Sci Rep 2021; 11:10832. PMID: 34035358; PMCID: PMC8149651; DOI: 10.1038/s41598-021-90183-w.
Abstract
Information integration is considered a hallmark of human consciousness. Recent research has challenged this tenet by showing multisensory interactions in the absence of awareness. This psychophysics study assessed the impact of spatial and semantic correspondences on audiovisual binding in the presence and absence of visual awareness by combining forward-backward masking with spatial ventriloquism. Observers were presented with object pictures and synchronous sounds that were spatially and/or semantically congruent or incongruent. On each trial, observers located the sound, identified the picture, and rated the picture's visibility. We observed a robust ventriloquist effect for subjectively visible and invisible pictures, indicating that pictures that evade our perceptual awareness influence where we perceive sounds. Critically, semantic congruency enhanced these visual biases on perceived sound location only when the picture entered observers' awareness. Our results demonstrate that crossmodal influences operating from vision to audition and vice versa are interactively controlled by spatial and semantic congruency in the presence of awareness. However, when visual processing is disrupted by masking procedures, audiovisual interactions no longer depend on semantic correspondences.
Affiliation(s)
- Patrycja Delong, Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK
- Uta Noppeney, Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
7
Cederblad AMH, Visokomogilski A, Andersen SK, MacLeod MJ, Sahraie A. Conscious awareness modulates processing speed in the redundant signal effect. Exp Brain Res 2021; 239:1877-1893. PMID: 33864488; PMCID: PMC8277652; DOI: 10.1007/s00221-020-06008-1.
Abstract
Evidence for the influence of unaware signals on behaviour has been reported in both patient groups and healthy observers using the Redundant Signal Effect (RSE). The RSE refers to faster manual reaction times to the onset of multiple simultaneously presented targets than to a single stimulus. These findings are robust and apply to unimodal and multimodal sensory inputs. A number of studies of neurologically impaired cases have demonstrated that the RSE can be found even in the absence of conscious experience of the redundant signals. Here, we investigated behavioural changes associated with awareness in healthy observers by using Continuous Flash Suppression to render observers unaware of redundant targets. Across three experiments, we found an association between reaction times to the onset of a consciously perceived target and the reported level of visual awareness of the redundant target, with higher awareness associated with faster reaction times. However, in the absence of any awareness of the redundant target, we found no evidence of speeded reaction times, and even weak evidence for an inhibitory effect (a slowing of reaction times) on responses to the seen target. These findings reveal marked differences between healthy observers and blindsight patients in how aware and unaware information from different locations is integrated in the RSE.
Affiliation(s)
- Arash Sahraie, School of Psychology, University of Aberdeen, Aberdeen, UK
8
Zuanazzi A, Noppeney U. Modality-specific and multisensory mechanisms of spatial attention and expectation. J Vis 2020; 20:1. PMID: 32744617; PMCID: PMC7438668; DOI: 10.1167/jov.20.8.1.
Abstract
In our natural environment, the brain needs to combine signals from multiple sensory modalities into a coherent percept. Whereas spatial attention guides perceptual decisions by prioritizing processing of signals that are task-relevant, spatial expectations encode the probability of signals over space. Previous studies have shown that behavioral effects of spatial attention generalize across sensory modalities. However, because they manipulated spatial attention as signal probability over space, these studies could not dissociate attention and expectation or assess their interaction. In two experiments, we orthogonally manipulated spatial attention (i.e., task relevance) and expectation (i.e., signal probability) selectively in one sensory modality, the primary modality (experiment 1: audition; experiment 2: vision), and assessed their effects on the primary and a secondary sensory modality in which attention and expectation were held constant. Our results show behavioral effects of spatial attention that are comparable for audition and vision as primary modalities; however, signal probabilities were learned more slowly in audition, so that spatial expectations were formed later in audition than in vision. Critically, when these differences in learning between audition and vision were accounted for, both spatial attention and expectation affected responses more strongly in the primary modality in which they were manipulated and generalized to the secondary modality only in an attenuated fashion. Collectively, our results suggest that both spatial attention and expectation rely on modality-specific and multisensory mechanisms.
9
Lalwani P, Brang D. Stochastic resonance model of synaesthesia. Philos Trans R Soc Lond B Biol Sci 2019; 374:20190029. PMID: 31630652; DOI: 10.1098/rstb.2019.0029.
Abstract
In synaesthesia, stimulation of one sensory modality evokes additional experiences in another modality (e.g. sounds evoking colours). Along with these cross-sensory experiences, there are several cognitive and perceptual differences between synaesthetes and non-synaesthetes. For example, synaesthetes demonstrate enhanced imagery, increased cortical excitability and greater perceptual sensitivity in the concurrent modality. Previous models suggest that synaesthesia results from increased connectivity between corresponding sensory regions or disinhibited feedback from higher cortical areas. While these models explain how one sense can evoke qualitative experiences in another, they fail to predict the broader phenotype of differences observed in synaesthetes. Here, we propose a novel model of synaesthesia based on the principles of stochastic resonance. Specifically, we hypothesize that synaesthetes have greater neural noise in sensory regions, which allows pre-existing multisensory pathways to elicit supra-threshold activation (i.e. synaesthetic experiences). The strengths of this model are (a) it predicts the broader cognitive and perceptual differences in synaesthetes, (b) it provides a unified framework linking developmental and induced synaesthesias, and (c) it explains why synaesthetic associations are inconsistent at onset but stabilize over time. We review research consistent with this model and propose future studies to test its limits. This article is part of a discussion meeting issue 'Bridging senses: novel insights from synaesthesia'.
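The stochastic-resonance principle invoked here, that an intermediate level of noise lets a subthreshold signal cross a detection threshold most informatively, can be demonstrated with a toy simulation. All numbers below are illustrative and not fitted to any synaesthesia data:

```python
import numpy as np

THRESHOLD = 1.0
SIGNAL = 0.8  # subthreshold: never crosses the threshold on its own

def crossing_rate(noise_sd, signal_present, n=5000, seed=1):
    """Fraction of trials on which input plus Gaussian noise crosses threshold."""
    rng = np.random.default_rng(seed)
    base = SIGNAL if signal_present else 0.0
    return float(np.mean(base + rng.normal(0.0, noise_sd, n) > THRESHOLD))

def discriminability(noise_sd):
    """Hit rate minus false-alarm rate: how well threshold crossings
    distinguish signal trials from noise-only trials."""
    return crossing_rate(noise_sd, True) - crossing_rate(noise_sd, False)

scores = [discriminability(sd) for sd in (0.05, 0.3, 3.0)]
# Inverted U: with too little noise nothing crosses; with too much noise the
# crossings are dominated by noise; intermediate noise transmits the signal best.
```

On the model's account, synaesthetes sit at a higher baseline noise level, so pre-existing subthreshold multisensory pathways reach suprathreshold activation more often.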
Affiliation(s)
- Poortata Lalwani, Department of Psychology, University of Michigan, 530 Church Street, Ann Arbor, MI 48109, USA
- David Brang, Department of Psychology, University of Michigan, 530 Church Street, Ann Arbor, MI 48109, USA
10
Meijer D, Veselič S, Calafiore C, Noppeney U. Integration of audiovisual spatial signals is not consistent with maximum likelihood estimation. Cortex 2019; 119:74-88. PMID: 31082680; PMCID: PMC6864592; DOI: 10.1016/j.cortex.2019.03.026.
Abstract
Multisensory perception is regarded as one of the most prominent examples where human behaviour conforms to the computational principles of maximum likelihood estimation (MLE). In particular, observers are thought to integrate auditory and visual spatial cues, weighted in proportion to their relative sensory reliabilities, into the most reliable and unbiased percept consistent with MLE. Yet, evidence to date has been inconsistent. The current pre-registered, large-scale (N = 36) replication study investigated the extent to which human behaviour for audiovisual localization is in line with maximum likelihood estimation. The acquired psychophysics data show that while observers were able to reduce their multisensory variance relative to the unisensory variances in accordance with MLE, they weighted the visual signals significantly more strongly than predicted by MLE. Simulations show that this dissociation can be explained by a greater sensitivity of standard estimation procedures to detect deviations from MLE predictions for sensory weights than for audiovisual variances. Our results therefore suggest that observers did not integrate audiovisual spatial signals weighted exactly in proportion to their relative reliabilities for localization. These small deviations from the predictions of maximum likelihood estimation may be explained by observers' uncertainty about the world's causal structure, as accounted for by Bayesian causal inference.
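The MLE prediction tested in this study is easy to state: each cue is weighted by its relative reliability (inverse variance), and the fused estimate has lower variance than either cue alone. A worked sketch with hypothetical variances (in deg², not the study's empirical values):

```python
def mle_integration(sigma_a2, sigma_v2, s_a, s_v):
    """Reliability-weighted audiovisual fusion under maximum likelihood
    estimation: weights are proportional to inverse variances."""
    r_a, r_v = 1.0 / sigma_a2, 1.0 / sigma_v2  # reliabilities
    w_v = r_v / (r_a + r_v)                    # predicted visual weight
    s_hat = (1.0 - w_v) * s_a + w_v * s_v      # fused location estimate
    var_hat = 1.0 / (r_a + r_v)                # fused variance (below both)
    return s_hat, var_hat, w_v

# Vision four times more reliable than audition:
s_hat, var_hat, w_v = mle_integration(sigma_a2=4.0, sigma_v2=1.0,
                                      s_a=10.0, s_v=2.0)
# w_v = 0.8, s_hat = 3.6, var_hat = 0.8 (below the visual variance of 1.0)
```

The study's dissociation is then: observers' empirical visual weights exceeded the predicted w_v even though their multisensory variance dropped roughly as var_hat predicts.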
Affiliation(s)
- David Meijer, Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Sebastijan Veselič, Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Carmelo Calafiore, Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Uta Noppeney, Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
11
Delong P, Aller M, Giani AS, Rohe T, Conrad V, Watanabe M, Noppeney U. Invisible Flashes Alter Perceived Sound Location. Sci Rep 2018; 8:12376. PMID: 30120294; PMCID: PMC6098122; DOI: 10.1038/s41598-018-30773-3.
Abstract
Information integration across the senses is fundamental for effective interactions with our environment. The extent to which signals from different senses can interact in the absence of awareness is controversial. Combining the spatial ventriloquist illusion and dynamic continuous flash suppression (dCFS), we investigated in a series of two experiments whether visual signals that observers do not consciously perceive can influence spatial perception of sounds. Importantly, dCFS obliterated visual awareness only on a fraction of trials, allowing us to compare spatial ventriloquism for physically identical flashes that were judged as visible or invisible. Our results show a stronger ventriloquist effect for visible than invisible flashes. Critically, a robust ventriloquist effect emerged also for invisible flashes, even when participants were at chance when locating the flash. Collectively, our findings demonstrate that signals we are not aware of in one sensory modality can alter spatial perception of signals in another sensory modality.
Affiliation(s)
- Patrycja Delong, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, B15 2TT, Birmingham, UK
- Máté Aller, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, B15 2TT, Birmingham, UK
- Anette S Giani, Max Planck Institute for Biological Cybernetics, 72076, Tübingen, Germany
- Tim Rohe, Max Planck Institute for Biological Cybernetics, 72076, Tübingen, Germany
- Verena Conrad, Max Planck Institute for Biological Cybernetics, 72076, Tübingen, Germany
- Masataka Watanabe, Max Planck Institute for Biological Cybernetics, 72076, Tübingen, Germany
- Uta Noppeney, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, B15 2TT, Birmingham, UK; Max Planck Institute for Biological Cybernetics, 72076, Tübingen, Germany
12
Deroy O, Faivre N, Lunghi C, Spence C, Aller M, Noppeney U. The Complex Interplay Between Multisensory Integration and Perceptual Awareness. Multisens Res 2018; 29:585-606. PMID: 27795942; DOI: 10.1163/22134808-00002529.
Abstract
The integration of information has been considered a hallmark of human consciousness, as it requires information being globally available via widespread neural interactions. Yet the complex interdependencies between multisensory integration and perceptual awareness, or consciousness, remain to be defined. While perceptual awareness has traditionally been studied in a single sense, in recent years we have witnessed a surge of interest in the role of multisensory integration in perceptual awareness. Based on a recent IMRF symposium on multisensory awareness, this review discusses three key questions from conceptual, methodological and experimental perspectives: (1) What do we study when we study multisensory awareness? (2) What is the relationship between multisensory integration and perceptual awareness? (3) Which experimental approaches are most promising to characterize multisensory awareness? We hope that this review paper will provoke lively discussions, novel experiments, and conceptual considerations to advance our understanding of the multifaceted interplay between multisensory integration and consciousness.
Affiliation(s)
- O Deroy, Centre for the Study of the Senses, Institute of Philosophy, School of Advanced Study, University of London, London, UK
- N Faivre, Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- C Lunghi, Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- C Spence, Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, UK
- M Aller, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- U Noppeney, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
13
Noel JP, Simon D, Thelen A, Maier A, Blake R, Wallace MT. Probing Electrophysiological Indices of Perceptual Awareness across Unisensory and Multisensory Modalities. J Cogn Neurosci 2018; 30:814-828. PMID: 29488853; PMCID: PMC10804124; DOI: 10.1162/jocn_a_01247.
Abstract
The neural underpinnings of perceptual awareness have been extensively studied using unisensory (e.g., visual alone) stimuli. However, perception is generally multisensory, and it is unclear whether the neural architecture uncovered in these studies directly translates to the multisensory domain. Here, we use EEG to examine brain responses associated with the processing of visual, auditory, and audiovisual stimuli presented near threshold levels of detectability, with the aim of deciphering similarities and differences in the neural signals indexing the transition into perceptual awareness across vision, audition, and combined visual-auditory (multisensory) processing. More specifically, we examine (1) the presence of late evoked potentials (later than ∼300 msec), (2) the across-trial reproducibility, and (3) the evoked complexity associated with perceived versus nonperceived stimuli. Results reveal that, although perceived stimuli are associated with the presence of late evoked potentials across each of the examined sensory modalities, between-trial variability and EEG complexity differed for unisensory versus multisensory conditions. Whereas across-trial variability and complexity differed for perceived versus nonperceived stimuli in the visual and auditory conditions, this was not the case for the multisensory condition. Taken together, these results suggest that there are fundamental differences in the neural correlates of perceptual awareness for unisensory versus multisensory stimuli. Specifically, the work argues that the presence of late evoked potentials, as opposed to neural reproducibility or complexity, most closely tracks perceptual awareness regardless of the nature of the sensory stimulus. In addition, the current findings suggest a greater similarity between the neural correlates of perceptual awareness of unisensory (visual and auditory) stimuli when compared with multisensory stimuli.
Affiliation(s)
- Jean-Paul Noel, Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- David Simon, Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Antonia Thelen, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Alexander Maier, Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
- Randolph Blake, Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
- Mark T. Wallace, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA; Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37235, USA; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, TN 37235, USA
14
Faivre N, Arzi A, Lunghi C, Salomon R. Consciousness is more than meets the eye: a call for a multisensory study of subjective experience. Neurosci Conscious 2017; 2017:nix003. PMID: 30042838; PMCID: PMC6007148; DOI: 10.1093/nc/nix003.
Abstract
Over the last 30 years, our understanding of the neurocognitive bases of consciousness has improved, mostly through studies employing vision. While studying consciousness in the visual modality presents clear advantages, we believe that a comprehensive scientific account of subjective experience must not neglect other exteroceptive and interoceptive signals, nor the role of multisensory interactions in perceptual and self-consciousness. Here, we briefly review four distinct lines of work which converge in documenting how multisensory signals are processed across several levels and contents of consciousness: namely, how multisensory interactions occur when consciousness is prevented by perceptual manipulations (i.e. subliminal stimuli) or by low-vigilance states (i.e. sleep, anesthesia), how interactions between exteroceptive and interoceptive signals give rise to bodily self-consciousness, and how multisensory signals are combined to form metacognitive judgments. By describing the interactions between multisensory signals at the perceptual, cognitive, and metacognitive levels, we illustrate how stepping out of the visual comfort zone may help in deriving refined accounts of consciousness, and may allow the idiosyncrasies of each sense to be cancelled out so as to delineate the supramodal mechanisms involved in consciousness.
Affiliation(s)
- Nathan Faivre
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Centre d’Economie de la Sorbonne, CNRS UMR 8174, Paris, France
- Anat Arzi
- Department of Psychology, University of Cambridge, Cambridge, UK
- Claudia Lunghi
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Institute of Neuroscience, National Research Council (CNR), Pisa, Italy
- Roy Salomon
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel
15
Sounds can boost the awareness of visual events through attention without cross-modal integration. Sci Rep 2017; 7:41684. [PMID: 28139712 PMCID: PMC5282564 DOI: 10.1038/srep41684] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Received: 06/28/2016] [Accepted: 12/21/2016] [Indexed: 11/09/2022] Open
Abstract
Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds which sometimes co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred the earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never surpassed the prediction of probability summation of independent sensory signals. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise instead of sounds. This manipulation should weaken putative sound-flash integration while keeping the auditory events salient as bottom-up attention cues. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which again was no better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account of cross-modal enhancement of visual events below the level of awareness.
16
Noel JP, Blanke O, Serino A, Salomon R. Interplay between Narrative and Bodily Self in Access to Consciousness: No Difference between Self- and Non-self Attributes. Front Psychol 2017; 8:72. [PMID: 28197110 PMCID: PMC5281626 DOI: 10.3389/fpsyg.2017.00072] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Received: 10/30/2016] [Accepted: 01/12/2017] [Indexed: 11/20/2022] Open
Abstract
The construct of the “self” is conceived as being fundamental in promoting survival. As such, extensive studies have documented preferential processing of self-relevant stimuli. For example, attributes that relate to the self are better encoded and retrieved, and are more readily consciously perceived. The preferential processing of self-relevant information, however, appears to hold especially for physical (e.g., faces), as opposed to psychological (e.g., traits), conceptions of the self. Here, we test whether semantic attributes that participants judge as self-relevant are processed unconsciously to a greater extent than attributes not judged as self-relevant. In Experiment 1, a continuous flash suppression paradigm was employed with “self” and “non-self” attribute words presented subliminally, and we asked participants to categorize the unseen words as either self-related or not. In a second experiment, we attempted to boost putative preferential self-processing by relating it to its physical conception, that is, one’s own body. To this aim, we repeated Experiment 1 while administering acoustic stimuli either close to or far from the body, i.e., within or outside peripersonal space. Results of both Experiments 1 and 2 demonstrate no difference in breaking suppression for self and non-self words. Additionally, we found that while participants were able to process the physical location of the unseen words (above or below fixation), they were not able to categorize them as self-relevant or not. Finally, results showed that sounds presented in extra-personal space elicited a more stringent response criterion for “self” in the process of categorizing unseen visual stimuli. This shift in criterion as a consequence of sound location was restricted to the self, as no such effect was observed in the categorization of attributes occurring above or below fixation.
Overall, our findings seem to indicate that subliminally presented stimuli are not semantically processed, at least inasmuch as to be categorized as self-relevant or not. However, we do demonstrate that the distance at which acoustic stimuli are presented may alter the balance between self- and non-self biases.
Affiliation(s)
- Jean-Paul Noel
- Laboratory of Cognitive Neuroscience, Faculty of Life Science, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Center for Neuroprosthetics, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Olaf Blanke
- Laboratory of Cognitive Neuroscience, Faculty of Life Science, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Center for Neuroprosthetics, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Department of Neurology, University Hospital, Geneva, Switzerland
- Andrea Serino
- Laboratory of Cognitive Neuroscience, Faculty of Life Science, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Center for Neuroprosthetics, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Department of Psychology, Alma Mater Studiorum - Università di Bologna, Bologna, Italy
- Roy Salomon
- Laboratory of Cognitive Neuroscience, Faculty of Life Science, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Center for Neuroprosthetics, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
17
Abstract
To efficiently interact with the external environment, our nervous system combines information arising from different sensory modalities. Recent evidence suggests that cross-modal interactions can be automatic and even unconscious, reflecting the ecological relevance of cross-modal processing. Here, we use continuous flash suppression (CFS) to directly investigate whether haptic signals can interact with visual signals outside of visual awareness. We measured suppression durations of visual gratings rendered invisible by CFS either during visual stimulation alone or during visuo-haptic stimulation. We found that active exploration of a haptic grating congruent in orientation with the suppressed visual grating reduced suppression durations compared with both visual-only stimulation and incongruent visuo-haptic stimulation. We also found that the facilitatory effect of touch on visual suppression disappeared when the visual and haptic gratings were mismatched in either spatial frequency or orientation. Together, these results demonstrate that congruent touch can accelerate the rise to consciousness of a suppressed visual stimulus and that this unconscious cross-modal interaction depends on visuo-haptic congruency. Furthermore, since CFS suppression is thought to occur early in visual cortical processing, our data reinforce the evidence suggesting that visuo-haptic interactions can occur at the earliest stages of cortical processing.
Affiliation(s)
- Claudia Lunghi
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy; Institute of Neuroscience, CNR, Pisa, Italy
- Luca Lo Verde
- Institute of Neuroscience, CNR, Pisa, Italy; Department NEUROFARBA, University of Florence, Italy
- David Alais
- School of Psychology, University of Sydney, NSW, Australia
18
Time for Awareness: The Influence of Temporal Properties of the Mask on Continuous Flash Suppression Effectiveness. PLoS One 2016; 11:e0159206. [PMID: 27416317 PMCID: PMC4945020 DOI: 10.1371/journal.pone.0159206] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Received: 01/26/2016] [Accepted: 06/28/2016] [Indexed: 11/22/2022] Open
Abstract
Visual processing is not instantaneous, but instead our conscious perception depends on the integration of sensory input over time. In the case of Continuous Flash Suppression (CFS), masks are flashed to one eye, suppressing awareness of stimuli presented to the other eye. One potential explanation of CFS is that it depends, at least in part, on the flashing mask continually interrupting visual processing before the stimulus reaches awareness. We investigated the temporal features of masks in two ways. First, we measured the suppression effectiveness of a wide range of masking frequencies (0-32 Hz), using both complex (faces/houses) and simple (closed/open geometric shapes) stimuli. Second, we varied whether the different frequencies were interleaved within blocks or separated in homogenous blocks, in order to see if suppression was stronger or weaker when the frequency remained constant across trials. We found that break-through contrast differed dramatically between masking frequencies, with mask effectiveness following a skewed-normal curve peaking around 6 Hz and little or no masking for low and high temporal frequencies. Peak frequency was similar for trial-randomized and block-randomized conditions. In terms of type of stimulus, we found no significant difference in peak frequency between the stimulus groups (complex/simple, face/house, closed/open). These findings suggest that temporal factors play a critical role in perceptual awareness, perhaps due to interactions between mask frequency and the time frame of visual processing.
19
ten Oever S, Romei V, van Atteveldt N, Soto-Faraco S, Murray MM, Matusz PJ. The COGs (context, object, and goals) in multisensory processing. Exp Brain Res 2016; 234:1307-23. [PMID: 26931340 DOI: 10.1007/s00221-016-4590-z] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Received: 04/15/2015] [Accepted: 01/30/2016] [Indexed: 12/20/2022]
Abstract
Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and "top-down" control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer's goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.
Affiliation(s)
- Sanne ten Oever
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Vincenzo Romei
- Department of Psychology, Centre for Brain Science, University of Essex, Colchester, UK
- Nienke van Atteveldt
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Department of Educational Neuroscience, Faculty of Psychology and Education and Institute LEARN!, VU University Amsterdam, Amsterdam, The Netherlands
- Salvador Soto-Faraco
- Multisensory Research Group, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University Hospital Center and University of Lausanne, BH7.081, rue du Bugnon 46, 1011, Lausanne, Switzerland; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland; Department of Ophthalmology, Jules-Gonin Eye Hospital, University of Lausanne, Lausanne, Switzerland
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University Hospital Center and University of Lausanne, BH7.081, rue du Bugnon 46, 1011, Lausanne, Switzerland; Attention, Brain, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, Oxford, UK
20
Interactions between space and effectiveness in human multisensory performance. Neuropsychologia 2016; 88:83-91. [PMID: 26826522 DOI: 10.1016/j.neuropsychologia.2016.01.031] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Received: 09/25/2015] [Revised: 12/30/2015] [Accepted: 01/26/2016] [Indexed: 11/23/2022]
Abstract
Several stimulus factors are important in multisensory integration, including the spatial and temporal relationships of the paired stimuli as well as their effectiveness. Changes in these factors have been shown to dramatically change the nature and magnitude of multisensory interactions. Typically, these factors are considered in isolation, although there is a growing appreciation for the fact that they are likely to be strongly interrelated. Here, we examined interactions between two of these factors - spatial location and effectiveness - in dictating performance in the localization of an audiovisual target. A psychophysical experiment was conducted in which participants reported the perceived location of visual flashes and auditory noise bursts presented alone and in combination. Stimuli were presented at four spatial locations relative to fixation (0°, 30°, 60°, 90°) and at two intensity levels (high, low). Multisensory combinations were always spatially coincident and of the matching intensity (high-high or low-low). In responding to visual stimuli alone, localization accuracy decreased and response times (RTs) increased as stimuli were presented at more eccentric locations. In responding to auditory stimuli, performance was poorest at the 30° and 60° locations. For both visual and auditory stimuli, accuracy was greater and RTs were faster for more intense stimuli. For responses to visual-auditory stimulus combinations, performance enhancements were found at locations in which the unisensory performance was lowest, results concordant with the concept of inverse effectiveness. RTs for these multisensory presentations frequently violated race-model predictions, implying integration of these inputs, and a significant location-by-intensity interaction was observed. Performance gains under multisensory conditions were larger as stimuli were positioned at more peripheral locations, and this increase was most pronounced for the low-intensity conditions. 
These results provide strong support for the conclusion that the effects of stimulus location and effectiveness on multisensory integration are interdependent, with both factors contributing to the overall effectiveness of the stimuli in driving the resultant multisensory response.
21
Macaluso E, Noppeney U, Talsma D, Vercillo T, Hartcher-O’Brien J, Adam R. The Curious Incident of Attention in Multisensory Integration: Bottom-up vs. Top-down. Multisens Res 2016. [DOI: 10.1163/22134808-00002528] [Citation(s) in RCA: 50] [Impact Index Per Article: 6.3] [Indexed: 11/19/2022]
Abstract
The role attention plays in our experience of a coherent, multisensory world is still controversial. On the one hand, a subset of inputs may be selected for detailed processing and multisensory integration in a top-down manner, i.e., guidance of multisensory integration by attention. On the other hand, stimuli may be integrated in a bottom-up fashion according to low-level properties such as spatial coincidence, thereby capturing attention. Moreover, attention itself is multifaceted and can be described via both top-down and bottom-up mechanisms. Thus, the interaction between attention and multisensory integration is complex and situation-dependent. The authors of this opinion paper are researchers who have contributed to this discussion from behavioural, computational and neurophysiological perspectives. We posed a series of questions, the goal of which was to illustrate the interplay between bottom-up and top-down processes in various multisensory scenarios, in order to clarify the standpoint taken by each author and with the hope of reaching a consensus. Although divergence of viewpoint emerges in the current responses, there is also considerable overlap: in general, it can be concluded that the amount of influence that attention exerts on multisensory integration depends on the current task as well as the prior knowledge and expectations of the observer. Moreover, stimulus properties such as reliability and salience also determine how open the processing is to influences of attention.
Affiliation(s)
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK
- Durk Talsma
- Department of Experimental Psychology, Ghent University, Henri Dunantlaan 2, B-9000 Ghent, Belgium
- Ruth Adam
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany