1
Hu J, Badde S, Vetter P. Auditory guidance of eye movements toward threat-related images in the absence of visual awareness. Front Hum Neurosci 2024; 18:1441915. [PMID: 39175660 PMCID: PMC11338778 DOI: 10.3389/fnhum.2024.1441915]
Abstract
The human brain is sensitive to threat-related information even when we are not aware of this information. For example, fearful faces attract gaze in the absence of visual awareness. Moreover, information in different sensory modalities interacts in the absence of awareness; for example, the detection of suppressed visual stimuli is facilitated by simultaneously presented congruent sounds or tactile stimuli. Here, we combined these two lines of research and investigated whether threat-related sounds could facilitate visual processing of threat-related images suppressed from awareness such that they attract eye gaze. We suppressed threat-related images of cars and neutral images of human hands from visual awareness using continuous flash suppression and tracked observers' eye movements while presenting congruent or incongruent sounds (finger snapping and car engine sounds). Indeed, threat-related car sounds guided the eyes toward suppressed car images: participants looked longer at the hidden car images than at any other part of the display. In contrast, neither congruent nor incongruent sounds had a significant effect on eye responses to suppressed finger images. Overall, our results suggest that only in a danger-related context do semantically congruent sounds modulate eye movements to images suppressed from awareness, highlighting the prioritisation of eye responses to threat-related stimuli in the absence of visual awareness.
Affiliation(s)
- Junchao Hu
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, MA, United States
- Petra Vetter
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
2
Han S, Blake R, Aubuchon C, Tadin D. Binocular rivalry under naturalistic geometry: Evidence from worlds simulated in virtual reality. PNAS Nexus 2024; 3:pgae054. [PMID: 38380058 PMCID: PMC10877069 DOI: 10.1093/pnasnexus/pgae054]
Abstract
Binocular rivalry is a fascinating, widely studied visual phenomenon in which perception alternates between two competing images. This experience, however, is generally restricted to laboratory settings where two irreconcilable images are presented separately to the two eyes, an implausible geometry where two objects occupy the same physical location. Such laboratory experiences are in stark contrast to everyday visual behavior, where rivalry is almost never encountered, casting doubt on whether rivalry is relevant to our understanding of everyday binocular vision. To investigate the external validity of binocular rivalry, we manipulated the geometric plausibility of rival images using a naturalistic, cue-rich, 3D-corridor model created in virtual reality. Rival stimuli were presented in geometrically implausible, semi-plausible, or plausible layouts. Participants tracked rivalry fluctuations in each of these three layouts and for both static and moving rival stimuli. Results revealed significant and canonical binocular rivalry alternations regardless of geometrical plausibility and stimulus type. Rivalry occurred for layouts that mirrored the unnatural geometry used in laboratory studies and for layouts that mimicked real-world occlusion geometry. In a complementary 3D modeling analysis, we show that interocular conflict caused by geometrically plausible occlusion is a common outcome in a visual scene containing multiple objects. Together, our findings demonstrate that binocular rivalry can reliably occur for both geometrically implausible interocular conflicts and conflicts caused by a common form of naturalistic occlusion. Thus, key features of binocular rivalry are not simply laboratory artifacts but generalize to conditions that match the geometry of everyday binocular vision.
Affiliation(s)
- Shui'er Han
- Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14642, USA
- Institute for Infocomm Research Agency for Science, Technology and Research, Singapore 138632, Singapore
- Centre for Frontier AI Research, Agency for Science, Technology and Research, Singapore 138632, Singapore
- Randolph Blake
- Department of Psychology, Vanderbilt University, Nashville, TN 37240, USA
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37232, USA
- Celine Aubuchon
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
- Duje Tadin
- Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14642, USA
- Department of Neuroscience, University of Rochester, Rochester, NY 14642, USA
- Department of Ophthalmology, University of Rochester, Rochester, NY 14642, USA
3
Montoya S, Badde S. Only visible flicker helps flutter: Tactile-visual integration breaks in the absence of visual awareness. Cognition 2023; 238:105528. [PMID: 37354787 DOI: 10.1016/j.cognition.2023.105528]
Abstract
Combining information from multiple senses enhances our perception of the world. Whether we need to be aware of all stimuli to benefit from multisensory integration, however, is still under investigation. Here, we tested whether tactile frequency perception benefits from the presence of congruent visual flicker even if the flicker is so rapid that it is perceptually fused into a steady light and therefore invisible. Our participants completed a tactile frequency discrimination task given either unisensory tactile or congruent tactile-visual stimulation. Tactile and tactile-visual test frequencies ranged from far below to far above participants' flicker fusion threshold (determined separately). For frequencies distinctly below their flicker fusion threshold, participants performed significantly better given tactile-visual stimulation than when presented with only tactile stimuli. Yet, for frequencies above their flicker fusion threshold, participants' tactile frequency perception did not profit from the presence of congruent but likely fused, and thus invisible, visual flicker. The results matched the predictions of an ideal-observer model in which tactile-visual integration is conditional on awareness of both stimuli. In contrast, it was impossible to reproduce the observed results with a model that assumed tactile-visual integration proceeds irrespective of stimulus awareness. In sum, we revealed that the benefits of congruent visual stimulation for tactile flutter frequency perception depend on the visibility of the visual flicker, suggesting that multisensory integration requires awareness.
Affiliation(s)
- Sofia Montoya
- Department of Psychology, Tufts University, 490 Boston Avenue, Medford, MA 02155, USA
- Stephanie Badde
- Department of Psychology, Tufts University, 490 Boston Avenue, Medford, MA 02155, USA
4
Wang W, Xu N, Liu H, Qu J, Dang S, Hong X. The Dynamic Target Motion Perception Mechanism of Tactile-Assisted Vision in MR Environments. Sensors (Basel) 2022; 22:8931. [PMID: 36433528 PMCID: PMC9695400 DOI: 10.3390/s22228931]
Abstract
In mixed reality (MR) environments, the task of target motion perception is usually undertaken by vision alone. This approach suffers from poor discrimination and high cognitive load when tasks are complex, and it cannot meet the air traffic control field's needs for rapid capture and precise positioning of dynamic targets in the air. To address this problem, we conducted a multimodal optimization study of target motion perception judgment, controlling a hand-worn tactile sensor so that tactile sensation assists vision in the MR environment. This allows the approach to meet the requirements of future interaction-led tasks under a mixed-reality holographic aviation tower. Motion perception tasks are usually divided, according to the number of targets and the task at hand, into urgency sensing for multiple targets and precise position tracking for single targets. We therefore designed experiments to investigate the correlation between tactile intensity-velocity correspondence and target urgency, and the correlation between the PRS (position, rhythm, sequence) tactile indication scheme and position tracking, and we evaluated the schemes in a comprehensive experiment. We reached the following conclusions: (1) high, higher, medium, lower, and low tactile intensities bias human visual cognition toward fast, faster, medium, slower, and slow moving targets, respectively, and this correspondence can significantly improve the efficiency of participants' judgments of target urgency; (2) under the PRS tactile indication scheme, position-based rhythm and sequence cues improve tracking of a target's dynamic position, with rhythm cues providing the larger benefit, whereas adding rhythm and sequence cues at the same time causes clutter; (3) tactile assistance to vision substantially improves comprehensive perception of dynamic target movement. These findings are useful for the study of target motion perception in MR environments and provide a theoretical basis for subsequent research on the cognitive mechanisms and quantification of tactile indication in MR environments.
5
Direction-selective modulation of visual motion rivalry by collocated tactile motion. Atten Percept Psychophys 2022; 84:899-914. [PMID: 35194773 PMCID: PMC9001558 DOI: 10.3758/s13414-022-02453-y]
Abstract
Early models of multisensory integration posited that cross-modal signals only converged in higher-order association cortices and that vision automatically dominates. However, recent studies have challenged this view. In this study, the significance of the alignment of motion axes and spatial alignment across visual and tactile stimuli, as well as the effect of hand visibility on visuo-tactile interactions were examined. Using binocular rivalry, opposed motions were presented to each eye and participants were required to track the perceived visual direction. A tactile motion that was either a leftward or rightward sweep across the fingerpad was intermittently presented. Results showed that tactile effects on visual percepts were dependent on the alignment of motion axes: rivalry between up/down visual motions was not modulated at all by left/right tactile motion. On the other hand, visual percepts could be altered by tactile motion signals when both modalities shared a common axis of motion: a tactile stimulus could maintain the dominance duration of a congruent visual stimulus and shorten its suppression period. The effects were also conditional on the spatial alignment of the visual and tactile stimuli, being eliminated when the tactile device was displaced 15 cm away to the right of the visual stimulus. In contrast, visibility of the hand touching the tactile stimulus facilitated congruent switches relative to a visual-only baseline but did not present a significant advantage overall. In sum, these results show a low-level sensory interaction that is conditional on visual and tactile stimuli sharing a common motion axis and location in space.
6
Jin H, Chen RB, Zhong YL, Lai PH, Huang X. Effect of Impaired Stereoscopic Vision on Large-Scale Resting-State Functional Network Connectivity in Comitant Exotropia Patients. Front Neurosci 2022; 16:833937. [PMID: 35350559 PMCID: PMC8957945 DOI: 10.3389/fnins.2022.833937]
Abstract
Background Comitant exotropia (CE) is a common eye movement disorder, characterized by impaired eye movements and stereoscopic vision. CE patients reportedly exhibit changes in the central nervous system. However, it remains unclear whether large-scale brain network changes occur in CE patients. Purpose This study investigated the effects of exotropia and stereoscopic vision dysfunction on large-scale brain networks in CE patients via independent component analysis (ICA). Methods Twenty-eight CE patients (mean age, 15.80 ± 2.46 years) and 27 healthy controls (HCs; mean age, 16.00 ± 2.68 years; closely matched for age, sex, and education) underwent resting-state magnetic resonance imaging. ICA was applied to extract resting-state networks (RSNs) in both groups. Two-sample t-tests were conducted to investigate intranetwork functional connectivity (FC) within RSNs and interactions among RSNs between the two groups. Results Compared with the HC group, the CE group showed increased intranetwork FC in the bilateral postcentral gyrus of the sensorimotor network (SMN). The CE group also showed decreased intranetwork FC in the right cerebellum_8 of the cerebellum network (CER), the right superior temporal gyrus of the auditory network (AN), and the right middle occipital gyrus of the visual network (VN). Moreover, functional network connectivity (FNC) analysis showed that CER-AN, SMN-VN, SN-DMN, and DMN-VN connections were significantly altered between the two groups. Conclusion Comitant exotropia patients had abnormal brain networks related to the CER, SMN, AN, and VN. Our results offer important insights into the neural mechanisms of eye movements and stereoscopic vision dysfunction in CE patients.
Affiliation(s)
- Han Jin
- Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Ri-Bo Chen
- Department of Radiology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Yu-Lin Zhong
- Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Ping-Hong Lai
- Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Xin Huang
- Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Correspondence: Xin Huang
7
The body-ownership is unconsciously distorted in the brain: An event-related potential study of rubber hand illusion. Psihologija 2022. [DOI: 10.2298/psi210126002l]
Abstract
Many studies have reported that bottom-up multisensory integration of visual, tactile, and proprioceptive information can distort our sense of body-ownership, producing the rubber hand illusion (RHI). There is less evidence about when and how body-ownership is distorted in the brain during the RHI. To examine whether this illusion effect occurs preattentively at an early stage of processing, we monitored the visual mismatch negativity (vMMN) component (an index of automatic deviant detection) and the N2 (an index of conflict monitoring). Participants first performed an RHI elicitation task in a synchronous or asynchronous setting and then completed a passive visual oddball task in which the deviant stimuli were unrelated to the explicit task. A significant interaction between Deviancy (deviant hand vs. standard hand) and Group (synchronous vs. asynchronous) was found. The asynchronous group showed clear mismatch effects in both the vMMN and the N2, while the synchronous group showed such an effect only in the N2. The results indicate that after the elicitation of the RHI, bottom-up integration can be retrieved at an early stage of sensory processing, before top-down processing, providing evidence for the priority of bottom-up processes after the generation of the RHI and revealing how body-ownership is unconsciously distorted in the brain.
8
Vestibular and active self-motion signals drive visual perception in binocular rivalry. iScience 2021; 24:103417. [PMID: 34877486 PMCID: PMC8632839 DOI: 10.1016/j.isci.2021.103417]
Abstract
Multisensory integration helps the brain build reliable models of the world and resolve ambiguities. Visual interactions with sound and touch are well established, but vestibular influences on vision are less well studied. Here, we test the vestibular influence on vision using horizontally opposed motions presented one to each eye, so that visual perception is unstable and alternates irregularly. Passive, whole-body rotations in the yaw plane stabilized visual alternations, with perceived direction oscillating congruently with rotation (leftward motion during leftward rotation, and vice versa). This demonstrates that a purely vestibular signal can resolve ambiguous visual motion and determine visual perception. Active self-rotation following the same sinusoidal profile also entrained vision to the rotation cycle, more strongly and with a shorter time lag, likely because of efference copy and predictive internal models. Both experiments show that visual ambiguity provides an effective paradigm to reveal how vestibular and motor inputs can shape visual perception.
Highlights:
- Binocular rivalry between left/right motions is stabilized by congruent head movement
- Left/right head rotations entrain rivalry dynamics so the matching direction is perceived
- Active and passive rotations both drive rivalry dominance to match rotation direction
- Resolving ambiguous vision occurs in a broader vestibular and action-based context
9
Deane G. Consciousness in active inference: Deep self-models, other minds, and the challenge of psychedelic-induced ego-dissolution. Neurosci Conscious 2021; 2021:niab024. [PMID: 34484808 PMCID: PMC8408766 DOI: 10.1093/nc/niab024]
Abstract
Predictive processing approaches to brain function are increasingly delivering promise for illuminating the computational underpinnings of a wide range of phenomenological states. It remains unclear, however, whether predictive processing is equipped to accommodate a theory of consciousness itself. Furthermore, objectors have argued that without specification of the core computational mechanisms of consciousness, predictive processing is unable to inform the attribution of consciousness to other non-human (biological and artificial) systems. In this paper, I argue that an account of consciousness in the predictive brain is within reach via recent accounts of phenomenal self-modelling in the active inference framework. The central claim here is that phenomenal consciousness is underpinned by 'subjective valuation'-a deep inference about the precision or 'predictability' of the self-evidencing ('fitness-promoting') outcomes of action. Based on this account, I argue that this approach can critically inform the distribution of experience in other systems, paying particular attention to the complex sensory attenuation mechanisms associated with deep self-models. I then consider an objection to the account: several recent papers argue that theories of consciousness that invoke self-consciousness as constitutive or necessary for consciousness are undermined by states (or traits) of 'selflessness'; in particular the 'totally selfless' states of ego-dissolution occasioned by psychedelic drugs. Drawing on existing work that accounts for psychedelic-induced ego-dissolution in the active inference framework, I argue that these states do not threaten to undermine an active inference theory of consciousness. Instead, these accounts corroborate the view that subjective valuation is the constitutive facet of experience, and they highlight the potential of psychedelic research to inform consciousness science, computational psychiatry and computational phenomenology.
Affiliation(s)
- George Deane
- School of Philosophy, Psychology and Language Sciences, The University of Edinburgh, 3 Charles Street, Edinburgh EH8 9AD, UK
10
Motyka P, Kozłowska Z, Litwin P. Perceptual Awareness of Optic Flows Paced Optimally and Non-optimally to Walking Speed. Perception 2021; 50:797-818. [PMID: 34459288 DOI: 10.1177/03010066211034368]
Abstract
Previous research suggests that visual processing depends strongly on locomotor activity and is tuned to optic flows consistent with self-motion speed. Here, we used a binocular rivalry paradigm to investigate whether perceptual access to optic flows depends on their optimality in relation to walking velocity. Participants walked at two different speeds on a treadmill while viewing discrepant visualizations of a virtual tunnel in each eye. We hypothesized that visualizations paced appropriately to the walking speeds would be perceived longer than non-optimal (too fast or too slow) ones. The presented optic flow speeds were predetermined individually in a task based on matching visual speed to both walking velocities. In addition, perceptual preference for optimal optic flows was expected to increase with proprioceptive ability to detect threshold-level changes in walking speed. Whereas faster (more familiar) optic flows showed enhanced access to awareness during faster compared with slower walking conditions, for slower visual flows only a nonsignificant tendency toward the analogous effect was observed. These effects did not depend on individual proprioceptive sensitivity. Our findings concur with the emerging view that the velocity of one's locomotion is used to calibrate visual perception of self-motion and extend the scope of reported action effects on visual awareness.
Affiliation(s)
- Paweł Motyka
- Faculty of Psychology, University of Warsaw, Poland
- Piotr Litwin
- Faculty of Psychology, University of Warsaw, Poland
11
Motyka P, Akbal M, Litwin P. Forward optic flow is prioritised in visual awareness independently of walking direction. PLoS One 2021; 16:e0250905. [PMID: 33945563 PMCID: PMC8096117 DOI: 10.1371/journal.pone.0250905]
Abstract
When two different images are presented separately to each eye, one experiences smooth transitions between them, a phenomenon called binocular rivalry. Previous studies have shown that exposure to signals from other senses can enhance the access of stimulation-congruent images to conscious perception. However, despite our ability to infer perceptual consequences from bodily movements, evidence that action can have an analogous influence on visual awareness is scarce and mainly limited to hand movements. Here, we investigated whether one's direction of locomotion affects perceptual access to optic flow patterns during binocular rivalry. Participants walked forwards and backwards on a treadmill while viewing highly realistic visualisations of self-motion in a virtual environment. We hypothesised that visualisations congruent with walking direction would predominate in visual awareness over incongruent ones, and that this effect would increase with the precision of one's active proprioception. These predictions were not confirmed: optic flow consistent with forward locomotion was prioritised in visual awareness independently of walking direction and proprioceptive abilities. Our findings suggest a limited role of kinaesthetic-proprioceptive information in disambiguating the visually perceived direction of self-motion and indicate that vision might be tuned to the (expanding) optic flow patterns prevalent in everyday life.
Affiliation(s)
- Paweł Motyka
- Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Mert Akbal
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Academy of Fine Arts Saar, Saarbrücken, Germany
- Piotr Litwin
- Faculty of Psychology, University of Warsaw, Warsaw, Poland
12
Maier A, Tsuchiya N. Growing evidence for separate neural mechanisms for attention and consciousness. Atten Percept Psychophys 2021; 83:558-576. [PMID: 33034851 PMCID: PMC7886945 DOI: 10.3758/s13414-020-02146-4]
Abstract
Our conscious experience of the world seems to go in lockstep with our attentional focus: We tend to see, hear, taste, and feel what we attend to, and vice versa. This tight coupling between attention and consciousness has given rise to the idea that these two phenomena are indivisible. In the late 1950s, the honoree of this special issue, Charles Eriksen, was among a small group of early pioneers who sought to investigate whether a transient increase in the overall level of attention (alertness) in response to a noxious stimulus can be decoupled from conscious perception using experimental techniques. Recent years have seen a similar debate regarding whether attention and consciousness are two dissociable processes. Initial evidence that attention and consciousness are two separate processes rested primarily on behavioral data. However, the past few years have witnessed an explosion of studies aimed at testing this conjecture using neuroscientific techniques. Here we provide an overview of these and related empirical studies on the distinction between the neuronal correlates of attention and consciousness, and detail how advancements in theory and technology can bring about a more detailed understanding of the two. We argue that the most promising approach will combine ever-evolving neurophysiological and interventionist tools with quantitative, empirically testable theories of consciousness that are grounded in a mathematically formalized understanding of phenomenology.
Affiliation(s)
- Alexander Maier
- Department of Psychology, Vanderbilt University, Nashville, TN, USA.
- Naotsugu Tsuchiya
- Turner Institute for Brain and Mental Health & School of Psychological Sciences, Faculty of Medicine, Nursing, and Health Sciences, Monash University, Melbourne, VIC, Australia
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita, Osaka, 565-0871, Japan
- Advanced Telecommunications Research Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan
13
Bai J, He X, Jiang Y, Zhang T, Bao M. Rotating One's Head Modulates the Perceived Velocity of Motion Aftereffect. Multisens Res 2020; 33:189-212. [PMID: 31648199 DOI: 10.1163/22134808-20191477]
Abstract
As a prominent illusion, the motion aftereffect (MAE) has traditionally been considered a visual phenomenon. Recent neuroimaging work has revealed increased activity in MT+ and decreased activity in vestibular regions during the MAE, supporting the notion of visual-vestibular interaction in the MAE. Since the head had to remain stationary in fMRI experiments, vestibular self-motion signals were absent in those studies. Accordingly, more direct evidence is still lacking as to whether and how vestibular signals modulate the MAE. By developing a virtual reality approach, the present study demonstrates for the first time that horizontal head rotation affects the perceived velocity of the MAE. We found that the MAE was predominantly perceived as moving faster when its direction was opposite to the direction of head rotation than when it matched the direction of head rotation. The magnitude of this effect was positively correlated with the velocity of head rotation. Similar result patterns were not observed for real motion stimuli. Our findings support a 'cross-modal bias' hypothesis: after long-term exposure to a multisensory environment, the brain develops a strong association between signals from the visual and vestibular pathways. Consequently, weak biasing visual signals in the associated direction can spontaneously emerge with the input of vestibular signals in multisensory brain areas, substantially modulating the illusory visual motion represented in those areas as well. This hypothesis can also be used to explain other multisensory integration phenomena.
Affiliation(s)
- Jianying Bai
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Xinjiang Astronomical Observatory, Chinese Academy of Sciences, Urumqi 830011, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Xin He
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Yi Jiang
- State Key Laboratory of Brain and Cognitive Science, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China
- Tao Zhang
- State Key Laboratory of Brain and Cognitive Science, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Min Bao
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; State Key Laboratory of Brain and Cognitive Science, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
14
Lang Y, Gao M, Huang Q, Liu Z, Wu L, Tang R. Tactile priming accelerates conscious access to continuous flash-suppressed characters. Exp Physiol 2019; 104:1711-1716. [PMID: 31475750 DOI: 10.1113/ep087944] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2019] [Accepted: 08/30/2019] [Indexed: 11/08/2022]
Abstract
NEW FINDINGS What is the central question of this study? Research has shown that some sensory input, such as auditory and olfactory input, can affect subliminal visual processing. However, it remains to be addressed whether tactile input, another form of elementary sensory input, can influence the interocular rivalry process. What is the main finding and its importance? We present several pieces of evidence regarding the influence of familiar tactile shapes and temperature on continuous flash suppression. Our findings support the hypothesis that there is a cross-modal effect on subconscious visual semantic processing of Chinese characters; more specifically, tactile sensations affect subliminal processing of visual information. ABSTRACT Tactile and visual sensations are among the most vital human functions for obtaining environmental information. However, whether tactile information influences visual processing remains unclear. In this study, a breaking continuous flash suppression (b-CFS) protocol was used to measure the extent to which tactile sensations subconsciously facilitate visual processing. In experiment 1, finger stimulation with cold and hot temperatures served as primes for the words 'cold' and 'hot', which were in turn suppressed by CFS. In experiment 2, subjects viewed the upright or inverted word 'cell phone', with or without the tactile prime of holding a cell phone in their hand. Results demonstrated that the tactile prime significantly shortened the reaction time in the touch group compared with the control group in both experiments. Thus, the tactile sensation of a familiar object and/or temperature appears to facilitate the corresponding visual semantic recognition, allowing it to break CFS earlier.
Affiliation(s)
- Yiran Lang
- Beijing Institute of Technology, Beijing, China
- Ming Gao
- Beijing Institute of Technology, Beijing, China
- Qiang Huang
- Beijing Institute of Technology, Beijing, China
- Zejian Liu
- Beijing Xiaotangshan Hospital, Beijing, China
- Liang Wu
- Beijing Xiaotangshan Hospital, Beijing, China
- Rongyu Tang
- Beijing Institute of Technology, Beijing, China
15
Kim S, Kim J. Effects of Multimodal Association on Ambiguous Perception in Binocular Rivalry. Perception 2019; 48:796-819. [DOI: 10.1177/0301006619867023] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
When the two eyes view dissimilar images, an observer typically reports an ambiguous percept called binocular rivalry, in which subjective perception fluctuates between the two inputs. This perceptual instability often comprises exclusive dominance of each image and a transitional, piecemeal state in which the two images are intermingled in a patchwork manner. Here, we investigated the effects of multimodal associations (sensory-congruent, arbitrary, and reversed pairs) on the piecemeal state in order to see how each level of association affects ambiguous perception during binocular rivalry. To induce the multisensory associations, we designed a matching task with audiovisual feedback in which subjects were required to respond according to given pairing rules. We found that explicit audiovisual associations can substantially affect the piecemeal state during binocular rivalry, and that this congruency effect, which reduces the amount of visual ambiguity, originates primarily from explicit audiovisual association training rather than from common sensory features. Furthermore, when one piece of information is associated with multiple others, recent and preexisting associations work collectively to influence perceptual ambiguity during rivalry. Our findings show that learned multimodal associations directly affect the temporal dynamics of ambiguous perception during binocular rivalry by modulating not only exclusive dominance but also the piecemeal state in a systematic manner.
Affiliation(s)
- Sungyong Kim
- Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Jeounghoon Kim
- Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea; School of Humanities and Social Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
16
Sensorimotor contingency modulates breakthrough of virtual 3D objects during a breaking continuous flash suppression paradigm. Cognition 2019; 187:95-107. [PMID: 30852262 DOI: 10.1016/j.cognition.2019.03.003] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2018] [Revised: 03/03/2019] [Accepted: 03/04/2019] [Indexed: 11/21/2022]
Abstract
To investigate how embodied sensorimotor interactions shape subjective visual experience, we developed a novel combination of Virtual Reality (VR) and Augmented Reality (AR) within an adapted breaking continuous flash suppression (bCFS) paradigm. In a first experiment, participants manipulated novel virtual 3D objects, viewed through a head-mounted display, using three interlocking cogs. This setup allowed us to manipulate the sensorimotor contingencies governing interactions with virtual objects, while characterising the effects on subjective visual experience by measuring breakthrough times from bCFS. We contrasted the effects of the congruency (veridical versus reversed sensorimotor coupling) and contingency (live versus replayed interactions) using a motion discrimination task. The results showed that the contingency but not congruency of sensorimotor coupling affected breakthrough times, with live interactions displaying faster breakthrough times. In a second experiment, we investigated how the contingency of sensorimotor interactions affected object category discrimination within a more naturalistic setting, using a motion tracker that allowed object interactions with increased degrees of freedom. We again found that breakthrough times were faster for live compared to replayed interactions (contingency effect). Together, these data demonstrate that bCFS breakthrough times for unfamiliar 3D virtual objects are modulated by the contingency of the dynamic causal coupling between actions and their visual consequences, in line with theories of perception that emphasise the influence of sensorimotor contingencies on visual experience. The combination of VR/AR and motion tracking technologies with bCFS provides a novel methodology extending the use of binocular suppression paradigms into more dynamic and realistic sensorimotor environments.
17
Abstract
There is an ongoing debate about whether multisensory interactions require awareness of the sensory signals. Static visual and tactile stimuli have been shown to influence each other even in the absence of visual awareness, but it is unclear whether this finding generalizes to dynamic contexts. In the present study, we presented visual and tactile motion stimuli and induced fluctuations of visual awareness by means of binocular rivalry: two gratings drifting in opposite directions were displayed, one to each eye. One visual motion stimulus dominated and reached awareness while the other was suppressed from awareness. Tactile motion stimuli were presented at random time points during the visual stimulation; the motion direction of a tactile stimulus always matched the direction of one of the concurrently presented visual stimuli. The visual gratings were differently tinted, and participants reported the color of the currently seen stimulus. Tactile motion delayed perceptual switches that ended dominance periods of congruently moving visual stimuli compared with switches during visual-only stimulation. In addition, tactile motion fostered the return to dominance of suppressed, congruently moving visual stimuli, but only if the tactile motion started at a late stage of the ongoing visual suppression period, when perceptual suppression is typically already decreasing. These results suggest that visual awareness facilitates, but does not gate, multisensory interactions between visual and tactile motion signals.
18
Convento S, Wegner-Clemens KA, Yau JM. Reciprocal Interactions Between Audition and Touch in Flutter Frequency Perception. Multisens Res 2019; 32:67-85. [PMID: 31059492 DOI: 10.1163/22134808-20181334] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2018] [Accepted: 11/09/2018] [Indexed: 11/19/2022]
Abstract
In both audition and touch, sensory cues comprising repeating events are perceived either as a continuous signal or as a stream of temporally discrete events (flutter), depending on the events' repetition rate. At high repetition rates (>100 Hz), auditory and tactile cues interact reciprocally in pitch processing: the frequency of a cue experienced in one modality systematically biases the perceived frequency of a cue experienced in the other modality. Here, we tested whether audition and touch also interact in the processing of low-frequency stimulation. We also tested whether multisensory interactions occurred when the stimulation in one modality comprised click trains and the stimulation in the other modality comprised amplitude-modulated signals. We found that auditory cues bias touch and tactile cues bias audition on a flutter discrimination task. Even though participants were instructed to attend to a single sensory modality and ignore the other cue, the flutter rate in the attended modality was perceived to be similar to that of the distractor modality. Moreover, we observed similar interaction patterns regardless of stimulus type and whether the same stimulus types were experienced by both senses. Combined with earlier studies, our results suggest that the nervous system extracts and combines temporal rate information from multisensory environmental signals, regardless of stimulus type, in both the low- and high-temporal-frequency domains. This function likely reflects the importance of temporal frequency as a fundamental feature of our multisensory experience.
Affiliation(s)
- Silvia Convento
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Kira A Wegner-Clemens
- Department of Neurosurgery, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
19
Davidson MJ, Alais D, van Boxtel JJA, Tsuchiya N. Attention periodically samples competing stimuli during binocular rivalry. eLife 2018; 7:e40868. [PMID: 30507378 PMCID: PMC6298779 DOI: 10.7554/elife.40868] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2018] [Accepted: 11/19/2018] [Indexed: 12/14/2022] Open
Abstract
The attentional sampling hypothesis suggests that attention rhythmically enhances sensory processing when attending to a single (~8 Hz) or multiple (~4 Hz) objects. Here, we investigated whether attention samples sensory representations that are not part of the conscious percept during binocular rivalry. When crossmodally cued toward a conscious image, subsequent changes in consciousness occurred at ~8 Hz, consistent with the rates of undivided attentional sampling. However, when attention was cued toward the suppressed image, changes in consciousness slowed to ~3.5 Hz, indicating the division of attention away from the conscious visual image. In the electroencephalogram, we found that at attentional sampling frequencies, the strength of inter-trial phase-coherence over fronto-temporal and parieto-occipital regions correlated with changes in perception. When cues were not task-relevant, these effects disappeared, confirming that perceptual changes were dependent upon the allocation of attention and that attention can flexibly sample away from a conscious image in a task-dependent manner.
Affiliation(s)
- Matthew J Davidson
- School of Psychological Sciences, Faculty of Medicine, Nursing, and Health Sciences, Monash University, Melbourne, Australia
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Melbourne, Australia
- David Alais
- School of Psychology, The University of Sydney, Camperdown, Australia
- Jeroen JA van Boxtel
- School of Psychological Sciences, Faculty of Medicine, Nursing, and Health Sciences, Monash University, Melbourne, Australia
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Melbourne, Australia
- School of Psychology, Faculty of Health, University of Canberra, Canberra, Australia
- Naotsugu Tsuchiya
- School of Psychological Sciences, Faculty of Medicine, Nursing, and Health Sciences, Monash University, Melbourne, Australia
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Melbourne, Australia
20
Deroy O, Faivre N, Lunghi C, Spence C, Aller M, Noppeney U. The Complex Interplay Between Multisensory Integration and Perceptual Awareness. Multisens Res 2018; 29:585-606. [PMID: 27795942 DOI: 10.1163/22134808-00002529] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
The integration of information has been considered a hallmark of human consciousness, as it requires information being globally available via widespread neural interactions. Yet the complex interdependencies between multisensory integration and perceptual awareness, or consciousness, remain to be defined. While perceptual awareness has traditionally been studied in a single sense, in recent years we have witnessed a surge of interest in the role of multisensory integration in perceptual awareness. Based on a recent IMRF symposium on multisensory awareness, this review discusses three key questions from conceptual, methodological and experimental perspectives: (1) What do we study when we study multisensory awareness? (2) What is the relationship between multisensory integration and perceptual awareness? (3) Which experimental approaches are most promising to characterize multisensory awareness? We hope that this review paper will provoke lively discussions, novel experiments, and conceptual considerations to advance our understanding of the multifaceted interplay between multisensory integration and consciousness.
Affiliation(s)
- O Deroy
- Centre for the Study of the Senses, Institute of Philosophy, School of Advanced Study, University of London, London, UK
- N Faivre
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- C Lunghi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- C Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, UK
- M Aller
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- U Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
21
Mustonen T, Nuutinen M, Vainio L, Häkkinen J. Upper nasal hemifield location and nonspatial auditory tones accelerate visual detection during dichoptic viewing. PLoS One 2018; 13:e0199962. [PMID: 30036400 PMCID: PMC6056051 DOI: 10.1371/journal.pone.0199962] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2017] [Accepted: 06/15/2018] [Indexed: 12/04/2022] Open
Abstract
Visual performance is asymmetric across the visual field, but locational biases that occur during dichoptic viewing are not well understood. In this study, we characterized horizontal, vertical and naso-temporal biases in visual target detection during dichoptic stimulation and explored whether the detection was facilitated by non-spatial auditory tones associated with the target’s location. The detection time for single monocular targets that were suppressed from view with a 10 Hz dynamic noise mask presented to the other eye was measured at the 4° intercardinal location of each eye with the breaking Continuous Flash Suppression (b-CFS) technique. Each target was either combined with a sound (i.e., high or low pitch tone) that was congruent or incongruent with its vertical location (i.e., upper or lower visual field) or presented without a sound. The results indicated faster detection of targets in the upper rather than lower visual field and faster detection of targets in the nasal than temporal hemifield of each eye. Sounds generally accelerated target detection, but the tone pitch-elevation congruency did not further enhance performance. These findings suggest that visual detection during dichoptic viewing differs from standard viewing conditions with respect to location-related perceptual biases and crossmodal modulation of visual perception. These differences should be carefully considered in experimental designs employing dichoptic stimulation techniques and in display applications that utilize dichoptic viewing.
Affiliation(s)
- Terhi Mustonen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Mikko Nuutinen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Lari Vainio
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Jukka Häkkinen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
22
Noel JP, Simon D, Thelen A, Maier A, Blake R, Wallace MT. Probing Electrophysiological Indices of Perceptual Awareness across Unisensory and Multisensory Modalities. J Cogn Neurosci 2018; 30:814-828. [PMID: 29488853 PMCID: PMC10804124 DOI: 10.1162/jocn_a_01247] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2024]
Abstract
The neural underpinnings of perceptual awareness have been extensively studied using unisensory (e.g., visual alone) stimuli. However, perception is generally multisensory, and it is unclear whether the neural architecture uncovered in these studies directly translates to the multisensory domain. Here, we use EEG to examine brain responses associated with the processing of visual, auditory, and audiovisual stimuli presented near threshold levels of detectability, with the aim of deciphering similarities and differences in the neural signals indexing the transition into perceptual awareness across vision, audition, and combined visual-auditory (multisensory) processing. More specifically, we examine (1) the presence of late evoked potentials (∼>300 msec), (2) the across-trial reproducibility, and (3) the evoked complexity associated with perceived versus nonperceived stimuli. Results reveal that, although perceived stimuli are associated with the presence of late evoked potentials across each of the examined sensory modalities, between-trial variability and EEG complexity differed for unisensory versus multisensory conditions. Whereas across-trial variability and complexity differed for perceived versus nonperceived stimuli in the visual and auditory conditions, this was not the case for the multisensory condition. Taken together, these results suggest that there are fundamental differences in the neural correlates of perceptual awareness for unisensory versus multisensory stimuli. Specifically, the work argues that the presence of late evoked potentials, as opposed to neural reproducibility or complexity, most closely tracks perceptual awareness regardless of the nature of the sensory stimulus. In addition, the current findings suggest a greater similarity between the neural correlates of perceptual awareness of unisensory (visual and auditory) stimuli when compared with multisensory stimuli.
Affiliation(s)
- Jean-Paul Noel
- Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- David Simon
- Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Antonia Thelen
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Alexander Maier
- Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
- Randolph Blake
- Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Department of Psychiatry, Vanderbilt University Medical Center, Nashville, TN 37235, USA
23

24
Piazza EA, Denison RN, Silver MA. Recent cross-modal statistical learning influences visual perceptual selection. J Vis 2018; 18:1. [PMID: 29497742 PMCID: PMC5837665 DOI: 10.1167/18.3.1] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Incoming sensory signals are often ambiguous and consistent with multiple perceptual interpretations. Information from one sensory modality can help to resolve ambiguity in another modality, but the mechanisms by which multisensory associations come to influence the contents of conscious perception are unclear. We asked whether and how novel statistical information about the coupling between sounds and images influences the early stages of awareness of visual stimuli. We exposed subjects to consistent, arbitrary pairings of sounds and images and then measured the impact of this recent passive statistical learning on subjects' initial conscious perception of a stimulus by employing binocular rivalry, a phenomenon in which incompatible images presented separately to the two eyes result in a perceptual alternation between the two images. On each trial of the rivalry test, subjects were presented with a pair of rivalrous images (one of which had been consistently paired with a specific sound during exposure while the other had not) and an accompanying sound. We found that, at the onset of binocular rivalry, an image was significantly more likely to be perceived, and was perceived for a longer duration, when it was presented with its paired sound than when presented with other sounds. Our results indicate that recently acquired multisensory information helps resolve sensory ambiguity, and they demonstrate that statistical learning is a fast, flexible mechanism that facilitates this process.
Affiliation(s)
- Elise A Piazza
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Rachel N Denison
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA; Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
- Michael A Silver
- Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA; School of Optometry, University of California, Berkeley, Berkeley, CA, USA
25
Łukowska M, Sznajder M, Wierzchoń M. Error-related cardiac response as information for visibility judgements. Sci Rep 2018; 8:1131. [PMID: 29348407 PMCID: PMC5773515 DOI: 10.1038/s41598-018-19144-0] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2017] [Accepted: 12/21/2017] [Indexed: 11/09/2022] Open
Abstract
Interoception provides information about the saliency of external or internal sensory events and thus may inform perceptual decision-making. An error in performance is an example of a motivationally significant internal event that evokes an autonomic nervous system response resembling the orienting response: heart rate deceleration, increased skin conductance response, and pupil dilation. Here, we investigate whether error-related cardiac activity may serve as a source of information when making metacognitive judgments in an orientation discrimination backward masking task. In the first experiment, we found that the heart accelerates less after an incorrect stimulus discrimination than after a correct one. Moreover, this difference becomes more pronounced with increasing subjective visibility of the stimuli. In the second experiment, this accuracy-dependent pattern of cardiac activity was found only when participants listened to their own heartbeats, not someone else's. We propose that decision accuracy coded in cardiac activity may serve as a cue for subjective visibility judgments.
Affiliation(s)
- Marta Łukowska
- Consciousness Lab, Institute of Psychology, Jagiellonian University, Cracow, 30-060, Poland.
- Michał Sznajder
- Consciousness Lab, Institute of Psychology, Jagiellonian University, Cracow, 30-060, Poland
- Michał Wierzchoń
- Consciousness Lab, Institute of Psychology, Jagiellonian University, Cracow, 30-060, Poland
26
Does direction of walking impact binocular rivalry between competing patterns of optic flow? Atten Percept Psychophys 2017; 79:1182-1194. [PMID: 28197836 DOI: 10.3758/s13414-017-1299-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When dissimilar monocular images are viewed simultaneously by the two eyes, stable binocular vision gives way to unstable vision characterized by alternations in dominance between the two images in a phenomenon called binocular rivalry. These alternations in perception reveal the existence of inhibitory interactions between neural representations associated with conflicting visual inputs. Binocular rivalry has been studied since the days of Wheatstone, but one recent strategy is to investigate its susceptibility to influences caused by one's own motor activity. This paper focused on the activity of walking, which produces an expected, characteristic direction of optic flow dependent upon the direction of one's walking. In a set of experiments, we employed virtual reality technology to present dichoptic stimuli to observers who walked forward, backward, or were sitting. Optic flow was presented to a given eye, and was sometimes congruent with the direction of walking, sometimes incongruent, and sometimes random, except when the participant was sitting. Our results indicate that, while walking had a reliable influence on rivalry dynamics, the predominance of congruent or incongruent motion did not.
27
Unconscious integration of multisensory bodily inputs in the peripersonal space shapes bodily self-consciousness. Cognition 2017; 166:174-183. [PMID: 28577447 DOI: 10.1016/j.cognition.2017.05.028] [Citation(s) in RCA: 66] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2016] [Revised: 05/17/2017] [Accepted: 05/17/2017] [Indexed: 12/21/2022]
28
Ferri F, Ambrosini E, Pinti P, Merla A, Costantini M. The role of expectation in multisensory body representation - neural evidence. Eur J Neurosci 2017. [PMID: 28644914 DOI: 10.1111/ejn.13629] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Sensory events contribute to body ownership, the feeling that the body belongs to me. However, the encoding of sensory events is not only reactive but also proactive, in that our brain generates predictions about forthcoming stimuli. In previous studies, we have shown that the prediction of sensory events is a sufficient condition to induce the sense of body ownership. Here, we investigated the underlying neural mechanisms. Participants were seated with their right arm resting upon a table just below another, smaller table. Hence, the real hand was hidden from the participant's view, and a life-sized rubber model of a right hand was placed on the small table in front of them. Participants watched a wooden plank approach, without touching, the rubber hand. We measured the phenomenology of the illusion by means of a questionnaire, and neural activity was recorded by means of functional near-infrared spectroscopy (fNIRS). Results showed higher activation of multisensory parietal cortices in the rubber hand illusion induced by touch expectation. Furthermore, this activity was correlated with the subjective feeling of owning the rubber hand. Our results enrich current models of body ownership, suggesting that our multisensory brain regions generate predictions about what could be my body and what could not. This finding may have interesting implications in all those cases in which body representation is altered, such as anorexia nervosa, bulimia nervosa, and obesity.
Collapse
Affiliation(s)
- Francesca Ferri
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, CO4 3SQ, UK
- Paola Pinti
- Infrared Imaging Lab, Institute for Advanced Biomedical Technologies - ITAB, Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio, Chieti-Pescara, Italy
- Arcangelo Merla
- Infrared Imaging Lab, Institute for Advanced Biomedical Technologies - ITAB, Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio, Chieti-Pescara, Italy
- Marcello Costantini
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, CO4 3SQ, UK; Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, University G. d'Annunzio, Chieti, Italy
| |
29
Newly acquired audio-visual associations bias perception in binocular rivalry. Vision Res 2017; 133:121-129. [DOI: 10.1016/j.visres.2017.02.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2016] [Revised: 02/11/2017] [Accepted: 02/17/2017] [Indexed: 11/16/2022]
|
30
|
Faivre N, Arzi A, Lunghi C, Salomon R. Consciousness is more than meets the eye: a call for a multisensory study of subjective experience. Neurosci Conscious 2017; 2017:nix003. [PMID: 30042838 PMCID: PMC6007148 DOI: 10.1093/nc/nix003] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2016] [Revised: 02/06/2017] [Accepted: 02/16/2017] [Indexed: 11/17/2022] Open
Abstract
Over the last 30 years, our understanding of the neurocognitive bases of consciousness has improved, mostly through studies employing vision. While studying consciousness in the visual modality presents clear advantages, we believe that a comprehensive scientific account of subjective experience must not neglect other exteroceptive and interoceptive signals, nor the role of multisensory interactions in perceptual and self-consciousness. Here, we briefly review four distinct lines of work which converge in documenting how multisensory signals are processed across several levels and contents of consciousness: how multisensory interactions occur when consciousness is prevented by perceptual manipulations (i.e. subliminal stimuli) or by low-vigilance states (i.e. sleep, anesthesia), how interactions between exteroceptive and interoceptive signals give rise to bodily self-consciousness, and how multisensory signals are combined to form metacognitive judgments. By describing the interactions between multisensory signals at the perceptual, cognitive, and metacognitive levels, we illustrate how stepping out of the visual comfort zone may help in deriving refined accounts of consciousness, and may allow the idiosyncrasies of each sense to be cancelled out so as to delineate supramodal mechanisms involved in consciousness.
Affiliation(s)
- Nathan Faivre
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Centre d’Economie de la Sorbonne, CNRS UMR 8174, Paris, France
- Anat Arzi
- Department of Psychology, University of Cambridge, Cambridge, UK
- Claudia Lunghi
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Institute of Neuroscience, National Research Council (CNR), Pisa, Italy
- Roy Salomon
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel
31
Sounds can boost the awareness of visual events through attention without cross-modal integration. Sci Rep 2017; 7:41684. [PMID: 28139712 PMCID: PMC5282564 DOI: 10.1038/srep41684] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2016] [Accepted: 12/21/2016] [Indexed: 11/09/2022] Open
Abstract
Cross-modal interactions can enhance visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear: can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds which, sometimes, co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never surpassed what would be expected from probability summation of independent sensory signals. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise instead of sounds; this manipulation should weaken putative sound-flash integration while keeping the gaps salient as bottom-up attention cues. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which again was no better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account for cross-modal enhancement of visual events below the level of awareness.
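The probability-summation benchmark invoked in this abstract has a simple closed form: if two independent signals each have some probability of triggering detection, the combined probability is one minus the product of their miss probabilities. A minimal sketch, with made-up detection probabilities rather than values from the study:

```python
# Probability summation for two independent sensory signals.
# The probabilities below are illustrative, not data from the study.
def probability_summation(p_a: float, p_b: float) -> float:
    """Detection probability if either independent signal alone suffices."""
    return 1 - (1 - p_a) * (1 - p_b)

p_flash = 0.4  # probability a flash alone triggers a perceptual switch
p_sound = 0.3  # probability a sound alone triggers a perceptual switch

p_combined = probability_summation(p_flash, p_sound)
print(round(p_combined, 2))  # 0.58
```

Any observed multisensory facilitation that does not exceed this combined probability is consistent with two independent signals rather than genuine integration, which is the logic the abstract applies.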
32
Noel JP, Blanke O, Serino A, Salomon R. Interplay between Narrative and Bodily Self in Access to Consciousness: No Difference between Self- and Non-self Attributes. Front Psychol 2017; 8:72. [PMID: 28197110 PMCID: PMC5281626 DOI: 10.3389/fpsyg.2017.00072] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2016] [Accepted: 01/12/2017] [Indexed: 11/20/2022] Open
Abstract
The construct of the “self” is conceived as fundamental in promoting survival. As such, extensive studies have documented preferential processing of self-relevant stimuli; for example, attributes that relate to the self are better encoded and retrieved, and are more readily consciously perceived. The preferential processing of self-relevant information, however, appears to hold especially for physical (e.g., faces), as opposed to psychological (e.g., traits), conceptions of the self. Here, we test whether semantic attributes that participants judge as self-relevant are preferentially processed outside of awareness compared with attributes not judged as self-relevant. In Experiment 1, a continuous flash suppression paradigm was employed with “self” and “non-self” attribute words presented subliminally, and we asked participants to categorize the unseen words as either self-related or not. In a second experiment, we attempted to boost putative preferential self-processing by relating the self to its physical conception, that is, one’s own body. To this aim, we repeated Experiment 1 while administering acoustic stimuli either close to or far from the body, i.e., within or outside peripersonal space. The results of both Experiments 1 and 2 demonstrate no difference in breaking suppression between self and non-self words. Additionally, we found that while participants were able to process the physical location of the unseen words (above or below fixation), they were not able to categorize them as self-relevant or not. Finally, sounds presented in the extra-personal space elicited a more stringent response criterion for “self” in the categorization of unseen visual stimuli. This shift in criterion as a consequence of sound location was restricted to the self, as no such effect was observed in categorizing whether attributes occurred above or below fixation. Overall, our findings indicate that subliminally presented stimuli are not semantically processed, at least not to the extent of being categorized as self-relevant or not. However, we do demonstrate that the distance at which acoustic stimuli are presented may alter the balance between self- and non-self biases.
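The “more stringent response criterion” reported here is a signal detection theory quantity. As an illustration only, with hypothetical hit and false-alarm rates rather than the study's data, criterion c and sensitivity d' are computed from z-transformed rates, where a larger c means a more conservative (stringent) criterion:

```python
from statistics import NormalDist

# Signal detection theory measures from hit and false-alarm rates.
# The rates below are hypothetical, not data from the study.
z = NormalDist().inv_cdf  # inverse cumulative normal (z-transform)

hit_rate = 0.6          # P("self" response | self word)
false_alarm_rate = 0.2  # P("self" response | non-self word)

d_prime = z(hit_rate) - z(false_alarm_rate)           # sensitivity
criterion = -(z(hit_rate) + z(false_alarm_rate)) / 2  # response criterion c

print(round(d_prime, 2), round(criterion, 2))  # 1.09 0.29
```

A positive c here reflects a bias against responding “self”, which is the direction of the shift the abstract reports for sounds in extra-personal space.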
Affiliation(s)
- Jean-Paul Noel
- Laboratory of Cognitive Neuroscience, Faculty of Life Science, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Center for Neuroprosthetics, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Olaf Blanke
- Laboratory of Cognitive Neuroscience, Faculty of Life Science, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Center for Neuroprosthetics, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Department of Neurology, University Hospital, Geneva, Switzerland
- Andrea Serino
- Laboratory of Cognitive Neuroscience, Faculty of Life Science, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Center for Neuroprosthetics, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Department of Psychology, Alma Mater Studiorum - Università di Bologna, Bologna, Italy
- Roy Salomon
- Laboratory of Cognitive Neuroscience, Faculty of Life Science, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Center for Neuroprosthetics, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland; Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
33
Abstract
The implementation of computer games in physical therapy is motivated by characteristics such as attractiveness, motivation, and engagement, but these do not guarantee the intended therapeutic effect of the interventions. Yet, these characteristics are important variables in physical therapy interventions because they involve reward-related dopaminergic systems in the brain that are known to facilitate learning through long-term potentiation of neural connections. In this perspective we propose a way to apply game design approaches to therapy development by "designing" therapy sessions in such a way as to trigger physical and cognitive behavioral patterns required for treatment and neurological recovery. We also advocate that improving game knowledge among therapists and improving communication between therapists and game designers may lead to a novel avenue in designing applied games with specific therapeutic input, thereby making gamification in therapy a realistic and promising future that may optimize clinical practice.
34
Abstract
To efficiently interact with the external environment, our nervous system combines information arising from different sensory modalities. Recent evidence suggests that cross-modal interactions can be automatic and even unconscious, reflecting the ecological relevance of cross-modal processing. Here, we use continuous flash suppression (CFS) to directly investigate whether haptic signals can interact with visual signals outside of visual awareness. We measured suppression durations of visual gratings rendered invisible by CFS either during visual stimulation alone or during visuo-haptic stimulation. We found that active exploration of a haptic grating congruent in orientation with the suppressed visual grating reduced suppression durations compared with both visual-only stimulation and incongruent visuo-haptic stimulation. We also found that the facilitatory effect of touch on visual suppression disappeared when the visual and haptic gratings were mismatched in either spatial frequency or orientation. Together, these results demonstrate that congruent touch can accelerate the rise to consciousness of a suppressed visual stimulus and that this unconscious cross-modal interaction depends on visuo-haptic congruency. Furthermore, since CFS suppression is thought to occur early in visual cortical processing, our data reinforce the evidence that visuo-haptic interactions can occur at the earliest stages of cortical processing.
Affiliation(s)
- Claudia Lunghi
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy; Institute of Neuroscience, CNR, Pisa, Italy
- Luca Lo Verde
- Institute of Neuroscience, CNR, Pisa, Italy; Department NEUROFARBA, University of Florence, Italy
- David Alais
- School of Psychology, University of Sydney, NSW, Australia
35
Hogendoorn H, Verstraten FAJ, MacDougall H, Alais D. Vestibular signals of self-motion modulate global motion perception. Vision Res 2016; 130:22-30. [PMID: 27871885 DOI: 10.1016/j.visres.2016.11.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2016] [Revised: 11/08/2016] [Accepted: 11/16/2016] [Indexed: 11/26/2022]
Abstract
Certain visual stimuli can have two possible interpretations, which may alternate stochastically, a phenomenon known as bistability. Some classes of bistable stimuli, including binocular rivalry, are sensitive to bias from input in other modalities, such as sound and touch. Here, we address the question of whether bistable visual motion stimuli, known as plaids, are affected by vestibular input caused by self-motion. In Experiment 1, we show that a vestibular self-motion signal biases the interpretation of the bistable plaid, increasing or decreasing the likelihood of the plaid being perceived as globally coherent or transparently sliding depending on the relationship between the self-motion and global visual motion directions. In Experiment 2, we find that when the vestibular direction is orthogonal to the visual direction, the vestibular self-motion signal also biases the direction of one-dimensional motion. This interaction suggests that the effect in Experiment 1 is due to the self-motion vector adding to the visual motion vectors. Together, these results demonstrate that the perception of visual motion direction can be systematically affected by concurrent but uninformative and task-irrelevant vestibular input caused by self-motion.
Affiliation(s)
- Hinze Hogendoorn
- Helmholtz Institute, Department of Experimental Psychology, Utrecht University, The Netherlands; School of Psychology, The University of Sydney, NSW 2006, Australia.
- Frans A J Verstraten
- Helmholtz Institute, Department of Experimental Psychology, Utrecht University, The Netherlands; School of Psychology, The University of Sydney, NSW 2006, Australia
- David Alais
- School of Psychology, The University of Sydney, NSW 2006, Australia
36
Abstract
It is known that, after a prolonged period of visual deprivation, the adult visual cortex can be recruited for nonvisual processing, reflecting cross-modal plasticity. Here, we investigated whether cross-modal plasticity can occur at short timescales in the typical adult brain by comparing the interaction between vision and touch during binocular rivalry before and after a brief period of monocular deprivation, which strongly alters ocular balance in favor of the deprived eye. While viewing dichoptically two gratings of orthogonal orientation, participants were asked to actively explore a haptic grating congruent in orientation with one of the two rivalrous stimuli. We repeated this procedure before and after 150 min of monocular deprivation. We first confirmed that haptic stimulation interacted with vision during rivalry, promoting dominance of the congruent visuo-haptic stimulus, and that monocular deprivation increased the dominance of the deprived eye and decreased that of the nondeprived eye. Interestingly, after deprivation, we found that the effect of touch did not change for the nondeprived eye, whereas it disappeared for the deprived eye, which was potentiated after deprivation. The absence of visuo-haptic interaction for the deprived eye lasted for over 1 hr and was not attributable to masking induced by the stronger response of the deprived eye, as confirmed by a control experiment. Taken together, our results demonstrate that the adult human visual cortex retains a high degree of cross-modal plasticity, which can occur even at very short timescales.
Affiliation(s)
- Luca Lo Verde
- University of Florence; Institute of Neuroscience, Consiglio Nazionale Delle Ricerche, Pisa
- Claudia Lunghi
- Institute of Neuroscience, Consiglio Nazionale Delle Ricerche, Pisa; University of Pisa
37
Han S, Lunghi C, Alais D. The temporal frequency tuning of continuous flash suppression reveals peak suppression at very low frequencies. Sci Rep 2016; 6:35723. [PMID: 27767078 PMCID: PMC5073327 DOI: 10.1038/srep35723] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2016] [Accepted: 10/03/2016] [Indexed: 11/09/2022] Open
Abstract
Continuous flash suppression (CFS) is a psychophysical technique in which a rapidly changing Mondrian pattern viewed by one eye suppresses a target in the other eye for several seconds. Despite the widespread use of CFS to study unconscious visual processes, the temporal tuning of CFS suppression is currently unknown. In the present study we used spatiotemporally filtered dynamic noise as masking stimuli to probe the temporal characteristics of CFS. Surprisingly, we find that suppression in CFS peaks very prominently at approximately 1 Hz, well below the rates typically used in CFS studies (10 Hz or more). As well as a strong bias to low temporal frequencies, CFS suppression is greater for high spatial frequencies and increases with increasing masker contrast, indicating involvement of parvocellular/ventral mechanisms in the suppression process. These results are reminiscent of binocular rivalry, unifying two phenomena previously thought to require different explanations.
Affiliation(s)
- Shui'er Han
- School of Psychology, University of Sydney, NSW 2006, Australia
- Claudia Lunghi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Via Savi 10, 56100 Pisa, Italy; Neuroscience Institute, National Research Council (CNR), Via Moruzzi 1, 56100 Pisa, Italy
- David Alais
- School of Psychology, University of Sydney, NSW 2006, Australia
38
ten Oever S, Romei V, van Atteveldt N, Soto-Faraco S, Murray MM, Matusz PJ. The COGs (context, object, and goals) in multisensory processing. Exp Brain Res 2016; 234:1307-23. [PMID: 26931340 DOI: 10.1007/s00221-016-4590-z] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2015] [Accepted: 01/30/2016] [Indexed: 12/20/2022]
Abstract
Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and "top-down" control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer's goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.
Affiliation(s)
- Sanne ten Oever
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Vincenzo Romei
- Department of Psychology, Centre for Brain Science, University of Essex, Colchester, UK
- Nienke van Atteveldt
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Department of Educational Neuroscience, Faculty of Psychology and Education and Institute LEARN!, VU University Amsterdam, Amsterdam, The Netherlands
- Salvador Soto-Faraco
- Multisensory Research Group, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University Hospital Center and University of Lausanne, BH7.081, rue du Bugnon 46, 1011 Lausanne, Switzerland; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland; Department of Ophthalmology, Jules-Gonin Eye Hospital, University of Lausanne, Lausanne, Switzerland
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University Hospital Center and University of Lausanne, BH7.081, rue du Bugnon 46, 1011 Lausanne, Switzerland; Attention, Brain, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, Oxford, UK
39
When audiovisual correspondence disturbs visual processing. Exp Brain Res 2016; 234:1325-32. [PMID: 26884130 DOI: 10.1007/s00221-016-4591-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2015] [Accepted: 01/30/2016] [Indexed: 10/22/2022]
Abstract
Multisensory integration is known to create a more robust and reliable perceptual representation of one's environment. Specifically, a congruent auditory input can make a visual stimulus more salient, consequently enhancing the visibility and detection of the visual target. However, it remains largely unknown whether a congruent auditory input can also impair visual processing. In the current study, we demonstrate that temporally congruent auditory input disrupts visual processing, consequently slowing down visual target detection. More importantly, this cross-modal inhibition occurs only when the contrast of visual targets is high. When the contrast of visual targets is low, enhancement of visual target detection is observed, consistent with the prediction based on the principle of inverse effectiveness (PIE) in cross-modal integration. The switch of the behavioral effect of audiovisual interaction from benefit to cost further extends the PIE to encompass the suppressive cross-modal interaction.
40
Salomon R, Kaliuzhna M, Herbelin B, Blanke O. Balancing awareness: Vestibular signals modulate visual consciousness in the absence of awareness. Conscious Cogn 2015. [DOI: 10.1016/j.concog.2015.07.009] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
41
An invisible touch: Body-related multisensory conflicts modulate visual consciousness. Neuropsychologia 2015; 88:131-139. [PMID: 26519553 DOI: 10.1016/j.neuropsychologia.2015.10.034] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2015] [Revised: 09/15/2015] [Accepted: 10/26/2015] [Indexed: 11/22/2022]
Abstract
The majority of scientific studies on consciousness have focused on vision, exploring the cognitive and neural mechanisms of conscious access to visual stimuli. In parallel, studies on bodily consciousness have revealed that bodily (i.e. tactile, proprioceptive, visceral, vestibular) signals are the basis for the sense of self. However, the role of bodily signals in the formation of visual consciousness is not well understood. Here we investigated how body-related visuo-tactile stimulation modulates conscious access to visual stimuli. We used a robotic platform to apply controlled tactile stimulation to the participants' back while they viewed a dot moving either in synchrony or asynchrony with the touch on their back. Critically, the dot was rendered invisible through continuous flash suppression. Manipulating the visual context by presenting the dot moving on either a body form, or a non-bodily object we show that: (i) conflict induced by synchronous visuo-tactile stimulation in a body context is associated with a delayed conscious access compared to asynchronous visuo-tactile stimulation, (ii) this effect occurs only in the context of a visual body form, and (iii) is not due to detection or response biases. The results indicate that body-related visuo-tactile conflicts impact visual consciousness by facilitating access of non-conflicting visual information to awareness, and that these are sensitive to the visual context in which they are presented, highlighting the interplay between bodily signals and visual experience.
42
Lee M, Blake R, Kim S, Kim CY. Melodic sound enhances visual awareness of congruent musical notes, but only if you can read music. Proc Natl Acad Sci U S A 2015; 112:8493-8. [PMID: 26077907 PMCID: PMC4500286 DOI: 10.1073/pnas.1509529112] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Predictive influences of auditory information on resolution of visual competition were investigated using music, whose visual symbolic notation is familiar only to those with musical training. Results from two experiments using different experimental paradigms revealed that melodic congruence between what is seen and what is heard impacts perceptual dynamics during binocular rivalry. This bisensory interaction was observed only when the musical score was perceptually dominant, not when it was suppressed from awareness, and it was observed only in people who could read music. Results from two ancillary experiments showed that this effect of congruence cannot be explained by differential patterns of eye movements or by differential response sluggishness associated with congruent score/melody combinations. Taken together, these results demonstrate robust audiovisual interaction based on high-level, symbolic representations and its predictive influence on perceptual dynamics during binocular rivalry.
Affiliation(s)
- Minyoung Lee
- Department of Psychology, Korea University, Seoul 136701, Korea
- Randolph Blake
- Department of Psychological Sciences, Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37240; Department of Brain and Cognitive Sciences, Seoul National University, Seoul 151742, Korea
- Sujin Kim
- Department of Psychology, Korea University, Seoul 136701, Korea
- Chai-Youn Kim
- Department of Psychology, Korea University, Seoul 136701, Korea
43
Moors P, Huygelier H, Wagemans J, de-Wit L, van Ee R. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression. Iperception 2015; 6:48-62. [PMID: 26034573 PMCID: PMC4441023 DOI: 10.1068/i0678] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2014] [Revised: 02/16/2015] [Indexed: 11/13/2022] Open
Abstract
Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.
Affiliation(s)
- Pieter Moors
- Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Hanne Huygelier
- Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Johan Wagemans
- Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Lee de-Wit
- Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Raymond van Ee
- Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Leuven, Belgium; Department of Biophysics, Donders Institute, Radboud University, Nijmegen, The Netherlands; Department of Brain, Body, & Behavior, Philips Research Laboratories, Eindhoven, The Netherlands
44
Pomper U, Keil J, Foxe JJ, Senkowski D. Intersensory selective attention and temporal orienting operate in parallel and are instantiated in spatially distinct sensory and motor cortices. Hum Brain Mapp 2015; 36:3246-59. [PMID: 26032901 DOI: 10.1002/hbm.22845] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2014] [Revised: 04/13/2015] [Accepted: 05/05/2015] [Indexed: 11/12/2022] Open
Abstract
Knowledge about the sensory modality in which a forthcoming event might occur permits anticipatory intersensory attention. Information as to when exactly an event occurs enables temporal orienting. Intersensory and temporal attention mechanisms are often deployed simultaneously, but as yet it is unknown whether these processes operate interactively or in parallel. In this human electroencephalography study, we manipulated intersensory attention and temporal orienting in the same paradigm. A continuous stream of bisensory visuo-tactile inputs was presented, and a preceding auditory cue indicated to which modality participants should attend (visual or tactile). Temporal orienting was manipulated blockwise by presenting stimuli either at regular or irregular intervals. Using linear beamforming, we examined neural oscillations at virtual channels in sensory and motor cortices. Both attentional processes simultaneously modulated the power of anticipatory delta- and beta-band oscillations, as well as delta-band phase coherence. Modulations in sensory cortices reflected intersensory attention, indicative of modality-specific gating mechanisms. Modulations in motor and partly in somatosensory cortex reflected temporal orienting, indicative of a supramodal preparatory mechanism. We found no evidence for interactions between intersensory attention and temporal orienting, suggesting that these two mechanisms act in parallel and largely independent of each other in sensory and motor cortices.
Collapse
Affiliation(s)
- Ulrich Pomper
- Department of Psychiatry and Psychotherapy, St. Hedwig Hospital, Charité-Universitätsmedizin Berlin, Große Hamburger Str. 5-11, 10115, Berlin, Germany; UCL, Ear Institute, 332 Gray's Inn Road, London, WC1X 8EE, UK
| | - Julian Keil
- Department of Psychiatry and Psychotherapy, St. Hedwig Hospital, Charité-Universitätsmedizin Berlin, Große Hamburger Str. 5-11, 10115, Berlin, Germany
| | - John J Foxe
- The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Departments of Pediatrics & Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Van Etten Building - Wing 1C, 1225 Morris Park Avenue, Bronx, NY, 10461, USA
| | - Daniel Senkowski
- Department of Psychiatry and Psychotherapy, St. Hedwig Hospital, Charité-Universitätsmedizin Berlin, Große Hamburger Str. 5-11, 10115, Berlin, Germany
| |
Collapse
|
45
|
Congruent tactile stimulation reduces the strength of visual suppression during binocular rivalry. Sci Rep 2015; 5:9413. [PMID: 25797534 PMCID: PMC4369741 DOI: 10.1038/srep09413] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2014] [Accepted: 03/04/2015] [Indexed: 11/16/2022] Open
Abstract
Presenting different images to each eye triggers ‘binocular rivalry’, in which one image is visible and the other suppressed, with the visible image alternating every second or so. We previously showed that binocular rivalry between cross-oriented gratings is altered when the fingertip explores a grooved stimulus aligned with one of the rivaling gratings: the matching visual grating's dominance duration was lengthened and its suppression duration shortened. In a more stringent test, here we measure visual contrast sensitivity during rivalry dominance and suppression, with and without exploration of the grooved surface, to determine whether rivalry suppression strength is modulated by touch. We find that a visual grating undergoes 45% less suppression when observers touch an aligned grating, compared to a cross-oriented one. Touching an aligned grating also improved visual detection thresholds for the ‘invisible’ suppressed grating by 2.4 dB, relative to a vision-only condition. These results show that congruent haptic stimulation prevents a visual stimulus from becoming deeply suppressed in binocular rivalry. Moreover, because congruent touch acted on the phenomenally invisible grating, this visuo-haptic interaction must precede awareness and likely occurs early in visual processing.
Collapse
|
46
|
Aller M, Giani A, Conrad V, Watanabe M, Noppeney U. A spatially collocated sound thrusts a flash into awareness. Front Integr Neurosci 2015; 9:16. [PMID: 25774126 PMCID: PMC4343005 DOI: 10.3389/fnint.2015.00016] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2014] [Accepted: 02/05/2015] [Indexed: 11/22/2022] Open
Abstract
To interact effectively with the environment, the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals, pointing toward low-level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception.
Collapse
Affiliation(s)
- Máté Aller
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
| | - Anette Giani
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Verena Conrad
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | | | - Uta Noppeney
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| |
Collapse
|
48
|
Adam R, Noppeney U. A phonologically congruent sound boosts a visual target into perceptual awareness. Front Integr Neurosci 2014; 8:70. [PMID: 25309357 PMCID: PMC4160974 DOI: 10.3389/fnint.2014.00070] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2014] [Accepted: 08/20/2014] [Indexed: 11/13/2022] Open
Abstract
Capacity limitations of attentional resources allow only a fraction of sensory inputs to enter our awareness. Most prominently, in the attentional blink the observer often fails to detect the second of two rapidly successive targets that are presented in a sequence of distractor items. To investigate how auditory inputs enable a visual target to escape the attentional blink, this study presented the visual letter targets T1 and T2 together with phonologically congruent or incongruent spoken letter names. First, a congruent relative to an incongruent sound at T2 rendered visual T2 more visible. Second, this T2 congruency effect was amplified when the sound was congruent at T1, as indicated by a T1 congruency × T2 congruency interaction. Critically, these effects were observed both when the sounds were presented in synchrony with and prior to the visual target letters, suggesting that the sounds may increase visual target identification via multiple mechanisms such as audiovisual priming or decisional interactions. Our results demonstrate that a sound around the time of T2 increases subjects' awareness of the visual target as a function of T1 and T2 congruency. Consistent with Bayesian causal inference, the brain may thus combine (1) prior congruency expectations based on T1 congruency and (2) phonological congruency cues provided by the audiovisual inputs at T2 to infer whether auditory and visual signals emanate from a common source and should hence be integrated for perceptual decisions.
Collapse
Affiliation(s)
- Ruth Adam
- Cognitive Neuroimaging Group, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of General Psychiatry, Center of Psychosocial Medicine, University of Heidelberg, Heidelberg, Germany; Institute for Stroke and Dementia Research, Ludwig-Maximilian-University, Munich, Germany
| | - Uta Noppeney
- Cognitive Neuroimaging Group, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Psychology, Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, UK
| |
Collapse
|
49
|
Predictive coding explains auditory and tactile influences on vision during binocular rivalry. J Neurosci 2014; 34:6423-4. [PMID: 24806668 DOI: 10.1523/jneurosci.1040-14.2014] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
|