1
Yilmaz SK, Kafaligonul H. Attentional demands in the visual field modulate audiovisual interactions in the temporal domain. Hum Brain Mapp 2024; 45:e70009. [PMID: 39185690] [PMCID: PMC11345635] [DOI: 10.1002/hbm.70009]
Abstract
Attention and crossmodal interactions are closely linked through a complex interplay at different stages of sensory processing. Within the context of motion perception, previous research revealed that attentional demands alter audiovisual interactions in the temporal domain. In the present study, we aimed to understand the neurophysiological correlates of these attentional modulations. We utilized an audiovisual motion paradigm that elicits auditory time interval effects on perceived visual speed. The audiovisual interactions in the temporal domain were quantified by changes in perceived visual speed across different auditory time intervals. We manipulated attentional demands in the visual field by introducing a secondary task on a stationary object (i.e., single- vs. dual-task conditions). When the attentional demands were high (i.e., dual-task condition), there was a significant decrease in the effects of auditory time interval on perceived visual speed, suggesting a reduction in audiovisual interactions. Moreover, we found significant differences in both early and late neural activities elicited by visual stimuli across task conditions (single vs. dual), reflecting an overall increase in attentional demands in the visual field. Consistent with the changes in perceived visual speed, the audiovisual interactions in neural signals declined in the late positive component range. Compared with findings from previous studies using different paradigms, our results support the view that attentional modulations of crossmodal interactions are not unitary and depend on task-specific components. They also have important implications for motion processing and speed estimation in daily life situations where sensory relevance and attentional demands constantly change.
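A minimal sketch of how the interaction index described above could be computed: the auditory-interval effect on perceived speed per task condition, and a paired test of its reduction under dual-task load. The data layout and effect sizes are simulated placeholder assumptions, not the authors' analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 20
# perceived_speed[subject, task (0=single, 1=dual), auditory interval (0=short, 1=long)]
perceived_speed = rng.normal(7.0, 1.0, size=(n_subjects, 2, 2))
perceived_speed[:, 0, 1] += 1.0  # simulate a larger interval effect in the single-task condition

# Auditory-interval effect = shift in perceived speed between the two intervals
interval_effect = perceived_speed[:, :, 1] - perceived_speed[:, :, 0]

# Paired comparison: does the effect shrink when attentional demands are high?
t, p = stats.ttest_rel(interval_effect[:, 0], interval_effect[:, 1])
print(f"single-task effect: {interval_effect[:, 0].mean():.2f}, "
      f"dual-task effect: {interval_effect[:, 1].mean():.2f}, t={t:.2f}, p={p:.4f}")
```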
Affiliation(s)
- Seyma Koc Yilmaz
- Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Department of Neuroscience, Bilkent University, Ankara, Turkey
- Hulusi Kafaligonul
- Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Department of Neuroscience, Bilkent University, Ankara, Turkey
- Neuroscience and Neurotechnology Center of Excellence (NÖROM), Faculty of Medicine, Gazi University, Ankara, Turkey
2
Bao X, Lomber SG. Visual modulation of auditory evoked potentials in the cat. Sci Rep 2024; 14:7177. [PMID: 38531940] [DOI: 10.1038/s41598-024-57075-1]
Abstract
Visual modulation of the auditory system is not only a neural substrate for multisensory processing, but also serves as a backup input underlying cross-modal plasticity in deaf individuals. Event-related potential (ERP) studies in humans have provided evidence of multiple-stage audiovisual interactions, ranging from tens to hundreds of milliseconds after the presentation of stimuli. However, it is still unknown whether the time course of visual modulation in auditory ERPs can be characterized in animal models. EEG signals were recorded in sedated cats from subdermal needle electrodes. The auditory stimuli (clicks) and visual stimuli (flashes) were timed by two independent Poisson processes and were presented either simultaneously or alone. The visual-only ERPs were subtracted from the audiovisual ERPs before being compared to the auditory-only ERPs. N1 amplitude showed a trend of transitioning from suppression to facilitation, with a disruption at a ~100 ms flash-to-click delay. We conclude that visual modulation as a function of SOA over an extended range is more complex than previously characterized with short SOAs, and that its periodic pattern can be interpreted with the "phase resetting" hypothesis.
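The two methodological steps named above, independent Poisson timing of the click and flash streams and the additive-model contrast (AV − V vs. A), could look roughly like the sketch below; rates, durations, and the placeholder epoch arrays are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_event_times(rate_hz, duration_s):
    """Homogeneous Poisson process: cumulate exponential inter-event intervals."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_hz)
        if t > duration_s:
            return np.array(times)
        times.append(t)

clicks = poisson_event_times(rate_hz=0.5, duration_s=600.0)   # auditory stream
flashes = poisson_event_times(rate_hz=0.5, duration_s=600.0)  # independent visual stream

# Additive-model contrast on epoched data (trials x time points; placeholder arrays):
av = rng.normal(size=(100, 500))  # audiovisual epochs
v = rng.normal(size=(100, 500))   # visual-only epochs
a = rng.normal(size=(100, 500))   # auditory-only epochs
visual_modulation = (av.mean(0) - v.mean(0)) - a.mean(0)  # nonzero where vision modulates the auditory ERP
```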
Affiliation(s)
- Xiaohan Bao
- Integrated Program in Neuroscience, McGill University, Montreal, QC, H3G 1Y6, Canada
- Stephen G Lomber
- Department of Physiology, McGill University, McIntyre Medical Sciences Building, Rm 1223, 3655 Promenade Sir William Osler, Montreal, QC, H3G 1Y6, Canada.
3
Zou Z, Zhao B, Ting KH, Wong C, Hou X, Chan CCH. Multisensory integration augmenting motor processes among older adults. Front Aging Neurosci 2023; 15:1293479. [PMID: 38192281] [PMCID: PMC10773807] [DOI: 10.3389/fnagi.2023.1293479]
Abstract
Objective Multisensory integration enhances sensory processing in older adults. This study aimed to investigate how this sensory enhancement modulates motor-related processes in healthy older adults. Method Thirty-one older adults (12 males, mean age 67.7 years) and 29 younger adults as controls (16 males, mean age 24.9 years) participated in this study. Participants were asked to discriminate spatial information embedded in unisensory (visual or auditory) and multisensory (audiovisual) conditions. Responses were made with movements of the left and right wrists corresponding to the spatial information and were registered with specially designed pads. The electroencephalographic (EEG) markers were the event-related super-additive P2 in the fronto-central region and the stimulus-locked (s-LRP) and response-locked (r-LRP) lateralized readiness potentials. Results Older participants responded significantly faster and more accurately in the multisensory condition than in the unisensory conditions, a benefit larger than that shown by controls. Both groups had significantly less negative-going s-LRP amplitudes elicited at the central sites in the between-condition contrasts. However, only the older group showed significantly less negative-going, centrally distributed r-LRP amplitudes. More importantly, only the r-LRP amplitude in the audiovisual condition significantly predicted behavioral performance. Conclusion Audiovisual integration shortens reaction times, which is associated with modulated motor-related processes among the older participants. The super-additive effects modulate both the motor preparation and generation processes. Interestingly, only the modulated motor generation process contributes to faster reaction times. As these effects were observed in older but not younger participants, multisensory integration likely augments motor functions in those with age-related neurodegeneration.
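The lateralized readiness potentials named above are conventionally derived by a double subtraction over the central electrodes C3/C4; the sketch below shows that computation under assumed epoch arrays and channel names (the s-LRP and r-LRP differ only in whether epochs are time-locked to the stimulus or to the response).

```python
import numpy as np

def lrp(c3, c4, response_hand):
    """c3, c4: (n_trials, n_times) epochs from electrodes C3/C4;
    response_hand: array of 'left'/'right' per trial.
    Returns the contralateral-minus-ipsilateral average (double subtraction)."""
    hand = np.asarray(response_hand)
    left = (c4[hand == "left"] - c3[hand == "left"]).mean(axis=0)     # contra = C4
    right = (c3[hand == "right"] - c4[hand == "right"]).mean(axis=0)  # contra = C3
    return 0.5 * (left + right)

# s-LRP vs. r-LRP: time-lock the same trials to stimulus onset or to response
# onset before calling lrp(). Placeholder data for illustration:
rng = np.random.default_rng(3)
c3 = rng.normal(size=(120, 400))
c4 = rng.normal(size=(120, 400))
hands = rng.choice(["left", "right"], 120)
print(lrp(c3, c4, hands).shape)  # (400,) time course
```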
Affiliation(s)
- Zhi Zou
- Department of Sport and Health, Guangzhou Sport University, Guangzhou, China
- Benxuan Zhao
- Department of Sport and Health, Guangzhou Sport University, Guangzhou, China
- Kin-hung Ting
- University Research Facility in Behavioral and Systems Neuroscience, The Hong Kong Polytechnic University, Hong Kong, Hong Kong SAR, China
- Clive Wong
- Department of Psychology, The Education University of Hong Kong, New Territories, Hong Kong SAR, China
- Xiaohui Hou
- Department of Sport and Health, Guangzhou Sport University, Guangzhou, China
- Chetwyn C. H. Chan
- Department of Psychology, The Education University of Hong Kong, New Territories, Hong Kong SAR, China
4
Wang Y, Liu P, Liu Z, Ding J, Zhou W. The effect of mobile phone ringtone on visual recognition during driving: Evidence from laboratory and real-scene eye movement experiments. Traffic Inj Prev 2023; 24:678-685. [PMID: 37640435] [DOI: 10.1080/15389588.2023.2247111]
Abstract
OBJECTIVE Competition for visual attention during driving increases when sounds must be integrated, which bears on driving safety. To determine the effect of mobile phone ringtones on visual recognition during driving, laboratory and real-scene eye movement experiments were conducted with simulated and real driving tasks, respectively. METHOD We manipulated the physical (long vs. short exposure duration) and psychological (self-related vs. non-self-related) properties of mobile phone ringtones presented to drivers. Estimates were based on linear mixed models (LMMs) and generalized linear mixed models (GLMMs). RESULTS Self-related ringtones had a greater influence on driving attention than non-self-related ones, and the interaction between exposure duration and self-relatedness was significant. Furthermore, the ringtone's impact continued in real time after the ringtone had stopped. CONCLUSION These results highlight the importance of considering the impact of ringtones on driving performance and demonstrate that ringtone properties (exposure duration and self-relatedness) can affect cognitive processes.
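A minimal sketch of the mixed-model estimation named above, using statsmodels on a synthetic data frame; the column names and measures are illustrative assumptions, not the study's variables. For binary outcomes, a logistic mixed model (e.g., statsmodels' BinomialBayesMixedGLM) would play the analogous GLMM role.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "subject": rng.integers(0, 20, n),             # participant ID (grouping factor)
    "self_related": rng.integers(0, 2, n),         # 1 = self-related ringtone
    "exposure": rng.choice(["short", "long"], n),  # exposure duration
    "fixation_ms": rng.normal(250, 40, n),         # eye-movement measure
})

# LMM: fixed effects for the two ringtone properties and their interaction,
# with by-subject random intercepts
lmm = smf.mixedlm("fixation_ms ~ self_related * exposure",
                  data=df, groups=df["subject"]).fit()
print(lmm.summary())
```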
Affiliation(s)
- Yi Wang
- Beijing Key Laboratory of Learning and Cognition, School of Psychology, Capital Normal University, Beijing, China
- Ping Liu
- Beijing Key Laboratory of Learning and Cognition, School of Psychology, Capital Normal University, Beijing, China
- Zeqi Liu
- College of Elementary Education, Capital Normal University, Beijing, China
- Jinhong Ding
- Beijing Key Laboratory of Learning and Cognition, School of Psychology, Capital Normal University, Beijing, China
- Wei Zhou
- Beijing Key Laboratory of Learning and Cognition, School of Psychology, Capital Normal University, Beijing, China
5
Jiang Y, Qiao R, Shi Y, Tang Y, Hou Z, Tian Y. The effects of attention in auditory-visual integration revealed by time-varying networks. Front Neurosci 2023; 17:1235480. [PMID: 37600005] [PMCID: PMC10434229] [DOI: 10.3389/fnins.2023.1235480]
Abstract
Attention and audiovisual integration are crucial subjects in the field of brain information processing. Many previous studies have sought to determine the relationship between them through specific experiments but have failed to reach a unified conclusion. These studies explored the relationship through the frameworks of early, late, and parallel integration, though network analysis has been employed sparingly. In this study, we employed time-varying network analysis, which offers a comprehensive and dynamic insight into cognitive processing, to explore the relationship between attention and auditory-visual integration. The combination of high-spatial-resolution functional magnetic resonance imaging (fMRI) and high-temporal-resolution electroencephalography (EEG) was used. First, a generalized linear model (GLM) was employed to find task-related fMRI activations, which were selected as regions of interest (ROIs) serving as nodes of the time-varying network. Then the electrical activity of the auditory-visual cortex was estimated via the normalized minimum norm estimation (MNE) source localization method. Finally, the time-varying network was constructed using the adaptive directed transfer function (ADTF) technique. Task-related fMRI activations were mainly observed in the bilateral temporoparietal junction (TPJ), superior temporal gyrus (STG), and primary visual and auditory areas. The time-varying network analysis revealed that V1/A1↔STG connections occurred before TPJ↔STG connections. The results therefore support the theory that auditory-visual integration occurs before attention, aligning with the early integration framework.
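The MNE source-localization step could be expressed with MNE-Python roughly as below; the file names are hypothetical, the forward solution and noise covariance must come from the actual recordings, and the ADTF network construction itself is not shown.

```python
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Hypothetical preprocessed inputs: an evoked response, a forward solution,
# and a noise covariance estimated from the actual recordings.
evoked = mne.read_evokeds("av_task-ave.fif", condition=0)
fwd = mne.read_forward_solution("av_task-fwd.fif")
noise_cov = mne.read_cov("av_task-cov.fif")

inv = make_inverse_operator(evoked.info, fwd, noise_cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")
# stc.data holds source-space time courses, which could then be averaged
# within the fMRI-defined ROIs to form the nodes of the time-varying network.
```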
Affiliation(s)
- Yuhao Jiang
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Central Nervous System Drug Key Laboratory of Sichuan Province, Luzhou, China
- Rui Qiao
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yupan Shi
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yi Tang
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Zhengjun Hou
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yin Tian
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
6
Fossataro C, Galigani M, Rossi Sebastiano A, Bruno V, Ronga I, Garbarini F. Spatial proximity to others induces plastic changes in the neural representation of the peripersonal space. iScience 2022; 26:105879. [PMID: 36654859] [PMCID: PMC9840938] [DOI: 10.1016/j.isci.2022.105879]
Abstract
Peripersonal space (PPS) is a highly plastic "invisible bubble" surrounding the body whose boundaries are mapped through multisensory integration. Yet, it is unclear how the spatial proximity to others alters PPS boundaries. Across five experiments (N = 80), by recording behavioral and electrophysiological responses to visuo-tactile stimuli, we demonstrate that the proximity to others induces plastic changes in the neural PPS representation. The spatial proximity to someone else's hand shrinks the portion of space within which multisensory responses occur, thus reducing the PPS boundaries. This suggests that PPS representation, built from bodily and multisensory signals, plastically adapts to the presence of conspecifics to define the self-other boundaries, so that what is usually coded as "my space" is recoded as "your space". When the space is shared with conspecifics, it seems adaptive to move the other-space away from the self-space to discriminate whether external events pertain to the self-body or to other-bodies.
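A common way to delimit PPS boundaries in this literature (not necessarily the authors' exact pipeline) is to fit a sigmoid to reaction times as a function of the visual stimulus distance and read the boundary off the sigmoid's central point; below is a sketch with simulated data.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, rt_near, rt_far, d50, slope):
    """RT as a function of visual distance; d50 marks the PPS boundary."""
    return rt_near + (rt_far - rt_near) / (1.0 + np.exp(-(d - d50) / slope))

rng = np.random.default_rng(4)
distance = np.linspace(5, 100, 12)  # visual stimulus distance from the body (cm)
rt = 350 + 60 / (1 + np.exp(-(distance - 45) / 8)) + rng.normal(0, 5, 12)  # simulated RTs

params, _ = curve_fit(sigmoid, distance, rt, p0=[350, 410, 50, 8])
print(f"estimated PPS boundary: {params[2]:.1f} cm")
# Comparing d50 fitted alone vs. near another person would index the
# boundary shrinkage reported in the abstract.
```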
Affiliation(s)
- Carlotta Fossataro
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy
- Mattia Galigani
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy
- Valentina Bruno
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy
- Irene Ronga
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy
- Francesca Garbarini
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy; Neuroscience Institute of Turin (NIT), Turin 10123, Italy (corresponding author)
7
Electrophysiological differences and similarities in audiovisual speech processing in CI users with unilateral and bilateral hearing loss. Curr Res Neurobiol 2022; 3:100059. [DOI: 10.1016/j.crneur.2022.100059]
8
Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022; 44:656-667. [PMID: 36169038] [PMCID: PMC9842891] [DOI: 10.1002/hbm.26090]
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants while they performed multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both spatial and temporal bisection tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was more substantial in the occipital areas during the spatial bisection task, and in the temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that this specificity also selectively modulates cortical activity in response to multisensory stimuli.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgia Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
9
Ren Q, Marshall AC, Kaiser J, Schütz-Bosbach S. Multisensory integration of anticipated cardiac signals with visual targets affects their detection among multiple visual stimuli. Neuroimage 2022; 262:119549. [DOI: 10.1016/j.neuroimage.2022.119549]
10
The relationship between multisensory associative learning and multisensory integration. Neuropsychologia 2022; 174:108336. [PMID: 35872233] [DOI: 10.1016/j.neuropsychologia.2022.108336]
Abstract
Integrating sensory information from multiple modalities leads to more precise and efficient perception and behaviour. The process of determining which sensory information should be perceptually bound relies on both low-level stimulus features and multisensory associations learned throughout development based on the statistics of our environment. Here, we explored the relationship between multisensory associative learning and multisensory integration using electroencephalography (EEG) and behavioural measures. Sixty-one participants completed a three-phase study. First, participants were exposed to novel audiovisual shape-tone pairings, with frequent and infrequent stimulus pairings, and completed a target detection task. The mismatch negativity (MMN) and P3 were calculated from the EEG as neural indices of multisensory associative learning. Next, the same learned stimulus pairs were presented in audiovisual as well as unisensory auditory and visual modalities, while both early (<120 ms) and late neural indices of multisensory integration were recorded. Finally, participants completed an analogous behavioural speeded-response task, with behavioural indices of multisensory gain calculated using the race model. Significant relationships were found between neural measures of associative learning in fronto-central and occipital areas and early and late indices of multisensory integration in frontal and centro-parietal areas, respectively. Participants who showed stronger indices of associative learning also exhibited stronger indices of multisensory integration of the stimuli they had learned to associate. Furthermore, a significant relationship was found between the neural index of early multisensory integration and the behavioural index of multisensory gain. These results provide insight into the neural underpinnings of how higher-order processes such as associative learning guide multisensory integration.
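The race-model analysis named above is typically Miller's race-model inequality; here is a sketch with simulated RTs, comparing the audiovisual RT distribution against the bound implied by the two unisensory distributions.

```python
import numpy as np

rng = np.random.default_rng(5)
rt_a = rng.normal(420, 50, 200)   # auditory-only RTs (ms)
rt_v = rng.normal(440, 50, 200)   # visual-only RTs
rt_av = rng.normal(380, 45, 200)  # audiovisual RTs

t = np.linspace(200, 600, 81)     # evaluation grid (ms)

def ecdf(rt):
    """Empirical cumulative distribution of RTs evaluated on grid t."""
    return np.searchsorted(np.sort(rt), t, side="right") / rt.size

race_bound = np.minimum(ecdf(rt_a) + ecdf(rt_v), 1.0)  # Miller's inequality bound
violation = ecdf(rt_av) - race_bound                   # >0 => gain beyond a parallel race
print(f"max race-model violation: {violation.max():.3f}")
```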
11
The timecourse of multisensory speech processing in unilaterally stimulated cochlear implant users revealed by ERPs. Neuroimage Clin 2022; 34:102982. [PMID: 35303598] [PMCID: PMC8927996] [DOI: 10.1016/j.nicl.2022.102982]
Abstract
Both normal-hearing (NH) and cochlear implant (CI) users show a clear benefit in multisensory speech processing. Group differences in ERP topographies and cortical source activation suggest distinct audiovisual speech processing in CI users when compared to NH listeners. Electrical neuroimaging, including topographic and ERP source analysis, provides a suitable tool to study the timecourse of multisensory speech processing in CI users.
A cochlear implant (CI) is an auditory prosthesis which can partially restore the auditory function in patients with severe to profound hearing loss. However, this bionic device provides only limited auditory information, and CI patients may compensate for this limitation by means of a stronger interaction between the auditory and visual system. To better understand the electrophysiological correlates of audiovisual speech perception, the present study used electroencephalography (EEG) and a redundant target paradigm. Postlingually deafened CI users and normal-hearing (NH) listeners were compared in auditory, visual and audiovisual speech conditions. The behavioural results revealed multisensory integration for both groups, as indicated by shortened response times for the audiovisual as compared to the two unisensory conditions. The analysis of the N1 and P2 event-related potentials (ERPs), including topographic and source analyses, confirmed a multisensory effect for both groups and showed a cortical auditory response which was modulated by the simultaneous processing of the visual stimulus. Nevertheless, the CI users in particular revealed a distinct pattern of N1 topography, pointing to a strong visual impact on auditory speech processing. Apart from these condition effects, the results revealed ERP differences between CI users and NH listeners, not only in N1/P2 ERP topographies, but also in the cortical source configuration. When compared to the NH listeners, the CI users showed an additional activation in the visual cortex at N1 latency, which was positively correlated with CI experience, and a delayed auditory-cortex activation with a reversed, rightward functional lateralisation. In sum, our behavioural and ERP findings demonstrate a clear audiovisual benefit for both groups, and a CI-specific alteration in cortical activation at N1 latency when auditory and visual input is combined. These cortical alterations may reflect a compensatory strategy to overcome the limited CI input, which allows CI users to improve their lip-reading skills and to approximate the behavioural performance of NH listeners in audiovisual speech conditions. Our results are clinically relevant, as they highlight the importance of assessing the CI outcome not only in auditory-only, but also in audiovisual speech conditions.
12
Neurocomputational mechanisms underlying cross-modal associations and their influence on perceptual decisions. Neuroimage 2021; 247:118841. [PMID: 34952232] [PMCID: PMC9127393] [DOI: 10.1016/j.neuroimage.2021.118841]
Abstract
When exposed to complementary features of information across sensory modalities, our brains formulate cross-modal associations between features of stimuli presented separately to multiple modalities. For example, auditory pitch-visual size associations map high-pitch tones with small-size visual objects, and low-pitch tones with large-size visual objects. Preferential, or congruent, cross-modal associations have been shown to affect behavioural performance, i.e. choice accuracy and reaction time (RT), across multisensory decision-making paradigms. However, the neural mechanisms underpinning such influences in perceptual decision formation remain unclear. Here, we sought to identify when perceptual improvements from associative congruency emerge in the brain during decision formation. In particular, we asked whether such improvements represent ‘early’ sensory processing benefits, or ‘late’ post-sensory changes in decision dynamics. Using a modified version of the Implicit Association Test (IAT), coupled with electroencephalography (EEG), we measured the neural activity underlying the effect of auditory stimulus-driven pitch-size associations on perceptual decision formation. Behavioural results showed that participants responded significantly faster during trials when auditory pitch was congruent, rather than incongruent, with its associative visual size counterpart. We used multivariate Linear Discriminant Analysis (LDA) to characterise the spatiotemporal dynamics of EEG activity underpinning IAT performance. We found an ‘Early’ component (∼100–110 ms post-stimulus onset) coinciding with the time of maximal discrimination of the auditory stimuli, and a ‘Late’ component (∼330–340 ms post-stimulus onset) underlying IAT performance. To characterise the functional role of these components in decision formation, we incorporated a neurally informed Hierarchical Drift Diffusion Model (HDDM), revealing that the Late component decreased response caution, requiring less sensory evidence to be accumulated, whereas the Early component increased the duration of sensory-encoding processes for incongruent trials. Overall, our results provide a mechanistic insight into the contribution of ‘early’ sensory processing, as well as ‘late’ post-sensory neural representations of associative congruency, to perceptual decision formation.
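The multivariate LDA step could be implemented as time-resolved decoding with scikit-learn, roughly as below; array shapes, labels, and the cross-validation scheme are illustrative assumptions (the HDDM stage is not shown).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 64, 120))  # trials x channels x time points (placeholder EEG)
y = rng.integers(0, 2, 200)          # congruent (0) vs. incongruent (1) trials

# Cross-validated discrimination at each time point
auc = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y,
                    cv=5, scoring="roc_auc").mean()
    for t in range(X.shape[2])
])
peak = auc.argmax()
print(f"peak discrimination at time index {peak}, AUC = {auc[peak]:.2f}")
```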
13
Turoman N, Tivadar RI, Retsa C, Murray MM, Matusz PJ. Towards understanding how we pay attention in naturalistic visual search settings. Neuroimage 2021; 244:118556. [PMID: 34492292] [DOI: 10.1016/j.neuroimage.2021.118556]
Abstract
Research on attentional control has largely focused on single senses and the importance of behavioural goals in controlling attention. However, everyday situations are multisensory and contain regularities, both likely influencing attention. We investigated how visual attentional capture is simultaneously impacted by top-down goals, the multisensory nature of stimuli, and the contextual factors of stimuli's semantic relationship and temporal predictability. Participants performed a multisensory version of the Folk et al. (1992) spatial cueing paradigm, searching for a target of a predefined colour (e.g. a red bar) within an array preceded by a distractor. We manipulated: 1) stimuli's goal-relevance via distractor's colour (matching vs. mismatching the target), 2) stimuli's multisensory nature (colour distractors appearing alone vs. with tones), 3) the relationship between the distractor sound and colour (arbitrary vs. semantically congruent) and 4) the temporal predictability of distractor onset. Reaction-time spatial cueing served as a behavioural measure of attentional selection. We also recorded 129-channel event-related potentials (ERPs), analysing the distractor-elicited N2pc component both canonically and using a multivariate electrical neuroimaging framework. Behaviourally, arbitrary target-matching distractors captured attention more strongly than semantically congruent ones, with no evidence for context modulating multisensory enhancements of capture. Notably, electrical neuroimaging analyses of surface-level EEG revealed context-based influences on attention to both visual and multisensory distractors, in how strongly they activated the brain and in the type of brain networks activated. For both processes, the context-driven brain response modulations occurred long before the N2pc time-window, with topographic (network-based) modulations at ∼30 ms, followed by strength-based modulations at ∼100 ms post-distractor onset. Our results reveal that both stimulus meaning and predictability modulate attentional selection, and they interact while doing so. Meaning, in addition to temporal predictability, is thus a second source of contextual information facilitating goal-directed behaviour. More broadly, in everyday situations, attention is controlled by an interplay between one's goals, stimuli's perceptual salience, meaning and predictability. Our study calls for a revision of attentional control theories to account for the role of contextual and multisensory control.
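The two electrical-neuroimaging measures referred to above, response strength and topographic (network-based) change, are commonly operationalized as global field power (GFP) and global map dissimilarity (DISS); below is a sketch with placeholder ERP arrays.

```python
import numpy as np

rng = np.random.default_rng(7)
erp_cond1 = rng.normal(size=(129, 300))  # channels x time points, condition 1
erp_cond2 = rng.normal(size=(129, 300))  # condition 2

def gfp(v):
    """Global field power: spatial standard deviation across electrodes
    (assumes average-referenced data) at each time point."""
    return v.std(axis=0)

def diss(v1, v2):
    """Global map dissimilarity between GFP-normalized, average-referenced
    maps; 0 means identical topographies."""
    u1 = (v1 - v1.mean(axis=0)) / gfp(v1)
    u2 = (v2 - v2.mean(axis=0)) / gfp(v2)
    return np.sqrt(((u1 - u2) ** 2).mean(axis=0))

strength_diff = gfp(erp_cond1) - gfp(erp_cond2)  # strength-based modulation
topo_diff = diss(erp_cond1, erp_cond2)           # topographic (network) modulation
```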
Affiliation(s)
- Nora Turoman
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; MEDGIFT Lab, Institute of Information Systems, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Techno-Pôle 3, 3960 Sierre, Switzerland; Working Memory, Cognition and Development lab, Department of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Ruxandra I Tivadar
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; Cognitive Computational Neuroscience group, Institute of Computer Science, Faculty of Science, University of Bern, Switzerland
- Chrysa Retsa
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; CIBM Center for Biomedical Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Micah M Murray
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; CIBM Center for Biomedical Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Pawel J Matusz
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; MEDGIFT Lab, Institute of Information Systems, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Techno-Pôle 3, 3960 Sierre, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
14
Brandman T, Avancini C, Leticevscaia O, Peelen MV. Auditory and semantic cues facilitate decoding of visual object category in MEG. Cereb Cortex 2021; 30:597-606. [PMID: 31216008] [DOI: 10.1093/cercor/bhz110]
Abstract
Sounds (e.g., barking) help us to visually identify objects (e.g., a dog) that are distant or ambiguous. While neuroimaging studies have revealed neuroanatomical sites of audiovisual interactions, little is known about the time course by which sounds facilitate visual object processing. Here we used magnetoencephalography to reveal the time course of the facilitatory influence of natural sounds (e.g., barking) on visual object processing and compared this to the facilitatory influence of spoken words (e.g., "dog"). Participants viewed images of blurred objects preceded by a task-irrelevant natural sound, a spoken word, or uninformative noise. A classifier was trained to discriminate multivariate sensor patterns evoked by animate and inanimate intact objects with no sounds, presented in a separate experiment, and tested on sensor patterns evoked by the blurred objects in the 3 auditory conditions. Results revealed that both sounds and words, relative to uninformative noise, significantly facilitated visual object category decoding between 300-500 ms after visual onset. We found no evidence for earlier facilitation by sounds than by words. These findings provide evidence for a semantic route of facilitation by both natural sounds and spoken words, whereby the auditory input first activates semantic object representations, which then modulate the visual processing of objects.
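The cross-decoding scheme described above (train on sensor patterns from intact objects, test on blurred-object patterns at each time point) could look like the following sketch; the classifier choice (logistic regression), array shapes, and labels are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
X_train = rng.normal(size=(300, 306, 200))  # intact-object MEG trials x sensors x time
y_train = rng.integers(0, 2, 300)           # animate (0) vs. inanimate (1)
X_test = rng.normal(size=(150, 306, 200))   # blurred-object trials from the main task
y_test = rng.integers(0, 2, 150)

# Train on intact-object patterns, test on blurred-object patterns, per time point
acc = np.empty(X_train.shape[2])
for t in range(X_train.shape[2]):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_train[:, :, t], y_train)
    acc[t] = clf.score(X_test[:, :, t], y_test)
# Facilitation by sounds/words would appear as higher acc in those auditory
# conditions within the reported 300-500 ms window.
```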
Affiliation(s)
- Talia Brandman
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Chiara Avancini
- Centre for Neuroscience in Education, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Olga Leticevscaia
- Cell and Developmental Biology, University College London, London WC1E 6BT, United Kingdom
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 HR Nijmegen, The Netherlands
15
Turoman N, Tivadar RI, Retsa C, Maillard AM, Scerif G, Matusz PJ. The development of attentional control mechanisms in multisensory environments. Dev Cogn Neurosci 2021; 48:100930. [PMID: 33561691] [PMCID: PMC7873372] [DOI: 10.1016/j.dcn.2021.100930]
Abstract
Outside the laboratory, people need to pay attention to relevant objects that are typically multisensory, but it remains poorly understood how the underlying neurocognitive mechanisms develop. We investigated when adult-like mechanisms controlling one's attentional selection of visual and multisensory objects emerge across childhood. Five-, 7-, and 9-year-olds were compared with adults in their performance on a computer game-like multisensory spatial cueing task, while 129-channel EEG was simultaneously recorded. Markers of attentional control were behavioural spatial cueing effects and the N2pc ERP component (analysed traditionally and using a multivariate electrical neuroimaging framework). In behaviour, adult-like visual attentional control was present from age 7 onwards, whereas multisensory control was absent in all child groups. In EEG, multivariate analyses of the activity over the N2pc time-window revealed stable brain activity patterns in children. Adult-like visual-attentional control EEG patterns were present from age 7 onwards, while multisensory control activity patterns were found in 9-year-olds (although behavioural measures showed no effects). By combining rigorous yet naturalistic paradigms with multivariate signal analyses, we demonstrated that visual attentional control seems to reach an adult-like state at ∼7 years, before adult-like multisensory control, which emerges at ∼9 years. These results enrich our understanding of how attention in naturalistic settings develops.
Affiliation(s)
- Nora Turoman
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, 3960, Switzerland; Working Memory, Cognition and Development lab, Department of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Ruxandra I Tivadar
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Cognitive Computational Neuroscience group, Institute of Computer Science, Faculty of Science, University of Bern, Bern, Switzerland
- Chrysa Retsa
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Anne M Maillard
- Service des Troubles du Spectre de l'Autisme et apparentés, Department of Psychiatry, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Gaia Scerif
- Department of Experimental Psychology, University of Oxford, Oxfordshire, UK
- Pawel J Matusz
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, 3960, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
16
Kaya U, Kafaligonul H. Audiovisual interactions in speeded discrimination of a visual event. Psychophysiology 2021; 58:e13777. [PMID: 33483971] [DOI: 10.1111/psyp.13777]
Abstract
The integration of information from different senses is central to our perception of the external world. Audiovisual interactions have been particularly well studied in this context and various illusions have been developed to demonstrate strong influences of these interactions on the final percept. Using audiovisual paradigms, previous studies have shown that even task-irrelevant information provided by a secondary modality can change the detection and discrimination of a primary target. These modulations have been found to be significantly dependent on the relative timing between auditory and visual stimuli. Although these interactions in time have been commonly reported, we still have a limited understanding of the relationship between the modulations of event-related potentials (ERPs) and final behavioral performance. Here, we aimed to shed light on this important issue by using a speeded discrimination paradigm combined with electroencephalogram (EEG). During the experimental sessions, the timing between an auditory click and a visual flash was varied over a wide range of stimulus onset asynchronies and observers were engaged in speeded discrimination of flash location. Behavioral reaction times were significantly changed by click timing. Furthermore, the modulations of evoked activities over medial parietal/parieto-occipital electrodes were associated with this effect. These modulations were within the 126-176 ms time range and more importantly, they were also correlated with the changes in reaction times. These results provide an important functional link between audiovisual interactions at early stages of sensory processing and reaction times. Together with previous research, they further suggest that early crossmodal interactions play a critical role in perceptual performance.
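The reported ERP-behaviour link could be probed by correlating, across SOA conditions, the mean evoked amplitude in the 126-176 ms window with the mean reaction time; below is a sketch with simulated per-SOA values (the SOA levels and effect shapes are assumptions).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
soas = np.array([-200, -100, -50, 0, 50, 100, 200])  # click-flash SOAs (ms), hypothetical levels

# One value per SOA condition: mean RT and mean ERP amplitude in 126-176 ms
mean_rt = 400 + 0.1 * np.abs(soas) + rng.normal(0, 3, soas.size)
erp_amp = 2.0 - 0.005 * np.abs(soas) + rng.normal(0, 0.1, soas.size)

r, p = stats.pearsonr(erp_amp, mean_rt)
print(f"ERP-RT correlation across SOAs: r = {r:.2f}, p = {p:.3f}")
```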
Affiliation(s)
- Utku Kaya
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey; Informatics Institute, Middle East Technical University, Ankara, Turkey; Department of Anesthesiology, University of Michigan, Ann Arbor, MI, USA
- Hulusi Kafaligonul
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey; Interdisciplinary Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey
17
Zhao S, Feng C, Huang X, Wang Y, Feng W. Neural basis of semantically dependent and independent cross-modal boosts on the attentional blink. Cereb Cortex 2020; 31:2291-2304. [DOI: 10.1093/cercor/bhaa362]
Abstract
The present study recorded event-related potentials (ERPs) in a visual object-recognition task under the attentional blink paradigm to explore the temporal dynamics of the cross-modal boost on attentional blink and whether this auditory benefit would be modulated by semantic congruency between T2 and the simultaneous sound. Behaviorally, the present study showed that not only a semantically congruent but also a semantically incongruent sound improved T2 discrimination during the attentional blink interval, whereas the enhancement was larger for the congruent sound. The ERP results revealed that the behavioral improvements induced by both the semantically congruent and incongruent sounds were closely associated with an early cross-modal interaction on the occipital N195 (192–228 ms). In contrast, the lower T2 accuracy for the incongruent than congruent condition was accompanied by a larger late-occurring centro-parietal N440 (424–448 ms). These findings suggest that the cross-modal boost on attentional blink is hierarchical: the task-irrelevant but simultaneous sound, irrespective of its semantic relevance, first enables T2 to escape the attentional blink via cross-modally strengthening the early stage of visual object-recognition processing, whereas the semantic conflict of the sound begins to interfere with visual awareness only at a later stage, when the representation of the visual object is extracted.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Xinyin Huang
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Yijun Wang
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
18
Noel JP, Bertoni T, Terrebonne E, Pellencin E, Herbelin B, Cascio C, Blanke O, Magosso E, Wallace MT, Serino A. Rapid recalibration of peri-personal space: Psychophysical, electrophysiological, and neural network modeling evidence. Cereb Cortex 2020; 30:5088-5106. [PMID: 32377673] [PMCID: PMC7391419] [DOI: 10.1093/cercor/bhaa103]
Abstract
Interactions between individuals and the environment occur within the peri-personal space (PPS). The encoding of this space plastically adapts to bodily constraints and stimuli features. However, these remapping effects have not been demonstrated on an adaptive time-scale, trial-to-trial. Here, we test this idea first via a visuo-tactile reaction time (RT) paradigm in augmented reality where participants are asked to respond as fast as possible to touch, as visual objects approach them. Results demonstrate that RTs to touch are facilitated as a function of visual proximity, and the sigmoidal function describing this facilitation shifts closer to the body if the immediately preceding trial had indexed a smaller visuo-tactile disparity. Next, we derive the electroencephalographic correlates of PPS and demonstrate that this multisensory measure is equally shaped by recent sensory history. Finally, we demonstrate that a validated neural network model of PPS is able to account for the present results via a simple Hebbian plasticity rule. The present findings suggest that PPS encoding remaps on a very rapid time-scale and, more generally, that it is sensitive to sensory history, a key feature for any process contextualizing subsequent incoming sensory information (e.g., a Bayesian prior).
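The "simple Hebbian plasticity rule" invoked above can be sketched as a weight update on visuo-tactile connections that strengthens when a visual-distance unit and the tactile unit are coactive; all parameters below are illustrative assumptions, not the authors' model.

```python
import numpy as np

n_units = 20               # units tuned to increasing visual distances from the body
w = np.full(n_units, 0.1)  # visuo-tactile synaptic weights
eta = 0.05                 # learning rate

def trial(visual_unit, touch=1.0):
    """One visuo-tactile pairing: Hebbian update, dw = eta * pre * post."""
    pre = np.zeros(n_units)
    pre[visual_unit] = 1.0       # visual (pre-synaptic) activity at this distance
    w[:] += eta * pre * touch    # tactile response is the post-synaptic term

for _ in range(30):
    trial(visual_unit=3)  # repeated near-body pairings strengthen near-space weights
print(w.round(2))
# Stronger weights at recently stimulated distances shift the facilitation
# profile trial-to-trial, as in the rapid recalibration reported above.
```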
Affiliation(s)
- Jean-Paul Noel
- Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Center for Neural Science, New York University, New York City, NY 10003, USA
- Tommaso Bertoni
- MySpace Lab, Department of Clinical Neurosciences, University Hospital of Lausanne, University of Lausanne, Lausanne CH-1011, Switzerland
- Emily Terrebonne
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Elisa Pellencin
- Department of Psychology and Cognitive Science, University of Trento, Rovereto, Trento 38068, Italy
- Bruno Herbelin
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne CH-1015, Switzerland
- Center for Neuroprosthetics, Campus BioTech, Geneva CH-1202, Switzerland
- Carissa Cascio
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Olaf Blanke
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Lausanne CH-1015, Switzerland
- Center for Neuroprosthetics, Campus BioTech, Geneva CH-1202, Switzerland
- Elisa Magosso
- Department of Electrical, Electronic, and Information Engineering "Guglielmo Marconi", University of Bologna, Cesena 40126, Italy
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
- Andrea Serino
- MySpace Lab, Department of Clinical Neurosciences, University Hospital of Lausanne, University of Lausanne, Lausanne CH-1011, Switzerland
19
Kimura A. Cross-modal modulation of cell activity by sound in first-order visual thalamic nucleus. J Comp Neurol 2020; 528:1917-1941. [PMID: 31983057] [DOI: 10.1002/cne.24865]
Abstract
Cross-modal auditory influence on cell activity in the primary visual cortex emerging at short latencies raises the possibility that the first-order visual thalamic nucleus, which is considered dedicated to unimodal visual processing, could contribute to cross-modal sensory processing, as has been indicated in the auditory and somatosensory systems. To test this hypothesis, the effects of sound stimulation on visual cell activity in the dorsal lateral geniculate nucleus were examined in anesthetized rats, using juxta-cellular recording and labeling techniques. Visual responses evoked by light (white LED) were modulated by sound (noise burst) given simultaneously or 50-400 ms after the light, even though sound stimuli alone did not evoke cell activity. Alterations of visual response were observed in 71% of cells (57/80) with regard to response magnitude, latency, and/or burst spiking. Suppression predominated in response magnitude modulation, but de novo responses were also induced by combined stimulation. Sound affected not only onset responses but also late responses. Late responses were modulated by sound given before or after onset responses. Further, visual responses evoked by the second light stimulation of a double flash with a 150-700 ms interval were also modulated by sound given together with the first light stimulation. In morphological analysis of labeled cells, projection cells comparable to X-, Y-, and W-like cells, as well as interneurons, were all susceptible to auditory influence. These findings suggest that the first-order visual thalamic nucleus incorporates auditory influence into parallel and complex thalamic visual processing for cross-modal modulation of visual attention and perception.
Affiliation(s)
- Akihisa Kimura
- Department of Physiology, Wakayama Medical University, Wakayama, Japan
20
Calma-Roddin N, Drury JE. Music, language, and the N400: ERP interference patterns across cognitive domains. Sci Rep 2020; 10:11222. [PMID: 32641708] [PMCID: PMC7343814] [DOI: 10.1038/s41598-020-66732-0]
Abstract
Studies of the relationship of language and music have suggested these two systems may share processing resources involved in the computation/maintenance of abstract hierarchical structure (syntax). One type of evidence comes from ERP interference studies involving concurrent language/music processing showing interaction effects when both processing streams are simultaneously perturbed by violations (e.g., syntactically incorrect words paired with incongruent completion of a chord progression). Here, we employ this interference methodology to target the mechanisms supporting long term memory (LTM) access/retrieval in language and music. We used melody stimuli from previous work showing out-of-key or unexpected notes may elicit a musical analogue of language N400 effects, but only for familiar melodies, and not for unfamiliar ones. Target notes in these melodies were time-locked to visually presented target words in sentence contexts manipulating lexical/conceptual semantic congruity. Our study succeeded in eliciting expected N400 responses from each cognitive domain independently. Among several new findings we argue to be of interest, these data demonstrate that: (i) language N400 effects are delayed in onset by concurrent music processing only when melodies are familiar, and (ii) double violations with familiar melodies (but not with unfamiliar ones) yield a sub-additive N400 response. In addition: (iii) early negativities (RAN effects), which previous work has connected to musical syntax, along with the music N400, were together delayed in onset for familiar melodies relative to the timing of these effects reported in the previous music-only study using these same stimuli, and (iv) double violation cases involving unfamiliar/novel melodies also delayed the RAN effect onset. These patterns constitute the first demonstration of N400 interference effects across these domains and together contribute previously undocumented types of interactions to the available pool of findings relevant to understanding whether language and music may rely on shared underlying mechanisms.
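The sub-additivity claim in (ii) amounts to comparing the double-violation N400 against the sum of the two single-violation effects; below is a sketch of that test with simulated per-subject amplitudes (all values are placeholder assumptions).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n400_lang = rng.normal(-2.0, 0.8, 24)    # language-only violation effect (uV)
n400_music = rng.normal(-1.5, 0.8, 24)   # melody-only violation effect
n400_double = rng.normal(-2.8, 0.9, 24)  # double-violation effect

# Sub-additivity: the double-violation N400 is reliably smaller in magnitude
# than the sum of the two single-violation effects
t, p = stats.ttest_rel(n400_double, n400_lang + n400_music)
print(f"double vs. summed single violations: t = {t:.2f}, p = {p:.3f}")
```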
Affiliation(s)
- Nicole Calma-Roddin
- Department of Behavioral Sciences, New York Institute of Technology, Old Westbury, New York, USA.
- Department of Psychology, Stony Brook University, New York, USA.
- John E Drury
- School of Linguistic Sciences and Arts, Jiangsu Normal University, Xuzhou, China
| |
Collapse
|
21
|
Friedel EBN, Bach M, Heinrich SP. Attentional Interactions Between Vision and Hearing in Event-Related Responses to Crossmodal and Conjunct Oddballs. Multisens Res 2020; 33:251-275. [PMID: 31972541 DOI: 10.1163/22134808-20191329] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Accepted: 10/14/2019] [Indexed: 11/19/2022]
Abstract
Are alternation and co-occurrence of stimuli of different sensory modalities conspicuous? In a novel audio-visual oddball paradigm, the P300 was used as an index of the allocation of attention to investigate stimulus- and task-related interactions between modalities. Specifically, we assessed effects of modality alternation and the salience of conjunct oddball stimuli that were defined by the co-occurrence of both modalities. We presented (a) crossmodal audio-visual oddball sequences, where both oddballs and standards were unimodal, but of a different modality (i.e., visual oddball with auditory standard, or vice versa), and (b) oddball sequences where standards were randomly of either modality while the oddballs were a combination of both modalities (conjunct stimuli). Subjects were instructed to attend to one of the modalities (whether part of a conjunct stimulus or not). In addition, we also tested specific attention to the conjunct stimuli. P300-like responses occurred even when the oddball was of the unattended modality. The pattern of event-related potential (ERP) responses obtained with the two crossmodal oddball sequences switched symmetrically between stimulus modalities when the task modality was switched. Conjunct oddballs elicited no oddball response if only one modality was attended. However, when conjunctness was specifically attended, an oddball response was obtained. Crossmodal oddballs capture sufficient attention even when not attended. Conjunct oddballs, however, are not sufficiently salient to attract attention when the task is unimodal. Even when specifically attended, the processing of conjunctness appears to involve additional steps that delay the oddball response.
Collapse
Affiliation(s)
- Evelyn B N Friedel
- Eye Center, Medical Center, University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany
| | - Michael Bach
- Eye Center, Medical Center, University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany
| | - Sven P Heinrich
- Eye Center, Medical Center, University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany
| |
Collapse
|
22
|
Fleming JT, Noyce AL, Shinn-Cunningham BG. Audio-visual spatial alignment improves integration in the presence of a competing audio-visual stimulus. Neuropsychologia 2020; 146:107530. [PMID: 32574616 DOI: 10.1016/j.neuropsychologia.2020.107530] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2019] [Revised: 06/08/2020] [Accepted: 06/08/2020] [Indexed: 11/26/2022]
Abstract
In order to parse the world around us, we must constantly determine which sensory inputs arise from the same physical source and should therefore be perceptually integrated. Temporal coherence between auditory and visual stimuli drives audio-visual (AV) integration, but the role played by AV spatial alignment is less well understood. Here, we manipulated AV spatial alignment and collected electroencephalography (EEG) data while human subjects performed a free-field variant of the "pip and pop" AV search task. In this paradigm, visual search is aided by a spatially uninformative auditory tone, the onsets of which are synchronized to changes in the visual target. In Experiment 1, tones were either spatially aligned or spatially misaligned with the visual display. Regardless of AV spatial alignment, we replicated the key pip and pop result of improved AV search times. Mirroring the behavioral results, we found an enhancement of early event-related potentials (ERPs), particularly the auditory N1 component, in both AV conditions. We demonstrate that both top-down and bottom-up attention contribute to these N1 enhancements. In Experiment 2, we tested whether spatial alignment influences AV integration in a more challenging context with competing multisensory stimuli. An AV foil was added that visually resembled the target and was synchronized to its own stream of synchronous tones. The visual components of the AV target and AV foil occurred in opposite hemifields; the two auditory components were also in opposite hemifields and were either spatially aligned or spatially misaligned with the visual components to which they were synchronized. Search was fastest when the auditory and visual components of the AV target (and the foil) were spatially aligned. Attention modulated ERPs in both spatial conditions, but importantly, the scalp topography of early evoked responses shifted only when stimulus components were spatially aligned, signaling the recruitment of different neural generators likely related to multisensory integration. These results suggest that AV integration depends on AV spatial alignment when stimuli in both modalities compete for selective integration, a common scenario in real-world perception.
Collapse
Affiliation(s)
- Justin T Fleming
- Speech and Hearing Bioscience and Technology Program, Division of Medical Sciences, Harvard Medical School, Boston, MA, USA
| | - Abigail L Noyce
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
| | | |
Collapse
|
23
|
The interplay between multisensory integration and perceptual decision making. Neuroimage 2020; 222:116970. [PMID: 32454204 DOI: 10.1016/j.neuroimage.2020.116970] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2019] [Revised: 03/23/2020] [Accepted: 05/15/2020] [Indexed: 01/15/2023] Open
Abstract
Facing perceptual uncertainty, the brain combines information from different senses to make optimal perceptual decisions and to guide behavior. However, decision making has been investigated mostly in unimodal contexts. Thus, how the brain integrates multisensory information during decision making is still unclear. Two opposing, but not mutually exclusive, scenarios are plausible: either the brain thoroughly combines the signals from different modalities before starting to build a supramodal decision, or unimodal signals are integrated during decision formation. To answer this question, we devised a paradigm mimicking naturalistic situations where human participants were exposed to continuous cacophonous audiovisual inputs containing an unpredictable signal cue in one or two modalities and had to perform a signal detection task or a cue categorization task. First, model-based analyses of behavioral data indicated that multisensory integration takes place alongside perceptual decision making. Next, using supervised machine learning on concurrently recorded EEG, we identified neural signatures of two processing stages: sensory encoding and decision formation. Generalization analyses across experimental conditions and time revealed that multisensory cues were processed faster during both stages. We further established that acceleration of neural dynamics during sensory encoding and decision formation was directly linked to multisensory integration. Our results were consistent across both signal detection and categorization tasks. Taken together, the results revealed a continuous dynamic interplay between multisensory integration and decision making processes (mixed scenario), with integration of multimodal information taking place both during sensory encoding as well as decision formation.
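The machine-learning approach described here trains classifiers on EEG at each time point and then tests generalization across time and conditions; the sketch below shows only the simpler per-time-point (diagonal) variant with scikit-learn on simulated epochs. The array shapes, classifier, and cross-validation scheme are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def sliding_decoder(X, y, cv=5):
    """Cross-validated decoding accuracy at each time point.

    X: (n_trials, n_channels, n_times) epochs, y: (n_trials,) labels
    """
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return np.array([cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
                     for t in range(X.shape[-1])])

# Toy data: 100 trials, 32 channels, 50 time points; classes separate late
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 32, 50))
y = rng.integers(0, 2, 100)
X[y == 1, :, 30:] += 0.5        # late "decision formation" signal
acc = sliding_decoder(X, y)
print("peak accuracy %.2f at t=%d" % (acc.max(), acc.argmax()))
```

In the full temporal-generalization version, a classifier trained at time t is also tested at every other time t', yielding a train-time by test-time accuracy matrix whose off-diagonal spread indicates how long a neural code is sustained.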
Collapse
|
24
|
Selective attention to sound features mediates cross-modal activation of visual cortices. Neuropsychologia 2020; 144:107498. [PMID: 32442445 DOI: 10.1016/j.neuropsychologia.2020.107498] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Revised: 03/14/2020] [Accepted: 05/12/2020] [Indexed: 11/20/2022]
Abstract
Contemporary schemas of brain organization now include multisensory processes both in low-level cortices as well as at early stages of stimulus processing. Evidence has also accumulated showing that unisensory stimulus processing can result in cross-modal effects. For example, task-irrelevant and lateralised sounds can activate visual cortices; a phenomenon referred to as the auditory-evoked contralateral occipital positivity (ACOP). Some claim this is an example of automatic attentional capture in visual cortices. Other results, however, indicate that context may play a determinant role. Here, we investigated whether selective attention to spatial features of sounds is a determining factor in eliciting the ACOP. We recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to four possible stimulus attributes: location, pitch, speaker identity or syllable. Sound acoustics were held constant, and their location was always equiprobable (50% left, 50% right). The only manipulation was to which sound dimension participants attended. We analysed the AEP data from healthy participants within an electrical neuroimaging framework. The presence of sound-elicited activations of visual cortices depended on the to-be-discriminated, goal-based dimension. The ACOP was elicited only when participants were required to discriminate sound location, but not when they attended to any of the non-spatial features. These results provide a further indication that the ACOP is not automatic. Moreover, our findings showcase the interplay between task-relevance and spatial (un)predictability in determining the presence of the cross-modal activation of visual cortices.
Collapse
|
25
|
Zhao S, Wang Y, Feng C, Feng W. Multiple phases of cross-sensory interactions associated with the audiovisual bounce-inducing effect. Biol Psychol 2019; 149:107805. [PMID: 31689465 DOI: 10.1016/j.biopsycho.2019.107805] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2019] [Revised: 10/15/2019] [Accepted: 10/28/2019] [Indexed: 12/30/2022]
Abstract
Using event-related potential (ERP) recordings, the present study investigated the cross-modal neural activities underlying the audiovisual bounce-inducing effect (ABE) via a novel experimental design wherein the audiovisual bouncing trials were induced solely by the ABE. The within-subject (percept-based) analysis showed that early cross-modal interactions within 100-200 ms after sound onset over fronto-central and occipital regions were associated with the occurrence of the ABE, but the cross-modal interaction at a later latency (ND250, 220-280 ms) over the fronto-central region did not differ between ABE trials and non-ABE trials. The between-subject analysis indicated that the cross-modal interaction revealed by ND250 was larger for subjects who perceived the ABE more frequently. These findings suggest that the ABE is generated as a consequence of the rapid interplay between the variations of early cross-modal interactions and the general multisensory binding predisposition at an individual level.
Collapse
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, SooChow University, Suzhou, Jiangsu, 215123, China
| | - Yajie Wang
- Department of Psychology, School of Education, SooChow University, Suzhou, Jiangsu, 215123, China
| | - Chengzhi Feng
- Department of Psychology, School of Education, SooChow University, Suzhou, Jiangsu, 215123, China.
| | - Wenfeng Feng
- Department of Psychology, School of Education, SooChow University, Suzhou, Jiangsu, 215123, China.
| |
Collapse
|
26
|
Cortical processes underlying the effects of static sound timing on perceived visual speed. Neuroimage 2019; 199:194-205. [DOI: 10.1016/j.neuroimage.2019.05.062] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2019] [Revised: 04/09/2019] [Accepted: 05/24/2019] [Indexed: 01/10/2023] Open
|
27
|
Noel JP, Serino A, Wallace MT. Increased Neural Strength and Reliability to Audiovisual Stimuli at the Boundary of Peripersonal Space. J Cogn Neurosci 2019; 31:1155-1172. [DOI: 10.1162/jocn_a_01334] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
The actionable space surrounding the body, referred to as peripersonal space (PPS), has been the subject of significant interest of late within the broader framework of embodied cognition. Neurophysiological and neuroimaging studies have shown the representation of PPS to be built from visuotactile and audiotactile neurons within a frontoparietal network and whose activity is modulated by the presence of stimuli in proximity to the body. In contrast to single-unit and fMRI studies, an area of inquiry that has received little attention is the EEG characterization associated with PPS processing. Furthermore, although PPS is encoded by multisensory neurons, to date there has been no EEG study systematically examining neural responses to unisensory and multisensory stimuli, as these are presented outside, near, and within the boundary of PPS. Similarly, it remains poorly understood whether multisensory integration is generally more likely at certain spatial locations (e.g., near the body) or whether the cross-modal tactile facilitation that occurs within PPS is simply due to a reduction in the distance between sensory stimuli when close to the body and in line with the spatial principle of multisensory integration. In the current study, to examine the neural dynamics of multisensory processing within and beyond the PPS boundary, we present auditory, visual, and audiovisual stimuli at various distances relative to participants' reaching limit—an approximation of PPS—while recording continuous high-density EEG. We question whether multisensory (vs. unisensory) processing varies as a function of stimulus–observer distance. Results demonstrate a significant increase of global field power (i.e., overall strength of response across the entire electrode montage) for stimuli presented at the PPS boundary—an increase that is largest under multisensory (i.e., audiovisual) conditions. Source localization of the major contributors to this global field power difference suggests neural generators in the intraparietal sulcus and insular cortex, hubs for visuotactile and audiotactile PPS processing. Furthermore, when neural dynamics are examined in more detail, changes in the reliability of evoked potentials in centroparietal electrodes are predictive on a subject-by-subject basis of the later changes in estimated current strength at the intraparietal sulcus linked to stimulus proximity to the PPS boundary. Together, these results provide a previously unrealized view into the neural dynamics and temporal code associated with the encoding of nontactile multisensory stimuli around the PPS boundary.
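Global field power, the headline measure here, is simply the spatial standard deviation across (average-referenced) electrodes at each instant. A minimal sketch on simulated data follows; the topography, channel count, and effect size are assumptions.

```python
import numpy as np

def global_field_power(erp):
    """GFP: standard deviation across electrodes at each time point.

    erp: (n_channels, n_times) average-referenced ERP
    """
    return erp.std(axis=0)

# Toy contrast: a multisensory response with a stronger evoked component
rng = np.random.default_rng(2)
t = np.arange(300)
topo = rng.normal(size=(64, 1))                  # fixed scalp topography
signal = np.sin(t / 20.0)
uni = rng.normal(0, 1, (64, 300)) + topo * signal            # unisensory
multi = rng.normal(0, 1, (64, 300)) + 1.5 * topo * signal    # audiovisual
print("mean GFP uni: %.2f  multi: %.2f"
      % (global_field_power(uni).mean(), global_field_power(multi).mean()))
```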
Collapse
Affiliation(s)
| | - Andrea Serino
- University of Lausanne
- Ecole Polytechnique Federale de Lausanne
| | | |
Collapse
|
28
|
Noel JP, Chatelle C, Perdikis S, Jöhr J, Lopes Da Silva M, Ryvlin P, De Lucia M, Millán JDR, Diserens K, Serino A. Peri-personal space encoding in patients with disorders of consciousness and cognitive-motor dissociation. NEUROIMAGE-CLINICAL 2019; 24:101940. [PMID: 31357147 PMCID: PMC6664240 DOI: 10.1016/j.nicl.2019.101940] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/03/2019] [Revised: 07/13/2019] [Accepted: 07/17/2019] [Indexed: 01/06/2023]
Abstract
Behavioral assessments of consciousness based on overt command following cannot differentiate patients with disorders of consciousness (DOC) from those who demonstrate a dissociation between intent/awareness and motor capacity: cognitive motor dissociation (CMD). We argue that delineation of peri-personal space (PPS) – the multisensory-motor space immediately surrounding the body – may differentiate these patients due to its central role in mediating human-environment interactions, and putatively in scaffolding a minimal form of selfhood. In Experiment 1, we determined a normative physiological index of PPS by recording electrophysiological (EEG) responses to tactile, auditory, or audio-tactile stimulation at different distances (5 vs. 75 cm) in healthy volunteers (N = 19). Contrasts between paired (AT) and summed (A + T) responses demonstrated multisensory supra-additivity when AT stimuli were presented near, i.e., within the PPS, and highlighted somatosensory-motor sensors as electrodes of interest. In Experiment 2, we recorded EEG in patients behaviorally diagnosed as DOC or putative CMD (N = 17, 30 sessions). The PPS-measure developed in Experiment 1 was analyzed in relation to both standard clinical diagnosis (i.e., Coma Recovery Scale; CRS-R) and a measure of neural complexity associated with consciousness. Results demonstrated a significant correlation between the PPS measure and neural complexity, but not with the CRS-R, highlighting the added value of the physiological recordings. Further, multisensory processing in PPS was preserved in putative CMD but not in DOC patients. Together, the findings suggest that indexing PPS allows differentiating between groups of patients who both show overt motor impairments (DOC and CMD) but putatively distinct levels of awareness or motor intent. Behavioral assessments confound consciousness and motor output. We suggest that multisensory coding of actionable space may dissociate these two. We develop an electrophysiological marker of peri-personal space. We then use this marker to distinguish impairments in consciousness from impairments in motor output.
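The paired-versus-summed contrast (AT vs. A + T) is the classic additive-model test for supra-additivity. A minimal sketch of that contrast on simulated single-electrode epochs follows; the window, amplitudes, and variable names are assumptions, not the study's parameters.

```python
import numpy as np

def additivity_contrast(at, a, t_, window):
    """Paired-minus-summed ERP contrast; positive values = supra-additive.

    at, a, t_: (n_trials, n_times) epochs for AT, A-only, and T-only
    window: boolean mask over time points of interest
    """
    paired = at.mean(axis=0)[window].mean()
    summed = (a.mean(axis=0) + t_.mean(axis=0))[window].mean()
    return paired - summed

rng = np.random.default_rng(3)
n, times = 80, np.arange(-0.1, 0.5, 0.002)
w = (times >= 0.1) & (times <= 0.2)
bump = np.exp(-((times - 0.15) ** 2) / 0.001)
a  = rng.normal(0, 1, (n, times.size)) + 1.0 * bump
t_ = rng.normal(0, 1, (n, times.size)) + 1.0 * bump
at = rng.normal(0, 1, (n, times.size)) + 2.6 * bump   # exceeds 1.0 + 1.0
print("supra-additivity index: %.2f" % additivity_contrast(at, a, t_, w))
```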
Collapse
Affiliation(s)
- Jean-Paul Noel
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
| | - Camille Chatelle
- Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA, USA; Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Coma Science Group, GIGA Consciousness, University and University Hospital of Liège, Liège, Belgium
| | - Serafeim Perdikis
- Center for Neuroprosthetics, School of Engineering, Ecole Polytechnique Federale de Lausanne (EPFL), Geneva, Switzerland; Brain-Computer Interfaces and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, UK
| | - Jane Jöhr
- Acute Neurorehabilitation Unit, Neurology, Department of and Clinical Neurosciences, University Hospital of Lausanne, Lausanne, Switzerland
| | - Marina Lopes Da Silva
- Acute Neurorehabilitation Unit, Neurology, Department of and Clinical Neurosciences, University Hospital of Lausanne, Lausanne, Switzerland
| | - Philippe Ryvlin
- Acute Neurorehabilitation Unit, Neurology, Department of and Clinical Neurosciences, University Hospital of Lausanne, Lausanne, Switzerland
| | - Marzia De Lucia
- Laboratoire de Recherche en Neuroimagerie, Department of Clinical Neurosciences, University Hospital of Lausanne, University of Lausanne, Lausanne, Switzerland
| | - José Del R Millán
- Center for Neuroprosthetics, School of Engineering, Ecole Polytechnique Federale de Lausanne (EPFL), Geneva, Switzerland
| | - Karin Diserens
- Acute Neurorehabilitation Unit, Neurology, Department of and Clinical Neurosciences, University Hospital of Lausanne, Lausanne, Switzerland.
| | - Andrea Serino
- MySpace Lab, Department of Clinical Neurosciences, University Hospital of Lausanne, University of Lausanne, Lausanne, Switzerland.
| |
Collapse
|
29
|
Quercia P, Pozzo T, Marino A, Guillemant AL, Cappe C, Gueugneau N. Alteration in binocular fusion modifies audiovisual integration in children. Clin Ophthalmol 2019; 13:1137-1145. [PMID: 31308621 PMCID: PMC6613607 DOI: 10.2147/opth.s201747] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2019] [Accepted: 05/08/2019] [Indexed: 11/27/2022] Open
Abstract
Background: In the field of multisensory integration, vision is generally thought to dominate audiovisual interactions, at least in spatial tasks, but the role of binocular fusion in audiovisual integration has not yet been studied. Methods: Using the Maddox test, a classical ophthalmological test used to subjectively detect a latent unilateral eye deviation, we checked whether an alteration in binocular vision in young patients would be able to change audiovisual integration. The study was performed on a group of ten children (five males and five females aged 11.3±1.6 years) with normal binocular vision, and revealed a visual phenomenon consisting of stochastic disappearance of part of a visual scene caused by auditory stimulation. Results: Indeed, during the Maddox test, brief sounds induced transient visual scotomas (VSs) in the visual field of the eye in front of which the Maddox rod was placed. We found a significant correlation between the modification of binocular vision and VS occurrence. No significant difference was detected in the percentage or location of VS occurrence between the right and left eye using the Maddox rod test or between sound frequencies. Conclusion: The results indicate a specific role of the oculomotor system in audiovisual integration in children. This convenient protocol may also have significant interest for clinical investigations of developmental pathologies where relationships between vision and hearing are specifically affected.
Collapse
Affiliation(s)
- P Quercia
- INSERM Unit 1093, Cognition-Action-Plasticité Sensorimotrice, University of Burgundy-Franche Comté, Dijon 21078, France
| | - T Pozzo
- IIT@UniFe Center for Translational Neurophysiology, Istituto Italiano di Tecnologia, Ferrara, Italy
| | - A Marino
- Private office, Vicenza 36100, Italy
| | - A L Guillemant
- INSERM Unit 1093, Cognition-Action-Plasticité Sensorimotrice, University of Burgundy-Franche Comté, Dijon 21078, France
| | - C Cappe
- Brain and Cognition Research Center, CerCo, Toulouse, France
| | - N Gueugneau
- INSERM Unit 1093, Cognition-Action-Plasticité Sensorimotrice, University of Burgundy-Franche Comté, Dijon 21078, France
| |
Collapse
|
30
|
Domínguez-Borràs J, Guex R, Méndez-Bértolo C, Legendre G, Spinelli L, Moratti S, Frühholz S, Mégevand P, Arnal L, Strange B, Seeck M, Vuilleumier P. Human amygdala response to unisensory and multisensory emotion input: No evidence for superadditivity from intracranial recordings. Neuropsychologia 2019; 131:9-24. [PMID: 31158367 DOI: 10.1016/j.neuropsychologia.2019.05.027] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2018] [Revised: 05/15/2019] [Accepted: 05/28/2019] [Indexed: 12/14/2022]
Abstract
The amygdala is crucially implicated in processing emotional information from various sensory modalities. However, there is a dearth of knowledge concerning the integration and relative time-course of its responses across different channels, i.e., for auditory, visual, and audiovisual input. Functional neuroimaging data in humans point to a possible role of this region in the multimodal integration of emotional signals, but direct evidence for anatomical and temporal overlap of unisensory and multisensory-evoked responses in amygdala is still lacking. We recorded event-related potentials (ERPs) and oscillatory activity from 9 amygdalae using intracranial electroencephalography (iEEG) in patients prior to epilepsy surgery, and compared electrophysiological responses to fearful, happy, or neutral stimuli presented either in voices alone, faces alone, or voices and faces simultaneously delivered. Results showed differential amygdala responses to fearful stimuli, in comparison to neutral, reaching significance 100-200 ms post-onset for auditory, visual and audiovisual stimuli. At later latencies, ∼400 ms post-onset, amygdala response to audiovisual information was also amplified in comparison to auditory or visual stimuli alone. Importantly, however, we found no evidence for either super- or subadditivity effects in any of the bimodal responses. These results suggest, first, that emotion processing in amygdala occurs at globally similar early stages of perceptual processing for auditory, visual, and audiovisual inputs; second, that overall larger responses to multisensory information occur at later stages only; and third, that the underlying mechanisms of this multisensory gain may reflect a purely additive response to concomitant visual and auditory inputs. Our findings provide novel insights on emotion processing across the sensory pathways, and their convergence within the limbic system.
Collapse
Affiliation(s)
- Judith Domínguez-Borràs
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland.
| | - Raphaël Guex
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland.
| | | | - Guillaume Legendre
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
| | - Laurent Spinelli
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland.
| | - Stephan Moratti
- Department of Experimental Psychology, Complutense University of Madrid, Spain; Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain.
| | - Sascha Frühholz
- Department of Psychology, University of Zurich, Switzerland.
| | - Pierre Mégevand
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
| | - Luc Arnal
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
| | - Bryan Strange
- Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain; Department of Neuroimaging, Alzheimer's Disease Research Centre, Reina Sofia-CIEN Foundation, Madrid, Spain.
| | - Margitta Seeck
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland.
| | - Patrik Vuilleumier
- Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland.
| |
Collapse
|
31
|
Galindo-Leon EE, Stitt I, Pieper F, Stieglitz T, Engler G, Engel AK. Context-specific modulation of intrinsic coupling modes shapes multisensory processing. SCIENCE ADVANCES 2019; 5:eaar7633. [PMID: 30989107 PMCID: PMC6457939 DOI: 10.1126/sciadv.aar7633] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/17/2017] [Accepted: 02/14/2019] [Indexed: 06/05/2023]
Abstract
Intrinsically generated patterns of coupled neuronal activity are associated with the dynamics of specific brain states. Sensory inputs are extrinsic factors that can perturb these intrinsic coupling modes, creating a complex scenario in which forthcoming stimuli are processed. Studying this intrinsic-extrinsic interplay is necessary to better understand perceptual integration and selection. Here, we show that this interplay leads to a reconfiguration of functional cortical connectivity that acts as a mechanism to facilitate stimulus processing. Using audiovisual stimulation in anesthetized ferrets, we found that this reconfiguration of coupling modes is context specific, depending on long-term modulation by repetitive sensory inputs. These reconfigured coupling modes lead to changes in latencies and power of local field potential responses that support multisensory integration. Our study demonstrates that this interplay extends across multiple time scales and involves different types of intrinsic coupling. These results suggest a previously unknown large-scale mechanism that facilitates multisensory integration.
Collapse
Affiliation(s)
- Edgar E. Galindo-Leon
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| | - Iain Stitt
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| | - Florian Pieper
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| | - Thomas Stieglitz
- Department of Microsystems Engineering, University of Freiburg, 79110 Freiburg, Germany
| | - Gerhard Engler
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| | - Andreas K. Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| |
Collapse
|
32
|
Van der Stoep N, Van der Stigchel S, Van Engelen RC, Biesbroek JM, Nijboer TCW. Impairments in Multisensory Integration after Stroke. J Cogn Neurosci 2019; 31:885-899. [PMID: 30883294 DOI: 10.1162/jocn_a_01389] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
The integration of information from multiple senses leads to a plethora of behavioral benefits, most predominantly to faster and better detection, localization, and identification of events in the environment. Although previous studies of multisensory integration (MSI) in humans have provided insights into the neural underpinnings of MSI, studies of MSI at a behavioral level in individuals with brain damage are scarce. Here, a well-known psychophysical paradigm (the redundant target paradigm) was employed to quantify MSI in a group of stroke patients. The relation between MSI and lesion location was analyzed using lesion subtraction analysis. Twenty-one patients with ischemic infarctions and 14 healthy control participants responded to auditory, visual, and audiovisual targets in the left and right visual hemifield. Responses to audiovisual targets were faster than to unisensory targets. This could be due to MSI or statistical facilitation. Comparing the audiovisual RTs to the winner of a race between unisensory signals allowed us to determine whether participants could integrate auditory and visual information. The results indicated that (1) 33% of the patients showed an impairment in MSI; (2) patients with MSI impairment had left hemisphere and brainstem/cerebellar lesions; and (3) the left caudate, left pallidum, left putamen, left thalamus, left insula, left postcentral and precentral gyrus, left central opercular cortex, left amygdala, and left OFC were more often damaged in patients with MSI impairments. These results are the first to demonstrate the impact of brain damage on MSI in stroke patients using a well-established psychophysical paradigm.
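The "winner of a race" comparison described here corresponds to testing the race-model (Miller) inequality, F_AV(t) ≤ F_A(t) + F_V(t): redundant-target RTs faster than this bound cannot be explained by statistical facilitation alone. Below is a hedged sketch of that test on simulated RTs; the quantile grid and distributions are assumptions, and the authors' exact procedure may differ.

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av,
                         quantiles=np.linspace(0.05, 0.95, 19)):
    """Race-model test from RT samples (seconds). Positive values anywhere
    indicate AV responses faster than the race-model upper bound."""
    t = np.quantile(rt_av, quantiles)                   # probe times
    def ecdf(rts):
        return np.searchsorted(np.sort(rts), t, side="right") / len(rts)
    bound = np.minimum(ecdf(rt_a) + ecdf(rt_v), 1.0)    # Miller's bound
    return ecdf(rt_av) - bound

rng = np.random.default_rng(4)
rt_a = rng.normal(0.42, 0.06, 200)
rt_v = rng.normal(0.40, 0.06, 200)
rt_av = rng.normal(0.33, 0.05, 200)     # faster than either modality alone
viol = race_model_violation(rt_a, rt_v, rt_av)
print("max violation: %.3f" % viol.max())
```

A patient showing no violation at any quantile would be classified as relying on statistical facilitation rather than genuine integration, which is the logic behind the impairment criterion in this study.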
Collapse
Affiliation(s)
| | | | | | | | - Tanja C W Nijboer
- Helmholtz Institute, Utrecht University; Brain Center Rudolph Magnus, University Medical Center, Utrecht University; Center for Brain Rehabilitation Medicine, Utrecht Medical Center, Utrecht University
| |
Collapse
|
33
|
Abstract
Real-world environments are typically dynamic, complex, and multisensory in nature and require the support of top-down attention and memory mechanisms for us to be able to drive a car, make a shopping list, or pour a cup of coffee. Fundamental principles of perception and functional brain organization have been established by research utilizing well-controlled but simplified paradigms with basic stimuli. The last 30 years ushered in a revolution in computational power, brain mapping, and signal processing techniques. Drawing on those theoretical and methodological advances, over the years, research has departed more and more from traditional, rigorous, and well-understood paradigms to directly investigate cognitive functions and their underlying brain mechanisms in real-world environments. These investigations typically address the role of one or, more recently, multiple attributes of real-world environments. Fundamental assumptions about perception, attention, or brain functional organization have been challenged by studies adapting the traditional paradigms to emulate, for example, the multisensory nature or varying relevance of stimulation or dynamically changing task demands. Here, we present the state of the field within the emerging heterogeneous domain of real-world neuroscience. To be precise, the aim of this Special Focus is to bring together a variety of the emerging "real-world neuroscientific" approaches. These approaches differ in their principal aims, assumptions, or even definitions of "real-world neuroscience" research. Here, we showcase the commonalities and distinctive features of the different "real-world neuroscience" approaches. To do so, four early-career researchers and the speakers of the Cognitive Neuroscience Society 2017 Meeting symposium under the same title answer questions pertaining to the added value of such approaches in bringing us closer to accurate models of functional brain organization and cognitive functions.
Collapse
Affiliation(s)
- Pawel J Matusz
- University Hospital Center and University of Lausanne
- University of Applied Sciences Western Switzerland (HES SO Valais)
| | | | | | | |
Collapse
|
34
|
Stronger responses in the visual cortex of sighted compared to blind individuals during auditory space representation. Sci Rep 2019; 9:1935. [PMID: 30760758 PMCID: PMC6374481 DOI: 10.1038/s41598-018-37821-y] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2017] [Accepted: 12/11/2018] [Indexed: 01/02/2023] Open
Abstract
It has been previously shown that the interaction between vision and audition involves early sensory cortices. However, the functional role of these interactions and their modulation due to sensory impairment is not yet understood. To shed light on the impact of vision on auditory spatial processing, we recorded ERPs and collected psychophysical responses during space and time bisection tasks in sighted and blind participants. They listened to three consecutive sounds and judged whether the second sound was either spatially or temporally further from the first or the third sound. We demonstrate that spatial metric representation of sounds elicits an early response of the visual cortex (P70) which is different between sighted and visually deprived individuals. Indeed, only in sighted and not in blind people P70 is strongly selective for the spatial position of sounds, mimicking many aspects of the visual-evoked C1. These results suggest that early auditory processing associated with the construction of spatial maps is mediated by visual experience. The lack of vision might impair the projection of multi-sensory maps on the retinotopic maps used by the visual cortex.
Collapse
|
35
|
Xu W, Kolozsvári OB, Oostenveld R, Leppänen PHT, Hämäläinen JA. Audiovisual Processing of Chinese Characters Elicits Suppression and Congruency Effects in MEG. Front Hum Neurosci 2019; 13:18. [PMID: 30787872 PMCID: PMC6372538 DOI: 10.3389/fnhum.2019.00018] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2018] [Accepted: 01/16/2019] [Indexed: 11/13/2022] Open
Abstract
Learning to associate written letters/characters with speech sounds is crucial for reading acquisition. Most previous studies have focused on audiovisual integration in alphabetic languages. Less is known about logographic languages such as Chinese characters, which map onto mostly syllable-based morphemes in the spoken language. Here we investigated how long-term exposure to native language affects the underlying neural mechanisms of audiovisual integration in a logographic language using magnetoencephalography (MEG). MEG sensor and source data from 12 adult native Chinese speakers and a control group of 13 adult Finnish speakers were analyzed for audiovisual suppression (bimodal responses vs. sum of unimodal responses) and congruency (bimodal incongruent responses vs. bimodal congruent responses) effects. The suppressive integration effect was found in the left angular and supramarginal gyri (205-365 ms), left inferior frontal and left temporal cortices (575-800 ms) in the Chinese group. The Finnish group showed a distinct suppression effect only in the right parietal and occipital cortices at a relatively early time window (285-460 ms). The congruency effect was only observed in the Chinese group in left inferior frontal and superior temporal cortex in a late time window (about 500-800 ms) probably related to modulatory feedback from multi-sensory regions and semantic processing. The audiovisual integration in a logographic language showed a clear resemblance to that in alphabetic languages in the left superior temporal cortex, but with activation specific to the logographic stimuli observed in the left inferior frontal cortex. The current MEG study indicated that learning of logographic languages has a large impact on the audiovisual integration of written characters with some distinct features compared to previous results on alphabetic languages.
Collapse
Affiliation(s)
- Weiyong Xu
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
| | - Orsolya Beatrix Kolozsvári
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
| | - Robert Oostenveld
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- NatMEG, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - Paavo Herman Tapio Leppänen
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
| | - Jarmo Arvid Hämäläinen
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Jyväskylä Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
| |
Collapse
|
36
|
Caron-Desrochers L, Schönwiesner M, Focke K, Lehmann A. Assessing visual modulation along the human subcortical auditory pathway. Neurosci Lett 2018; 685:12-17. [PMID: 30009874 DOI: 10.1016/j.neulet.2018.07.020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2017] [Revised: 07/10/2018] [Accepted: 07/12/2018] [Indexed: 11/17/2022]
Abstract
Experience of the world is inherently multisensory. It has been suggested that audiovisual modulation occurs as early as subcortical auditory stages. However, this was based on the frequency-following response, a measure recently found to be significantly generated from cortical sources. It therefore remains unclear whether subcortical auditory processing can indeed be modulated by visual information. We aimed to trace visual modulation along the auditory pathway by comparing auditory brainstem response (ABR) and middle-latency response (MLR) between unimodal auditory and multimodal audiovisual conditions. EEG activity was recorded while participants attended auditory clicks and visual flashes, either synchronous or asynchronous. No differences between auditory and audiovisual responses were observed at ABR or MLR levels. This suggests that ascending auditory processing is not modulated by visual cues at subcortical levels, at least for rudimentary stimuli. Multimodal modulation in the auditory brainstem observed in previous studies might therefore originate from cortical sources and top-down processes. More studies are needed to further disentangle subcortical and cortical influences on audiovisual modulation along the auditory pathway.
Collapse
Affiliation(s)
- Laura Caron-Desrochers
- International Laboratory for Brain, Music and Sound Research, Department of Psychology, University of Montreal, Montreal, Canada; Center for Research on Brain, Language and Music, Montreal, Canada.
| | - Marc Schönwiesner
- International Laboratory for Brain, Music and Sound Research, Department of Psychology, University of Montreal, Montreal, Canada; Center for Research on Brain, Language and Music, Montreal, Canada; Department of Biology, University of Leipzig, Leipzig, Germany
| | - Kristin Focke
- International Laboratory for Brain, Music and Sound Research, Department of Psychology, University of Montreal, Montreal, Canada; Center for Research on Brain, Language and Music, Montreal, Canada
| | - Alexandre Lehmann
- International Laboratory for Brain, Music and Sound Research, Department of Psychology, University of Montreal, Montreal, Canada; Center for Research on Brain, Language and Music, Montreal, Canada; Department of Otolaryngology Head and Neck Surgery, McGill University, Canada
| |
Collapse
|
37
|
Zhao S, Wang Y, Xu H, Feng C, Feng W. Early cross-modal interactions underlie the audiovisual bounce-inducing effect. Neuroimage 2018; 174:208-218. [PMID: 29567502 DOI: 10.1016/j.neuroimage.2018.03.036] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2017] [Revised: 02/14/2018] [Accepted: 03/17/2018] [Indexed: 11/15/2022] Open
Abstract
Two identical visual disks moving towards one another on a two-dimensional display can be perceived as either "streaming through" or "bouncing off" each other after their coincidence/overlapping. A brief sound presented at the moment of the coincidence of the disks could strikingly bias the percept towards bouncing, which was termed the audiovisual bounce-inducing effect (ABE). Although the ABE has been studied intensively since its discovery, the debate about its origin remains unresolved. The present study used event-related potential (ERP) recordings to investigate whether or not early neural activities associated with cross-modal interactions play a role in the ABE. The results showed that the fronto-central P2 component ∼200 ms before the coincidence of the disks was predictive of the subsequent streaming or bouncing percept in the unimodal visual display but not in the auditory-visual display. More importantly, the cross-modal interactions revealed by the fronto-central positivity PD170 (125-175 ms after sound onset), as well as the occipital positivity PD190 (180-200 ms), were substantially enhanced on bouncing trials compared to streaming trials in the auditory-visual display. These findings provide direct electrophysiological evidence that early cross-modal interactions contribute to the origin of the ABE phenomenon at the perceptual stage of processing.
Collapse
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, SooChow University, Suzhou, Jiangsu, 215123, China
| | - Yajie Wang
- Department of Psychology, School of Education, SooChow University, Suzhou, Jiangsu, 215123, China
| | - Hongyuan Xu
- Department of Psychology, School of Education, SooChow University, Suzhou, Jiangsu, 215123, China
| | - Chengzhi Feng
- Department of Psychology, School of Education, SooChow University, Suzhou, Jiangsu, 215123, China
| | - Wenfeng Feng
- Department of Psychology, School of Education, SooChow University, Suzhou, Jiangsu, 215123, China.
| |
Collapse
|
38
|
Noel JP, Simon D, Thelen A, Maier A, Blake R, Wallace MT. Probing Electrophysiological Indices of Perceptual Awareness across Unisensory and Multisensory Modalities. J Cogn Neurosci 2018; 30:814-828. [PMID: 29488853 PMCID: PMC10804124 DOI: 10.1162/jocn_a_01247] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2024]
Abstract
The neural underpinnings of perceptual awareness have been extensively studied using unisensory (e.g., visual alone) stimuli. However, perception is generally multisensory, and it is unclear whether the neural architecture uncovered in these studies directly translates to the multisensory domain. Here, we use EEG to examine brain responses associated with the processing of visual, auditory, and audiovisual stimuli presented near threshold levels of detectability, with the aim of deciphering similarities and differences in the neural signals indexing the transition into perceptual awareness across vision, audition, and combined visual-auditory (multisensory) processing. More specifically, we examine (1) the presence of late evoked potentials (∼>300 msec), (2) the across-trial reproducibility, and (3) the evoked complexity associated with perceived versus nonperceived stimuli. Results reveal that, although perceived stimuli are associated with the presence of late evoked potentials across each of the examined sensory modalities, between-trial variability and EEG complexity differed for unisensory versus multisensory conditions. Whereas across-trial variability and complexity differed for perceived versus nonperceived stimuli in the visual and auditory conditions, this was not the case for the multisensory condition. Taken together, these results suggest that there are fundamental differences in the neural correlates of perceptual awareness for unisensory versus multisensory stimuli. Specifically, the work argues that the presence of late evoked potentials, as opposed to neural reproducibility or complexity, most closely tracks perceptual awareness regardless of the nature of the sensory stimulus. In addition, the current findings suggest a greater similarity between the neural correlates of perceptual awareness of unisensory (visual and auditory) stimuli when compared with multisensory stimuli.
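The abstract does not name its complexity metric; one standard choice for "EEG complexity" in the awareness literature is Lempel-Ziv complexity of a binarized signal. Purely as an illustration under that assumption, here is a minimal sketch (the median binarization and LZ78-style phrase count are simplifications, not the authors' exact measure).

```python
import numpy as np

def lz_phrase_count(bits):
    """LZ78-style parsing: number of distinct phrases in a binary string
    (a simple proxy for Lempel-Ziv complexity)."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def signal_complexity(x):
    """Binarize a 1-D signal at its median, then count LZ phrases."""
    med = np.median(x)
    bits = "".join("1" if v > med else "0" for v in x)
    return lz_phrase_count(bits)

rng = np.random.default_rng(5)
regular = np.sin(np.arange(500) / 5.0)     # structured: low complexity
noisy = rng.normal(size=500)               # irregular: high complexity
print(signal_complexity(regular), signal_complexity(noisy))
```

Under the paper's account, such a complexity index would separate perceived from nonperceived trials in the unisensory conditions but not in the multisensory condition.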
Collapse
Affiliation(s)
- Jean-Paul Noel
- Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
| | - David Simon
- Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
| | - Antonia Thelen
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
| | - Alexander Maier
- Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
| | - Randolph Blake
- Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
| | - Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN 37235, USA
- Department of Psychology, Vanderbilt University, Nashville, TN 37235, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Department of Psychiatry, Vanderbilt University Medical Center, Nashville, TN 37235, USA
| |
Collapse
|
39
|
Henschke JU, Ohl FW, Budinger E. Crossmodal Connections of Primary Sensory Cortices Largely Vanish During Normal Aging. Front Aging Neurosci 2018; 10:52. [PMID: 29551970 PMCID: PMC5840148 DOI: 10.3389/fnagi.2018.00052] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2017] [Accepted: 02/15/2018] [Indexed: 11/22/2022] Open
Abstract
During aging, human response times (RTs) to unisensory and crossmodal stimuli increase. However, the elderly benefit more from crossmodal stimulus representations than younger people. The underlying short-latency multisensory integration process is mediated by direct crossmodal connections at the level of primary sensory cortices. We investigate the age-related changes of these connections using a rodent model (Mongolian gerbil), retrograde tracer injections into the primary auditory (A1), somatosensory (S1), and visual cortex (V1), and immunohistochemistry for markers of apoptosis (Caspase-3), axonal plasticity (growth-associated protein 43, GAP-43), and a calcium-binding protein (Parvalbumin, PV). In adult animals, primary sensory cortices receive a substantial number of direct thalamic inputs from nuclei of their matched, but also from nuclei of non-matched sensory modalities. There are also direct intracortical connections among primary sensory cortices and connections with secondary sensory cortices of other modalities. In very old animals, the crossmodal connections strongly decrease in number or vanish entirely. This is likely due to a retraction of the projection neuron axonal branches rather than ongoing programmed cell death. The loss of crossmodal connections is also accompanied by changes in anatomical correlates of inhibition and excitation in the sensory thalamus and cortex. Together, the loss and restructuring of crossmodal connections during aging suggest a shift of multisensory processing from primary cortices towards other sensory brain areas in elderly individuals.
Collapse
Affiliation(s)
- Julia U Henschke
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Department Genetics, Leibniz Institute for Neurobiology, Magdeburg, Germany; German Center for Neurodegenerative Diseases within the Helmholtz Association, Magdeburg, Germany; Institute of Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
| | - Frank W Ohl
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Institute of Biology, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
| | - Eike Budinger
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
| |
Collapse
|
40
|
Simon DM, Wallace MT. Integration and Temporal Processing of Asynchronous Audiovisual Speech. J Cogn Neurosci 2018; 30:319-337. [DOI: 10.1162/jocn_a_01205] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Multisensory integration of visual mouth movements with auditory speech is known to offer substantial perceptual benefits, particularly under challenging (i.e., noisy) acoustic conditions. Previous work characterizing this process has found that ERPs to auditory speech are of shorter latency and smaller magnitude in the presence of visual speech. We sought to determine the dependency of these effects on the temporal relationship between the auditory and visual speech streams using EEG. We found that reductions in ERP latency and suppression of ERP amplitude are maximal when the visual signal precedes the auditory signal by a small interval and that increasing amounts of asynchrony reduce these effects in a continuous manner. Time–frequency analysis revealed that these effects are found primarily in the theta (4–8 Hz) and alpha (8–12 Hz) bands, with a central topography consistent with auditory generators. Theta effects also persisted in the lower portion of the band (3.5–5 Hz), and this late activity was more frontally distributed. Importantly, the magnitude of these late theta oscillations not only differed with the temporal characteristics of the stimuli but also served to predict participants' task performance. Our analysis thus reveals that suppression of single-trial brain responses by visual speech depends strongly on the temporal concordance of the auditory and visual inputs. It further illustrates that processes in the lower theta band, which we suggest as an index of incongruity processing, might serve to reflect the neural correlates of individual differences in multisensory temporal perception.
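The theta- and alpha-band results here rest on time-frequency decomposition of single trials. Below is a hedged sketch of complex Morlet-wavelet power in NumPy/SciPy; the frequency grid, cycle count, and simulated data are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.signal import fftconvolve

def morlet_power(x, fs, freqs, n_cycles=6.0):
    """Single-trial time-frequency power via complex Morlet convolution.

    x: 1-D signal, fs: sampling rate (Hz), freqs: frequencies of interest (Hz)
    Returns a (n_freqs, n_times) power array.
    """
    power = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)          # wavelet width (s)
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wav = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wav /= np.sqrt(np.sum(np.abs(wav) ** 2))      # energy-normalize
        power[i] = np.abs(fftconvolve(x, wav, mode="same")) ** 2
    return power

fs = 250
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 6 * t) \
    + 0.5 * np.random.default_rng(8).normal(size=t.size)  # 6 Hz "theta"
freqs = np.arange(3.5, 13, 0.5)                            # theta to alpha
power = morlet_power(x, fs, freqs)
print(power.shape, "peak freq:", freqs[power.mean(axis=1).argmax()])
```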
Collapse
|
41
|
Murray MM, Thelen A, Ionta S, Wallace MT. Contributions of Intraindividual and Interindividual Differences to Multisensory Processes. J Cogn Neurosci 2018; 31:360-376. [PMID: 29488852 DOI: 10.1162/jocn_a_01246] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Most evidence on the neural and perceptual correlates of sensory processing derives from studies that have focused on only a single sensory modality and averaged the data from groups of participants. Although valuable, such studies ignore the substantial interindividual and intraindividual differences that are undoubtedly at play. Such variability plays an integral role in both the behavioral/perceptual realms and in the neural correlates of these processes, but substantially less is known when compared with group-averaged data. Recently, it has been shown that the presentation of stimuli from two or more sensory modalities (i.e., multisensory stimulation) not only results in the well-established performance gains but also gives rise to reductions in behavioral and neural response variability. To better understand the relationship between neural and behavioral response variability under multisensory conditions, this study investigated both behavior and brain activity in a task requiring participants to discriminate moving versus static stimuli presented in either a unisensory or multisensory context. EEG data were analyzed with respect to intraindividual and interindividual differences in RTs. The results showed that trial-by-trial variability of RTs was significantly reduced under audiovisual presentation conditions as compared with visual-only presentations across all participants. Intraindividual variability of RTs was linked to changes in correlated activity between clusters within an occipital to frontal network. In addition, interindividual variability of RTs was linked to differential recruitment of medial frontal cortices. The present findings highlight differences in the brain networks that support behavioral benefits during unisensory versus multisensory motion detection and provide an important view into the functional dynamics within neuronal networks underpinning intraindividual performance differences.
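Trial-by-trial RT variability of the kind analyzed here is often summarized per subject and condition as a coefficient of variation. A toy sketch, with all numbers simulated, is shown below; the reported effect corresponds to the audiovisual CV being reliably smaller than the visual-only CV.

```python
import numpy as np

def coefficient_of_variation(rts):
    """Intraindividual RT variability: SD scaled by the mean."""
    rts = np.asarray(rts)
    return rts.std(ddof=1) / rts.mean()

rng = np.random.default_rng(6)
rt_v  = rng.normal(0.45, 0.080, 150)   # visual-only trials
rt_av = rng.normal(0.41, 0.055, 150)   # audiovisual: faster, less variable
print("CV visual: %.3f   CV audiovisual: %.3f"
      % (coefficient_of_variation(rt_v), coefficient_of_variation(rt_av)))
```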
Collapse
Affiliation(s)
- Micah M Murray
- Vaudois University Hospital Center and University of Lausanne; Center for Biomedical Imaging of Lausanne and Geneva; Fondation Asile des Aveugles and University of Lausanne; Vanderbilt University Medical Center
| | | | - Silvio Ionta
- Vaudois University Hospital Center and University of Lausanne; Fondation Asile des Aveugles and University of Lausanne; ETH Zürich
| | - Mark T Wallace
- Vanderbilt University Medical Center; Vanderbilt University
| |
Collapse
|
42
|
Starke J, Ball F, Heinze HJ, Noesselt T. The spatio-temporal profile of multisensory integration. Eur J Neurosci 2017; 51:1210-1223. [PMID: 29057531 DOI: 10.1111/ejn.13753] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2017] [Revised: 10/13/2017] [Accepted: 10/16/2017] [Indexed: 12/29/2022]
Abstract
Task-irrelevant visual stimuli can enhance auditory perception. However, while there is some neurophysiological evidence for mechanisms that underlie the phenomenon, the neural basis of visually induced effects on auditory perception remains unknown. Combining fMRI and EEG with psychophysical measurements in two independent studies, we identified the neural underpinnings and temporal dynamics of visually induced auditory enhancement. Lower- and higher-intensity sounds were paired with a non-informative visual stimulus, while participants performed an auditory detection task. Behaviourally, visual co-stimulation enhanced auditory sensitivity. Using fMRI, enhanced BOLD signals were observed in primary auditory cortex for low-intensity audiovisual stimuli which scaled with subject-specific enhancement in perceptual sensitivity. Concordantly, a modulation of event-related potentials could already be observed over frontal electrodes at an early latency (30-80 ms), which again scaled with subject-specific behavioural benefits. Later modulations starting around 280 ms, that is in the time range of the P3, did not fit this pattern of brain-behaviour correspondence. Hence, the latency of the corresponding fMRI-EEG brain-behaviour modulation points at an early interplay of visual and auditory signals in low-level auditory cortex, potentially mediated by crosstalk at the level of the thalamus. However, fMRI signals in primary auditory cortex, auditory thalamus and the P50 for higher-intensity auditory stimuli were also elevated by visual co-stimulation (in the absence of any behavioural effect) suggesting a general, intensity-independent integration mechanism. We propose that this automatic interaction occurs at the level of the thalamus and might signify a first step of audiovisual interplay necessary for visually induced perceptual enhancement of auditory perception.
Affiliation(s)
- Johanna Starke: Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Felix Ball: Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Hans-Jochen Heinze: Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Toemme Noesselt: Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
43
Zou Z, Chau BKH, Ting KH, Chan CCH. Aging Effect on Audiovisual Integrative Processing in Spatial Discrimination Task. Front Aging Neurosci 2017; 9:374. [PMID: 29184494] [PMCID: PMC5694625] [DOI: 10.3389/fnagi.2017.00374]
Abstract
Multisensory integration is an essential process that people employ daily, from conversing in social gatherings to navigating the nearby environment. The aim of this study was to investigate the impact of aging on multisensory integrative processes using event-related potentials (ERPs); the validity of the study was improved by including “noise” in the contrast conditions. Older and younger participants perceived visual and/or auditory stimuli that contained spatial information and responded by indicating the spatial direction (far vs. near and left vs. right) conveyed in the stimuli using different wrist movements. Electroencephalograms (EEGs) were recorded in each task trial, along with the accuracy and reaction time of the participants’ motor responses. Older participants showed a greater extent of behavioral improvement in the multisensory (as opposed to unisensory) condition than their younger counterparts. Older participants also showed a fronto-centrally distributed super-additive P2, which was not observed in the younger participants. The P2 amplitude difference between the multisensory condition and the sum of the unisensory conditions correlated significantly with performance on spatial discrimination. The results indicate that aging modulates the integrative process at the perceptual and feedback stages, particularly the evaluation of auditory stimuli. Audiovisual (AV) integration may also serve a functional role during spatial discrimination to compensate for the attention function compromised by aging.
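The super-additivity criterion used in this literature compares the multisensory ERP with the sum of its unisensory constituents. A minimal sketch of that comparison in the P2 range (synthetic data; sampling rate, window, and channel indices are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ERPs (electrodes x samples), already baseline-corrected.
fs, pre = 500, 0.2                       # Hz; seconds before stimulus onset
p2 = slice(int((pre + 0.18) * fs), int((pre + 0.26) * fs))  # ~180-260 ms
av, a, v = (rng.normal(0.0, 1.0, (64, 400)) for _ in range(3))
fc_idx = [4, 5, 10]                      # illustrative fronto-central channels

def p2_amp(erp):
    """Mean amplitude over fronto-central electrodes in the P2 window."""
    return erp[fc_idx, p2].mean()

# Super-additivity index: AV - (A + V); positive values over fronto-central
# sites correspond to the super-additive P2 reported for the older group.
index = p2_amp(av) - (p2_amp(a) + p2_amp(v))
print(f"super-additivity index: {index:.3f}")
```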
Affiliation(s)
- Zhi Zou: Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Bolton K H Chau: Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Kin-Hung Ting: Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Chetwyn C H Chan: Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
44
Boyle SC, Kayser SJ, Kayser C. Neural correlates of multisensory reliability and perceptual weights emerge at early latencies during audio-visual integration. Eur J Neurosci 2017; 46:2565-2577. [PMID: 28940728] [PMCID: PMC5725738] [DOI: 10.1111/ejn.13724]
Abstract
To make accurate perceptual estimates, observers must take the reliability of sensory information into account. Despite many behavioural studies showing that subjects weight individual sensory cues in proportion to their reliabilities, it is still unclear when during a trial neuronal responses are modulated by the reliability of sensory information or when they reflect the perceptual weights attributed to each sensory input. We investigated these questions using a combination of psychophysics, EEG‐based neuroimaging and single‐trial decoding. Our results show that the weighted integration of sensory information in the brain is a dynamic process; effects of sensory reliability on task‐relevant EEG components were evident 84 ms after stimulus onset, while neural correlates of perceptual weights emerged 120 ms after stimulus onset. These neural processes had different underlying sources, arising from sensory and parietal regions, respectively. Together these results reveal the temporal dynamics of perceptual and neural audio‐visual integration and support the notion of temporally early and functionally specific multisensory processes in the brain.
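The reliability-weighting principle tested here is the standard maximum-likelihood cue-combination rule: each cue contributes in proportion to its inverse variance. A minimal numeric sketch (illustrative values, not the paper's stimuli):

```python
import numpy as np

def combine(estimates, sigmas):
    """Maximum-likelihood cue combination: weight each cue by its
    reliability, i.e. the inverse of its noise variance."""
    reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    fused = float(np.dot(weights, estimates))
    fused_sigma = float(np.sqrt(1.0 / reliabilities.sum()))
    return fused, weights, fused_sigma

# Illustrative: a reliable auditory estimate paired with a noisier visual
# one; the fused estimate is pulled toward the reliable cue and is less
# variable than either input alone.
fused, w, s = combine(estimates=[9.0, 11.0], sigmas=[0.5, 1.0])
print(f"fused = {fused:.2f}, weights = {np.round(w, 2)}, sigma = {s:.2f}")
```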
Affiliation(s)
- Stephanie C Boyle: Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
- Stephanie J Kayser: Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
- Christoph Kayser: Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
45
Spatial localization of sound elicits early responses from occipital visual cortex in humans. Sci Rep 2017; 7:10415. [PMID: 28874681] [PMCID: PMC5585168] [DOI: 10.1038/s41598-017-09142-z]
Abstract
Much evidence points to an interaction between vision and audition at early cortical sites. However, the functional role of these interactions is not yet understood. Here we show an early response of the occipital cortex to sound that is strongly linked to the spatial localization task performed by the observer. The early occipital response to a sound, usually absent, increased more than 10-fold when the sound was presented during a spatial localization task, but not during a temporal localization task. The response amplification was specific not only to the task but, surprisingly, also to the position of the stimulus across the two hemifields. We suggest that early occipital processing of sound is linked to the construction of an auditory spatial map that may utilize the visual map of the occipital cortex.
46
Interoceptive signals impact visual processing: Cardiac modulation of visual body perception. Neuroimage 2017; 158:176-185. [DOI: 10.1016/j.neuroimage.2017.06.064]
47
Semantic congruent audiovisual integration during the encoding stage of working memory: an ERP and sLORETA study. Sci Rep 2017; 7:5112. [PMID: 28698594] [PMCID: PMC5505990] [DOI: 10.1038/s41598-017-05471-1]
Abstract
Although multisensory integration is an inherent component of functional brain organization, its role in working memory (WM) has attracted little attention. The present study investigated the neural properties underlying multisensory integration in WM by comparing semantically related bimodal stimulus presentations with unimodal presentations and analysing the results with the standardized low-resolution brain electromagnetic tomography (sLORETA) source localization approach. The results showed that memory retrieval reaction times in the congruent audiovisual condition were faster than those in the unisensory conditions. Moreover, the event-related potential (ERP) for simultaneous audiovisual stimuli differed from the ERP for the sum of the unisensory constituents during the encoding stage, within a 236-530 ms timeframe over frontal and parietal-occipital electrodes. The sLORETA images revealed a distributed network of brain areas that participate in the multisensory integration of WM. These results suggest that information inputs from different WM subsystems yield nonlinear multisensory interactions and become integrated during the encoding stage. Within the multicomponent model of WM, the central executive could play a critical role in integrating information from the different slave systems.
48
Being First Matters: Topographical Representational Similarity Analysis of ERP Signals Reveals Separate Networks for Audiovisual Temporal Binding Depending on the Leading Sense. J Neurosci 2017; 37:5274-5287. [PMID: 28450537] [PMCID: PMC5456109] [DOI: 10.1523/jneurosci.2926-16.2017]
Abstract
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA).
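The tRSA logic can be reduced to its core step: correlate the AV and VA scalp maps at each time point and ask which model, identical or distinct maps, better explains the similarity profile. A simplified sketch with synthetic topographies (not the authors' code, which additionally builds full cross-correlation and model matrices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_elec, n_times = 64, 250                # e.g. 500 ms at 500 Hz

# Synthetic multisensory difference maps (electrodes x time) standing in
# for AV - (A + V) and VA - (V + A).
av_maps = rng.normal(0.0, 1.0, (n_elec, n_times))
va_maps = 0.3 * av_maps + rng.normal(0.0, 1.0, (n_elec, n_times))

# Time-resolved spatial similarity between the AV and VA topographies.
similarity = np.array([stats.pearsonr(av_maps[:, t], va_maps[:, t])[0]
                       for t in range(n_times)])

# Model comparison in spirit: under "AVmaps = VAmaps" the similarity should
# sit near the reliability ceiling at every time point; under
# "AVmaps != VAmaps" it stays low, as the tRSA results favored.
print(f"mean spatial correlation: {similarity.mean():.2f}")
```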
49
A dynamical framework to relate perceptual variability with multisensory information processing. Sci Rep 2016; 6:31280. [PMID: 27502974] [PMCID: PMC4977493] [DOI: 10.1038/srep31280]
Abstract
Multisensory processing involves the participation of individual sensory streams, e.g., vision and audition, to facilitate perception of environmental stimuli. An experimental realization of the underlying complexity is captured by the “McGurk effect”: incongruent auditory and visual vocalization stimuli elicit the perception of illusory speech sounds. Further studies have established that the time delay between the onsets of the auditory and visual signals (AV lag) and perturbations in the unisensory streams are key variables that modulate perception. However, only a few quantitative theoretical frameworks have so far been proposed to understand the interplay among these psychophysical variables or the systems-level neural interactions that govern perceptual variability. Here, we propose a dynamical systems model consisting of the basic ingredients of any multisensory processing architecture, as reported by several researchers: two unisensory sub-systems and one multisensory sub-system (nodes). The nodes are connected such that biophysically inspired coupling parameters and time delays become key parameters of the network. We observed that zero AV lag results in maximum synchronization of the constituent nodes and that the degree of synchronization decreases for non-zero lags. The attractor states of this network can thus be interpreted as facilitators for stabilizing specific perceptual experiences. The dynamic model thereby presents a quantitative framework for understanding multisensory information processing.
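The model class described here can be sketched as delay-coupled phase oscillators: synchronization is maximal at zero AV lag and degrades as the lag grows. A toy implementation (parameter values are illustrative assumptions, not the authors' fitted model):

```python
import numpy as np

def simulate(av_lag_ms, K=5.0, T=2.0, dt=0.001, seed=4):
    """Toy three-node network (auditory, visual, multisensory) of
    Kuramoto-style phase oscillators with a delayed auditory input."""
    omegas = 2 * np.pi * np.array([10.0, 10.5, 10.2])   # natural freqs (Hz)
    lag = int(av_lag_ms / 1000.0 / dt)                  # AV lag in steps
    n = int(T / dt)
    theta = np.zeros((n, 3))
    theta[0] = np.random.default_rng(seed).uniform(0, 2 * np.pi, 3)
    for t in range(1, n):
        a, v, m = theta[t - 1]
        a_delayed = theta[max(t - 1 - lag, 0), 0]       # crude initial history
        d = omegas.copy()
        d[0] += K * np.sin(m - a)                       # auditory <- multisensory
        d[1] += K * np.sin(m - v)                       # visual   <- multisensory
        d[2] += K * (np.sin(a_delayed - m) + np.sin(v - m))
        theta[t] = theta[t - 1] + dt * d
    # Kuramoto order parameter over the second half: 1 = full synchrony.
    z = np.exp(1j * theta[n // 2:])
    return float(np.abs(z.mean(axis=1)).mean())

for lag in (0, 100, 300):
    print(f"AV lag {lag:>3} ms -> synchronization {simulate(lag):.3f}")
```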
50
Rigato S, Rieger G, Romei V. Multisensory signalling enhances pupil dilation. Sci Rep 2016; 6:26188. [PMID: 27189316] [PMCID: PMC4870616] [DOI: 10.1038/srep26188]
Abstract
Detecting and integrating information across the senses is an advantageous mechanism for responding efficiently to the environment. In this study, a simple auditory-visual detection task was employed to test whether pupil dilation, generally associated with successful target detection, could be used as a reliable measure for studying multisensory integration in humans. We recorded reaction times and pupil dilation in response to a series of visual and auditory stimuli, which were presented either alone or in combination. The results indicated faster reaction times and larger pupil diameter in response to combined auditory and visual stimuli than to the same stimuli presented in isolation. Moreover, the responses to the multisensory condition exceeded the linear summation of the responses obtained in each unimodal condition. Importantly, faster reaction times corresponded to larger pupil dilation, suggesting that the latter can also serve as a reliable measure of multisensory processes. This study will serve as a foundation for the investigation of auditory-visual integration in populations where simple reaction times cannot be collected, such as developmental and clinical populations.
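One standard way to test whether such multisensory RT gains exceed what parallel unisensory processing could produce is Miller's race-model inequality; a minimal sketch with synthetic RTs (not the study's data, which additionally analysed pupil responses):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic RT samples (seconds) for auditory, visual and audiovisual trials.
rt_a = rng.lognormal(-1.0, 0.25, 200)
rt_v = rng.lognormal(-0.9, 0.25, 200)
rt_av = rng.lognormal(-1.2, 0.22, 200)

def ecdf(samples, t):
    """Empirical cumulative distribution of RTs evaluated at times t."""
    return np.searchsorted(np.sort(samples), t, side="right") / len(samples)

# Miller's inequality: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
# Positive values indicate integration beyond a parallel race between the
# unisensory channels.
t = np.quantile(np.concatenate([rt_a, rt_v, rt_av]), np.linspace(0.05, 0.95, 19))
violation = ecdf(rt_av, t) - np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
print(f"max race-model violation: {violation.max():.3f}")
```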
Affiliation(s)
- Silvia Rigato: Centre for Brain Science, Department of Psychology, University of Essex, Colchester, CO4 3SQ, UK
- Gerulf Rieger: Social and Health Psychology, Department of Psychology, University of Essex, Colchester, CO4 3SQ, UK
- Vincenzo Romei: Centre for Brain Science, Department of Psychology, University of Essex, Colchester, CO4 3SQ, UK