1. Cai B, Tang X, Wang A, Zhang M. Semantically congruent bimodal presentation modulates cognitive control over attentional guidance by working memory. Mem Cognit 2024;52:1065-1078. PMID: 38308161. DOI: 10.3758/s13421-024-01521-y.
Abstract
Although previous studies have well established that audiovisual enhancement facilitates working memory and selective attention, the influence of audiovisual enhancement on attentional guidance by working memory remains an open question. To address this issue, the present study adopted a dual-task paradigm combining a working memory task and a visual search task, in which the content of working memory was presented in audiovisual or visual-only modalities. Given the importance of search speed in memory-driven attentional suppression, we divided participants into two groups based on their reaction time (RT) in neutral trials and examined whether audiovisual enhancement of attentional suppression was modulated by search speed. The results showed that the slow search group exhibited a robust memory-driven attentional suppression effect, and the suppression effect started earlier and was greater in magnitude in the audiovisual condition than in the visual-only condition. In the fast search group, however, the suppression effect occurred only in trials with longer RTs in the visual-only condition, and its temporal dynamics were selectively improved in the audiovisual condition. Furthermore, audiovisual enhancement of memory-driven attention evolved over time. These findings suggest that semantically congruent bimodal presentation can progressively facilitate the strength and temporal dynamics of memory-driven attentional suppression, and that search speed plays an important role in this process. This may reflect a synergistic effect between multisensory working memory representations and top-down suppression mechanisms. The present study demonstrates the flexible role of audiovisual enhancement in cognitive control over memory-driven attention.
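The abstract does not state the exact grouping criterion, so the following is only a minimal sketch of one plausible implementation: a median split of participants on their mean neutral-trial RT. All names and numbers are illustrative assumptions, not the paper's.

```python
# Hypothetical sketch of an RT-based group split (assumed median split).
import numpy as np

rng = np.random.default_rng(0)
neutral_rt = rng.normal(800, 120, 40)  # one mean neutral-trial RT (ms) per participant

median_rt = np.median(neutral_rt)
fast_group = np.where(neutral_rt <= median_rt)[0]  # participant indices
slow_group = np.where(neutral_rt > median_rt)[0]
print(len(fast_group), len(slow_group))            # 20 20
```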
Affiliations
- Biye Cai: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Xiaoyu Tang: School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Aijun Wang: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Ming Zhang: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
2. Zhao S, Zhou Y, Ma F, Xie J, Feng C, Feng W. The dissociation of semantically congruent and incongruent cross-modal effects on the visual attentional blink. Front Neurosci 2023;17:1295010. PMID: 38161792. PMCID: PMC10755906. DOI: 10.3389/fnins.2023.1295010.
Abstract
Introduction: Recent studies have found that the sound-induced alleviation of the visual attentional blink, a well-known phenomenon exemplifying the beneficial influence of multisensory integration on time-based attention, is larger when the sound is semantically congruent, rather than incongruent, with the second visual target (T2). Although this audiovisual congruency effect has been attributed mainly to the semantic conflict carried by the incongruent sound restraining that sound from facilitating T2 processing, it remains unclear whether the integrated semantic information carried by the congruent sound benefits T2 processing.
Methods: To dissociate the congruence-induced benefit and the incongruence-induced reduction in the alleviation of the visual attentional blink at the behavioral and neural levels, the present study combined behavioral measures and event-related potential (ERP) recordings in a visual attentional blink task wherein the T2-accompanying sound, when delivered, could be semantically neutral in addition to congruent or incongruent with respect to T2.
Results: The behavioral data clearly showed that, compared to the neutral sound, the congruent sound improved T2 discrimination during the blink to a greater degree, while the incongruent sound improved it to a lesser degree. The T2-locked ERP data revealed that the early occipital cross-modal N195 component (192-228 ms after T2 onset) was uniquely larger in the congruent-sound condition than in the neutral-sound and incongruent-sound conditions, whereas the late parietal cross-modal N440 component (400-500 ms) was prominent only in the incongruent-sound condition.
Discussion: These findings provide strong evidence that the modulating effect of audiovisual semantic congruency on the sound-induced alleviation of the visual attentional blink contains not only a late incongruence-induced cost but also an early congruence-induced benefit, thereby demonstrating for the first time an unequivocal congruent-sound-induced benefit in alleviating the limitation of time-based visual attention.
Affiliations
- Song Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yuxin Zhou: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Fangfang Ma: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Jimei Xie: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Wenfeng Feng: Department of Psychology, School of Education, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
3. Wafaie K, Mohammed H, Xinrui W, Zhou J, El Sergani AM, Yiqiang Q. Compliance with retainer wear using audiovisual integration and reminder: a randomized clinical trial. Sci Rep 2023;13:8543. PMID: 37237095. DOI: 10.1038/s41598-023-35686-4.
Abstract
Active audiovisual presentation of instructions supports vivid knowledge acquisition and improves the familiarity needed for self-care with retainer wear. The aim of this trial was to assess the impact of audiovisual instructions with additional weekly electronic reminder messages on adherence to the instructed wear time of a Hawley retainer, periodontal outcomes, and participants' experiences. Fifty-two participants (mean age 26.1 years) planned for removable retention were randomly assigned to two parallel groups to receive either (1) audiovisual instructions with an additional weekly reminder, or (2) verbal instructions alone. Each participant received a Hawley retainer equipped with a TheraMon microsensor and was instructed to wear it for 22 h daily. Participants were monitored for adherence to the wear time after 3 (T1) and 6 months (T2), and had their periodontal health and experiences assessed at T2. Overall, the mean objectively measured daily wear time was 14.9 (± 4.9) h at T1 and 14.3 (± 5.4) h at T2. After 3 months, no significant difference was found between the groups (p = 0.065); however, a significant difference favoring better compliance with wear instructions was observed in the audiovisual group after 6 months (p = 0.033). No significant differences were observed between the groups in gingival (p = 0.165) or plaque index scores (p = 0.173). Participants' experiences were similar in both groups, except for satisfaction with the way of delivering instructions, which was rated more favorably in the audiovisual group. Audiovisual instructions with weekly reminders appear to have a significant effect on patient compliance in the longer term. Trial registration: TCTR20230220002.
Affiliations
- Khaled Wafaie: Department of Orthodontics, Faculty of Dentistry, First Affiliated Hospital of Zhengzhou University, No. 1 Jianshe East Road, Erqi District, Zhengzhou, Henan, China
- Hisham Mohammed: Department of Oral Sciences, Faculty of Dentistry, University of Otago, Dunedin, New Zealand
- Wang Xinrui: Department of Orthodontics, Faculty of Dentistry, First Affiliated Hospital of Zhengzhou University, No. 1 Jianshe East Road, Erqi District, Zhengzhou, Henan, China
- Jinshu Zhou: Department of Orthodontics, Faculty of Dentistry, First Affiliated Hospital of Zhengzhou University, No. 1 Jianshe East Road, Erqi District, Zhengzhou, Henan, China
- Ahmed M El Sergani: Department of Oral and Craniofacial Sciences, University of Pittsburgh School of Dental Medicine, Pittsburgh, USA
- Qiao Yiqiang: Department of Orthodontics, Faculty of Dentistry, First Affiliated Hospital of Zhengzhou University, No. 1 Jianshe East Road, Erqi District, Zhengzhou, Henan, China
4. Williams AM, Angeloni CF, Geffen MN. Sound improves neuronal encoding of visual stimuli in mouse primary visual cortex. J Neurosci 2023;43:2885-2906. PMID: 36944489. PMCID: PMC10124961. DOI: 10.1523/jneurosci.2444-21.2023.
Abstract
In everyday life, we integrate visual and auditory information in routine tasks such as navigation and communication. While concurrent sound can improve visual perception, the neuronal correlates of audiovisual integration are not fully understood. Specifically, it remains unclear whether neuronal firing patterns in the primary visual cortex (V1) of awake animals demonstrate similar sound-induced improvement in visual discriminability. Furthermore, presentation of sound is associated with movement in the subjects, but little is understood about whether and how sound-associated movement affects audiovisual integration in V1. Here, we investigated how sound and movement interact to modulate V1 visual responses in awake, head-fixed mice and whether this interaction improves neuronal encoding of the visual stimulus. We presented visual drifting gratings with and without simultaneous auditory white noise to awake mice while recording mouse movement and V1 neuronal activity. Sound modulated the activity of 80% of light-responsive neurons, with 95% of these neurons increasing activity when the auditory stimulus was present. A generalized linear model (GLM) revealed that sound and movement had distinct and complementary effects on the neuronal visual responses. Furthermore, decoding of the visual stimulus from the neuronal activity was improved with sound, an effect that persisted even when controlling for movement. These results demonstrate that sound and movement modulate visual responses in complementary ways, improving neuronal representation of the visual stimulus. This study clarifies the role of movement as a potential confound in neuronal audiovisual responses and expands our knowledge of how multimodal processing is mediated at a neuronal level in the awake brain.
Significance Statement: Sound and movement are both known to modulate visual responses in the primary visual cortex; however, sound-induced movement has largely remained unaccounted for as a potential confound in audiovisual studies in awake animals. Here, the authors found that sound and movement both modulate visual responses in an important visual brain area, the primary visual cortex, in distinct yet complementary ways. Furthermore, sound improved encoding of the visual stimulus even when accounting for movement. This study reconciles contrasting theories on the mechanism underlying audiovisual integration and establishes the primary visual cortex as a key brain region participating in tripartite sensory interactions.
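The abstract names a GLM but gives no implementation details; below is a minimal sketch, assuming a Poisson GLM of trial spike counts with sound presence and movement as predictors. All variable names and simulated data are illustrative assumptions, not the study's.

```python
# Hypothetical sketch: a Poisson GLM separating sound and movement
# contributions to a neuron's trial spike counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_trials = 500
sound_on = rng.integers(0, 2, n_trials)    # 1 if auditory white noise present
movement = rng.gamma(2.0, 1.0, n_trials)   # e.g., running speed (arbitrary units)

# Simulated ground truth: both predictors increase firing.
rate = np.exp(0.5 + 0.8 * sound_on + 0.3 * movement)
spikes = rng.poisson(rate)

X = sm.add_constant(np.column_stack([sound_on, movement]))
fit = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
print(fit.params)   # separate coefficients for intercept, sound, movement
```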
Affiliations
- Aaron M Williams: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania 19104; Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania 19104; Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Christopher F Angeloni: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania 19104; Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Maria N Geffen: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania 19104; Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania 19104; Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
5. He Y, Yang T, He C, Sun K, Guo Y, Wang X, Bai L, Xue T, Xu T, Guo Q, Liao Y, Liu X, Wu S. Effects of audiovisual interactions on working memory: use of the combined N-back + Go/NoGo paradigm. Front Psychol 2023;14:1080788. PMID: 36874804. PMCID: PMC9982107. DOI: 10.3389/fpsyg.2023.1080788.
Abstract
Background: Approximately 94% of the sensory information acquired by humans originates from the visual and auditory channels. Such information can be temporarily stored and processed in working memory, but this system has limited capacity. Working memory plays an important role in higher cognitive functions and is controlled by central executive function. Therefore, elucidating the influence of the central executive function on information processing in working memory, such as in audiovisual integration, is of great scientific and practical importance.
Purpose: This study used a paradigm that combined N-back and Go/NoGo tasks, with simple Arabic numerals as stimuli, to investigate the effects of cognitive load (modulated by varying the magnitude of N) and audiovisual integration on the central executive function of working memory, as well as their interaction.
Methods: Sixty college students aged 17-21 years were enrolled and performed both unimodal and bimodal tasks to evaluate the central executive function of working memory. The order of the three cognitive tasks was pseudorandomized, and a Latin square design was used to account for order effects. Finally, working memory performance, i.e., reaction time and accuracy, was compared between unimodal and bimodal tasks with repeated-measures analysis of variance (ANOVA).
Results: As cognitive load increased, the presence of auditory stimuli interfered with visual working memory to a moderate-to-large extent; similarly, as cognitive load increased, the presence of visual stimuli interfered with auditory working memory with a moderate-to-large effect size.
Conclusion: Our study supports the theory of competing resources, i.e., that visual and auditory information interfere with each other and that the magnitude of this interference is primarily related to cognitive load.
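The analysis step named in the Methods is standard; here is a minimal sketch of a one-way repeated-measures ANOVA on reaction time across modality conditions, under that assumption. Condition labels and simulated values are illustrative, and the study's actual design had additional factors (e.g., cognitive load).

```python
# Hypothetical sketch: repeated-measures ANOVA on RT (unimodal vs. bimodal).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
n_subj = 60
conditions = ["visual", "auditory", "audiovisual"]

df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), len(conditions)),
    "condition": np.tile(conditions, n_subj),
})
# Simulated RTs: bimodal trials 40 ms slower, mimicking interference.
df["rt"] = rng.normal(650, 60, len(df)) + (df["condition"] == "audiovisual") * 40

print(AnovaRM(df, depvar="rt", subject="subject", within=["condition"]).fit())
```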
Affiliations
- Yang He: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Tianqi Yang: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Chunyan He: Department of Nursing, Fourth Military Medical University, Xi'an, China
- Kewei Sun: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Yaning Guo: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Xiuchao Wang: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Lifeng Bai: Faculty of Humanities and Social Sciences, Aviation University of Air Force, Changchun, China
- Ting Xue: Faculty of Humanities and Social Sciences, Aviation University of Air Force, Changchun, China
- Tao Xu: Psychology Section, Secondary Sanatorium of Air Force Healthcare Center for Special Services, Hangzhou, China
- Qingjun Guo: Psychology Section, Secondary Sanatorium of Air Force Healthcare Center for Special Services, Hangzhou, China
- Yang Liao: Air Force Medical Center, Air Force Medical University, Beijing, China
- Xufeng Liu: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Shengjun Wu: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
6. He Y, Guo Z, Wang X, Sun K, Lin X, Wang X, Li F, Guo Y, Feng T, Zhang J, Li C, Tian W, Liu X, Wu S. Effects of audiovisual interactions on working memory task performance: interference or facilitation. Brain Sci 2022;12:886. PMID: 35884692. PMCID: PMC9313432. DOI: 10.3390/brainsci12070886.
Abstract
(1) Background: The combined n-back + Go/NoGo paradigm was used to investigate whether audiovisual interactions interfere with or facilitate working memory (WM). (2) Methods: College students were randomly assigned to perform a working memory task based on either a single (visual or auditory) or dual (audiovisual) stimulus. Reaction times, accuracy, and WM performance were compared across the two groups to investigate the effects of audiovisual interactions. (3) Results: Under low cognitive load (2-back), auditory stimuli had no effect on visual working memory, whereas visual stimuli had a small effect on auditory working memory. Under high cognitive load (3-back), auditory stimuli interfered with visual WM (large effect size), and visual stimuli interfered with auditory WM (medium effect size). (4) Conclusions: Audiovisual effects on WM follow resource competition theory, with competition dominated by the cognitive load of the visual stimulus: vision always interferes with audition, whereas audition interferes with vision only conditionally. As visual cognitive load increased, the competitive effects of audiovisual interactions became more obvious than those of auditory stimuli. Compared with visual stimuli alone, audiovisual stimuli showed significant interference only when visual cognitive load was high. Under low visual cognitive load, the two stimulus components neither facilitated nor interfered with each other, in accordance with a speed–accuracy trade-off.
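The abstract grades interference by effect size; below is a minimal sketch of the paired Cohen's d that such small/medium/large labels conventionally refer to. The data and the exact formula variant are assumptions, not taken from the paper.

```python
# Hypothetical sketch: paired Cohen's d for bimodal-vs-unimodal interference.
import numpy as np

rng = np.random.default_rng(3)
rt_unimodal = rng.normal(700, 80, 30)              # per-participant mean RTs (ms)
rt_bimodal = rt_unimodal + rng.normal(60, 40, 30)  # interference slows responses

diff = rt_bimodal - rt_unimodal
d = diff.mean() / diff.std(ddof=1)   # ~0.2 / 0.5 / 0.8 = small / medium / large
print(round(d, 2))
```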
Affiliations
- Yang He: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
- Zhihua Guo: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
- Xinlu Wang: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
- Kewei Sun: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
- Xinxin Lin: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
- Xiuchao Wang: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
- Fengzhan Li: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
- Yaning Guo: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
- Tingwei Feng: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
- Junpeng Zhang: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
- Congchong Li: School of Public Health, Shaanxi University of Chinese Medicine, Xianyang 712046, China
- Wenqing Tian: School of Public Health, Shaanxi University of Chinese Medicine, Xianyang 712046, China
- Xufeng Liu: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
- Shengjun Wu: Department of Military Medical Psychology, Air Force Medical University, Xi'an 710032, China
7. Semantically congruent audiovisual integration with modal-based attention accelerates auditory short-term memory retrieval. Atten Percept Psychophys 2022;84:1625-1634. PMID: 35641858. DOI: 10.3758/s13414-021-02437-4.
Abstract
Evidence has shown that the benefits of multisensory integration for unisensory perception are asymmetric: auditory perception can receive more multisensory benefit, especially when the attention focus is directed toward a task-irrelevant visual stimulus. It remains unclear, however, whether the benefits of semantically (in)congruent multisensory integration with modal-based attention for subsequent unisensory short-term memory (STM) retrieval are also asymmetric. Using a delayed matching-to-sample paradigm, the present study investigated this issue by manipulating the attention focus during multisensory memory encoding. The results revealed that both visual and auditory STM retrieval reaction times were faster under semantically congruent multisensory conditions than under unisensory memory encoding conditions. We suggest that coherent multisensory representation formation might be optimized by restricted multisensory encoding and can be rapidly triggered by subsequent unisensory memory retrieval demands. Crucially, auditory STM retrieval was exclusively accelerated by semantically congruent multisensory memory encoding, indicating that the less effective sensory modality of memory retrieval relies more on the coherent prior formation of a multisensory representation optimized by modal-based attention.
8. Bigelow J, Morrill RJ, Olsen T, Hasenstaub AR. Visual modulation of firing and spectrotemporal receptive fields in mouse auditory cortex. Curr Res Neurobiol 2022;3:100040. PMID: 36518337. PMCID: PMC9743056. DOI: 10.1016/j.crneur.2022.100040.
Abstract
Recent studies have established significant anatomical and functional connections between visual areas and primary auditory cortex (A1), which may be important for cognitive processes such as communication and spatial perception. These studies have raised two important questions: First, which cell populations in A1 respond to visual input and/or are influenced by visual context? Second, which aspects of sound encoding are affected by visual context? To address these questions, we recorded single-unit activity across cortical layers in awake mice during exposure to auditory and visual stimuli. Neurons responsive to visual stimuli were most prevalent in the deep cortical layers and included both excitatory and inhibitory cells. The overwhelming majority of these neurons also responded to sound, indicating unimodal visual neurons are rare in A1. Other neurons for which sound-evoked responses were modulated by visual context were similarly excitatory or inhibitory but more evenly distributed across cortical layers. These modulatory influences almost exclusively affected sustained sound-evoked firing rate (FR) responses or spectrotemporal receptive fields (STRFs); transient FR changes at stimulus onset were rarely modified by visual context. Neuron populations with visually modulated STRFs and sustained FR responses were mostly non-overlapping, suggesting spectrotemporal feature selectivity and overall excitability may be differentially sensitive to visual context. The effects of visual modulation were heterogeneous, increasing and decreasing STRF gain in roughly equal proportions of neurons. Our results indicate visual influences are surprisingly common and diversely expressed throughout layers and cell types in A1, affecting nearly one in five neurons overall.
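The abstract does not specify how the STRFs were estimated; one common approach, sketched below under that assumption, is ridge regression of firing rate on a time-lagged stimulus spectrogram. All array sizes and data here are simulated, not the study's.

```python
# Hypothetical sketch: STRF estimation via ridge regression on a lagged spectrogram.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_t, n_freq, n_lags = 5000, 16, 20
spec = rng.normal(size=(n_t, n_freq))   # stimulus spectrogram (time x frequency)

# Design matrix: each row holds the preceding n_lags spectrogram frames.
X = np.zeros((n_t, n_freq * n_lags))
for lag in range(n_lags):
    X[lag:, lag * n_freq:(lag + 1) * n_freq] = spec[:n_t - lag]

# Simulated ground-truth STRF (sparse) and resulting firing rate.
true_strf = rng.normal(size=n_freq * n_lags) * (rng.random(n_freq * n_lags) < 0.1)
rate = X @ true_strf + rng.normal(scale=0.5, size=n_t)

strf = Ridge(alpha=10.0).fit(X, rate).coef_.reshape(n_lags, n_freq)
print(strf.shape)   # (time lags, frequency channels)
```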
Affiliations
- James Bigelow: Coleman Memorial Laboratory, University of California, San Francisco, USA; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, 94143, USA
- Ryan J. Morrill: Coleman Memorial Laboratory, University of California, San Francisco, USA; Neuroscience Graduate Program, University of California, San Francisco, USA; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, 94143, USA
- Timothy Olsen: Coleman Memorial Laboratory, University of California, San Francisco, USA; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, 94143, USA
- Andrea R. Hasenstaub: Coleman Memorial Laboratory, University of California, San Francisco, USA; Neuroscience Graduate Program, University of California, San Francisco, USA; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, 94143, USA
9. Semantic congruent audiovisual integration during the encoding stage of working memory: an ERP and sLORETA study. Sci Rep 2017;7:5112. PMID: 28698594. PMCID: PMC5505990. DOI: 10.1038/s41598-017-05471-1.
Abstract
Although multisensory integration is an inherent component of functional brain organization, multisensory integration during working memory (WM) has attracted little attention. The present study investigated the neural properties underlying the multisensory integration of WM by comparing semantically related bimodal stimulus presentations with unimodal stimulus presentations and analyzing the results using the standardized low-resolution brain electromagnetic tomography (sLORETA) source localization approach. The results showed that memory retrieval reaction times in congruent audiovisual conditions were faster than those in unisensory conditions. Moreover, our findings indicated that the event-related potential (ERP) for simultaneous audiovisual stimuli differed from the ERP for the sum of the unisensory constituents during the encoding stage, within a 236-530 ms timeframe over frontal and parietal-occipital electrodes. The sLORETA images revealed a distributed network of brain areas that participate in the multisensory integration of WM. These results suggest that information inputs from different WM subsystems yield nonlinear multisensory interactions and become integrated during the encoding stage. The multicomponent model of WM indicates that the central executive could play a critical role in the integration of information from different slave systems.
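The core contrast described above is the additive model: the ERP to simultaneous audiovisual stimulation versus the sum of the unisensory ERPs. A minimal sketch of that comparison on simulated arrays follows; the shapes and sampling rate are assumptions for illustration.

```python
# Hypothetical sketch: additive-model test, ERP(AV) vs. ERP(A) + ERP(V).
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_chan, n_samp = 100, 64, 600   # 600 samples = 0-600 ms at 1 kHz

# Trial-averaged ERPs per condition (simulated).
erp = {c: rng.normal(size=(n_trials, n_chan, n_samp)).mean(axis=0)
       for c in ("A", "V", "AV")}

# Nonlinear multisensory interaction term.
interaction = erp["AV"] - (erp["A"] + erp["V"])

# Mean interaction in the 236-530 ms window reported above.
window = slice(236, 530)                  # ms equals sample index at 1 kHz
print(interaction[:, window].mean())
```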
10. Juan C, Cappe C, Alric B, Roby B, Gilardeau S, Barone P, Girard P. The variability of multisensory processes of natural stimuli in human and non-human primates in a detection task. PLoS One 2017;12:e0172480. PMID: 28212416. PMCID: PMC5315309. DOI: 10.1371/journal.pone.0172480.
Abstract
Background: Behavioral studies in both humans and animals generally converge on the view that multisensory integration improves reaction times (RTs) in comparison to unimodal stimulation. These multisensory effects depend on diverse conditions, among which the most studied are spatial and temporal congruence. Further, most studies use relatively simple stimuli, whereas in everyday life we are confronted with a large variety of complex stimulations that constantly change our attentional focus over time, a modality switch that can affect stimulus detection. In the present study, we examined the potential sources of variability in reaction times and multisensory gains with respect to the intrinsic features of a large set of natural stimuli.
Methodology/Principal findings: Rhesus macaque monkeys and human subjects performed a simple audiovisual stimulus detection task in which a large collection of unimodal and bimodal natural stimuli with semantic specificities was presented at different saliencies. Although we were able to reproduce the well-established redundant signal effect, we failed to reveal a systematic violation of the race model, which is considered to demonstrate multisensory integration. In both monkeys and humans, our study revealed a large range of multisensory gains, with negative and positive values. While modality switching has clear effects on reaction times, one of the main causes of the variability in multisensory gains appeared to be linked to the intrinsic physical parameters of the stimuli.
Conclusion/Significance: Based on the variability of multisensory benefits, our results suggest that the neuronal mechanisms responsible for the redundant signal effect (interactions vs. integration) are highly dependent on stimulus complexity, suggesting different contributions of uni- and multisensory brain regions. Furthermore, in a simple detection task, the semantic value of individual stimuli tends to have no significant impact on task performance, an effect that is probably present in more cognitive tasks.
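The race-model test mentioned above has a standard form, Miller's race-model inequality; a minimal sketch under that assumption, run on simulated detection RTs:

```python
# Hypothetical sketch: testing Miller's race-model inequality on detection RTs.
import numpy as np

rng = np.random.default_rng(6)
rt_a = rng.normal(420, 60, 200)     # auditory-only RTs (ms)
rt_v = rng.normal(450, 70, 200)     # visual-only RTs (ms)
rt_av = rng.normal(380, 55, 200)    # audiovisual RTs (ms)

def ecdf(rts, t):
    """Empirical cumulative RT distribution evaluated at times t."""
    return (rts[:, None] <= t).mean(axis=0)

t = np.linspace(200, 700, 101)
# Race-model bound: P(AV <= t) should not exceed P(A <= t) + P(V <= t).
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
violation = ecdf(rt_av, t) - bound
print("max violation:", violation.max())  # values > 0 would violate the race model
```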
Affiliations
- Cécile Juan: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Céline Cappe: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Baptiste Alric: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Benoit Roby: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Sophie Gilardeau: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Pascal Barone: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Pascal Girard: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France; INSERM, Toulouse, France