1. Vannasing P, Dionne-Dostie E, Tremblay J, Paquette N, Collignon O, Gallagher A. Electrophysiological responses of audiovisual integration from infancy to adulthood. Brain Cogn 2024; 178:106180. [PMID: 38815526 DOI: 10.1016/j.bandc.2024.106180]
Abstract
Our ability to merge information from different senses into a unified percept is a crucial perceptual process for efficient interaction with our multisensory environment. Yet, the developmental process underlying how the brain implements multisensory integration (MSI) remains poorly understood. This cross-sectional study aims to characterize the developmental patterns of responses to audiovisual events in 131 individuals aged from 3 months to 30 years. Electroencephalography (EEG) was recorded during a passive task including simple auditory, visual, and audiovisual stimuli. In addition to examining age-related variations in MSI responses, we investigated event-related potentials (ERPs) linked with auditory and visual stimulation alone. This was done to depict the typical developmental trajectory of unisensory processing from infancy to adulthood within our sample and to contextualize the maturation effects of MSI in relation to unisensory development. Comparing the neural response to audiovisual stimuli with the sum of the unisensory responses revealed signs of MSI in the ERPs, more specifically between the P2 and N2 components (P2 effect). Furthermore, adult-like MSI responses emerge relatively late in development, at around 8 years of age. The automatic integration of simple audiovisual stimuli is a long developmental process that emerges during childhood and continues to mature during adolescence, with ERP latencies decreasing with age.
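As a rough illustration of the additive (AV versus A + V) criterion referred to above, the sketch below compares an audiovisual ERP with the sum of the unisensory ERPs in an a-priori window. This is a minimal, hypothetical example and not the authors' pipeline: the array shapes, the 150-250 ms window, and the simple one-sample test are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

# Minimal sketch of the additive (AV vs. A+V) criterion for MSI in ERPs.
# All data below are random placeholders, not study data.
rng = np.random.default_rng(0)
n_trials, n_times = 120, 600
times = np.arange(-100, 500)                    # ms relative to stimulus onset
erp_a = rng.normal(0, 1, (n_trials, n_times))   # auditory-only epochs
erp_v = rng.normal(0, 1, (n_trials, n_times))   # visual-only epochs
erp_av = rng.normal(0, 1, (n_trials, n_times))  # audiovisual epochs

# Additive model: the AV response predicted if no interaction takes place.
sum_model = erp_a.mean(axis=0) + erp_v.mean(axis=0)
msi_difference = erp_av.mean(axis=0) - sum_model   # nonzero -> candidate MSI effect

# Test the difference in an assumed window (e.g., around the P2-N2 transition).
window = (times >= 150) & (times <= 250)
t_stat, p_val = stats.ttest_1samp(erp_av[:, window].mean(axis=1),
                                  popmean=sum_model[window].mean())
print(f"AV vs. (A+V) in window: t = {t_stat:.2f}, p = {p_val:.3f}")
```

A full analysis would also propagate the variability of the unisensory ERPs (for example by bootstrapping the summed waveform) rather than treating A + V as a fixed reference.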
Affiliation(s)
- Phetsamone Vannasing
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Emmanuelle Dionne-Dostie
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Julie Tremblay
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Natacha Paquette
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Olivier Collignon
- Institute of Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-La-Neuve, Belgium; School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland.
- Anne Gallagher
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada; Cerebrum, Department of Psychology, University of Montreal, Montreal, QC, Canada.
2. Peng Y, Wang C, Qiu R, Jiang M, Wan X. Influence of flavor information on visual search: Attentional capture by and suppression of flavor-associated colors. Biol Psychol 2024; 190:108821. [PMID: 38789028 DOI: 10.1016/j.biopsycho.2024.108821]
Abstract
Numerous studies have demonstrated the impact of flavor cues on visual search, yet the underlying mechanisms remain elusive. In this experiment, we used event-related potentials (ERPs) to examine whether, and if so how, flavor information could lead to attentional capture by, and suppression of, flavor-associated colors. The participants were asked to taste certain flavored beverages and subsequently complete a shape-based visual search task while their neural activities were simultaneously recorded. The behavioral results revealed that the participants made slower responses when a distractor in the flavor-associated color (DFAC) was present, suggesting an attentional bias toward the flavor-associated color. The ERP results revealed that the N2pc was detected when the target and the DFAC were shown in the same visual field (e.g., both target and DFAC on the right side of the screen) and the pairings between flavor cues and target colors were incongruent. However, the N2pc was not observed when the target and the DFAC were shown in opposite visual fields (e.g., target on the right and DFAC on the left side of the screen) for the incongruent color-flavor pairings. Moreover, the distractor positivity (Pd) was observed when the target and the DFAC were shown in opposite visual fields for the congruent color-flavor pairings. These results suggest that both attentional capture and suppression are involved in the influence of flavor information on visual search. Collectively, these findings provide initial electrophysiological evidence on the mechanisms of the crossmodal influence of flavor cues on visual search.
Affiliation(s)
- Yubin Peng
- Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing, China
- Chujun Wang
- Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing, China
- Ruyi Qiu
- Department of Psychology, Hunan University of Chinese Medicine, Changsha, China
- Minghu Jiang
- Department of Chinese Language and Literature, Tsinghua University, Beijing, China
- Xiaoang Wan
- Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing, China.
3. Zhao S, Ma F, Xie J, Zhou Y, Feng C, Feng W. The stimulus-driven and representation-driven cross-modal attentional spreading are both modulated by audiovisual temporal synchrony. Psychophysiology 2024; 61:e14527. [PMID: 38243583 DOI: 10.1111/psyp.14527]
Abstract
Multisensory integration and attention can interact in a way that attention to the visual constituent of a multisensory object results in an attentional spreading to its ignored auditory constituent, which can be either stimulus-driven or representation-driven depending on whether the object's visual constituent receives extra representation-based selective attention. Previous research using simple unrelated audiovisual combinations has shown that the stimulus-driven attentional spreading is contingent on audiovisual temporal simultaneity. However, little is known about whether this temporal constraint applies also to the representation-driven attentional spreading, and whether it holds for the stimulus-driven process elicited by real-life multisensory objects. The current event-related potential study investigated these questions by systematically manipulating the visual-to-auditory stimulus onset asynchrony (SOA: 0/100/300 ms) in an object-selective visual recognition task wherein the representation-driven and stimulus-driven spreading processes, measured as two distinct auditory negative difference (Nd) components, could be isolated independently. Our results showed that both the representation-driven and stimulus-driven Nds decreased as the SOA increased. Interestingly, the representation-driven Nd was completely absent, whereas the stimulus-driven Nd was still robust, when the auditory constituents were delayed by 300 ms. These findings not only indicate that the role of audiovisual simultaneity in the representation-driven attentional spreading has been underestimated, but also suggest that learned associations between the unisensory constituents of real-life objects render the stimulus-driven attentional spreading more tolerant of audiovisual asynchrony.
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Fangfang Ma
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Jimei Xie
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Yuxin Zhou
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu, China
4. Yang H, Cai B, Tan W, Luo L, Zhang Z. Pitch Improvement in Attentional Blink: A Study across Audiovisual Asymmetries. Behav Sci (Basel) 2024; 14:145. [PMID: 38392498 PMCID: PMC10885858 DOI: 10.3390/bs14020145]
Abstract
Attentional blink (AB) is a phenomenon in which the perception of a second target is impaired when it appears within 200-500 ms after the first target. Sound affects the AB and is accompanied by an asymmetry during audiovisual integration, but it is not known whether this is related to the tonal representation of sound. The aim of the present study was to investigate the effect of audiovisual asymmetry on the attentional blink and whether the presentation of pitch improves the ability to detect a target during an AB that is accompanied by audiovisual asymmetry. The results showed that as the lag increased, the subjects' target recognition improved, and pitch produced further improvements. These improvements exhibited a significant asymmetry across the audiovisual channel. Our findings could contribute to better utilization of audiovisual integration resources to counteract attentional lapses and declines in auditory recognition, which could be useful in areas such as driving and education.
Affiliation(s)
- Haoping Yang
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Suzhou Cognitive Psychology Co-Operative Society, Soochow University, Suzhou 215021, China
- Biye Cai
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Wenjie Tan
- Suzhou Cognitive Psychology Co-Operative Society, Soochow University, Suzhou 215021, China
- Department of Physical Education, South China University of Technology, Guangzhou 518100, China
- Li Luo
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Zonghao Zhang
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
5. Wu Y, Gao M, Wang X, Tang X. Spatial attention modulates multisensory integration: The dissociation between exogenous and endogenous orienting. Q J Exp Psychol (Hove) 2024; 77:418-432. [PMID: 37092806 DOI: 10.1177/17470218231173925]
Abstract
Previous studies have separately found that exogenous orienting decreases multisensory integration (MSI), while endogenous orienting enhances MSI. It is currently unclear, however, why the two types of orienting have opposite effects on MSI. In the current study, we investigated the interaction between spatial attention and MSI in two experiments based on the cue-target paradigm. Experiment 1 separated exogenous from endogenous orienting to investigate the effect of spatial attention on MSI by varying the predictability of the cue. Experiment 2 further explored the effect of endogenous orienting on MSI. We found that exogenous orienting induced by the directionality of the cue decreased MSI, while endogenous orienting induced by the predictability of the cue enhanced MSI. The roles of spatial orienting need and spatial attention bias in the modulation of MSI by exogenous and endogenous orienting are discussed. The present study sheds new light on how spatial attention modulates MSI processes.
Affiliation(s)
- Yingnan Wu
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Min Gao
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Xueli Wang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- School of Foreign Languages, Ningbo University of Technology, Ningbo, China
6. Alwashmi K, Meyer G, Rowe F, Ward R. Enhancing learning outcomes through multisensory integration: A fMRI study of audio-visual training in virtual reality. Neuroimage 2024; 285:120483. [PMID: 38048921 DOI: 10.1016/j.neuroimage.2023.120483]
Abstract
The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in real and virtual environments (VR). Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations provides opportunities for the development of new rehabilitative interventions. This study aimed to investigate how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent a 30 min daily VR training for four weeks. The task was an AV adaptation of a 'scanning training' paradigm that is commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. We show that behavioural performance, operationalised as mean reaction time reduction in VR, significantly improves. In separate tests in a controlled laboratory environment, we showed that the behavioural performance gains in the VR training environment transferred to a significant mean RT reduction for the trained AV voluntary task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter demonstrating a faster response time supported by the presence of audio cues. The behavioural learning effect also transfers to two additional tasks that were tested: a visual search task and an involuntary visual task. Our fMRI results reveal an increase in functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe and cerebellum. These functional changes were only observed for the trained, multisensory, task and not for unimodal visual stimulation. Functional activation changes in the thalamus were significantly correlated to behavioural performance improvements. This study demonstrates that incorporating spatial auditory cues to voluntary visual training in VR leads to augmented brain activation changes in multisensory integration, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.
Affiliation(s)
- Kholoud Alwashmi
- Faculty of Health and Life Sciences, University of Liverpool, United Kingdom; Department of Radiology, Princess Nourah bint Abdulrahman University, Saudi Arabia.
- Georg Meyer
- Digital Innovation Facility, University of Liverpool, United Kingdom
- Fiona Rowe
- Institute of Population Health, University of Liverpool, United Kingdom
- Ryan Ward
- Digital Innovation Facility, University of Liverpool, United Kingdom; School of Computer Science and Mathematics, Liverpool John Moores University, United Kingdom
7. Sun Y, Bo Q, Mao Z, Tian Q, Dong F, Li L, Wang C. Different levels of prepulse inhibition among patients with first-episode schizophrenia, bipolar disorder and major depressive disorder. J Psychiatry Neurosci 2024; 49:E1-E10. [PMID: 38238035 PMCID: PMC10803101 DOI: 10.1503/jpn.230083]
Abstract
BACKGROUND Deficits in prepulse inhibition may be a common feature in first-episode schizophrenia, bipolar disorder (BD) and major depressive disorder (MDD). We sought to explore the levels and viability of prepulse inhibition to differentiate first-episode schizophrenia, BD and MDD in patient populations. METHODS We tested patients with first-episode schizophrenia, BD or MDD and healthy controls using prepulse inhibition paradigms, namely perceived spatial co-location (PSC-PPI) and perceived spatial separation (PSS-PPI). RESULTS We included 53 patients with first-episode schizophrenia, 30 with BD and 25 with MDD, as well as 82 healthy controls. The PSS-PPI indicated that the levels of prepulse inhibition were smallest to largest, respectively, in the first-episode schizophrenia, BD, MDD and control groups. Relative to the healthy controls, the prepulse inhibition deficits in the first-episode schizophrenia group were significant (p < 0.001), but the prepulse inhibitions were similar between patients with BD and healthy controls, and between patients with MDD and healthy controls. The receiver operating characteristic curve analysis showed that PSS-PPI (area under the curve [AUC] 0.73, p < 0.001) and latency (AUC 0.72, p < 0.001) were significant for differentiating patients with first-episode schizophrenia or BD from healthy controls. LIMITATIONS The demographics of the 4 groups were not ideally matched. We did not perform cognitive assessments. The possible confounding effect of medications on prepulse inhibition could not be eliminated. CONCLUSION The level of prepulse inhibition among patients with first-episode schizophrenia was the lowest, with levels among patients with BD, patients with MDD and healthy controls increasingly higher. The PSS-PPI paradigm was more effective than PSC-PPI to recognize deficits in prepulse inhibition. These results provide a basis for further research on biological indicators that can assist differential diagnoses in psychosis.
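For context, prepulse inhibition is conventionally expressed as the percent reduction of the startle response when a prepulse precedes the pulse, and group separation can be summarized with an ROC analysis such as the AUC values reported above. The sketch below uses the conventional %PPI formula and scikit-learn's ROC routine with made-up numbers; it is not the authors' code, and the startle values and group coding are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ppi_percent(pulse_alone, prepulse_pulse):
    """Conventional prepulse inhibition: percent reduction of the startle
    response when the startling pulse is preceded by a prepulse."""
    return 100.0 * (pulse_alone - prepulse_pulse) / pulse_alone

# Illustrative startle magnitudes (arbitrary units), not data from the study.
pulse_alone = np.array([5.2, 4.8, 6.1, 5.5, 5.0, 5.9])
prepulse_pulse = np.array([2.1, 3.9, 2.8, 3.0, 4.1, 2.5])
ppi = ppi_percent(pulse_alone, prepulse_pulse)

# ROC analysis with PPI as the classifier score; label 1 = healthy control,
# 0 = first-episode schizophrenia (assumed coding, illustrative labels only).
labels = np.array([1, 0, 1, 0, 0, 1])
print(ppi.round(1), roc_auc_score(labels, ppi))
```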
Affiliation(s)
- Yue Sun
- From the National Clinical Research Center for Mental Disorders and Beijing Key Laboratory of Mental Disorders, and Beijing Institute for Brain Disorders Center of Schizophrenia, Beijing Anding Hospital, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the Advanced Innovation Center for Human Brain Protection, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China (Li)
- Qijing Bo
- From the National Clinical Research Center for Mental Disorders and Beijing Key Laboratory of Mental Disorders, and Beijing Institute for Brain Disorders Center of Schizophrenia, Beijing Anding Hospital, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the Advanced Innovation Center for Human Brain Protection, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China (Li)
- Zhen Mao
- From the National Clinical Research Center for Mental Disorders and Beijing Key Laboratory of Mental Disorders, and Beijing Institute for Brain Disorders Center of Schizophrenia, Beijing Anding Hospital, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the Advanced Innovation Center for Human Brain Protection, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China (Li)
- Qing Tian
- From the National Clinical Research Center for Mental Disorders and Beijing Key Laboratory of Mental Disorders, and Beijing Institute for Brain Disorders Center of Schizophrenia, Beijing Anding Hospital, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the Advanced Innovation Center for Human Brain Protection, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China (Li)
- Fang Dong
- From the National Clinical Research Center for Mental Disorders and Beijing Key Laboratory of Mental Disorders, and Beijing Institute for Brain Disorders Center of Schizophrenia, Beijing Anding Hospital, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the Advanced Innovation Center for Human Brain Protection, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China (Li)
- Liang Li
- From the National Clinical Research Center for Mental Disorders and Beijing Key Laboratory of Mental Disorders, and Beijing Institute for Brain Disorders Center of Schizophrenia, Beijing Anding Hospital, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the Advanced Innovation Center for Human Brain Protection, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China (Li)
- Chuanyue Wang
- From the National Clinical Research Center for Mental Disorders and Beijing Key Laboratory of Mental Disorders, and Beijing Institute for Brain Disorders Center of Schizophrenia, Beijing Anding Hospital, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the Advanced Innovation Center for Human Brain Protection, Capital Medical University (Sun, Bo, Mao, Tian, Dong, Wang); the School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China (Li)
8. Yang X, Ying C, Zhu L, Wenjing W. The neural oscillations in delta- and theta-bands contribute to divided attention in audiovisual integration. Perception 2024; 53:44-60. [PMID: 37899595 DOI: 10.1177/03010066231208539]
Abstract
One of the key mechanisms implicated in multisensory processing is neural oscillation in distinct frequency bands. Many studies have explored the modulation of attention by recording electroencephalography signals while subjects attended one modality and ignored input from the other modality. However, when attention is directed toward one modality, it may not always be possible to shut out inputs from a different modality completely. Since many situations require dividing attention between audition and vision, it is imperative to investigate the neural mechanisms underlying the processing of concurrent auditory and visual sensory streams. In the present study, we designed an audiovisual semantic discrimination task in which the subjects were asked to share attention between auditory and visual stimuli. We explored the contribution of lower-frequency neural oscillations to the modulation of audiovisual integration by divided attention. Our results imply that theta-band activity contributes to the early modulation of divided attention, whereas delta-band activity contributes to the late modulation of divided attention to audiovisual integration. Moreover, fronto-central delta- and theta-band activity is likely a marker of divided attention in audiovisual integration, and oscillations in the delta and theta bands are conducive to allocating attentional resources during dual-tasking that involves task-coordinating abilities.
Affiliation(s)
- Xi Yang
- Northeast Electric Power University, P. R. China
- Chen Ying
- Northeast Electric Power University, P. R. China
- Lan Zhu
- Northeast Electric Power University, P. R. China
- Wang Wenjing
- Northeast Electric Power University, P. R. China
9. Wang X, Tang X, Wang A, Zhang M. Non-spatial inhibition of return attenuates audiovisual integration owing to modality disparities. Atten Percept Psychophys 2023. [PMID: 38127253 DOI: 10.3758/s13414-023-02825-y]
Abstract
Although previous studies have investigated the relationship between inhibition of return (IOR) and multisensory integration, the influence of non-spatial IOR has not been explored. The present study aimed to investigate the influence of non-spatial IOR on audiovisual integration by using a "prime-neutral cue-target" paradigm. In Experiment 1, which manipulated prime validity and target modality, the targets were positioned centrally, revealing significant non-spatial IOR effects in the visual, auditory, and audiovisual modalities. Analysis of relative multisensory response enhancement (rMRE) indicated substantial audiovisual integration enhancement in both the valid and invalid target conditions, and the enhancement was weaker for valid targets than for invalid targets. In Experiment 2, the targets were positioned above and below to rule out repetition blindness (RB); this experiment replicated the results observed in Experiment 1. Notably, in both experiments the correlation between modality differences and the rMRE for valid targets indicated that differences in signal strength between the visual and auditory modalities contributed to a reduction in audiovisual integration, whereas the absence of such a correlation for invalid targets suggests that attention may play a key role in this process. The present study highlights how non-spatial IOR reduces audiovisual integration and sheds light on the complex interaction between attention and multisensory integration.
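The rMRE index mentioned above is commonly computed from mean response times as the percent speed-up of the audiovisual response relative to the faster unisensory response. The sketch below uses that common formulation with made-up RTs; whether it matches the exact definition used in this paper is an assumption.

```python
def relative_mre(rt_auditory, rt_visual, rt_audiovisual):
    """Relative multisensory response enhancement (rMRE), a common formulation:
    percent speed-up of the mean audiovisual RT relative to the faster of the
    two unisensory mean RTs. Positive values indicate multisensory facilitation."""
    fastest_unisensory = min(rt_auditory, rt_visual)
    return 100.0 * (fastest_unisensory - rt_audiovisual) / fastest_unisensory

# Illustrative mean RTs in milliseconds (not values from the study):
print(relative_mre(rt_auditory=420.0, rt_visual=450.0, rt_audiovisual=380.0))  # ~9.5%
```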
Affiliation(s)
- Xiaoxue Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Xiaoyu Tang
- School of Psychology, Liaoning Normal University, Dalian, China
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China.
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China.
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China.
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan.
10. Zou Z, Zhao B, Ting KH, Wong C, Hou X, Chan CCH. Multisensory integration augmenting motor processes among older adults. Front Aging Neurosci 2023; 15:1293479. [PMID: 38192281 PMCID: PMC10773807 DOI: 10.3389/fnagi.2023.1293479]
Abstract
Objective Multisensory integration enhances sensory processing in older adults. This study aimed to investigate how such sensory enhancement modulates motor-related processes in healthy older adults. Method Thirty-one older adults (12 males, mean age 67.7 years) and 29 younger adults as controls (16 males, mean age 24.9 years) participated in this study. Participants were asked to discriminate spatial information embedded in unisensory (visual or auditory) and multisensory (audiovisual) conditions. The responses, made by movements of the left and right wrists corresponding to the spatial information, were registered with specially designed pads. The electroencephalogram (EEG) markers were the event-related super-additive P2 in the frontal-central region, the stimulus-locked lateralized readiness potential (s-LRP), and the response-locked lateralized readiness potential (r-LRP). Results Older participants showed significantly faster and more accurate responses in the multisensory condition than in the unisensory conditions. Both groups had significantly less negative-going s-LRP amplitudes elicited at the central sites in the between-condition contrasts. However, only the older group showed significantly less negative-going, centrally distributed r-LRP amplitudes. More importantly, only the r-LRP amplitude in the audiovisual condition significantly predicted behavioral performance. Conclusion Audiovisual integration speeds responses, which is associated with modulated motor-related processes among the older participants. The super-additive effects modulate both the motor preparation and generation processes. Interestingly, only the modulated motor generation process contributes to faster reaction times. As such effects were observed in older but not younger participants, multisensory integration likely augments motor functions in those with age-related neurodegeneration.
Affiliation(s)
- Zhi Zou
- Department of Sport and Health, Guangzhou Sport University, Guangzhou, China
- Benxuan Zhao
- Department of Sport and Health, Guangzhou Sport University, Guangzhou, China
- Kin-hung Ting
- University Research Facility in Behavioral and Systems Neuroscience, The Hong Kong Polytechnic University, Hong Kong, Hong Kong SAR, China
- Clive Wong
- Department of Psychology, The Education University of Hong Kong, New Territories, Hong Kong SAR, China
- Xiaohui Hou
- Department of Sport and Health, Guangzhou Sport University, Guangzhou, China
- Chetwyn C. H. Chan
- Department of Psychology, The Education University of Hong Kong, New Territories, Hong Kong SAR, China
11. Choi I, Demir I, Oh S, Lee SH. Multisensory integration in the mammalian brain: diversity and flexibility in health and disease. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220338. [PMID: 37545309 PMCID: PMC10404930 DOI: 10.1098/rstb.2022.0338]
Abstract
Multisensory integration (MSI) occurs in a variety of brain areas, spanning cortical and subcortical regions. In traditional studies of sensory processing, the sensory cortices have been considered to process sensory information in a modality-specific manner. The sensory cortices, however, send the information to other cortical and subcortical areas, including the higher association cortices and the other sensory cortices, where inputs from multiple modalities converge and are integrated to generate a meaningful percept. This integration process is neither simple nor fixed because these brain areas interact with each other via complicated circuits, which can be modulated by numerous internal and external conditions. As a result, dynamic MSI makes multisensory decisions flexible and adaptive in behaving animals. Impairments in MSI occur in many psychiatric disorders, which may result in an altered perception of multisensory stimuli and an abnormal reaction to them. This review discusses the diversity and flexibility of MSI in mammals, including humans, primates and rodents, as well as the brain areas involved. It further explains how such flexibility influences perceptual experiences in behaving animals in both health and disease. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Ilsong Choi
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Ilayda Demir
- Department of biological sciences, KAIST, Daejeon 34141, Republic of Korea
- Seungmi Oh
- Department of biological sciences, KAIST, Daejeon 34141, Republic of Korea
- Seung-Hee Lee
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Department of biological sciences, KAIST, Daejeon 34141, Republic of Korea
12. Chen L, Zhu P, Li J, Song H, Liu H, Shen M, Chen H. The modulation of expectation violation on attention: Evidence from the spatial cueing effects. Cognition 2023; 238:105488. [PMID: 37178591 DOI: 10.1016/j.cognition.2023.105488]
Abstract
The study sought to investigate whether and how expectation violation can modulate attention using the exogenous spatial cueing paradigm, under the theoretical framework of the Memory Encoding Cost (MEC) model. The MEC proposes that exogenous spatial cueing effects are mainly driven by a combination of two distinct mechanisms: attentional facilitation triggered by the presence of an abrupt cue, and attentional suppression induced by memory encoding of the cue. In the current experiments, participants needed to identify a target letter that was sometimes preceded by a peripheral onset cue. Various types of expectation violation were introduced by regulating the probability of cue presentation (Experiments 1 & 5), the probability of cue location (Experiments 2 & 4), and the probability of irrelevant sound presentation (Experiment 3). The results showed that expectation violation could enhance the cueing effect (valid vs. invalid cue) in some cases. More crucially, all experiments consistently observed an asymmetrical modulation of expectation violation on the cost (invalid vs. neutral cue) and benefit (valid vs. neutral cue) effects: expectation violation increased the cost effects, while it either did not modulate or decreased (or even reversed) the benefit effects. Furthermore, Experiment 5 provided direct evidence that violation of expectations could enhance the memory encoding of a cue (e.g., its color) and that this memory advantage could manifest quickly in the early stages of the experiment. The MEC explains these findings better than some traditional models such as the spotlight model: expectation violation can enhance both the attentional facilitation triggered by the cue and the memory encoding of irrelevant cue information. These findings suggest that expectation violation has a general adaptive function in modulating attentional selectivity.
Affiliation(s)
- Luo Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
- Ping Zhu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
- Jian Li
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
- Huixin Song
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
- Huiying Liu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
- Mowei Shen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China.
- Hui Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China.
13. Jiang Y, Qiao R, Shi Y, Tang Y, Hou Z, Tian Y. The effects of attention in auditory-visual integration revealed by time-varying networks. Front Neurosci 2023; 17:1235480. [PMID: 37600005 PMCID: PMC10434229 DOI: 10.3389/fnins.2023.1235480]
Abstract
Attention and audiovisual integration are crucial subjects in the field of brain information processing. A large number of previous studies have sought to determine the relationship between them through specific experiments but have failed to reach a unified conclusion. These studies explored the relationship through the frameworks of early, late, and parallel integration, though network analysis has been employed only sparingly. In this study, we employed time-varying network analysis, which offers comprehensive and dynamic insight into cognitive processing, to explore the relationship between attention and auditory-visual integration. The combination of high-spatial-resolution functional magnetic resonance imaging (fMRI) and high-temporal-resolution electroencephalography (EEG) was used. First, a generalized linear model (GLM) was employed to find the task-related fMRI activations, which were selected as regions of interest (ROIs) serving as nodes of the time-varying network. Then the electrical activity of the auditory-visual cortex was estimated via the normalized minimum norm estimation (MNE) source localization method. Finally, the time-varying network was constructed using the adaptive directed transfer function (ADTF) technique. Task-related fMRI activations were mainly observed in the bilateral temporoparietal junction (TPJ), superior temporal gyrus (STG), and primary visual and auditory areas, and the time-varying network analysis revealed that V1/A1↔STG interactions occurred before TPJ↔STG interactions. Therefore, the results support the theory that auditory-visual integration occurs before attention, aligning with the early integration framework.
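The EEG source-estimation step described above can be sketched with MNE-Python. The function below assumes that a forward model, a noise covariance, an evoked response, and fMRI-defined ROI labels already exist; it is an illustrative sketch of minimum norm source estimation with ROI extraction, not the authors' pipeline, and the ADTF connectivity step itself is not part of MNE and would need a separate implementation.

```python
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

def roi_source_timecourses(evoked, fwd, noise_cov, roi_labels):
    """Estimate cortical sources for an evoked response with minimum norm
    estimation and average them within fMRI-defined ROI labels (network nodes)."""
    inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
    # method="dSPM" or "sLORETA" would give noise-normalized variants of MNE.
    stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")
    # One time course per ROI; these node signals would then feed the
    # time-varying (ADTF) directed-connectivity model.
    return mne.extract_label_time_course(stc, labels=roi_labels, src=inv["src"],
                                         mode="mean_flip")
```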
Affiliation(s)
- Yuhao Jiang
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Central Nervous System Drug Key Laboratory of Sichuan Province, Luzhou, China
- Rui Qiao
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yupan Shi
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yi Tang
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Zhengjun Hou
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yin Tian
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
14. Li X, Tang X, Yang J, Wang A, Zhang M. Visual adaptation changes the susceptibility to the fission illusion. Atten Percept Psychophys 2023; 85:2046-2055. [PMID: 36949258 DOI: 10.3758/s13414-023-02686-5]
Abstract
The sound-induced flash illusion (SiFI) occurs when participants incorrectly perceive the number of visual flashes as equal to the number of auditory beeps presented within 100 ms. Although previous studies have found that repetition suppression can reduce an individual's perceptual sensitivity to the SiFI, there is not yet a consensus on how visual adaptation affects the SiFI. In the present study, we added prolonged adapting visual stimuli prior to the presentation of the audiovisual stimuli to investigate whether the bottom-up factor of adaptation affects the SiFI. The adapting visual stimuli consisted of one or two identical visual stimuli presented continuously for 2 minutes, followed by the audiovisual stimuli. Both adaptation conditions showed SiFI effects. For the fission illusion, accuracy was significantly lower after adapting to double flashes than after adapting to a single flash. Our analyses indicated that this pattern could be attributed to a lower d' after adapting to double flashes than after adapting to a single flash. However, accuracy, discriminability, and criterion did not differ significantly between the two adaptation conditions for the fusion illusion, owing to its instability. Thus, the present study indicated that the reduced perceptual sensitivity resulting from visual adaptation can enhance the fission illusion in multisensory integration.
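The sensitivity (d') and criterion measures referred to above follow standard signal detection theory. The snippet below applies the standard formulas; the mapping of SiFI trial types to hits and false alarms and the example rates are illustrative assumptions, not the paper's exact coding.

```python
from scipy.stats import norm

def dprime_and_criterion(hit_rate, false_alarm_rate):
    """Standard signal-detection indices: sensitivity d' = z(H) - z(FA) and
    criterion c = -0.5 * (z(H) + z(FA)). Rates should be kept off 0 and 1
    (e.g., with a log-linear correction) before calling this."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(false_alarm_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Illustrative fission-illusion coding (an assumption): hits = "two flashes"
# reports on real double-flash trials, false alarms = "two flashes" reports on
# one-flash-plus-two-beeps illusion trials. The rates below are made up.
d_single, c_single = dprime_and_criterion(hit_rate=0.85, false_alarm_rate=0.40)
d_double, c_double = dprime_and_criterion(hit_rate=0.82, false_alarm_rate=0.55)
print(f"adapt single flash: d'={d_single:.2f}; adapt double flash: d'={d_double:.2f}")
```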
Affiliation(s)
- Xin Li
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Xiaoyu Tang
- School of Psychology, Liaoning Normal University, Dalian, China
- Jiajia Yang
- Applied Brain Science Lab Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China.
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China.
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan.
15. Ahmed F, Nidiffer AR, O'Sullivan AE, Zuk NJ, Lalor EC. The integration of continuous audio and visual speech in a cocktail-party environment depends on attention. Neuroimage 2023; 274:120143. [PMID: 37121375 DOI: 10.1016/j.neuroimage.2023.120143]
Abstract
In noisy environments, our ability to understand speech benefits greatly from seeing the speaker's face. This is attributed to the brain's ability to integrate audio and visual information, a process known as multisensory integration. In addition, selective attention plays an enormous role in what we understand, the so-called cocktail-party phenomenon. But how attention and multisensory integration interact remains incompletely understood, particularly in the case of natural, continuous speech. Here, we addressed this issue by analyzing EEG data recorded from participants who undertook a multisensory cocktail-party task using natural speech. To assess multisensory integration, we modeled the EEG responses to the speech in two ways. The first assumed that audiovisual speech processing is simply a linear combination of audio speech processing and visual speech processing (i.e., an A+V model), while the second allows for the possibility of audiovisual interactions (i.e., an AV model). Applying these models to the data revealed that EEG responses to attended audiovisual speech were better explained by an AV model, providing evidence for multisensory integration. In contrast, unattended audiovisual speech responses were best captured using an A+V model, suggesting that multisensory integration is suppressed for unattended speech. Follow up analyses revealed some limited evidence for early multisensory integration of unattended AV speech, with no integration occurring at later levels of processing. We take these findings as evidence that the integration of natural audio and visual speech occurs at multiple levels of processing in the brain, each of which can be differentially affected by attention.
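The A+V versus AV model comparison described above can be illustrated with a toy ridge-regression encoding analysis. This is a simplified analogue of the authors' approach (which uses temporal response functions), not their code; the stimulus feature matrices (e.g., acoustic envelope and lip-movement features) and the lack of proper cross-validation are assumptions made for brevity.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_and_predict(train_features, train_eeg, test_features, alpha=1.0):
    """Fit a ridge encoding model (stimulus features -> EEG) and predict EEG."""
    return Ridge(alpha=alpha).fit(train_features, train_eeg).predict(test_features)

def compare_additive_vs_av(aud_feat, vis_feat, eeg_a, eeg_v,
                           av_aud_feat, av_vis_feat, eeg_av):
    """Contrast an additive A+V model with a full AV model of audiovisual EEG.
    A+V: models trained on unisensory data, predictions summed for AV data.
    AV : model trained on the audiovisual data itself (can absorb interactions).
    In practice both would be evaluated with cross-validation."""
    pred_a_plus_v = (fit_and_predict(aud_feat, eeg_a, av_aud_feat)
                     + fit_and_predict(vis_feat, eeg_v, av_vis_feat))
    av_features = np.hstack([av_aud_feat, av_vis_feat])
    pred_av = fit_and_predict(av_features, eeg_av, av_features)
    corr = lambda x, y: np.corrcoef(x.ravel(), y.ravel())[0, 1]
    return corr(pred_a_plus_v, eeg_av), corr(pred_av, eeg_av)
```

A higher correlation for the AV model than for the summed A+V predictions would indicate audiovisual interactions beyond linear summation, mirroring the attended-speech result described above.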
Affiliation(s)
- Farhin Ahmed
- Department of Biomedical Engineering, Department of Neuroscience, and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY 14627, USA
- Aaron R Nidiffer
- Department of Biomedical Engineering, Department of Neuroscience, and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY 14627, USA
- Aisling E O'Sullivan
- Department of Biomedical Engineering, Department of Neuroscience, and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY 14627, USA; School of Engineering, Trinity Centre for Biomedical Engineering, and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
- Nathaniel J Zuk
- Edmond & Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- Edmund C Lalor
- Department of Biomedical Engineering, Department of Neuroscience, and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY 14627, USA; School of Engineering, Trinity Centre for Biomedical Engineering, and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland.
16. Ren Y, Li Y, Xu Z, Luo R, Qian R, Duan J, Yang J, Yang W. Aging effect of cross-modal interactions during audiovisual detection and discrimination by behavior and ERPs. Front Aging Neurosci 2023; 15:1151652. [PMID: 37181627 PMCID: PMC10169674 DOI: 10.3389/fnagi.2023.1151652]
Abstract
Introduction Numerous studies have shown that aging greatly affects audiovisual integration; however, it is still unclear when the aging effect occurs, and its neural mechanism has yet to be fully elucidated. Methods We assessed the audiovisual integration (AVI) of older (n = 40) and younger (n = 45) adults using simple meaningless stimulus detection and discrimination tasks. The results showed that responses were significantly faster and more accurate for younger adults than for older adults in both the detection and discrimination tasks. Behaviorally, the AVI was comparable for older and younger adults during stimulus detection (9.37% vs. 9.43%), but it was lower for older than for younger adults during stimulus discrimination (9.48% vs. 13.08%). The electroencephalography (EEG) analysis showed a comparable AVI amplitude at 220-240 ms for both groups during stimulus detection and discrimination, with no significant difference between brain regions for older adults but a higher AVI amplitude in the right posterior region for younger adults. Additionally, a significant AVI was found for younger adults at 290-310 ms during stimulus discrimination but was absent for older adults. Furthermore, significant AVI was found in the left anterior and right anterior regions at 290-310 ms for older adults but in the central, right posterior and left posterior regions for younger adults. Discussion These results suggest that the aging effect on AVI occurs in multiple stages, but the attenuated AVI mainly occurred at the later discrimination stage, attributable to attention deficits.
Affiliation(s)
- Yanna Ren
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Yan Li
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Zhihan Xu
- Department of Foreign Language, Ningbo University of Technology, Ningbo, China
- Rui Luo
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Runqi Qian
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Jieping Duan
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Jiajia Yang
- Applied Brain Science Lab Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
17. The role of primary motor cortex in manual inhibition of return: A transcranial magnetic stimulation study. Behav Brain Res 2023; 445:114380. [PMID: 36870395 DOI: 10.1016/j.bbr.2023.114380]
Abstract
Inhibition of return (IOR) is a behavioural phenomenon characterised by longer response times (RTs) to stimuli presented at previously cued versus uncued locations. The neural mechanisms underlying IOR effects are not fully understood. Previous neurophysiological studies have identified a role of frontoparietal areas including posterior parietal cortex (PPC) in the generation of IOR, but the contribution of primary motor cortex (M1) has not been directly tested. The present study investigated the effects of single-pulse transcranial magnetic stimulation (TMS) over M1 on manual IOR in a key-press task where peripheral (left or right) targets followed a cue at the same or opposite location at different SOAs (100/300/600/1000 ms). In Experiment 1, TMS was applied over right M1 on a randomized 50% of trials. In Experiment 2, active or sham stimulation was provided in separate blocks. In the absence of TMS (non-TMS trials in Experiment 1 and sham trials in Experiment 2), evidence of IOR was observed in RTs at longer SOAs. In both experiments, IOR effects differed between TMS and non-TMS/sham conditions, but the effects of TMS were greater and statistically significant in Experiment 1 where TMS and non-TMS trials were randomly interspersed. The magnitude of motor-evoked potentials was not altered by the cue-target relationship in either experiment. These findings do not support a key role of M1 in the mechanisms of IOR but suggest the need for further research to elucidate the role of the motor system in manual IOR effects.
18. Ren Y, Li H, Li Y, Xu Z. Sustained visual attentional load modulates audiovisual integration in older and younger adults. Iperception 2023; 14:20416695231157348. [PMID: 36845028 PMCID: PMC9950617 DOI: 10.1177/20416695231157348]
Abstract
Previous studies have shown that attention influences audiovisual integration (AVI) at multiple stages, but it remains unclear how AVI interacts with attentional load. In addition, while aging has been associated with sensory-functional decline, little is known about how older individuals integrate cross-modal information under attentional load. To investigate these issues, 20 older adults and 20 younger adults were recruited to perform a dual task comprising a multiple object tracking (MOT) task, which manipulated sustained visual attentional load, and an audiovisual discrimination task, which assessed AVI. The results showed that response times were shorter and hit rates were higher for audiovisual stimuli than for auditory or visual stimuli alone, and in younger adults than in older adults. The race model analysis showed that AVI was higher under the load_3 condition (monitoring two targets of the MOT task) than under any other load condition (no load [NL], or monitoring one or three targets). This effect was found regardless of age. However, AVI was lower in older adults than in younger adults under the NL condition. Moreover, the peak latency was longer, and the time window of AVI was delayed, in older adults compared with younger adults under all conditions. These results suggest that a slight visual sustained attentional load increased AVI, whereas a heavy visual sustained attentional load decreased AVI, supporting the claim that attentional resources are limited; we further propose that AVI is positively modulated by available attentional resources. Finally, there were substantial effects of aging on AVI, with AVI being delayed in older adults.
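The race model analysis mentioned above typically tests Miller's race-model inequality, under which the audiovisual RT distribution should not exceed the sum of the unisensory RT distributions unless integration occurs. The sketch below implements that standard test on made-up RTs; whether it matches the exact AVI quantification used in this paper is an assumption.

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, quantiles=np.linspace(0.05, 0.95, 19)):
    """Miller's race-model inequality: P(RT<=t | AV) <= P(RT<=t | A) + P(RT<=t | V).
    Returns, at time points taken from the AV RT quantiles, how far the observed
    audiovisual CDF exceeds the bound; positive values indicate integration
    beyond mere statistical facilitation."""
    t_points = np.quantile(rt_av, quantiles)
    cdf = lambda rts, t: np.mean(rts[:, None] <= t[None, :], axis=0)
    bound = np.minimum(cdf(rt_a, t_points) + cdf(rt_v, t_points), 1.0)
    return cdf(rt_av, t_points) - bound

# Illustrative RT samples in milliseconds (not data from the study):
rng = np.random.default_rng(1)
violation = race_model_violation(rt_av=rng.normal(380, 50, 200),
                                 rt_a=rng.normal(460, 60, 200),
                                 rt_v=rng.normal(470, 60, 200))
print(violation.round(3))   # positive early values suggest audiovisual integration
```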
Affiliation(s)
- Yanna Ren
- Weiping Yang, Department of Psychology, Faculty of Education, Hubei University, Wuhan, 430062, China.
- Yan Li
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Zhihan Xu
- Department of Foreign Language, Ningbo University of Technology, Ningbo, China
19. Lucia S, Aydin M, Bianco V, Fiorini L, Mussini E, Di Russo F. Effect of anticipatory multisensory integration on sensory-motor performance. Brain Struct Funct 2023. [PMID: 36808005 DOI: 10.1007/s00429-023-02620-3]
Abstract
Multisensory integration (MSI) is a phenomenon that occurs in sensory areas after the presentation of multimodal stimuli. Nowadays, little is known about the anticipatory top-down processes taking place in the preparation stage of processing before the stimulus onset. Considering that the top-down modulation of modality-specific inputs might affect the MSI process, this study attempts to understand whether the direct modulation of the MSI process, beyond the well-known sensory effects, may lead to additional changes in multisensory processing also in non-sensory areas (i.e., those related to task preparation and anticipation). To this aim, event-related potentials (ERPs) were analyzed both before and after auditory and visual unisensory and multisensory stimuli during a discriminative response task (Go/No-go type). Results showed that MSI did not affect motor preparation in premotor areas, while cognitive preparation in the prefrontal cortex was increased and correlated with response accuracy. Early post-stimulus ERP activities were also affected by MSI and correlated with response time. Collectively, the present results point to the plasticity accommodating nature of the MSI processes, which are not limited to perception and extend to anticipatory cognitive preparation for task execution. Further, the enhanced cognitive control emerging during MSI is discussed in the context of Bayesian accounts of augmented predictive processing related to increased perceptual uncertainty.
Collapse
Affiliation(s)
- Stefania Lucia
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy.
| | - Merve Aydin
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy
| | - Valentina Bianco
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy
| | - Linda Fiorini
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy
- IMT School for Advanced Studies, Lucca, Italy
| | - Elena Mussini
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy
- Department of Neuroscience, Imaging and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| | - Francesco Di Russo
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy
- IRCCS Fondazione Santa Lucia, Rome, Italy
| |
Collapse
|
20
|
Wang X, Wu Y, Xing Z, Cui X, Gao M, Tang X. Modal-based attention modulates the redundant-signals effect: Role of unimodal target probability. Perception 2023; 52:97-115. [PMID: 36415087 DOI: 10.1177/03010066221136675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Multisensory integration has two behavioral manifestations: the modality dominance effect and the redundant-signals effect (RSE). The RSE is a multisensory improvement effect in which individuals respond more quickly and accurately to bimodal audiovisual (AV) targets than to unimodal auditory (A) or visual (V) targets. Previous studies have confirmed that the RSE is the product of interactions between different modalities. The goal of this study was to systematically investigate how modality dominance, manipulated through modal-based attention, and unimodal target probability affect the RSE. The results showed that when attention was paid to both the A and V modalities (Exp. 1), the RSE did not differ significantly across unimodal target probabilities. When selectively attending to the A modality (Exp. 2A), the RSE likewise did not differ across unimodal target probabilities. However, when selectively attending to the V modality (Exp. 2B), the magnitude of the RSE decreased significantly as the probability of V targets increased. Our study is the first to reveal that unimodal target probability significantly modulates the RSE under visual selective attention, and that this modulatory effect on the RSE is opposite to its modulatory effect on the modality dominance effect.
Collapse
Affiliation(s)
| | | | | | | | - Min Gao
- 66523Liaoning Normal University, China
| | | |
Collapse
|
21
|
Fisher VL, Dean CL, Nave CS, Parkins EV, Kerkhoff WG, Kwakye LD. Increases in sensory noise predict attentional disruptions to audiovisual speech perception. Front Hum Neurosci 2023; 16:1027335. [PMID: 36684833 PMCID: PMC9846366 DOI: 10.3389/fnhum.2022.1027335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 12/05/2022] [Indexed: 01/06/2023] Open
Abstract
We receive information about the world around us from multiple senses, which combine in a process known as multisensory integration. Multisensory integration has been shown to be dependent on attention; however, the neural mechanisms underlying this effect are poorly understood. The current study investigates whether changes in sensory noise explain the effect of attention on multisensory integration and whether attentional modulations of multisensory integration occur via modality-specific mechanisms. A task based on the McGurk illusion was used to measure multisensory integration while attention was manipulated via a concurrent auditory or visual task. Sensory noise was measured within each modality based on variability in unisensory performance and was used to predict attentional changes to McGurk perception. Consistent with previous studies, reports of the McGurk illusion decreased when accompanied by a secondary task; however, this effect was stronger for the secondary visual (as opposed to auditory) task. While auditory noise was not influenced by either secondary task, visual noise increased specifically with the addition of the secondary visual task. Interestingly, visual noise accounted for significant variability in attentional disruptions to the McGurk illusion. Overall, these results strongly suggest that sensory noise may underlie attentional alterations to multisensory integration in a modality-specific manner. Future studies are needed to determine whether this finding generalizes to other types of multisensory integration and attentional manipulations. This line of research may inform future studies of attentional alterations to sensory processing in disorders such as schizophrenia, autism, and ADHD.
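The sketch below is one hedged reading of that analysis pipeline: per-participant sensory noise is summarized as the variability of unisensory performance, and increases in noise under load are regressed against the drop in McGurk-illusion reports. The variable names and the choice of standard deviation as the noise index are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch: does an increase in visual noise under the secondary task
# predict the attentional disruption of the McGurk illusion?
import numpy as np
from scipy import stats

def noise_change(unisensory_single_task, unisensory_dual_task):
    """Change in unisensory response variability from single- to dual-task blocks."""
    return (np.std(unisensory_dual_task, ddof=1)
            - np.std(unisensory_single_task, ddof=1))

def predict_disruption(delta_visual_noise, delta_mcgurk_reports):
    # Simple linear regression across participants (both inputs are 1-D arrays).
    result = stats.linregress(delta_visual_noise, delta_mcgurk_reports)
    return {"slope": result.slope, "r_squared": result.rvalue ** 2, "p": result.pvalue}
```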
Collapse
Affiliation(s)
- Victoria L. Fisher
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States
- Yale University School of Medicine and the Connecticut Mental Health Center, New Haven, CT, United States
| | - Cassandra L. Dean
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States
- Roche/Genentech Neurodevelopment & Psychiatry Teams Product Development, Neuroscience, South San Francisco, CA, United States
| | - Claire S. Nave
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States
| | - Emma V. Parkins
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States
- Neuroscience Graduate Program, University of Cincinnati, Cincinnati, OH, United States
| | - Willa G. Kerkhoff
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, United States
| | - Leslie D. Kwakye
- Department of Neuroscience, Oberlin College, Oberlin, OH, United States
- *Correspondence: Leslie D. Kwakye,
| |
Collapse
|
22
|
Chang C, Wang E, Yang J, Luan X, Wang A, Zhang M. Differences in eccentricity for sound-induced flash illusion in four visual fields. Perception 2023; 52:56-73. [PMID: 36397675 DOI: 10.1177/03010066221136670] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
A sound-induced flash illusion (SiFI) is a multisensory illusion dominated by auditory stimuli, in which the individual perceives the number of visual flashes as equal to the number of auditory stimuli when flashes are presented along with an unequal number of sounds. Although the mechanisms underlying the fission and fusion illusions have been documented, there is not yet a consensus on how they vary with eccentricity. In the present study, by administering the classic SiFI paradigm at four different eccentricities, we aimed to investigate whether the SiFI varies across eccentricities. The results showed that the fission illusion varied significantly across the four eccentricities, with the perifoveal (7°) and peripheral (11°) illusions being greater than the foveal and parafoveal (3°) illusions. In contrast, the fusion illusion did not vary significantly across the four eccentricities. Our findings revealed that the SiFI was affected by visual field location and that the fission and fusion illusions differed in this respect. Furthermore, by examining the SiFI across eccentricities in the visual field, this study also suggests that bottom-up factors affect the SiFI.
Collapse
Affiliation(s)
| | - Erlei Wang
- The Second Affiliated Hospital of Soochow University, China
| | | | | | | | - Ming Zhang
- 12582Soochow University, China; Okayama University, Japan
| |
Collapse
|
23
|
He Y, Yang T, He C, Sun K, Guo Y, Wang X, Bai L, Xue T, Xu T, Guo Q, Liao Y, Liu X, Wu S. Effects of audiovisual interactions on working memory: Use of the combined N-back + Go/NoGo paradigm. Front Psychol 2023; 14:1080788. [PMID: 36874804 PMCID: PMC9982107 DOI: 10.3389/fpsyg.2023.1080788] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Accepted: 01/27/2023] [Indexed: 02/19/2023] Open
Abstract
Background: Approximately 94% of sensory information acquired by humans originates from the visual and auditory channels. Such information can be temporarily stored and processed in working memory, but this system has limited capacity. Working memory plays an important role in higher cognitive functions and is controlled by central executive function. Therefore, elucidating the influence of the central executive function on information processing in working memory, such as in audiovisual integration, is of great scientific and practical importance.
Purpose: This study used a paradigm that combined N-back and Go/NoGo tasks, using simple Arabic numerals as stimuli, to investigate the effects of cognitive load (modulated by varying the magnitude of N) and audiovisual integration on the central executive function of working memory, as well as their interaction.
Methods: Sixty college students aged 17-21 years were enrolled and performed both unimodal and bimodal tasks to evaluate the central executive function of working memory. The order of the three cognitive tasks was pseudorandomized, and a Latin square design was used to account for order effects. Working memory performance, i.e., reaction time and accuracy, was compared between unimodal and bimodal tasks with repeated-measures analysis of variance (ANOVA).
Results: As cognitive load increased, the presence of auditory stimuli interfered with visual working memory with a moderate to large effect size; similarly, as cognitive load increased, the presence of visual stimuli interfered with auditory working memory with a moderate to large effect size.
Conclusion: Our study supports the theory of competing resources, i.e., that visual and auditory information interfere with each other, and that the magnitude of this interference is primarily related to cognitive load.
Collapse
Affiliation(s)
- Yang He
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
| | - Tianqi Yang
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
| | - Chunyan He
- Department of Nursing, Fourth Military Medical University, Xi'an, China
| | - Kewei Sun
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
| | - Yaning Guo
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
| | - Xiuchao Wang
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
| | - Lifeng Bai
- Faculty of Humanities and Social Sciences, Aviation University of Air Force, Changchun, China
| | - Ting Xue
- Faculty of Humanities and Social Sciences, Aviation University of Air Force, Changchun, China
| | - Tao Xu
- Psychology Section, Secondary Sanatorium of Air Force Healthcare Center for Special Services, Hangzhou, China
| | - Qingjun Guo
- Psychology Section, Secondary Sanatorium of Air Force Healthcare Center for Special Services, Hangzhou, China
| | - Yang Liao
- Air Force Medical Center, Air Force Medical University, Beijing, China
| | - Xufeng Liu
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
| | - Shengjun Wu
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
| |
Collapse
|
24
|
Li Y, Luo M, Zhang X, Wang S. Effects of exogenous and endogenous cues on attentional orienting in deaf adults. Front Psychol 2022; 13:1038468. [PMID: 36275214 PMCID: PMC9584612 DOI: 10.3389/fpsyg.2022.1038468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 09/20/2022] [Indexed: 12/04/2022] Open
Abstract
Adults who are deaf have been shown to have better visual attentional orienting than those with typical hearing, especially when the target is located in the periphery of the visual field. However, most studies in this population have assessed exogenous visual attention orienting (bottom-up processing of external cues) rather than endogenous visual attention orienting (top-down processing of internal cues). We used a target detection task to assess both types of visual attention orienting. A modified cue-target paradigm was adopted to assess the facilitation effects of exogenous and endogenous cues at short and long inter-stimulus intervals (ISIs), using a 2 (Group: deaf/typically hearing) × 2 (Location: central/peripheral) × 2 (Cue Type: exogenous/endogenous) mixed factorial design. ANOVAs showed that both exogenous and endogenous cues can facilitate deaf adults' visual attentional orienting, and that the facilitation effect of exogenous cues on attentional orienting was significantly stronger for deaf participants than for hearing participants. When the ISI was long, the effect was significantly stronger when the exogenous cue appeared in the periphery of the visual field. In the periphery, deaf adults benefited most from exogenous cues, whereas hearing adults benefited most from endogenous cues. The results suggest that both exogenous and endogenous cues can facilitate deaf adults' visual attentional orienting; however, the effect of exogenous cues appears to be greater, especially when the stimulus appears in the peripheral visual field.
Collapse
Affiliation(s)
- Yunsong Li
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, China
| | - Meili Luo
- School of Psychology, South China Normal University, Guangzhou, Guangdong, China
| | - Xilin Zhang
- Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, China
- School of Psychology, South China Normal University, Guangzhou, Guangdong, China
| | - Suiping Wang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, Guangdong, China
- *Correspondence: Suiping Wang,
| |
Collapse
|
25
|
Audiovisual Emotional Congruency Modulates the Stimulus-Driven Cross-Modal Spread of Attention. Brain Sci 2022; 12:brainsci12091229. [PMID: 36138965 PMCID: PMC9497153 DOI: 10.3390/brainsci12091229] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2022] [Revised: 09/04/2022] [Accepted: 09/07/2022] [Indexed: 11/18/2022] Open
Abstract
It has been reported that attention to stimuli in the visual modality can spread to task-irrelevant but synchronously presented stimuli in the auditory modality, a phenomenon termed the cross-modal spread of attention, which can be either stimulus-driven or representation-driven depending on whether the visual constituent of an audiovisual object is further selected based on the object representation. The stimulus-driven spread of attention occurs whenever a task-irrelevant sound synchronizes with an attended visual stimulus, regardless of cross-modal semantic congruency. The present study recorded event-related potentials (ERPs) to investigate whether the stimulus-driven cross-modal spread of attention could be modulated by audiovisual emotional congruency in a visual oddball task in which emotion (positive/negative) was task-irrelevant. The results first demonstrated a prominent stimulus-driven spread of attention regardless of audiovisual emotional congruency: for all audiovisual pairs, the extracted ERPs to the auditory constituents of audiovisual stimuli within the 200–300 ms time window were significantly larger than the ERPs to the same auditory stimuli delivered alone. However, the amplitude of this stimulus-driven auditory Nd component during 200–300 ms was significantly larger for emotionally incongruent than congruent audiovisual stimuli when the emotional valence of their visual constituents was negative. Moreover, the Nd was sustained during 300–400 ms only for incongruent audiovisual stimuli with emotionally negative visual constituents. These findings suggest that although the occurrence of the stimulus-driven cross-modal spread of attention is independent of audiovisual emotional congruency, its magnitude is nevertheless modulated even when emotion is task-irrelevant.
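As a rough illustration of the subtraction logic described above (a sketch under assumed channel-by-time ERP arrays, not the authors' pipeline), the auditory response to an audiovisual stimulus can be extracted as ERP(AV) minus ERP(V) and then compared with the ERP to the sound presented alone to quantify the Nd in a given time window.

```python
# Sketch: extract the auditory constituent of an AV response and measure the Nd.
# erp_* are hypothetical (n_channels x n_samples) averages sampled at sfreq Hz,
# time-locked so that sample 0 corresponds to stimulus onset.
import numpy as np

def extracted_auditory_erp(erp_av, erp_v):
    return erp_av - erp_v                       # ERP(AV) - ERP(V)

def nd_amplitude(erp_extracted, erp_a_alone, sfreq, window=(0.200, 0.300)):
    start, stop = (int(round(t * sfreq)) for t in window)
    diff = erp_extracted - erp_a_alone          # extracted minus sound-alone
    return diff[:, start:stop].mean()           # mean Nd amplitude, 200-300 ms
```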
Collapse
|
26
|
Cho YJ, Yum JY, Kim K, Shin B, Eom H, Hong YJ, Heo J, Kim JJ, Lee HS, Kim E. Evaluating attention deficit hyperactivity disorder symptoms in children and adolescents through tracked head movements in a virtual reality classroom: The effect of social cues with different sensory modalities. Front Hum Neurosci 2022; 16:943478. [PMID: 35992945 PMCID: PMC9386071 DOI: 10.3389/fnhum.2022.943478] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Accepted: 07/15/2022] [Indexed: 11/22/2022] Open
Abstract
Background: Attention deficit hyperactivity disorder (ADHD) is diagnosed clinically; however, quantitative analysis that statistically relates symptom severity in children with ADHD to measured head movement is still in progress. Studies focusing on the cues that may influence the attention of children with ADHD in classroom settings, where children spend a considerable amount of time, are relatively scarce. Virtual reality allows real-life simulation of classroom environments and thus provides an opportunity to test a range of theories in a naturalistic and controlled manner. The objective of this study was to investigate the correlation between participants' head movements and their reports of inattention and hyperactivity, and to investigate how their head movements are affected by social cues of different sensory modalities.
Methods: Thirty-seven children and adolescents with (n = 20) and without (n = 17) ADHD were recruited for this study. All participants were assessed for diagnoses, clinical symptoms, and self-reported symptoms. A virtual reality continuous performance test (VR-CPT) was conducted under four conditions: (1) control, (2) no cue, (3) visual cue, and (4) visual/audio cue. A quantitative comparison of the participants' head movements was conducted in three dimensions (pitch [head nods], yaw [head turns], and roll [lateral head inclinations]) using a head-mounted display (HMD) in a VR classroom environment. Task-irrelevant head movements were analyzed separately, considering the dimension of movement needed to perform the VR-CPT.
Results: The magnitude of head movement, especially task-irrelevant head movement, correlated significantly with the current standard of clinical assessment in the ADHD group. Across the four conditions, head movement changed according to the complexity of social cues in both the ADHD and healthy control (HC) groups.
Conclusion: Children and adolescents with ADHD showed decreasing task-irrelevant movements in the presence of social stimuli toward the intended orientation. As a proof-of-concept study, this work preliminarily identifies the potential of VR as a tool to understand and investigate the classroom behavior of children with ADHD in a controlled, systematic manner.
Collapse
Affiliation(s)
- Yoon Jae Cho
- Department of Psychiatry, Yonsei University College of Medicine, Seoul, South Korea
| | - Jung Yon Yum
- Department of Neurology, Yonsei University College of Medicine, Seoul, South Korea
| | - Kwanguk Kim
- Department of Computer Science, Hanyang University, Seoul, South Korea
| | - Bokyoung Shin
- Department of Psychiatry, Yonsei University College of Medicine, Seoul, South Korea
| | - Hyojung Eom
- Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Seoul, South Korea
| | - Yeon-ju Hong
- Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Seoul, South Korea
| | - Jiwoong Heo
- Department of Computer Science, Hanyang University, Seoul, South Korea
| | - Jae-jin Kim
- Department of Psychiatry, Yonsei University College of Medicine, Seoul, South Korea
- Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Seoul, South Korea
| | - Hye Sun Lee
- Biostatistics Collaboration Unit, Department of Research Affairs, Yonsei University College of Medicine, Seoul, South Korea
| | - Eunjoo Kim
- Department of Psychiatry, Yonsei University College of Medicine, Seoul, South Korea
- Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Seoul, South Korea
- Department of Psychiatry, Yonsei University College of Medicine, Gangnam Severance Hospital, Seoul, South Korea
- *Correspondence: Eunjoo Kim,
| |
Collapse
|
27
|
Ren Q, Marshall AC, Kaiser J, Schütz-Bosbach S. Multisensory Integration of Anticipated Cardiac Signals with Visual Targets Affects Their Detection among Multiple Visual Stimuli. Neuroimage 2022; 262:119549. [DOI: 10.1016/j.neuroimage.2022.119549] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 07/29/2022] [Accepted: 08/04/2022] [Indexed: 11/17/2022] Open
|
28
|
Are auditory cues special? Evidence from cross-modal distractor-induced blindness. Atten Percept Psychophys 2022; 85:889-904. [PMID: 35902451 PMCID: PMC10066119 DOI: 10.3758/s13414-022-02540-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/08/2022] [Indexed: 11/08/2022]
Abstract
A target that shares features with preceding distractor stimuli is less likely to be detected due to a distractor-driven activation of a negative attentional set. This transient impairment in perceiving the target (distractor-induced blindness/deafness) can be found within vision and audition. Recently, the phenomenon was observed in a cross-modal setting involving an auditory target and additional task-relevant visual information (cross-modal distractor-induced deafness). In the current study, consisting of three behavioral experiments, a visual target, indicated by an auditory cue, had to be detected despite the presence of visual distractors. Multiple distractors consistently led to reduced target detection if cue and target appeared in close temporal proximity, confirming cross-modal distractor-induced blindness. However, the effect on target detection was reduced compared to the effect of cross-modal distractor-induced deafness previously observed for reversed modalities. The physical features defining cue and target could not account for the diminished distractor effect in the current cross-modal task. Instead, this finding may be attributed to the auditory cue acting as an especially efficient release signal of the distractor-induced inhibition. Additionally, a multisensory enhancement of visual target detection by the concurrent auditory signal might have contributed to the reduced distractor effect.
Collapse
|
29
|
Janyan A, Shtyrov Y, Andriushchenko E, Blinova E, Shcherbakova O. Look and ye shall hear: Selective auditory attention modulates the audiovisual correspondence effect. Iperception 2022; 13:20416695221095884. [PMID: 35646302 PMCID: PMC9134444 DOI: 10.1177/20416695221095884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Accepted: 04/04/2022] [Indexed: 11/26/2022] Open
Abstract
One of the unresolved questions in multisensory research is that of the automaticity of consistent associations between sensory features from different modalities (e.g., high visual locations associated with high sound pitch). We addressed this issue by examining a possible role of selective attention in the audiovisual correspondence effect. We orthogonally manipulated loudness and pitch, directing participants' attention to the auditory modality only and using pitch and loudness identification tasks. Visual stimuli in high, low or central spatial locations appeared simultaneously with the sounds. If the correspondence effect is automatic, it should not be affected by task changes. The results, however, demonstrated a cross-modal pitch-verticality correspondence effect only when participants' attention was directed to the pitch, but not the loudness, identification task; moreover, the effect was present only in the upper location. The findings underscore the involvement of selective attention in cross-modal associations and support a top-down account of audiovisual correspondence effects.
Collapse
Affiliation(s)
| | | | | | - Ekaterina Blinova
- Laboratory of Behavioural Neurodynamics, Saint Petersburg State University, Saint Petersburg, Russia
- Department of General Psychology, Faculty of Psychology, Saint Petersburg State University, Saint Petersburg, Russia
| | - Olga Shcherbakova
- Laboratory of Behavioural Neurodynamics, Saint Petersburg State University, Saint Petersburg, Russia
- Department of General Psychology, Faculty of Psychology, Saint Petersburg State University, Saint Petersburg, Russia
| |
Collapse
|
30
|
The Role of the Interaction between the Inferior Parietal Lobule and Superior Temporal Gyrus in the Multisensory Go/No-go Task. Neuroimage 2022; 254:119140. [PMID: 35342002 DOI: 10.1016/j.neuroimage.2022.119140] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 03/19/2022] [Accepted: 03/22/2022] [Indexed: 11/23/2022] Open
Abstract
Information from multiple sensory modalities interacts. Using functional magnetic resonance imaging (fMRI), we aimed to identify the neural structures correlated with how a co-occurring sound modulates visual motor response execution. The reaction time (RT) to audiovisual stimuli was significantly faster than the RT to visual stimuli. Signal detection analyses showed no significant difference in perceptual sensitivity (d') between audiovisual and visual stimuli, while the response criterion (β or c) for audiovisual stimuli was lower than that for visual stimuli. The functional connectivity between the left inferior parietal lobule (IPL) and bilateral superior temporal gyrus (STG) was enhanced in Go processing compared with No-go processing of audiovisual stimuli. Furthermore, the left precentral gyrus (PreCG) showed enhanced functional connectivity with the bilateral STG and other areas of the ventral stream in Go processing compared with No-go processing of audiovisual stimuli. These results revealed the neuronal network correlated with modulation of motor response execution when visual stimuli are accompanied by a co-occurring sound in a multisensory Go/No-go task, comprising the left IPL, left PreCG, bilateral STG, and areas of the ventral stream. The role of the interaction between the IPL and STG in transforming audiovisual information into motor behavior is discussed. The current study provides a new perspective for exploring the potential brain mechanisms underlying how humans execute appropriate behaviors on the basis of multisensory information.
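For reference, the two signal-detection measures mentioned above can be computed from hit and false-alarm rates as in the sketch below; the small-sample correction used here is an assumption for illustration, not necessarily the one used in the study.

```python
# Sketch of perceptual sensitivity (d') and response criterion (c) from a
# Go/No-go task's hit rate and false-alarm rate.
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate, eps=1e-3):
    # Clamp extreme rates so the z-transform stays finite.
    hit_rate = min(max(hit_rate, eps), 1 - eps)
    fa_rate = min(max(fa_rate, eps), 1 - eps)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa                # sensitivity
    criterion_c = -0.5 * (z_hit + z_fa)   # lower (more negative) = more liberal
    return d_prime, criterion_c

# Example: sdt_measures(0.95, 0.10) -> roughly (2.93, -0.18)
```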
Collapse
|
31
|
Tang X, Yuan M, Shi Z, Gao M, Ren R, Wei M, Gao Y. Multisensory integration attenuates visually induced oculomotor inhibition of return. J Vis 2022; 22:7. [PMID: 35297999 PMCID: PMC8944392 DOI: 10.1167/jov.22.4.7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Inhibition of return (IOR) is a mechanism of the attention system involving a bias toward novel stimuli and delayed responses to targets at previously attended locations. According to the two-component theory, IOR consists of a perceptual component and an oculomotor component (oculomotor IOR [O-IOR]), depending on whether the eye movement system is activated. Previous studies have shown that multisensory integration weakens IOR when attention is paid to both the visual and auditory modalities. However, it remains unclear whether the O-IOR effect is also attenuated by multisensory integration when the oculomotor system is activated. Here, using two eye movement experiments, we investigated the effect of multisensory integration on O-IOR using the exogenous spatial cueing paradigm. In Experiment 1, we found a greater visual O-IOR effect compared with audiovisual and auditory O-IOR under divided modality attention. The relative multisensory response enhancement (rMRE) and violations of Miller's bound showed a greater magnitude of multisensory integration at the cued location than at the uncued location. In Experiment 2, the magnitude of the audiovisual O-IOR effect was significantly smaller than that of the visual O-IOR under single visual modality selective attention. Implications of the effect of multisensory integration on O-IOR under conditions of oculomotor system activation are discussed, shedding new light on the two-component theory of IOR.
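As a point of reference, the rMRE index mentioned above is commonly computed from condition-mean response times as in the sketch below; I assume the study followed this conventional definition, and the example values are invented.

```python
# Sketch of the relative multisensory response enhancement (rMRE), in percent.
def relative_mre(mean_rt_av, mean_rt_a, mean_rt_v):
    fastest_unisensory = min(mean_rt_a, mean_rt_v)
    return 100.0 * (fastest_unisensory - mean_rt_av) / fastest_unisensory

# Example: relative_mre(310, 355, 340) -> ~8.8% enhancement at this location.
```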
Collapse
Affiliation(s)
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China.,
| | - Mengying Yuan
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China.,
| | - Zhongyu Shi
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China.,
| | - Min Gao
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China.,
| | - Rongxia Ren
- Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan.,
| | - Ming Wei
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China.,
| | - Yulin Gao
- Department of Psychology, Jilin University, Changchun, China.,
| |
Collapse
|
32
|
Wang L, Lin L, Sun Y, Hou S, Ren J. The effect of movement speed on audiovisual temporal integration in streaming-bouncing illusion. Exp Brain Res 2022; 240:1139-1149. [PMID: 35147722 DOI: 10.1007/s00221-022-06312-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 01/18/2022] [Indexed: 11/04/2022]
Abstract
Motion perception in real situations is often driven by multisensory information. Speed is an essential characteristic of moving objects; however, it is currently unclear whether speed affects audiovisual temporal integration in motion perception. Therefore, this study used a streaming-bouncing task (a bistable motion perception task; SB task) combined with a simultaneity judgment task (SJ task) to explore the effect of speed on audiovisual temporal integration from implicit and explicit perspectives. The experiment had a within-subjects design with two speed conditions (fast/slow), eleven audiovisual conditions [stimulus onset asynchrony (SOA): 0 ms/±60 ms/±120 ms/±180 ms/±240 ms/±300 ms], and a visual-only condition. A total of 30 participants were recruited and completed the SB task and the SJ task in succession. The results showed that (1) the optimal timing for inducing the "bouncing" illusion and the maximum audiovisual bounce-inducing effect (ABE) magnitude occurred much earlier than the optimal timing for perceived audiovisual synchrony; (2) speed, as a bottom-up factor, affected the proportion of "bouncing" percepts in the SB illusion but did not affect the ABE magnitude; (3) speed also affected audiovisual temporal integration in motion perception, the main manifestation being that the point of subjective simultaneity (PSS) in the fast condition was earlier than in the slow condition in the SJ task; and (4) performance on the SB task and the SJ task was not related. In conclusion, the time needed to achieve maximum audiovisual integration differed from the optimal time for synchrony perception; moreover, speed affected audiovisual temporal integration in motion perception, but only in explicit temporal tasks.
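As an illustration of how the PSS reported above is typically estimated from simultaneity-judgment data, the sketch below fits a Gaussian to the proportion of "simultaneous" responses across the SOAs listed in the abstract; the response proportions are invented for illustration and the fitting choices are assumptions, not the authors' analysis.

```python
# Sketch: estimate the point of subjective simultaneity (PSS) from SJ-task data.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

soas = np.array([-300, -240, -180, -120, -60, 0, 60, 120, 180, 240, 300], float)
p_simultaneous = np.array([.05, .10, .25, .55, .80, .90, .85, .60, .30, .12, .06])

(amp, pss, sigma), _ = curve_fit(gaussian, soas, p_simultaneous, p0=[1.0, 0.0, 100.0])
# pss is the fitted center (the PSS); sigma indexes the width of the temporal
# binding window. An earlier pss in the fast condition would reproduce the
# pattern described in the abstract.
```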
Collapse
Affiliation(s)
- Luning Wang
- School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
| | - Liyue Lin
- School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
| | - Yujia Sun
- China Table Tennis College, Shanghai University of Sport, Shanghai, 200438, China
| | - Shuang Hou
- School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
| | - Jie Ren
- China Table Tennis College, Shanghai University of Sport, Shanghai, 200438, China.
| |
Collapse
|
33
|
Fu J, Guo X, Tang X, Wang A, Zhang M, Gao Y, Seno T. The Effects of Bilateral and Ipsilateral Auditory Stimuli on the Subcomponents of Visual Attention. Iperception 2022; 12:20416695211058222. [PMID: 34987747 PMCID: PMC8721886 DOI: 10.1177/20416695211058222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Accepted: 10/18/2021] [Indexed: 11/16/2022] Open
Abstract
Attention comprises three functional network subcomponents: alerting, orienting, and executive control. The attention network test (ANT) is commonly used to measure the efficiency of these three subcomponents. Previous research has focused on examining unimodal attention with visual or auditory ANT paradigms. However, it is still unclear how an auditory stimulus influences the visual attention networks. This study investigated the effects of bilateral auditory stimuli (Experiment 1) and an ipsilateral auditory stimulus (Experiment 2) on the visual attention subcomponents. We employed an ANT paradigm and manipulated the target modality (visual vs. audiovisual). Participants were instructed to report the direction of a central arrow surrounded by distractor arrows. In Experiment 1, we found that simultaneous bilateral auditory stimuli reduced the efficiency of visual alerting and orienting but had no significant effect on the efficiency of visual executive control. In Experiment 2, the ipsilateral auditory stimulus reduced the efficiency of visual executive control but had no significant effect on the efficiency of visual alerting and orienting. We also observed a reduced relative multisensory response enhancement (rMRE) effect in the cue condition relative to the no-cue condition (Experiment 1), and an increased rMRE effect in the congruent condition compared with the incongruent condition (Experiment 2). These results provide the first evidence for alerting, orienting, and executive control effects under audiovisual conditions, and show that bilateral and ipsilateral auditory stimuli have different effects on the subcomponents of visual attention.
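For context, the efficiency of the three attention networks in an ANT is conventionally computed from condition-mean reaction times via the subtraction contrasts sketched below; I assume the present study used these standard contrasts, and the condition names and example values are illustrative.

```python
# Sketch of the standard ANT network scores (ms), computed from mean RTs.
def ant_scores(rt):
    """rt: dict of condition-mean RTs in ms."""
    alerting = rt["no_cue"] - rt["double_cue"]        # benefit of temporal warning
    orienting = rt["center_cue"] - rt["spatial_cue"]  # benefit of spatial information
    executive = rt["incongruent"] - rt["congruent"]   # flanker-conflict cost
    return alerting, orienting, executive

# Example: ant_scores({"no_cue": 610, "double_cue": 575, "center_cue": 585,
#                      "spatial_cue": 545, "incongruent": 650, "congruent": 560})
# -> (35, 40, 90)
```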
Collapse
Affiliation(s)
- Jing Fu
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
| | - Xuanru Guo
- Faculty of Design, Kyushu University, Minami-ku, Fukuoka, Japan
| | - Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
| | | | - Ming Zhang
- Department of Psychology, Soochow University, Suzhou, China
| | - Yulin Gao
- Department of Psychology, Jilin University, Changchun, China
| | - Takeharu Seno
- Faculty of Design, Kyushu University, Minami-ku, Fukuoka, Japan
| |
Collapse
|
34
|
Abstract
In the rapid serial visual presentation (RSVP) paradigm, response accuracy for a target decreases when it appears within a short time window (200–500 ms) after the previous target, a phenomenon termed the attentional blink (AB). Although mechanisms of cross-modal processing that reduce the AB have been documented, researchers have not explored the differences across modal attentional conditions. In the present study, we used the RSVP paradigm to investigate the effect of auditory-driven visual target perceptual enhancement on the AB under modality-specific selective attention (Experiment 1) and bimodal divided attention (Experiment 2). The results showed that cross-modal attentional enhancement was not moderated by stimulus salience. Moreover, accuracy was higher when the attended sound appeared simultaneously with the target. These results indicate that audiovisual enhancement reduced the AB and that the stronger attentional enhancement in the bimodal divided attention condition led to the disappearance of the AB.
Collapse
|
35
|
Zhao S, Li Y, Wang C, Feng C, Feng W. Updating the dual-mechanism model for cross-sensory attentional spreading: The influence of space-based visual selective attention. Hum Brain Mapp 2021; 42:6038-6052. [PMID: 34553806 PMCID: PMC8596974 DOI: 10.1002/hbm.25668] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 08/24/2021] [Accepted: 09/14/2021] [Indexed: 11/08/2022] Open
Abstract
Selective attention to visual stimuli can spread cross‐modally to task‐irrelevant auditory stimuli through either the stimulus‐driven binding mechanism or the representation‐driven priming mechanism. The stimulus‐driven attentional spreading occurs whenever a task‐irrelevant sound is delivered simultaneously with a spatially attended visual stimulus, whereas the representation‐driven attentional spreading occurs only when the object representation of the sound is congruent with that of the to‐be‐attended visual object. The current study recorded event‐related potentials in a space‐selective visual object‐recognition task to examine the exact roles of space‐based visual selective attention in both the stimulus‐driven and representation‐driven cross‐modal attentional spreading, which remain controversial in the literature. Our results yielded that the representation‐driven auditory Nd component (200–400 ms after sound onset) did not differ according to whether the peripheral visual representations of audiovisual target objects were spatially attended or not, but was decreased when the auditory representations of target objects were presented alone. In contrast, the stimulus‐driven auditory Nd component (200–300 ms) was decreased but still prominent when the peripheral visual constituents of audiovisual nontarget objects were spatially unattended. These findings demonstrate not only that the representation‐driven attentional spreading is independent of space‐based visual selective attention and benefits in an all‐or‐nothing manner from object‐based visual selection for actually presented visual representations of target objects, but also that although the stimulus‐driven attentional spreading is modulated by space‐based visual selective attention, attending to visual modality per se is more likely to be the endogenous determinant of the stimulus‐driven attentional spreading.
Collapse
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Department of English, School of Foreign Languages, Soochow University, Suzhou, Jiangsu, China
| | - Yang Li
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
| | - Chongzhi Wang
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
| | - Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
| | - Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu, China
| |
Collapse
|
36
|
Multisensory stimuli shift perceptual priors to facilitate rapid behavior. Sci Rep 2021; 11:23052. [PMID: 34845325 PMCID: PMC8629992 DOI: 10.1038/s41598-021-02566-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 11/16/2021] [Indexed: 11/08/2022] Open
Abstract
Multisensory stimuli speed behavioral responses, but the mechanisms subserving these effects remain disputed. Historically, the observation that multisensory reaction times (RTs) outpace models assuming independent sensory channels has been taken as evidence for multisensory integration (the "redundant target effect"; RTE). However, this interpretation has been challenged by alternative explanations based on stimulus sequence effects, RT variability, and/or negative correlations in unisensory processing. To clarify the mechanisms subserving the RTE, we collected RTs from 78 undergraduates in a multisensory simple RT task. Based on previous neurophysiological findings, we hypothesized that the RTE was unlikely to reflect these alternative mechanisms, and more likely reflected pre-potentiation of sensory responses through crossmodal phase-resetting. Contrary to accounts based on stimulus sequence effects, we found that preceding stimuli explained only 3-9% of the variance in apparent RTEs. Comparing three plausible evidence accumulator models, we found that multisensory RT distributions were best explained by increased sensory evidence at stimulus onset. Because crossmodal phase-resetting increases cortical excitability before sensory input arrives, these results are consistent with a mechanism based on pre-potentiation through phase-resetting. Mathematically, this model entails increasing the prior log-odds of stimulus presence, providing a potential link between neurophysiological, behavioral, and computational accounts of multisensory interactions.
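A toy sketch of the mechanism the authors favor is given below: raising the prior log-odds of stimulus presence is modeled as a higher starting point in a simple one-boundary accumulator, which speeds simulated responses without any change in drift rate. All parameter values are arbitrary illustrations, not fitted estimates from the study.

```python
# Toy accumulator: a starting-point (prior) shift speeds RTs with identical drift.
import numpy as np

def simulate_rts(drift, start, threshold=1.0, noise=1.0, dt=0.001, n_trials=500,
                 seed=0):
    rng = np.random.default_rng(seed)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = start, 0.0
        while x < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t
    return rts

rt_unisensory = simulate_rts(drift=2.0, start=0.0)
rt_multisensory = simulate_rts(drift=2.0, start=0.3)   # higher prior log-odds
# rt_multisensory.mean() < rt_unisensory.mean(), mimicking the redundant target effect.
```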
Collapse
|
37
|
Kvamme TL, Sarmanlu M, Bailey C, Overgaard M. Neurofeedback Modulation of the Sound-induced Flash Illusion Using Parietal Cortex Alpha Oscillations Reveals Dependency on Prior Multisensory Congruency. Neuroscience 2021; 482:1-17. [PMID: 34838934 DOI: 10.1016/j.neuroscience.2021.11.028] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Revised: 11/12/2021] [Accepted: 11/19/2021] [Indexed: 01/27/2023]
Abstract
Spontaneous neural oscillations are key predictors of perceptual decisions to bind multisensory signals into a unified percept. Research links decreased alpha power in the posterior cortices to attention and audiovisual binding in the sound-induced flash illusion (SIFI) paradigm. This suggests that controlling alpha oscillations would be a way of controlling audiovisual binding. In the present feasibility study, we used MEG neurofeedback to train one group of subjects to increase the left/right, and another the right/left, parietal alpha power ratio. We tested for changes in audiovisual binding in a SIFI paradigm in which flashes appeared in both hemifields. Results showed that the neurofeedback induced a significant asymmetry in alpha power for the left/right group, not seen for the right/left group. Corresponding asymmetry changes in audiovisual binding in illusion trials (with 2, 3, and 4 beeps paired with 1 flash) were not apparent. Exploratory analyses showed that neurofeedback training effects were present for illusion trials with the lowest numeric disparity (i.e., 2 beeps and 1 flash) only if the previous trial had high congruency (2 beeps and 2 flashes). Our data suggest that the effect of parietal alpha power (an index of attention) on audiovisual binding depends on the causal structure learned from the previous stimulus. The present results suggest that low alpha power biases observers toward audiovisual binding when they have learned that audiovisual signals originate from a common source, consistent with a Bayesian causal inference account of multisensory perception.
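As a rough sketch of the kind of feedback signal such a protocol could use (the channel selection, band limits, and spectral-estimation settings below are assumptions, not the authors' MEG pipeline), a left/right parietal alpha-power ratio can be computed as follows.

```python
# Sketch: left/right parietal alpha-power ratio as a neurofeedback value.
import numpy as np
from scipy.signal import welch

def alpha_power(signal, sfreq, band=(8.0, 12.0)):
    freqs, psd = welch(signal, fs=sfreq, nperseg=int(2 * sfreq))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()                 # mean 8-12 Hz power density

def alpha_ratio(left_parietal, right_parietal, sfreq):
    """> 1 when left-hemisphere alpha power exceeds right-hemisphere alpha power."""
    return alpha_power(left_parietal, sfreq) / alpha_power(right_parietal, sfreq)
```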
Collapse
Affiliation(s)
- Timo L Kvamme
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark.
| | - Mesud Sarmanlu
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
| | - Christopher Bailey
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
| | - Morten Overgaard
- Cognitive Neuroscience Research Unit, CFIN/MINDLab, Aarhus University, Aarhus, Denmark
| |
Collapse
|
38
|
Precision control for a flexible body representation. Neurosci Biobehav Rev 2021; 134:104401. [PMID: 34736884 DOI: 10.1016/j.neubiorev.2021.10.023] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Revised: 10/20/2021] [Accepted: 10/21/2021] [Indexed: 11/24/2022]
Abstract
Adaptive body representation requires the continuous integration of multisensory inputs within a flexible 'body model' in the brain. The present review evaluates the idea that this flexibility is augmented by the contextual, top-down modulation of sensory processing, which can be described as precision control within predictive coding formulations of Bayesian inference. Specifically, I focus on the proposal that an attenuation of proprioception may facilitate the integration of conflicting visual and proprioceptive bodily cues. First, I review empirical work suggesting that the processing of visual versus proprioceptive body position information can be contextualised top-down, for instance by adopting specific attentional task sets. Building on this, I review research showing a similar contextualisation of visual versus proprioceptive information processing in the rubber hand illusion and in visuomotor adaptation. Together, the reviewed literature suggests that proprioception, despite its indisputable importance for body perception and action control, can be attenuated top-down (through precision control) to facilitate the contextual adaptation of the brain's body model to novel visual feedback.
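A worked sketch of the precision-control idea may help: in reliability-weighted (Bayesian) cue combination, attenuating proprioception amounts to down-weighting its precision, which pulls the combined body-position estimate toward the visual cue. The function and the attenuation parameter below are illustrative, not a model taken from the review.

```python
# Sketch: precision-weighted fusion of visual and proprioceptive position estimates.
def integrate(mu_vis, var_vis, mu_prop, var_prop, prop_attenuation=1.0):
    w_vis = 1.0 / var_vis                         # precision = inverse variance
    w_prop = (1.0 / var_prop) / prop_attenuation  # attenuation > 1 down-weights proprioception
    mu_hat = (w_vis * mu_vis + w_prop * mu_prop) / (w_vis + w_prop)
    var_hat = 1.0 / (w_vis + w_prop)
    return mu_hat, var_hat

# integrate(0.0, 1.0, 10.0, 1.0) -> (5.0, 0.5); with prop_attenuation=4.0 the
# estimate shifts toward vision: (2.0, 0.8).
```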
Collapse
|
39
|
Peng X, Tang X, Jiang H, Wang A, Zhang M, Chang R. Inhibition of Return Decreases Early Audiovisual Integration: An Event-Related Potential Study. Front Hum Neurosci 2021; 15:712958. [PMID: 34690717 PMCID: PMC8526535 DOI: 10.3389/fnhum.2021.712958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Accepted: 09/10/2021] [Indexed: 11/25/2022] Open
Abstract
Previous behavioral studies have found that inhibition of return decreases audiovisual integration, but the underlying neural mechanisms are unknown. The current work utilized the high temporal resolution of event-related potentials (ERPs) to investigate how audiovisual integration is modulated by inhibition of return. We employed the cue-target paradigm and manipulated target type and cue validity. Participants were required to detect visual (V), auditory (A), or audiovisual (AV) targets presented on the same side as the preceding exogenous cue (valid cue) or on the opposite side (invalid cue). Neural activity elicited by AV targets was compared with the sum of the activity elicited by A and V targets, and their difference was taken as the audiovisual integration effect under each cue validity condition (valid, invalid). The ERP results showed a significant super-additive audiovisual integration effect on the P70 (60–90 ms, frontal-central) only under the invalid cue condition. Significant audiovisual integration effects were observed on the N1 and P2 components (N1, 120–180 ms, frontal-central-parietal; P2, 200–260 ms, frontal-central-parietal) under both the valid and invalid cue conditions. There were no significant differences between the invalid and valid cue conditions on later components. The results offer the first neural demonstration that inhibition of return modulates the early stage of audiovisual integration.
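To make the additive criterion above concrete, the sketch below compares the AV response with the sum of the A and V responses in a chosen latency window; the arrays, sampling rate, and window are illustrative assumptions (the 60-90 ms window mirrors the P70 effect described in the abstract), not the authors' code.

```python
# Sketch: super-additivity test, AV versus (A + V), in a given latency window.
# erp_* are hypothetical (n_trials x n_channels x n_samples) arrays at sfreq Hz,
# time-locked so that sample 0 is stimulus onset.
import numpy as np

def superadditivity(erp_av, erp_a, erp_v, sfreq, window=(0.060, 0.090)):
    start, stop = (int(round(t * sfreq)) for t in window)
    diff = erp_av.mean(axis=0) - (erp_a.mean(axis=0) + erp_v.mean(axis=0))
    return diff[:, start:stop].mean()   # positive = super-additive integration
```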
Collapse
Affiliation(s)
- Xing Peng
- Institute of Aviation Human Factors and Ergonomics, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
| | - Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
| | - Hao Jiang
- Institute of Aviation Human Factors and Ergonomics, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
| | - Aijun Wang
- Department of Psychology, Soochow University, Suzhou, China
| | - Ming Zhang
- Department of Psychology, Soochow University, Suzhou, China
| | - Ruosong Chang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
| |
Collapse
|
40
|
Vastano R, Costantini M, Widerstrom-Noga E. Maladaptive reorganization following SCI: The role of body representation and multisensory integration. Prog Neurobiol 2021; 208:102179. [PMID: 34600947 DOI: 10.1016/j.pneurobio.2021.102179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 09/08/2021] [Accepted: 09/24/2021] [Indexed: 10/20/2022]
Abstract
In this review we focus on maladaptive brain reorganization after spinal cord injury (SCI), including the development of neuropathic pain, and its relationship with impairments in body representation and multisensory integration. We will discuss the implications of altered sensorimotor interactions after SCI with and without neuropathic pain and possible deficits in multisensory integration and body representation. Within this framework we will examine published research findings focused on the use of bodily illusions to manipulate multisensory body representation to induce analgesic effects in heterogeneous chronic pain populations and in SCI-related neuropathic pain. We propose that the development and intensification of neuropathic pain after SCI is partly dependent on brain reorganization associated with dysfunctional multisensory integration processes and distorted body representation. We conclude this review by suggesting future research avenues that may lead to a better understanding of the complex mechanisms underlying the sense of the body after SCI, with a focus on cortical changes.
Collapse
Affiliation(s)
- Roberta Vastano
- University of Miami, Department of Neurological Surgery, The Miami Project to Cure Paralysis, Miami, FL, USA.
| | - Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy.
| | - Eva Widerstrom-Noga
- University of Miami, Department of Neurological Surgery, The Miami Project to Cure Paralysis, Miami, FL, USA.
| |
Collapse
|
41
|
Almadori E, Mastroberardino S, Botta F, Brunetti R, Lupiáñez J, Spence C, Santangelo V. Crossmodal Semantic Congruence Interacts with Object Contextual Consistency in Complex Visual Scenes to Enhance Short-Term Memory Performance. Brain Sci 2021; 11:brainsci11091206. [PMID: 34573227 PMCID: PMC8467083 DOI: 10.3390/brainsci11091206] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Revised: 08/30/2021] [Accepted: 09/09/2021] [Indexed: 11/17/2022] Open
Abstract
Object sounds can enhance the attentional selection and perceptual processing of semantically-related visual stimuli. However, it is currently unknown whether crossmodal semantic congruence also affects the post-perceptual stages of information processing, such as short-term memory (STM), and whether this effect is modulated by the object consistency with the background visual scene. In two experiments, participants viewed everyday visual scenes for 500 ms while listening to an object sound, which could either be semantically related to the object that served as the STM target at retrieval or not. This defined crossmodal semantically cued vs. uncued targets. The target was either in- or out-of-context with respect to the background visual scene. After a maintenance period of 2000 ms, the target was presented in isolation against a neutral background, in either the same or different spatial position as in the original scene. The participants judged the same vs. different position of the object and then provided a confidence judgment concerning the certainty of their response. The results revealed greater accuracy when judging the spatial position of targets paired with a semantically congruent object sound at encoding. This crossmodal facilitatory effect was modulated by whether the target object was in- or out-of-context with respect to the background scene, with out-of-context targets reducing the facilitatory effect of object sounds. Overall, these findings suggest that the presence of the object sound at encoding facilitated the selection and processing of the semantically related visual stimuli, but this effect depends on the semantic configuration of the visual scene.
Collapse
Affiliation(s)
- Erika Almadori
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome, Italy;
| | - Serena Mastroberardino
- Department of Psychology, School of Medicine & Psychology, Sapienza University of Rome, Via dei Marsi 78, 00185 Rome, Italy;
| | - Fabiano Botta
- Department of Experimental Psychology and Mind, Brain, and Behavior Research Center (CIMCYC), University of Granada, 18071 Granada, Spain; (F.B.); (J.L.)
| | - Riccardo Brunetti
- Cognitive and Clinical Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, 00163 Roma, Italy;
| | - Juan Lupiáñez
- Department of Experimental Psychology and Mind, Brain, and Behavior Research Center (CIMCYC), University of Granada, 18071 Granada, Spain; (F.B.); (J.L.)
| | - Charles Spence
- Department of Experimental Psychology, Oxford University, Oxford OX2 6GG, UK;
| | - Valerio Santangelo
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome, Italy;
- Department of Philosophy, Social Sciences & Education, University of Perugia, Piazza G. Ermini, 1, 06123 Perugia, Italy
- Correspondence:
| |
Collapse
|
42
|
Jublie A, Kumar D. Early Capture of Attention by Self-Face: Investigation Using a Temporal Order Judgment Task. Iperception 2021; 12:20416695211032993. [PMID: 34377429 PMCID: PMC8327255 DOI: 10.1177/20416695211032993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 06/28/2021] [Indexed: 11/23/2022] Open
Abstract
Earlier work on self-face processing has reported a bias in the processing of the self-face, resulting in faster responses to the self-face in comparison to other familiar and unfamiliar faces (termed the self-face advantage, or SFA). Even though most studies agree that the SFA arises from an attentional bias, there is little agreement regarding the stage at which it occurs. While a large number of studies show the self-face influencing processing later, at the disengagement stage, early event-related potential components show differential activity for the self-face, suggesting that the SFA occurs early. We address this contradiction using a cueless temporal order judgment task that allows us to investigate early perceptual processing while bias due to top-down expectation is controlled. A greater shift in the point of subjective simultaneity for the self-face would indicate a greater processing advantage at the early perceptual stage. With the help of two experiments, we show an early perceptual advantage for the self-face compared with both a friend's face and an unfamiliar face (Experiment 1). This advantage is present even when the effect of criterion shift is minimized (Experiment 2). Interestingly, the magnitude of the advantage is similar for the self-friend and self-unfamiliar pairs. The evidence from the two experiments suggests early capture of attention as the likely reason for the SFA, which is present for the self-face but not for other familiar faces.
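For readers unfamiliar with the measure this argument turns on: in a temporal order judgment task, the point of subjective simultaneity (PSS) is usually read off a psychometric function fitted to the order-judgment proportions across stimulus onset asynchronies (SOAs). A minimal, generic formulation (an illustrative assumption, not taken from this paper) is

\[
P(\text{``self-face first''} \mid \mathrm{SOA}) \;=\; \frac{1}{1 + \exp\!\big(-(\mathrm{SOA} - \mathrm{PSS})/\sigma\big)},
\]

where the PSS is the SOA at which both orders are reported equally often (P = 0.5) and σ indexes judgment precision. A PSS shifted such that the competing face must lead for the two stimuli to appear simultaneous is taken as evidence that the self-face is processed faster at an early perceptual stage; the logistic form and the sign convention here are illustrative, and any suitable sigmoid would serve.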
Collapse
Affiliation(s)
- Aditi Jublie
- Department of Cognitive Science, Indian Institute of Technology Kanpur, Kanpur, India
| | - Devpriya Kumar
- Department of Cognitive Science, Indian Institute of Technology Kanpur, Kanpur, India
| |
Collapse
|
43
|
Wang A, Zhou H, Hu Y, Wu Q, Zhang T, Tang X, Zhang M. Endogenous Spatial Attention Modulates the Magnitude of the Colavita Visual Dominance Effect. Iperception 2021; 12:20416695211027186. [PMID: 34290850 PMCID: PMC8278468 DOI: 10.1177/20416695211027186] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2021] [Accepted: 06/03/2021] [Indexed: 10/25/2022] Open
Abstract
The Colavita effect refers to the phenomenon wherein people tend to not respond to an auditory stimulus when a visual stimulus is simultaneously presented. Although previous studies have shown that endogenous modality attention influences the Colavita effect, whether the Colavita effect is influenced by endogenous spatial attention remains unknown. In the present study, we established endogenous spatial cues to investigate whether the size of the Colavita effect changes under visual or auditory cues. We measured three indexes to investigate the effect of endogenous spatial attention on the size of the Colavita effect. These three indexes were developed based on the following observations in bimodal trials: (a) The proportion of the "only vision" response was significantly higher than that of the "only audition" response; (b) the proportion of the "vision precedes audition" response was significantly higher than that of the "audition precedes vision" response; and (c) the reaction time difference of the "vision precedes audition" response was significantly higher than that of the "audition precedes vision" response. Our results showed that the Colavita effect was always influenced by endogenous spatial attention and that its size was larger at the cued location than at the uncued location; the cue modality (visual vs. auditory) had no effect on the size of the Colavita effect. Taken together, the present results shed light on how endogenous spatial attention affects the Colavita effect.
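As a rough illustration of how the three bimodal-trial indexes described above could be computed from raw trial data, a hypothetical Python sketch follows; the response labels, column names, and function name are assumptions for illustration, not the authors' code or data format.

import pandas as pd

def colavita_indexes(bimodal_trials: pd.DataFrame) -> dict:
    """Compute the three bimodal-trial indexes described in the abstract.

    Expects one row per audiovisual (bimodal) trial with hypothetical columns:
      'response'  in {'V_only', 'A_only', 'V_then_A', 'A_then_V'}
      'rt_first', 'rt_second'  reaction times (ms) of the first and second
                               keypress on trials where both responses occur.
    """
    p = bimodal_trials['response'].value_counts(normalize=True)

    both = bimodal_trials[bimodal_trials['response'].isin(['V_then_A', 'A_then_V'])]
    rt_gap = both['rt_second'] - both['rt_first']  # inter-response interval

    return {
        # (a) erroneous single responses: vision-only vs. audition-only
        'p_only_vision': p.get('V_only', 0.0),
        'p_only_audition': p.get('A_only', 0.0),
        # (b) order of the two responses when both keys are pressed
        'p_vision_first': p.get('V_then_A', 0.0),
        'p_audition_first': p.get('A_then_V', 0.0),
        # (c) mean inter-response interval, split by which response came first
        'rt_gap_vision_first': rt_gap[both['response'] == 'V_then_A'].mean(),
        'rt_gap_audition_first': rt_gap[both['response'] == 'A_then_V'].mean(),
    }

A larger Colavita effect at the cued location would then show up as larger differences within each of these index pairs for cued than for uncued trials.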
Collapse
Affiliation(s)
| | | | - Yuanyuan Hu
- Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
| | - Qiong Wu
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China
| | - Tianyang Zhang
- School of Public Health, Medical College of Soochow University, Suzhou, China
| | - Xiaoyu Tang
- School of Psychology, Liaoning Normal University, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
| | - Ming Zhang
- Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
| |
Collapse
|
44
|
Limanowski J, Friston K. Attentional Modulation of Vision Versus Proprioception During Action. Cereb Cortex 2021; 30:1637-1648. [PMID: 31670769 PMCID: PMC7132949 DOI: 10.1093/cercor/bhz192] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 07/10/2019] [Accepted: 07/27/2019] [Indexed: 01/29/2023] Open
Abstract
To control our actions efficiently, our brain represents our body based on a combination of visual and proprioceptive cues, weighted according to how (un)reliable—how precise—each respective modality is in a given context. However, perceptual experiments in other modalities suggest that the weights assigned to sensory cues are also modulated “top-down” by attention. Here, we asked whether during action, attention can likewise modulate the weights (i.e., precision) assigned to visual versus proprioceptive information about body position. Participants controlled a virtual hand (VH) via a data glove, matching either the VH or their (unseen) real hand (RH) movements to a target, and thus adopting a “visual” or “proprioceptive” attentional set, under varying levels of visuo-proprioceptive congruence and visibility. Functional magnetic resonance imaging (fMRI) revealed increased activation of the multisensory superior parietal lobe (SPL) during the VH task and increased activation of the secondary somatosensory cortex (S2) during the RH task. Dynamic causal modeling (DCM) showed that these activity changes were the result of selective, diametrical gain modulations in the primary visual cortex (V1) and the S2. These results suggest that endogenous attention can balance the gain of visual versus proprioceptive brain areas, thus contextualizing their influence on multisensory areas representing the body for action.
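The reliability weighting summarized above is conventionally formalized as maximum-likelihood cue combination; as a generic textbook sketch (not a quotation of the authors' model), the combined hand-position estimate weights each cue by its relative precision:

\[
\hat{x} \;=\; w_V x_V + w_P x_P, \qquad
w_V = \frac{\pi_V}{\pi_V + \pi_P}, \quad
w_P = \frac{\pi_P}{\pi_V + \pi_P}, \quad
\pi_i = 1/\sigma_i^2 .
\]

On this reading, the attentional sets used in the study amount to top-down scaling of π_V or π_P (consistent with the reported gain modulations of V1 and S2), which shifts the weights toward the attended modality without any change in the sensory input itself.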
Collapse
Affiliation(s)
- Jakub Limanowski
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London, UK
| | - Karl Friston
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London, UK
| |
Collapse
|
45
|
Keefe JM, Pokta E, Störmer VS. Cross-modal orienting of exogenous attention results in visual-cortical facilitation, not suppression. Sci Rep 2021; 11:10237. [PMID: 33986384 PMCID: PMC8119727 DOI: 10.1038/s41598-021-89654-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 04/29/2021] [Indexed: 11/10/2022] Open
Abstract
Attention may be oriented exogenously (i.e., involuntarily) to the location of salient stimuli, resulting in improved perception. However, it is unknown whether exogenous attention improves perception by facilitating processing of attended information, suppressing processing of unattended information, or both. To test this question, we measured behavioral performance and cue-elicited neural changes in the electroencephalogram as participants (N = 19) performed a task in which a spatially non-predictive auditory cue preceded a visual target. Critically, this cue was either presented at a peripheral target location or from the center of the screen, allowing us to isolate spatially specific attentional activity. We find that both behavior and attention-mediated changes in visual-cortical activity are enhanced at the location of a cue prior to the onset of a target, but that behavior and neural activity at an unattended target location is equivalent to that following a central cue that does not direct attention (i.e., baseline). These results suggest that exogenous attention operates via facilitation of information at an attended location.
Collapse
Affiliation(s)
- Jonathan M Keefe
- Department of Psychology, University of California, San Diego, 92092, USA.
| | - Emilia Pokta
- Department of Psychology, University of California, San Diego, 92092, USA
| | - Viola S Störmer
- Department of Psychology, University of California, San Diego, 92092, USA
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, USA
| |
Collapse
|
46
|
Ren Y, Zhang Y, Hou Y, Li J, Bi J, Yang W. Exogenous Bimodal Cues Attenuate Age-Related Audiovisual Integration. Iperception 2021; 12:20416695211020768. [PMID: 34104386 PMCID: PMC8165524 DOI: 10.1177/20416695211020768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Accepted: 05/09/2021] [Indexed: 11/17/2022] Open
Abstract
Previous studies have demonstrated that exogenous attention decreases audiovisual integration (AVI); however, whether the AVI differs when exogenous attention is elicited by bimodal versus unimodal cues, and how this is affected by aging, remains unclear. To clarify this matter, 20 older adults and 20 younger adults were recruited to perform an auditory/visual discrimination task following bimodal audiovisual cues or unimodal auditory/visual cues. The results showed that responses to all stimulus types were faster in younger adults than in older adults, and that responses were faster to audiovisual stimuli than to auditory or visual stimuli. Analysis using the race model revealed that the AVI was lower in the exogenous-cue conditions than in the no-cue condition for both older and younger adults. The AVI was observed in all exogenous-cue conditions for the younger adults (visual cue > auditory cue > audiovisual cue); however, for older adults, the AVI was only found in the visual-cue condition. In addition, the AVI was lower in older adults than in younger adults under the no- and visual-cue conditions. These results suggest that exogenous attention decreases the AVI, that the AVI is lower when exogenous attention is elicited by bimodal cues than by unimodal cues, and that the AVI is reduced in older adults compared with younger adults under exogenous attention.
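The race-model analysis referred to above is typically Miller's inequality test, in which the cumulative reaction-time distribution for audiovisual trials is compared against the bound formed by summing the unimodal distributions; positive violations of the bound are taken as integration beyond statistical facilitation. A minimal sketch under that assumption (the function name and reaction-time arrays are hypothetical, not the authors' code):

import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, n_points=500):
    """Miller's race-model inequality: F_AV(t) <= F_A(t) + F_V(t).

    rt_a, rt_v, rt_av are 1-D NumPy arrays of reaction times (ms) for the
    auditory-only, visual-only, and audiovisual conditions. Returns a time
    grid and the amount by which the audiovisual CDF exceeds the race-model
    bound; positive values indicate audiovisual integration (AVI).
    """
    t_grid = np.linspace(min(rt_a.min(), rt_v.min(), rt_av.min()),
                         max(rt_a.max(), rt_v.max(), rt_av.max()), n_points)

    def ecdf(rt):
        # Proportion of responses completed by each time point
        return np.searchsorted(np.sort(rt), t_grid, side="right") / rt.size

    bound = np.minimum(ecdf(rt_a) + ecdf(rt_v), 1.0)  # race-model upper bound
    return t_grid, ecdf(rt_av) - bound

In this framing, a "lower AVI under exogenous cues" corresponds to smaller or absent positive violations in the cued conditions relative to the no-cue condition.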
Collapse
Affiliation(s)
- Yanna Ren
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Ying Zhang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Yawei Hou
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Junyuan Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Junhao Bi
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| |
Collapse
|
47
|
Nazaré CJ, Oliveira AM. Effects of Audiovisual Presentations on Visual Localization Errors: One or Several Multisensory Mechanisms? Multisens Res 2021; 34:1-35. [PMID: 33882452 DOI: 10.1163/22134808-bja10048] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2020] [Accepted: 03/30/2021] [Indexed: 11/19/2022]
Abstract
The present study examines the extent to which temporal and spatial properties of sound modulate visual motion processing in spatial localization tasks. Participants were asked to locate the place at which a moving visual target unexpectedly vanished. Across different tasks, accompanying sounds were factorially varied within subjects as to their onset and offset times and/or positions relative to visual motion. Sound onset had no effect on the localization error. Sound offset was shown to modulate the perceived visual offset location, both for temporal and spatial disparities. This modulation did not conform to attraction toward the timing or location of the sounds but, demonstrably in the case of temporal disparities, to bimodal enhancement instead. Favorable indications of a contextual effect of audiovisual presentations on interspersed visual-only trials were also found. The short sound-leading offset asynchrony had benefits equivalent to audiovisual offset synchrony, suggestive of the involvement of early-level mechanisms, constrained by a temporal window, under these conditions. Yet, we tentatively hypothesize that the whole of the results, and how they compare with previous studies, requires the contribution of additional mechanisms, including the learning and detection of auditory-visual associations and cross-sensory spread of endogenous attention.
Collapse
Affiliation(s)
- Cristina Jordão Nazaré
- Instituto Politécnico de Coimbra, ESTESC - Coimbra Health School, Audiologia, Coimbra, Portugal
| | | |
Collapse
|
48
|
Wang Z, Chen M, Goerlich KS, Aleman A, Xu P, Luo Y. Deficient auditory emotion processing but intact emotional multisensory integration in alexithymia. Psychophysiology 2021; 58:e13806. [PMID: 33742708 PMCID: PMC9285530 DOI: 10.1111/psyp.13806] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 01/29/2021] [Accepted: 02/24/2021] [Indexed: 11/29/2022]
Abstract
Alexithymia has been associated with emotion recognition deficits in both the auditory and visual domains. Although emotions are inherently multimodal in daily life, little is known regarding abnormalities of emotional multisensory integration (eMSI) in relation to alexithymia. Here, we employed an emotional Stroop-like audiovisual task while recording event-related potentials (ERPs) in individuals with high alexithymia levels (HA) and low alexithymia levels (LA). During the task, participants had to indicate whether a voice was spoken in a sad or angry prosody while ignoring a simultaneously presented static face, which could be either emotionally congruent or incongruent with the voice. We found that HA performed worse and showed higher P2 amplitudes than LA, independent of emotion congruency. Furthermore, difficulties in identifying and describing feelings were positively correlated with the P2 component, and P2 correlated negatively with behavioral performance. Bayesian statistics showed no group differences in eMSI or in the classical integration-related ERP components (N1 and N2). Thus, although individuals with alexithymia showed deficits in auditory emotion recognition, as indexed by decreased performance and higher P2 amplitudes, the behavioral and electrophysiological data converge on an intact capacity to integrate emotional information from multiple channels. These findings provide insight into the relationship between alexithymia and the neuropsychological mechanisms of emotional multisensory integration, and they are of particular importance given that humans are constantly exposed to competing, complex audiovisual emotional information in social interactions.
Collapse
Affiliation(s)
- Zhihao Wang
- Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China; Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Mai Chen
- School of Psychology, Shenzhen University, Shenzhen, China
| | - Katharina S Goerlich
- Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - André Aleman
- Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China; Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Pengfei Xu
- State Key Laboratory of Cognitive Neuroscience and Learning, Faculty of Psychology, Beijing Normal University, Beijing, China; Center for Neuroimaging, Shenzhen Institute of Neuroscience, Shenzhen, China; Guangdong-Hong Kong-Macao Greater Bay Area Research Institute for Neuroscience and Neurotechnologies, Kwun Tong, Hong Kong, China
| | - Yuejia Luo
- Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China; State Key Laboratory of Cognitive Neuroscience and Learning, Faculty of Psychology, Beijing Normal University, Beijing, China; Department of Psychology, Southern Medical University, Guangzhou, China; The Research Center of Brain Science and Visual Cognition, Medical School, Kunming University of Science and Technology, Kunming, China; Center for Neuroimaging, Shenzhen Institute of Neuroscience, Shenzhen, China
| |
Collapse
|
49
|
Porada DK, Regenbogen C, Freiherr J, Seubert J, Lundström JN. Trimodal processing of complex stimuli in inferior parietal cortex is modality-independent. Cortex 2021; 139:198-210. [PMID: 33878687 DOI: 10.1016/j.cortex.2021.03.008] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2020] [Revised: 11/29/2020] [Accepted: 03/09/2021] [Indexed: 11/26/2022]
Abstract
In humans, multisensory mechanisms facilitate object processing through integration of sensory signals that match in their temporal and spatial occurrence as well as their meaning. The generalizability of such integration processes across different sensory modalities is, however, to date not well understood. As such, it remains unknown whether there are cerebral areas that process object-related signals independently of the specific senses from which they arise, and whether these areas show different response profiles depending on the number of sensory channels that carry information. To address these questions, we presented participants with dynamic stimuli that simultaneously emitted object-related sensory information via one, two, or three channels (sight, sound, smell) in the MR scanner. By comparing neural activation patterns between various integration processes differing in type and number of stimulated senses, we showed that the left inferior frontal gyrus and areas within the left inferior parietal cortex were engaged independently of the number and type of sensory input streams. Activation in these areas was enhanced during bimodal stimulation, compared to the sum of unimodal activations, and increased even further during trimodal stimulation. Taken together, our findings demonstrate that activation of the inferior parietal cortex during processing and integration of meaningful multisensory stimuli is both modality-independent and modulated by the number of available sensory modalities. This suggests that the processing demand placed on the parietal cortex increases with the number of sensory input streams carrying meaningful information, likely due to the increasing complexity of such stimuli.
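The comparison described here (bimodal responses enhanced relative to the sum of the unimodal responses, and trimodal responses enhanced further still) corresponds to a superadditivity-style contrast on condition estimates. Schematically, for a given region and with β denoting estimated responses (a generic formulation, not the authors' exact GLM contrast):

\[
\beta_{AVO} \;>\; \beta_{AV} \;>\; \beta_{A} + \beta_{V},
\]

where A, V, and O index the auditory, visual, and olfactory streams, and AV and AVO the bimodal and trimodal conditions.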
Collapse
Affiliation(s)
- Danja K Porada
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
| | - Christina Regenbogen
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; JARA Institute Brain Structure Function Relationship, RWTH Aachen University, Aachen, Germany
| | - Jessica Freiherr
- Department of Psychiatry and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Janina Seubert
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - Johan N Lundström
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Monell Chemical Senses Center, Philadelphia, USA; Department of Psychology, University of Pennsylvania, Philadelphia, USA; Stockholm University Brain Imaging Centre, Stockholm University, Stockholm, Sweden.
| |
Collapse
|
50
|
McCall AA, Miller DM, Balaban CD. Integration of vestibular and hindlimb inputs by vestibular nucleus neurons: multisensory influences on postural control. J Neurophysiol 2021; 125:1095-1110. [PMID: 33534649 DOI: 10.1152/jn.00350.2019] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We recently demonstrated in decerebrate and conscious cat preparations that hindlimb somatosensory inputs converge with vestibular afferent input onto neurons in multiple central nervous system (CNS) locations that participate in balance control. Although it is known that head position and limb state modulate postural reflexes, presumably through vestibulospinal and reticulospinal pathways, the combined influence of the two inputs on the activity of neurons in these brainstem regions is unknown. In the present study, we evaluated the responses of vestibular nucleus (VN) neurons to vestibular and hindlimb stimuli delivered separately and together in conscious cats. We hypothesized that VN neuronal firing during activation of vestibular and limb proprioceptive inputs would be well fit by an additive model. Extracellular single-unit recordings were obtained from VN neurons. Sinusoidal whole body rotation in the roll plane was used as the search stimulus. Units responding to the search stimulus were tested for their responses to 10° ramp-and-hold roll body rotation, 60° extension hindlimb movement, and both movements delivered simultaneously. Composite response histograms were fit by a model of low- and high-pass filtered limb and body position signals using least squares nonlinear regression. We found that VN neuronal activity during combined vestibular and hindlimb proprioceptive stimulation in the conscious cat is well fit by a simple additive model for signals with similar temporal dynamics. The mean R² value for goodness of fit across all units was 0.74 ± 0.17. It is likely that VN neurons that exhibit these integrative properties participate in adjusting vestibulospinal outflow in response to limb state. NEW & NOTEWORTHY: Vestibular nucleus neurons receive convergent information from hindlimb somatosensory inputs and vestibular inputs. In this study, extracellular single-unit recordings of vestibular nucleus neurons during conditions of passively applied limb movement, passive whole body rotations, and combined stimulation were well fit by an additive model. The integration of hindlimb somatosensory inputs with vestibular inputs at the first stage of vestibular processing suggests that vestibular nucleus neurons account for limb position in determining vestibulospinal responses to postural perturbations.
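To make the additive-model fit concrete, here is a hypothetical Python sketch of fitting a unit's firing-rate histogram as an additive combination of filtered body and limb position signals by nonlinear least squares. It simplifies the paper's low- and high-pass filter bank to a single low-pass stage per signal; the sampling rate, parameterization, and function names are assumptions for illustration, not the authors' code.

import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import butter, filtfilt

FS = 100.0  # assumed sampling rate (Hz) of the response histograms

def lowpass(x, cutoff_hz, fs=FS):
    # Zero-phase second-order Butterworth low-pass filter
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)

def fit_additive_model(firing_rate, body_pos, limb_pos):
    """Fit rate(t) = baseline + g_body*LP(body_pos) + g_limb*LP(limb_pos)."""
    def model(t, baseline, g_body, g_limb, fc_body, fc_limb):
        return (baseline
                + g_body * lowpass(body_pos, fc_body)
                + g_limb * lowpass(limb_pos, fc_limb))

    t = np.arange(firing_rate.size) / FS
    p0 = [firing_rate.mean(), 1.0, 1.0, 1.0, 1.0]
    bounds = ([-np.inf, -np.inf, -np.inf, 0.05, 0.05],        # keep cutoffs positive
              [np.inf, np.inf, np.inf, 0.45 * FS, 0.45 * FS])  # and below Nyquist
    popt, _ = curve_fit(model, t, firing_rate, p0=p0, bounds=bounds)

    pred = model(t, *popt)
    ss_res = np.sum((firing_rate - pred) ** 2)
    ss_tot = np.sum((firing_rate - firing_rate.mean()) ** 2)
    return popt, 1.0 - ss_res / ss_tot  # fitted parameters and R² goodness of fit

On this kind of fit, a mean R² near 0.74, as reported across units, would mean that a simple additive combination accounts for most of the stimulus-driven modulation in firing rate.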
Collapse
Affiliation(s)
- Andrew A McCall
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
| | - Derek M Miller
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
| | - Carey D Balaban
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
| |
Collapse
|