1
Sendesen E, Colak H. Neural markers associated with improved tinnitus perception after tinnitus retraining therapy. Int J Audiol 2024:1-7. [PMID: 39037049 DOI: 10.1080/14992027.2024.2378800]
Abstract
OBJECTIVE Tinnitus retraining therapy (TRT) has been widely used in tinnitus management. However, its efficacy is often assessed through subjective methods. Here, we aimed to assess potential neural changes following TRT using mismatch negativity (MMN). DESIGN Chronic tinnitus (>6 months) patients participated in a six-month TRT program. We collected tinnitus psychoacoustic features and gathered the tinnitus handicap inventory (THI) before and after TRT. We also used a multi-featured paradigm, including frequency, intensity, duration, location and silent gap deviants, to elicit MMN responses before and after TRT. Data were analyzed retrospectively. STUDY SAMPLE The study involved 26 chronic tinnitus patients. RESULTS Post-TRT measurements showed that MMN amplitudes significantly increased for all deviant conditions (p ≤ .03). However, we did not find a significant difference in MMN latencies for any deviant condition (p ≥ .13). The THI scores of the patients significantly decreased following the TRT program (p < 0.001). Our results reveal improved subjective tinnitus perception following the TRT program. CONCLUSION These findings indicate that TRT might be a viable alternative in tinnitus management. The greater MMN amplitudes and improved subjective tinnitus perception raise the possibility that MMN can be a useful tool in tinnitus research and tinnitus patient follow-up.
Affiliation(s)
- Eser Sendesen
- Department of Audiology, Hacettepe University, Ankara, Turkey
- Hasan Colak
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
2
Li J, Liu Y, Nehl E, Tucker JD. A behavioral economics approach to enhancing HIV preexposure and postexposure prophylaxis implementation. Curr Opin HIV AIDS 2024; 19:212-220. [PMID: 38686773 DOI: 10.1097/coh.0000000000000860]
Abstract
PURPOSE OF REVIEW The 'PrEP cliff' phenomenon poses a critical challenge in global HIV PrEP implementation, marked by significant dropouts across the entire PrEP care continuum. This article reviews new strategies to address the 'PrEP cliff'. RECENT FINDINGS Canadian clinicians have developed a service delivery model that offers presumptive PEP to patients in need and transitions eligible PEP users to PrEP. Early findings are promising. This service model not only establishes a safety net for those who were not protected by PrEP, but it also leverages the immediate salience and perceived benefits of PEP as a natural nudge towards PrEP use. Aligning with behavioral economics, specifically salience theory, this strategy holds potential in tackling PrEP implementation challenges. SUMMARY A natural pathway between PEP and PrEP has been widely observed. The Canadian service model exemplifies an innovative strategy that leverages this organic pathway and enhances the utility of both PEP and PrEP services. We offer theoretical insights into the reasons behind these PEP-PrEP transitions and evolve the Canadian model into a cohesive framework for implementation.
Affiliation(s)
- Jingjing Li
- Department of Behavioral, Social and Health Education Sciences, Rollins School of Public Health
- Yaxin Liu
- Department of Psychology, Emory University, Atlanta, Georgia
- Eric Nehl
- Department of Behavioral, Social and Health Education Sciences, Rollins School of Public Health
- Joseph D Tucker
- Division of Infectious Diseases, University of North Carolina at Chapel Hill, Chapel Hill, USA
3
Parma C, Doria F, Zulueta A, Boscarino M, Giani L, Lunetta C, Parati EA, Picozzi M, Sattin D. Does Body Memory Exist? A Review of Models, Approaches and Recent Findings Useful for Neurorehabilitation. Brain Sci 2024; 14:542. [PMID: 38928542 PMCID: PMC11201876 DOI: 10.3390/brainsci14060542]
Abstract
Over the past twenty years, scientific research on body representations has grown significantly, with Body Memory (BM) emerging as a prominent area of interest in neurorehabilitation. Compared to other body representations, BM stands out as one of the most obscure due to the multifaceted nature of the concept of "memory" itself, which includes various aspects (such as implicit vs. explicit, conscious vs. unconscious). The concept of body memory originates from the field of phenomenology and has been developed by research groups studying embodied cognition. In this narrative review, we aim to present compelling evidence from recent studies that explore various definitions and explanatory models of BM. Additionally, we will provide a comprehensive overview of the empirical settings used to examine BM. The results can be categorized into two main areas: (i) how the body influences our memories, and (ii) how memories, in their broadest sense, could generate and/or influence metarepresentations, that is, the ability to reflect on or make inferences about one's own cognitive representations or those of others. We present studies that emphasize the significance of BM in experimental settings involving patients with neurological and psychiatric disorders, ultimately analyzing these findings from an ontogenic perspective.
Affiliation(s)
- Chiara Parma
- Istituti Clinici Scientifici Maugeri IRCCS, Health Directorate, Via Camaldoli 64, 20138 Milan, Italy; (C.P.); (F.D.)
- PhD Program in Medicina Clinica e Sperimentale e Medical Humanities, Insubria University, 21100 Varese, Italy
- Federica Doria
- Istituti Clinici Scientifici Maugeri IRCCS, Health Directorate, Via Camaldoli 64, 20138 Milan, Italy; (C.P.); (F.D.)
- Aida Zulueta
- Istituti Clinici Scientifici Maugeri IRCCS, Labion, Via Camaldoli 64, 20138 Milan, Italy;
- Marilisa Boscarino
- Neurorehabilitation Department, Istituti Clinici Scientifici Maugeri IRCCS, Via Camaldoli 64, 20138 Milan, Italy; (M.B.); (L.G.); (E.A.P.)
- Luca Giani
- Neurorehabilitation Department, Istituti Clinici Scientifici Maugeri IRCCS, Via Camaldoli 64, 20138 Milan, Italy; (M.B.); (L.G.); (E.A.P.)
- Christian Lunetta
- Amyotrophic Lateral Sclerosis Unit, Neurorehabilitation Department, Istituti Clinici Scientifici Maugeri IRCCS, Via Camaldoli 64, 20138 Milan, Italy;
- Eugenio Agostino Parati
- Neurorehabilitation Department, Istituti Clinici Scientifici Maugeri IRCCS, Via Camaldoli 64, 20138 Milan, Italy; (M.B.); (L.G.); (E.A.P.)
- Mario Picozzi
- Center for Clinical Ethics, Biotechnology and Life Sciences Department, Insubria University, 21100 Varese, Italy;
- Davide Sattin
- Istituti Clinici Scientifici Maugeri IRCCS, Health Directorate, Via Camaldoli 64, 20138 Milan, Italy; (C.P.); (F.D.)
4
Wegner-Clemens K, Malcolm GL, Shomstein S. Predicting attentional allocation in real-world environments: The need to investigate crossmodal semantic guidance. Wiley Interdiscip Rev Cogn Sci 2024; 15:e1675. [PMID: 38243393 DOI: 10.1002/wcs.1675]
Abstract
Real-world environments are multisensory, meaningful, and highly complex. To parse these environments in a highly efficient manner, a subset of this information must be selected both within and across modalities. However, the bulk of attention research has been conducted within sensory modalities, with a particular focus on vision. Visual attention research has made great strides, with over a century of research methodically identifying the underlying mechanisms that allow us to select critical visual information. Spatial attention, attention to features, and object-based attention have all been studied extensively. More recently, research has established semantics (meaning) as a key component to allocating attention in real-world scenes, with the meaning of an item or environment affecting visual attentional selection. However, a full understanding of how semantic information modulates real-world attention requires studying more than vision in isolation. The world provides semantic information across all senses, but with this extra information comes greater complexity. Here, we summarize research on visual attention (including semantic-based visual attention) and crossmodal attention, and argue for the importance of studying crossmodal semantic guidance of attention. This article is categorized under: Psychology > Attention; Psychology > Perception and Psychophysics.
Affiliation(s)
- Kira Wegner-Clemens
- Psychological and Brain Sciences, George Washington University, Washington, DC, USA
- Sarah Shomstein
- Psychological and Brain Sciences, George Washington University, Washington, DC, USA
5
Ghosh P, Talwar S, Banerjee A. Unsupervised Characterization of Prediction Error Markers in Unisensory and Multisensory Streams Reveal the Spatiotemporal Hierarchy of Cortical Information Processing. eNeuro 2024; 11:ENEURO.0251-23.2024. [PMID: 38702194 PMCID: PMC11069433 DOI: 10.1523/eneuro.0251-23.2024]
Abstract
Elicited upon violation of regularity in stimulus presentation, mismatch negativity (MMN) reflects the brain's ability to perform automatic comparisons between consecutive stimuli and provides an electrophysiological index of sensory error detection, whereas P300 is associated with cognitive processes such as updating of the working memory. To date, there has been extensive research on the roles of MMN and P300 individually, because of their potential to be used as clinical markers of consciousness and attention, respectively. Here, we intend to explore, with an unsupervised and rigorous source estimation approach, the underlying cortical generators of MMN and P300, in the context of prediction error propagation along the hierarchies of brain information processing in healthy human participants. The existing methods of characterizing the two ERPs involve only approximate estimations of their amplitudes and latencies based on specific sensors of interest. Our objective is twofold: first, we introduce a novel data-driven unsupervised approach to compute latencies and amplitudes of ERP components accurately on an individual-subject basis and reconfirm earlier findings. Second, we demonstrate that in multisensory environments, MMN generators seem to reflect a significant overlap of "modality-specific" and "modality-independent" information processing, while P300 generators mark a shift toward completely "modality-independent" processing. Advancing the earlier understanding that multisensory contexts speed up early sensory processing, our EEG experiments reveal that this temporal facilitation extends to even the later components of prediction error processing. Such knowledge can be of value to clinical research for characterizing the key developmental stages of lifespan aging, schizophrenia, and depression.
Affiliation(s)
- Priyanka Ghosh
- Cognitive Brain Dynamics Lab, National Brain Research Centre, Gurgaon 122052, India
- Siddharth Talwar
- Cognitive Brain Dynamics Lab, National Brain Research Centre, Gurgaon 122052, India
- Arpan Banerjee
- Cognitive Brain Dynamics Lab, National Brain Research Centre, Gurgaon 122052, India
6
Wang X, Tang X, Wang A, Zhang M. Non-spatial inhibition of return attenuates audiovisual integration owing to modality disparities. Atten Percept Psychophys 2023:10.3758/s13414-023-02825-y. [PMID: 38127253 DOI: 10.3758/s13414-023-02825-y]
Abstract
Although previous studies have investigated the relationship between inhibition of return (IOR) and multisensory integration, the influence of non-spatial IOR has not been explored. The present study aimed to investigate the influence of non-spatial IOR on audiovisual integration by using a "prime-neutral cue-target" paradigm. In Experiment 1, which manipulated prime validity and target modality, the targets were positioned centrally, revealing significant non-spatial IOR effects in the visual, auditory, and audiovisual modalities. Analysis of relative multisensory response enhancement (rMRE) indicated substantial audiovisual integration enhancement in both valid and invalid target conditions. Furthermore, the enhancement was weaker for valid targets than for invalid targets. In Experiment 2, the targets were positioned above and below to rule out repetition blindness (RB); this experiment successfully replicated the results observed in Experiment 1. Notably, in both experiments the correlation between modality differences and rMRE for valid targets indicated that differences in signal strength between visual and auditory modalities contributed to a reduction in audiovisual integration. However, the absence of such a correlation for invalid targets suggests that attention, as a key factor, may play a significant role in this process. The present study highlights how non-spatial IOR reduces audiovisual integration and sheds light on the complex interaction between attention and multisensory integration.
Affiliation(s)
- Xiaoxue Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Xiaoyu Tang
- School of Psychology, Liaoning Normal University, Dalian, China
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China.
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China.
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China.
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan.
7
Castro F, Schenke KC. Augmented action observation: Theory and practical applications in sensorimotor rehabilitation. Neuropsychol Rehabil 2023:1-20. [PMID: 38117228 DOI: 10.1080/09602011.2023.2286012]
Abstract
Sensory feedback is a fundamental aspect of effective motor learning in sport and clinical contexts. One way to provide this is through sensory augmentation, where extrinsic sensory information is associated with, and modulated by, movement. Traditionally, sensory augmentation has been used as an online strategy, where feedback is provided during physical execution of an action. In this article, we argue that action observation can be an additional effective channel to provide augmented feedback, which would be complementary to other, more traditional, motor learning and sensory augmentation strategies. Given the similarities between observing and executing an action, action observation could be used when physical training is difficult or not feasible, for example during immobilization or during the initial stages of a rehabilitation protocol when peripheral fatigue is a common issue. We review the benefits of observational learning and preliminary evidence for the effectiveness of using augmented action observation to improve learning. We also highlight current knowledge gaps which make the transition from laboratory to practical contexts difficult. Finally, we highlight the key areas of focus for future research.
Affiliation(s)
- Fabio Castro
- Institute of Sport, School of Life and Medical Sciences, University of Hertfordshire, Hatfield, UK
- Kimberley C Schenke
- School of Natural, Social and Sports Sciences, University of Gloucestershire, Cheltenham, UK
8
Ghaneirad E, Borgolte A, Sinke C, Čuš A, Bleich S, Szycik GR. The effect of multisensory semantic congruency on unisensory object recognition in schizophrenia. Front Psychiatry 2023; 14:1246879. [PMID: 38025441 PMCID: PMC10646423 DOI: 10.3389/fpsyt.2023.1246879]
Abstract
Multisensory, as opposed to unisensory processing of stimuli, has been found to enhance the performance (e.g., reaction time, accuracy, and discrimination) of healthy individuals across various tasks. However, this enhancement is not as pronounced in patients with schizophrenia (SZ), indicating impaired multisensory integration (MSI) in these individuals. To the best of our knowledge, no study has yet investigated the impact of MSI deficits in the context of working memory, a domain highly reliant on multisensory processing and substantially impaired in schizophrenia. To address this research gap, we employed two adapted versions of the continuous object recognition task to investigate the effect of single-trial multisensory encoding on subsequent object recognition in 21 schizophrenia patients and 21 healthy controls (HC). Participants were tasked with discriminating between initial and repeated presentations. For the initial presentations, half of the stimuli were audiovisual pairings, while the other half were presented unimodally. The task-relevant stimuli were then presented a second time in a unisensory manner (either auditory stimuli in the auditory task or visual stimuli in the visual task). To explore the impact of semantic context on multisensory encoding, half of the audiovisual pairings were selected to be semantically congruent, while the remaining pairs were not semantically related to each other. Consistent with prior studies, our findings demonstrated that the impact of single-trial multisensory presentation during encoding remains discernible during subsequent object recognition. This influence could be distinguished based on the semantic congruity between the auditory and visual stimuli presented during the encoding, and was more robust in the auditory task. In the auditory task, when congruent multisensory pairings were encoded, both participant groups demonstrated a multisensory facilitation effect, with improved accuracy and RT performance. Regarding incongruent audiovisual encoding, as expected, HC did not demonstrate an evident multisensory facilitation effect on memory performance. In contrast, SZs exhibited an atypically accelerated reaction time during the subsequent auditory object recognition. Based on the predictive coding model, we propose that these deviations indicate a reduced semantic modulatory effect and anomalous prediction error signaling, particularly in the context of conflicting cross-modal sensory inputs in SZ.
Affiliation(s)
- Erfan Ghaneirad
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Anna Borgolte
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Christopher Sinke
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Division of Clinical Psychology and Sexual Medicine, Hannover Medical School, Hannover, Germany
- Anja Čuš
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Stefan Bleich
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Center for Systems Neuroscience, University of Veterinary Medicine, Hanover, Germany
- Gregor R. Szycik
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
9
Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. [PMID: 37545304 PMCID: PMC10404931 DOI: 10.1098/rstb.2022.0342]
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes in object perception: multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- E. McKenna
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- M. A. Seveso
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- I. Devine
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- F. Alahmad
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- R. J. Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- A. O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
10
Pepper JL, Nuttall HE. Age-Related Changes to Multisensory Integration and Audiovisual Speech Perception. Brain Sci 2023; 13:1126. [PMID: 37626483 PMCID: PMC10452685 DOI: 10.3390/brainsci13081126]
Abstract
Multisensory integration is essential for the quick and accurate perception of our environment, particularly in everyday tasks like speech perception. Research has highlighted the importance of investigating bottom-up and top-down contributions to multisensory integration and how these change as a function of ageing. Specifically, perceptual factors like the temporal binding window and cognitive factors like attention and inhibition appear to be fundamental in the integration of visual and auditory information, an integration that may become less efficient as we age. These factors have been linked to brain areas like the superior temporal sulcus, with neural oscillations in the alpha-band frequency also being implicated in multisensory processing. Age-related changes in multisensory integration may have significant consequences for the well-being of our increasingly ageing population, affecting their ability to communicate with others and safely move through their environment; it is crucial that the evidence surrounding this subject continues to be carefully investigated. This review will discuss research into age-related changes in the perceptual and cognitive mechanisms of multisensory integration and the impact that these changes have on speech perception and fall risk. The role of oscillatory alpha activity is of particular interest, as it may be key in the modulation of multisensory integration.
Affiliation(s)
- Helen E. Nuttall
- Department of Psychology, Lancaster University, Bailrigg LA1 4YF, UK;
11
Guo A, Yang W, Yang X, Lin J, Li Z, Ren Y, Yang J, Wu J. Audiovisual n-Back Training Alters the Neural Processes of Working Memory and Audiovisual Integration: Evidence of Changes in ERPs. Brain Sci 2023; 13:992. [PMID: 37508924 PMCID: PMC10377064 DOI: 10.3390/brainsci13070992]
Abstract
(1) Background: This study investigates whether audiovisual n-back training leads to training effects on working memory and transfer effects on perceptual processing. (2) Methods: Before and after training, the participants were tested using the audiovisual n-back task (1-, 2-, or 3-back), to detect training effects, and the audiovisual discrimination task, to detect transfer effects. (3) Results: For the training effect, the behavioral results show that training leads to greater accuracy and faster response times. Stronger training gains in accuracy and response time using 3- and 2-back tasks, compared to 1-back, were observed in the training group. Event-related potentials (ERPs) data revealed an enhancement of P300 in the frontal and central regions across all working memory levels after training. Training also led to the enhancement of N200 in the central region in the 3-back condition. For the transfer effect, greater audiovisual integration in the frontal and central regions during the post-test rather than pre-test was observed at an early stage (80-120 ms) in the training group. (4) Conclusion: Our findings provide evidence that audiovisual n-back training enhances the neural processes underlying working memory and demonstrate a positive influence of higher cognitive functions on lower cognitive functions.
Affiliation(s)
- Ao Guo
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama 700-8530, Japan
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan 430062, China
- Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan 430062, China
- Xiangfu Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan 430062, China
- Jinfei Lin
- Department of Psychology, Faculty of Education, Hubei University, Wuhan 430062, China
- Zimo Li
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama 700-8530, Japan
- Yanna Ren
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang 550003, China
- Jiajia Yang
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama 700-8530, Japan
- Applied Brain Science Lab., Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama 700-8530, Japan
- Jinglong Wu
- School of Medical Technology, Beijing Institute of Technology, Beijing 100811, China
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama 700-8530, Japan
12
Sarigul B, Urgen BA. Audio–Visual Predictive Processing in the Perception of Humans and Robots. Int J Soc Robot 2023. [DOI: 10.1007/s12369-023-00990-6]
Abstract
Recent work in cognitive science suggests that our expectations affect visual perception. With the rise of artificial agents in human life in the last few decades, one important question is whether our expectations about non-human agents such as humanoid robots affect how we perceive them. In the present study, we addressed this question in an audio–visual context. Participants reported whether a voice embedded in noise belonged to a human or a robot. Prior to this judgment, they were presented with a human or a robot image that served as a cue and allowed them to form an expectation about the category of the voice that would follow. This cue was either congruent or incongruent with the category of the voice. Our results show that participants were faster and more accurate when the auditory target was preceded by a congruent cue than an incongruent cue. This was true regardless of the human-likeness of the robot. Overall, these results suggest that our expectations affect how we perceive non-human agents and shed light on future work in robot design.
13
Smyre SA, Bean NL, Stein BE, Rowland BA. Predictability alters multisensory responses by modulating unisensory inputs. Front Neurosci 2023; 17:1150168. [PMID: 37065927 PMCID: PMC10090419 DOI: 10.3389/fnins.2023.1150168]
Abstract
The multisensory (deep) layers of the superior colliculus (SC) play an important role in detecting, localizing, and guiding orientation responses to salient events in the environment. Essential to this role is the ability of SC neurons to enhance their responses to events detected by more than one sensory modality and to become desensitized ('attenuated' or 'habituated') or sensitized ('potentiated') to events that are predictable via modulatory dynamics. To identify the nature of these modulatory dynamics, we examined how the repetition of different sensory stimuli affected the unisensory and multisensory responses of neurons in the cat SC. Neurons were presented with 2 Hz stimulus trains of three identical visual, auditory, or combined visual–auditory stimuli, followed by a fourth stimulus that was either the same or different ('switch'). Modulatory dynamics proved to be sensory-specific: they did not transfer when the stimulus switched to another modality. However, they did transfer when switching from the visual–auditory stimulus train to either of its modality-specific component stimuli and vice versa. These observations suggest that predictions, in the form of modulatory dynamics induced by stimulus repetition, are independently sourced from and applied to the modality-specific inputs to the multisensory neuron. This falsifies several plausible mechanisms for these modulatory dynamics: they neither produce general changes in the neuron's transform, nor are they dependent on the neuron's output.
|
14
|
Konagaya A, Gutmann G, Zhang Y. Co-creation environment with cloud virtual reality and real-time artificial intelligence toward the design of molecular robots. J Integr Bioinform 2023; 20:jib-2022-0017. [PMID: 36194394 PMCID: PMC10063180 DOI: 10.1515/jib-2022-0017]
Abstract
This paper describes the design philosophy behind our cloud-based virtual reality (VR) co-creation environment (CCE) for molecular modeling. Interactive VR simulation provides enhanced perspectives for molecular modeling, enabling intuitive live demonstration and experimentation in the CCE. The CCE can then enhance knowledge creation by bringing people together to share and create ideas or knowledge that might not emerge otherwise. Our prototype CCE, developed to demonstrate this design philosophy, already enables multiple members to log in and manipulate virtual molecules running on a cloud server with no noticeable network latency, thanks to real-time artificial intelligence techniques. The CCE plays an essential role in the rational design of molecular robot parts, which consist of biomolecules such as DNA and proteins.
Affiliation(s)
- Akihiko Konagaya
- Molecular Robotics Research Institute, Co., Ltd., 4259-3, Nagatsuta, Midori, Yokohama, Japan
- Keisen University, 2-10-1, Minamino, Tama, Tokyo, Japan
- Gregory Gutmann
- Molecular Robotics Research Institute, Co., Ltd., 4259-3, Nagatsuta, Midori, Yokohama, Japan
- Yuhui Zhang
- Molecular Robotics Research Institute, Co., Ltd., 4259-3, Nagatsuta, Midori, Yokohama, Japan
|
15
|
Zhu H, Tang X, Chen T, Yang J, Wang A, Zhang M. Audiovisual illusion training improves multisensory temporal integration. Conscious Cogn 2023; 109:103478. [PMID: 36753896 DOI: 10.1016/j.concog.2023.103478]
Abstract
When we perceive external physical stimuli from the environment, the brain must remain somewhat flexible to unaligned stimuli within a specific range, because multisensory signals are subject to different transmission and processing delays. Recent studies have shown that the width of the temporal binding window (TBW) can be reduced by perceptual learning. However, the vast majority of studies examining the mechanisms of perceptual learning have focused on experience-dependent effects, and no consensus has been reached on its relationship with perception as influenced by audiovisual illusions. Sound-induced flash illusion (SiFI) training is a reliable method for improving perceptual sensitivity. The present study used the classic auditory-dominated SiFI paradigm with feedback training to investigate the effect of five days of SiFI training on multisensory temporal integration, as evaluated by a simultaneity judgment (SJ) task and a temporal order judgment (TOJ) task. We demonstrate that audiovisual illusion training enhances the precision of multisensory temporal integration in the form of (i) a shift of the point of subjective simultaneity (PSS) toward physical simultaneity (0 ms) and (ii) a narrowing TBW. The results are consistent with a Bayesian model of causal inference, suggesting that perceptual learning reduces susceptibility to the SiFI while improving the precision of audiovisual temporal estimation.
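For context on how the PSS and TBW reported above are typically obtained, here is a minimal illustrative sketch: a Gaussian is fitted to the proportion of "simultaneous" responses across stimulus-onset asynchronies (SOAs); the fitted center serves as the PSS and the fitted width as one common TBW summary. The function name, the simple grid-search fit, and the data are hypothetical, not the study's analysis code:

```python
import numpy as np

def fit_sj_gaussian(soa, p_simultaneous):
    """Fit p(SOA) ~ a * exp(-(SOA - pss)^2 / (2 * sigma^2)) by grid search.
    PSS = center of the fitted curve (the asynchrony most often judged
    simultaneous); sigma is a common summary of the TBW's width."""
    soa = np.asarray(soa, float)
    p = np.asarray(p_simultaneous, float)
    best = None
    for pss in np.arange(-100, 101, 1.0):
        for sigma in np.arange(20, 301, 1.0):
            pred = p.max() * np.exp(-(soa - pss) ** 2 / (2 * sigma ** 2))
            err = np.sum((p - pred) ** 2)
            if best is None or err < best[0]:
                best = (err, pss, sigma)
    _, pss, sigma = best
    return pss, sigma

# Illustrative (fabricated) simultaneity-judgment data; SOA in ms
# (negative = auditory leads):
soa = [-200, -100, -50, 0, 50, 100, 200]
p = [0.10, 0.45, 0.80, 0.95, 0.85, 0.50, 0.12]
pss, sigma = fit_sj_gaussian(soa, p)
# A PSS near 0 ms and a smaller sigma correspond to the post-training
# improvements the paper reports (PSS shift toward 0, narrower TBW).
```

A narrower TBW after training would show up here as a smaller fitted sigma on the post-training data.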
Affiliation(s)
- Haocheng Zhu
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Tingji Chen
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
|
16
|
Yang W, Yang X, Guo A, Li S, Li Z, Lin J, Ren Y, Yang J, Wu J, Zhang Z. Audiovisual integration of the dynamic hand-held tool at different stimulus intensities in aging. Front Hum Neurosci 2022; 16:968987. [PMID: 36590067 PMCID: PMC9794578 DOI: 10.3389/fnhum.2022.968987]
Abstract
Introduction: Compared with the audiovisual integration of younger adults, the same process appears more complex and unstable in older adults. Previous research has found that stimulus intensity is one of the most important factors influencing audiovisual integration. Methods: The present study compared differences in audiovisual integration between older and younger adults using dynamic hand-held tool stimuli, such as a hammer striking the floor. The effects of stimulus intensity on audiovisual integration were also compared. The intensity of the visual and auditory stimuli was regulated by modulating the contrast level and sound pressure level. Results: Behavioral results showed that both older and younger adults responded faster and with higher hit rates to audiovisual stimuli than to visual or auditory stimuli alone. Event-related potentials (ERPs) further revealed that during the early stage of 60-100 ms, in the low-intensity condition, audiovisual integration over the anterior brain region was greater in older adults than in younger adults; in the high-intensity condition, however, audiovisual integration over the right hemisphere was greater in younger adults than in older adults. Moreover, in older adults, audiovisual integration was greater in the low-intensity condition than in the high-intensity condition during the 60-100 ms, 120-160 ms, and 220-260 ms periods, showing inverse effectiveness. In contrast, audiovisual integration in younger adults did not differ across intensity conditions. Discussion: The results suggest an age-related dissociation between high- and low-intensity conditions in audiovisual integration of dynamic hand-held tool stimuli. Older adults showed greater audiovisual integration in the lower-intensity condition, which may be due to the activation of compensatory mechanisms.
Affiliation(s)
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
- Xiangfu Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Ao Guo
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Zimo Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Jinfei Lin
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yanna Ren
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Zhilin Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Correspondence: Yanna Ren, Zhilin Zhang
|
17
|
Schulze M, Aslan B, Jung P, Lux S, Philipsen A. Robust perceptual-load-dependent audiovisual integration in adult ADHD. Eur Arch Psychiatry Clin Neurosci 2022; 272:1443-1451. [PMID: 35380238 PMCID: PMC9653355 DOI: 10.1007/s00406-022-01401-z]
Abstract
We perceive our daily life through multiple senses (e.g., vision and audition). For a coherent percept, our brain binds these multiple streams of sensory stimulation, a process called multisensory integration (MI). Depending on stimulus complexity, MI is triggered either early, in a bottom-up fashion, or late, via top-down attentional deployment. Adult attention-deficit/hyperactivity disorder (ADHD) has been associated with intact bottom-up but deficient top-down MI. In the current study, we investigated the robustness of bottom-up MI by adding task demands that varied perceptual load. We hypothesized diminished bottom-up MI under high perceptual load in patients with ADHD. Eighteen adult patients with ADHD and 18 age- and gender-matched healthy controls participated in this study. In a visual search paradigm, a target letter was surrounded by uniform distractors (low load) or by different letters (high load). Additionally, unimodal (visual flash, auditory beep) or multimodal (audiovisual) stimuli flanked the visual search. Linear mixed modeling was used to investigate the influence of load on reaction times, and the race model inequality was calculated. Patients with ADHD showed a degree of MI performance similar to that of healthy controls, irrespective of the perceptual load manipulation. ADHD patients violated the race model in the low-load but not the high-load condition. Bottom-up MI thus appears robust and independent of perceptual load in ADHD patients, although sensory accumulation may be altered when attentional demands are high.
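The race model inequality mentioned above is Miller's bound: under a parallel "race" of unisensory processes, the multisensory reaction-time CDF should not exceed the sum of the unisensory CDFs; faster multisensory responses than this bound indicate genuine integration. A minimal sketch of the test follows; the function name and the reaction times are illustrative, not the study's data or code:

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Evaluate Miller's race model inequality:
    P(RT <= t | AV) <= P(RT <= t | A) + P(RT <= t | V).
    Returns, for each t, how far the multisensory empirical CDF exceeds
    the race-model bound (positive values indicate a violation)."""
    def ecdf(samples, t):
        samples = np.sort(np.asarray(samples, float))
        return np.searchsorted(samples, t, side="right") / len(samples)
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Illustrative (fabricated) reaction times in ms:
rt_a = [320, 340, 360, 380, 400]   # auditory-only
rt_v = [330, 350, 370, 390, 410]   # visual-only
rt_av = [280, 300, 310, 330, 350]  # audiovisual, faster than either
t = np.arange(250, 450, 10)
violation = race_model_violation(rt_a, rt_v, rt_av, t)
# violation.max() > 0 here: the audiovisual CDF outruns the race-model bound
```

In a low-load condition like the one described above, a positive maximum violation across the fastest quantiles is the usual evidence for integration beyond statistical facilitation.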
Affiliation(s)
- Marcel Schulze
- Department of Psychiatry and Psychotherapy, University of Bonn, 53127, Bonn, Germany
- Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Behrem Aslan
- Department of Psychiatry and Psychotherapy, University of Bonn, 53127, Bonn, Germany
- Paul Jung
- Department of Psychiatry and Psychotherapy, University of Bonn, 53127, Bonn, Germany
- Silke Lux
- Department of Psychiatry and Psychotherapy, University of Bonn, 53127, Bonn, Germany
- Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Alexandra Philipsen
- Department of Psychiatry and Psychotherapy, University of Bonn, 53127, Bonn, Germany
|
18
|
Gao C, Green JJ, Yang X, Oh S, Kim J, Shinkareva SV. Audiovisual integration in the human brain: a coordinate-based meta-analysis. Cereb Cortex 2022; 33:5574-5584. [PMID: 36336347 PMCID: PMC10152097 DOI: 10.1093/cercor/bhac443]
Abstract
People can seamlessly integrate a vast array of information from what they see and hear in a noisy and uncertain world. However, the neural underpinnings of audiovisual integration continue to be debated. Using strict inclusion criteria, we performed an activation likelihood estimation meta-analysis on 121 neuroimaging experiments with a total of 2,092 participants. We found that audiovisual integration is linked with the coexistence of multiple integration sites, including early cortical, subcortical, and higher association areas. Although activity was consistently found within the superior temporal cortex, different portions of this cortical region were identified depending on the analytical contrast used, the complexity of the stimuli, and the modality within which attention was directed. This context-dependent neural activity suggests a flexible rather than fixed neural pathway for audiovisual integration. Together, our findings support a flexible multiple-pathways model for audiovisual integration, with the superior temporal cortex as the central node in these neural assemblies.
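Activation likelihood estimation (ALE), the method used above, treats each experiment's reported foci as centers of spatial probability kernels and combines experiments as a probabilistic union, so that voxels where independent experiments converge score higher than any single report. A toy sketch follows; the function name, grid, and kernel peak are hypothetical simplifications (real ALE uses brain templates and sample-size-dependent kernels):

```python
import numpy as np

def ale_map(foci_per_experiment, grid_shape=(20, 20, 20), fwhm_vox=3.0, peak=0.5):
    """Toy ALE: each experiment contributes a modeled-activation (MA) map,
    the voxel-wise maximum of Gaussian kernels centered on its reported foci;
    the ALE score is the probabilistic union across experiments,
    1 - prod(1 - MA)."""
    sigma = fwhm_vox / 2.355          # convert FWHM to a standard deviation
    zz, yy, xx = np.indices(grid_shape)
    not_active = np.ones(grid_shape)  # P(no experiment activates this voxel)
    for foci in foci_per_experiment:
        ma = np.zeros(grid_shape)
        for (z, y, x) in foci:
            d2 = (zz - z) ** 2 + (yy - y) ** 2 + (xx - x) ** 2
            ma = np.maximum(ma, peak * np.exp(-d2 / (2 * sigma ** 2)))
        not_active *= 1.0 - ma
    return 1.0 - not_active

# Two hypothetical experiments reporting nearby foci (voxel coordinates):
ale = ale_map([[(10, 10, 10)], [(10, 11, 10)]])
# Where the two reports converge, the ALE score exceeds a single kernel's peak.
```

Convergence across many experiments, assessed against a null of random foci placement, is what identifies consistent integration sites such as the superior temporal cortex.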
Affiliation(s)
- Chuanji Gao
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Jessica J Green
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
- Xuan Yang
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
- Sewon Oh
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
- Jongwan Kim
- Department of Psychology, Jeonbuk National University, Jeonju, South Korea
- Svetlana V Shinkareva
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA
|
19
|
Yang W, Li S, Guo A, Li Z, Yang X, Ren Y, Yang J, Wu J, Zhang Z. Auditory attentional load modulates the temporal dynamics of audiovisual integration in older adults: An ERPs study. Front Aging Neurosci 2022; 14:1007954. [PMID: 36325188 PMCID: PMC9618958 DOI: 10.3389/fnagi.2022.1007954]
Abstract
As older adults experience declines in perceptual ability, gaining perceptual support from audiovisual integration becomes important. Attending to one or more auditory stimuli while performing other tasks is a common challenge for older adults in everyday life. It is therefore necessary to probe the effects of auditory attentional load on audiovisual integration in older adults. The present study used event-related potentials (ERPs) and a dual-task paradigm [Go/No-go task + rapid serial auditory presentation (RSAP) task] to investigate the temporal dynamics of audiovisual integration. Behavioral results showed that both older and younger adults responded faster and with higher accuracy to audiovisual stimuli than to either visual or auditory stimuli alone. ERPs revealed weaker audiovisual integration under the no-auditory-attentional-load condition at the earlier processing stages and, conversely, stronger integration in the late stages. Moreover, audiovisual integration was greater in older adults than in younger adults in the following time intervals: 60–90, 140–210, and 430–530 ms. Notably, only under the low-load condition, in the 140–210 ms interval, was the audiovisual integration of older adults significantly greater than that of younger adults. These results delineate the temporal dynamics of the interaction between auditory attentional load and audiovisual integration in aging, suggesting that modulating auditory attentional load affects audiovisual integration, enhancing it in older adults.
Affiliation(s)
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Ao Guo
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Zimo Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Xiangfu Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yanna Ren
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhilin Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Correspondence: Yanna Ren, Zhilin Zhang
|
20
|
Quintero SI, Shams L, Kamal K. Changing the Tendency to Integrate the Senses. Brain Sci 2022; 12:brainsci12101384. [PMID: 36291318 PMCID: PMC9599885 DOI: 10.3390/brainsci12101384]
Abstract
Integration of sensory signals that emanate from the same source, such as the sight of a speaker's lip articulations and the sound of their voice, can improve perception of the source signal (e.g., speech). Because momentary sensory inputs are typically corrupted by internal and external noise, there is almost always a discrepancy between the inputs, facing the perceptual system with the problem of determining whether the two signals were caused by the same source or by different sources. Thus, whether multisensory stimuli are integrated, and the degree to which they are bound, is influenced by factors such as the prior expectation of a common source. We refer to this factor as the tendency to bind stimuli or, for short, the binding tendency. In theory, the tendency to bind sensory stimuli can be learned from experience through the acquisition of the probabilities of the co-occurrence of the stimuli; it can also be influenced by cognitive knowledge of the environment. The binding tendency varies across individuals and can also vary within an individual over time. Here, we review the studies that have investigated the plasticity of binding tendency. We discuss the protocols reported to produce changes in binding tendency, the candidate learning mechanisms involved in this process, the possible neural correlates of binding tendency, and outstanding questions pertaining to binding tendency and its plasticity. We conclude by proposing directions for future research and argue that understanding mechanisms and recipes for increasing binding tendency can have important clinical and translational applications for populations or individuals with deficient multisensory integration.
Affiliation(s)
- Saul I Quintero
- Department of Psychology, University of California, Los Angeles, CA 90095, USA
- Ladan Shams
- Department of Psychology, University of California, Los Angeles, CA 90095, USA
- Department of Bioengineering, University of California, Los Angeles, CA 90089, USA
- Neuroscience Interdepartmental Program, University of California, Los Angeles, CA 90089, USA
- Kimia Kamal
- Department of Psychology, University of California, Los Angeles, CA 90095, USA
|
21
|
Object memory is multisensory: Task-irrelevant sounds improve recollection. Psychon Bull Rev 2022; 30:652-665. [PMID: 36167915 PMCID: PMC10040470 DOI: 10.3758/s13423-022-02182-1]
Abstract
Hearing a task-irrelevant sound during object encoding can improve visual recognition memory when the sound is object-congruent (e.g., a dog and a bark). However, previous studies have only used binary old/new memory tests, which do not distinguish between recognition based on recollection of details about the studied event and recognition based on stimulus familiarity. In the present research, we hypothesized that hearing a task-irrelevant but semantically congruent natural sound at encoding would facilitate the formation of richer memory representations, resulting in increased recollection of details of the encoded event. Experiment 1 replicates previous studies showing that participants were more confident about their memory for items initially encoded with a congruent sound than with an incongruent sound. Experiment 2 suggests that congruent object-sound pairings specifically facilitate recollection rather than familiarity-based recognition memory, and Experiment 3 demonstrates that this effect was coupled with more accurate memory for the audiovisual congruency of the item and sound from encoding rather than for another aspect of the episode. These results suggest that even when congruent sounds are task-irrelevant, they promote the formation of multisensory memories and subsequent recollection-based retention. Given the ubiquity of encounters with multisensory objects in our everyday lives, considering their impact on episodic memory is integral to building models of memory that apply to naturalistic settings.
|
22
|
Vogel DHV, Jording M, Esser C, Conrad A, Weiss PH, Vogeley K. Temporal binding of social events less pronounced in individuals with Autism Spectrum Disorder. Sci Rep 2022; 12:14853. [PMID: 36050371 PMCID: PMC9437002 DOI: 10.1038/s41598-022-19309-y]
Abstract
Differences in predictive processing are considered among the prime candidate mechanisms underlying the symptoms of autism spectrum disorder (ASD). A particularly valuable paradigm for investigating these processes is temporal binding (TB) assessed through time estimation tasks. In this study, we report on two separate experiments using a TB task designed to assess the influence of top-down social information on action-event-related TB. Both experiments were performed with a group of individuals diagnosed with ASD and a matched group without ASD. The results replicate earlier findings of pronounced social hyperbinding for social action-event sequences and extend them to persons with ASD. Hyperbinding, however, was less pronounced in the group with ASD than in the group without ASD. We interpret our results as indicative of reduced predictive processing during social interaction. This reduction most likely results from differences in the integration of top-down social information into action-event monitoring. We speculate that this corresponds to differences in mentalizing processes in ASD.
Affiliation(s)
- David H V Vogel
- Institute of Neuroscience and Medicine, Cognitive Neuroscience (INM3), Research Center Juelich, Jülich, Germany
- Faculty of Medicine and University Hospital Cologne, Department of Psychiatry, University of Cologne, Cologne, Germany
- Mathis Jording
- Institute of Neuroscience and Medicine, Cognitive Neuroscience (INM3), Research Center Juelich, Jülich, Germany
- Faculty of Medicine and University Hospital Cologne, Department of Psychiatry, University of Cologne, Cologne, Germany
- Carolin Esser
- Faculty of Medicine and University Hospital Cologne, Department of Psychiatry, University of Cologne, Cologne, Germany
- Amelie Conrad
- Faculty of Medicine and University Hospital Cologne, Department of Psychiatry, University of Cologne, Cologne, Germany
- Peter H Weiss
- Institute of Neuroscience and Medicine, Cognitive Neuroscience (INM3), Research Center Juelich, Jülich, Germany
- Faculty of Medicine and University Hospital Cologne, Department of Neurology, University of Cologne, Cologne, Germany
- Kai Vogeley
- Institute of Neuroscience and Medicine, Cognitive Neuroscience (INM3), Research Center Juelich, Jülich, Germany
- Faculty of Medicine and University Hospital Cologne, Department of Psychiatry, University of Cologne, Cologne, Germany
|
23
|
The relationship between multisensory associative learning and multisensory integration. Neuropsychologia 2022; 174:108336. [PMID: 35872233 DOI: 10.1016/j.neuropsychologia.2022.108336]
Abstract
Integrating sensory information from multiple modalities leads to more precise and efficient perception and behaviour. The process of determining which sensory information should be perceptually bound relies on both low-level stimulus features and multisensory associations learned throughout development based on the statistics of our environment. Here, we explored the relationship between multisensory associative learning and multisensory integration using electroencephalography (EEG) and behavioural measures. Sixty-one participants completed a three-phase study. First, participants were exposed to novel audiovisual shape-tone pairings with frequent and infrequent stimulus pairings and completed a target detection task. The mismatch negativity (MMN) and P3 were derived from the EEG as neural indices of multisensory associative learning. Next, the same learned stimulus pairs were presented in audiovisual as well as unisensory auditory and visual modalities while both early (<120 ms) and late neural indices of multisensory integration were recorded. Finally, participants completed an analogous behavioural speeded-response task, with behavioural indices of multisensory gain calculated using the race model. Significant relationships were found between neural measures of associative learning in fronto-central and occipital areas and early and late indices of multisensory integration in frontal and centro-parietal areas, respectively. Participants who showed stronger indices of associative learning also exhibited stronger indices of multisensory integration of the stimuli they had learned to associate. Furthermore, a significant relationship was found between the neural index of early multisensory integration and behavioural indices of multisensory gain. These results provide insight into how higher-order processes such as associative learning guide multisensory integration.
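The MMN index referenced above is conventionally derived from the deviant-minus-standard difference wave, taking the most negative deflection in an early latency window. A minimal illustrative sketch (hypothetical function name and simulated ERPs, not the study's pipeline):

```python
import numpy as np

def mmn_amplitude(std_erp, dev_erp, times, window=(0.1, 0.25)):
    """Compute a mismatch-negativity (MMN) index as the most negative point
    of the deviant-minus-standard difference wave within a latency window.
    Returns (amplitude in µV, latency in s)."""
    diff = np.asarray(dev_erp) - np.asarray(std_erp)
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(diff[mask])
    return diff[mask][idx], times[mask][idx]

# Hypothetical averaged ERPs sampled at 1 kHz over 0-400 ms:
times = np.arange(0.0, 0.4, 0.001)
std_erp = np.zeros_like(times)
# Simulate a -2 µV deviant-related dip peaking at 150 ms:
dev_erp = -2.0 * np.exp(-((times - 0.15) ** 2) / (2 * 0.02 ** 2))
amp, lat = mmn_amplitude(std_erp, dev_erp, times)
# amp is about -2.0 µV at a latency of about 0.15 s
```

A larger (more negative) amplitude for a deviant stimulus pairing is the kind of neural learning index the study correlates with integration measures.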
|
24
|
Self-prioritization with unisensory and multisensory stimuli in a matching task. Atten Percept Psychophys 2022; 84:1666-1688. [PMID: 35538291 PMCID: PMC9232425 DOI: 10.3758/s13414-022-02498-z]
Abstract
A shape-label matching task is commonly used to examine the self-advantage in motor reaction-time responses (the Self-Prioritization Effect; SPE). In the present study, auditory labels were introduced, and, for the first time, responses to unisensory auditory, unisensory visual, and multisensory object-label stimuli were compared across block types (i.e., trials blocked by sensory modality type, and intermixed trials of unisensory and multisensory stimuli). Auditory stimulus intensity was presented at either 50 dB (Group 1) or 70 dB (Group 2). The participants in Group 2 also completed a multisensory detection task, making simple speeded motor responses to the shape and sound stimuli and their multisensory combinations. In the matching task, the SPE was diminished in intermixed trials and in responses to the unisensory auditory stimuli as compared with the multisensory (visual shape + auditory label) stimuli. In contrast, the SPE did not differ between responses to the unisensory visual and multisensory (auditory object + visual label) stimuli. The matching task was associated with multisensory 'costs' rather than gains, but response times to self- versus stranger-associated stimuli were differentially affected by the type of multisensory stimulus (auditory object + visual label or visual shape + auditory label). The SPE was thus modulated both by block type and by the combination of object and label stimulus modalities. There was no SPE in the detection task. Taken together, these findings suggest that the SPE with unisensory and multisensory stimuli is modulated by both stimulus- and task-related parameters within the matching task. The SPE does not transfer to a significant motor speed gain when the self-associations are not task-relevant.
|
25
|
Semantically congruent audiovisual integration with modal-based attention accelerates auditory short-term memory retrieval. Atten Percept Psychophys 2022; 84:1625-1634. [PMID: 35641858 DOI: 10.3758/s13414-021-02437-4]
Abstract
Evidence has shown that the benefits of multisensory integration for unisensory perception are asymmetric, with auditory perception receiving greater multisensory benefits, especially when the attention focus is directed toward a task-irrelevant visual stimulus. At present, it remains unclear whether the benefits of semantically (in)congruent multisensory integration with modal-based attention for subsequent unisensory short-term memory (STM) retrieval are also asymmetric. Using a delayed matching-to-sample paradigm, the present study investigated this issue by manipulating the attention focus during multisensory memory encoding. The results revealed that both visual and auditory STM retrieval reaction times were faster under semantically congruent multisensory conditions than under unisensory memory encoding conditions. We suggest that coherent multisensory representation formation might be optimized by restricted multisensory encoding and can be rapidly triggered by subsequent unisensory memory retrieval demands. Crucially, auditory STM retrieval was exclusively accelerated by semantically congruent multisensory memory encoding, indicating that the less effective sensory modality of memory retrieval relies more on the prior formation of a coherent multisensory representation optimized by modal-based attention.
|
26
|
De Winne J, Devos P, Leman M, Botteldooren D. With No Attention Specifically Directed to It, Rhythmic Sound Does Not Automatically Facilitate Visual Task Performance. Front Psychol 2022; 13:894366. [PMID: 35756201 PMCID: PMC9226390 DOI: 10.3389/fpsyg.2022.894366]
Abstract
In a century where humans and machines, powered by artificial intelligence or not, increasingly work together, it is of interest to understand human processing of multisensory stimuli in relation to attention and working memory. This paper explores whether and when supporting visual information with rhythmic auditory stimuli can optimize multisensory information processing and, in turn, make interactions between humans, or between humans and machines, more engaging, rewarding, and activating. For this purpose, a novel working memory paradigm was developed in which participants were presented with a series of five target digits randomly interleaved with five distractor digits; their goal was to remember the target digits and recall them orally. Depending on the condition, support was provided by audio and/or rhythm. Sound was expected to improve performance, with a different effect for rhythmic than for non-rhythmic sound, and some variability was expected across participants. The experimental data were analyzed both with classical statistics and with predictive models that estimate outcomes from a range of input variables related to the experiment and the participant. The effect of auditory support was confirmed, but no difference was observed between rhythmic and non-rhythmic sounds. Overall performance was indeed affected by individual differences, such as visual dominance or perceived task difficulty. Surprisingly, a music education did not significantly affect performance and even tended toward a negative effect. To better understand the underlying processes of attention, brain activation data, e.g., electroencephalography (EEG) recordings, should also be collected; this approach can be the subject of future work.
Collapse
Affiliation(s)
- Jorg De Winne
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium.,Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, Ghent, Belgium
| | - Paul Devos
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium
| | - Marc Leman
- Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, Ghent, Belgium
| | - Dick Botteldooren
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium
| |
Collapse
|
27
|
Pinto JO, Dores AR, Peixoto B, Vieira de Melo BB, Barbosa F. Critical review of multisensory integration programs and proposal of a theoretical framework for its combination with neurocognitive training. Expert Rev Neurother 2022; 22:557-566. [PMID: 35722763 DOI: 10.1080/14737175.2022.2092401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
INTRODUCTION The main purpose of this manuscript is to critically review Multisensory Integration (MI) training programs applied to older adults: their characteristics, target sensory systems, efficacy, assessment methods, and results. We also propose an integrated framework to support combined interventions of neurocognitive and sensory training. AREAS COVERED A critical review was conducted covering the most relevant literature on MI training programs applied to older adults. Two MI training programs applied to cognitively healthy older adults were found: (a) audio-visual temporal discrimination training and (b) simultaneity judgment training. Both led to improvements in MI from pre- to post-training; however, only the audio-visual temporal discrimination training led to generalization of the improvements to another MI task. EXPERT OPINION Considering the relationship between sensory and cognitive functioning, this review supports the potential advantages of combining MI with neurocognitive training in the rehabilitation of older adults. We suggest that this can be achieved within the framework of Branched Programmed Neurocognitive Training (BPNT). Criteria for deciding the most suitable multisensory intervention (that is, MI or Multisensory Stimulation) and general guidelines for the development of MI intervention protocols for older adults with or without cognitive impairment are provided.
Collapse
Affiliation(s)
- Joana O Pinto
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal.,Human and Social Sciences Technical and Scientific Area, School of Health, Polytechnic Institute of Porto, Porto, Portugal.,CESPU, University Institute of Health Sciences, Gandra, Portugal
| | - Artemisa R Dores
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal.,Human and Social Sciences Technical and Scientific Area, School of Health, Polytechnic Institute of Porto, Porto, Portugal.,Psychosocial Rehabilitation Laboratory, Center for Rehabilitation Research, School of Health of the Polytechnic of Porto, Porto, Portugal
| | - Bruno Peixoto
- CESPU, University Institute of Health Sciences, Gandra, Portugal.,NeuroGen - Center for Health Technology and Services Research (CINTESIS), Porto, Portugal.,TOXRUN - Toxicology Research Unit, University Institute of Health Sciences, CESPU, Gandra, Portugal
| | - Bruno B Vieira de Melo
- Psychosocial Rehabilitation Laboratory, Center for Rehabilitation Research, School of Health of the Polytechnic of Porto, Porto, Portugal
| | - Fernando Barbosa
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal
| |
Collapse
|
28
|
Cavedoni S, Cipresso P, Mancuso V, Bruni F, Pedroli E. Virtual reality for the assessment and rehabilitation of neglect: where are we now? A 6-year review update. VIRTUAL REALITY 2022; 26:1663-1704. [PMID: 35669614 PMCID: PMC9148943 DOI: 10.1007/s10055-022-00648-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Accepted: 03/24/2022] [Indexed: 06/13/2023]
Abstract
Unilateral spatial neglect (USN) is a frequent repercussion of a cerebrovascular accident, typically a stroke. USN patients fail to orient their attention to the contralesional side to detect auditory, visual, and somatosensory stimuli, as well as to collect and purposely use this information. Traditional methods for USN assessment and rehabilitation include paper-and-pencil procedures, which address cognitive functions as isolated from other aspects of patients' functioning within a real-life context. This might compromise the ecological validity of these procedures and limit their generalizability; moreover, USN evaluation and treatment currently lack a gold standard. The field of technology has provided several promising tools that have been integrated within clinical practice: over the years, a "first wave" promoted computerized methods, which cannot provide an ecological and realistic environment and tasks. A "second wave" has therefore fostered the implementation of virtual reality (VR) devices that, with different degrees of immersiveness, induce a sense of presence and allow patients to actively interact within a life-like setting. The present paper provides an updated, comprehensive picture of VR devices in the assessment and rehabilitation of USN, building on the review of Pedroli et al. (2015). It analyzes the methodological and technological aspects of the selected studies, considering the usability and ecological validity of virtual environments and tasks. Despite the technological advancement, studies in this field lack methodological rigor as well as a proper evaluation of VR usability, and should improve the ecological validity of VR-based assessment and rehabilitation of USN.
Collapse
Affiliation(s)
- S. Cavedoni
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
| | - P. Cipresso
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Department of Psychology, University of Turin, Via Verdi, 10, 10124 Turin, TO Italy
| | - V. Mancuso
- Faculty of Psychology, eCampus University, Novedrate, Italy
| | - F. Bruni
- Faculty of Psychology, eCampus University, Novedrate, Italy
| | - E. Pedroli
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Faculty of Psychology, eCampus University, Novedrate, Italy
| |
Collapse
|
29
|
Janyan A, Shtyrov Y, Andriushchenko E, Blinova E, Shcherbakova O. Look and ye shall hear: Selective auditory attention modulates the audiovisual correspondence effect. Iperception 2022; 13:20416695221095884. [PMID: 35646302 PMCID: PMC9134444 DOI: 10.1177/20416695221095884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Accepted: 04/04/2022] [Indexed: 11/26/2022] Open
Abstract
One of the unresolved questions in multisensory research is that of the automaticity of consistent associations between sensory features from different modalities (e.g., high visual locations associated with high sound pitch). We addressed this issue by examining a possible role of selective attention in the audiovisual correspondence effect. We orthogonally manipulated loudness and pitch, directing participants' attention to the auditory modality only and using pitch and loudness identification tasks. Visual stimuli in high, low or central spatial locations appeared simultaneously with the sounds. If the correspondence effect is automatic, it should not be affected by task changes. The results, however, demonstrated a cross-modal pitch-verticality correspondence effect only when participants' attention was directed to the pitch identification task, but not to the loudness identification task; moreover, the effect was present only in the upper location. The findings underscore the involvement of selective attention in cross-modal associations and support a top-down account of audiovisual correspondence effects.
Collapse
Affiliation(s)
| | | | | | - Ekaterina Blinova
- Laboratory of Behavioural Neurodynamics, Saint Petersburg State University, Saint Petersburg, Russia
- Department of General Psychology, Faculty of Psychology, Saint Petersburg State University, Saint Petersburg, Russia
| | - Olga Shcherbakova
- Laboratory of Behavioural Neurodynamics, Saint Petersburg State University, Saint Petersburg, Russia
- Department of General Psychology, Faculty of Psychology, Saint Petersburg State University, Saint Petersburg, Russia
| |
Collapse
|
30
|
Tang X, Yuan M, Shi Z, Gao M, Ren R, Wei M, Gao Y. Multisensory integration attenuates visually induced oculomotor inhibition of return. J Vis 2022; 22:7. [PMID: 35297999 PMCID: PMC8944392 DOI: 10.1167/jov.22.4.7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Inhibition of return (IOR) is a mechanism of the attention system involving a bias toward novel stimuli and delayed responses to targets at previously attended locations. According to the two-component theory, IOR consists of a perceptual component and an oculomotor component (oculomotor IOR [O-IOR]), depending on whether the eye movement system is activated. Previous studies have shown that multisensory integration weakens IOR when attention is divided between the visual and auditory modalities. However, it remains unclear whether this attenuation of the O-IOR effect by multisensory integration also occurs when the oculomotor system is activated. Here, in two eye movement experiments, we investigated the effect of multisensory integration on O-IOR using the exogenous spatial cueing paradigm. In Experiment 1, we found a greater visual O-IOR effect compared with audiovisual and auditory O-IOR in divided modality attention. The relative multisensory response enhancement (rMRE) and violations of Miller's bound showed a greater magnitude of multisensory integration in the cued location than in the uncued location. In Experiment 2, the magnitude of the audiovisual O-IOR effect was significantly less than that of the visual O-IOR in single visual modality selective attention. Implications for the effect of multisensory integration on O-IOR under conditions of oculomotor system activation are discussed, shedding new light on the two-component theory of IOR.
Collapse
Affiliation(s)
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
| | - Mengying Yuan
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
| | - Zhongyu Shi
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
| | - Min Gao
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
| | - Rongxia Ren
- Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
| | - Ming Wei
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
| | - Yulin Gao
- Department of Psychology, Jilin University, Changchun, China
| |
Collapse
|
31
|
Riva G. Virtual Reality in Clinical Psychology. COMPREHENSIVE CLINICAL PSYCHOLOGY 2022. [PMCID: PMC7500920 DOI: 10.1016/b978-0-12-818697-8.00006-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
32
|
Abstract
In the rapid serial visual presentation (RSVP) paradigm, response accuracy for a target decreases when it appears within a short time window (200–500 ms) after the previous target, a phenomenon termed the attentional blink (AB). Although mechanisms of cross-modal processing that reduce the AB have been documented, the differences across modal attentional conditions have not been explored. In the present study, we used the RSVP paradigm to investigate the effect of auditory-driven visual target perceptual enhancement on the AB under modality-specific selective attention (Experiment 1) and bimodal divided attention (Experiment 2). The results showed that cross-modal attentional enhancement was not moderated by stimulus salience and that accuracy was higher when the attended sound appeared simultaneously with the target. These results indicate that audiovisual enhancement reduced the AB and that the stronger attentional enhancement in the bimodal divided-attention condition led to the disappearance of the AB.
Collapse
|
33
|
Zhao S, Li Y, Wang C, Feng C, Feng W. Updating the dual-mechanism model for cross-sensory attentional spreading: The influence of space-based visual selective attention. Hum Brain Mapp 2021; 42:6038-6052. [PMID: 34553806 PMCID: PMC8596974 DOI: 10.1002/hbm.25668] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 08/24/2021] [Accepted: 09/14/2021] [Indexed: 11/08/2022] Open
Abstract
Selective attention to visual stimuli can spread cross-modally to task-irrelevant auditory stimuli through either a stimulus-driven binding mechanism or a representation-driven priming mechanism. Stimulus-driven attentional spreading occurs whenever a task-irrelevant sound is delivered simultaneously with a spatially attended visual stimulus, whereas representation-driven attentional spreading occurs only when the object representation of the sound is congruent with that of the to-be-attended visual object. The current study recorded event-related potentials in a space-selective visual object-recognition task to examine the exact roles of space-based visual selective attention in both stimulus-driven and representation-driven cross-modal attentional spreading, which remain controversial in the literature. Our results showed that the representation-driven auditory Nd component (200–400 ms after sound onset) did not differ according to whether the peripheral visual representations of audiovisual target objects were spatially attended or not, but was decreased when the auditory representations of target objects were presented alone. In contrast, the stimulus-driven auditory Nd component (200–300 ms) was decreased but still prominent when the peripheral visual constituents of audiovisual nontarget objects were spatially unattended. These findings demonstrate that representation-driven attentional spreading is independent of space-based visual selective attention and benefits in an all-or-nothing manner from object-based visual selection of actually presented visual representations of target objects. They also indicate that, although stimulus-driven attentional spreading is modulated by space-based visual selective attention, attending to the visual modality per se is more likely to be the endogenous determinant of stimulus-driven attentional spreading.
Collapse
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China.,Department of English, School of Foreign Languages, Soochow University, Suzhou, Jiangsu, China
| | - Yang Li
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
| | - Chongzhi Wang
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
| | - Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
| | - Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China.,Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu, China
| |
Collapse
|
34
|
Using Immersive Virtual Reality to Examine How Visual and Tactile Cues Drive the Material-Weight Illusion. Atten Percept Psychophys 2021; 84:509-518. [PMID: 34862589 PMCID: PMC8641965 DOI: 10.3758/s13414-021-02414-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/13/2021] [Indexed: 11/08/2022]
Abstract
The material-weight illusion (MWI) demonstrates how our past experience with material and weight can create expectations that influence the perceived heaviness of an object. Here we used mixed-reality to place touch and vision in conflict, to investigate whether the modality through which materials are presented to a lifter could influence the top-down perceptual processes driving the MWI. University students lifted equally-weighted polystyrene, cork and granite cubes whilst viewing computer-generated images of the cubes in virtual reality (VR). This allowed the visual and tactile material cues to be altered, whilst all other object properties were kept constant. Representation of the objects’ material in VR was manipulated to create four sensory conditions: visual-tactile matched, visual-tactile mismatched, visual differences only and tactile differences only. A robust MWI was induced across all sensory conditions, whereby the polystyrene object felt heavier than the granite object. The strength of the MWI differed across conditions, with tactile material cues having a stronger influence on perceived heaviness than visual material cues. We discuss how these results suggest a mechanism whereby multisensory integration directly impacts how top-down processes shape perception.
Collapse
|
35
|
Yu H, Wang A, Li Q, Liu Y, Yang J, Takahashi S, Ejima Y, Zhang M, Wu J. Semantically Congruent Bimodal Presentation with Divided-Modality Attention Accelerates Unisensory Working Memory Retrieval. Perception 2021; 50:917-932. [PMID: 34841972 DOI: 10.1177/03010066211052943] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Although previous studies have shown that semantic multisensory integration can be differentially modulated by attention focus, it remains unclear whether attentionally mediated multisensory perceptual facilitation can affect further cognitive performance. Using a delayed matching-to-sample paradigm, the present study investigated the effect of semantically congruent bimodal presentation on subsequent unisensory working memory (WM) performance by manipulating attention focus. The results showed that unisensory WM retrieval was faster in the semantically congruent condition than in the incongruent multisensory encoding condition. However, this result was found only in the divided-modality attention condition, indicating that a robust multisensory representation was constructed during semantically congruent multisensory encoding with divided-modality attention; this representation then accelerated unisensory WM performance, especially auditory WM retrieval. Additionally, overall unisensory WM retrieval was faster under the modality-specific selective attention condition than under the divided-modality condition, indicating that dividing attention between two modalities demanded more central executive resources to encode and integrate crossmodal information and to maintain the constructed multisensory representation, leaving fewer resources for WM retrieval. Finally, the present finding may support the amodal view that WM has an amodal central storage component used to maintain modal-based, attention-optimized multisensory representations.
Collapse
Affiliation(s)
- Hongtao Yu
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Japan
| | - Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
| | | | | | | | - Yoshimichi Ejima
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Japan
| | - Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Japan
| | - Jinglong Wu
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Japan
| |
Collapse
|
36
|
Kiepe F, Kraus N, Hesselmann G. Sensory Attenuation in the Auditory Modality as a Window Into Predictive Processing. Front Hum Neurosci 2021; 15:704668. [PMID: 34803629 PMCID: PMC8602204 DOI: 10.3389/fnhum.2021.704668] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Accepted: 10/14/2021] [Indexed: 11/23/2022] Open
Abstract
Self-generated auditory input is perceived less loudly than the same sounds generated externally. The existence of this phenomenon, called Sensory Attenuation (SA), has been studied for decades and is often explained by motor-based forward models. Recent developments in the research of SA, however, challenge these models. We review the current state of knowledge regarding theoretical implications about the significance of Sensory Attenuation and its role in human behavior and functioning. Focusing on behavioral and electrophysiological results in the auditory domain, we provide an overview of the characteristics and limitations of existing SA paradigms and highlight the problem of isolating SA from other predictive mechanisms. Finally, we explore different hypotheses attempting to explain heterogeneous empirical findings, and the impact of the Predictive Coding Framework in this research area.
Collapse
Affiliation(s)
- Fabian Kiepe
- Psychologische Hochschule Berlin (PHB), Berlin Psychological University, Berlin, Germany
| | - Nils Kraus
- Psychologische Hochschule Berlin (PHB), Berlin Psychological University, Berlin, Germany
| | - Guido Hesselmann
- Psychologische Hochschule Berlin (PHB), Berlin Psychological University, Berlin, Germany
| |
Collapse
|
37
|
Riva G, Serino S, Di Lernia D, Pagnini F. Regenerative Virtual Therapy: The Use of Multisensory Technologies and Mindful Attention for Updating the Altered Representations of the Bodily Self. Front Syst Neurosci 2021; 15:749268. [PMID: 34803617 PMCID: PMC8595209 DOI: 10.3389/fnsys.2021.749268] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Accepted: 10/04/2021] [Indexed: 12/25/2022] Open
Abstract
The term “regenerative medicine” (RM) indicates an emerging trend in biomedical sciences that aims at replacing, engineering, or regenerating human cells, tissues, or organs to restore or establish normal function. So far, the focus of RM has been the physical body. Neuroscience, however, now suggests that mental disorders can be broadly characterized by a dysfunction in the way the brain computes and integrates the representations of the inner and outer body across time [bodily self-consciousness (BSC)]. In this perspective, we propose a new kind of clinical intervention, “Regenerative Virtual Therapy” (RVT), which integrates knowledge from different disciplines, from neuroscience to computational psychiatry, to regenerate a distorted or faulty BSC. The main goal of RVT is to use technology-based somatic modification techniques to restructure the maladaptive bodily representations behind a pathological condition. Specifically, starting from a Bayesian model of our BSC (i.e., the body matrix), we suggest the use of mindful attention, cognitive reappraisal, and brain stimulation techniques merged with a highly rewarding and novel synthetic multisensory bodily experience (i.e., a virtual reality full-body illusion in sync with a low-predictability interoceptive modulation) to rewrite a faulty experience of the body and to regenerate the wellbeing of an individual. The use of RVT will also offer an unprecedented experimental overview of the dynamics of our bodily representations, allowing the reverse-engineering of their functioning for hacking them using advanced technologies.
Collapse
Affiliation(s)
- Giuseppe Riva
- Applied Technology for Neuro-Psychology Laboratory, Istituto Auxologico Italiano, Milan, Italy.,Humane Technology Laboratory, Università Cattolica del Sacro Cuore, Milan, Italy.,Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
| | - Silvia Serino
- Humane Technology Laboratory, Università Cattolica del Sacro Cuore, Milan, Italy.,Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
| | - Daniele Di Lernia
- Humane Technology Laboratory, Università Cattolica del Sacro Cuore, Milan, Italy.,Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
| | - Francesco Pagnini
- Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy.,Department of Psychology, Harvard University, Cambridge, MA, United States
| |
Collapse
|
38
|
Precision control for a flexible body representation. Neurosci Biobehav Rev 2021; 134:104401. [PMID: 34736884 DOI: 10.1016/j.neubiorev.2021.10.023] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Revised: 10/20/2021] [Accepted: 10/21/2021] [Indexed: 11/24/2022]
Abstract
Adaptive body representation requires the continuous integration of multisensory inputs within a flexible 'body model' in the brain. The present review evaluates the idea that this flexibility is augmented by top-down contextual modulation of sensory processing, which can be described as precision control within predictive-coding formulations of Bayesian inference. Specifically, I focus on the proposal that an attenuation of proprioception may facilitate the integration of conflicting visual and proprioceptive bodily cues. First, I review empirical work suggesting that the processing of visual versus proprioceptive body position information can be contextualised top-down, for instance by adopting specific attentional task sets. Building on this, I review research showing a similar contextualisation of visual versus proprioceptive information processing in the rubber hand illusion and in visuomotor adaptation. Together, the reviewed literature suggests that proprioception, despite its indisputable importance for body perception and action control, can be attenuated top-down (through precision control) to facilitate the contextual adaptation of the brain's body model to novel visual feedback.
Collapse
|
39
|
Liang X, Koh CL, Yeh CH, Goodin P, Lamp G, Connelly A, Carey LM. Predicting Post-Stroke Somatosensory Function from Resting-State Functional Connectivity: A Feasibility Study. Brain Sci 2021; 11:brainsci11111388. [PMID: 34827387 PMCID: PMC8615819 DOI: 10.3390/brainsci11111388] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Revised: 10/07/2021] [Accepted: 10/18/2021] [Indexed: 12/02/2022] Open
Abstract
Accumulating evidence shows that brain functional deficits may be impacted by damage to remote brain regions. Recent advances in neuroimaging suggest that stroke impairment can be better predicted from disruption to brain networks than from lesion locations or volumes alone. Our aim was to explore the feasibility of predicting post-stroke somatosensory function from brain functional connectivity through the application of machine learning techniques. Somatosensory impairment was measured using the Tactile Discrimination Test. Functional connectivity was employed to model global brain function. Behavioral measures and MRI were collected at the same timepoint. Two machine learning models (linear regression and support vector regression) were chosen to predict somatosensory impairment from disrupted networks. Along with two engineered feature pools (low-order plus high-order functional connectivity, or low-order functional connectivity only), four predictive models were built and evaluated in the present study. Forty-three chronic stroke survivors participated in this study. Results showed that the regression model employing both low-order and high-order functional connectivity predicted outcomes with a correlation coefficient of r = 0.54 (p = 0.0002). A machine learning predictive approach, involving high- and low-order modelling, is feasible for the prediction of residual somatosensory function in stroke patients using functional brain networks.
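The general prediction pipeline this abstract describes can be sketched as follows. This is a hypothetical illustration on synthetic data, not the authors' code: the feature count, noise level, cross-validation scheme, and model settings are assumptions, and only the overall shape (linear regression and support vector regression predicting a continuous behavioral score from connectivity features, evaluated by Pearson's r) follows the abstract.

```python
# Hypothetical sketch: predict a continuous somatosensory score from
# functional-connectivity features with linear regression and SVR,
# evaluated via Pearson's r on leave-one-out out-of-sample predictions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-ins: 43 "participants", each with a flattened
# connectivity vector and a behavioral score linearly related to it.
n_subjects, n_features = 43, 120
X = rng.normal(size=(n_subjects, n_features))        # connectivity features
w = rng.normal(size=n_features)                      # latent weights
y = X @ w + rng.normal(scale=5.0, size=n_subjects)   # behavioral score

for name, model in [("linear", LinearRegression()), ("svr", SVR(kernel="linear"))]:
    # Each subject is predicted by a model fit on the remaining 42.
    y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
    r, p = pearsonr(y, y_pred)
    print(f"{name}: r = {r:.2f} (p = {p:.1e})")
```

Evaluating on out-of-sample predictions, as here, is what makes a reported r meaningful as a feasibility estimate; fitting and correlating on the same 43 subjects would inflate it.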
Affiliation(s)
- Xiaoyun Liang
- Neurorehabilitation and Recovery, Florey Institute of Neuroscience and Mental Health, Melbourne, VIC 3084, Australia; (C.-L.K.); (P.G.); (G.L.); (L.M.C.)
- Victorian Infant Brain Studies (VIBeS) Group, Murdoch Children’s Research Institute, Melbourne, VIC 3052, Australia
- Correspondence:
| | - Chia-Lin Koh
- Neurorehabilitation and Recovery, Florey Institute of Neuroscience and Mental Health, Melbourne, VIC 3084, Australia; (C.-L.K.); (P.G.); (G.L.); (L.M.C.)
- Department of Occupational Therapy, Social Work and Social Policy, School of Allied Health Human Services and Sport, La Trobe University, Melbourne, VIC 3086, Australia
- Department of Occupational Therapy, College of Medicine, National Cheng Kung University, Tainan 701, Taiwan
| | - Chun-Hung Yeh
- Imaging Division, Florey Institute of Neuroscience and Mental Health, Melbourne, VIC 3084, Australia; (C.-H.Y.); (A.C.)
- Institute for Radiological Research, Chang Gung University and Chang Gung Memorial Hospital, Taoyuan 33302, Taiwan
- Department of Psychiatry, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan 33305, Taiwan
| | - Peter Goodin
- Neurorehabilitation and Recovery, Florey Institute of Neuroscience and Mental Health, Melbourne, VIC 3084, Australia; (C.-L.K.); (P.G.); (G.L.); (L.M.C.)
| | - Gemma Lamp
- Neurorehabilitation and Recovery, Florey Institute of Neuroscience and Mental Health, Melbourne, VIC 3084, Australia; (C.-L.K.); (P.G.); (G.L.); (L.M.C.)
- Department of Psychology and Counselling, School of Psychology and Public Health, La Trobe University, Melbourne, VIC 3086, Australia
| | - Alan Connelly
- Imaging Division, Florey Institute of Neuroscience and Mental Health, Melbourne, VIC 3084, Australia; (C.-H.Y.); (A.C.)
| | - Leeanne M. Carey
- Neurorehabilitation and Recovery, Florey Institute of Neuroscience and Mental Health, Melbourne, VIC 3084, Australia; (C.-L.K.); (P.G.); (G.L.); (L.M.C.)
- Department of Occupational Therapy, Social Work and Social Policy, School of Allied Health Human Services and Sport, La Trobe University, Melbourne, VIC 3086, Australia
| |
|
40
|
Karagiorgis AT, Chalas N, Karagianni M, Papadelis G, Vivas AB, Bamidis P, Paraskevopoulos E. Computerized Music-Reading Intervention Improves Resistance to Unisensory Distraction Within a Multisensory Task, in Young and Older Adults. Front Hum Neurosci 2021; 15:742607. [PMID: 34566611 PMCID: PMC8461100 DOI: 10.3389/fnhum.2021.742607] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Accepted: 08/23/2021] [Indexed: 11/13/2022] Open
Abstract
Incoming information from multiple sensory channels competes for attention. Processing the relevant inputs and ignoring distractors, while at the same time monitoring the environment for potential threats, is crucial for survival throughout the lifespan. However, sensory and cognitive mechanisms often decline in aging populations, making them more susceptible to distraction. Previous interventions in older adults have successfully improved resistance to distraction, but the inclusion of multisensory integration, with its unique properties in attentional capture, in the training protocol remains underexplored. Here, we studied whether, and how, a 4-week intervention targeting audiovisual integration affects the ability to deal with task-irrelevant unisensory deviants within a multisensory task. Musically naïve participants engaged in a computerized music-reading game and were asked to detect audiovisual incongruences between the pitch of a song's melody and the position of a disk on the screen, arranged like a simplified music staff. The effects of the intervention were evaluated via behavioral and EEG measurements in young and older adults. Behavioral findings include the absence of age-related differences in distraction and an indirect improvement of performance due to the intervention, seen as an amelioration of response bias. An asymmetry between the effects of auditory and visual deviants was identified and attributed to modality dominance. The electroencephalographic results showed that both groups shared an increase in activation strength after training when processing auditory deviants, located in the left dorsolateral prefrontal cortex. A functional connectivity analysis revealed that only young adults improved the flow of information in a network comprising a fronto-parietal subnetwork and a multisensory temporal area. Overall, both behavioral measures and neurophysiological findings suggest that the intervention was indirectly successful, driving a shift in response strategy in the cognitive domain and higher-level or multisensory brain areas, while leaving lower-level unisensory processing unaffected.
Affiliation(s)
- Alexandros T Karagiorgis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece.,School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Nikolas Chalas
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
| | - Maria Karagianni
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Georgios Papadelis
- School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Ana B Vivas
- Department of Psychology, CITY College, University of York Europe Campus, Thessaloniki, Greece
| | - Panagiotis Bamidis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Evangelos Paraskevopoulos
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece.,Department of Psychology, University of Cyprus, Nicosia, Cyprus
| |
|
41
|
Sun J, Wang Z, Tian X. Manual Gestures Modulate Early Neural Responses in Loudness Perception. Front Neurosci 2021; 15:634967. [PMID: 34539324 PMCID: PMC8440995 DOI: 10.3389/fnins.2021.634967] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Accepted: 08/06/2021] [Indexed: 12/02/2022] Open
Abstract
How different sensory modalities interact to shape perception is a fundamental question in cognitive neuroscience. Previous studies in audiovisual interaction have focused on abstract levels such as categorical representation (e.g., McGurk effect). It is unclear whether the cross-modal modulation can extend to low-level perceptual attributes. This study used motional manual gestures to test whether and how the loudness perception can be modulated by visual-motion information. Specifically, we implemented a novel paradigm in which participants compared the loudness of two consecutive sounds whose intensity changes around the just noticeable difference (JND), with manual gestures concurrently presented with the second sound. In two behavioral experiments and two EEG experiments, we investigated our hypothesis that the visual-motor information in gestures would modulate loudness perception. Behavioral results showed that the gestural information biased the judgment of loudness. More importantly, the EEG results demonstrated that early auditory responses around 100 ms after sound onset (N100) were modulated by the gestures. These consistent results in four behavioral and EEG experiments suggest that visual-motor processing can integrate with auditory processing at an early perceptual stage to shape the perception of a low-level perceptual attribute such as loudness, at least under challenging listening conditions.
Affiliation(s)
- Jiaqiu Sun
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China.,NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
| | - Ziqing Wang
- NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China.,Shanghai Key Laboratory of Brain Functional Genomics, Ministry of Education, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
| | - Xing Tian
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China.,NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China.,Shanghai Key Laboratory of Brain Functional Genomics, Ministry of Education, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
| |
|
42
|
Schulze M, Aslan B, Stöcker T, Stirnberg R, Lux S, Philipsen A. Disentangling early versus late audiovisual integration in adult ADHD: a combined behavioural and resting-state connectivity study. J Psychiatry Neurosci 2021; 46:E528-E537. [PMID: 34548387 PMCID: PMC8526154 DOI: 10.1503/jpn.210017] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Revised: 05/27/2021] [Accepted: 06/21/2021] [Indexed: 01/26/2023] Open
Abstract
BACKGROUND Studies investigating sensory processing in attention-deficit/hyperactivity disorder (ADHD) have shown altered visual and auditory processing. However, evidence is lacking for audiovisual interplay - namely, multisensory integration. Moreover, neuronal dysregulation at rest (e.g., aberrant within- or between-network functional connectivity) may account for difficulties with integration across the senses in ADHD. We investigated whether sensory processing was altered at the multimodal level in adult ADHD and included resting-state functional connectivity to illustrate a possible overlap between deficient network connectivity and the ability to integrate stimuli. METHODS We tested 25 patients with ADHD and 24 healthy controls using 2 illusionary paradigms: the sound-induced flash illusion and the McGurk illusion. We applied the Mann-Whitney U test to assess statistical differences between groups. We acquired resting-state functional MRIs on a 3.0 T Siemens magnetic resonance scanner, using a highly accelerated 3-dimensional echo planar imaging sequence. RESULTS For the sound-induced flash illusion, susceptibility and reaction time did not differ between the 2 groups. For the McGurk illusion, susceptibility was significantly lower for patients with ADHD, and reaction times were significantly longer. At the neuronal level, resting-state functional connectivity in the ADHD group was higher in polymodal regions that play a role in binding unimodal sensory inputs from different modalities and enabling sensory-to-cognition integration. LIMITATIONS We did not explicitly screen for autism spectrum disorder, which has high rates of comorbidity with ADHD and also involves impairments in multisensory integration. Although the patients were carefully screened by our outpatient department, we could not rule out the possibility of autism spectrum disorder in some participants.
CONCLUSION Unimodal hypersensitivity seems to have no influence on the integration of basal stimuli, but it might have negative consequences for the multisensory integration of complex stimuli. This finding was supported by observations of higher resting-state functional connectivity between unimodal sensory areas and polymodal multisensory integration convergence zones for complex stimuli.
Affiliation(s)
- Marcel Schulze
- From the Department of Psychiatry and Psychotherapy, University of Bonn, Bonn, Germany (Schulze, Aslan, Lux, Philipsen); Biopsychology and Cognitive Neuroscience, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany (Schulze); the German Centre for Neurodegenerative Diseases (DZNE), Bonn, Germany (Stöcker, Stirnberg); and the Department of Physics and Astronomy, University of Bonn, Bonn, Germany (Stöcker)
|
43
|
Multisensory Exercise Improves Balance in People with Balance Disorders: A Systematic Review. Curr Med Sci 2021; 41:635-648. [PMID: 34403086 DOI: 10.1007/s11596-021-2417-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 08/04/2021] [Indexed: 10/20/2022]
Abstract
OBJECTIVE To examine the effect of multisensory exercise on balance disorders. METHODS PubMed, Scopus and Web of Science were searched to identify eligible studies published before January 1, 2020. Eligible studies included randomized control trials (RCTs), non-randomized studies, case-control studies, and cohort studies. The methodological quality of the included studies was evaluated independently by two researchers using the JBI Critical Appraisal Checklists for RCTs and for Quasi-Experimental Studies. A narrative synthesis of intervention characteristics and health-related outcomes was performed. RESULTS A total of 11 non-randomized studies and 9 RCTs were eligible, including 667 participants. The results supported our assumption that multisensory exercise improved balance in people with balance disorders. All 20 studies were judged to be of high or moderate quality. CONCLUSION Our study confirmed that multisensory exercise was effective in improving balance in people with balance disorders. Multisensory exercise could lower the risk of falls and enhance confidence, thereby improving quality of life. Further research is needed to investigate the optimal strategy for multisensory exercises and to explore the underlying neural and molecular mechanisms of the balance improvement they bring about.
|
44
|
Wang A, Zhou H, Hu Y, Wu Q, Zhang T, Tang X, Zhang M. Endogenous Spatial Attention Modulates the Magnitude of the Colavita Visual Dominance Effect. Iperception 2021; 12:20416695211027186. [PMID: 34290850 PMCID: PMC8278468 DOI: 10.1177/20416695211027186] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2021] [Accepted: 06/03/2021] [Indexed: 10/25/2022] Open
Abstract
The Colavita effect refers to the phenomenon wherein people tend not to respond to an auditory stimulus when a visual stimulus is simultaneously presented. Although previous studies have shown that endogenous modality attention influences the Colavita effect, whether the Colavita effect is influenced by endogenous spatial attention remains unknown. In the present study, we established endogenous spatial cues to investigate whether the size of the Colavita effect changes under visual or auditory cues. We measured three indexes to investigate the effect of endogenous spatial attention on the size of the Colavita effect. These three indexes were developed based on the following observations in bimodal trials: (a) The proportion of the "only vision" response was significantly higher than that of the "only audition" response; (b) the proportion of the "vision precedes audition" response was significantly higher than that of the "audition precedes vision" response; and (c) the reaction time difference of the "vision precedes audition" response was significantly larger than that of the "audition precedes vision" response. Our results showed that the Colavita effect was always influenced by endogenous spatial attention and that its size was larger at the cued location than at the uncued location; the cue modality (visual vs. auditory) had no effect on the size of the Colavita effect. Taken together, the present results shed light on how endogenous spatial attention affects the Colavita effect.
Affiliation(s)
| | | | - Yuanyuan Hu
- Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
| | - Qiong Wu
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China
| | - Tianyang Zhang
- School of Public Health, Medical College of Soochow University, Suzhou, China
| | - Xiaoyu Tang
- School of Psychology, Liaoning Normal University, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
| | - Ming Zhang
- Department of Psychology, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
| |
|
45
|
The N400 and late occipital positivity in processing dynamic facial expressions with natural emotional voice. Neuroreport 2021; 32:858-863. [PMID: 34029292 DOI: 10.1097/wnr.0000000000001669] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
People require multimodal emotional interactions to live in a social environment. Several studies using dynamic facial expressions and emotional voices have reported that multimodal emotional incongruency evokes an early sensory component of event-related potentials (ERPs), while others have found a late cognitive component. How these two different results are to be integrated remains unclear. We speculate that it is semantic analysis in a multimodal integration framework that evokes the late ERP component. An electrophysiological experiment was conducted using emotionally congruent or incongruent dynamic faces and natural voices to promote semantic analysis. To investigate the top-down modulation of the ERP component, attention was manipulated via two tasks that directed participants to attend to facial versus vocal expressions. Our results revealed interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N400 ERP amplitudes but not N1 and P2 amplitudes, for incongruent emotional face-voice combinations only in the face-attentive task. A late occipital positive potential amplitude emerged only during the voice-attentive task. Overall, these findings support the idea that semantic analysis is a key factor in evoking the late cognitive component. The task effect for these ERPs suggests that top-down attention alters not only the ERP amplitude but also the ERP component per se. Our results implicate a principle of emotional face-voice processing in the brain that may underlie complex audiovisual interactions in everyday communication.
|
46
|
Tabas A, von Kriegstein K. Adjudicating Between Local and Global Architectures of Predictive Processing in the Subcortical Auditory Pathway. Front Neural Circuits 2021; 15:644743. [PMID: 33776657 PMCID: PMC7994860 DOI: 10.3389/fncir.2021.644743] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Accepted: 02/16/2021] [Indexed: 11/13/2022] Open
Abstract
Predictive processing, a leading theoretical framework for sensory processing, suggests that the brain constantly generates predictions about the sensory world and that perception emerges from the comparison between these predictions and the actual sensory input. This requires two distinct neural elements: generative units, which encode the model of the sensory world; and prediction error units, which compare these predictions against the sensory input. Although predictive processing is generally portrayed as a theory of cerebral cortex function, animal and human studies over the last decade have robustly shown the ubiquitous presence of prediction error responses in several nuclei of the auditory, somatosensory, and visual subcortical pathways. In the auditory modality, prediction error is typically elicited using so-called oddball paradigms, where sequences of repeated pure tones with the same pitch are substituted at unpredictable intervals by a tone of deviant frequency. Repeated sounds become predictable promptly and elicit decreasing prediction error; deviant tones break these predictions and elicit large prediction errors. The simplicity of the rules inducing predictability makes oddball paradigms agnostic about the origin of the predictions. Here, we introduce two possible models of the organizational topology of the predictive processing auditory network: (1) the global view, which assumes that predictions on the sensory input are generated at high-order levels of the cerebral cortex and transmitted in a cascade of generative models to the subcortical sensory pathways; and (2) the local view, which assumes that independent local models, computed using local information, are used to perform predictions at each processing stage. In the global view, information encoding is optimized globally, but sensory representations along the entire brain are biased according to the subjective views of the observer. The local view results in diminished coding efficiency, but guarantees in return a robust encoding of the features of the sensory input at each processing stage. Although most experimental results to date are ambiguous in this respect, recent evidence favors the global model.
Affiliation(s)
- Alejandro Tabas
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany.,Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Katharina von Kriegstein
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany.,Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
|
47
|
Song Y, Yao M, Kemprecos H, Byrne A, Xiao Z, Zhang Q, Singh A, Wang J, Chen ZS. Predictive coding models for pain perception. J Comput Neurosci 2021; 49:107-127. [PMID: 33595765 DOI: 10.1007/s10827-021-00780-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Revised: 12/14/2020] [Accepted: 01/29/2021] [Indexed: 12/31/2022]
Abstract
Pain is a complex, multidimensional experience that involves dynamic interactions between sensory-discriminative and affective-emotional processes. Pain experiences are highly variable depending on their context and prior anticipation. Viewing pain perception as a perceptual inference problem, we propose a predictive coding paradigm to characterize evoked and non-evoked pain. We record the local field potentials (LFPs) from the primary somatosensory cortex (S1) and the anterior cingulate cortex (ACC) of freely behaving rats, two regions known to encode the sensory-discriminative and affective-emotional aspects of pain, respectively. We further use predictive coding to investigate the temporal coordination of oscillatory activity between the S1 and ACC. Specifically, we develop a phenomenological predictive coding model to describe the macroscopic dynamics of bottom-up and top-down activity. Supported by recent experimental data, we also develop a biophysical neural mass model to describe the mesoscopic neural dynamics in the S1 and ACC populations, in both naive and chronic pain-treated animals. Our proposed predictive coding models not only replicate important experimental findings, but also provide new predictions about the impact of the model parameters on the physiological or behavioral read-out, thereby yielding mechanistic insight into the uncertainty of expectation, placebo or nocebo effects, and chronic pain.
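The basic mechanism underlying predictive coding models like the ones described above (a generative unit tracking the expected input, and a prediction-error unit reporting the mismatch) can be illustrated with a toy update rule. This is a hypothetical sketch, not the authors' S1-ACC model; the learning rate and stimulus sequence are arbitrary.

```python
# Toy predictive-coding dynamics: the error shrinks as a repeated input
# becomes predictable and spikes again when the input changes.
import numpy as np

def simulate(stimuli, learning_rate=0.3):
    mu = 0.0                     # generative unit: current expectation
    errors = []
    for x in stimuli:
        e = x - mu               # prediction-error unit: mismatch signal
        mu += learning_rate * e  # update expectation toward the input
        errors.append(abs(e))
    return np.array(errors)

# Repeated "standard" input followed by one deviant (cf. oddball paradigms).
stimuli = [1.0] * 8 + [2.0] + [1.0] * 4
err = simulate(stimuli)
print(err.round(3))
# The error decays geometrically over the standards and jumps at the
# deviant (index 8), mimicking a prediction-error response.
```

The same two-unit motif can be stacked across processing stages, which is how hierarchical predictive coding models are usually built.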
Affiliation(s)
- Yuru Song
- Department of Psychiatry, New York University School of Medicine, New York, USA.,Department of Biology, University of California, San Diego, USA
| | - Mingchen Yao
- Department of Psychiatry, New York University School of Medicine, New York, USA.,Kuang Yaming Honors School, Nanjing University, Nanjing, China
| | - Helen Kemprecos
- Department of Biochemistry, New York University, New York, USA
| | - Aine Byrne
- Center for Neural Science, New York University, New York, USA
| | - Zhengdong Xiao
- Department of Psychiatry, New York University School of Medicine, New York, USA
| | - Qiaosheng Zhang
- Department of Anesthesiology, Pain and Operative Medicine, New York University School of Medicine, New York, USA
| | - Amrita Singh
- Department of Anesthesiology, Pain and Operative Medicine, New York University School of Medicine, New York, USA
| | - Jing Wang
- Department of Anesthesiology, Pain and Operative Medicine, New York University School of Medicine, New York, USA.,Department of Neuroscience and Physiology, New York University School of Medicine, New York, USA.,Neuroscience Institute, New York University School of Medicine, New York, USA
| | - Zhe S Chen
- Department of Psychiatry, New York University School of Medicine, New York, USA. .,Department of Neuroscience and Physiology, New York University School of Medicine, New York, USA. .,Neuroscience Institute, New York University School of Medicine, New York, USA.
| |
|
48
|
Merz S, Frings C, Spence C. When irrelevant information helps: Extending the Eriksen-flanker task into a multisensory world. Atten Percept Psychophys 2021; 83:776-789. [PMID: 32514664 PMCID: PMC7884353 DOI: 10.3758/s13414-020-02066-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Charles W. Eriksen dedicated much of his research career to the field of cognitive psychology, investigating human information processing in situations that required selection between competing stimuli. Together with his wife Barbara, he introduced the flanker task, which became one of the standard experimental tasks used by researchers to investigate the mechanisms underpinning selection. Although Eriksen himself was primarily interested in visual selection, the flanker task was eventually adapted by other researchers to investigate human information processing and selection in a variety of nonvisual and multisensory situations. Here, we discuss the core aspects of the flanker task and interpret the evidence obtained when it is used in crossmodal and multisensory settings. "Selection" has been a core topic of psychology for nearly 120 years. Nowadays, though, it is clear that we need to look at selection from a multisensory perspective - the flanker task, at least in its crossmodal and multisensory variants, is an important tool with which to investigate selection, attention, and multisensory information processing.
Affiliation(s)
- Simon Merz
- Department of Psychology, Cognitive Psychology, University of Trier, Universitätsring 15, 54286, Trier, Germany.
| | - Christian Frings
- Department of Psychology, Cognitive Psychology, University of Trier, Universitätsring 15, 54286, Trier, Germany
| | - Charles Spence
- Department of Experimental Psychology, University of Oxford, Oxford, UK
| |
|
49
|
Barutchu A, Spence C. Top-down task-specific determinants of multisensory motor reaction time enhancements and sensory switch costs. Exp Brain Res 2021; 239:1021-1034. [PMID: 33515085 PMCID: PMC7943519 DOI: 10.1007/s00221-020-06014-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Accepted: 12/08/2020] [Indexed: 12/19/2022]
Abstract
This study was designed to investigate the complex interplay between multisensory processing, top–down processes related to the task relevance of sensory signals, and sensory switching. Thirty-five adults completed either a speeded detection or a discrimination task using the same auditory and visual stimuli and experimental setup. The stimuli consisted of unisensory and multisensory presentations of the letters ‘b’ and ‘d’. The multisensory stimuli were either congruent (e.g., the grapheme ‘b’ with the phoneme /b/) or incongruent (e.g., the grapheme ‘b’ with the phoneme /d/). In the detection task, the participants had to respond to all of the stimuli as rapidly as possible while, in the discrimination task, they only responded on those trials where one prespecified letter (either ‘b’ or ‘d’) was present. Incongruent multisensory stimuli resulted in faster responses as compared to unisensory stimuli in the detection task. In the discrimination task, only the dual-target congruent stimuli resulted in faster RTs, while the incongruent multisensory stimuli led to slower RTs than to unisensory stimuli; RTs were the slowest when the visual (rather than the auditory) signal was irrelevant, thus suggesting visual dominance. Switch costs were also observed when switching between unisensory target stimuli, while dual-target multisensory stimuli were less likely to be affected by sensory switching. Taken together, these findings suggest that multisensory motor enhancements and sensory switch costs are influenced by top–down modulations determined by task instructions, which can override the influence of prior learnt associations.
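The two quantities this study revolves around, multisensory reaction-time enhancement and modality switch costs, can be computed from a trial list as sketched below. The data are synthetic, with a gain and a switch cost deliberately built in; this is not the authors' analysis, and the effect sizes are invented.

```python
# Hypothetical illustration: multisensory RT gain and modality switch cost
# computed from a sequence of auditory (A), visual (V), and audiovisual (AV)
# trials with synthetic reaction times.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
modalities = rng.choice(["A", "V", "AV"], size=n)   # trial modality sequence
rt = rng.normal(400.0, 40.0, size=n)                # baseline RTs in ms
rt[modalities == "AV"] -= 30                        # built-in multisensory gain

# Unisensory trials preceded by another unisensory trial: split into
# modality switches (A->V, V->A) and repeats (A->A, V->V).
prev = np.roll(modalities, 1)
uni = np.isin(modalities, ["A", "V"]) & np.isin(prev, ["A", "V"])
uni[0] = False                                      # first trial has no predecessor
switch = uni & (modalities != prev)
repeat = uni & (modalities == prev)
rt[switch] += 25                                    # built-in switch cost

# Multisensory enhancement: bimodal RT vs the faster unisensory mean.
gain = min(rt[modalities == "A"].mean(),
           rt[modalities == "V"].mean()) - rt[modalities == "AV"].mean()
# Switch cost: switch trials vs repeat trials.
switch_cost = rt[switch].mean() - rt[repeat].mean()
print(f"multisensory gain ~ {gain:.1f} ms, switch cost ~ {switch_cost:.1f} ms")
```

Comparing the bimodal mean against the faster unisensory condition (rather than either one alone) is the conservative convention for claiming a multisensory enhancement.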
Affiliation(s)
- Ayla Barutchu
- Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK.
| | - Charles Spence
- Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK
| |
|
50
|
Paraskevopoulos E, Chalas N, Karagiorgis A, Karagianni M, Styliadis C, Papadelis G, Bamidis P. Aging Effects on the Neuroplastic Attributes of Multisensory Cortical Networks as Triggered by a Computerized Music Reading Training Intervention. Cereb Cortex 2021; 31:123-137. [PMID: 32794571 DOI: 10.1093/cercor/bhaa213] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 07/08/2020] [Accepted: 07/13/2020] [Indexed: 12/24/2022] Open
Abstract
The constant increase in the graying population is the result of a great expansion of life expectancy. A smaller expansion of healthy cognitive and brain functioning diminishes the gains achieved by longevity. Music training, as a special case of multisensory learning, may induce restorative neuroplasticity in older age. The current study aimed to explore aging effects on the cortical network supporting multisensory cognition and to define aging effects on the network's neuroplastic attributes. A computer-based music-reading protocol was developed and evaluated via electroencephalography measurements pre- and post-training in young and older adults. Results revealed that multisensory integration is performed via diverse strategies in the two groups: older adults employ higher-order supramodal areas to a greater extent than lower-level perceptual regions, in contrast to younger adults, indicating an age-related shift in the weight of each processing strategy. Restorative neuroplasticity was revealed in the left inferior frontal gyrus and right medial temporal gyrus as a result of the training, while task-related reorganization of cortical connectivity was obstructed in the group of older adults, probably due to systemic maturation mechanisms. On the contrary, younger adults significantly increased functional connectivity among the regions supporting multisensory integration.
Affiliation(s)
- Evangelos Paraskevopoulos
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
| | - Nikolas Chalas
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece.,Institute for Biomagnetism and Biosignal Analysis, University of Münster, D-48149 Münster, Germany
| | - Alexandros Karagiorgis
- School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
| | - Maria Karagianni
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
| | - Charis Styliadis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
| | - Georgios Papadelis
- School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
| | - Panagiotis Bamidis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
| |
|