1
Ju P, Zhou Z, Xie Y, Hui J, Yang X. Music training is associated with better audio-visual integration in Chinese language. Int J Psychophysiol 2024; 203:112414. doi: 10.1016/j.ijpsycho.2024.112414. PMID: 39134177.
Abstract
In the present study, we used event-related brain potential (ERP) measurements to investigate whether long-term music training improves audio-visual speech integration in Chinese. Specifically, we recruited musicians and non-musicians to participate in an experiment in which visual Chinese characters were presented simultaneously with congruent or incongruent speech sounds. To keep participants' attention on both the auditory and visual modalities, they were instructed to perform a probe detection task. For the musicians, audiovisual incongruent stimuli elicited larger N1 and N400 amplitudes than audiovisual congruent stimuli. For the non-musicians, in contrast, only a larger N400 amplitude was observed for incongruent relative to congruent stimuli, with no significant difference in N1 amplitude. Furthermore, correlation analyses indicated that more years of music training were associated with a larger N1 effect among the musicians. These results suggest that musicians were capable of detecting character-speech sound incongruence in an earlier time window than non-musicians. Overall, our findings provide compelling evidence that music training is associated with better integration of visual characters and auditory speech sounds in language processing.
Affiliation(s)
- Ping Ju
- Department of Psychology, Renmin University of China, Beijing, China
- Zihang Zhou
- Department of Psychology, Renmin University of China, Beijing, China
- School of Foreign Languages, Renmin University of China, Beijing, China
- Yuhan Xie
- Department of Psychology, Renmin University of China, Beijing, China
- Jiaying Hui
- Department of Psychology, Renmin University of China, Beijing, China
- Xiaohong Yang
- Department of Psychology, Renmin University of China, Beijing, China
2
Senkowski D, Engel AK. Multi-timescale neural dynamics for multisensory integration. Nat Rev Neurosci 2024. doi: 10.1038/s41583-024-00845-7. PMID: 39090214.
Abstract
Carrying out any everyday task, be it driving in traffic, conversing with friends or playing basketball, requires rapid selection, integration and segregation of stimuli from different sensory modalities. At present, even the most advanced artificial intelligence-based systems are unable to replicate the multisensory processes that the human brain routinely performs, and how neural circuits in the brain carry out these processes is still not well understood. In this Perspective, we discuss recent findings that shed fresh light on the oscillatory neural mechanisms that mediate multisensory integration (MI), including power modulations, phase resetting, phase-amplitude coupling and dynamic functional connectivity. We then consider studies suggesting multi-timescale dynamics in intrinsic ongoing neural activity, as well as during stimulus-driven bottom-up and cognitive top-down neural network processing, in the context of MI. We propose a new concept of MI that emphasizes the critical role of neural dynamics at multiple timescales within and across brain networks, enabling the simultaneous integration, segregation, hierarchical structuring and selection of information in different time windows. Finally, to highlight predictions from our multi-timescale concept of MI, we consider real-world scenarios in which multi-timescale processes may coordinate MI in a flexible and adaptive manner.
Affiliation(s)
- Daniel Senkowski
- Department of Psychiatry and Neurosciences, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
3
Jin J, Zheng Q, Liu H, Feng K, Bai Y, Ni G. Musical experience enhances time discrimination: Evidence from cortical responses. Ann N Y Acad Sci 2024; 1536:167-176. doi: 10.1111/nyas.15153. PMID: 38829709.
Abstract
Time discrimination, a critical aspect of auditory perception, is influenced by numerous factors. Previous research has suggested that musical experience can restructure the brain and thereby enhance time discrimination, but this phenomenon remains underexplored. In this study, we seek to elucidate the enhancing effect of musical experience on time discrimination using both behavioral and electroencephalogram methodologies. Through brain connectivity analysis, we also examine whether increased connectivity in brain regions associated with auditory perception contributes to the time-discrimination enhancement induced by musical experience. The results show that the music-experienced group demonstrated higher behavioral accuracy, shorter reaction times, and shorter P3 and mismatch response latencies than the control group. Furthermore, the music-experienced group showed higher connectivity in the left temporal lobe. In summary, our research underscores the positive impact of musical experience on time discrimination and suggests that enhanced connectivity in brain regions linked to auditory perception may underlie this enhancement.
Affiliation(s)
- Jiaqi Jin
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Qi Zheng
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Hongxing Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Kunyun Feng
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Yanru Bai
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, China
- State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Guangjian Ni
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, China
- State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
4
Lee HH, Groves K, Ripollés P, Carrasco M. Audiovisual integration in the McGurk effect is impervious to music training. Sci Rep 2024; 14:3262. doi: 10.1038/s41598-024-53593-0. PMID: 38332159. PMCID: PMC10853564. Open access.
Abstract
The McGurk effect refers to an audiovisual speech illusion in which discrepant auditory and visual syllables produce a fused percept intermediate between the auditory and visual components. However, little is known about how individual differences contribute to the McGurk effect. Here, we examined whether music training experience, which involves audiovisual integration, can modulate the McGurk effect. Seventy-three participants completed the Goldsmiths Musical Sophistication Index (Gold-MSI) questionnaire to evaluate their music expertise on a continuous scale. The Gold-MSI captures participants' daily-life exposure to music learning experiences (formal and informal), rather than merely classifying people into groups according to how many years of formal music training they have received. Participants were instructed to report, via a three-alternative forced-choice task, "what a person said": /Ba/, /Ga/ or /Da/. The experiment consisted of 96 audiovisual congruent trials and 96 audiovisual incongruent (McGurk) trials. We observed no significant correlations between susceptibility to the McGurk effect and the subscales of the Gold-MSI (active engagement, perceptual abilities, music training, singing abilities, emotion) or the general musical sophistication composite score. Together, these findings suggest that music training experience does not modulate audiovisual integration in speech as reflected by the McGurk effect.
Affiliation(s)
- Hsing-Hao Lee
- Department of Psychology, New York University, New York, USA
- Karleigh Groves
- Department of Psychology, New York University, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, USA
- Music and Audio Research Lab (MARL), New York University, New York, USA
- Pablo Ripollés
- Department of Psychology, New York University, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, USA
- Music and Audio Research Lab (MARL), New York University, New York, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, USA
- Center for Neural Science, New York University, New York, USA
5
Al-youzbaki MU, Schormans AL, Allman BL. Past and present experience shifts audiovisual temporal perception in rats. Front Behav Neurosci 2023; 17:1287587. doi: 10.3389/fnbeh.2023.1287587. PMID: 37908200. PMCID: PMC10613659. Open access.
Abstract
Our brains have a propensity to integrate closely timed auditory and visual stimuli into a unified percept, a phenomenon that is highly malleable based on prior sensory experiences and is known to be altered in clinical populations. While the neural correlates of audiovisual temporal perception have been investigated in humans using neuroimaging and electroencephalography techniques, animal research will be required to uncover the underlying cellular and molecular mechanisms. Before conducting such mechanistic studies, it is important to first confirm the translational potential of any prospective animal model. Thus, in the present study, we conducted a series of experiments to determine whether rats show the hallmarks of audiovisual temporal perception observed in neurotypical humans, and whether the rat behavioral paradigms could reveal when the rats experienced perceptual disruptions akin to those observed in neurodevelopmental disorders. After training rats to perform a temporal order judgment (TOJ) or synchrony judgment (SJ) task, we found that the rats' perception was malleable based on their past and present sensory experiences. More specifically, passive exposure to asynchronous audiovisual stimulation in the minutes prior to behavioral testing caused the rats' perception to shift predictably in the direction of the leading stimulus; these findings represent the first report of this form of audiovisual perceptual malleability in non-human subjects. Furthermore, rats performing the TOJ task also showed evidence of rapid recalibration, in which their audiovisual temporal perception on the current trial was predictably influenced by the timing lag between the auditory and visual stimuli on the preceding trial.
Finally, by manipulating either experimental testing parameters or altering the rats' neurochemistry with a systemic injection of MK-801, we showed that the TOJ and SJ tasks could identify when the rats had difficulty judging the timing of audiovisual stimuli. These findings confirm that the behavioral paradigms are indeed suitable for future testing of rats with perceptual disruptions in audiovisual processing. Overall, our collective results highlight that rats represent an excellent animal model to study the cellular and molecular mechanisms underlying the acuity and malleability of audiovisual temporal perception, as they showcase the perceptual hallmarks commonly observed in humans.