1. Shen D, Ross B, Alain C. Temporal deployment of attention in musicians: Evidence from an attentional blink paradigm. Ann N Y Acad Sci 2023; 1530:110-123. PMID: 37823710. DOI: 10.1111/nyas.15069.
Abstract
The generalization of music training to unrelated nonmusical domains is well established and may reflect musicians' superior ability to regulate attention. We investigated the temporal deployment of attention in musicians and nonmusicians using scalp recordings of event-related potentials in an attentional blink (AB) paradigm. Participants listened to rapid sequences of stimuli and identified target and probe sounds. The AB was defined as a deficit in probe identification when the probe closely follows the target. The sequence of stimuli was preceded by a neutral or an informative cue about the probe position within the sequence. Musicians outperformed nonmusicians in identifying the target and probe. In both groups, cueing improved target and probe identification and reduced the AB. The informative cue elicited a sustained potential that was more prominent in musicians than nonmusicians over left temporal areas and was associated with a larger N1 elicited by the target. The N1 was larger in musicians than nonmusicians, and its amplitude over the left frontocentral cortex of musicians correlated with accuracy. Together, these results reveal musicians' superior ability to regulate attention, allowing them to prepare for incoming stimuli and thereby improve sound object identification. This capacity to manage attentional resources to optimize task performance may generalize to nonmusical activities.
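As a rough illustration of the accuracy measures described above (not the authors' analysis code), the following Python sketch computes an attentional blink magnitude from short- versus long-lag probe accuracy and a cueing benefit from informative- versus neutral-cue trials; the trial structure, lag values, and simulated accuracies are all assumptions.

```python
import numpy as np

# Illustrative sketch: quantify the attentional blink (AB) as the drop in probe
# accuracy at short target-probe lags, and the cueing benefit as the accuracy
# gain on informative-cue trials. Trial fields and lag values are hypothetical.
rng = np.random.default_rng(0)
n_trials = 400
lags = rng.choice([1, 2, 3, 6], size=n_trials)     # probe position after the target
cued = rng.choice([0, 1], size=n_trials)            # 0 = neutral cue, 1 = informative cue
# Simulated probe accuracy: worse at short lags, better when cued
p_correct = 0.55 + 0.05 * lags + 0.10 * cued
probe_correct = rng.random(n_trials) < np.clip(p_correct, 0, 1)

def mean_acc(mask):
    return probe_correct[mask].mean()

# AB magnitude: long-lag accuracy minus short-lag accuracy (larger = stronger blink)
ab_magnitude = mean_acc(lags == 6) - mean_acc(lags <= 2)
# Cueing benefit: informative-cue accuracy minus neutral-cue accuracy
cue_benefit = mean_acc(cued == 1) - mean_acc(cued == 0)
print(f"AB magnitude: {ab_magnitude:.3f}, cueing benefit: {cue_benefit:.3f}")
```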
Affiliation(s)
- Dawei Shen
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, University of Toronto, Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, University of Toronto, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
2. Toader C, Tataru CP, Florian IA, Covache-Busuioc RA, Bratu BG, Glavan LA, Bordeianu A, Dumitrascu DI, Ciurea AV. Cognitive Crescendo: How Music Shapes the Brain's Structure and Function. Brain Sci 2023; 13:1390. PMID: 37891759. PMCID: PMC10605363. DOI: 10.3390/brainsci13101390.
Abstract
Music is a complex phenomenon that engages multiple brain areas and neural connections. For centuries, music has been used to enrich psychological well-being and even to treat a range of pathologies. Modern research, particularly neuroimaging with magnetic resonance imaging, offers new avenues for understanding music perception and its underlying neurological mechanisms. Over recent decades, multiple brain areas have been identified as important for music processing, and neuropsychological analyses have uncovered their involvement in emotional and cognitive activities. Music listening improves cognitive functions such as memory and attention and can support behavioral change. In rehabilitation, music-based therapies have a high rate of success in treating depression and anxiety, and even in neurological disorders, for example in regaining bodily function after a stroke. This review focuses on the neurological and psychological implications of music and presents the clinical relevance of music-based therapies.
Affiliation(s)
- Corneliu Toader
- Department of Neurosurgery, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania
- Department of Vascular Neurosurgery, National Institute of Neurology and Neurovascular Diseases, 077160 Bucharest, Romania
- Calin Petru Tataru
- Department of Ophthalmology, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania
- Central Military Emergency Hospital “Dr. Carol Davila”, 010825 Bucharest, Romania
- Ioan-Alexandru Florian
- Department of Neurosciences, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400012 Cluj-Napoca, Romania
- Razvan-Adrian Covache-Busuioc
- Department of Neurosurgery, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania
- Bogdan-Gabriel Bratu
- Department of Neurosurgery, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania
- Luca Andrei Glavan
- Department of Neurosurgery, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania
- Andrei Bordeianu
- Department of Neurosurgery, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania
- David-Ioan Dumitrascu
- Department of Neurosurgery, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania
- Alexandru Vlad Ciurea
- Department of Neurosurgery, “Carol Davila” University of Medicine and Pharmacy, 020021 Bucharest, Romania
- Neurosurgery Department, Sanador Clinical Hospital, 010991 Bucharest, Romania
3. Hansen NC, Højlund A, Møller C, Pearce M, Vuust P. Musicians show more integrated neural processing of contextually relevant acoustic features. Front Neurosci 2022; 16:907540. PMID: 36312026. PMCID: PMC9612920. DOI: 10.3389/fnins.2022.907540.
Abstract
Little is known about expertise-related plasticity of neural mechanisms for auditory feature integration. Here, we contrast two diverging hypotheses that musical expertise is associated with more independent or more integrated predictive processing of acoustic features relevant to melody perception. Mismatch negativity (MMNm) was recorded with magnetoencephalography (MEG) from 25 musicians and 25 non-musicians, exposed to interleaved blocks of a complex, melody-like multi-feature paradigm and a simple, oddball control paradigm. In addition to single deviants differing in frequency (F), intensity (I), or perceived location (L), double and triple deviants were included reflecting all possible feature combinations (FI, IL, LF, FIL). Following previous work, early neural processing overlap was approximated in terms of MMNm additivity by comparing empirical MMNms obtained with double and triple deviants to modeled MMNms corresponding to summed constituent single-deviant MMNms. Significantly greater subadditivity was found in musicians compared to non-musicians, specifically for frequency-related deviants in complex, melody-like stimuli. Despite using identical sounds, expertise effects were absent from the simple oddball paradigm. This novel finding supports the integrated processing hypothesis whereby musicians recruit overlapping neural resources facilitating more integrative representations of contextually relevant stimuli such as frequency (perceived as pitch) during melody perception. More generally, these specialized refinements in predictive processing may enable experts to optimally capitalize upon complex, domain-relevant, acoustic cues.
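The additivity logic described here can be sketched in a few lines of Python: the additive model predicts that a combined-deviant MMNm equals the sum of its constituent single-deviant MMNms, and subadditivity means the empirical combined response falls short of that sum. All amplitude values below are made up for illustration; this is not the study's analysis pipeline.

```python
import numpy as np

# Minimal sketch of the MMN additivity comparison, using made-up amplitudes
# (arbitrary units, e.g. mean amplitude in the MMNm time window).
single = {"F": -1.2, "I": -0.9, "L": -0.7}                      # hypothetical single-deviant MMNm
empirical = {"FI": -1.6, "IL": -1.3, "LF": -1.5, "FIL": -2.0}   # hypothetical combined deviants

def modeled(combo):
    """Additive model: sum of constituent single-deviant MMNm amplitudes."""
    return sum(single[f] for f in combo)

for combo, emp in empirical.items():
    mod = modeled(combo)
    # Subadditivity index: empirical minus modeled. For negative MMN amplitudes,
    # a positive value means the empirical response is weaker than the additive prediction.
    subadditivity = emp - mod
    print(f"{combo}: empirical {emp:+.2f}, modeled {mod:+.2f}, subadditivity {subadditivity:+.2f}")
```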
Affiliation(s)
- Niels Chr. Hansen
- Aarhus Institute of Advanced Studies, Aarhus University, Aarhus, Denmark
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Department of Dramaturgy and Musicology, School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Andreas Højlund
- Department of Linguistics, Cognitive Science, and Semiotics, School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Department of Clinical Medicine, Faculty of Health, Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
- Cecilie Møller
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Department of Psychology and Behavioural Sciences, Aarhus University, Aarhus, Denmark
- Marcus Pearce
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- School of Electronic Engineering and Computer Science, Cognitive Science Research Group and Centre for Digital Music, Queen Mary University of London, London, United Kingdom
- Peter Vuust
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
4. Zhang Y, Zhang C, Cheng L, Qi M. The Use of Deep Learning-Based Gesture Interactive Robot in the Treatment of Autistic Children Under Music Perception Education. Front Psychol 2022; 13:762701. PMID: 35222179. PMCID: PMC8866172. DOI: 10.3389/fpsyg.2022.762701.
Abstract
The purpose of this study was to apply deep learning to music perception education. We propose music perception therapy for autistic children using gesture-interactive robots, based on educational psychology and deep learning technology. First, the experimental problems are defined and explained with reference to relevant pedagogical theory. Next, gesture-interactive robots and music perception education classrooms are studied based on recurrent neural networks (RNNs). Then, autistic children are treated with music perception, and electroencephalography (EEG) is used to collect the music perception effect and disease diagnosis results of the children. Because of its advantages in signal feature extraction and classification, an RNN is used to analyze the EEG of autistic children receiving different music perception treatments and to improve classification accuracy. The experimental results are as follows. The analysis of EEG signals shows that different people perceive music differently, but this difference fluctuates within a certain range. The classification accuracy of the designed model ranges from about 72% to 94%, with an average of about 85%. The model's average EEG classification accuracy is 85% for autistic children and 84% for healthy children. Comparisons with similar models also confirm the strong performance of the designed model. This study provides a reference for applying artificial intelligence (AI) in music perception education to diagnose and treat autistic children.
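As a hedged sketch of the kind of RNN-based EEG classifier the abstract describes (not the paper's actual architecture), the following PyTorch snippet defines a GRU that reads windowed, multi-channel EEG and outputs class logits; the channel count, window length, and number of classes are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of an RNN classifier for windowed EEG: the network reads a
# sequence of EEG samples and predicts a class label (e.g. music-perception
# condition or diagnostic group).
class EEGRNNClassifier(nn.Module):
    def __init__(self, n_channels=32, hidden_size=64, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_channels, hidden_size=hidden_size,
                          batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time, channels)
        _, h_n = self.rnn(x)          # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])     # logits: (batch, n_classes)

# Tiny usage example on random data standing in for preprocessed EEG windows
model = EEGRNNClassifier()
eeg_batch = torch.randn(8, 250, 32)   # 8 windows, 250 time points, 32 channels
logits = model(eeg_batch)
print(logits.shape)                    # torch.Size([8, 2])
```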
Affiliation(s)
- Yiyao Zhang
- College of Art and Communication, Beijing Normal University, Beijing, China
- Chao Zhang
- School of Theater, Film and Television, Communication University of China, Beijing, China
- Lei Cheng
- School of Art, Ludong University, Yantai, China
- Mingwei Qi
- Department of Music, Dalian Arts College, Dalian, China
5. Heggli OA, Konvalinka I, Kringelbach ML, Vuust P. A metastable attractor model of self-other integration (MEAMSO) in rhythmic synchronization. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200332. PMID: 34420393. DOI: 10.1098/rstb.2020.0332.
Abstract
Human interaction is often accompanied by synchronized bodily rhythms. Such synchronization may emerge spontaneously as when a crowd's applause turns into a steady beat, be encouraged as in nursery rhymes, or be intentional as in the case of playing music together. The latter has been extensively studied using joint finger-tapping paradigms as a simplified version of rhythmic interpersonal synchronization. A key finding is that synchronization in such cases is multifaceted, with synchronized behaviour resting upon different synchronization strategies such as mutual adaptation, leading-following and leading-leading. However, there are multiple open questions regarding the mechanism behind these strategies and how they develop dynamically over time. Here, we propose a metastable attractor model of self-other integration (MEAMSO). This model conceptualizes dyadic rhythmic interpersonal synchronization as a process of integrating and segregating signals of self and other. Perceived sounds are continuously evaluated as either being attributed to self-produced or other-produced actions. The model entails a metastable system with two particular attractor states: one where an individual maintains two separate predictive models for self- and other-produced actions, and the other where these two predictive models integrate into one. The MEAMSO explains the three known synchronization strategies and makes testable predictions about the dynamics of interpersonal synchronization both in behaviour and the brain. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
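The abstract does not give the model's equations, but the generic notion of a metastable system with two attractor states can be illustrated with a toy noisy double-well simulation in Python, where one well stands in for the "segregated" (two predictive models) state and the other for the "integrated" (one shared model) state; this is purely illustrative and not the MEAMSO model itself.

```python
import numpy as np

# Toy Langevin simulation of a metastable two-attractor system. The mapping of
# wells onto "segregated" vs "integrated" self-other states is only a label.
rng = np.random.default_rng(1)
dt, steps, noise = 0.01, 20000, 0.6
x = np.empty(steps)
x[0] = -1.0                                   # start near the "segregated" well

def drift(x):
    # Double-well potential V(x) = (x**2 - 1)**2 / 4, so drift = -dV/dx
    return -x * (x**2 - 1)

for t in range(1, steps):
    x[t] = x[t - 1] + drift(x[t - 1]) * dt + noise * np.sqrt(dt) * rng.standard_normal()

state = np.where(x > 0, "integrated", "segregated")
switches = np.count_nonzero(state[1:] != state[:-1])
print(f"dwell fraction integrated: {(x > 0).mean():.2f}, switches: {switches}")
```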
Affiliation(s)
- Ole Adrian Heggli
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and the Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Ivana Konvalinka
- SINe Lab, Section for Cognitive Systems, DTU Compute, Technical University of Denmark, Kongens Lyngby, Denmark
- Morten L Kringelbach
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and the Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Centre for Eudaimonia and Human Flourishing, Department of Psychiatry, University of Oxford, Oxford, UK
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and the Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
6. Sorati M, Behne DM. Considerations in Audio-Visual Interaction Models: An ERP Study of Music Perception by Musicians and Non-musicians. Front Psychol 2021; 11:594434. PMID: 33551911. PMCID: PMC7854916. DOI: 10.3389/fpsyg.2020.594434.
Abstract
Previous research with speech and non-speech stimuli suggests that in audiovisual perception, visual information that starts before the onset of the corresponding sound can provide predictive cues about the upcoming auditory signal. This prediction leads to audiovisual (AV) interaction: auditory and visual perception interact, suppressing and speeding up early auditory event-related potentials (ERPs) such as the N1 and P2. To investigate AV interaction, previous research examined N1 and P2 amplitudes and latencies in response to audio-only (AO), video-only (VO), audiovisual, and control (CO) stimuli, and compared AV with auditory perception based on four AV interaction models (AV vs. AO+VO, AV-VO vs. AO, AV-VO vs. AO-CO, AV vs. AO). The current study addresses how these different models of AV interaction express N1 and P2 suppression in music perception. It further examines whether previous musical experience, which can potentially lead to higher N1 and P2 amplitudes in auditory perception, influences AV interaction across the models. Musicians and non-musicians were presented with recordings (AO, AV, VO) of a keyboard /C4/ key being played, as well as CO stimuli. Results showed that the AV interaction models differ in how they express N1 and P2 amplitude and latency suppression. The calculation of the models (AV-VO vs. AO) and (AV-VO vs. AO-CO) has consequences for the resulting N1 and P2 difference waves. Furthermore, while musicians showed a higher N1 amplitude than non-musicians in auditory perception, suppression of N1 and P2 amplitudes and latencies was similar for the two groups across the AV models. Collectively, these results suggest that when visual cues from finger and hand movements predict the upcoming sound in AV music perception, suppression of early ERPs is similar for musicians and non-musicians. Notably, the calculation differences across models do not lead to the same pattern of results for N1 and P2, demonstrating that the four models are not interchangeable and not directly comparable.
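The four AV interaction models reduce to different difference-wave computations over condition-averaged ERPs. The Python sketch below makes that explicit using random placeholder arrays for the AO, VO, AV, and CO conditions; it is not the study's analysis code.

```python
import numpy as np

# Placeholder ERPs (one array per condition, samples across the epoch); in
# practice these would be condition-averaged waveforms at a given electrode.
rng = np.random.default_rng(2)
n_time = 600
AO, VO, AV, CO = (rng.standard_normal(n_time) for _ in range(4))

models = {
    "AV vs AO+VO":    (AV,      AO + VO),    # additive model
    "AV-VO vs AO":    (AV - VO, AO),         # subtract the visual-only response
    "AV-VO vs AO-CO": (AV - VO, AO - CO),    # additionally subtract the control
    "AV vs AO":       (AV,      AO),         # direct comparison
}

for name, (left, right) in models.items():
    diff = left - right                       # difference wave entered into N1/P2 analysis
    print(f"{name}: mean difference {diff.mean():+.3f}")
```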
Affiliation(s)
- Marzieh Sorati
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Dawn M Behne
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
7. Srinivasan N, Bishop J, Yekovich R, Rosenfield DB, Helekar SA. Differential Activation and Functional Plasticity of Multimodal Areas Associated with Acquired Musical Skill. Neuroscience 2020; 446:294-303. PMID: 32818600. DOI: 10.1016/j.neuroscience.2020.08.013.
Abstract
Training of a musical skill is known to produce a distributed neural representation of the ability to perceive music and perform musical tasks. In the present study we tested the hypothesis that audiovisual perception of music involves wider activation of multimodal sensory and sensorimotor structures in the brain, including those containing mirror neurons. We mapped the activation of brain areas during passive listening and viewing of the first 40 s of "Ode to Joy" being played on the piano by an expert pianist. To do this we performed functional magnetic resonance imaging during the presentation of six different stimulus contrasts pertaining to that melody in a pseudo-randomized order. Group data analysis in musically trained and untrained adults showed robust activation in broadly distributed occipitotemporal, parietal, and frontal areas in trained subjects and much more restricted activation in untrained subjects. A visual stimulus contrast focusing on the visual motion percept of moving fingers on piano keys revealed selective bilateral activation of a locus corresponding to area V5/MT, which was significantly more pronounced in trained subjects and, on the left side, showed a partial linear dependence on the duration of training. Quantitative analysis of individual brain volumes confirmed a significantly greater and wider spread of activation in trained compared to untrained subjects. These findings support the view that audiovisual perception of music and musical gestures in trained musicians involves an expanded and widely distributed neural representation formed through experience-dependent plasticity.
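A hedged sketch of the kind of individual-volume analysis mentioned here (not the study's pipeline): count suprathreshold voxels per subject as a measure of activation extent, compare trained and untrained groups, and correlate extent with training duration. All maps, thresholds, and training durations below are placeholders.

```python
import numpy as np

# Placeholder per-subject statistical maps and group labels; in practice these
# would come from first-level fMRI analyses.
rng = np.random.default_rng(3)
n_sub, n_vox, z_thresh = 20, 5000, 3.1
zmaps = rng.standard_normal((n_sub, n_vox)) + 0.2           # per-subject z-maps
trained = np.array([1] * 10 + [0] * 10)                     # 1 = musically trained
years_training = np.where(trained == 1, rng.uniform(5, 20, n_sub), 0.0)

extent = (zmaps > z_thresh).sum(axis=1)                      # suprathreshold voxel count
print("mean extent trained:", extent[trained == 1].mean(),
      "untrained:", extent[trained == 0].mean())

# Correlation between activation extent and years of training (trained group only)
r = np.corrcoef(extent[trained == 1], years_training[trained == 1])[0, 1]
print(f"extent vs training duration: r = {r:.2f}")
```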
Affiliation(s)
- N Srinivasan
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States
- J Bishop
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States
- R Yekovich
- Shepherd School of Music, Rice University, Houston, TX, United States
- D B Rosenfield
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States
- Shepherd School of Music, Rice University, Houston, TX, United States
- S A Helekar
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States