1. Herff SA, Bonetti L, Cecchetti G, Vuust P, Kringelbach ML, Rohrmeier MA. Hierarchical syntax model of music predicts theta power during music listening. Neuropsychologia 2024; 199:108905. PMID: 38740179. DOI: 10.1016/j.neuropsychologia.2024.108905.
Abstract
Linguistic research has shown that the depth of syntactic embedding is reflected in brain theta power. Here, we test whether this also extends to non-linguistic stimuli, specifically music. We used a hierarchical model of musical syntax to continuously quantify two types of expert-annotated harmonic dependencies throughout a piece of Western classical music: prolongation and preparation. Prolongations can roughly be understood as a musical analogue to linguistic coordination between constituents that share the same function (e.g., 'pizza' and 'pasta' in 'I ate pizza and pasta'). Preparation refers to the dependency between two harmonies whereby the first implies a resolution towards the second (e.g., dominant towards tonic; similar to how an adjective implies the presence of a noun in 'I like spicy … '). Source-reconstructed MEG data from sixty-five participants listening to the musical piece were then analysed. We used Bayesian Mixed Effects models to predict the theta envelope in the brain, using the number of open prolongation and preparation dependencies as predictors whilst controlling for audio envelope. We observed that prolongation and preparation both carry independent and distinguishable predictive value for theta band fluctuation in key linguistic areas such as the Angular, Superior Temporal, and Heschl's Gyri, or their right-lateralised homologues, with preparation showing additional predictive value for areas associated with the reward system and prediction. Musical expertise further mediated these effects in language-related brain areas. Results show that the predictions of precisely formalised music-theoretical models are reflected in the brain activity of listeners, furthering our understanding of the perception and cognition of musical structure.
Affiliation(s)
- Steffen A Herff: Sydney Conservatorium of Music, University of Sydney, Sydney, Australia; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Leonardo Bonetti: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Gabriele Cecchetti: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Peter Vuust: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark
- Morten L Kringelbach: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Martin A Rohrmeier: Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
2. Ono K, Mizuochi R, Yamamoto K, Sasaoka T, Yamawaki S. Exploring the neural underpinnings of chord prediction uncertainty: an electroencephalography (EEG) study. Sci Rep 2024; 14:4586. PMID: 38403782. PMCID: PMC10894873. DOI: 10.1038/s41598-024-55366-1.
Abstract
Predictive processing in the brain, involving the interaction between interoceptive (bodily signal) and exteroceptive (sensory) processing, is essential for understanding music, as it encompasses the dynamics of musical temporality and affective responses. This study explores the relationship between neural correlates and the subjective certainty of chord prediction, focusing on the alignment between predicted and actual chord progressions in both musically appropriate and random chord sequences. Participants were asked to predict the final chord in sequences while their brain activity was measured using electroencephalography (EEG). We found that the stimulus-preceding negativity (SPN), an EEG component associated with the predictive processing of sensory stimuli, was larger for non-harmonic chord sequences than for harmonic chord progressions. Additionally, the heartbeat-evoked potential (HEP), an EEG component related to interoceptive processing, was larger for random chord sequences and correlated with prediction-certainty ratings. The HEP also correlated with the N5 component observed while participants listened to the final chord. Our findings suggest that the HEP reflects subjective prediction certainty more directly than the SPN. These findings offer new insights into the neural mechanisms underlying music perception and prediction, emphasizing the importance of considering auditory prediction certainty when examining the neural basis of music cognition.
Affiliation(s)
- Kentaro Ono: Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Ryohei Mizuochi: Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Kazuki Yamamoto: Graduate School of Humanities and Social Sciences, Hiroshima University, Higashihiroshima, Japan
- Takafumi Sasaoka: Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Shigeto Yamawaki: Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
3. Chen X, Affourtit J, Ryskin R, Regev TI, Norman-Haignere S, Jouravlev O, Malik-Moraleda S, Kean H, Varley R, Fedorenko E. The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cereb Cortex 2023; 33:7904-7929. PMID: 37005063. PMCID: PMC10505454. DOI: 10.1093/cercor/bhad087.
Abstract
Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses to both intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.
Affiliation(s)
- Xuanyi Chen: Department of Cognitive Sciences, Rice University, TX 77005, United States; Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Josef Affourtit: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rachel Ryskin: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States; Department of Cognitive & Information Sciences, University of California, Merced, Merced, CA 95343, United States
- Tamar I Regev: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Samuel Norman-Haignere: Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, United States; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States; Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States
- Olessia Jouravlev: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States; Department of Cognitive Science, Carleton University, Ottawa, ON, Canada
- Saima Malik-Moraleda: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States; The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Hope Kean: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rosemary Varley: Psychology & Language Sciences, UCL, London WC1N 1PF, United Kingdom
- Evelina Fedorenko: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States; The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
4. Walla P, Külzer D, Leeb A, Moidl L, Kalt S. Brain Activities Show There Is Nothing Like a Real Friend in Contrast to Influencers and Other Celebrities. Brain Sci 2023; 13(5):831. PMID: 37239305. DOI: 10.3390/brainsci13050831.
Abstract
Especially for young people, influencers and other celebrities followed on social media evoke an affective closeness that seems real in their young minds even though it is fake. Such fake friendships are potentially problematic because they feel real on the consumer side while lacking any reciprocal closeness. The question arises whether the unilateral friendship of a social media user is equal, or at least similar, to real reciprocal friendship. Instead of asking social media users for explicit responses (conscious deliberation), the present exploratory study aimed to answer this question with the help of brain imaging technology. Thirty young participants were first invited to provide individual lists including (i) twenty names of their most followed and loved influencers or other celebrities (fake friend names), (ii) twenty names of loved real friends and relatives (real friend names), as well as (iii) twenty names they do not feel any closeness to (no friend names). They then came to the Freud CanBeLab (Cognitive and Affective Neuroscience and Behavior Lab), where they were shown their selected names in a random sequence (two rounds) while their brain activities were recorded via electroencephalography (EEG) and later calculated into event-related potentials (ERPs). We found that a short (ca. 100 ms) burst of left frontal brain activity, starting at around 250 ms post-stimulus, processed real friend and no friend names similarly, while both ERPs differed from those elicited by fake friend names. This was followed by a longer effect (ca. 400 ms) in which left and right frontal and temporoparietal ERPs also differed between fake and real friend names; at this later processing stage, however, no friend names elicited brain activity similar to fake friend names in those regions. In general, real friend names elicited the most negative-going brain potentials (interpreted as the highest brain activation levels). These exploratory findings represent objective empirical evidence that the human brain clearly distinguishes between influencers or other celebrities and close people from real life, even though subjective feelings of closeness and trust can be similar. In summary, brain imaging shows there is nothing like a real friend. The findings of this study might be seen as a starting point for future studies using ERPs to investigate social media impact and topics such as fake friendship.
Affiliation(s)
- Peter Walla: Freud CanBeLab, Faculty of Psychology, Sigmund Freud University, Sigmund Freud Platz 1, 1020 Vienna, Austria; Faculty of Medicine, Sigmund Freud University, Sigmund Freud Platz 3, 1020 Vienna, Austria; School of Psychology, Newcastle University, University Drive, Callaghan, NSW 2308, Australia
- Dimitrios Külzer: Freud CanBeLab, Faculty of Psychology, Sigmund Freud University, Sigmund Freud Platz 1, 1020 Vienna, Austria
- Annika Leeb: Freud CanBeLab, Faculty of Psychology, Sigmund Freud University, Sigmund Freud Platz 1, 1020 Vienna, Austria
- Lena Moidl: Freud CanBeLab, Faculty of Psychology, Sigmund Freud University, Sigmund Freud Platz 1, 1020 Vienna, Austria
- Stefan Kalt: Freud CanBeLab, Faculty of Psychology, Sigmund Freud University, Sigmund Freud Platz 1, 1020 Vienna, Austria
5. Jiang L, Zhang R, Tao L, Zhang Y, Zhou Y, Cai Q. Neural mechanisms of musical structure and tonality, and the effect of musicianship. Front Psychol 2023; 14:1092051. PMID: 36844277. PMCID: PMC9948014. DOI: 10.3389/fpsyg.2023.1092051.
Abstract
Introduction: The neural basis for the processing of musical syntax has previously been examined almost exclusively in classical tonal music, which is characterized by a strictly organized hierarchical structure. Musical syntax may differ across music genres because of differences in tonality. Methods: The present study investigated the neural mechanisms for processing musical syntax across genres varying in tonality (classical, impressionist, and atonal music) and, in addition, examined how musicianship modulates such processing. Results: First, the dorsal stream, including the bilateral inferior frontal gyrus and superior temporal gyrus, plays a key role in the perception of tonality. Second, right frontotemporal regions were crucial in allowing musicians to outperform non-musicians in musical syntactic processing; musicians also benefited from a cortical-subcortical network including the pallidum and cerebellum, suggesting more auditory-motor interaction in musicians than in non-musicians. Third, the left pars triangularis carries out online computations independently of tonality and musicianship, whereas the right pars triangularis is sensitive to tonality and partly dependent on musicianship. Finally, unlike tonal music, the processing of atonal music could not be differentiated from that of scrambled notes, either behaviorally or neurally, even among musicians. Discussion: The present study highlights the importance of studying varying music genres and experience levels, and provides a better understanding of musical syntax and tonality processing and how such processing is modulated by music experience.
Affiliation(s)
- Lei Jiang: Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; School of Music, East China Normal University, Shanghai, China
- Ruiqing Zhang: Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Lily Tao: Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Yuxin Zhang: Shanghai High School International Division, Shanghai, China
- Yongdi Zhou: School of Psychology, Shenzhen University, Shenzhen, China; Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, United States
- Qing Cai: Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Shanghai Changning Mental Health Center, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
6. Malekmohammadi A, Ehrlich SK, Cheng G. Modulation of theta and gamma oscillations during familiarization with previously unknown music. Brain Res 2023; 1800:148198. PMID: 36493897. DOI: 10.1016/j.brainres.2022.148198.
Abstract
Repeated listening to unknown music leads to gradual familiarization with its musical sequences. Passively listening to such sequences may engage an array of dynamic neural responses on the way to familiarization with the excerpts. This study elucidates this dynamic brain response and its variation over time by investigating the electrophysiological changes during familiarization with initially unknown music. Twenty subjects were asked to familiarize themselves with previously unknown 10 s classical music excerpts over three repetitions while their electroencephalogram was recorded. Dynamic spectral changes in neural oscillations were monitored by time-frequency analyses for all frequency bands (theta: 5-9 Hz, alpha: 9-13 Hz, low beta: 13-21 Hz, high beta: 21-32 Hz, and gamma: 32-50 Hz). The time-frequency analyses reveal sustained theta event-related desynchronization (ERD) in the frontal-midline and left prefrontal electrodes, which decreased gradually from the 1st to the 3rd repetition of the same excerpts (frontal-midline: 57.90%, left prefrontal: 75.93%). Similarly, sustained gamma ERD decreased in the frontal-midline and bilateral frontal/temporal areas (frontal-midline: 61.47%, left frontal: 90.88%, right frontal: 87.74%). During familiarization, the decrease in theta ERD was greater in the first part (1-5 s) of the music excerpts, whereas the decrease in gamma ERD was greater in the second part (5-9 s). The results suggest that decreased theta ERD is associated with successfully identifying familiar sequences, whereas decreased gamma ERD is related to forming unfamiliar sequences.
Affiliation(s)
- Alireza Malekmohammadi: Chair for Cognitive Systems, Department of Electrical Engineering, Technical University of Munich, Munich, 80333, Germany
- Stefan K Ehrlich: Chair for Cognitive Systems, Department of Electrical Engineering, Technical University of Munich, Munich, 80333, Germany
- Gordon Cheng: Chair for Cognitive Systems, Department of Electrical Engineering, Technical University of Munich, Munich, 80333, Germany
7. Chiappetta B, Patel AD, Thompson CK. Musical and linguistic syntactic processing in agrammatic aphasia: An ERP study. J Neurolinguistics 2022; 62:101043. PMID: 35002061. PMCID: PMC8740885. DOI: 10.1016/j.jneuroling.2021.101043.
Abstract
Language and music rely on complex sequences organized according to syntactic principles that are implicitly understood by enculturated listeners. Across both domains, syntactic processing involves predicting and integrating incoming elements into higher-order structures. According to the Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, 2003), musical and linguistic syntactic processing rely on shared resources for integrating incoming elements (e.g., chords, words) into unfolding sequences. One prediction of the SSIRH is that people with agrammatic aphasia (whose deficits are due to syntactic integration problems) should present with deficits in processing musical syntax. We report the first neural study to test this prediction: event-related potentials (ERPs) were measured in response to musical and linguistic syntactic violations in a group of people with agrammatic aphasia (n=7) compared to a group of healthy controls (n=14) using an acceptability judgement task. The groups were matched with respect to age, education, and extent of musical training. Violations were based on morpho-syntactic relations in sentences and harmonic relations in chord sequences. Both groups presented with a significant P600 response to syntactic violations across both domains. The participants with aphasia presented with a reduced-amplitude posterior P600 compared to the healthy adults in response to linguistic, but not musical, violations. Participants with aphasia did, however, present with larger frontal positivities in response to violations in both domains. Intriguingly, extent of musical training was associated with larger posterior P600 responses to syntactic violations of language and music in both groups. Overall, these findings are not consistent with the predictions of the SSIRH and instead suggest that linguistic, but not musical, syntactic processing may be selectively impaired in stroke-induced agrammatic aphasia. However, the findings also suggest a relationship between musical training and linguistic syntactic processing, which may have clinical implications for people with aphasia and motivates further research on the relationship between these two domains.
Affiliation(s)
- Brianne Chiappetta: Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Aniruddh D. Patel: Department of Psychology, Tufts University, Medford, MA, USA; Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, ON, Canada
- Cynthia K. Thompson: Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA; Mesulam Center for Cognitive Neurology and Alzheimer's Disease, Northwestern University, Chicago, IL, USA; Department of Neurology, Northwestern University, Chicago, IL, USA
8. Bianco R, Novembre G, Ringer H, Kohler N, Keller PE, Villringer A, Sammler D. Lateral Prefrontal Cortex Is a Hub for Music Production from Structural Rules to Movements. Cereb Cortex 2021; 32:3878-3895. PMID: 34965579. PMCID: PMC9476625. DOI: 10.1093/cercor/bhab454.
Abstract
Complex sequential behaviors, such as speaking or playing music, entail flexible rule-based chaining of single acts. However, it remains unclear how the brain translates abstract structural rules into movements. We combined music production with multimodal neuroimaging to dissociate high-level structural and low-level motor planning. Pianists played novel musical chord sequences on a muted MR-compatible piano by imitating a model hand on screen. Chord sequences were manipulated in terms of musical harmony and context length to assess structural planning, and in terms of fingers used for playing to assess motor planning. A model of probabilistic sequence processing confirmed temporally extended dependencies between chords, as opposed to local dependencies between movements. Violations of structural plans activated the left inferior frontal and middle temporal gyrus, and the fractional anisotropy of the ventral pathway connecting these two regions positively predicted behavioral measures of structural planning. A bilateral frontoparietal network was instead activated by violations of motor plans. Both structural and motor networks converged in lateral prefrontal cortex, with anterior regions contributing to musical structure building, and posterior areas to movement planning. These results establish a promising approach to study sequence production at different levels of action representation.
Affiliation(s)
- Roberta Bianco: UCL Ear Institute, University College London, London WC1X 8EE, UK; Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Giacomo Novembre: Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome 00161, Italy
- Hanna Ringer: Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Institute of Psychology, University of Leipzig, Leipzig 04109, Germany
- Natalie Kohler: Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
- Peter E Keller: Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Aarhus 8000, Denmark; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW 2751, Australia
- Arno Villringer: Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Daniela Sammler: Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
9. Kim CH, Jin SH, Kim JS, Kim Y, Yi SW, Chung CK. Dissociation of Connectivity for Syntactic Irregularity and Perceptual Ambiguity in Musical Chord Stimuli. Front Neurosci 2021; 15:693629. PMID: 34526877. PMCID: PMC8435864. DOI: 10.3389/fnins.2021.693629.
Abstract
Musical syntax has been studied mainly in terms of “syntactic irregularity” in harmonic/melodic sequences. However, “perceptual ambiguity,” referring to the uncertainty of judging or classifying presented stimuli, can additionally be involved; our musical stimuli probed it using three different chord sequences. The present study addresses how “syntactic irregularity” and “perceptual ambiguity” in musical syntax are dissociated, in terms of effective connectivity between the bilateral inferior frontal gyri (IFGs) and superior temporal gyri (STGs) estimated by linearized time-delayed mutual information (LTDMI). The three conditions were five-chord sequences ending with dominant to tonic, dominant to submediant, and dominant to supertonic. The dominant to supertonic is the most irregular, compared with the regular dominant to tonic. The dominant to submediant, a less irregular condition, is the most ambiguous. In the LTDMI results, connectivity from the right to the left IFG (IFG-LTDMI) was enhanced for the most irregular condition, whereas connectivity from the right to the left STG (STG-LTDMI) was enhanced for the most ambiguous condition (p = 0.024 for IFG-LTDMI, p < 0.001 for STG-LTDMI, false discovery rate (FDR) corrected). The correct-response rate was negatively correlated with STG-LTDMI, further reflecting perceptual ambiguity (p = 0.026). We found for the first time that syntactic irregularity and perceptual ambiguity coexist in chord stimuli testing musical syntax, and that the two processes are dissociated in interhemispheric connectivity in the IFG and STG, respectively.
Affiliation(s)
- Chan Hee Kim: Interdisciplinary Program in Neuroscience, College of Natural Science, Seoul National University, Seoul, South Korea; Department of Neurosurgery, MEG Center, Seoul National University Hospital, Seoul, South Korea
- Seung-Hyun Jin: Department of Neurosurgery, MEG Center, Seoul National University Hospital, Seoul, South Korea
- June Sic Kim: Department of Neurosurgery, MEG Center, Seoul National University Hospital, Seoul, South Korea; Research Institute of Basic Sciences, Seoul National University, Seoul, South Korea
- Youn Kim: Department of Music, School of Humanities, The University of Hong Kong, Hong Kong SAR, China
- Suk Won Yi: College of Music, Seoul National University, Seoul, South Korea; Western Music Research Institute, Seoul National University, Seoul, South Korea
- Chun Kee Chung: Interdisciplinary Program in Neuroscience, College of Natural Science, Seoul National University, Seoul, South Korea; Department of Neurosurgery, MEG Center, Seoul National University Hospital, Seoul, South Korea; Department of Brain and Cognitive Science, College of Natural Science, Seoul National University, Seoul, South Korea; Department of Neurosurgery, Seoul National University Hospital, Seoul, South Korea
10. Vaquero L, Ramos-Escobar N, Cucurell D, François C, Putkinen V, Segura E, Huotilainen M, Penhune V, Rodríguez-Fornells A. Arcuate fasciculus architecture is associated with individual differences in pre-attentive detection of unpredicted music changes. Neuroimage 2021; 229:117759. PMID: 33454403. DOI: 10.1016/j.neuroimage.2021.117759.
Abstract
The mismatch negativity (MMN) is an event-related brain potential (ERP) elicited by unpredicted sounds presented in a sequence of repeated auditory stimuli. The neural sources of the MMN have been previously attributed to a fronto-temporo-parietal network which crucially overlaps with the so-called auditory dorsal stream, involving inferior and middle frontal, inferior parietal, and superior and middle temporal regions. These cortical areas are structurally connected by the arcuate fasciculus (AF), a three-branch pathway supporting the feedback-feedforward loop involved in auditory-motor integration, auditory working memory, storage of acoustic templates, as well as comparison and update of those templates. Here, we characterized the individual differences in the white-matter macrostructural properties of the AF and explored their link to the electrophysiological marker of passive change detection gathered in a melodic multifeature MMN-EEG paradigm in 26 healthy young adults without musical training. Our results show that left fronto-temporal white-matter connectivity plays an important role in the pre-attentive detection of rhythm modulations within a melody. Previous studies have shown that this AF segment is also critical for language processing and learning. This strong coupling between structure and function in auditory change detection might be related to life-time linguistic (and possibly musical) exposure and experiences, as well as to timing processing specialization of the left auditory cortex. To the best of our knowledge, this is the first time that the relationship between neurophysiological (EEG) indices and brain white-matter connectivity indices from DTI tractography has been studied. Thus, the present results, although still exploratory, add to the existing evidence on the importance of studying the constraints imposed on cognitive functions by the underlying structural connectivity.
Affiliation(s)
- Lucía Vaquero
- Laboratory of Cognitive and Computational Neuroscience, Complutense University of Madrid and Polytechnic University of Madrid, Campus Científico y Tecnológico de la UPM, Pozuelo de Alarcón, 28223 Madrid, Spain
- Neus Ramos-Escobar
- Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- David Cucurell
- Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- Clément François
- Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France
- Vesa Putkinen
- Turku PET Centre, University of Turku, Turku, Finland
- Emma Segura
- Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- Minna Huotilainen
- Cicero Learning and Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
- Virginia Penhune
- Penhune Laboratory for Motor Learning and Neural Plasticity, Concordia University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Center for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Antoni Rodríguez-Fornells
- Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
11
Musical Training and Brain Volume in Older Adults. Brain Sci 2021; 11:brainsci11010050. [PMID: 33466337 PMCID: PMC7824792 DOI: 10.3390/brainsci11010050] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 12/24/2020] [Accepted: 12/28/2020] [Indexed: 12/14/2022] Open
Abstract
Musical practice, including musical training and musical performance, has been found to benefit cognitive function in older adults. Less is known about the role of musical experience in brain structure in older adults. The present study examined the role of different types of musical behaviors in brain structure in older adults. We administered the Goldsmiths Musical Sophistication Index, a questionnaire that includes questions about a variety of musical behaviors, including performance on an instrument, musical practice, allocation of time to music, musical listening expertise, and emotional responses to music. We demonstrated that musical training, defined as the extent of musical training, musical practice, and musicianship, was positively and significantly associated with the volume of the inferior frontal cortex and parahippocampus. In addition, musical training was positively associated with volume of the posterior cingulate cortex, insula, and medial orbitofrontal cortex. Together, the present study suggests that musical behaviors relate to a circuit of brain regions involved in executive function, memory, language, and emotion. As gray matter often declines with age, our study has promising implications for the positive role of musical practice in aging brain health.
12
Kim CH, Seol J, Jin SH, Kim JS, Kim Y, Yi SW, Chung CK. Increased fronto-temporal connectivity by modified melody in real music. PLoS One 2020; 15:e0235770. [PMID: 32639987 PMCID: PMC7343137 DOI: 10.1371/journal.pone.0235770] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2020] [Accepted: 06/22/2020] [Indexed: 12/20/2022] Open
Abstract
In real music, the original melody may appear intact, with little elaboration, or significantly modified. Since a melody is the most easily perceived element of music, hearing a significantly modified melody may change brain connectivity. Mozart KV 265 comprises a theme with the original melody of “Twinkle Twinkle Little Star” and its significant variations. Using magnetoencephalography (MEG), we studied whether effective connectivity between the bilateral inferior frontal gyri (IFGs) and Heschl’s gyri (HGs) changes with significantly modified melody. Among the 12 connectivities, the connectivity from the left IFG to the right HG was consistently increased for significantly modified melody compared to the original melody in two separate sets sharing the same rhythmic pattern but differing in melody (p = 0.005 and 0.034, Bonferroni corrected). Our findings show that modification of an original melody in real music changes brain connectivity.
Affiliation(s)
- Chan Hee Kim
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Korea
- Human Brain Function Laboratory, Seoul National University, Seoul, Korea
- Jaeho Seol
- Human Brain Function Laboratory, Seoul National University, Seoul, Korea
- W-Mind Laboratory, Wemakeprice Inc., Seoul, Korea
- Seung-Hyun Jin
- Human Brain Function Laboratory, Seoul National University, Seoul, Korea
- June Sic Kim
- Human Brain Function Laboratory, Seoul National University, Seoul, Korea
- Research Institute of Basic Sciences, Seoul National University, Seoul, Korea
- Youn Kim
- Department of Music, School of Humanities, The University of Hong Kong, Pok Fu Lam, Hong Kong
- Suk Won Yi
- College of Music, Seoul National University, Seoul, Korea
- Western Music Research Institute, Seoul National University, Seoul, Korea
- Chun Kee Chung
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Korea
- Human Brain Function Laboratory, Seoul National University, Seoul, Korea
- Department of Brain and Cognitive Science, Seoul National University College of Natural Science, Seoul, Korea
- Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
13
Kim CH, Kim JS, Choi Y, Kyong JS, Kim Y, Yi SW, Chung CK. Change in left inferior frontal connectivity with less unexpected harmonic cadence by musical expertise. PLoS One 2019; 14:e0223283. [PMID: 31714920 PMCID: PMC6850538 DOI: 10.1371/journal.pone.0223283] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2019] [Accepted: 09/17/2019] [Indexed: 11/19/2022] Open
Abstract
In terms of harmonic expectancy, compared to an expected dominant-to-tonic and an unexpected dominant-to-supertonic, a dominant-to-submediant is a less unexpected cadence, the perception of which may depend on the subject’s musical expertise. The present study investigated how these three cadences are processed in the networks of the bilateral inferior frontal gyri (IFGs) and superior temporal gyri (STGs) with magnetoencephalography. We compared the correct rate and brain connectivity in 9 music-majors (mean age, 23.5 ± 3.4 years; musical training period, 18.7 ± 4.0 years) and 10 non-music-majors (mean age, 25.2 ± 2.6 years; musical training period, 4.2 ± 1.5 years). For brain connectivity, we computed the summation of partial directed coherence (PDC) values for inflows/outflows to/from each area (sPDCi/sPDCo) in the bilateral IFGs and STGs. In the behavioral responses, music-majors were better than non-music-majors for all 3 cadences (p < 0.05). However, sPDCi/sPDCo was prominent only for the dominant-to-submediant in the left IFG. The sPDCi was more strongly enhanced in music-majors than in non-music-majors (p = 0.002, Bonferroni corrected), while the sPDCo showed the reverse pattern (p = 0.005, Bonferroni corrected). Our data show that music-majors, with higher musical expertise, are better at identifying a less unexpected cadence than non-music-majors, with connectivity changes centered on the left IFG.
Affiliation(s)
- Chan Hee Kim
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Korea
- June Sic Kim
- Department of Brain and Cognitive Science, Seoul National University College of Natural Science, Seoul, Korea
- Research Institute of Basic Sciences, Seoul National University, Seoul, Korea
- Yunhee Choi
- Medical Research Collaborating Center, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Korea
- Jeong-Sug Kyong
- Neuroscience Research Institute, Seoul National University Medical Research Center, Seoul, Korea
- Audiology Institute, Hallym University of Graduate Studies, Seoul, Korea
- Youn Kim
- Department of Music, School of Humanities, The University of Hong Kong, Hong Kong, China
- Suk Won Yi
- College of Music, Seoul National University, Seoul, Korea
- Western Music Research Institute, Seoul National University, Seoul, Korea
- Chun Kee Chung
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Korea
- Department of Brain and Cognitive Science, Seoul National University College of Natural Science, Seoul, Korea
- Neuroscience Research Institute, Seoul National University Medical Research Center, Seoul, Korea
- Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
14
Lee DJ, Jung H, Loui P. Attention Modulates Electrophysiological Responses to Simultaneous Music and Language Syntax Processing. Brain Sci 2019; 9:E305. [PMID: 31683961 PMCID: PMC6895977 DOI: 10.3390/brainsci9110305] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2019] [Revised: 10/22/2019] [Accepted: 10/29/2019] [Indexed: 11/16/2022] Open
Abstract
Music and language are hypothesized to engage the same neural resources, particularly at the level of syntax processing. Recent reports suggest that attention modulates the shared processing of music and language, but the time-course of the effects of attention on music and language syntax processing is still unclear. In this EEG study we varied top-down attention to language and music, while manipulating the syntactic structure of simultaneously presented musical chord progressions and garden-path sentences in a modified rapid serial visual presentation paradigm. The Early Right Anterior Negativity (ERAN) was observed in response to both attended and unattended musical syntax violations. In contrast, an N400 was only observed in response to attended linguistic syntax violations, and a P3/P600 only in response to attended musical syntax violations. Results suggest that early processing of musical syntax, as indexed by the ERAN, is relatively automatic; however, top-down allocation of attention changes the processing of syntax in both music and language at later stages of cognitive processing.
Affiliation(s)
- Daniel J Lee
- Department of Psychology, Wesleyan University, Middletown, CT 06459, USA
- Harim Jung
- Department of Psychology, Wesleyan University, Middletown, CT 06459, USA
- Psyche Loui
- Department of Psychology, Wesleyan University, Middletown, CT 06459, USA
- Department of Music, Northeastern University, Boston, MA 02115, USA
15
Jarret T, Stockert A, Kotz SA, Tillmann B. Implicit learning of artificial grammatical structures after inferior frontal cortex lesions. PLoS One 2019; 14:e0222385. [PMID: 31539390 PMCID: PMC6754135 DOI: 10.1371/journal.pone.0222385] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2018] [Accepted: 08/29/2019] [Indexed: 12/02/2022] Open
Abstract
OBJECTIVE Previous research associated the left inferior frontal cortex with implicit structure learning. The present study tested patients with lesions encompassing the left inferior frontal gyrus (LIFG; including Brodmann areas 44 and 45) to further investigate this cognitive function, notably by using non-verbal material and implicit investigation methods, and by enhancing potential remaining function via dynamic attending. Patients and healthy matched controls were exposed to an artificial pitch grammar in an implicit learning paradigm to circumvent the potential influence of impaired language processing. METHODS Patients and healthy controls listened to pitch sequences generated within a finite-state grammar (exposure phase) and then performed a categorization task on new pitch sequences (test phase). Participants were not informed about the underlying grammar in either the exposure phase or the test phase. Furthermore, the pitch structures were presented in a highly regular temporal context, as the beneficial impact of temporal regularity (e.g. meter) on learning and perception has been previously reported. Based on the Dynamic Attending Theory (DAT), we hypothesized that a temporally regular context helps develop temporal expectations that, in turn, facilitate event perception and thus benefit artificial grammar learning. RESULTS Electroencephalography results suggest preserved artificial grammar learning of pitch structures in patients and healthy controls. For both groups, analyses of event-related potentials revealed a larger early negativity (100-200 msec post-stimulus onset) in response to ungrammatical than grammatical pitch sequence events. CONCLUSIONS These findings suggest that (i) the LIFG does not play an exclusive role in the implicit learning of artificial pitch grammars, and (ii) the use of non-verbal material and an implicit task reveals cognitive capacities that remain intact despite lesions to the LIFG.
These results provide grounds for training and rehabilitation, that is, learning of non-verbal grammars that may impact the relearning of verbal grammars.
Affiliation(s)
- Tatiana Jarret
- CNRS, UMR5292, INSERM, U1028, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, Lyon, France
- University Lyon 1, Villeurbanne, France
- Anika Stockert
- Language and Aphasia Laboratory, Department of Neurology, University of Leipzig, Leipzig, Germany
- Sonja A. Kotz
- Dept. of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Dept. of Neuropsychology, Maastricht University, Maastricht, The Netherlands
- Faculty of Psychology and Neuroscience, Dept. of Psychopharmacology, Maastricht University, Maastricht, The Netherlands
- Barbara Tillmann
- CNRS, UMR5292, INSERM, U1028, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, Lyon, France
- University Lyon 1, Villeurbanne, France
16
Zhang J, Che X, Yang Y. Event-related brain potentials suggest a late interaction of pitch and time in music perception. Neuropsychologia 2019; 132:107118. [PMID: 31176722 DOI: 10.1016/j.neuropsychologia.2019.107118] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2018] [Revised: 04/30/2019] [Accepted: 06/05/2019] [Indexed: 11/26/2022]
Abstract
Given the key role of pitch and time in the mental representation of music, how the two dimensions combine is a crucial question in music cognition. In the present study, using electroencephalography (EEG), we manipulated both musical pitch and time structures and investigated how the two dimensions work together. Musicians were presented with eight-chord sequences, in which the last target chord was harmonically or temporally expected or unexpected based on the preceding contexts. ERP analysis showed that listeners track both dimensions as music unfolds in time. For the time dimension, irregular temporal events induced a greater MMN than regular temporal events. For the pitch dimension, harmonically less-related chords elicited a greater ERAN and N5 than harmonically related chords. Moreover, there was an interaction between the pitch and time dimensions in the N5 effect. These results indicate that in music perception, the pitch and time dimensions are processed independently at an early stage and interactively at a late stage.
Affiliation(s)
- Jingjing Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China; School of Psychology, Nanjing Normal University, Nanjing, China
- Xinchun Che
- Music College, Nanchang Hangkong University, Nanchang, China
- Yufang Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
17
Omigie D, Pearce M, Lehongre K, Hasboun D, Navarro V, Adam C, Samson S. Intracranial Recordings and Computational Modeling of Music Reveal the Time Course of Prediction Error Signaling in Frontal and Temporal Cortices. J Cogn Neurosci 2019; 31:855-873. [PMID: 30883293 DOI: 10.1162/jocn_a_01388] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Prediction is held to be a fundamental process underpinning perception, action, and cognition. To examine the time course of prediction error signaling, we recorded intracranial EEG activity from nine presurgical epileptic patients while they listened to melodies whose information-theoretic predictability had been characterized using a computational model. We examined oscillatory activity in the superior temporal gyrus (STG), the middle temporal gyrus (MTG), and the pars orbitalis of the inferior frontal gyrus, lateral cortical areas previously implicated in auditory predictive processing. We also examined activity in the anterior cingulate gyrus (ACG), insula, and amygdala to determine whether signatures of prediction error signaling may also be observable in these areas. Our results demonstrate that the information content (a measure of unexpectedness) of musical notes modulates the amplitude of low-frequency oscillatory activity (theta to beta power) in bilateral STG and right MTG from within 100 and 200 msec of note onset, respectively. Our results also show this cortical activity to be accompanied by low-frequency oscillatory modulation in the ACG and insula, areas previously associated with mediating physiological arousal. Finally, we showed that modulation of low-frequency activity is followed by that of high-frequency (gamma) power from approximately 200 msec in the STG, between 300 and 400 msec in the left insula, and between 400 and 500 msec in the ACG. We discuss these results with respect to models of neural processing that emphasize gamma activity as an index of prediction error signaling and highlight the usefulness of musical stimuli in revealing the wide-reaching neural consequences of predictive processing.
Affiliation(s)
- Diana Omigie
- Max Planck Institute for Empirical Aesthetics; Goldsmiths, University of London
- Katia Lehongre
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; Inserm U 1127, CNRS UMR 7225, Sorbonne Université, UPMC Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle épinière, ICM, F-75013
- Vincent Navarro
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; Inserm U 1127, CNRS UMR 7225, Sorbonne Université, UPMC Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle épinière, ICM, F-75013
- Severine Samson
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; University of Lille
18
Martins MJD, Bianco R, Sammler D, Villringer A. Recursion in action: An fMRI study on the generation of new hierarchical levels in motor sequences. Hum Brain Mapp 2019; 40:2623-2638. [PMID: 30834624 PMCID: PMC6865530 DOI: 10.1002/hbm.24549] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2018] [Revised: 01/17/2019] [Accepted: 01/30/2019] [Indexed: 02/04/2023] Open
Abstract
Generation of hierarchical structures, such as the embedding of subordinate elements into larger structures, is a core feature of human cognition. Processing of hierarchies is thought to rely on lateral prefrontal cortex (PFC). However, the neural underpinnings supporting active generation of new hierarchical levels remain poorly understood. Here, we created a new motor paradigm to isolate this active generative process by means of fMRI. Participants planned and executed identical movement sequences by using different rules: a Recursive hierarchical embedding rule, generating new hierarchical levels; an Iterative rule, linearly adding items to existing hierarchical levels without generating new levels; and a Repetition condition tapping into short-term memory, without a transformation rule. We found that planning involving generation of new hierarchical levels (Recursive condition vs. both Iterative and Repetition) activated a bilateral motor imagery network, including cortical and subcortical structures. No evidence was found for lateral PFC involvement in the generation of new hierarchical levels. Activity in the basal ganglia persisted through execution of the motor sequences in the contrast Recursive versus Iterative, but also Repetition versus Iterative, suggesting a role of these structures in motor short-term memory. These results show that the motor network is involved in the generation of new hierarchical levels during motor sequence planning, while lateral PFC activity was neither robust nor specific. We hypothesize that lateral PFC might be important to parse hierarchical sequences in a multi-domain fashion but not to generate new hierarchical levels.
Affiliation(s)
- Mauricio J D Martins
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Clinic for Cognitive Neurology, University Hospital Leipzig, Germany
- Roberta Bianco
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Ear Institute, University College London, London, UK
- Daniela Sammler
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Arno Villringer
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Clinic for Cognitive Neurology, University Hospital Leipzig, Germany
19
Shin H, Fujioka T. Effects of Visual Predictive Information and Sequential Context on Neural Processing of Musical Syntax. Front Psychol 2019; 9:2528. [PMID: 30618951 PMCID: PMC6300505 DOI: 10.3389/fpsyg.2018.02528] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2018] [Accepted: 11/27/2018] [Indexed: 11/13/2022] Open
Abstract
The early right anterior negativity (ERAN) in event-related potentials (ERPs) is typically elicited by syntactically unexpected events in Western tonal music. We examined how visual predictive information influences syntactic processing, how musical or non-musical cues have different effects, and how they interact with sequential effects between trials, which may vary with the strength of the established sense of tonality. The EEG was recorded from musicians who listened to chord sequences paired with one of four types of visual stimuli: two provided predictive information about the syntactic validity of the last chord, through either musical notation of the whole sequence or the word "regular" or "irregular," while the other two, empty musical staves or a blank screen, provided no information. Half of the sequences ended with the syntactically invalid Neapolitan sixth chord, while the other half ended with the Tonic chord. Clear ERAN was observed in frontocentral electrodes in all conditions. A principal component analysis (PCA) was performed on the grand average response in the audio-only condition to separate the spatio-temporal dynamics of different scalp areas as principal components (PCs) and use them to extract auditory-related neural activities in the other visual-cue conditions. The first principal component (PC1) showed a symmetrical frontocentral topography, while the second (PC2) showed a right-lateralized frontal concentration. A source analysis confirmed the relative contribution of temporal sources to the former and a right frontal source to the latter. Cue predictability affected only the ERAN projected onto PC1, especially when the previous trial ended with the Tonic chord. The ERAN in PC2 was reduced in the trials following Neapolitan endings in general. However, the extent of this reduction differed between cue styles, whereby it was nearly absent when musical notation was used, regardless of whether the staves were filled with notes or empty.
The results suggest that the right frontal areas carry out the primary role in musical syntactic analysis and integration of the ongoing context, which produce schematic expectations that, together with the veridical expectation incorporated by the temporal areas, inform musical syntactic processing in musicians.
Affiliation(s)
- Hana Shin
- Department of Music, Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, United States
- Takako Fujioka
- Department of Music, Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, United States; Stanford Neurosciences Institute, Stanford University, Stanford, CA, United States
20
Heaton P, Tsang WF, Jakubowski K, Mullensiefen D, Allen R. Discriminating autism and language impairment and specific language impairment through acuity of musical imagery. Res Dev Disabil 2018; 80:52-63. [PMID: 29913330 DOI: 10.1016/j.ridd.2018.06.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2018] [Revised: 06/05/2018] [Accepted: 06/07/2018] [Indexed: 06/08/2023]
Abstract
Deficits in auditory short-term memory have been widely reported in children with Specific Language Impairment (SLI), and recent evidence suggests that children with Autism Spectrum Disorder and co-morbid language impairment (ALI) experience similar difficulties. Music, like language, relies on auditory memory, and the aim of the study was to extend work investigating the impact of auditory short-term memory impairments to musical perception in children with neurodevelopmental disorders. Groups of children with SLI and ALI were matched on chronological age (CA), receptive vocabulary, non-verbal intelligence and digit span, and compared with CA-matched typically developing (TD) controls on tests of pitch and temporal acuity within a voluntary musical imagery paradigm. The SLI participants performed at significantly lower levels than the ALI and TD groups on both conditions of the task, and their musical imagery and digit span scores were positively correlated. In contrast, ALI participants performed as well as TD controls on the tempo condition and better than TD controls on the pitch condition of the task. Whilst auditory short-term memory and receptive vocabulary impairments were similar across the ALI and SLI groups, these were not associated with a deficit in voluntary musical imagery performance in the ALI group.
Affiliation(s)
- Pamela Heaton
- Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, United Kingdom
- Wai Fung Tsang
- Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, United Kingdom
- Kelly Jakubowski
- Music, University of Durham, Palace Green, Durham, DH1 3RL, United Kingdom
- Daniel Mullensiefen
- Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, United Kingdom
- Rory Allen
- Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, United Kingdom
21
Sun Y, Lu X, Ho HT, Johnson BW, Sammler D, Thompson WF. Syntactic processing in music and language: Parallel abnormalities observed in congenital amusia. Neuroimage Clin 2018; 19:640-651. [PMID: 30013922 PMCID: PMC6022360 DOI: 10.1016/j.nicl.2018.05.032] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/21/2017] [Revised: 05/22/2018] [Accepted: 05/23/2018] [Indexed: 11/23/2022]
Abstract
Evidence is accumulating that similar cognitive resources are engaged to process syntactic structure in music and language. Congenital amusia – a neurodevelopmental disorder that primarily affects music perception, including musical syntax – provides a special opportunity to understand the nature of this overlap. Using electroencephalography (EEG), we investigated whether individuals with congenital amusia have parallel deficits in processing language syntax in comparison to control participants. Twelve amusic participants (eight females) and 12 control participants (eight females) were presented melodies in one session, and spoken sentences in another session, both of which had syntactically congruent and incongruent stimuli. They were asked to complete a music-related and a language-related task that were irrelevant to the syntactic incongruities. Our results show that amusic participants exhibit impairments in the early stages of both music- and language-syntactic processing. Specifically, we found that two event-related potential (ERP) components – namely Early Right Anterior Negativity (ERAN) and Left Anterior Negativity (LAN), associated with music- and language-syntactic processing respectively – were absent in the amusia group. However, at later processing stages, amusics showed similar brain responses as controls to syntactic incongruities in both music and language. This was reflected in a normal N5 in response to melodies and a normal P600 to spoken sentences. Notably, amusics' parallel music- and language-syntactic impairments were not accompanied by deficits in semantic processing (indexed by a normal N400 in response to semantic incongruities). Together, our findings provide further evidence for shared music and language syntactic processing, particularly at early stages of processing. In sum, amusics displayed abnormal brain responses to both music- and language-syntactic irregularities; these impairments affect an early rather than a later stage of syntactic processing, consistent with the view that music and language involve similar cognitive mechanisms for processing syntax.
Affiliation(s)
- Yanan Sun
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia
- Xuejing Lu
- ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia; Department of Psychology, Macquarie University, New South Wales 2109, Australia; CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Hao Tam Ho
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa 56126, Italy; School of Psychology, University of Sydney, New South Wales 2006, Australia
- Blake W Johnson
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia
- Daniela Sammler
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- William Forde Thompson
- ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia; Department of Psychology, Macquarie University, New South Wales 2109, Australia
22
Abstract
This mini-review provides an overview of the differences between the right and left hemispheres of the brain. Recent studies highlight the contribution of the two hemispheres to physical and mental control and to the interaction between language and music. We focus on the behaviour of the right and left hemispheres with respect to music, and on what happens when music-related areas are damaged.
Affiliation(s)
- Giulia Gizzi
- Department of Psychology, University of Torino, Torino, Italy
- Elisabetta Albi
- Department of Pharmaceutical Science, University of Perugia, Perugia, Italy
23
Sihvonen AJ, Särkämö T, Ripollés P, Leo V, Saunavaara J, Parkkola R, Rodríguez-Fornells A, Soinila S. Functional neural changes associated with acquired amusia across different stages of recovery after stroke. Sci Rep 2017; 7:11390. [PMID: 28900231 PMCID: PMC5595783 DOI: 10.1038/s41598-017-11841-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2017] [Accepted: 08/30/2017] [Indexed: 11/09/2022] Open
Abstract
Brain damage causing acquired amusia disrupts the functional music processing system, creating a unique opportunity to investigate the critical neural architectures of musical processing in the brain. In this longitudinal fMRI study of stroke patients (N = 41) with a 6-month follow-up, we used natural vocal music (sung with lyrics) and instrumental music stimuli to uncover brain activation and functional network connectivity changes associated with acquired amusia and its recovery. In the acute stage, amusic patients exhibited decreased activation in right superior temporal areas compared to non-amusic patients during instrumental music listening. During the follow-up, the activation deficits expanded to comprise a widespread bilateral frontal, temporal, and parietal network. The amusics showed fewer activation deficits to vocal music, suggesting preserved processing of singing in the amusic brain. Compared to non-recovered amusics, recovered amusics showed increased activation to instrumental music in bilateral frontoparietal areas at 3 months and in right middle and inferior frontal areas at 6 months. Amusia recovery was also associated with increased functional connectivity in right and left frontoparietal attention networks to instrumental music. Overall, our findings reveal the dynamic nature of deficient activation and connectivity patterns in acquired amusia and highlight the role of dorsal networks in amusia recovery.
Affiliation(s)
- Aleksi J Sihvonen
- Faculty of Medicine, University of Turku, 20520, Turku, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014, Helsinki, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014, Helsinki, Finland
- Pablo Ripollés
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, 08907, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, 08035, Barcelona, Spain; Poeppel Lab, Department of Psychology, New York University, 10003, NY, USA
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014, Helsinki, Finland
- Jani Saunavaara
- Department of Medical Physics, Turku University Hospital, 20521, Turku, Finland
- Riitta Parkkola
- Department of Radiology, Turku University and Turku University Hospital, 20521, Turku, Finland
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, 08907, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, 08035, Barcelona, Spain; Catalan Institution for Research and Advanced Studies, ICREA, Barcelona, Spain
- Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital and Department of Neurology, University of Turku, 20521, Turku, Finland
24
Sihvonen AJ, Ripollés P, Rodríguez-Fornells A, Soinila S, Särkämö T. Revisiting the Neural Basis of Acquired Amusia: Lesion Patterns and Structural Changes Underlying Amusia Recovery. Front Neurosci 2017; 11:426. [PMID: 28790885 PMCID: PMC5524924 DOI: 10.3389/fnins.2017.00426] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2017] [Accepted: 07/11/2017] [Indexed: 01/25/2023] Open
Abstract
Although acquired amusia is a common deficit following stroke, relatively little is known about its precise neural basis, let alone its recovery. Recently, we performed a voxel-based lesion-symptom mapping (VLSM) and morphometry (VBM) study which revealed a right-lateralized lesion pattern, and longitudinal gray matter volume (GMV) and white matter volume (WMV) changes, that were specifically associated with acquired amusia after stroke. In the present study, using a larger sample of stroke patients (N = 90), we aimed to replicate and extend the previous structural findings as well as to determine the lesion patterns and volumetric changes associated with amusia recovery. Structural MRIs were acquired at acute and 6-month post-stroke stages. Music perception was behaviorally assessed at acute and 3-month post-stroke stages using the Scale and Rhythm subtests of the Montreal Battery of Evaluation of Amusia (MBEA). Using these scores, the patients were classified as non-amusic, recovered amusic, and non-recovered amusic. The results of the acute stage VLSM analyses and the longitudinal VBM analyses converged to show that more severe and persistent (non-recovered) amusia was associated with an extensive pattern of lesions and GMV/WMV decrease in right temporal, frontal, parietal, striatal, and limbic areas. In contrast, less severe and transient (recovered) amusia was linked to lesions specifically in the left inferior frontal gyrus as well as to a GMV decrease in right parietal areas. Separate continuous analyses of MBEA Scale and Rhythm scores showed an extensively overlapping lesion pattern in right temporal, frontal, and subcortical structures as well as in the right insula. Interestingly, recovered pitch amusia was related to smaller GMV decreases in the temporoparietal junction, whereas recovered rhythm amusia was associated with smaller GMV decreases in the inferior temporal pole. Overall, the results provide a more comprehensive picture of the lesions and longitudinal structural changes associated with different recovery trajectories of acquired amusia.
Affiliation(s)
- Aleksi J Sihvonen
- Faculty of Medicine, University of Turku, Turku, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Pablo Ripollés
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, Barcelona, Spain; Poeppel Lab, Department of Psychology, New York University, New York, NY, United States
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, Barcelona, Spain; Catalan Institution for Research and Advanced Studies, Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital and Department of Neurology, University of Turku, Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
25
Slevc LR, Faroqi-Shah Y, Saxena S, Okada BM. Preserved processing of musical structure in a person with agrammatic aphasia. Neurocase 2016; 22:505-511. [PMID: 27112951 DOI: 10.1080/13554794.2016.1177090] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Evidence for shared processing of structure (or syntax) in language and in music conflicts with neuropsychological dissociations between the two. However, while harmonic structural processing can be impaired in patients with spared linguistic syntactic abilities (Peretz, I. (1993). Auditory atonalia for melodies. Cognitive Neuropsychology, 10, 21-56. doi:10.1080/02643299308253455), evidence for the opposite dissociation (preserved harmonic processing despite agrammatism) is largely lacking. Here, we report one such case: HV, a former musician with Broca's aphasia and agrammatic speech, was impaired in making linguistic, but not musical, acceptability judgments. Similarly, she showed no sensitivity to linguistic structure, but normal sensitivity to musical structure, in implicit priming tasks. To our knowledge, this is the first non-anecdotal report of a patient with agrammatic aphasia demonstrating preserved harmonic processing abilities, supporting claims that aspects of musical and linguistic structure rely on distinct neural mechanisms.
Affiliation(s)
- L Robert Slevc
- Department of Psychology, University of Maryland, College Park, Maryland, USA
- Yasmeen Faroqi-Shah
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, USA
- Sadhvi Saxena
- Department of Psychology, University of Maryland, College Park, Maryland, USA
- Brooke M Okada
- Department of Psychology, University of Maryland, College Park, Maryland, USA
26
Neural networks for harmonic structure in music perception and action. Neuroimage 2016; 142:454-464. [DOI: 10.1016/j.neuroimage.2016.08.025] [Citation(s) in RCA: 50] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2016] [Revised: 06/30/2016] [Accepted: 08/15/2016] [Indexed: 01/21/2023] Open
27
Processing structure in language and music: a case for shared reliance on cognitive control. Psychon Bull Rev 2016; 22:637-52. [PMID: 25092390 DOI: 10.3758/s13423-014-0712-4] [Citation(s) in RCA: 44] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
The relationship between structural processing in music and language has received increasing interest in the past several years, spurred by the influential Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, Nature Neuroscience, 6, 674-681, 2003). According to this resource-sharing framework, music and language rely on separable syntactic representations but recruit shared cognitive resources to integrate these representations into evolving structures. The SSIRH is supported by findings of interactions between structural manipulations in music and language. However, other recent evidence suggests that such interactions also can arise with nonstructural manipulations, and some recent neuroimaging studies report largely nonoverlapping neural regions involved in processing musical and linguistic structure. These conflicting results raise the question of exactly what shared (and distinct) resources underlie musical and linguistic structural processing. This paper suggests that one shared resource is prefrontal cortical mechanisms of cognitive control, which are recruited to detect and resolve conflict that occurs when expectations are violated and interpretations must be revised. By this account, musical processing involves not just the incremental processing and integration of musical elements as they occur, but also the incremental generation of musical predictions and expectations, which must sometimes be overridden and revised in light of evolving musical input.
28
Fedorenko E, Varley R. Language and thought are not the same thing: evidence from neuroimaging and neurological patients. Ann N Y Acad Sci 2016; 1369:132-53. [PMID: 27096882 PMCID: PMC4874898 DOI: 10.1111/nyas.13046] [Citation(s) in RCA: 82] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Revised: 02/18/2016] [Accepted: 02/25/2016] [Indexed: 01/29/2023]
Abstract
Is thought possible without language? Individuals with global aphasia, who have almost no ability to understand or produce language, provide a powerful opportunity to find out. Surprisingly, despite their near-total loss of language, these individuals are nonetheless able to add and subtract, solve logic problems, think about another person's thoughts, appreciate music, and successfully navigate their environments. Further, neuroimaging studies show that healthy adults strongly engage the brain's language areas when they understand a sentence, but not when they perform other nonlinguistic tasks such as arithmetic, storing information in working memory, inhibiting prepotent responses, or listening to music. Together, these two complementary lines of evidence provide a clear answer: many aspects of thought engage distinct brain regions from, and do not depend on, language.
Affiliation(s)
- Evelina Fedorenko
- Psychiatry Department, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Center for Academic Research and Training in Anthropogeny (CARTA), University of California, San Diego, La Jolla, California
29
Kunert R, Willems RM, Casasanto D, Patel AD, Hagoort P. Music and Language Syntax Interact in Broca's Area: An fMRI Study. PLoS One 2015; 10:e0141069. [PMID: 26536026 PMCID: PMC4633113 DOI: 10.1371/journal.pone.0141069] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2014] [Accepted: 09/17/2015] [Indexed: 12/31/2022] Open
Abstract
Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
Affiliation(s)
- Richard Kunert
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Roel M. Willems
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Daniel Casasanto
- Psychology Department, University of Chicago, Chicago, Illinois, United States of America
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
30
A Voxel-Based Morphometry Study of the Brain of University Students Majoring in Music and Nonmusic Disciplines. Behav Neurol 2015; 2015:274919. [PMID: 26494943 PMCID: PMC4606127 DOI: 10.1155/2015/274919] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2015] [Revised: 03/28/2015] [Accepted: 03/29/2015] [Indexed: 11/22/2022] Open
Abstract
The brain changes flexibly due to various experiences during the developmental stages of life. Previous voxel-based morphometry (VBM) studies have shown volumetric differences between musicians and nonmusicians in several brain regions including the superior temporal gyrus, sensorimotor areas, and superior parietal cortex. However, the reported brain regions depend on the study and are not necessarily consistent. By VBM, we investigated the effect of musical training on the brain structure by comparing university students majoring in music with those majoring in nonmusic disciplines. All participants were right-handed healthy Japanese females. We divided the nonmusic students into two groups and therefore examined three groups: music expert (ME), music hobby (MH), and nonmusic (NM) group. VBM showed that the ME group had the largest gray matter volumes in the right inferior frontal gyrus (IFG; BA 44), left middle occipital gyrus (BA 18), and bilateral lingual gyrus. These differences are considered to be caused by neuroplasticity during long and continuous musical training periods because the MH group showed intermediate volumes in these regions.
31
Bianco R, Novembre G, Keller PE, Scharf F, Friederici AD, Villringer A, Sammler D. Syntax in Action Has Priority over Movement Selection in Piano Playing: An ERP Study. J Cogn Neurosci 2015; 28:41-54. [PMID: 26351994 DOI: 10.1162/jocn_a_00873] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Complex human behavior is hierarchically organized. Whether or not syntax plays a role in this organization is currently under debate. The present ERP study uses piano performance to isolate syntactic operations in action planning and to demonstrate their priority over nonsyntactic levels of movement selection. Expert pianists were asked to execute chord progressions on a mute keyboard by copying the posture of a performing model hand shown in sequences of photos. We manipulated the final chord of each sequence in terms of Syntax (congruent/incongruent keys) and Manner (conventional/unconventional fingering), as well as the strength of its predictability by varying the length of the Context (five-chord/two-chord progressions). The production of syntactically incongruent compared to congruent chords showed a response delay that was larger in the long compared to the short context. This behavioral effect was accompanied by a centroparietal negativity in the long but not in the short context, suggesting that a syntax-based motor plan was prepared ahead. Conversely, the execution of the unconventional manner was not delayed as a function of Context and elicited an opposite electrophysiological pattern (a posterior positivity). The current data support the hypothesis that motor plans operate at the level of musical syntax and are incrementally translated to lower levels of movement selection.
Affiliation(s)
- Roberta Bianco
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Florian Scharf
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D Friederici
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Arno Villringer
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Daniela Sammler
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
32
LaCroix AN, Diaz AF, Rogalsky C. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study. Front Psychol 2015; 6:1138. [PMID: 26321976 PMCID: PMC4531212 DOI: 10.3389/fpsyg.2015.01138] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2015] [Accepted: 07/22/2015] [Indexed: 11/30/2022] Open
Abstract
The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.
Affiliation(s)
- Arianna N LaCroix
- Communication Neuroimaging and Neuroscience Laboratory, Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, USA
- Alvaro F Diaz
- Communication Neuroimaging and Neuroscience Laboratory, Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, USA
- Corianne Rogalsky
- Communication Neuroimaging and Neuroscience Laboratory, Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, USA
33
Musso M, Weiller C, Horn A, Glauche V, Umarova R, Hennig J, Schneider A, Rijntjes M. A single dual-stream framework for syntactic computations in music and language. Neuroimage 2015; 117:267-83. [PMID: 25998957 DOI: 10.1016/j.neuroimage.2015.05.020] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2015] [Accepted: 05/07/2015] [Indexed: 10/23/2022] Open
Abstract
This study is the first to compare in the same subjects the specific spatial distribution and the functional and anatomical connectivity of the neuronal resources that activate and integrate syntactic representations during music and language processing. Combining functional magnetic resonance imaging with functional connectivity and diffusion tensor imaging-based probabilistic tractography, we examined the brain network involved in the recognition and integration of words and chords that were not hierarchically related to the preceding syntax; that is, those deviating from the universal principles of grammar and tonal relatedness. This kind of syntactic processing in both domains was found to rely on a shared network in the left hemisphere centered on the inferior part of the inferior frontal gyrus (IFG), including pars opercularis and pars triangularis, and on dorsal and ventral long association tracts connecting this brain area with temporo-parietal regions. Language processing utilized some adjacent left hemispheric IFG and middle temporal regions more than music processing, and music processing also involved right hemisphere regions not activated in language processing. Our data indicate that a dual-stream system with dorsal and ventral long association tracts centered on a functionally and structurally highly differentiated left IFG is pivotal for domain-general syntactic competence over a broad range of elements including words and chords.
Affiliation(s)
- Mariacristina Musso
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
- Cornelius Weiller
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
- Andreas Horn
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
- Volkmer Glauche
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
- Roza Umarova
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
- Jürgen Hennig
- Department of Radiology, Medical Physics, University Hospital Freiburg, Germany
- Michel Rijntjes
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
34
Predictions and the brain: how musical sounds become rewarding. Trends Cogn Sci 2015; 19:86-91. [DOI: 10.1016/j.tics.2014.12.001] [Citation(s) in RCA: 200] [Impact Index Per Article: 22.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2013] [Revised: 11/24/2014] [Accepted: 12/01/2014] [Indexed: 11/27/2022]
35
Sturm I, Blankertz B, Potes C, Schalk G, Curio G. ECoG high gamma activity reveals distinct cortical representations of lyrics passages, harmonic and timbre-related changes in a rock song. Front Hum Neurosci 2014; 8:798. [PMID: 25352799 PMCID: PMC4195312 DOI: 10.3389/fnhum.2014.00798] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2014] [Accepted: 09/19/2014] [Indexed: 11/13/2022] Open
Abstract
Listening to music moves our minds and moods, stirring interest in its neural underpinnings. A multitude of compositional features drives the appeal of natural music. How such original music, where a composer's opus is not manipulated for experimental purposes, engages a listener's brain has not been studied until recently. Here, we report an in-depth analysis of two electrocorticographic (ECoG) data sets obtained over the left hemisphere in ten patients during presentation of either a rock song or a read-out narrative. First, the time courses of five acoustic features (intensity, presence/absence of vocals with lyrics, spectral centroid, harmonic change, and pulse clarity) were extracted from the audio tracks and found to be correlated with each other to varying degrees. In a second step, we uncovered the specific impact of each musical feature on ECoG high-gamma power (70-170 Hz) by calculating partial correlations to remove the influence of the other four features. In the music condition, the onset and offset of vocal lyrics in ongoing instrumental music was consistently identified within the group as the dominant driver for ECoG high-gamma power changes over temporal auditory areas, while concurrently subject-individual activation spots were identified for sound intensity, timbral, and harmonic features. The distinct cortical activations to vocal speech-related content embedded in instrumental music directly demonstrate that song integrated in instrumental music represents a distinct dimension in complex music. In contrast, in the speech condition, the full sound envelope was reflected in the high gamma response rather than the onset or offset of the vocal lyrics. This demonstrates how the contributions of stimulus features that modulate the brain response differ across the two examples of a full-length natural stimulus, which suggests a context-dependent feature selection in the processing of complex auditory stimuli.
Affiliation(s)
- Irene Sturm
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany; Neurotechnology Group, Department of Electrical Engineering and Computer Science, Berlin Institute of Technology, Berlin, Germany; Neurophysics Group, Department of Neurology and Clinical Neurophysiology, Charité - University Medicine Berlin, Berlin, Germany
- Benjamin Blankertz
- Neurotechnology Group, Department of Electrical Engineering and Computer Science, Berlin Institute of Technology, Berlin, Germany; Bernstein Focus: Neurotechnology, Berlin, Germany
- Cristhian Potes
- National Resource Center for Adaptive Neurotechnologies, Wadsworth Center, New York State Department of Health, Albany, NY, USA; Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, USA
- Gerwin Schalk
- National Resource Center for Adaptive Neurotechnologies, Wadsworth Center, New York State Department of Health, Albany, NY, USA; Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, USA; Department of Neurosurgery, Washington University in St. Louis, St. Louis, MO, USA; Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA; Department of Neurology, Albany Medical College, Albany, NY, USA
- Gabriel Curio
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany; Neurophysics Group, Department of Neurology and Clinical Neurophysiology, Charité - University Medicine Berlin, Berlin, Germany; Bernstein Focus: Neurotechnology, Berlin, Germany
36
Abstract
Sixty years ago, Karl Lashley suggested that complex action sequences, from simple motor acts to language and music, are a fundamental but neglected aspect of neural function. Lashley demonstrated the inadequacy of then-standard models of associative chaining, positing a more flexible and generalized "syntax of action" necessary to encompass key aspects of language and music. He suggested that hierarchy in language and music builds upon a more basic sequential action system, and provided several concrete hypotheses about the nature of this system. Here, we review a diverse set of modern data concerning musical, linguistic, and other action processing, finding them largely consistent with an updated neuroanatomical version of Lashley's hypotheses. In particular, the lateral premotor cortex, including Broca's area, plays important roles in hierarchical processing in language, music, and at least some action sequences. Although the precise computational function of the lateral prefrontal regions in action syntax remains debated, Lashley's notion that this cortical region implements a working-memory buffer or stack, scannable by posterior and subcortical brain regions, is consistent with considerable experimental data.
Affiliation(s)
- W Tecumseh Fitch
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
37
Cerebral activations related to audition-driven performance imagery in professional musicians. PLoS One 2014; 9:e93681. [PMID: 24714661] [PMCID: PMC3979724] [DOI: 10.1371/journal.pone.0093681]
Abstract
Functional Magnetic Resonance Imaging (fMRI) was used to study the activation of cerebral motor networks during auditory perception of music in professional keyboard musicians (n = 12). In the activation paradigm, subjects listened to two-part polyphonic music while either critically appraising the performance or imagining they were performing it themselves. Two-part polyphonic audition and bimanual motor imagery circumvented a hemisphere bias associated with the convention of playing the melody with the right hand. Both tasks activated ventral premotor and auditory cortices, bilaterally, and the right anterior parietal cortex, when contrasted with 12 musically unskilled controls. Although left ventral premotor activation was increased during imagery (compared with judgment), bilateral dorsal premotor and right posterior-superior parietal activations were unique to motor imagery. The latter suggests that musicians not only recruited their manual motor repertoire but also performed a spatial transformation from the vertically perceived pitch axis (high and low sound) to the horizontal axis of the keyboard. Imagery-specific activations in controls were seen in left dorsal parietal-premotor and supplementary motor cortices. Although these activations were weaker than in musicians, this overlapping distribution indicated the recruitment of a general 'mirror-neuron' circuitry. These two levels of sensorimotor transformation point towards common principles by which the brain organizes audition-driven music performance and visually guided task performance.
38
Matsunaga R, Yokosawa K, Abe JI. Functional modulations in brain activity for the first and second music: a comparison of high- and low-proficiency bimusicals. Neuropsychologia 2013; 54:1-10. [PMID: 24365653] [DOI: 10.1016/j.neuropsychologia.2013.12.014]
Abstract
Bilingual studies have shown that brain activities for first (L1) and second (L2) languages are influenced by L2 proficiency. Does proficiency with a second musical system (M2) influence bimusical brains in a manner similar to that of bilingual brains? Our magnetoencephalography study assessed the influence of M2 proficiency on the location, strength, and timing of brain activity in a musical syntactic-processing task (i.e., tonal processing) involving first (M1) and second (M2) music systems. Two bimusical groups, differing in M2 proficiency (high, low), listened to melodies from both their M1 and M2 musical cultures. All melodies ended with a tonally consistent or inconsistent tone. In both groups, tonal deviations in both M1 and M2 elicited magnetic early right anterior negativities (mERANs) generated from brain areas around the inferior frontal gyrus (IFG). We also analyzed the dipole locations, dipole strengths, and peak latencies of the mERAN. Results revealed: (a) the distances between dipole locations for M1 and M2 were shorter in the M2 high-proficiency group than in the M2 low-proficiency group; (b) the dipole strengths were greater in the high- than in the low-proficiency group; (c) the peak latencies of M2 were shorter in the high- than in the low-proficiency group. The dipole location results were consistent with those from bilingual studies, in which the distances between the (left) IFG peak activations for L1 and L2 syntactic processing shortened as L2 proficiency increased. The parallel results for bimusicals and bilinguals suggest that the functional changes induced by proficiency in a second (linguistic or musical) system are defined by domain-general neural constraints.
Affiliation(s)
- Rie Matsunaga
- Department of Informatics, Shizuoka Institute of Science and Technology, 2200-2 Toyosawa, Fukuroi, Shizuoka 437-8555, Japan; Faculty of Health Sciences, Hokkaido University, Kita-12 Nishi-5, Kita-ku, Sapporo, 060-0812, Japan; Department of Psychology, Hokkaido University, Kita-10 Nishi-7, Kita-ku, Sapporo, 060-0812, Japan
- Koichi Yokosawa
- Faculty of Health Sciences, Hokkaido University, Kita-12 Nishi-5, Kita-ku, Sapporo, 060-0812, Japan
- Jun-ichi Abe
- Department of Psychology, Hokkaido University, Kita-10 Nishi-7, Kita-ku, Sapporo, 060-0812, Japan
39
Wilson SJ, Abbott DF, Tailby C, Gentle EC, Merrett DL, Jackson GD. Changes in singing performance and fMRI activation following right temporal lobe surgery. Cortex 2013; 49:2512-24. [DOI: 10.1016/j.cortex.2012.12.019]
40
Brattico E, Tupala T, Glerean E, Tervaniemi M. Modulated neural processing of Western harmony in folk musicians. Psychophysiology 2013; 50:653-63. [DOI: 10.1111/psyp.12049]
Affiliation(s)
- Tiina Tupala
- Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland
41
Syntax in a pianist's hand: ERP signatures of “embodied” syntax processing in music. Cortex 2013; 49:1325-39. [DOI: 10.1016/j.cortex.2012.06.007]
42
Fedorenko E, McDermott JH, Norman-Haignere S, Kanwisher N. Sensitivity to musical structure in the human brain. J Neurophysiol 2012; 108:3289-300. [PMID: 23019005] [PMCID: PMC3544885] [DOI: 10.1152/jn.00209.2012]
Abstract
Evidence from brain-damaged patients suggests that regions in the temporal lobes, distinct from those engaged in lower-level auditory analysis, process the pitch and rhythmic structure in music. In contrast, neuroimaging studies targeting the representation of music structure have primarily implicated regions in the inferior frontal cortices. Combining individual-subject fMRI analyses with a scrambling method that manipulated musical structure, we provide evidence of brain regions sensitive to musical structure bilaterally in the temporal lobes, thus reconciling the neuroimaging and patient findings. We further show that these regions are sensitive to the scrambling of both pitch and rhythmic structure but are insensitive to high-level linguistic structure. Our results suggest the existence of brain regions with representations of musical structure that are distinct from high-level linguistic representations and lower-level acoustic representations. These regions provide targets for future research investigating possible neural specialization for music or its associated mental processes.
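The scrambling manipulation described in this abstract, which disrupts pitch or rhythmic structure independently, can be illustrated with a toy permutation over a note list. This is a hedged sketch under assumed inputs (a `(pitch, duration)` tuple representation and the function name are hypothetical), not the authors' stimulus-generation code:

```python
import random

def scramble(notes, what="pitch", seed=0):
    """Scramble one structural dimension of a melody, keep the other intact.

    notes: list of (pitch, duration) tuples. what='pitch' permutes the
    pitch sequence over the original rhythm; what='rhythm' permutes the
    durations over the original pitch order. A toy analogue of the
    structure-scrambling idea, not the paper's actual method.
    """
    rng = random.Random(seed)
    pitches = [p for p, _ in notes]
    durs = [d for _, d in notes]
    if what == "pitch":
        rng.shuffle(pitches)
    else:
        rng.shuffle(durs)
    return list(zip(pitches, durs))
```

The point of such a manipulation is that each version preserves the low-level content (the same pitches and durations occur) while destroying one kind of structure, so differential brain responses can be attributed to structure rather than acoustics.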
Affiliation(s)
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
43
Tobia MJ, Iacovella V, Davis B, Hasson U. Neural systems mediating recognition of changes in statistical regularities. Neuroimage 2012; 63:1730-42. [DOI: 10.1016/j.neuroimage.2012.08.017]
44
Magnetoencephalography evidence for different brain subregions serving two musical cultures. Neuropsychologia 2012; 50:3218-27. [PMID: 23063935] [DOI: 10.1016/j.neuropsychologia.2012.10.002]
Abstract
Individuals who have been exposed to two different musical cultures (bimusicals) can be differentiated from those exposed to only one musical culture (monomusicals). Just as bilingual speakers handle the distinct language-syntactic rules of each of two languages, bimusical listeners handle two distinct musical-syntactic rules (e.g., tonal schemas) in each musical culture. This study sought to determine specific brain activities that contribute to differentiating two culture-specific tonal structures. We recorded magnetoencephalogram (MEG) responses of bimusical Japanese nonmusicians and amateur musicians as they monitored unfamiliar Western melodies and unfamiliar, but traditional, Japanese melodies, both of which contained tonal deviants (out-of-key tones). Previous studies with Western monomusicals have shown that tonal deviants elicit an early right anterior negativity (mERAN) originating in the inferior frontal cortex. In the present study, tonal deviants in both Western and Japanese melodies elicited mERANs with characteristics fitted by dipoles around the inferior frontal gyrus in the right hemisphere and the premotor cortex in the left hemisphere. Comparisons of the nature of mERAN activity to Western and Japanese melodies showed differences in the dipoles' locations but not in their peak latency or dipole strength. These results suggest that the differentiation between a tonal structure of one culture and that of another culture correlates with localization differences in brain subregions around the inferior frontal cortex and the premotor cortex.
45
Sammler D, Koelsch S, Ball T, Brandt A, Grigutsch M, Huppertz HJ, Knösche TR, Wellmer J, Widman G, Elger CE, Friederici AD, Schulze-Bonhage A. Co-localizing linguistic and musical syntax with intracranial EEG. Neuroimage 2012; 64:134-46. [PMID: 23000255] [DOI: 10.1016/j.neuroimage.2012.09.035]
Abstract
Despite general agreement on shared syntactic resources in music and language, the neuroanatomical underpinnings of this overlap remain largely unexplored. While previous studies mainly considered frontal areas as supramodal grammar processors, the domain-general syntactic role of temporal areas has been so far neglected. Here we capitalized on the excellent spatial and temporal resolution of subdural EEG recordings to co-localize low-level syntactic processes in music and language in the temporal lobe in a within-subject design. We used Brain Surface Current Density mapping to localize and compare neural generators of the early negativities evoked by violations of phrase structure grammar in both music and spoken language. The results show that the processing of syntactic violations relies in both domains on bilateral temporo-fronto-parietal neural networks. We found considerable overlap of these networks in the superior temporal lobe, but also differences in the hemispheric timing and relative weighting of their fronto-temporal constituents. While alluding to the dissimilarity in how shared neural resources may be configured depending on the musical or linguistic nature of the perceived stimulus, the combined data lend support for a co-localization of early musical and linguistic syntax processing in the temporal lobe.
Affiliation(s)
- Daniela Sammler
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
46
Fachner J, Gold C, Erkkilä J. Music therapy modulates fronto-temporal activity in rest-EEG in depressed clients. Brain Topogr 2012; 26:338-54. [PMID: 22983820] [DOI: 10.1007/s10548-012-0254-x]
Abstract
Fronto-temporal areas process shared elements of speech and music. Improvisational psychodynamic music therapy (MT) utilizes verbal and musical reflection on emotions and images arising from clinical improvisation. Music listening shifts frontal alpha asymmetries (FAA) in depression and increases frontal midline theta (FMT). In a two-armed randomized controlled trial (RCT) with 79 depressed clients (with comorbid anxiety), we compared standard care (SC) versus MT added to SC at intake and after 3 months. We found that MT significantly reduced depression and anxiety symptoms. The purpose of this study is to test whether or not MT has an impact on anterior fronto-temporal resting state alpha and theta oscillations. Correlations between anterior EEG, Montgomery-Åsberg Depression Rating Scale (MADRS) and the Hospital Anxiety and Depression Scale-Anxiety Subscale (HADS-A), power spectral analysis (topography, means, asymmetry) and normative EEG database comparisons were explored. After 3 months of MT, lasting changes in resting EEG were observed, i.e., significant absolute power increases at left fronto-temporal alpha, but most distinct for theta (also at left fronto-central and right temporoparietal leads). MT differed from SC at F7-F8 (z scored FAA, p < .03) and T3-T4 (theta, p < .005) asymmetry scores, pointing towards decreased relative left-sided brain activity after MT; pre/post increased FMT and decreased HADS-A scores (r = .42, p < .05) indicate reduced anxiety after MT. Verbal reflection and improvising on emotions in MT may induce neural reorganization in fronto-temporal areas. Alpha and theta changes in fronto-temporal and temporoparietal areas indicate MT action and treatment effects on cortical activity in depression, suggesting an impact of MT on anxiety reduction.
Affiliation(s)
- Jörg Fachner
- Department of Music, Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä, P.O. Box 35, 40014, Jyväskylä, Finland
47
Does a paper's country of origin affect the length of the review process? Cortex 2012; 48:945-51. [DOI: 10.1016/j.cortex.2012.05.015]
48
Foley JA, Valkonen L. Are higher cited papers accepted faster for publication? Cortex 2012; 48:647-53. [DOI: 10.1016/j.cortex.2012.03.018]
49
Clark DG. Storage costs and heuristics interact to produce patterns of aphasic sentence comprehension performance. Front Psychol 2012; 3:135. [PMID: 22590462] [PMCID: PMC3349300] [DOI: 10.3389/fpsyg.2012.00135]
Abstract
BACKGROUND: Despite general agreement that aphasic individuals exhibit difficulty understanding complex sentences, the nature of sentence complexity itself is unresolved. In addition, aphasic individuals appear to make use of heuristic strategies for understanding sentences. This research is a comparison of predictions derived from two approaches to the quantification of sentence complexity, one based on the hierarchical structure of sentences, and the other based on dependency locality theory (DLT). Complexity metrics derived from these theories are evaluated under various assumptions of heuristic use.
METHOD: A set of complexity metrics was derived from each general theory of sentence complexity and paired with assumptions of heuristic use. Probability spaces were generated that summarized the possible patterns of performance across 16 different sentence structures. The maximum likelihood of comprehension scores of 42 aphasic individuals was then computed for each probability space and the expected scores from the best-fitting points in the space were recorded for comparison to the actual scores. Predictions were then compared using measures of fit quality derived from linear mixed effects models.
RESULTS: All three of the metrics that provide the most consistently accurate predictions of patient scores rely on storage costs based on the DLT. Patients appear to employ an Agent-Theme heuristic, but vary in their tendency to accept heuristically generated interpretations. Furthermore, the ability to apply the heuristic may be degraded in proportion to aphasia severity.
CONCLUSION: DLT-derived storage costs provide the best prediction of sentence comprehension patterns in aphasia. Because these costs are estimated by counting incomplete syntactic dependencies at each point in a sentence, this finding suggests that aphasia is associated with reduced availability of cognitive resources for maintaining these dependencies.
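The storage-cost idea in this abstract's conclusion, counting incomplete syntactic dependencies at each point in a sentence, can be sketched as a simple arc-counting function. This is an illustrative simplification of the DLT storage metric under assumed inputs (a word count plus head-dependent index pairs), not the model actually fitted in the paper:

```python
def storage_costs(n_words, arcs):
    """Open-dependency count at each word position, a rough DLT storage proxy.

    arcs: (i, j) word-index pairs linking a head and a dependent (order-free).
    A dependency is 'open' at position p when exactly one of its endpoints
    has been heard, i.e., the arc spans p. This is an illustrative
    simplification of Gibson's storage-cost metric, not its exact formulation.
    """
    costs = []
    for p in range(n_words):
        open_deps = sum(1 for i, j in arcs
                        if min(i, j) <= p < max(i, j))
        costs.append(open_deps)
    return costs
```

On this toy measure, center-embedded structures score higher mid-sentence because several arcs remain open simultaneously, which is the intuition behind complexity differences between sentence types.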
Affiliation(s)
- David Glenn Clark
- Birmingham VA Medical Center, Birmingham, AL, USA
- Department of Neurology, University of Alabama at Birmingham, Birmingham, AL, USA
50
Hsieh S, Hornberger M, Piguet O, Hodges JR. Brain correlates of musical and facial emotion recognition: evidence from the dementias. Neuropsychologia 2012; 50:1814-22. [PMID: 22579645] [DOI: 10.1016/j.neuropsychologia.2012.04.006]
Abstract
The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities: unfamiliar musical tunes and unknown faces as well as volumetric MRI. Patients with SD were most impaired with the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language.
Affiliation(s)
- S Hsieh
- Neuroscience Research Australia, Sydney, NSW 2031, Australia